An official website of the United States Government.

Frequently Asked Questions: Assessment Policy

Can you provide some general guidance on writing assessments?

General policy guidance on assessment tools is provided in Chapter 2 of the Delegated Examining Operations Handbook (DEOH), http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf.  Writing evaluations belong to a class of assessments referred to as "work sample tests."  The guidance in the DEOH is not specific to writing assessments, but the same principles apply.  As with any other procedure used to make an employment decision, a writing assessment should be:


  • Supported by a job analysis,
  • Linked to one or more critical job competencies,
  • Included in the vacancy announcement, and
  • Based on standardized reviewing and scoring procedures.


Other considerations may also be important, such as the proposed method of use (e.g., as a selective placement factor or a quality ranking factor) and the specific measurement technique.


Writing performance has been evaluated using a wide range of techniques such as portfolio assessment, timed essay assignments, multiple-choice tests of language proficiency, self-reports of writing accomplishments (e.g., winning an essay contest, getting published), and grades in English writing courses.  Each technique has its advantages and disadvantages.


For example, with the portfolio technique, applicants are asked to provide writing samples from school or work.  The advantage of this technique is its high face validity (that is, applicants perceive the measure as valid based on simple inspection).  Disadvantages include difficulty verifying authorship, lack of opportunity (e.g., prior jobs may not have required report writing, or the writing samples may be proprietary or sensitive), and positive bias (e.g., only the very best writing pieces are submitted and weaker ones are selectively excluded).


Timed essay tests are also widely used to assess writing ability.  Their advantage is that all applicants are assessed under standardized conditions (e.g., same topic, same time constraints).  Their disadvantage is that writing skill is judged from a single work sample.  Many experts believe truly realistic evaluations of writing skill require several writing samples produced without severe time constraints, scored by multiple judges to enhance reliability.


Multiple-choice tests of language proficiency have also been successfully employed to predict writing performance (perhaps because they assess the knowledge of grammar and language mechanics thought to underlie writing performance).  Multiple-choice tests are relatively cheap to administer and score, but unlike the portfolio or essay techniques, they lack a certain amount of face validity.  Research shows that the very best predictions of writing performance are obtained when essay and multiple-choice tests are used in combination.


There is also an emerging field based on the use of automated essay scoring (AES) in assessing writing ability.  Several software companies have developed different computer programs to rate essays by considering both the mechanics and content of the writing.

The typical AES program needs to be "trained" on what features of the text to extract.  This is done by having expert human raters score 200 or more essays written on the same prompt (or question) and entering the results into the program.  The program then looks for these relevant text features in new essays on the same prompt and predicts the scores that expert human raters would generate.  AES offers several advantages over human raters such as immediate online scoring, greater objectivity, and capacity to handle high-volume testing.  The major limitation of current AES systems is that they can only be applied to pre-determined and pre-tested writing prompts, which can be expensive and resource-intensive to develop.
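To make the train-then-predict idea concrete, here is a minimal sketch, not any vendor's actual system: it uses a single, deliberately simplistic text feature (word count) and ordinary least squares, where a real AES program would extract many features of mechanics and content from hundreds of human-scored essays.

```python
# Toy sketch of AES "training": fit a model to expert human raters' scores,
# then predict the score a human would give a new essay on the same prompt.
# The single feature (word count) and the synthetic data are illustrative only.

def fit(essays, human_scores):
    """Fit score = intercept + slope * word_count by ordinary least squares."""
    xs = [len(e.split()) for e in essays]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(human_scores) / len(human_scores)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, human_scores))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def predict(model, essay):
    slope, intercept = model
    return intercept + slope * len(essay.split())

# Synthetic "training" essays with scores assigned by expert human raters.
training_essays = [" ".join(["word"] * n) for n in (10, 20, 30)]
human_scores = [1.0, 2.0, 3.0]

model = fit(training_essays, human_scores)
predicted = predict(model, " ".join(["word"] * 25))  # → 2.5
```

The point is only the workflow: the program learns from human-scored essays on a given prompt and then reproduces the scores expert raters would assign to new essays on that same prompt, which is why each new prompt must be pre-tested.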


However, please keep in mind that scoring writing samples can be very time-consuming regardless of how the samples are obtained (e.g., through a portfolio or a timed essay).  A scoring rubric (that is, a set of standards or rules for scoring) is needed to guide judges in applying the criteria used to evaluate the writing samples.  Scoring criteria typically cover different aspects of writing such as content organization, grammar, sentence structure, and fluency.  We recommend that only individuals with the appropriate background and expertise be involved in the review, analysis, evaluation, and scoring of writing samples.
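The rubric idea can be sketched as a small data structure; the criterion names and the 1-5 scale below are illustrative assumptions, not an official OPM rubric:

```python
# Hypothetical rubric sketch: each judge rates every criterion on a 1-5 scale,
# and the final score averages across judges to improve scoring reliability.

CRITERIA = ("content organization", "grammar", "sentence structure", "fluency")

def score_sample(ratings_by_judge):
    """Mean over judges of each judge's average criterion rating."""
    judge_means = []
    for ratings in ratings_by_judge:
        missing = set(CRITERIA) - set(ratings)
        if missing:
            raise ValueError(f"judge skipped criteria: {missing}")
        judge_means.append(sum(ratings[c] for c in CRITERIA) / len(CRITERIA))
    return sum(judge_means) / len(judge_means)

judge_a = {c: 4 for c in CRITERIA}
judge_b = {"content organization": 3, "grammar": 5,
           "sentence structure": 4, "fluency": 4}
final_score = score_sample([judge_a, judge_b])  # → 4.0
```

Written anchors describing what a 1, 3, or 5 looks like on each criterion, plus rater training, are what make such a rubric a standardized scoring procedure rather than a checklist.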


For more information regarding the development of written assessments, please contact Assessment_Information@opm.gov.
