Human Resources and Security Specialists should use this tool to determine the correct investigation level for any covered position within the U.S. Federal Government.
General policy guidance on assessment tools is provided in Chapter 2 of the Delegated Examining Operations Handbook (DEOH), http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf. Writing evaluations belong to a class of assessments referred to as "work sample tests." The guidance in the DEOH is not specific to writing assessments, but the same principles apply. As with any other procedure used to make an employment decision, a writing assessment should be:
Other considerations may be important, such as the proposed method of use (e.g., as a selective placement factor or quality ranking factor) and the specific measurement technique.
Writing performance has been evaluated using a wide range of techniques such as portfolio assessment, timed essay assignments, multiple-choice tests of language proficiency, self-reports of writing accomplishments (e.g., winning an essay contest, getting published), and grades in English writing courses. Each technique has its advantages and disadvantages.
For example, with the portfolio technique, applicants are asked to provide writing samples from school or work. The advantage of this technique is its high face validity (that is, applicants perceive that the measure is valid based on simple visual inspection). Disadvantages include difficulty verifying authorship, lack of opportunity (e.g., prior jobs may not have required report writing, or the writing samples are proprietary or sensitive), and positive bias (e.g., only the very best writing pieces are submitted and weaker ones are selectively excluded).
Timed essay tests are also widely used to assess writing ability. Their advantage is that all applicants are assessed under standardized conditions (e.g., same topic, same time constraints). The disadvantage is that the evaluation of writing skill rests on a single work sample. Many experts believe truly realistic evaluations of writing skill require several samples of writing produced without severe time constraints, along with multiple judges to enhance scoring reliability.
Multiple-choice tests of language proficiency have also been successfully employed to predict writing performance (perhaps because they assess the knowledge of grammar and language mechanics thought to underlie writing performance). Multiple-choice tests are relatively inexpensive to administer and score but, unlike the portfolio or essay techniques, have comparatively low face validity. Research shows that the best predictions of writing performance are obtained when essay and multiple-choice tests are used in combination.
There is also an emerging field based on the use of automated essay scoring (AES) in assessing writing ability. Several software companies have developed different computer programs to rate essays by considering both the mechanics and content of the writing.
The typical AES program needs to be "trained" on which features of the text to extract. This is done by having expert human raters score 200 or more essays written on the same prompt (or question) and entering the results into the program. The program then looks for the relevant text features in new essays on the same prompt and predicts the scores that expert human raters would generate. AES offers several advantages over human raters, such as immediate online scoring, greater objectivity, and the capacity to handle high-volume testing. The major limitation of current AES systems is that they can only be applied to pre-determined and pre-tested writing prompts, which can be expensive and resource-intensive to develop.
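To make the train-then-predict idea concrete, here is a minimal sketch in Python. It is not any vendor's actual system; the surface features (essay length, vocabulary diversity, mean sentence length) and the linear model are illustrative assumptions chosen for brevity.

```python
# A minimal sketch of "training" an AES model: extract simple surface features
# from essays already scored by expert human raters, fit a linear model, then
# predict scores for new essays on the same prompt. Feature choices here are
# illustrative assumptions, not any vendor's actual feature set.
import numpy as np

def extract_features(essay: str) -> list[float]:
    """Map an essay to a few crude, illustrative text features."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    n_words = len(words)
    return [
        float(n_words),                                         # essay length
        len(set(w.lower() for w in words)) / max(n_words, 1),   # vocabulary diversity
        n_words / max(len(sentences), 1),                       # mean sentence length
    ]

def train(essays: list[str], human_scores: list[float]) -> np.ndarray:
    """Fit weights so the model reproduces the expert raters' scores."""
    X = np.array([extract_features(e) + [1.0] for e in essays])  # +1.0 adds an intercept
    y = np.array(human_scores)
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights

def predict(weights: np.ndarray, essay: str) -> float:
    """Score a new essay on the same prompt."""
    x = np.array(extract_features(essay) + [1.0])
    return float(x @ weights)

# Usage: fit on 200+ human-scored essays for one prompt, then score new essays.
# weights = train(training_essays, expert_scores)
# print(predict(weights, new_essay))
```

Note that, as the paragraph above indicates, the fitted weights are prompt-specific: a model trained on one prompt cannot be reused for a different one without new human-scored training essays.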
However, please keep in mind that scoring writing samples can be very time-consuming regardless of the collection method (e.g., portfolio or timed essay). A scoring rubric (that is, a set of standards or rules for scoring) is needed to guide judges in applying the criteria used to evaluate the writing samples. Scoring criteria typically cover different aspects of writing such as content organization, grammar, sentence structure, and fluency; one way such ratings could be combined is sketched below. We recommend that only individuals with the appropriate background and expertise be involved in the review, analysis, evaluation, and scoring of the writing samples.
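As an illustration only, the following Python sketch shows one way rubric ratings from multiple judges could be combined into an overall score. The criteria names and weights are hypothetical, not OPM-prescribed values.

```python
# A hypothetical rubric: each judge rates a writing sample on several criteria
# (1-5); each judge's ratings are combined with criterion weights, and the
# overall score averages across judges. Weights below are illustrative only.
CRITERIA_WEIGHTS = {"content_organization": 0.4, "grammar": 0.2,
                    "sentence_structure": 0.2, "fluency": 0.2}

def rubric_score(judge_ratings: list[dict[str, int]]) -> float:
    """Weighted score per judge, then averaged across judges for reliability."""
    per_judge = [
        sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
        for ratings in judge_ratings
    ]
    return sum(per_judge) / len(per_judge)

# Example: two judges rate the same writing sample.
print(rubric_score([
    {"content_organization": 4, "grammar": 5, "sentence_structure": 4, "fluency": 3},
    {"content_organization": 3, "grammar": 4, "sentence_structure": 4, "fluency": 4},
]))  # -> 3.85
```

Averaging over multiple judges, as here, is one way to improve the scoring reliability the preceding paragraphs call for.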
However, a writing sample like that is not scored; it is merely used to inform the hiring manager of the candidate's writing ability. And just like a "real" assessment or test, there are still many steps in the process that need to be followed:
You will also need to consider:
How you (or the hiring manager) collect the writing sample is at your discretion. For example, you could ask the candidate to respond to a question or have them correct a grammatically flawed paper. Just keep in mind which writing skill you want to measure, and let that guide the development of the assessment.
As with all testing practices, it's paramount to standardize the process, meaning that all candidates who are asked to produce a writing sample are treated the same, given the same question, and so forth.
In short, OPM does not offer specific guidance to agencies on the use of personality tests to assess candidates. Please check your agency's policies on using personality tests to assess candidates because policies may vary by agency.
In general, personality tests that are designed to measure work-related traits in normal adult populations are permissible. The personality factors assessed most frequently in work situations include Conscientiousness, Extraversion, Agreeableness, and Openness to Experience. As with any assessment tool used to make an employment decision, personality tests must meet the technical standards established in the Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/).
It is important to recognize that some personality tests are designed to diagnose psychiatric conditions (e.g., paranoia, schizophrenia, compulsive disorders) rather than work-related personality traits. The Americans with Disabilities Act (ADA) considers any test designed to reveal such psychiatric disorders a "medical examination." Examples of such medical tests include the Minnesota Multiphasic Personality Inventory (MMPI) and the Millon Clinical Multiaxial Inventory (MCMI).
Under the ADA, personality tests meeting the definition of a medical examination may only be administered after an offer of employment has been made. The following memorandum, "OPM Adjudication of Psychiatric/Psychological Objections," contains further information on making the distinction between medical and non-medical psychological and personality tests: http://www.chcoc.gov/Transmittals/TransmittalDetails.aspx?TransmittalID=1742.
For information on the validity and proper use of personality tests, see OPM's Assessment Decision Guide: http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf
OPM encourages agencies to consider the use of structured interviews. When designed appropriately and used correctly, structured interviews predict applicants' future job performance with relatively low adverse impact on minority groups compared to other assessment tools. The structured interview is among the most valid assessment tools available.
For more information regarding structured interviews, please visit the Structured Interviews page of OPM's Assessment and Selection website (http://www.opm.gov/policy-data-oversight/assessment-and-selection/structured-interviews/) and the Structured Interview Guide (http://www.opm.gov/policy-data-oversight/assessment-and-selection/structured-interviews/guide.pdf).
Another good source for more information on structured interviews is the U.S. Merit Systems Protection Board report, "The Federal Selection Interview: Unrealized Potential," (http://www.mspb.gov/netsearch/viewdocs.aspx?docnumber=253635&version=253922&application=ACROBAT).
Response distortion (whether inflating or downplaying one's responses) has long been a challenge with self-report occupational questionnaires. Employing the following suggestions may help:
Research has shown that warning applicants in advance that their responses are subject to verification can be a powerful incentive to answer honestly.
Yes, under delegated competitive examining, an agency may establish its own retesting policy and procedures. The Uniform Guidelines on Employee Selection Procedures (Section 12: Retesting of Applicants, http://uniformguidelines.com/uniformguidelines.html#50) require employers to provide applicants with "a reasonable opportunity for retesting and reconsideration." It is good practice to provide a reasonable opportunity for retesting, and that opportunity should be consistently communicated to all applicants. Unless the examination announcement states otherwise, the default policy is that applicants may reapply and be reassessed at any time as long as the examination is still open.
The technical and/or administrative basis for a retesting policy should be clearly explained and documented (e.g., availability of alternate forms of an assessment, impact on the validity or integrity of the assessment process). Additional retesting information appears in other authoritative sources such as:
Factors that should be considered with retesting include:
Employers and other users of high-stakes assessments are subject to legal and other pressures to provide reassessment and reconsideration opportunities to applicants. The major consideration is the potential for retesting to undermine the integrity and usefulness of the assessment procedure.
Yes, agencies can develop (or purchase) their own assessments as long as the development, validation, and use of the assessments are consistent with:
Detailed information on assessment method considerations can be found in OPM's Assessment Decision Guide (http://www.opm.gov/policy-data-oversight/assessment-and-selection/reference-materials/assessmentdecisionguide.pdf). The guide covers the essential concepts behind personnel assessment and will allow your agency to:
The guide also contains an extensive list of resource materials if you need more information on a particular topic and a glossary for quick clarification of assessment terms and concepts.
The Assessment Decision Tool (http://apps.opm.gov/adt/ADTClientMain.aspx?JScript=1) is designed to assist HR professionals in developing assessment strategies based on specific competencies and other factors relevant to their hiring situations (e.g., applicant volume, level of available resources). The issues to consider when selecting or developing an assessment strategy or specific assessment tool are complex. The level of expertise needed to develop an assessment varies greatly by method, and for some methods it can be quite substantial.
If an agency is interested in purchasing an assessment, Appendix B of the Delegated Examining Operations Handbook (http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf) lists criteria you may want to consider when choosing an assessment vendor. Under delegated examining, the decision to administer assessments for particular occupations, and the responsibility to defend the use of those assessments, rests with the agencies. Many vendors, including OPM, offer professionally developed assessments: http://www.opm.gov/services-for-agencies/assessment-evaluation/
The "Presidential Memorandum on Hiring Reform" (http://www.whitehouse.gov/the-press-office/presidential-memorandum-improving-federal-recruitment-and-hiring-process) eliminated the requirement for applicants to submit essay-style demonstrations of their qualifications as part of the initial application process.
Applicant response formats that have been offered as possible alternatives to lengthy, written demonstrations of competencies or knowledge, skills, and abilities (KSAs) include the following:
Another option is to use a multi-step (hurdle) approach in which written narratives are introduced after the initial application is submitted but before the Certificate of Eligibles is formed.
For more information, please contact Assessment_Information@opm.gov.
If the intended results are not achieved with a particular question, it may be considered for elimination before final scoring of the assessment (i.e., given an effective weight of zero). Any adjustments to the scoring procedure should be based on a sound rationale, implemented uniformly for all applicants, evaluated for unintended consequences (e.g., loss of coverage of critical competencies), and thoroughly documented.
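To show what "an effective weight of zero" means mechanically, here is a minimal Python sketch, with hypothetical item names and weights, of rescoring an assessment so a flawed item no longer contributes to any applicant's total.

```python
# Hypothetical item weights for a three-question assessment; "q2" is the
# item that did not perform as intended, so it gets an effective weight of zero.
item_weights = {"q1": 1.0, "q2": 1.0, "q3": 1.0}
item_weights["q2"] = 0.0  # eliminated from scoring, uniformly for all applicants

def total_score(item_scores: dict[str, float]) -> float:
    """Weighted sum of item scores; zero-weighted items drop out of the total."""
    return sum(item_weights[q] * s for q, s in item_scores.items())

# The same adjustment applies to every applicant's response set.
print(total_score({"q1": 1.0, "q2": 1.0, "q3": 0.5}))  # -> 1.5; q2 no longer counts
```

The key point the paragraph above makes is that the item stays on the assessment as administered; only its contribution to the final score is removed, identically for everyone.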
It is highly recommended that you administer the interview questions as part of a trial run (or pilot test) before using them in the "real" interview(s). A trial run allows you to determine whether the questions are clearly worded and elicit an acceptable range of responses, and it will often reveal whether any revisions need to be made. To be useful, the pilot test should mimic the actual structured interview process as closely as possible. Refer to page 14 of OPM's Structured Interview Guide (http://www.opm.gov/policy-data-oversight/assessment-and-selection/structured-interviews/guide.pdf) for a discussion of pilot testing interview questions and evaluating the interview process.
Yes, OPM approval is required when using tests to determine basic eligibility or as the sole basis for ranking applicants for inservice placement (see Part E.9[d] of the Operating Manual on Qualification Standards for General Schedule Positions, http://www.opm.gov/policy-data-oversight/classification-qualifications/general-schedule-qualification-policies/#url=app). For occupations not requiring an OPM test, agencies may develop and implement their own tests for inservice placement without OPM approval as long as such tests are used as part of a comprehensive set of assessment procedures.
However, for delegated competitive examining, OPM approval is not required as long as the assessment procedure is consistent with the technical standards of the Uniform Guidelines on Employee Selection Procedures (http://uniformguidelines.com/). Specifically, the Uniform Guidelines require that the method of test use (e.g., as a screening device with a passing score, for grouping or ranking, combined with other assessments) be supported by findings of a job analysis and test validation study. For example, if the test is to be used for ranking, the agency should have evidence showing that higher scores on the test are related to better job performance.
When a test is used as a "screen out," it becomes part of the minimum requirements for the position and is subject to the same job-relatedness requirements as any other selective placement factor (see the guidance in the Delegated Examining Operations Handbook on the use of selective factors in Chapter 5, Section B, http://www.opm.gov/policy-data-oversight/hiring-authorities/competitive-hiring/deo_handbook.pdf).