Assessment Centers

The assessment center is not a place, as its name might suggest, nor is it a single process or method. Rather, an assessment center employs multiple assessment methods and exercises to evaluate a wide range of competencies and to inform a variety of employment decisions (e.g., employee selection, career development, promotion). Assessment centers can be used to assess small groups of people at roughly the same time. Many assessment center exercises resemble work sample tests designed to simulate the actual challenges found on the job.

Assessment center exercises can be used to measure many different types of job-related competencies, including interpersonal skills, oral and written communication, planning and evaluating, and reasoning and problem-solving abilities. A frequently used assessment center exercise is the in-basket test. A typical in-basket test is designed to simulate administrative tasks. During this exercise, applicants are asked to play the role of a person new to the job and are instructed to read and react to a pile of memos, messages, reports, and articles.

Some assessment center exercises are used to evaluate individual and group behavior in group situations. For example, in a leaderless group discussion, a group of applicants is tasked with solving a problem or a series of problems in a limited amount of time. Other assessment center exercises include, but are not limited to, job knowledge tests, personality tests, and structured interviews. Applicant performance is usually observed and evaluated by multiple assessors (i.e., raters).

When used for internal promotion purposes, assessment centers are frequently designed to reflect values and practices specific to an organization; when used to assess external applicants, they should focus on the job and the level of the job (e.g., manager) rather than on practices unique to the organization. While assessment centers can be designed for various types of jobs, they are particularly effective for assessing higher-level managerial and leadership competencies. Assessment centers require extensive experience to develop, considerable logistical planning to set up, and numerous personnel to administer. Highly trained assessors are needed to observe and evaluate applicant performance on the group and individual exercises.

Considerations

  • Validity - Overall, assessment center scores do a good job of predicting occupational success (i.e., they have a high degree of criterion-related validity), but the level of predictive validity can vary depending on the purpose of the assessment, the extent of assessor training, and the assessment methods used (see Gaugler, Rosenthal, Thornton, & Bentson, 1987); Generally, there is little evidence that assessment centers provide useful information about the relative strengths and weaknesses of a given individual, so while assessment centers are highly useful for making selection decisions, they are less useful for providing comprehensive developmental feedback
  • Face Validity/Applicant Reactions - Applicants typically react favorably to assessment center exercises and often perceive the process as being very fair (i.e., as having a high degree of face validity); Exercises simulating actual job tasks provide effective realistic job previews
  • Administration Method - Used to assess small groups of people at more or less the same time; Can assess individual performance either alone or in a team environment; Enables "hands-on" performance by the applicant and typically in a simulated work setting
  • Subgroup Differences - Generally, few or no performance differences are found between men and women or between applicants of different races, although the presence of gender and/or racial differences may depend on the competencies being assessed
  • Development Costs - Often costly to develop, both in terms of time and money; Usually requires frequent updating because the scenarios and problems used in the exercises are often remembered by applicants long after the administration (raising potential test security issues) and because exercise content may become outdated over time (e.g., memos might be sent via e-mail rather than fax)
  • Administration Costs - Usually expensive to administer; Requires several assessors to observe and rate applicant performance and may require a spacious testing location conducive to rating many applicants at one time; Administration time often depends on number of applicants
  • Utility/Return On Investment (ROI) - Productivity gains realized by selecting managers and skilled individuals average well above administrative costs
  • Common Uses - Can be used for promotion or selection purposes; Used to measure many types of job-related skills, but most widely used to assess candidates for leadership, managerial, customer service, and sales positions; May require a pre-screen to limit the number of applicants scheduled for the labor-intensive assessment center process

References

(See Section VI for a summary of each article)

Arthur, W., Jr., Day, E. A., McNelly, T. L., & Edens, P. S. (2003). A meta-analysis of the criterion-related validity of assessment center dimensions. Personnel Psychology, 56, 125-154.

Caldwell, C., Thornton, G. C., & Gruys, M. (2003). Ten classic assessment center errors: Challenges to selection validity. Public Personnel Management, 32(1), 73-88.

Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72(3), 493-511.

Testing and Assessment: An Employer's Guide to Good Practices. (2000). Washington, DC: U.S. Department of Labor, Employment and Training Administration. Note: Article can be accessed at http://www.onetcenter.org/guides.html.

Woehr, D. J., & Arthur, W., Jr. (2003). The construct-related validity of assessment center ratings: A review and meta-analysis of the role of methodological factors. Journal of Management, 29(2), 231-258.

Zedeck, S. (1986). A process analysis of the assessment center method. In B. M. Staw & L. L. Cummings (Eds.), Research in organizational behavior (Vol. 8, pp. 259-296).

The Society for Industrial and Organizational Psychology (SIOP) website contains information on assessment centers.
