Training and Development Policy Wiki


Training Evaluation

Training evaluation is a continual and systematic process of assessing the value or potential value of a training program, course, activity, or event. Results of the evaluation are used to guide decision-making around various components of the training (e.g., instructional design, delivery, results) and its overall continuation, modification, or elimination.

To assist agencies in evaluating their training programs, OPM published the 2011 OPM Training Evaluation Field Guide. The Training Evaluation Field Guide is designed to assist agency training representatives (i.e., training managers and supervisors, training liaisons/coordinators, agency evaluators, instructional designers, training facilitators and others who have a significant role in training effectiveness) in evaluating training effectiveness and in demonstrating training value to stakeholders and decision makers. Data and information were gathered from fifteen federal agency representatives who volunteered their time to attend a one-day working meeting, participate in individual interviews and submit samples of their tools and case studies. This Field Guide reflects the input from the working group.  The OPM Training Evaluation Field Guide Supplement is an abbreviated version of the OPM Training Evaluation Field Guide.

 

View this quick video for more information on the Training Evaluation Field Guide and ways agencies can use evaluation and training data to inform decisions related to training investments.


Logic Models:

Logic models are simple tools that help you plan your training program. A logic model provides a representation of a "theory of change" (if...then) that clearly aligns the training inputs and activities to the outputs and results. Logic models are created in the planning phase and can be completed for programs, initiatives, individual courses, events, and activities. For more detailed information on logic models, you can watch this video or take EPA's free Logic Modeling Course.
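
As a rough sketch only, here is one way the aligned elements of a logic model for a hypothetical supervisory training course might be laid out. Every category entry below is illustrative and is not taken from the Field Guide or the EPA course:

    # Illustrative logic model for a hypothetical supervisory training course.
    # All entries are examples only, not OPM-prescribed content.
    logic_model = {
        "inputs": ["instructor time", "course materials", "training budget"],
        "activities": ["two-day supervisory skills workshop", "follow-up coaching sessions"],
        "outputs": ["number of supervisors trained", "course completion rate"],
        "results": [
            "supervisors can describe effective feedback techniques",   # short term
            "supervisors hold regular feedback conversations",          # intermediate
            "employee engagement scores improve in supervised units",   # long term
        ],
    }

    # The "if...then" reading: IF the inputs support the activities, THEN the
    # outputs occur; IF the outputs occur, THEN the results follow.
    for stage, entries in logic_model.items():
        print(f"{stage}: {', '.join(entries)}")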

 
Contact Cheryl.Ndunguru if you would like training on how to use the OPM Training Evaluation Field Guide and/or Logic Models in your agency.

Regulations

Agencies are required to evaluate their training programs annually to determine how well such plans and programs contribute to mission accomplishment and meet organizational performance goals (5 CFR 410.202). The law authorizes OPM to require Federal agencies to report training data (Example: Sample Agency Training Report FY2010). An important part of the evaluation process involves consideration of training costs and other elements not directly addressed in the typical evaluation. Agencies must track and report accurate training data on all completed training events to OPM as prescribed by the Final Rule on Training Reporting Requirements, published on May 17, 2006, in the Federal Register, which implements the Federal Workforce Flexibility Act of 2004 (P.L. 108-411).

Agencies can reference the Guide for Collection and Management of Training Information for detailed information on reporting training data.


Other Evaluation Methods:
The Training Evaluation Field Guide uses the Kirkpatrick Model of evaluation as the basis of instruction. However, there are other viable and well-researched models available for use. Agencies should use the model that most closely meets their needs.

  • Daniel Stufflebeam's CIPP Model (Context, Input, Process, Product): The CIPP Model for evaluation is a comprehensive framework for guiding formative and summative evaluations of programs, projects, personnel, products, institutions, and systems. This model was introduced by Daniel Stufflebeam in 1966 to guide mandated evaluations of U.S. federally funded projects. http://srmo.sagepub.com/view/encyclopedia-of-evaluation/n82.xml

 

  • Robert Stake's Responsive Evaluation Model: Robert Stake (1975) coined the term responsive evaluation. Responsive evaluation distinguishes four generations in the historical development of evaluation: measurement, description, judgment, and negotiation. "Measurement" includes the collection of quantitative data. http://mailer.fsu.edu/~sullivan/SEA_Newsletter/Responsive_Evaluation.pdf

 

  • Robert Stake's Countenance Model: This model focuses on description and judgment. Stake wrote that greater emphasis should be placed on description and that judgments should also be collected as data. Stake wrote about the connections in education, which he called contingencies, among antecedents, transactions, and outcomes. He also noted the correspondence between intentions and observations, which he called congruence. Stake developed matrices for the notation of evaluation data; data is collected through these matrices. http://ged550.wikispaces.com/Robert+Stake's+Countenance+Model

 

  • Kaufman's Five Levels of Evaluation: Modeled after University of Wisconsin professor Donald Kirkpatrick's four-level evaluation method, Roger Kaufman's theory applies five levels. It is designed to evaluate a program from the trainee's perspective and to assess the possible impacts that implementing a new training program may have on the client and society.
    http://www.ehow.com/info_8582553_kaufmans-five-levels-evaluation.html#ixzz2ePyjynYl

 

 

  • PERT (Program Evaluation and Review Technique): The Program (or Project) Evaluation and Review Technique, commonly abbreviated PERT, is a statistical tool used in project management that is designed to analyze and represent the tasks involved in completing a given project. First developed by the U.S. Navy in the 1950s, it is commonly used in conjunction with the critical path method (CPM). http://en.wikipedia.org/wiki/Program_Evaluation_and_Review_Technique

 

  • Michael Scriven's Goal-Free Evaluation Approach: This approach is premised on the assumption that an evaluation should establish the value of a program by examining what it is doing rather than what it is trying to do. http://www.click4it.org/index.php/Goal-Free_Evaluation

 

  • Illuminative Evaluation Model: An illuminative evaluation is a custom-built research strategy that lacks formal statements of objectives, avoids (but does not exclude) statistical procedures, employs subjective methods, and is primarily interested in the informing function of an evaluation rather than the more usual inspectoral or grading functions. http://www.iisd.org/casl/CASLGuide/EvalModel.htm

 

Other Helpful Resources

Certifications: Both Kirkpatrick and Phillips offer training evaluation certificates, although a certified evaluator is not necessary to evaluate the effectiveness of agency training. The 2011 OPM Training Evaluation Field Guide, along with books and/or training courses on performance measurement, program evaluation, and training evaluation, should provide enough information to successfully evaluate your agency's training.

Associations: The American Evaluation Association (AEA) is an international professional association of evaluators devoted to the application and exploration of evaluation in its many forms. Evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness. The AEA has approximately 5,500 members representing all 50 U.S. states as well as over 60 foreign countries.

 

FAQs:

WHAT should you evaluate?

Training evaluations can help the organization to reach many different goals during the life cycle of a training program.  One primary reason to evaluate is to determine if the benefits derived from the training justified the costs.

Some additional reasons include:

  • Examining the assumptions upon which an existing or proposed training course or program is based
  • Inquiring, up front, about the expected results
  • Assessing how much of the knowledge and skills learned during training transferred to on-the-job behaviors 
  • Collecting information about inputs, activities, and outcomes
  • Comparing that information to pre-set standards or targets
  • Determining whether the results of the training contributed to the achievement of the organization’s goals
  • Reporting findings in a manner that facilitates their use and improves program effectiveness 


WHY should you evaluate?

Aside from the requirement in 5 CFR 410.202, agencies face very real demands to demonstrate training program efficiency, program effectiveness and public accountability. Use of evaluation data meets these demands in various ways:

Planning: To assess needs, set priorities, direct allocation of resources, and guide policy

Analysis of Course/Program Effectiveness or Quality: To determine achievement of objectives, identify strengths and weaknesses of a program/course, determine the cost-effectiveness of a program/course, and assess causes of success or failure

Direct decision-making: To improve effectiveness, identify and facilitate needed change, and continue, expand, or terminate a program/course

Maintain accountability: To stakeholders, funding sources, and the general public

 

WHEN should you evaluate?

There are several basic questions to ask when deciding when to carry out an evaluation. If the answers to these questions are "Yes", this may be the time to evaluate.

  • Is the program/course important or significant enough to warrant evaluation?
  • Is there a legal requirement to carry out a program evaluation?
  • Will the results of the evaluation influence decision-making about the program/course?
  • Will the evaluation answer questions posed by your stakeholders or those interested in the evaluation?

 

HOW can you evaluate?

Once you've determined that your program or course warrants evaluation, there are various methods and models agencies can use to evaluate their training courses. Here are two of the most popular (you can find other evaluation methods above under "Other Evaluation Methods"):

 

Kirkpatrick 4 Levels

The four levels of Kirkpatrick's evaluation model essentially measure:

  • Reaction - what participants thought and felt about the training
  • Learning - the resulting increase in knowledge or capability
  • Behavior - the extent of behavior and capability improvement and its implementation/application on the job
  • Results - the effects on the business or environment resulting from the trainee's performance

All four levels are recommended for a full and meaningful evaluation of learning in organizations.
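
As an illustration only (the field names and figures below are hypothetical and are not part of the Kirkpatrick materials or the Field Guide), evaluation data for a single course could be recorded with one measure per level:

    from dataclasses import dataclass

    @dataclass
    class CourseEvaluation:
        # One hypothetical measure per Kirkpatrick level; names and scales are illustrative.
        reaction_score: float    # Level 1: average post-course satisfaction (1-5 scale)
        learning_gain: float     # Level 2: average post-test minus pre-test score, in points
        behavior_change: float   # Level 3: share of participants observed applying the skills on the job
        results_metric: float    # Level 4: change in the organizational measure the training targets

    evaluation = CourseEvaluation(
        reaction_score=4.2,
        learning_gain=18.0,
        behavior_change=0.65,
        results_metric=-0.08,    # e.g., an 8% reduction in processing errors
    )
    print(evaluation)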

 

Jack Phillips' Five Level ROI Model

Building upon the Kirkpatrick model, Jack Phillips added a fifth level: the Return on Investment (ROI) produced by a training course, calculated with the financial formula:

ROI(%) = (Net Program Benefits/Program Costs) x 100
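
For example, with purely illustrative figures (not drawn from any agency data), a course that produces $250,000 in program benefits against $100,000 in program costs has net program benefits of $150,000 and an ROI of (150,000 / 100,000) x 100 = 150%. A minimal sketch of the calculation:

    def roi_percent(program_benefits, program_costs):
        # Phillips ROI formula: ROI(%) = (Net Program Benefits / Program Costs) x 100,
        # where net program benefits are total benefits minus program costs.
        net_program_benefits = program_benefits - program_costs
        return (net_program_benefits / program_costs) * 100

    # Illustrative figures only.
    print(roi_percent(program_benefits=250_000, program_costs=100_000))  # 150.0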
