Training and Development Policy Wiki



Training Evaluation

What is Training Evaluation?

Training evaluation is a continual and systematic process of assessing the value, or potential value, of a training program in order to guide decisions about the program's future.

When we Evaluate...

--We examine the assumptions upon which an existing or proposed training program is based.

--We inquire, up front, about the expected results of the training program.

--We create and then study the goals and objectives of the program.

--We collect information about the program's inputs and outcomes.

--We compare that information to pre-set standards.

--We report findings in a manner that facilitates their use.

 Why Evaluate?

Agencies are required to evaluate their training programs annually to determine how well such plans and programs contribute to mission accomplishment and meet organizational performance goals (5 CFR 410.202). In addition, demands to demonstrate training program efficiency, program effectiveness and public accountability are increasing. Evaluation can help meet these demands in various ways:

Planning
--To assess needs.
--To set priorities.
--To direct allocation of resources.
--To guide policy.

Analysis of program effectiveness or quality
--To determine achievement of project objectives.
--To identify strengths and weaknesses of a program.
--To determine the cost-effectiveness of a program.
--To assess causes of success or failure.

Direct decision-making
--To improve program management and effectiveness.
--To identify and facilitate needed change.
--To continue, expand, or terminate a program.

Maintain accountability
--To stakeholders.
--To funding sources.
--To the general public.
 
When to Evaluate

There are several basic questions to ask when deciding whether to carry out an evaluation. If the answer to these questions is "Yes," it may be time to evaluate.

--Is the program important or significant enough to warrant evaluation?

--Is there a legal requirement to carry out an evaluation? 

--Will the results of the evaluation influence decision-making about the program?

--Will the evaluation answer questions posed by your stakeholders or those interested in the evaluation?

How To Evaluate

Once you've determined that your program warrants evaluation, there are various methods agencies can use to evaluate their training programs. Two of the most popular are listed below; a brief sketch of the ROI arithmetic behind the Phillips model follows the list.

--Kirkpatrick's Four Levels of Evaluation
--Jack Phillips' Five-Level ROI Model
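
The Phillips model extends Kirkpatrick's four levels (reaction, learning, behavior, results) with a fifth level that expresses training value as a return on investment, calculated from monetized program benefits and fully loaded program costs. The short Python sketch below illustrates that standard ROI and benefit-cost-ratio arithmetic; the dollar figures are hypothetical placeholders, not agency data.

# Minimal sketch of the Level 5 arithmetic in Jack Phillips' ROI model:
# ROI (%) = (net program benefits / program costs) x 100.
# The figures used below are hypothetical placeholders, not agency data.

def benefit_cost_ratio(program_benefits: float, program_costs: float) -> float:
    """Benefit-cost ratio: program benefits divided by program costs."""
    return program_benefits / program_costs

def roi_percent(program_benefits: float, program_costs: float) -> float:
    """ROI (%): net benefits (benefits minus costs) divided by costs, times 100."""
    net_benefits = program_benefits - program_costs
    return (net_benefits / program_costs) * 100

if __name__ == "__main__":
    benefits = 240_000.0   # hypothetical monetized benefits attributed to the training
    costs = 150_000.0      # hypothetical fully loaded program costs
    print(f"Benefit-cost ratio: {benefit_cost_ratio(benefits, costs):.2f}")  # 1.60
    print(f"ROI: {roi_percent(benefits, costs):.0f}%")                       # 60%

In practice, the difficult step is isolating and monetizing the benefits attributable to training; the arithmetic itself is straightforward.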



The Training Evaluation Field Guide is designed to assist agency training representatives in evaluating training program effectiveness and in demonstrating training value to stakeholders and decision makers.

Field Guide Development Process

Data were gathered from fifteen federal agency representatives who volunteered their time to attend a one-day working meeting, participate in individual interviews and submit samples of their tools and case studies. This Field Guide reflects the input from the working group.

Key Audience and Usage

This Guide is designed for all federal employees who have a role in training evaluation and effectiveness within their agencies.
Specific users for this field guide are:

•Training managers and supervisors
•Training liaisons/coordinators
•Agency evaluators
•Instructional designers
•Training facilitators
•Any others who have a significant role in training effectiveness


Typical uses of the Guide include:

--Solve Problems (What are some evaluation options available to me?)

--Stay Current (What can I learn that will help me refresh my knowledge base and add value?)

 

Other Evaluation Resources

For reference, there are many other evaluation models and approaches to choose from, including:

Daniel Stufflebeam's CIPP Model (Context, Input, Process, Product)
Robert Stake's Responsive Evaluation Model
Robert Stake's Congruence-Contingency Model
Kaufman's Five Levels of Evaluation
CIRO (Context, Input, Reaction, Outcome)
PERT (Program Evaluation and Review Technique)
Alkin's UCLA Model
Michael Scriven's Goal-Free Evaluation Approach
Provus's Discrepancy Model
Eisner's Connoisseurship Evaluation Model
Illuminative Evaluation Model
Portraiture Model
The American Evaluation Association

 

 
