I just had a discussion with the great Rob Briner about student evaluations and teacher effectiveness, sparked by a tweet.
Smile sheets, or affective/reaction measures of training and teaching effectiveness, are a staple of training evaluation. Often, trainee feedback is the only measure we use to assess training effectiveness. But does it make sense? Over the last three decades, many training professionals and I-O psychology practitioners have criticized the overuse of these training reaction measures.
When it comes to training evaluation, the Kirkpatrick model is the industry standard. Kirkpatrick’s model has four levels: reaction, learning, behavior, and results. Smile sheets—a common name for measures of learner reaction (Level 1)—have come under fire because they are seen as uncorrelated with the higher levels (Levels 2-4) of evaluation. If learner reaction isn’t correlated with learning, critics argue, why bother measuring it at all?
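The critics' claim is an empirical one, and it can be checked directly: if you have per-trainee reaction (Level 1) scores and learning (Level 2) scores, you can compute the correlation between them. Here is a minimal sketch using NumPy; all of the numbers are invented for illustration only, not real evaluation data.

```python
import numpy as np

# Hypothetical data for ten trainees (invented for illustration):
# - reaction: mean smile-sheet rating on a 1-5 scale (Level 1)
# - learning: post-test score on a 0-100 scale (Level 2)
reaction = np.array([4.8, 4.5, 4.9, 4.7, 3.2, 4.6, 4.4, 4.8, 3.9, 4.7])
learning = np.array([62, 71, 55, 80, 78, 60, 85, 58, 74, 66])

# Pearson correlation between the two measures.
# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal
# entry is the correlation between the two variables.
r = np.corrcoef(reaction, learning)[0, 1]
print(f"Pearson r between reaction and learning: {r:.2f}")
```

In this made-up sample the two measures barely track each other, which is exactly the pattern the critics describe: trainees can rate a course highly and still score poorly on the post-test, and vice versa.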
In this article, I’ll offer a new way of looking at the issue.
A Construct and Use Problem
Satisfaction is a construct distinct from learning, and there is no reason to expect the two to be correlated. Expecting a correlation assumes that satisfaction is an indicator of, or a necessary precursor to, learning.
Many trainers, facilitators, and training managers assume that learner satisfaction leads to learning because the idea has intuitive appeal: if trainees liked the course, they probably learned more. That feels right, doesn’t it?
Reporting on learner satisfaction is also an easy way to satisfy stakeholders. If X% of the learners liked the training, it must have been a success. It’s easy enough to declare victory and move on without measuring learning, behavior change, or business results.
Many people have taken courses that were easy and taught them very little, yet expressed satisfaction precisely because the course didn’t demand much. Think of the courses that simply made you feel good but delivered minimal benefit. So if satisfaction isn’t a measure of learning, what should we use it for? Should colleges stop measuring it? Should training managers stop asking trainees whether the experience was satisfactory?
Here are some recommendations about how to use satisfaction measures in training evaluation:
- Use satisfaction to assess continuance. As Goldstein and Ford noted many years ago, satisfaction measures can be used to assess whether your program should continue. This is especially important for optional training programs for employees. If employees hated the experience, the program may not be worth continuing.
- Use satisfaction measures to assess implementation. For e-learning professionals, satisfaction is especially useful when measuring the rollout of a program. Was the user experience (UX) a good one? Was it easy to access the program through the LMS? These are important matters in e-learning. Satisfaction measures are also relevant to face-to-face or virtual instructor-led training (VILT), where access to the materials, session, and presentation all matter to the trainee.
- Use satisfaction to measure content relevance. Ask trainees whether they found the training content relevant to their work. Perceived relevance is a clear antecedent of behavior change, and it can help organizations judge whether a vendor’s content, or even content developed in-house, actually fits the organization’s context.
Training evaluation is an important part of the training process. Satisfaction doesn’t correlate with learning, but we shouldn’t expect it to. What we should expect is that it helps us, as training professionals, deliver a better experience for trainees.