How to Improve Course Evaluations—and Measure a Course’s True Value

How often do you evaluate your evaluations? By revamping course evaluations to ask the right questions at the right time, you can gather what you need to improve program design, customer retention, and course marketing.

Typically, evaluations ask learners to rate how they feel about the content, instructor, etc. on a scale of 1 to 5. But these metrics don’t tell you if the course was effective. Did learners apply their new skills and knowledge? Did they improve their job or business performance as a result? Your course evaluations should help you measure a course’s value, not just tell you how learners felt about the experience.

Besides asking better questions at the end of a course, follow up with learners a few months after the course to see if they’ve applied what they’ve learned and if the course made a difference. This feedback will help you understand whether the course is achieving its goals and where you need to improve the design and/or delivery of the program.

Post-course evaluations show your association where your programs are hitting the mark—and give you the opportunity to collect positive marketing testimonials about the course’s value. You can also gather evidence supporting the need to allocate more resources to instructional design or instructor training.


How to improve traditional course evaluations, aka smile sheets

Smile sheets have traditionally focused on learner satisfaction, which is not a measure of effectiveness. Two studies determined that “smile sheets were basically uncorrelated with learning results.” But smile sheets can still serve a purpose as long as you improve them. In fact, Will Thalheimer wrote a book about them: Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness.

Thalheimer says smile sheet questions should focus on research-based factors, such as whether the learning supported:

•    Understanding: Was new information clear? Did learners understand it?

•    Remembering: Were there enough activities, like retrieval practice, to help them remember new information?

•    Motivation to apply what was learned: Are they ready to apply their new skills on the job?

•    Post-course follow-through: What do they need to support them as they apply these new skills on the job?

He calls these the Four Pillars of Training Effectiveness. A few years ago, he came up with The World’s Best Smile-Sheet Question: How able are you to put what you’ve learned into practice in your work? In the linked article above, he provides answer options so learners can select the one that best describes their current readiness. He also suggests these evaluation questions:

•    Now that you’ve completed the learning experience, how well do you feel you understand the concepts taught?

•    After the course, when you begin to apply your new knowledge at your worksite, which of the following supports are likely to be in place for you?

•    Which aspects of the learning helped you the most in learning what was taught?

•    What could have been done better to make this a more effective learning experience?

Thalheimer doesn’t like the numerical Likert scale, where 1 to 5 typically represents Strongly Disagree, Somewhat Disagree, Neither Agree Nor Disagree, Somewhat Agree, and Strongly Agree. He says the answers are vague and may mean different things to different people. Worse, they don’t give associations actionable information.

“You've got an average score of 4.2 (out of 5). Is that good or bad? Where's the cut-off between good and bad, exactly? How can that 4.2 score help you [improve] training materials? If you do want to improve something in the training, what should you improve?”

And don’t use a Net Promoter Score either.


Learning-Transfer Evaluation Model

Thalheimer created the eight-tiered Learning-Transfer Evaluation Model (LTEM)—the linked report is well worth reading. His evaluation philosophy is focused on transferring learning from the course to the workplace and designing learning that makes that possible.

The eight tiers of LTEM are:

1.    Attendance
2.    Activity
3.    Learner Perceptions
4.    Knowledge
5.    Decision-Making Competence
6.    Task Competence
7.    Transfer
8.    Effects of Transfer

Tiers 1 and 2, Attendance and Activity, measure a learner’s presence or participation in activities, neither of which indicates learning.

So, we move on to tier 3, Learner Perceptions, which is where smile sheets come in. Thalheimer reminds us: “Just because learners say they like a learning event doesn’t mean they learned. Therefore, surveying learners on their general satisfaction—and on other factors not related to learning effectiveness—is an inadequate way of evaluating learning.”

In tier 4, the learner passes a quiz or test. But knowledge recitation is not knowledge retention. “Just because learners demonstrate a skill or competency during a learning event doesn’t mean they’ll remember how to use the skill or competency later.” That’s the tricky part—and it’s the real indicator of an effective course.

Useful metrics begin at tier 5, where learners demonstrate their decision-making competence on the job. They know what to do when faced with scenarios similar to those covered by course content.

To prepare learners, Thalheimer says, “We can present [them] with realistic scenarios and ask them to make realistic decisions based on their interpretation of the scenario information.”

Tier 5 asks learners only to make decisions, not to implement those decisions. In tier 6, they demonstrate whether they can perform tasks competently using their new skills and knowledge. Can they actually do in real life what they were taught?

To help learners develop task competence, Thalheimer recommends using the SEDA model during the course. SEDA stands for Situation, Evaluation, Decision, and Action. He says, “We need to present them with realistic Situations, have them Evaluate those situations (again, without help or hints—because this is an assessment!), enable them to make Decisions, and have them take Actions in line with those decisions.”

A few months after the course ends, at tier 7, learners are using their new skills and knowledge on the job. They’ve transferred learning to the real world. Success!

In tier 8, evaluations should assess the impact of this transfer on the learner and other stakeholders. Learning for learning’s sake is a wonderful experience, but what’s most important is the impact of the experience on their job performance, career advancement, project assignments, co-workers or team, business or employer, employees, and/or clients/customers.

Ask learners at the start of the course about potential impact. Give them something to work toward.

•    What are their goals?
•    How do they want to improve job or business performance?
•    What type of opportunities would new competencies open up for them?
•    What impact might their new competencies make on their co-workers, employer, employees, and/or clients/customers?

Looking back over these eight tiers, you’ll notice that most course evaluations stop at tier 4. So how do you know whether learners have achieved the goals of tiers 5, 6, 7, and 8? Send follow-up course evaluations to learners a few months after the program ends. You may even want to check in a year later too.

Post-course evaluations show learners how focused you are on results. You care about the value they receive from your association’s programs. Who else in your market does that? These evaluations are touchpoints that keep you connected with course alumni and top of mind when they decide to continue their education.
