Tips for Significantly Enhancing Your Learner Surveys

The odds are good your learner surveys aren't so good. A harsh pronouncement, but probably true, unless you're already following the advice of Will Thalheimer, founder of Work-Learning Research. He literally wrote the book on learner surveys: Performance-Focused Learner Surveys: Using Distinctive Questioning to Get Actionable Data and Guide Learning Effectiveness.

At the 2024 Learning Business Summit, Will shared tips for improving learning surveys. If you didn’t attend, here’s some of his advice for designing performance-focused learner surveys that deliver the data you need to improve the impact of your education programs. 

The problems with traditional program evaluations

In Will's research, 65% of learning teams say they aren't happy with their learning measurement tools and want to make modest or substantial improvements. In his poll of session attendees, 45% said their learner survey data was only somewhat useful for improving programs, and 35% said their data was not very useful.

Why is this happening? When you measure learning at the end of a program, you're mostly measuring comprehension; learners haven't had time to forget anything yet. In the session chat, attendees confirmed this, saying their evaluations measure reaction more than learning. Association leaders would rather know whether learners liked the program than whether they learned anything.

Researchers say traditional program evaluations (aka smile sheets) have no correlation with learning. High marks could mean anything: learners are overconfident about how much information they'll retain, and that overconfidence skews their evaluations.

Will told a story illustrating another learner bias. In sales training, tough instructors received low evaluation scores but produced the most high-performing salespeople. This finding reminded me of the learning science research presented by Brian McGowan in his Summit session: learners don’t always know what kind of instruction is good for them. They prefer easy, ineffective instructional methods over challenging, effective methods.

Traditional evaluation methods, like the Likert scale—with its “strongly agree,” “agree,” etc. options—don’t provide useful data. If you get an average rating of 3.8, what does that really mean? 

What good learner survey questions look like

Learner surveys must ask unbiased questions focused on how people learn. Here’s an example from Will of a good learner survey question:

Question: HOW ABLE ARE YOU to put what you’ve learned into practice in your work? CHOOSE THE ONE OPTION that best describes your current readiness.

  A. My CURRENT ROLE DOES NOT ENABLE me to use what I learned.
  B. I am STILL UNCLEAR about what to do and/or why to do it.
  C. I NEED MORE GUIDANCE before I know how to use what I learned.
  D. I NEED MORE EXPERIENCE to be good at using what I learned.
  E. I CAN BE SUCCESSFUL NOW even without more guidance/experience.
  F. I CAN PERFORM NOW AT AN EXPERT LEVEL in using what I learned.

Why Will’s learner survey questions are more effective than traditional questions

What did you notice? 

The options aren’t what the learner expects, so they attract more attention.

They elicit more valuable information. They give the impression you’re taking the learner’s experience more seriously, especially since the options range from negative to positive impact.

The questions are about the learner, not the program, instructor, or venue. The impact on the learner is what matters. 

The options have more granularity. Will said he adds an over-the-top choice, like option F, to slow learners down so they'll consider all the options more carefully.

Notice the lack of jargon, like "learning objectives." Learner-friendly language helps learners make the right choice, and the uppercase phrases help them home in on the gist of each option.

The resulting data is more useful. You can see the percentage of learners who chose each option and decide if those results are acceptable. If 30% of learners need more guidance before using what they learned (option C), your program is failing them. If 10% are still unclear about what to do (option B), what’s going on there?

Good surveys focus less on learner satisfaction and course reputation, which aren't correlated with learning impact. Instead, you need to find out if learners:

  • Comprehend the material
  • Remember it
  • Are motivated to apply it
  • Have follow-up support

Craft questions that send messages to your learners and instructors

Will suggests adding nudging questions to your surveys. The example below nudges the learner to follow through with what they’ve learned. 

Question: After the course, when you begin to apply your new knowledge at work, which of the following supports are likely to be in place for you? (Select as many items as are likely to be true.)

  A. MY MANAGER WILL ACTIVELY SUPPORT ME with key supports like time, resources, advice, and/or encouragement.
  B. I will use a COACH OR MENTOR to guide me in applying the learning to my work.
  C. I will regularly receive support from a COURSE INSTRUCTOR to help me in applying the learning to my work.
  D. I will be given JOB AIDS like checklists, search tools, or reference materials to guide me in applying the learning to my work.
  E. Through a LEARNING APP or other means, I will be PERIODICALLY REMINDED of key concepts and skills that were taught.
  F. I will NOT get much direct support but will rely on my own initiative.

You can see how these options would prompt your team to think about the support you could give learners, and prompt learners to think about what they can do to make sure they don't forget what they've learned.

Here’s another example of a nudging message.

Question: Compared to most webinars, how well did the session keep YOUR attention? Select one choice.

  A. I had a HARD TIME STAYING FOCUSED.
  B. My attention WANDERED AT A NORMAL LEVEL.
  C. My attention RARELY WANDERED.
  D. I was very much SPELLBOUND throughout the session.

You can ask questions to nudge a positive perception of your brand, like these questions about accessibility, belonging, and barriers:

  • How well did the design and organization of the learning program enable you to fully participate?
  • How much did you feel like a valued and respected member of the group during the learning?
  • How well did the learning experience prepare you to deal with the barriers that you may face as you use what you learned in your work?

Three questions to add to your learner surveys

Will suggests adding these open-ended questions to the end of your survey to elicit valuable insights.

  • What aspects of the training made it MOST EFFECTIVE FOR YOU? What should WE DEFINITELY KEEP as part of the training?
  • What aspects of the training COULD BE IMPROVED? Remember, your feedback is critical, especially in providing us with constructive ideas for improvement. 
  • Is there anything else we should have asked about? Is there anything else you want to tell us?

Notice the use of “we” and “us,” which makes these questions sound like a real person talking. 

You need more useful learner data to improve your programs and rise above the competition. What’s the point of using the same old learner evaluations if they’re not eliciting the most essential information: did the program make the intended positive impact? 
