Learning-Transfer Evaluation Model with Will Thalheimer

Written by Robin Hoyle

Key points summary

01:16
Will discusses the scientific research he completed and the practical recommendations he created that aid in building better learning interventions.

02:44
The Learning Transfer Evaluation Model (LTEM) has eight tiers, one of which focuses on end results. Will goes on to discuss how it’s best not only to measure knowledge, but also to measure people’s decision-making competence and their ability to implement and use skills.

06:13
Will discusses his views on the Kirkpatrick Model and explains how the purpose of a learning evaluation model is to help us think better and do better. It’s a tool to nudge us toward good behaviours and away from bad ones.

09:42
Industry research shows that only 15% of learning professionals believe that learner validation questionnaires are helpful, highlighting a large gap. He also explains that the correlation between smile sheets and learning results is weak.

13:06
Will covers the four pillars of training effectiveness and explains how focusing on these provides meaningful data and meaningful responses, so that informed decisions can be made about training moving forward.

21:00
Many organisations took part in the Learning Trends survey, which showed that the top three areas of focus in 2023 are all online: e-learning came first, online instructor-led training second and videos third.


Conversation transcript

Hello, and welcome to the Workplace Learning Matters series from Huthwaite International. In this episode, Robin talks to consultant, speaker, researcher and author Will Thalheimer about the Learning Transfer Evaluation Model. This is the audio recording from that conversation.

Robin Hoyle: So, Will Thalheimer. Welcome to the Workplace Learning Matters podcast. Thank you for joining us today. You came up with a learning transfer evaluation model five years ago now. So, I guess the first thing to ask is what is it that interested you in looking at learning transfer and the whole evaluation piece in the first place? What was your inspiration for looking into that area?

Will Thalheimer: Sure, great question, Robin, and thanks for having me. First of all, for those of your listeners who don't know what LTEM is, I'll just briefly explain it. It's an evaluation model used in the learning and development field. It's got eight tiers. It's designed to help learning teams evaluate their learning programmes.

I'm a research guy. I spent a lot of time focusing on scientific research and translating that into practical recommendations for how to build better learning interventions. And I've been doing this for a long time. About a decade ago, maybe a little bit more than that, I got interested in learning evaluation. I realised, wow, this research is really good. We should use it to design our learning, but at the same time, that's not enough. If we don't create learning that's effective – if we don't get feedback on that, we're not going to be able to continually improve it. I went down a rabbit hole.

Learning evaluation is a complicated space. I’m still learning. I realised some of the things that we were doing – smile sheets, happy sheets, focusing on attendance, butts in seats – weren’t that effective. That’s how I got interested in the whole learning evaluation space.

Robin Hoyle: And I recognise that focus on attendance. I’ve been in the learning and development space for a lot of years, and I can remember going to various committees and management boards and others and telling people how many people had sat on how many seats in how many conference rooms over the previous year. And everybody would go, “Well, that’s terrific”. And then they would still give me a 5% budget cut. So clearly, I wasn’t achieving very much, I guess. What have you learned from how people have used LTEM to make the case for better practice within not only learning design and delivery, but also for that transfer process?

Will Thalheimer: One of the things that LTEM does, and it's probably too much to go into all eight tiers, but one of the things it does is to focus on some of our end results, like learning transfer and the effects of transfer, behaviour change and organisational results, the impact on learners, et cetera. But also, to drill down into some of the major learning goals that we have. Not only measuring knowledge, but also people's decision-making competence and their ability to actually implement skills, use skills – we call it task competence at tier six – and then there's some things in there that are at lower levels, things like learner perceptions, measuring activities and things like that.

LTEM has been used in a lot of different ways. Some people just use it as a gap analysis: what are we doing now in our learning evaluation, and what could we do better for next time? Very simple. And most people are just doing attendance, completion rates and learner surveys – that’s all they’re doing. So, then they look at it and they go, oh, well, maybe next year, or maybe for some strategically important course, we could measure decision-making competence. Maybe our situation here is a little precarious.

Maybe we need to improve our reputation, maybe we need to now look at behaviour change and look at transfer. So, it's really just a way to think about it.

But also, people have used LTEM – and some of this has been surprising to me – to support the learning design process or to support conversations they have with their internal and external customers. “Hey, let’s talk about what you want to achieve”, and by looking at LTEM and the ways you can measure it, it focuses on actual outcomes as opposed to sort of, “Oh, we want everybody to kumbaya, kumbaya.”

Robin Hoyle: Yeah. And I think, interestingly, my colleagues and I who do learning design activities here at Huthwaite, that's how we use it. Yes, we would love to do a lot of learning transfer evaluation. The reality is, not many of our customers want to invest time, money, and resources into doing that detailed evaluation of what's changed in the workplace. They kind of expect us to have done that in advance. 

So, we use it as a design model. And I think that's fair enough to say, actually, how are we going to enable people to do the task competence that you talked about, or the decision-making competence and be able to see how that works? I think that's important for us to use it. And I guess that differentiates from, if you like, the industry standard evaluation model, which of course is Kirkpatrick.

Now, you said when you first came out with LTEM in 2018 that the Kirkpatrick model unintentionally undermines our efforts as learning professionals to improve our learning initiatives, which is quite a strong statement, Will. So, what led you to that conclusion? And how do you think LTEM plugs the gaps that are in the existing model?

Will Thalheimer: Yeah, no one has ever accused me of being a shrinking violet. So, look, first of all, the Kirkpatrick model, the four-level model – I often call it the “Kirkpatrick-Katzell model”, because it turns out that Raymond Katzell was instrumental in developing the four-level idea. It’s got some good points to it.

Let's step back a minute. What do we want a model to do? We want to be able to look at a model and to think better and to do better. We want the model to nudge us toward good behaviours and away from bad behaviours. The four-level model is really good at saying, “Hey, learning people, don't forget about behaviour change. Don't forget about actual results.” It also sends a good message about learner perceptions. Level one is not as important as some of the other things.

Now, the big weakness in the Kirkpatrick model, and why I said that I think it has led us astray a little bit, is that it puts all learning into one bucket. Level two is all about learning. We can measure learning in a lot of different ways. We could measure the regurgitation of trivia, the recall or recognition of meaningless facts and knowledge, or meaningful facts and knowledge, or decision-making competence, or task competence or skills. Now that’s a wide spectrum from trivia to skills.

But when you put it all in one bucket, people tend to default to the lowest common denominator. “Oh, we need a level two assessment. Let's do a knowledge check.” So, it just sends us down the wrong pathway. It misses out on having learning wisdom baked into it. That was the point that I was making and that really plays out.

People now look at LTEM and they begin thinking about the learning design. LTEM is pretty new, but already somebody’s done their doctoral dissertation on it. Dr. El Hamurabi’s hypothesis was: if you introduce LTEM to a learning team, they’re going to do better learning evaluation. Okay, well that’s a no-brainer. But she also had this hypothesis that they’re going to begin to be inspired to do better learning designs, and it makes sense, right? You say, “Oh wait, we’re going to measure decision-making competence. We should probably give our learners more practice making decisions during the learning.” Right. So, it really drives not just the learning evaluation but also learning design as well.

Robin Hoyle: Absolutely. And we've known what gets measured gets done for an awful long time. Why shouldn't that apply within the learning and development space as much as any other? Seems absolutely sensible to me.

You talked a little bit there around learner perceptions and as well as LTEM you've also written books about smile sheets, happy sheets, and end-of-programme reaction sheets from users. And I, along with many of my colleagues within L&D, I guess, have kind of thought we've got to do that because the sponsor wants to see some scores on the doors – that everybody gave it 4.7 out of five average, or whatever it is. So, we have to do that. But we kind of do that as a marketing exercise, rather than an L&D exercise. It has very little impact on performance.

Your books would suggest that that’s not the whole story and that we could use reactions from learners in a better way than we have traditionally done. So, what is it that we’ve been getting wrong with the learner validation questionnaire at the end of the programme?

Will Thalheimer: So, when I was on my own at Work Learning Research, I did some industry research and asked people, “Hey, are your learner surveys helping you to design better, deliver better, et cetera?” And only 15% of us learning professionals said, “Yeah, they're very effective in helping us.” So that's a pretty large gap, right? And it's aligned with what the scientific research says.

Believe it or not, there have been actual studies on learner surveys. What the researchers have found through two meta-analyses – studies that aggregate many other scientific studies – is that our smile sheet results are correlated with learning results at 0.09. Anything below 0.30 is weak; 0.09 is virtually no correlation at all. That means if you get high marks in your smile sheets, you could have a good course, you could also have a bad course. If you get low marks, you could have a poorly designed course, but you could also have a well-designed course. With traditional smile sheets, we really can’t tell.

So, when I saw that research, my first instinct, of course, was, well, throw them out, let’s not use them. And then I said, wait a minute. We’ve been doing this for decades. It’s a tradition. It’s also respectful to ask our learners what they think. So, then I said, “Can we make them better?” That’s what I’m always about, right? Like, hey, what are we doing? Can we make it better? It’s a sort of gap analysis. So obviously I wrote the book, and I wrote it twice. It’s now in its second edition. So, I believe, yes, we can make them better. But that’s what really got me into that.

Robin Hoyle: And in terms of making them better, the focus is a checklist around assessing and analysing your own happy sheets to be able to say, does it do this? Does it do that? Does it do the other? And I circulated that to my team and my colleagues to say we ought to be thinking about some of these things, we ought to be looking again at the validation survey and making sure that it’s appropriate to ask people’s opinion of what happened, but how can we gather real meaningful data? And I know that you’ve been big on talking about meaningful data, so I see a lot of impact surveys which tell me how many people watched a video or how many people downloaded a document or a piece of e-learning.

How do we get people to focus on meaningful data rather than the marketing type data of how many eyeballs have been on something?

Will Thalheimer: Well, it’s tough. First of all, in the book I say this – and at some point I’m going to count up the number of times I said it, because it must be over 20 – whatever you do, don’t just do learner surveys. So that’s number one. We should be doing a lot more than that. We don’t have to measure every course beyond that.

As you mentioned earlier, there are some costs and resourcing associated with building good learning evaluations. Learner surveys typically get a bad name, but they are our entry point. If we’re going to be managers of change, if we’re going to help champion change on our learning teams, we’ve got to do what’s relatively easy to do, or we’re just going to hit resistance.

In the learner surveys, what we can do is focus on effectiveness as opposed to satisfaction. Focus on things that we know about learning that are important. So, for example, I talk about the four pillars of training effectiveness. These are:

1. Does the learner comprehend the content? 
2. Are they motivated to apply it when they get back on the job? 
3. Are they likely to be able to remember what they've learned?
4. Are they getting after-training support to help them transfer this to the job?

We can measure a lot of other things, but these things are essential. What we have to do is get rid of some of the methodologies we use. And this is provocative.

Likert scales and numeric scales are deadly, in the sense that they’re too fuzzy. When something’s fuzzy and people can’t wrap their minds around it fully, then bias leaks in. We want the learners to make precision decisions about the questions we ask. And there’s all this research to show that we humans are not always good at knowing how learning works. If we ask learners questions that are somewhat fuzzy or not really focused on what’s important, they’re going to go off in all different types of directions. We have to guide the questions so that we get meaningful responses from people, and meaningful data that we can make sense of.


Robin Hoyle: And I think that’s right, because when you’ve got a five-point Likert scale, the difference between three and four is very difficult for people to understand. What does that mean objectively? It usually means “I had a good lunch, thank you.”

Will Thalheimer: If you’ve spent any time in the learning field and you’ve looked at a bunch of learner surveys using Likert scales, all the numbers are between 3.8 and 4.5. There’s no differentiation. And when there’s no differentiation, you can’t make a decision. It pushes us into paralysis. That’s not a good thing. Our goal is to make decisions based on the data. If we can’t make a decision, then it’s not worth doing.

Robin Hoyle: And that evidence-based practice has informed the work that you’re now doing. So, you’re now working with TiER1, and you’ve been focusing on mapping learning trends, but you’ve also been looking at the mapping of those learning trends between organisations that are doing okay and organisations that are exemplary. So, you’ve been able to ask, “What is it that the people who are doing really well are doing differently from the people who are bumbling along, so to speak?” What is it that the research has shown you, and how does that help you to help organisations be more effective? What is it that you found?

Will Thalheimer: Well, you’re in luck. We at TiER1 Performance have just put to bed the first draft of our Learning Trends report, so a lot of this is top of mind for me right now.

Learning Trends is a survey we’ve been doing for eight years now. Last year, we redesigned it to make it a diagnostic that learning teams and learning professionals can use to reflect on their own practices, to benchmark against the average response in the industry, but also against these exemplary organisations.

We define exemplary by asking: are you getting good learning results? Is your learning creating work performance? But also, are you doing good learning evaluation?

Our thought process is this: if you think you’re doing well and you’re actually doing good learning evaluation, then we’re going to trust those responses. We also include an element of professional development in there as well. So, we divided people up: about 10% of organisations are exemplary and the rest – 90% – are typical. And when we look at the exemplary results, we find things like this: the exemplary organisations are more fulfilled in their work. The individuals who work there actually feel better about the work that they’re doing. They are more innovative. These people from exemplary organisations feel that their learning team is more innovative, and they use many more learning tools and methods than people from typical organisations.

There's a long list of things which – I don't remember them all, but it's really fascinating that you separate people based on their work performance and also their learning evaluation, professional development, and you see these really strong differences in typical organisations versus the exemplary ones.

Robin Hoyle: And I think one of the things that I’ve seen in the past is that there are similar types of learning trends research, and what they look at is the organisations that are successful – which usually means shareholder return or revenues or profitability or whatever – and the idea is that they are models of good practice because they spend more resource on training and development. It kind of feels to me like the causal relationship is the wrong way around. They’ve got more resources, so they devote more of it to training and development, rather than they spend more on training and development, so they’ve got more resources. That doesn’t seem to me to have been clearly articulated by people who are looking just at big companies.

Will Thalheimer: Absolutely. I’m working on a new book now. It’s going to be called The CEO’s Guide to Training, E-learning and Work: Reshaping Learning into a Competitive Advantage. And for one of the chapters in there, I decided I really ought to look at what the research says on training. Is there good research, as you state? There is research that shows that if you do more and better training, you’re going to have better organisational outcomes. But the question is – maybe it’s flipped, right? – maybe organisations that are successful have more resources to spend on training.

Now, what the researchers do is try to do a timed analysis. So, they look at year one: what do you spend on training? In year three, what are your organisational outcomes? And they try to differentiate it that way.

I'm a little sceptical of that, so I like to look a little bit deeper. And research has found, for example, that training can improve attitudes and motivation, that leadership development actually works. So, there's a bunch of other research avenues that look at things. Also, we know from the learning research that if you do certain things, if you use retrieval practise, realistic practise, that people remember a lot more.

If you use spaced repetition, people remember more. There’s a lot of evidence, overall, that what we do, thankfully, does work. Now, does that mean it can’t be better? No, absolutely not. I’m a big believer that we’re sort of muddling along a lot of the time, doing good work when we could be doing excellent work.

Robin Hoyle: And I think one of the things that we’ve been spending a lot of time on recently is working on that workplace transfer phase. The idea of taking what you have learned and building new behaviours, adapting your existing behaviours, adopting new behaviours back in the workplace, and doing that in quite a rigorous, programmatic way.

And that really speaks to that idea of retrieval practice. But instead of it just being “let’s rerun that knowledge check that we did six weeks ago and see if you still remember it”, it’s actually about application. It’s about, well, can you remember what to do in this circumstance when you are in front of a customer or running a meeting or whatever else it is that you’re dealing with? Is that the kind of retrieval practice that you see giving real benefit?

Will Thalheimer: Absolutely. And because this is top of mind for me: in the Learning Trends survey, we asked people in organisations, what are the actual learning assets you’re going to be working on in 2023? And what was amazing is that the top three all have to do with online stuff. E-learning was one choice, the next one was online instructor-led training, and then videos. So, all of that is online now. Classroom training did make a rebound from years before, but I do think that we have an opportunity now, with the new tools that have bubbled up out of the pandemic, to add some asynchronous components, some after-training follow-through, if you will, some performance coaching to the learning we’ve done. And we’ve known this for years, right? We’ve talked about informal learning, on-the-job learning, learning in the workflow for years.

But I think we’re just beginning to figure out, well, how does that actually work? It’s really complicated. It’s not like, oh, we’re just going to do this. There’s a lot going on there. Should we leverage people’s managers? Should we use tools? What should we do?

Robin Hoyle: But that idea of workplace support, which you talked about around LTEM, seems to me to be absolutely crucial, however it is provided. I think we’ve tended to go down the route of saying, well, it has to be the managers who do this, and there may be cases where that’s the right thing to do. And I would certainly hope that in many cases it is the right thing to do.

But the reality is, what people need is workplace support. They don’t necessarily need it just from one source. They need it from a range of sources: training people, peers or their managers.

Will Thalheimer: Absolutely. The notion that training works all by itself is crazy. The notion that we can do away with training is also crazy, because there are some things you just have to go deep with. You have to have your misconceptions challenged. You have to do practice. You’ve got to get feedback. You have to build new knowledge structures around some things, new ways of thinking. Both ends of that spectrum – training is no good, training is everything – are completely nuts.

Robin Hoyle: Yeah. And I call it the pharmacy model, which is that you go to the pharmacist, and you stick your tongue out and they give you some pills and you're cured. And people sometimes see training in the same light, get some pills in the form of two days in a conference room, and you're cured of whatever ills beset you.

Will, that’s been a fascinating conversation. Thank you very much for contributing to the Workplace Learning Matters podcast. I think people will find some genuine interest in that. We’re going to put links to some of the stuff that you’ve been doing alongside the podcast, so that people can go a little bit deeper should they wish to. But for now, Will Thalheimer, thank you very much indeed for your contribution.

Will Thalheimer: Thank you, Robin, for inviting me.

