In L&D, what gets measured, gets done. Right?

Written by Robin Hoyle

It’s one of those oft-cited phrases, isn’t it? A throwaway cliché, designed to focus attention on what’s important or to motivate individuals to do the things that matter. “What gets measured gets done!”

In some respects it is true. By setting measures – or targets – for certain activities or outcomes, we communicate to the organisation what is important. Hopefully, this shows what the organisation cares about and so influences individuals to focus on the things that matter.

But it is only part of the story. 

Measures are not always all they are cracked up to be. Yes, we can count things, but are we counting the right things? Just because we can associate a quantity or a number with some aspect of performance, does that mean it matters?

In learning and development, we are far from immune to the malady of measurement missteps. 

You’re probably familiar with the idea of vanity measurement. This refers to situations where we collect data on how many people looked at something or attended an event. Clearly, if people viewed that video or downloaded that infographic or logged on to our Learning Experience Platform, then it must be good – right?

Not right. At least, not necessarily right. The metrics used on digital platforms have often been adapted from marketing tools. But counting clicks and views is not the same as measuring learning and is some distance from assessing impact. That, I would have thought, was obvious. Yet, when I am asked to judge various submissions for learning awards, I frequently come across these same vanity numbers being described as proving impact.  

These days, the marketing metrics L&D folks value are not even seen as informative by marketers themselves. They realised some time ago that capturing eyeballs and attention is only useful if you want to sell advertising.

Now, the focus is on conversion. In other words, marketers have recognised that the number of views only matters if the viewers then go on to do something, like make a purchase. The same is true in learning. Any number of colleagues viewing our video or digital module doesn’t amount to a hill of beans unless some kind of action follows. And impact can only be satisfactorily measured if that action positively changes workplace behaviour.

We have also accorded importance to measures which, time and again, have not been found to correlate with change in behaviour or development of skills.

Chief among these is the quiz at the end of a digital module. Completing a bit of e-learning and then being asked to answer badly written multiple-choice questions is not ‘proving learning’. Doing so before logging off from the module you logged on to 30 minutes earlier is not an example of spaced practice. Potentially, it proves you have a slightly longer memory than a goldfish, but other than that, it is a bureaucratic process designed by someone who thinks that setting a test tells them something about what people have learned.

Happy sheets are another measure that is much referenced without being significant in terms of quality, learning or impact. Giving someone a survey that asks how satisfied they are with the course is far from objective, and probably says more about the comfort of the seats, the lunch provided and who they were sitting next to than it does about the effectiveness of the event. Making your happy sheet into some kind of faux Net Promoter Score only compounds the error. Net Promoter Scores are designed to work with large numbers of people. Drawing an NPS of +67 from a cohort of six before they leave the classroom is about as meaningless as numbers can get.
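
To see why, consider how an NPS is calculated: the percentage of promoters (those scoring 9 or 10) minus the percentage of detractors (those scoring 0 to 6). Taking a purely hypothetical cohort of six, five promoters and one detractor give 83% – 17% = +67. Had that one detractor ticked 9 instead, the score would be +100: a single person’s mood swings the result by more than thirty points. At that sample size, the number is noise dressed up as precision.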


Targets

What’s more, some of these questionable measurements become even more questionable when they are converted into targets.

Charles Goodhart was a Bank of England economist. In the 1970s, he was credited with creating the adage most often expressed as: “When a measure becomes a target, it ceases to be an effective measure.” Goodhart was talking about monetary policy, but at about the same time the social scientist Donald T. Campbell came up with what became known as Campbell’s Law, which said much the same thing about testing in US schools.

Effectively, these ideas suggest that as soon as targets are set, they have the effect – and often the intention – of skewing behaviour towards achieving the target. In the process, the purpose of the initial measurement and monitoring is often lost. In other words, achieving the target becomes an end in itself, however it is achieved and regardless of the effect on the original purpose of the measure.

So, while we may believe that engagement in – and completion of – a specific course or sequence of learning activities is worthwhile, it should never be elevated to the status of a ‘good thing’ in its own right. Learning must serve the greater goal of performance improvement, however that performance is measured.

Targets for completion of modules or numbers of video views are just the digital equivalent of measuring the number of bums on seats. Being at the end of a piece of eLearning and in possession of a pulse has little to do with performance improvement and nothing to do with measuring impact.

So, what do we measure?

First, what is the baseline? How are people performing before the learning intervention? From that assessment, what shift in performance would represent positive and valuable progress?

Those are important numbers, and they are difficult to gather unless you are close to the business and understand the roles your people play. Is it about errors in using software? Is it about the time taken to complete a task? Is it about the number of service users satisfactorily served? And what does ‘satisfactorily’ mean in that context?

Understanding these business metrics and how your learning intervention potentially shifts those metrics is the essence of learning design and the only worthwhile basis for measuring the impact of what you do.

In more ways than most people think, performance before and after can not only be measured, but a return on investment can be built into that measurement. In other words, how much did the learning intervention cost (including participant time) and what was the value of the performance uplift to the organisation?
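
As a purely illustrative calculation, with made-up numbers: ROI (%) = (value of performance uplift – cost of intervention) ÷ cost of intervention × 100. If a programme costs £20,000 including participant time, and the uplift it produces – fewer errors, faster task completion, whichever metric the business already tracks – is worth £50,000 over the year, then the ROI is (50,000 – 20,000) ÷ 20,000 × 100 = 150%. The precise figures matter far less than the discipline of putting both sides of the calculation on the table.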

If it is too difficult to isolate the performance uplift attributable to your learning activity, then how about having a control group? One group that does not participate in the learning intervention and one that does. What happens differently in those two groups? Do people leave, get promoted, take on new responsibilities? Is money saved? Is efficiency increased? Do surveys suggest higher employee satisfaction?

The measures that matter in your organisation already exist. If someone somewhere is monitoring a performance metric, then the best way of showing our relevance and value to the organisation is to positively shift the dial on those measures – not to achieve metrics we have invented to make ourselves look important.

Don’t get me wrong: L&D are not the only people who grasp at the straw of quantitative measurement. As I have said before: “Are we measuring what is important, or according significance to the things we can easily measure?”

As L&D people, we should be reflecting back to our colleagues in other departments where they are measuring the wrong things. Because if ‘what gets measured, gets done’ is true, then the chances are they could be doing different and more impactful things.

And isn’t helping people do better things better our job? 

I think it is.

This article was originally published on TrainingZone.
