I’m constantly concerned about the relationship between learning interventions and performance at work. It’s an age-old question that dogs our profession. It’s also one of the hardest questions to answer, because there’s a lot of nuance involved.
Did a learning intervention directly improve performance at work?
This also presents serious challenges for advocates of social and informal learning. If all learning at work is about performance improvement, then how do you prove that on-the-job learning or a coaching conversation had a positive impact on performance without finding a way to track, measure and evaluate it?
This, right here, is the biggest challenge to L&D.
One of the elements of this challenge is that there’s no way to truly create controlled environments where you can test intervention A against intervention B without falling foul of ethical issues.
For example, let’s take Big Box Company. They want their managers to go through business acumen training.

You put Group A through generic, off-the-shelf training on what business acumen means and how they can better understand the business by understanding generic management terminology. They’re also given performance support in the form of on-the-job coaching, curated resources that help them read more widely about the topic, and generic e-learning.

You put Group B through tailored training delivered by a senior leader who provides real insight into the way the business operates, what its key drivers are and its goals for the next few years. Their performance support is also in the form of on-the-job coaching, their curated resources are specifically related to the industry they’re in, and the e-learning is specific to the company.

You put Group C through nothing.
Can you build in measurement to know which intervention led to performance improvement? Yes, you probably could. But could you claim that business improvement was because of that intervention and nothing else? No, no you can’t.
See, in the medical and psychology fields, when you run an experiment you control for every factor except the thing you specifically want to measure and evaluate. At work, in organisations, you simply cannot control for everything except the learning intervention.
The best you can do in this situation is correlate one thing with the other. That’s not causation, and it’s the closest you’re likely to get. But here’s the rub. What if the performance of both Groups A and B improved? How do you then justify one intervention over the other? Was Group A’s generic training actually better than Group B’s targeted intervention? Worse, though, what if the performance of neither group improved? How do you then justify the interventions at all?
And right there is the challenge of advocating for social and informal learning processes.
What you begin to realise is that what you need is what Julian Stodd calls scaffolded learning. The intervention itself has to be part of the business process, so that it’s seen not as an adjunct but as a core act of working.
There’s an experiment to be had in doing the above in a way that is ethical and fair. The way I’ve described it here is quite light, and there’s a lot of detail that could be part of the mix which I’ve not included. I just wanted to highlight how challenging it is to design and build learning interventions at all, even without the promise of performance improvement attached.