26 July 2019

Learning Measurement: Using All the Levers to Have Impact

Dani Johnson
Co-founder & Principal Analyst

TL;DR

  • This is the second in a series of 7 articles highlighting learning measurement patterns.
  • This article focuses on using all levers (data sources, measurements, metrics) to have learning impact on an organization.
  • A huge thanks to forMetris for sponsoring this research!

 

“We’re trying to make sure that we’re matching up our learning to the way the [workforce] navigates – and the actual day-to-day.”

Brian Floyd, Director, Sales International Training Development, Behr

Newsflash: most learning doesn’t happen in a classroom. We’ve been aware of this for years, and it has prompted several sound and important movements, such as a focus on social learning (learning from others), microlearning (learning that better fits around work), blended learning (learning designed to take several methods and modalities into account), and on-the-job learning (the original way to learn, which has made a comeback in recent years through stretch assignments, special assignments, internships, etc.).

However, while many L&D functions have introduced these different modalities, they are often harder to measure and to tie to actual business results (if we had a nickel for every time we’ve been asked, “Yes, but how do you measure informal learning?”). Because of the pressure on many L&D departments to show value, a lot of them continue to measure the things they know they can measure, in a way that will show some sort of positive outcome. We think there may be a better way.

L&D functions should pay attention to conditions and environment

Late last year, we spent a significant amount of time trying to figure out how L&D functions should be thinking about their responsibilities. Because of the changes in technology, learner expectations, and the like, what they used to do (create and disseminate content, ensure compliance, etc.) is woefully inadequate.

Why? Organizations appear to be starting a broader conversation about employee development. It’s no longer just about “learning”; it also includes career development and performance. And if you think about it, this makes a lot of sense: it’s difficult to develop individuals if you don’t know where they’re going (career) and how they’re doing (performance).

This larger discussion further complicates L&D’s ability to have and show impact. If we are no longer paying attention only to learning in its purest, most formal form, and if we must now account for individual career paths and performance, then using the course as the unit of measurement and measuring things like completions simply doesn’t make sense anymore.

A framework for thinking of conditions

Instead, some of the more evolved organizations we talked to appear to be taking more of an environmental approach – focusing on the conditions and how they encourage, motivate, and prompt learning.

This isn’t the first time we’ve talked about conditions. As part of our Learning Technology Landscape project last fall, we spent some time thinking through the changing nature of L&D responsibilities and identified six areas that make up the conditions encouraging an employee’s development experience.

 

Figure 1 Connecting: The New (Old) Way to Learn

A New Employee Development Model | Source: RedThread Research, 2019

 

These six areas are:

  • Plan: Defines how organizations encourage and enable their employees to plan their careers – inside the company and, in some cases, outside it. Planning includes career paths, skills gap analyses, and other things that help employees get where they want to go.
  • Discover: Defines how organizations enable their employees to find the types of content and opportunities that will take them in the direction they’d like to go in their careers.
  • Consume: Defines how organizations enable their people to access and consume content.
  • Experiment: Defines how organizations provide opportunities to practice new skills and knowledge – both on and off the job.
  • Connect: Defines how organizations enable people to connect with and learn from each other – again, inside and outside the organization.
  • Perform: Defines how organizations enable their employees to perform better on the job and learn while doing it.

As L&D functions begin to pay attention to all of these areas, their job gets bigger, but also infinitely more impactful. Instead of relying solely on creating and disseminating information, they have many more levers to pull. Not only that, but the levers can be tailored to the goals of the organization.

Measuring conditions, not courses

The many levers associated with a focus on creating the conditions for learning also provide L&D functions with many, many more data points. These data points offer more discrete information about what’s going on in the organization and make it easier to draw conclusions about what should be tweaked, or how people should be nudged, to move them toward the outcome the business wants.

Let’s assume, for example, that a company has a goal of encouraging more innovation from its employees, and that it has decided to measure that goal in two ways: tracking the number of new ideas submitted by employees and counting the number of new patents the company receives each year (incidentally, these two measures lead to a discussion of leading and lagging indicators, which we’ll cover in our next installment).

Once the L&D function understands how the company is measuring success, they can identify things they can do to encourage more innovation within each of the six conditions for learning, and then identify metrics they can use to track them. It may look something like this:

 

Figure 2 Using All the Levers to Have Impact

Example: Innovation Condition Metrics | Source: RedThread Research, 2019

Measuring conditions provides a broader view of the system and allows L&D functions to respond more holistically. It also allows them to correlate the things they are doing with the organizational goal. While these are simple correlations (which means they’re not necessarily causal), understanding whether development opportunities in innovation are trending with the organizational goal, against it, or not at all can provide a good deal of insight into how things are going and how they should be tweaked to encourage more of the right things.
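To make that concrete, here is a minimal, hypothetical sketch in Python (pandas) of the kind of correlation check described above. The condition metrics ("experiment_hours", "connect_posts") and all of the numbers are invented for illustration; they are not taken from Figure 2 or from any company we spoke with.

```python
# Hypothetical sketch: correlating condition-level learning metrics with a
# business outcome such as patents filed. All metric names and values are invented.
import pandas as pd

quarterly = pd.DataFrame({
    # Invented metric for the "Experiment" condition: hours logged in innovation labs
    "experiment_hours": [120, 150, 180, 210, 260, 300],
    # Invented metric for the "Connect" condition: cross-team community posts
    "connect_posts": [80, 95, 90, 110, 130, 160],
    # The business outcome the organization actually cares about
    "patents_filed": [3, 4, 4, 6, 7, 9],
})

# Simple Pearson correlations between each lever and the outcome.
# These show whether a lever trends with the goal; they do not establish causation.
correlations = quarterly.corr()["patents_filed"].drop("patents_filed")
print(correlations.sort_values(ascending=False))
```

A strong positive correlation suggests a lever is trending with the goal; it doesn’t prove the lever caused the outcome, which is exactly the caveat above.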

It also allows L&D functions to do safer – or more informed – experimentation. Instead of taking shots in the dark or trying to tie a single course to a business result, L&D functions can make informed decisions about what is working and what is not, and confidently swap out the things that are not.

How leaders are doing it:

The L&D function that focuses on Behr’s sales organization implemented A/B testing to get a feel for what its workforce wanted to watch. The team took two equally rated subjects and created videos of different lengths – some 45 seconds, some 2 minutes, some 5 minutes. Their going-in assumption was that their audience would prefer the shorter videos, but by tracking engagement data they determined that the sweet spot was actually about 2½ minutes.
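For a sense of what the underlying analysis could look like, here is a minimal, hypothetical sketch in Python (pandas). The video lengths come from the example above, but the watch-percentage data and column names are invented, not Behr’s actual figures.

```python
# Hypothetical sketch: summarizing video engagement by length to find a "sweet spot".
# Only the tested lengths (45 s, 2 min, 5 min) come from the example above;
# the watch-percentage figures are invented.
import pandas as pd

views = pd.DataFrame({
    "video_length_sec": [45, 45, 45, 120, 120, 120, 300, 300, 300],
    "pct_watched":      [0.96, 0.91, 0.93, 0.94, 0.95, 0.92, 0.58, 0.63, 0.55],
})

# Average completion rate per video length; the highest mean suggests the sweet spot.
summary = (
    views.groupby("video_length_sec")["pct_watched"]
         .agg(["mean", "count"])
         .sort_values("mean", ascending=False)
)
print(summary)
```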

Behr also uses engagement data to understand what is of interest – what skills and knowledge the workforce wants. The team focuses not only on national initiatives and mandatory-content analytics, but also spends time analyzing what the workforce is looking at that isn’t mandatory. This helps identify gaps and trends the team is experiencing at the store level. That information is useful not just for L&D – it’s also useful for the rest of the organization.

We recognize that a focus on measuring and analyzing conditions or the environment is a departure from the way most L&D functions currently show impact. But we think it’s better. And while we ran across exactly no companies that have completely adopted this way of working (we just invented it last fall, but we think it’ll catch on), several were starting to pay attention to environmental factors as part of their measurement strategies.

We know that this point – using all the levers – may feel like a bit of an aside from the main discussion of learning impact, but we were surprised in our discussions by how much what you do affects how you measure. You don’t get the extra data points unless you’re paying attention to the different areas, and you can’t have as much impact if you focus exclusively on courses and other traditional offerings. We think this will be crucial to L&D functions becoming key players in their organizations in the future.

Questions to ask:

  • How often are we defaulting to courses or initiatives in our efforts to build a skilled workforce?
  • To what extent are we leveraging other types of learning?
  • How many of our metrics measure things that are not typically L&D-related, but more geared toward the business?
  • How much are we experimenting with different types of learning?
  • Do we have the ability to be agile – swapping things out that aren’t working for things that may work better?

Written by

Dani Johnson

Dani is Co-founder and Principal Analyst for RedThread Research. She has spent the majority of her career writing about, conducting research in, and consulting on human capital practices and technology. Her ideas can be found in publications such as Wall Street Journal, CLO Magazine, HR Magazine, and Employment Relations. Dani holds an MBA and an MS and BS in Mechanical Engineering from BYU.
