Learning Measurement: Be Consistent – Develop a Data and Metrics Culture

August 23rd, 2019

Over the last few months, we have had the opportunity to talk to several learning leaders about their practices to understand how they have an impact on the overall business goals of their organizations. While each L&D function necessarily has (and measures) its impact differently, our interviews with learning leaders helped us identify several patterns.

This is the 5th in a series of 7 articles highlighting these patterns. A huge thanks to forMetris for sponsoring this research!

“Metrics aren’t always immediately useful.”

Susie Lee, Degreed

One of the reasons so many L&D functions struggle with learning measurement and learning impact is that they have no consistent data. In fact, according to Brandon Hall, only 51% of companies say that they are effective or very effective at measuring formal learning. And even fewer are effective when it comes to measuring informal (19%) and experiential (29%) learning.¹

While these statistics focus on a more traditional way of viewing learning and development, the fact that only 51% of organizations are effectively measuring formal learning – which, by the way, they have complete control over and for which they have complete access to the data – is telling. By and large, L&D functions do not have a data culture. But they could have one.

How should they start? Leaders told us that getting consistent data took two things: patience and standardization.

Be patient – it’s a virtue

Most L&D functions either really struggle to collect data and information on a regular basis or don’t do it at all. At least part of this struggle stems from the practice of focusing on one-time measurements. When L&D functions focus on calculating the ROI or learner satisfaction associated with one course or initiative, the tendency is to collect only the information needed to serve that one purpose.

This focus on point-in-time results means that longitudinal data, interactions, and correlations are hard to come by in many organizations. Interactions and correlations over time provide ongoing insights about what is happening and why. Without consistent L&D data, it is difficult, if not impossible, to understand the impact employee development is having on organizational goals. Understanding this impact is the first step in being able to make intentional decisions about where to go next.

Collecting data over time can be challenging, and the fact that data and metrics may not be immediately useful can add to that challenge. But establishing continuous collection goes a long way in building a data and metrics culture.

In our conversations with leaders, three pieces of advice for how to consistently collect data stood out:

  • Start where you are and keep at it. Metrics aren’t always immediately useful – making them useful often takes time, dedication, and investment. But the results are worth it. Several leaders emphasized the importance of starting where you are and building capability as you go along. There’s no need to boil the ocean, but you must be consistent. Rachel Hutchinson, director of L&D at Hilti, said her team collects data for six months on any given initiative or change: four months to get results and two extra months to make sure the results weren’t a blip in the data.
  • Consider continuous data feeds. This bears repeating: L&D functions should think in terms of continuous data feeds instead of static reports or one-off calculations of metrics, for two reasons. First, continuous data feeds provide the most recent data, giving L&D functions the ability to adjust to conditions more quickly. Second, it’s only slightly more difficult to set up a data feed than to ask for a one-time data dump. Working with IT and other business functions to set up feeds will ultimately save the entire organization time and effort.
  • Automate data collection. In circumstances where it is necessary to collect data (rather than using other sources), automate it! Learning leaders we spoke with do this by planning for surveys, evaluations, and feedback as part of the design process for any initiative, and by utilizing scheduling software or investing in measurement software that helps (Watershed, forMetris, etc.). A sketch of what automated, continuous collection might look like follows this list.
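To make the feed idea concrete, here is a minimal sketch: a script that appends a timestamped snapshot of a few metrics to an append-only SQLite table each time it runs. The database file, table, and metric names are hypothetical placeholders, not a reference to any product mentioned above.

    import sqlite3
    from datetime import datetime, timezone

    def append_snapshot(db_path, metrics):
        """Append one timestamped row per metric to an append-only feed table."""
        conn = sqlite3.connect(db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS learning_metrics ("
            "captured_at TEXT NOT NULL, metric TEXT NOT NULL, value REAL NOT NULL)"
        )
        captured_at = datetime.now(timezone.utc).isoformat()
        conn.executemany(
            "INSERT INTO learning_metrics VALUES (?, ?, ?)",
            [(captured_at, name, value) for name, value in metrics.items()],
        )
        conn.commit()
        conn.close()

    # Schedule this to run daily (e.g., via cron) so data accumulates over time
    # instead of living in one-off reports. The values here are placeholders for
    # whatever your LMS or survey tool actually exposes.
    append_snapshot("ld_feed.db", {"course_completions": 42, "avg_satisfaction": 4.1})

Because every run appends rather than overwrites, the table naturally supports the longitudinal comparisons described above.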

How leaders are doing this

One organization that participated in our roundtables highlighted their use of Google Analytics to understand which parts of their learning site people were paying attention to.

Using standardized tools and looking at data over time gave this organization ongoing visibility into its environment. Specifically, monitoring these metrics over time gave the L&D function a better understanding of the topics that were of most interest, as well as information about when and where best to approach employees for learning.

Standardization

The other part of the consistency story is data and metrics standardization. Why? In order to consistently monitor and make good decisions, data and metrics need to be correct and comparable.

Standardization also ensures that L&D data and metrics are consumable by other business functions, by central data analytics teams, and by other technologies and systems. L&D functions should start by identifying any existing data standards their organization may have and adopting them.

That said, our interviews with L&D leaders indicated that the first challenge was often standardizing data and metrics within their own department. They talked about three types of standardization:

  • Consistent formats. If data will be compared over time or with data from different systems or functions, using consistent formats is key. One leader mentioned that the first time she realized the importance of this was in trying to pull a dataset for a given timeframe. Since she worked for a global organization, dates were stored differently depending on where in the world they came from. While technology is getting smarter and better able to correct some of these challenges, identifying standards for data formats upfront can save a lot of headache down the line (see the date-normalization sketch after this list).
  • Consistent scales and data types. In situations where data needs to be collected, particularly through surveys or evaluations, consistent scales and data types should be used. Three-point scales vs. four-point scales vs. open-ended answers all affect how easily data can be compared. One leader told us of the struggle he had simply trying to get the entire organization to use one form so that course evaluation information was standard and could be compared.
  • Consistent collection methods. How and when you ask questions, whom you ask, what words you use, who is asking, and the format in which questions are asked can all affect responses. More evolved L&D functions are standardizing some of these things – sometimes with the use of technology – to keep bias from creeping into their data and to ensure consistency.
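The date-format problem lends itself to a simple illustration. In this sketch, each source system is mapped to the date format it is known to emit, and every value is normalized to ISO 8601 on the way in; the source names and formats are hypothetical. Note how the same raw string means two different dates depending on where it came from – exactly the ambiguity the leader described.

    from datetime import datetime

    # Hypothetical mapping from source system to the date format it emits.
    SOURCE_FORMATS = {
        "us_lms": "%m/%d/%Y",      # 04/03/2019 means April 3rd
        "eu_lms": "%d/%m/%Y",      # 04/03/2019 means March 4th
        "apac_survey": "%Y.%m.%d",
    }

    def to_iso(raw, source):
        """Normalize a source-specific date string to ISO 8601 (YYYY-MM-DD)."""
        return datetime.strptime(raw, SOURCE_FORMATS[source]).date().isoformat()

    # The same raw string is a different date depending on its source - the
    # reason to standardize formats before data ever lands in a shared feed.
    assert to_iso("04/03/2019", "us_lms") == "2019-04-03"
    assert to_iso("04/03/2019", "eu_lms") == "2019-03-04"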

How leaders are doing this

Derek Mitchell, while working for a large communications organization, had an interesting solution for gathering consistent data while minimizing the burden of collecting it: he eliminated the traditional survey altogether and replaced it with one simple question: “Describe your experience in one word.”

From that one answer, his team was able to assign a sentiment – positive or negative – and could see, proportionally, who said good things and who said bad things, without biasing responses in any way. He also reduced the tax on the organization by getting rid of the 10-question survey and replacing it with just one question.
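The article does not say how Mitchell’s team assigned sentiment; a small lexicon lookup is one plausible way to do it, sketched below with made-up word lists and responses. A real lexicon (or an off-the-shelf sentiment library) would be far larger.

    from collections import Counter

    # Hypothetical word lists - grow these as unclassified responses come in.
    POSITIVE = {"great", "useful", "engaging", "excellent", "helpful"}
    NEGATIVE = {"boring", "confusing", "slow", "useless", "frustrating"}

    def classify(word):
        """Map a one-word response to a sentiment label."""
        w = word.strip().lower()
        if w in POSITIVE:
            return "positive"
        if w in NEGATIVE:
            return "negative"
        return "unclassified"  # review these by hand and extend the lexicon

    # Example responses; real ones would come from the single-question survey.
    responses = ["Great", "boring", "helpful", "fine"]
    print(Counter(classify(r) for r in responses))
    # Counter({'positive': 2, 'negative': 1, 'unclassified': 1})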

Unfortunately, developing a data and metrics culture within L&D functions is most likely not second nature: it takes work and investment. And it’s often not the sexy part of what we do. But we think that this culture and the ability to consistently collect and analyze data is the first nut L&D functions need to crack. As organizations begin to collect and assess information regularly, they will better understand how employee development is affecting the organization and their options for having impact will increase.

Questions to ask:

  • Are we consistent in how we gather our metrics (i.e., do we use the same scales, gather at the same time, etc.)?
  • Do we look at data over time so that we can draw longitudinal conclusions?
  • To what extent do we make information, metrics, and data available to those who have the power to do something with them (i.e., front-line managers, individuals)?
  • What steps have been taken to standardize how we collect and structure data?
  • How conscious are we of making our metrics and data digestible?
Stacia Garr
Co-Founder & Principal Analyst

Footnotes

  1. Richard Pacher, “Learning Measurement Standards: Consistency Equals Effectiveness,” Brandon Hall, April 2018.