Over the last few months, we have had the opportunity to talk with several learning leaders about their practices, to understand how they were having an impact on the overall business goals of their organizations. While each L&D function necessarily has an impact (and measures that impact) differently, our interviews with learning leaders helped us identify several patterns.
This is the 5th in a series of 7 articles highlighting these patterns. A huge thanks to forMetris for sponsoring this research!
“Metrics aren’t always immediately useful.” – Susie Lee, Degreed (formerly BofA)
One of the reasons so many L&D functions struggle with learning measurement and learning impact is that they have no consistent data. In fact, according to Brandon Hall, only 51% of companies say that they are effective or very effective at measuring formal learning. And even fewer are effective when it comes to measuring informal (19%) and experiential (29%) learning.¹
While these statistics focus on a more traditional way of viewing learning and development, the fact that only 51% of organizations are effectively measuring formal learning – which, by the way, they have complete control over and for which they have complete access to the data – is telling. By and large, L&D functions do not have a data culture. But they could have one.
How should they start? Leaders told us that getting consistent data required two things: patience and standardization.
Be patient – it’s a virtue
Most L&D functions either really struggle to collect data and information on a regular basis or don’t do it at all. At least part of this struggle stems from the practice of focusing on one-time measurements. When L&D functions focus on calculating the ROI or learner satisfaction associated with one course or initiative, the tendency is to collect only the information needed to serve that one purpose.
This focus on point-in-time results means that longitudinal data, interactions, and correlations are hard to come by in many organizations. Interactions and correlations over time provide ongoing insights about what is happening and why. Without consistent L&D data, it is difficult, if not impossible, to understand the impact employee development is having on organizational goals. Understanding this impact is the first step in being able to make intentional decisions about where to go next.
Collecting data over time can be challenging, and the fact that data and metrics may not be immediately useful can add to that challenge. But establishing continuous collection goes a long way in building a data and metrics culture.
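To make the payoff concrete, here is a minimal sketch of the kind of longitudinal question consistent collection makes answerable. Everything in it is hypothetical: the data is synthetic, and the column names (training_hours, quality_score) are placeholders for whatever your organization actually tracks.

```python
# A sketch only: synthetic monthly data standing in for two years of
# consistently collected L&D and performance metrics.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
months = pd.date_range("2023-01-01", periods=24, freq="MS")

df = pd.DataFrame({
    "month": months,
    "training_hours": rng.normal(8, 2, size=24).round(1),
})
# Synthetic outcome, loosely driven by training done two months earlier.
df["quality_score"] = (
    70 + 1.5 * df["training_hours"].shift(2).fillna(8)
    + rng.normal(0, 2, size=24)
).round(1)

# A one-time survey cannot see this; two years of consistent data can:
# does training in one month correlate with outcomes a few months later?
for lag in range(4):
    corr = df["training_hours"].corr(df["quality_score"].shift(-lag))
    print(f"lag {lag} months: r = {corr:.2f}")
```

With point-in-time collection, only the lag-0 number exists; the lagged correlations, where development effects usually show up, require data gathered the same way, month after month.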
In our conversations with leaders, three pieces of advice for how to consistently collect data stood out:
Standardize your data and metrics

The other part of the consistency story is data and metrics standardization. Why? In order to consistently monitor and make good decisions, data and metrics need to be correct and comparable.
Standardization also ensures that L&D data and metrics are consumable by other business functions, by central data analytics teams, and by other technologies and systems. L&D functions should start by identifying any existing data standards their organization may have and adopting them.
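For illustration, one widely adopted standard in the learning space is xAPI (the Experience API), which records every learning event as an actor–verb–object statement. The sketch below builds one such statement; the learner, course URL, and score are hypothetical examples, not a real implementation.

```python
# A minimal, hypothetical xAPI-style statement: one standardized record of
# a learning event that other systems and teams can consume as-is.
import json
from datetime import datetime, timezone

statement = {
    "actor": {"name": "Jane Example", "mbox": "mailto:jane@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/data-culture-101",
        "definition": {"name": {"en-US": "Data Culture 101"}},
    },
    "result": {"score": {"scaled": 0.87}, "completion": True},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(statement, indent=2))
```

Because every record shares the same shape, a central analytics team can join learning data to business data without bespoke translation work for each course or vendor.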
That said, our interviews with L&D leaders indicated that the first challenge was often standardizing data and metrics within their own department. They talked about three types of standardization:
Unfortunately, developing a data and metrics culture within L&D functions is most likely not second nature: it takes work and investment. And it’s often not the sexy part of what we do. But we think this culture, and the ability to consistently collect and analyze data, is the first nut L&D functions need to crack. As organizations begin to collect and assess information regularly, they will better understand how employee development is affecting the organization, and their options for having impact will increase.
Questions to ask yourself:

- Are we consistent in how we gather our metrics (i.e., do we use the same scales, gather at the same time, etc.)? (See the rescaling sketch after this list.)
- Do we look at data over time so that we can draw longitudinal conclusions?
- To what extent do we make information, metrics, and data available to those who have the power to do something with them (i.e., front-line managers, individuals)?
- What steps have been taken to standardize how we collect and structure data?
- How conscious are we of making our metrics and data digestible?
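On the first question above (same scales), here is a minimal sketch of one common fix: mapping ratings gathered on different scales onto a comparable 0–1 range. The scale ranges and values are hypothetical.

```python
# A sketch of rescaling survey items so metrics gathered on different
# scales become comparable.
def rescale(value: float, low: float, high: float) -> float:
    """Map a rating from its native [low, high] range onto [0, 1]."""
    return (value - low) / (high - low)

# e.g., a 4 on a 1-5 satisfaction item vs. a 7 on a 0-10 recommendation item
print(rescale(4, 1, 5))   # 0.75
print(rescale(7, 0, 10))  # 0.7
```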