
Learning Measurement: Having Impact, Not Just Showing It

Posted on Thursday, July 18th, 2019 at 5:18 AM    

Six decades and not much change

We have been talking about learning measurement for more than six decades. Ever since Kirkpatrick came up with his 4-level model, learning & development (L&D) functions have been trying to understand the relationship between what they do and business outcomes. Since that time, over 20 models have been developed to help L&D functions support their claims of impact.

Lately, we have become really interested in a variation of that question: How do what we measure and how we measure it affect the organization? The answer surprised us. We approached the question with a “there’s got to be a right way” mindset, but what we found is quite different. In fact, through this study, we have made two large realizations:

Realization 1: There has never been a more important time to get learning impact right.

Let’s start with a short history lesson. At the beginning of the first industrial revolution, we started using assembly lines to create products. These lines were very inefficient and fraught with problems: people learned only on the job, no one had a clear idea of what their roles were, and the workloads were incredibly uneven.

Then along came a guy named Frederick Winslow Taylor. Taylor was a mechanical engineer who applied engineering principles to the lines in order to clean them up and make them more efficient.

One of the first things he did was to establish processes and divide up the work. Then he trained employees to do that work in efficient ways. He, in essence, turned a chaotic malfunctioning system involving both individuals and equipment into one cohesive well-functioning machine.

This made Taylor famous. He wrote a book on these principles called The Principles of Scientific Management and is considered one of the world’s first management consultants.

As industry progressed, these principles were applied to types of work beyond assembly lines. They helped organizations grow and develop by structuring their workforce and ensuring that everyone knew what their role was and how to do it most efficiently.

Today, organizations continue to define roles, add people, train employees the ‘correct’ way, eliminate wasted time and effort, and gain efficiencies using these principles. This focus on efficiency has worked for a really long time. The US, for example, had fairly consistent gains in productivity from when we started measuring in 1947 through 2007.

However, no system or machine can be made infinitely efficient. At some point, any system reaches its optimal efficiency, and additional effort won’t yield very big gains. We’re seeing that now in the market.

 


Image 1: US Productivity is Falling Off – Possibly Because of a Focus on Efficiency | Source: Data from the US Bureau of Labor Statistics

 

Image 1 shows the productivity trend line from 1947 to 2007 (blue), the trend line from 2001 through 2007 (orange), and actual productivity gains from 2007 to 2017. As you can see, our actual productivity growth is the lowest it has been since we started measuring.

So efficiency is not getting us as far as it used to, and we need to start rethinking our reliance on it for business gains. Interestingly, more evolved organizations already are. A few months ago, we stumbled upon some research done by IBM. The study compares the business focus areas of industry outperformers (shown in orange in Image 2) with those of all other companies in their industries (shown in gray). The results are telling.

 


Image 2: Significant Outperformers Focus on Different Things | Source: IBM Global Study

 

While the rest of the industry is focused on improving operational efficiency, outperformers focus on items that are inherently messy and inefficient, such as developing new products and services, expanding into new markets, and developing new distribution channels. To do that, they need workforces that can think outside of the proverbial box, take calculated risks, and think critically.

So what does this have to do with L&D and its impact? For starters, we’re dealing with a completely different world. When Taylor basically invented the L&D department, he did so to ensure that people were being trained to do a certain role in the most efficient way possible.

But L&D’s traditional methods and measures are not going to get it done. Most organizations cannot wait (and most likely don’t care) for L&D functions to calculate an ROI on a program or initiative, determine whether it was effective, and then adjust; it’s too late by then. Organizations need L&D functions to react more quickly and with flexibility – which requires a radically different mindset than the one currently in vogue.

We think that L&D functions everywhere need a chakabuku – a swift, spiritual kick to the head that alters their reality forever.1 L&D functions desperately need to rethink their measurement strategies and understand the implications that they have – not just for employee development, but for organizational performance as well.

Realization 2: Learning Impact is Hard.

We were hoping that our research efforts would point to definitive things that all organizations should do. What we found was just the opposite. We talked to over 40 really smart leaders doing really innovative things to show – and have – impact. But none of them were doing exactly the same things.

Some of these leaders identified strongly with the idea of ROI. Some lived and died by Kirkpatrick, Phillips, or some other model. Some focused heavily on drawing a straight line from what they did to business results. But all of them were having at least a modicum of success. As it turns out, how learning impacts an organization is specific to that organization, its goals, and its resources.

Fortunately, patterns exist. While metrics and tactics should be aligned to the organization, business goals, and employee development needs, our literature review, roundtables, and interviews led us to 7 patterns that more evolved organizations follow for moving beyond simply showing impact to actually having impact.

 


Image 3: Learning Impact Patterns Found in More Evolved L&D Functions | Source: RedThread Research, 2019

 

Over the next 7 weeks, we’ll be sharing these patterns, examples of what organizations are doing, and questions you can ask yourself about your own learning impact practices.

Next article in series: Learning Impact: Tying to Business Goals.


Learning Impact: Anything New?

Posted on Friday, March 8th, 2019 at 1:55 AM    

Introduction

If you have been following our Learning Impact project, you know that the main premise of this research is that we’re evaluating “learning” in organizations all wrong. Therefore, in conducting a fairly in-depth review of the existing literature on the topic, we were not at all surprised by the state of learning impact – only somewhat disappointed.

We looked at over 50 academic and business articles, reports, and books for this literature review, which has given us a decent understanding of the known world of evaluating learning. This short article will summarize:

  • What we saw
  • What we learned
  • Overall impressions


Word cloud of the learning impact literature: Most prevalent words in literature reviewed.

What we saw

What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described in one word: same. The same trends and themes, based on the same models, with little variation. We have highlighted four of those themes below.

Models, models, models!

Much of the literature focused on established evaluation models. These articles generally fell into three main categories: use cases, suggested improvements, or critiques.

By far, the most common model addressed in the research is Kirkpatrick’s, although, judging by the number of articles written on how to use it effectively, applying it well is still a challenge. It was frequently called the “industry standard” (including by Kirkpatrick Partners themselves).1

Articles on Kirkpatrick appeared to be fairly passionate, either for or against. While many authors doubled down on it, we also ran across several articles, like this one or this one, that offer what we consider to be fair critiques of the model, including that there is a lack of empirical evidence for existing learning evaluation models and their tie to business results.

Other models, including Phillips’ Chain of Impact, Kaufman’s 5 Levels of Evaluation, Brinkerhoff’s Success Case Method, and Anderson’s Return on Expectation, among others, were also explored. In total, we looked at over 20 models; a number of them are summarized below.

Evaluation Model Summary

 

For each model, we list the author(s), the year, incredibly simplified steps, and where to read more.

Kirkpatrick 4 Levels of Training Evaluation (Kirkpatrick, 1976)
Termed “the industry standard” by many of the articles we read, Kirkpatrick’s four levels are used widely to determine learning effectiveness.
  • Reaction
  • Learning
  • Behavior
  • Results
Read more: “The Kirkpatrick Model”

Kaufman’s 5 Levels of Evaluation (Kaufman, Keller, Watkins, 1994)
Kaufman’s model adapts Kirkpatrick’s original model to include 5 levels and is used to evaluate a program from the employee’s perspective.
  • Input and process
  • Acquisition
  • Application
  • Organizational output
  • Societal outcomes
Read more: “What Works and What Doesn’t: Evaluation Beyond Kirkpatrick”

Success Case Method (Robert Brinkerhoff, 2006)
Particularly effective in assessing important or innovative programs. It focuses on looking at the extremes – the most successful and least successful cases – and examining them in detail.
  • Needs assessment
  • Program plan and design
  • Program operation and implementation
  • Learning
  • Usage and endurance of training
  • Payoff
Read more: “Success Case Method”

Chain of Impact (Jack Phillips, 1973)
Adapts the Kirkpatrick model by adding a fifth step: ROI. The purpose is to translate the business impact of learning into monetary terms so that it can be compared more readily.
  • Reaction
  • Learning
  • Behavior
  • Results
  • ROI
Read more: ROI Institute

Value of Learning Model (Valerie Anderson, 2007)
Consists of a three-stage cycle applied at the organizational level. One of the few models that does not necessarily use the course or initiative as the unit of measurement. Anderson also introduced the term “Return on Expectation” (ROE) as part of her work.
  • Determine current alignment against strategic priorities
  • Use a range of methods to assess and evaluate the contribution of learning, including learning function measures, ROE, ROI, and benchmark and capacity measures
  • Establish the most relevant approaches for your organization
Read more: “A new model of value and evaluation”

CIPP Model (Daniel Stufflebeam, 1973)
A framework designed as a way of linking evaluation to program decision-making (i.e., making decisions about what happens to the program). It has a use case for resource allocation and/or cost-cutting measures. It covers the following areas:
  • Context
  • Input
  • Process
  • Product (or results)
Read more: “The CIPP Evaluation Model: How to Evaluate for Improvement and Accountability”

UCLA Model (Marvin Alkin and Dale Woolley, 1969)
Five kinds, or need areas, of evaluation – each designed to provide and report information useful for making judgments relative to the categories:
  • Systems assessment
  • Program planning
  • Program implementation
  • Program improvement
  • Program certification
Read more: “A Model for Educational Evaluation”

Discrepancy Model (1966)
Used in situations where there is an understanding that a program does not exist in a vacuum, but instead within a complex organizational structure. Its program cycle framework covers:
  • Design
  • Installation
  • Process
  • Product
  • Cost-benefit
Read more: “ABCs of Evaluation”

Goal Free Evaluation (Michael Scriven, 1991)
Focuses on the actual outcomes of a program rather than only the goals that were identified up front. Scriven believed that the goals of a particular program should not be taken as a given.
  • Goals and objectives
  • Processes and activities
  • Outcomes
Read more: “The ABCs of Evaluation”

LTEM: Learning Transfer Evaluation Model (Will Thalheimer, 2018)
Designed to help organizations get the feedback they need to build more effective learning interventions and validate results.
  • Tier 8: Effects of Transfer
  • Tier 7: Transfer
  • Tier 6: Task Competence
  • Tier 5: Decision-making Competence
  • Tier 4: Knowledge
  • Tier 3: Learner Perceptions
  • Tier 2: Activity
  • Tier 1: Attendance
Read more: “The learning-transfer evaluation model: Sending messages to enable learning effectiveness.”

Justification as the goal

Much of the literature reviewed focused on utilizing learning evaluation, measurement, and analytics to either prove L&D’s worth to the organization or to validate L&D’s choices and budget. Words and phrases like “justify” and “show value” were used often.

Interestingly, according to David Wentworth at Brandon Hall Group, the pressure to defend L&D’s decisions and actions appears to be coming from the L&D function itself (44%) rather than from other areas of the business (36%).2 This suggests that, while business leaders may not be explicitly asking for “proof,” L&D departments most likely feel the need to quantify employee development in order to have that proverbial seat at the table.

The literature also focused heavily on Return on Investment, or ROI. How-to articles and research in this space continue to attempt to tie the outcomes of a specific program or initiative to financial business results.
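
To make that concrete: the ROI calculations these how-to articles describe generally reduce to the same basic arithmetic popularized by Phillips, with most of the difficulty hiding in how benefits are isolated and monetized. A minimal sketch, using purely hypothetical dollar figures:

\[
\text{ROI}(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

For example, a program that costs \$50{,}000 to design and deliver and is credited with \$65{,}000 in monetized benefits would report an ROI of \((65{,}000 - 50{,}000)/50{,}000 \times 100 = 30\%\). The contested part is rarely the division; it is attributing that \$65{,}000 to the program rather than to everything else happening in the business.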

Course-focused

Almost all of the literature we reviewed utilized the ‘course’ or ‘program’ as the unit of measurement. While several models address the need to take into account environment and other variables, they appear to do so in order to either control the entire experience or be able to isolate the “learning” from everything else.

To date, we have not been able to find any literature that addresses evaluating or measuring continuous learning as we understand it (i.e., individuals utilizing the environment and all resources available to them to continuously develop and improve). We feel that this is a shortfall of the current research and should be addressed.

Finally, the research focused heavily on learning from the L&D function’s point of view. Few appear to be looking at the field of learning evaluation/measurement/analytics from a holistic viewpoint. We expected to see more literature addressing L&D’s role in delivering the business strategy, or at least providing information to other functions that could help them make decisions.

Aged ideas

While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are just as dated.

On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees, to the tools available to develop them, to the opportunities we have to measure. Many articles focused on shoehorning the new challenges L&D functions face into old constructs and models.

We realize that this last finding may spark a heated conversation – to which we say, GOOD! It’s time to have that conversation.

5 articles you should read

Of the literature we reviewed, several pieces stood out to us. Each of the following authors and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.

Article 1:  Making an Impact3

Laura Overton and Dr. Genny Dixon at Towards Maturity

“96% are looking to improve the way they gather and analyze data on learning impact. However, only 17% are doing it.”

Highlights:

  • Points out areas where L&D functions measure and what is important to them
  • Provides an interesting discussion on evidence to understanding impact
  • Shows compelling data about benefits of those who measure vs. those who guess
  • Gives some good hints for getting started

Towards Maturity’s 2016 report offers some interesting statistics about the world of learning metrics / measurement / analytics / evaluation. It provides a sound platform for continued research on learning measurement and evaluation, as it gives a good summary of how learning leaders are currently thinking about the space.

Article 2:  Human Capital Analytics @Work4

Patti Phillips and Rebecca Ray at The Conference Board

“Aspirational organizations use analytics to justify actions. Experienced organizations build on what they learned at the aspirational level and use analytics to guide actions.”

Highlights:

  • Outlines an analytics maturity model for organizations to gauge their evolution when it comes to using data.
  • Provides a good discussion on “converting data to money”, or utilizing data to provide a comparison of cost savings
  • Identifies four key elements to help organizations make analytics work: Frameworks, Process Models, Guiding Principles, and Capability, all of which should be considered when putting together a learning strategy
  • Recounts some good examples and case studies

This article broadens the discussion about learning measurement to people analytics in general – something that L&D functions should be considering as they revamp their measurement and evaluation methods.

Article 3: The Learning-Transfer Evaluation Model5

Will Thalheimer at Work-Learning Research, Inc.

“For too long, many in the learning profession have used these irrelevant metrics as indicators of learning.”

Highlights:

  • Honest (yet biting) assessment of the current 4-level models and their success to this point in time
  • Section about the messages that measurement can send
  • Discussion on measuring negative impact, as well as positive impact
  • Introduction of the first new model for learning evaluation in about 10 years

Will addresses several points that have evolved our thinking. On top of that, Will is a witty writer who is easy to read and downright entertaining.

Article 4: Making data analytics work for you – instead of the other way around6

Helen Mayhew, Tamim Saleh, and Simon Williams

“Insights often live at the boundaries.”

Highlights:

  • Emphasizes the importance of focusing on “purpose-driven” data, or data that will help you meet your specific purpose.
  • Introduces the idea that large differences can come from exploiting and amplifying incrementally small improvements.
  • States that incomplete information is not useless and should not be treated as garbage – it has value and can be essential in helping people connect the dots
  • Provides a good discussion on using feedback loops instead of feedback lines

This article addresses data analytics in general, but provides several applicable points that L&D departments can incorporate.

Article 5: Leading with Next-Generation Key Performance Indicators7

Michael Schrage and David Kiron

“Measurement Leaders look to KPIs to help them lead — to find new growth opportunities for their company and new ways to motivate and inspire their teams.”

Highlights:

  • Provides a decent discussion on Key Performance Indicators and what they currently mean in organizations
  • Points to Chief Marketing Officers and their increasing accountability for growth-oriented objectives (we think CLOs and L&D in general are close behind).
  • Has an excellent discussion on leading versus lagging indicators, and the importance of both in “measuring”.
  • Recounts several good case studies that helped us think differently about what a KPI is and how it can be used

We found this article eye-opening. While it is not geared specifically to “learning”, it provides several adaptable ideas that we feel will be important for next-generation learning measurement and evaluation.

Bonus: 4 Measurement Strategies That Create the Right Incentives for Learning8

Grovo

“As human beings, we are compelled to adapt our behavior to the metrics we are held against.”

Highlights:

  • Has a great discussion on how even the act of measuring learning can be a motivation.
  • Introduces several non-traditional ways to “measure” learning
  • Makes the point that measurement strategy should be a part of the learning strategy, not just a way to measure its effectiveness.

Yes, we know it’s a blog, and yes, we realize it was written by a vendor. But this piece made some interesting points – particularly, how what we measure impacts the business.

Overall Impressions

If we were to sum up all we read into a short statement, it would be this: L&D has a long way to go. That said, we are also hopeful. As L&D functions further integrate into the rest of the business, as tools for analytics and measurement get better, and as we begin to define new models that incorporate new ways of learning and new environmental variables, we can imagine a world, in the not too distant future, where we finally – after more than 50 years of trying – maybe crack this nut.

We would love to hear what you think – what did we miss? What else should we be looking at? Comment below.
