Learning Impact: Anything New?

March 8th, 2019

Introduction

If you have been following our Learning Impact project, you know that the main premise of this research is that we’re evaluating “learning” in organizations all wrong. So when we conducted a fairly in-depth review of the existing literature on the topic, we were not at all surprised by the state of learning impact – only somewhat disappointed.

We looked at over 50 academic and business articles, reports, and books for this literature review, which has given us a decent understanding of the known world of evaluating learning. This short article will summarize:

  • What we saw
  • What we learned
  • Overall impressions

Word cloud of the learning impact literature: Most prevalent words in literature reviewed.

What we saw

What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described in one word: same. Same trends and themes, based on the same models, with little variation. We have highlighted four of those themes below.

Models, models, models!

Much of the literature focused on established evaluation models. These articles generally fell into three main categories: use cases, suggested improvements, or critiques.

By far the most common model addressed in the research is Kirkpatrick’s, although, judging by the number of articles written on how to use it effectively, applying it well is still a challenge. It was frequently called the “industry standard” (including by Kirkpatrick Partners themselves).1

Articles on Kirkpatrick tended to be fairly passionate, either for or against. While many authors doubled down on the model, we also ran across several articles, like this one or this one, that offer what we consider to be fair critiques of it, including the lack of empirical evidence tying existing learning evaluation models to business results.

Other models, including Phillips’ Chain of Impact, Kaufman’s 5 Levels of Evaluation, Brinkerhoff’s Success Case Method, and Anderson’s Return on Expectation, among others, were also explored. In total, we looked at over 20 models. They are summarized in the table below.

Evaluation Model Summary

 

For each model, we list the author(s) and year, an incredibly simplified set of steps, and a suggestion for further reading.

Kirkpatrick 4 Levels of Training Evaluation (Kirkpatrick, 1976)

Termed “the industry standard” by many of the articles we read, Kirkpatrick’s four levels are widely used to determine learning effectiveness.

  • Reaction
  • Learning
  • Behavior
  • Results

Read more: “The Kirkpatrick Model”
Kaufman’s 5 Levels of Evaluation (Kaufman, Keller, and Watkins, 1994)

Kaufman’s model adapts Kirkpatrick’s original model to include five levels and is used to evaluate a program from the employee’s perspective.

  • Input and process
  • Acquisition
  • Application
  • Organizational output
  • Societal outcomes

Read more: “What Works and What Doesn’t: Evaluation Beyond Kirkpatrick”
Success Case Method (Robert Brinkerhoff, 2006)

Particularly effective in assessing important or innovative programs, this method focuses on the extremes – the most and least successful cases – and examines them in detail.

  • Needs assessment
  • Program plan and design
  • Program operation and implementation
  • Learning
  • Usage and endurance of training
  • Payoff

Read more: “Success Case Method”
Chain of Impact (Jack Phillips, 1973)

Adapts the Kirkpatrick model by adding a fifth step: ROI. The purpose is to translate the business impact of learning into monetary terms so that it can be compared more readily.

  • Reaction
  • Learning
  • Behavior
  • Results
  • ROI

Read more: ROI Institute
Value of Learning Model (Valerie Anderson, 2007)

Consists of a three-stage cycle applied at the organizational level, and is one of the few models that does not necessarily use the course or initiative as the unit of measurement. Anderson also introduced the term “Return on Expectation” as part of her work.

  • Determine current alignment against strategic priorities
  • Use a range of methods to assess and evaluate the contribution of learning, including learning function measures, return on expectation (ROE), ROI, and benchmark and capacity measures
  • Establish the most relevant approaches for your organization

Read more: “A new model of value and evaluation”
CIPP Model (Daniel Stufflebeam, 1973)

The framework was designed as a way of linking evaluation to program decision-making (i.e., making decisions about what happens to the program), and it has a use case for resource allocation and/or cost-cutting measures. It covers the following areas:

  • Context
  • Input
  • Process
  • Product (results)

Read more: “The CIPP Evaluation Model: How to Evaluate for Improvement and Accountability”
UCLA Model (Marvin Alkin and Dale Woolley, 1969)

Five kinds, or need areas, of evaluation, each designed to provide and report information useful for making judgments relative to its category:

  • Systems assessment
  • Program planning
  • Program implementation
  • Program improvement
  • Program certification

Read more: “A Model for Educational Evaluation”
Discrepancy Model (Malcolm Provus, 1966)

Used in situations where there is an understanding that a program does not exist in a vacuum, but instead operates within a complex organizational structure.

Program cycle framework:

  • Design
  • Installation
  • Process
  • Product
  • Cost-benefit

Read more: “The ABCs of Evaluation”
Goal Free Evaluation (Michael Scriven, 1991)

Focuses on the actual outcomes of a program rather than only those goals that were identified up front. Scriven believed that the goals of a particular program should not be taken as a given.

  • Goals and objectives
  • Processes and activities
  • Outcomes

Read more: “The ABCs of Evaluation”
LTEM: Learning Transfer Evaluation Model (Will Thalheimer, 2018)

Designed to help organizations get feedback to build more effective learning interventions and validate results.

  • Tier 8: Effects of Transfer
  • Tier 7: Transfer
  • Tier 6: Task Competence
  • Tier 5: Decision-Making Competence
  • Tier 4: Knowledge
  • Tier 3: Learner Perceptions
  • Tier 2: Activity
  • Tier 1: Attendance

Read more: “The learning-transfer evaluation model: Sending messages to enable learning effectiveness”
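
Looking across the table, most of these models reduce to an ordered set of levels or tiers against which evidence is collected. As a purely illustrative sketch (not prescribed by any of the models above), here is how a team might encode LTEM’s tiers and tag individual pieces of evaluation evidence to them; the tier names come from the table, while the Evidence record and the sample data are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass
from enum import IntEnum


class LTEMTier(IntEnum):
    """LTEM's eight tiers, as listed in the table above (Thalheimer, 2018)."""
    ATTENDANCE = 1
    ACTIVITY = 2
    LEARNER_PERCEPTIONS = 3
    KNOWLEDGE = 4
    DECISION_MAKING_COMPETENCE = 5
    TASK_COMPETENCE = 6
    TRANSFER = 7
    EFFECTS_OF_TRANSFER = 8


@dataclass
class Evidence:
    """A single piece of evaluation evidence, tagged with the tier it demonstrates."""
    description: str
    tier: LTEMTier


# Hypothetical evidence gathered for one program.
collected = [
    Evidence("LMS completion records", LTEMTier.ATTENDANCE),
    Evidence("Post-course satisfaction survey", LTEMTier.LEARNER_PERCEPTIONS),
    Evidence("Scenario-based assessment scores", LTEMTier.DECISION_MAKING_COMPETENCE),
    Evidence("Manager-observed behavior change at 90 days", LTEMTier.TRANSFER),
]

# Where does our evidence actually sit, and how high does it reach?
print(Counter(e.tier.name for e in collected))
print("Highest tier reached:", max(e.tier for e in collected).name)
```

Even this trivial structure makes it easy to ask where a program’s evidence actually sits – which, as the literature we reviewed suggests, is often near the bottom tiers.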

Justification as the goal

Much of the literature reviewed focused on utilizing learning evaluation, measurement, and analytics to either prove L&D’s worth to the organization or to validate L&D’s choices and budget. Words and phrases like “justify” and “show value” were used often.

Interestingly, according to David Wentworth at Brandon Hall Group, the pressure to defend L&D’s decisions and actions appears to be coming from the L&D function itself (44%) rather than from other areas of the business (36%).2 In other words, while business leaders may not be explicitly asking for “proof”, L&D departments most likely feel the need to quantify employee development in order to earn that proverbial seat at the table.

The literature also focused heavily on Return on Investment, or ROI. How-to articles and research in this space continue to attempt to tie the outcomes of a specific program or initiative to financial business results.
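
For reference, the ROI calculation these how-to pieces typically walk through (popularized by Phillips) expresses net program benefits as a percentage of program costs. A minimal sketch with invented numbers – the hard part the literature wrestles with is not the arithmetic but attributing the monetary benefit to the learning in the first place:

```python
# Illustrative only: the figures are invented, and isolating how much of the
# benefit is attributable to the program is the genuinely hard (and debated) step.
program_costs = 120_000      # design, delivery, facilitation, participant time
program_benefits = 180_000   # estimated monetary value of improved performance

benefit_cost_ratio = program_benefits / program_costs                    # 1.50
roi_percent = (program_benefits - program_costs) / program_costs * 100   # 50%

print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"ROI: {roi_percent:.0f}%")
```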

Course-focused

Almost all of the literature we reviewed used the ‘course’ or ‘program’ as the unit of measurement. While several models address the need to take the environment and other variables into account, they appear to do so either to control the entire experience or to isolate the “learning” from everything else.

To date, we have not been able to find any literature that addresses evaluating or measuring continuous learning as we understand it (i.e., individuals utilizing the environment and all resources available to them to continuously develop and improve). We feel that this is a shortfall of the current research and should be addressed.

Finally, the research focused heavily on learning from the L&D function’s point of view. Few authors appear to be looking at the field of learning evaluation/measurement/analytics from a holistic viewpoint. We expected to see more literature addressing L&D’s role in delivering the business strategy, or at least in providing information that could help other functions make decisions.

Aged ideas

While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and the ideas behind them are equally dated.

On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees, to the tools available to develop them, to the opportunities we have to measure. Many articles focused on shoehorning the new challenges L&D functions face into old constructs and models.

We realize that this last finding may spark a heated conversation – to which we say, GOOD! It’s time to have that conversation.

5 articles you should read

Of the literature we reviewed, several pieces stood out to us. Each of the following authors and their work contained information that we found useful and that changed our thinking. We learned from their perspectives and encourage you to do the same.

Article 1:  Making an Impact3

Laura Overton and Dr. Genny Dixon at Towards Maturity

“96% are looking to improve the way they gather and analyze data on learning impact. However, only 17% are doing it.”

Highlights:

  • Points out areas where L&D functions measure and what is important to them
  • Provides an interesting discussion on evidence to understanding impact
  • Shows compelling data about benefits of those who measure vs. those who guess
  • Gives some good hints for getting started

Towards Maturity’s 2016 report provides some interesting statistics about the world of learning metrics / measurement / analytics / evaluation. It offers a sound platform for continued research on learning measurement and evaluation, as it summarizes how learning leaders are currently thinking about the space.

Article 2:  Human Capital Analytics @Work4

Patti Phillips and Rebecca Ray at The Conference Board

“Aspirational organizations use analytics to justify actions. Experienced organizations build on what they learned at the aspirational level and use analytics to guide actions.”

Highlights:

  • Outlines an analytics maturity model for organizations to gauge their evolution when it comes to using data
  • Provides a good discussion on “converting data to money”, or utilizing data to provide a comparison of cost savings
  • Identifies four key elements to help organizations make analytics work: Frameworks, Process Models, Guiding Principles, and Capability, all of which should be considered when putting together a learning strategy
  • Recounts some good examples and case studies

This article broadens the discussion about learning measurement to people analytics in general – something that L&D functions should be considering as they revamp their measurement and evaluation methods.

Article 3: The Learning-Transfer Evaluation Model5

Will Thalheimer at Work-Learning Research, Inc.

“For too long, many in the learning profession have used these irrelevant metrics as indicators of learning.”

Highlights:

  • Honest (yet biting) assessment of the current 4-level models and their success to this point in time
  • Section about the messages that measurement can send
  • Discussion on measuring negative impact, as well as positive impact
  • Introduction of the first new model for learning evaluation in about 10 years

Will addresses several points that have evolved our thinking. On top of that, Will is a witty writer who is easy to read and downright entertaining.

Article 4: Making data analytics work for you – instead of the other way around6

Helen Mayhew, Tamin Saleh, and Simon Williams

“Insights often live at the boundaries.”

Highlights:

  • Emphasizes the importance of focusing on “purpose-driven” data, or data that will help you meet your specific purpose
  • Introduces the idea that large differences can come from exploiting and amplifying incrementally small improvements
  • States that incomplete information is not useless and should not be treated as garbage – it has value and can be essential in helping people connect the dots
  • Provides a good discussion on using feedback loops instead of feedback lines

This article addresses data analytics in general, but provides several applicable points that L&D departments can incorporate.

Article 5: Leading with Next-Generation Key Performance Indicators7

Michael Schrage and David Kiron

“Measurement Leaders look to KPIs to help them lead — to find new growth opportunities for their company and new ways to motivate and inspire their teams.”

Highlights:

  • Provides a decent discussion on Key Performance Indicators and what they currently mean in organizations
  • Points to Chief Marketing Officers and their increasing accountability for growth-oriented objectives (we think CLOs and L&D in general are close behind)
  • Has an excellent discussion on leading versus lagging indicators, and the importance of both in “measuring”
  • Recounts several good case studies that helped us think differently about what a KPI is and how it can be used

We found this article eye-opening. While it is not geared specifically to “learning”, it provides several adaptable ideas that we feel will be important for next-generation learning measurement and evaluation.

Bonus: 4 Measurement Strategies That Create the Right Incentives for Learning8

Grovo

“As human beings, we are compelled to adapt our behavior to the metrics we are held against.”

Highlights:

  • Has a great discussion on how even the act of measuring learning can be a motivator
  • Introduces several non-traditional ways to “measure” learning
  • Makes the point that measurement strategy should be part of the learning strategy itself, not just a way to measure its effectiveness

Yes, we know it’s a blog, and yes, we realize it was written by a vendor. But this piece made some interesting points – particularly, how what we measure impacts the business.

Overall Impressions

If we were to sum up all we read into a short statement, it would be this: L&D has a long way to go. That said, we are also hopeful. As L&D functions further integrate into the rest of the business, as tools for analytics and measurement get better, and as we begin to define new models that incorporate new ways of learning and new environmental variables, we can imagine a world, in the not too distant future, where we finally – after more than 50 years of trying – maybe crack this nut.

We would love to hear what you think – what did we miss? What else should we be looking at? Comment below.

Priyanka Mehrotra
Research Lead at RedThread Research

Footnotes

  1. Zent, Carly Rae. “Overcoming the Real-World Challenges of Evaluating Learning Success with Kirkpatrick’s Model.” Training Industry.
  2. Wentworth, David. “High-Performance Learning Measurement Essentials.” Training Magazine, 2018.
  3. Overton, L., and Dixon, G. “Making an Impact: How L&D leaders can demonstrate value.” Towards Maturity, June 2016.
  4. Phillips, P., and Ray, R. “Human Capital Analytics @Work.” The Conference Board, 2015.
  5. Thalheimer, W. “The learning-transfer evaluation model: Sending messages to enable learning effectiveness.” Work-Learning Research, 2018. Available at https://WorkLearning.com/Catalog
  6. Mayhew, H., Saleh, T., and Williams, S. “Making data analytics work for you – instead of the other way around.” McKinsey Digital, October 2016.
  7. Schrage, M., and Kiron, D. “Leading with Next-Generation Key Performance Indicators.” MIT Sloan Management Review, June 2018.
  8. Grovo. “4 Measurement Strategies That Create the Right Incentives for Learning.” Grovo Blog, 2017.