Learning Tech Ecosystems: Best of Breed, Employee Experience, and Integration
Posted on Wednesday, June 12th, 2019 at 7:24 PM
In the second half of last year, we did a fairly in-depth exploration of the learning technology vendor market. From that study, we learned that the majority of vendors (about 60%) are actually point solutions – meaning that they intentionally focus on one or two functionalities and try to do them better than anyone else.
Interestingly, when we ask these vendors where and how they fit with other technology vendors in the space, the majority of them can articulate it clearly: they know what they do well and how they need to interact with other technologies in order to serve the needs of their business and employees.
Given the rise in point solutions and their increasing ability to integrate and think holistically, we were surprised by the dearth of credible literature on how to create a learning technology ecosystem. Pickings are slim. We found ourselves needing to expand our search beyond just learning technology ecosystems to include things like business technology stacks, the future of learning, and the broader learning technology landscape. Doing this, we were able to round up 54 articles. The word cloud to the right identifies the topics we heard most. The larger the word, the more we saw it.
From those 54 articles, we were able to identify several themes that reflect current and near-future thinking and to identify 5 articles that we think push the thinking beyond the pedestrian.
What we saw:
Yes, technology has changed the way learning and training are delivered. It has also changed the way people communicate, process, and share knowledge. Unsurprisingly, the literature we reviewed highlights the need for L&D functions to adapt and design for the current needs of the employee. Surprisingly though, a lot of what we read is written by vendors and solution providers, which brings us to our first theme:
The discussion of ‘Platform vs. Best of Breed’
A number of articles we reviewed focus on the choice between buying a platform or suite solution and assembling a best-of-breed solution from the technologies available.
If the purpose is to create a learning experience built around the needs of the employee and tailored to the company’s unique problems, then the answer is simple: own the digital process and choose the technologies that are right for you.
As Adam Hardwood mentions in his article, buying a single-solution LMS might be simple on the surface, but building an ecosystem “requires performance conversations, a clear purpose, in-house maintenance, and the development of digital know-how,” which leads to greater employee engagement and solves real problems for the business and its people.1
The conversation needs to shift from ‘why’ and ‘what’ to ‘when’ and ‘how.’ L&D functions need to think beyond deciding between a platform or building a technology ecosystem. There is enough evidence to strongly suggest that they aren’t mutually exclusive.
The literature suggests that learning technology vendors are thinking more consciously and intentionally about how to design an ecosystem that supports both business objectives and employee needs. That said, there is a lack of case studies or shared experiences on how organizations are approaching building an ecosystem, which stakeholders are involved, what barriers they face in designing one, and what budgets they are allocating.
New technologies & better integration make ecosystems more viable than they've ever been, but there are challenges
New tools and technologies are undoubtedly getting easier to integrate into existing systems, which is providing all kinds of new opportunities. Open APIs, plug-ins, advanced analytics, and tools like AI allow multiple solutions to work together in a flexible and interoperable manner and allow data and information to flow seamlessly.
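To make the interoperability point concrete: one widely used pattern in learning tech is to exchange learner activity as xAPI statements that any conformant Learning Record Store (LRS) can accept. The sketch below is purely illustrative and not drawn from any vendor we reviewed; the endpoint, credentials, learner, and course identifiers are all hypothetical.

```python
import requests

# Hypothetical Learning Record Store (LRS) endpoint and credentials.
LRS_URL = "https://lrs.example.com/xapi"
AUTH = ("lrs_key", "lrs_secret")

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
statement = {
    "actor": {"name": "Sample Learner", "mbox": "mailto:learner@example.com"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/onboarding-101",
        "definition": {"name": {"en-US": "Onboarding 101"}},
    },
}

# Push the record to the LRS; other tools in the ecosystem can then read it.
response = requests.post(
    f"{LRS_URL}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
```

Because the statement format is standardized, the same record can feed an analytics platform, a skills tool, or a reporting dashboard without a bespoke connector for every pairing.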
That said, the literature also points out challenges. For example, there are a ridiculous number of solutions and new technologies doing one or two things incredibly well. While this allows organizations to find exactly what they’re looking for, it also makes for a really crowded market. For organizations without a clear idea of the business objectives and employee needs, it can be a difficult market to maneuver.
There’s also the question of what exactly needs to be integrated. David Wentworth at Brandon Hall tells us that while integration capabilities are one of the top three criteria organizations have for their tech providers, “customizations can often break, causing the integrations to fail” or malfunction.2 Technology, process-related variables, and the business value have to be considered when looking to integrate.
The focus on employee experience
A global 2018 study of 500 CHROs found that 83% of organizational leaders believe a positive employee experience is crucial to the organization’s success.3 There is a general consensus that we need to shift to a more balanced approach that takes into account employee as well as business needs.
A significant portion of the literature reviewed emphasizes the importance of tying learning to employee experience, placing the employee at the center of it, and designing initiatives that fulfill their needs and wants. Most articles cite employee experience as the reason to move to a learning technology ecosystem strategy, as it does more to enable employees to access learning whenever, however, and in whatever form they may need to.
However, we think these articles may be missing a larger discussion about how ecosystems are intrinsically connected to the people’s experience. Authors (such as Lynda Gratton and Adam Hardwood) highlight the importance of creating a seamless and ‘frictionless’ employee experience by anticipating employee needs and supporting and guiding them with the help of the right digital tools.
The discussion needs to shift its focus from technology to first identifying the needs and then using technology to fulfill those needs, which brings us to our next point.
Learning Tech Ecosystems are broader than just learning tech
A number of articles that we reviewed touch on how learning ecosystems fit into larger business ecosystems. For an ecosystem to be agile, mobile, and organic, it needs to be aligned with a business model that supports it. That means that it should be integrated not just within the L&D department but also with the business tools and technology that share knowledge and data. This means L&D functions should be looking beyond the learning technology they buy and should include existing business technology that can be leveraged for learning as well.
A few articles also stress the importance of understanding how these technologies intersect with non-tech systems and processes within the organization. Learning clearly takes place outside of technology; understanding what the intersections are between the tech and the learning can give L&D functions additional levers to influence learning behavior.
Finally, there is also a point to be made about the problem of over-focusing on technology. A report by PwC states that more than 50% of employees surveyed believe technology is taking them away from human interaction.4 And while tools and technology are a means to enable people to move in and out of learning, an ecosystem comprises other equally important components: the people, the processes, and the data.
Ecosystems can help orgs and employees deal with uncertainty
One of the unexpected gems of this literature review is a small but valid discussion on uncertainty. Now, more than ever, organizations are faced with the need to be agile and adaptive. Currently, technology is seen as an enabler for this.
While the articles addressing uncertainty are not specifically learning-focused, we think the conversation fits nicely; learning technology ecosystems allow for greater adaptability for both employees and organizations.
For employees, a well-thought-out ecosystem can allow them the space to come up with new ideas, seek multiple sources to satisfy their curiosity and needs, and move in and out of the learning environment seamlessly. In his article, Randall White says that ecosystems allow leaders to give people free space in which to develop themselves by trying out new ideas and having alternative learning experiences. This helps individuals manage uncertainty, perform better, and become more agile.5
For organizations, an ecosystem approach to learning technology allows flexibility to adjust where necessary; technologies can be plugged or unplugged depending on the needs of the business without huge tear-ups to the organization. It also provides opportunities to experiment and test.
What caught our attention:
Of the literature we reviewed, we found several articles that spoke to us. These articles were chosen because they offer insight into the idea of learning ecosystems and add something unique to the conversation.
Removing the “Platform” From Learning Platforms: The Learning Ecosystem
Michael D. Croft
"A learner doesn’t want content; they want knowledge, competency and skill."
This article helps readers better grasp the idea of a learning ecosystem in which all the moving parts work as one unit, tool creators and consumers are the community, and the environment is the learning context. The new ecosystem, capable of evolving and accommodating new capabilities, can plug into existing systems and embed itself into the structure.
Highlights:
- Learning platforms are primarily file systems and repositories with passive forms of delivery.
- They are ill-suited to a world that needs more proactive, agile, and adaptive solutions.
- The modern learning ecosystem, instead, is organic and adapts itself to learning moments whenever and wherever they occur.
Who's Building the Infrastructure for Lifelong Learning?
Lynda Gratton
"Anticipation is key to managing a working life."
This MIT Sloan Review article talks about the evolving nature of work and the need for lifelong learning. The anticipation of how jobs and roles may change and morph acts as a motivator to prepare for the future through learning.
Highlights:
- The “three-stage life,” comprising three distinct periods of full-time education, full-time work, and full-time retirement, no longer applies, as people continue to work and learn even after crossing the traditional retirement age.
- A more future-proofed concept is a “multistage life,” in which learning and education are distributed across the whole of a lifetime.
- Achieving lifelong learning requires involvement from multiple stakeholders, including educators, governments, and corporations.
The Organization of The Future
Allison Bailey, Martin Reeves, and Kevin Whitaker
“Ecosystems cannot be successfully managed with deliberate planning and control.”
This report speaks to the need to unlock the full potential of AI and humans through fundamental organizational innovation in order to be successful in the coming decade. Leaders will need to reinvent the enterprise as a next-generation learning organization by integrating technologies for seamless learning, using human cognition for high-level activities, nurturing broader ecosystems, rethinking leadership and redesigning the human-machine relationship.
Highlights:
- Organizations need to not only automate but to also “autonomize” significant parts of their businesses.
- Humans should increasingly focus their efforts on higher-level activities such as causal inference (“why is it the case”) or counterfactual thinking (“what is not the case but could be”).
- Combining the comparative advantages of machines and humans will enable the organization to learn on an expanded range of timescales—faster and slower.
- Ecosystems cannot be successfully managed with deliberate planning and control. Instead, organizations need to be adaptive in order to respond to signals that emerge from the ecosystem.
- The new way of designing and operating organizations will require managers and leaders to focus on several new challenges, such as developing governance principles for technology, harnessing continuous learning capabilities, and leading ecosystems and an adaptive organization.
The End of Average
Todd Rose
“…is worse than useless, in fact, because it creates the illusion of knowledge, when in fact the average disguises what is most important about the individual.”
This book addresses the myth of average — and how no one actually is average. It walks through several examples of how our society designs for the ‘average’, but when the ‘average’ is applied, it doesn’t actually serve anyone. For us, this book did two things: first, it opened our minds to a myth that L&D has been perpetuating from the beginning; second, it made us wonder if ecosystems — more than platforms — can help us side-step the tendency toward serving the non-existent “average” employee.
Highlights:
- The history of “average” isn’t as long as we are led to believe, and the study of individuality is gaining ground.
- Rose makes his point with several case studies, including cockpits and the average Air Force pilot, Norma and the average woman, and a study of how individuals learn and the myth of the average career path.
- Applying individuality, as opposed to designing for the average, in business can produce results just as good as or better than thinking about the average.
Uncertainty: Learning's Final Frontier
Randall White
“Uncertainty creates chaos, but the answer is not to inflict order.”
This article highlights some of the philosophical questions regarding the concept of uncertainty and the benefits of chaos in the workplace. Learning leaders can play a crucial role in helping employees and organizations prepare for and understand ambiguity and uncertainty by developing ecosystems that provide the necessary environment.
Highlights:
- The growing complexity that leaders and organizations face has been one of the top factors that negatively impact an organization’s performance.
- In the face of uncertainty, learning can play an important role — that of engaging people within organizations.
- Uncertainty and chaos do not necessarily require order and control. Allowing for freedom and space to experiment, collaborate, try new ideas, and experience alternative learning can help develop people and prepare them for the future.
Overall impressions
As we mentioned above, there is a lack of shared knowledge in the space about how to build and design successful modern ecosystems. This is why we believe our research is well-timed and can provide some much-needed clarity.
Perhaps less surprisingly, the few articles that we did find on the topic are written by vendors who are thinking about technology integrations and applications in a more holistic manner as they develop their solutions. To that end, we also want to highlight a few critical articles that are authored by vendors, who are also our sponsors, as well as others in the space. We hope you find them as informative as we did.
- "Building A Smarter Learning Ecosystem." LinkedIn, JD Dillion, 2019.
- “The Innovator’s Guide to Learning Technology.” Todd Tauber, April 5, 2019.
- “How To Develop A Learning Ecosystem Strategy.” Tim Dickinson, 2018.
- "The Rise Of The Learning Ecosystem- And What It Means For Learning Technologies." Lumesse Learning, Mark Probert February 22, 2018.
Where Are We in the Great Performance Management Experiment?
Posted on Thursday, May 23rd, 2019 at 10:42 AM
It has been a tad shy of a decade since organizations began redesigning their approaches to performance management (PM). While previous approaches focused mainly on top-down, annual, staid processes, organizations now have to go beyond simply managing performance: they need to enable it. More specifically, organizations need practices that motivate, engage, and develop employees through a more collaborative, dynamic, and personalized process.
Yet, with all the changes made to PM, people are still unhappy, unmotivated, and disengaged. In addition, practices hailed as innovative and forward-thinking haven’t proven to be the “cure-all” they were touted as.
So, the question remains – Where are we in the great PM experiment?
We have partnered with Glint to answer this question and looked at 40 academic and business articles, reports, and books for this literature review.
What we saw in the literature
Not surprisingly, the literature shows a general consensus that the traditional annual PM process isn’t enough. As it turns out, most traditional processes don’t drive engagement, often don’t encourage development, and don’t focus on the employee experience. While the research recognizes that some of the more traditional aspects of PM are still very necessary, organizations have experimented with ways to better engage employees in the process.
And experiment they have. We found literature on everything from completely ditching the performance review, to facilitating continuous conversations, to adaptable goals. And while the sentiment is right, we think the literature over-emphasizes process (i.e., the nuts and bolts of how we conduct performance management and its individual components) and under-emphasizes the changing relationship between employee and employer (manager and above) and what that means for performance. Specifically, four general themes emerged:
Traditional approaches are no longer appropriate
The traditional model of performance management – the one introduced in the mid-1990s and discussed ad nauseam as a necessary evil – is unlikely to yield the results organizations want because it doesn’t treat feedback as an informal, real-time way to engage and develop talent. It also disregards the importance of developing and maintaining a relationship between manager and employee and instead rigidly sticks to a standardized cycle that is not aligned with how and when work gets accomplished or when feedback is needed.
A “one-size-fits-all” approach to performance management can’t handle the highly dynamic and customized world we live and work in because it doesn’t take into account the work type nor the people in the organization.
Because of this, organizations are shifting performance management away from a complex, top-down system to a mechanism that can enhance the employee experience – turning something that happens to the employee into something that happens for the employee.
Ratings aren't the problem – we are
Over and over again, authors, particularly in more recent publications, urge caution about removing ratings. In fact, some authors say that organizations have reached premature conclusions about ratings: instead of fixing them, they’re buying into the myth that they serve no purpose and should be removed.
However, the literature also indicates that removing ratings does not exonerate organizations from providing feedback or evaluation; in fact, it may make doing so harder. Evaluation is necessary to provide meaningful and personalized feedback so employees can improve.
A bigger problem appears to be managers’ inability or unwillingness to diagnose and confront performance problems.1 This is particularly challenging where ratings have been removed: as organizations remove ratings, they need to rely more heavily on frequent feedback to guide employees. Unfortunately, many organizations aren’t sure that their managers are either willing or able to have those feedback discussions,2 which brings us to our third theme.
Relationships are increasingly important
As organizations have replaced annual performance reviews with more regular feedback conversations, there has been a necessary renewed focus on the relationships in organizations – particularly those between employees and managers.
To do this effectively, organizations have to trust their managers to move past just managing projects to truly managing people. This has expanded the role of managers beyond simply assigning work to one that also includes motivating and engaging team members and holding individuals accountable.
The literature also addresses the growing trend of peer reviews – the practice of employees providing open feedback to each other. This has prompted organizations to begin to think about how best to create a “culture of feedback” where everyone is able to provide quality feedback.
Fairness matters
Finally, fairness. As organizations adopt more frequent feedback and more open conversations, they also need to think through how they create an environment of trust and fairness. More specifically, employees need to feel that the feedback they get is credible and fair – regardless of whether it comes from a peer or a manager.
The literature points out that in most PM processes, subtle forms of bias exist, and these biases can create different outcomes for different groups. For example, similarity bias may subtly influence a manager to provide slightly higher ratings for someone more like them (i.e., same likes/dislikes, same gender and race, similar background). These biases are particularly relevant when talking about the relationship between manager and employee. Most PM systems are not yet set up to protect against them.
Interestingly, removing ratings doesn’t remove the potential for bias – it can actually increase it. When organizations remove ratings, they often replace them with fairly ambiguous criteria for evaluation which allow for much broader interpretation. This broader interpretation can lead to perceptions of unfairness and violate norms of trust.
Articles that caught our eye
Of the literature we reviewed, several pieces stood out to us. Each of the following pieces explored ideas that we found useful and interesting. We found them helpful in expanding the way we have been thinking about PM, its challenges, and its possible solutions.
Performance Management: A Marriage Between Practice and Science – Just Say “I Do”
Paul E. Levy, Steven T. Tseng, Christopher C. Rosen and Sarah B. Lueke
“Spoiler alert: the fix is not to blindly get rid of ratings.”
This chapter discusses recent criticisms of traditional PM practices and reviews them in light of academic research. In an effort to reduce the gap between practice and science in PM, the chapter highlights what organizations can do to improve their PM practices and where scholars should focus their research efforts.
Highlights:
- Argues that practitioners are driving the criticism of PM and that the gap between science and practice needs to be addressed
- Suggests that solutions to address criticisms of PM should come from both a practical and research-based point of view
- Advocates that removing ratings should be the rare exception and not the general rule
"Re-Engineering Performance Management"
Ben Wigert and Jim Harter / Gallup, Inc.
“Performance management has buckled because organizations have prioritized measurement over development.”
This report presents research on why traditional practices are not working, insights on how to improve them, and expectations that today’s employees have for their employing organizations. The authors recommend that organizations should create a culture of performance development by establishing expectations, continually coaching, and creating accountability.
Highlights:
- Presents the research behind why traditional PM is not effective in today’s organizations
- Discusses the changing nature of what employees expect from their organizations and how organizations can think through what (if any) changes are necessary in their approach to PM
"Straight Talk About Employee Evaluation and Performance Management"
Lucia Rahilly, Bryan Hancock and Bill Schaninger / McKinsey
“…there is still no substitute for the direct feedback and coaching that happens day in and day out…”
This podcast, with transcription provided, discusses recent research by McKinsey on what drives effective PM. The discussion focuses on the role of the manager to engage in quality performance and development conversations with direct reports, the need for some sort of evaluative component, and the finding that perceptions of fairness impact the degree to which PM is seen as effective.
Highlights:
- Discusses the current trends in PM and the necessary reliance on the ability of managers to provide coaching and feedback
- Explains that people still want to know how they’re performing and that some sort of evaluative component is likely necessary
- Illustrates the importance of perceptions of fairness in the PM approach
"3 Biases that Hijack Performance Reviews and How to Address Them"
Beth Jones, Khalil Smith, and David Rock
“…not all biases make us actively malicious. The key is how we manage our biases.”
The article discusses bias from a neuroscience perspective, highlighting that bias is our brain’s constant search for efficiency. While bias is not inherently bad, it can lead to negative outcomes if left unexplored. The authors discuss three biases – expedience, distance, and similarity – and how managers and organizations can mitigate their impact on performance appraisal.
Highlights:
- States that bias negatively impacts performance appraisal and briefly discusses the impact of three prevalent biases
- Provides high-level suggestions on how to mitigate the influence of these biases in performance appraisal
"Putting the System into Performance Management Systems: A Review and Agenda for Performance Management Research"
Deidra J. Schleicher, Heidi M. Baumann, David W. Sullivan, Paul E. Levy, Darel C. Hargrove and Brenda A. Barros-Rivera
"…much work is yet to be done in developing a body of scientific knowledge about performance management systems that can better inform practice."
While this article isn’t particularly provocative or stirring (it's why we put it last), it does provide a foundational summary of current PM research, which is helpful in understanding more progressive and innovative perspectives. The authors present a model of PM and summarize research from 1980 to 2017. Based on this review, they provide recommendations for future research in PM. This article is great for leaders and practitioners who want to geek out on the history of PM.
Highlights:
- Presents a model of PM to organize components of PM and to integrate perspectives
- Suggests that there are only seven core tasks involved in PM
- Illustrates the importance and value of both formal and informal components of PM
- Builds a case that we’ve excluded the examination of important variables in PM
- Argues that more research is still needed on PM
Overall Impressions
The literature on PM is vast and varied, and there are many, many smart people with different perspectives. We’re pretty sure no one perspective is the “right” perspective. That said, we’re starting to see large-scale agreement for the notion that traditional, top-down, annual-driven PM is less likely to reign supreme in the workforce of the future. With this shift, we think we’ll also see an increased emphasis on the role of relationships in organizations, the expectations of managers, and the importance of trust and fairness in PM approaches.
People Analytics Technology: What the Literature Says
Posted on Friday, March 29th, 2019 at 12:40 AM
To understand the current state of people analytics, we went wide and deep into the literature published on this topic over recent years. We reviewed over 30 published pieces which included academic papers published in journals, web articles, blogs, and research reports. This article summarizes:
- The 5 major themes
- What we expected to see but didn't
- Five articles you should read
The 5 major themes
Though the literature about people analytics technology continues to get richer and more varied, there were 5 major themes from our review:
Growth and maturity
Business and HR leaders, academics, practitioners, and technology providers agree on the purpose and objective of successful people analytics – to add business value and achieve business outcomes. Numerous reports and research findings point to the financial benefits reaped by companies that have built mature people analytics functions – ones that go beyond descriptive reporting and have embraced predictive and prescriptive analytics that tie in financial and management data. There is general agreement that people analytics today means more than reporting on HR metrics, that it should help develop employees, and that solutions should be business-driven. It needs to move beyond a day-to-day, backward-looking process to a forward-looking, predictive process that allows for strategic future planning.
The focus on the employee
This is a growing space with increasing focus on using analytics for improving employee experiences and helping employees bring their best to work. There is a sizeable section of literature that focuses on harnessing the power of analytics and network analysis to measure employee engagement, satisfaction, collaboration, innovation, stress, etc. This interest can be explained by the growing trend of trying to understand the effect of a company’s social capital on its financial performance. As one of the reports from Accenture states, “Companies with a highly engaged workforce are 21% more profitable than those with poor engagement.”1
Collaboration
A significant portion of the literature we reviewed spoke of the importance of involving key stakeholders and influencers for successful people analytics. This is both timely and well argued. Establishing a people analytics function not only requires investment of resources for the needed technology and skills, but also a culture that encourages, champions, and allows for collaboration. Lexy Martin’s piece on the role of HR Business Partners (HRBPs) does a great job of presenting a guide to get HRBPs ready to champion the people analytics cause within their organizations.
Ethics and privacy
There is a growing concern about matters of ethics and the privacy of the data collected. The now frequently mentioned phrase “just because something can be measured doesn’t mean it should be” applies not just to collecting the data needed to drive insights, but also to doing so in an ethical and legal manner. Leaders know they have an obligation to be transparent and open about the data they collect. But how they go about doing this, and what technology they can leverage, are questions that still need answering.
What we expected to see in the literature, but didn't
While a lot has been written on the value people analytics can add to an organization and its importance to the future of work, an essential component of successful implementation is technology, along with the appropriate tools and skills. Numerous articles discuss how people analytics can help organizations at the macro level, but there is a gap in the literature on what kinds of analyses should be run and how to run them. Ben Teush does a great job highlighting one aspect of this in his article.
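To illustrate the kind of analysis the how-to literature rarely walks through, here is a deliberately simple, hypothetical sketch: the data is synthetic and the column names are invented, but it shows a typical first step of linking engagement survey scores to attrition before moving on to anything predictive.

```python
import pandas as pd

# Synthetic example data; in practice this would come from HRIS and survey exports.
df = pd.DataFrame({
    "employee_id": range(1, 9),
    "engagement_score": [2.1, 4.5, 3.2, 4.8, 1.9, 3.9, 4.4, 2.6],  # 1-5 survey scale
    "left_company": [1, 0, 0, 0, 1, 0, 0, 0],                      # 1 = attrited
})

# Descriptive first pass: attrition rate by engagement band.
df["engagement_band"] = pd.cut(
    df["engagement_score"], bins=[0, 3, 4, 5], labels=["low", "medium", "high"]
)
attrition_by_band = df.groupby("engagement_band", observed=True)["left_company"].mean()
print(attrition_by_band)
```

A real analysis would add sample sizes, confidence intervals, and controls, but even this level of "show me" is largely absent from the literature we reviewed.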
The other gap is the lack of information with regards to the existing technology that can help organizations achieve their purpose. Once the company has identified the business challenges it needs to address, what kinds of tools and solutions should it look for? What are the different types of platforms that exist? What are their strengths and weaknesses? What is the different kind of technology available in the market for companies at varying growth levels, sizes, and stages of people analytics maturity levels? What should organizations be looking at next?
As companies increasingly look at people analytics with the end objective of adding business value, moving beyond reporting on HR metrics and benchmarking, there are common challenges they are looking to solve. Even when stakeholders such as HRBPs, CHROs, and business leaders support people analytics, collaboration between people analytics teams and other functions (e.g., IT, finance) remains a frequently encountered hurdle. Another unanswered question: what standard practices offered by technology providers can companies entering this field quickly adopt to ensure some level of compliance and ethical standards are met?
These are some of the questions that we hope to provide answers to through our research. As part of our next planned output for this study, we will be launching our survey at the end of March. Survey results, along with the interviews that we will be conducting over the summer, will inform our final findings. These findings will be presented in the fall of 2019.
5 Articles you should read
Of the literature we reviewed, several pieces stood out to us. Each of the following authors and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.
Article 1: The Happy Tracked Employee2
“For workers, though, the value of all this data gathering isn’t as clear. Advanced people analytics may even hinder employees’ ability to freely manage their time and experiment.”
Highlights:
- Speaks to the very important and relevant aspect of analytics – privacy
- Illustrates the dangers of over-monitoring and too much data gathering
- Talks about what companies can do to prevent breaking employee trust and the law
Article 2: Ten Red Flags Signaling Your Analytics Program Will Fail3
Oliver Fleming, Tim Fountaine, Nicolaus Henke, and Tamim Saleh
“It is imperative that businesses get analytics right. The upside is too significant for it to be discretionary. Many companies, caught up in the hype have rushed headlong into initiatives that have cost vast amounts of money and time and returned very little.”
Highlights:
- Describes some of the common frustrations felt by leaders on their people analytics journey
- Identifies the top ten red flags which signal a failing analytics function that companies should be on the lookout for
Article 3: Better People Analytics4
Paul Leonardi and Noshir Contractor
“If, as the sticker says, people analytics teams have charts and graphs to back them up, why haven’t results followed? We believe it’s because most rely on a narrow approach to data analysis: They use data only about individual people, when data about the interplay among people is equally or more important.”
Highlights:
- Provides a lesson on why companies should not fixate their people analytics and ONA (organizational network analysis) efforts on data about individual people and their attributes, such as ethnicity, age, gender, education, tenure, absenteeism, etc.
- Offers an in-depth understanding of relational analytics and how organizations can use ‘six structural signatures’ to better understand their employees and their levels of efficiency, vulnerability, innovation, and influence (a minimal illustrative sketch follows below)
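As a minimal, hypothetical illustration of the relational idea behind those signatures, the sketch below builds a tiny collaboration network from invented pairs and surfaces potential "brokers", the kind of signal that attribute-only data cannot reveal.

```python
import networkx as nx

# Invented collaboration pairs (e.g., who worked with whom on recent projects).
edges = [
    ("Ana", "Ben"), ("Ana", "Chen"), ("Ben", "Chen"),
    ("Chen", "Dev"), ("Dev", "Esi"), ("Esi", "Fatima"),
    ("Dev", "Fatima"), ("Chen", "Esi"),
]
G = nx.Graph(edges)

# Betweenness centrality highlights people who bridge otherwise separate groups,
# one simple relational signal derived from the interplay among people.
brokers = nx.betweenness_centrality(G)
for person, score in sorted(brokers.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{person}: {score:.2f}")
```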
Article 4: Nine Dimensions for Excellence in People Analytics5
“When I reviewed all the work over the last few years in clients and organizations around the world I realized that the answers can be summarized into nine dimensions which are grouped into three categories: foundational aspects, resources needed, and value gained.”
Highlights:
- Gives a thorough overview and detailed guidance on creating a successful people analytics roadmap
- Provides exhaustive dimensions that include the foundational aspects people analytics functions need to address, the resources needed, and how value can be gained from insights
Article 5: People Analytics – Show Me, Don’t Tell Me6
“The audiences people analytics articles seem to be targeting are (1) HR and business leaders, and (2) people analytics leaders. There is a third audience that just needs to understand how to do people analytics.”
Highlights:
- Points out that while much has been written about the benefits of people analytics, there remains a lack of knowledge shared on how to conduct people analytics and what kind of analysis to perform
- Provides excellent sources and links for those looking to start with people analytics, including a few examples of HR problems that analytics could solve
Bonus: Reports vs Analytics: What’s the Difference?7
“Insight is the deeper understanding you get of the actions and behaviour behind all of the data points you’ve gathered. It is the ability to make out the big picture from the millions of brush strokes that created the painting.”
Highlights:
- Reminds and educates us on the differences between reporting and analytics, the limitations of reports, and why you need analytics to understand the bigger picture
Finally, David Green’s monthly compilation of articles on HR and People Analytics is a constant source for those looking to stay up to date about the field.
Additional reads
1. Houghton, Edward, and Green, Melanie; “People Analytics: driving business performance with people data,” CIPD report, June 2018
2. “The Rise of Analytics in HR: The era of talent intelligence is here,” LinkedIn Report, March 2018
3. Levenson, Alec and Pillans, Gillian; “Strategic Workforce Analysis,” Corporate Research Forum, November 2017
4. Marr, Bernard; “5 Inspiring Ways Organizations Are Using HR Data,” Forbes, May 2018
5. Martin, Lexy; “Here’s What You Need In a People Analytics Leader,” TLNT, November 2018
6. Chakrabarti, Madhura; “Upskilling HR in People Analytics,” Deloitte Capital H Blog, March 2018
7. Creelman, Davis; “Analyzing a Fact-based Culture,” HR People + Strategy Blog, April 2018
While we have identified the articles above as being the most critical for readers to review, we did read a lot of others. If you’d like a full list of the articles we covered, please do not hesitate to reach out to us at [email protected].
Learning Impact: Anything New?
Posted on Friday, March 8th, 2019 at 1:55 AM
Introduction
If you have been following our Learning Impact project, you know that the main premise of this research is that we’re evaluating “learning” in organizations all wrong. Therefore, in conducting a fairly in-depth review of existing literature on the topic, we were not at all surprised about the state of learning impact – only somewhat disappointed.
We looked at over 50 academic and business articles, reports, and books for this literature review, which has given us a decent understanding of the known world of evaluating learning. This short article will summarize:
- What we saw
- What we learned
- Overall impressions
Word cloud of the learning impact literature: Most prevalent words in literature reviewed.
What we saw
What we hoped to see in the literature were new ideas – different ways of defining impact for the different conditions we find ourselves in. And while we did see some, the majority of what we read can be described as same. Same trends and themes based on the same models with little variation. We have highlighted four of those themes below.
Models, models, models!
Much of the literature focused on established evaluation models. These articles generally fell into three main categories: use cases, suggested improvements, or critiques.
By far, the most common model addressed in the research is Kirkpatrick’s, although, given the number of articles written on how to use it effectively, applying it well is still a challenge. It was frequently called the “industry standard” (including by Kirkpatrick Partners themselves).1
Articles on Kirkpatrick tended to be fairly passionate, either for or against the model. While many authors doubled down on it, we also ran across several articles that offer what we consider to be fair critiques, including the lack of empirical evidence tying existing learning evaluation models to business results.
Other models, including Phillips’ Chain of Impact, Kaufman’s 5 Levels of Evaluation, Brinkerhoff’s Success Case Method, and Anderson’s Return on Expectation, among others, were also explored. In total, we looked at over 20 models. They are summarized in the table below.
Evaluation Model Summary
| Model | Author(s) | Year | Incredibly simplified steps | Read more |
| --- | --- | --- | --- | --- |
| Kirkpatrick 4 Levels of Training Evaluation | Kirkpatrick | 1976 | Termed “the industry standard” by many of the articles we read; Kirkpatrick’s four levels are used widely to determine learning effectiveness. | “The Kirkpatrick Model” |
| Kaufman’s 5 Levels of Evaluation | Kaufman, Keller, Watkins | 1994 | Adapts Kirkpatrick’s original model to include 5 levels and is used to evaluate a program from the employee’s perspective. | “What Works and What Doesn’t: Evaluation Beyond Kirkpatrick” |
| Success Case Method | Robert Brinkerhoff | 2006 | Particularly effective in assessing important or innovative programs. Focuses on the extremes – the most and least successful cases – and examines them in detail. | “Success Case Method” |
| Chain of Impact | Jack Phillips | 1973 | Adapts the Kirkpatrick model by adding a fifth step: ROI. The purpose is to translate the business impact of learning into monetary terms so it can be compared more readily. | ROI Institute |
| Value of Learning Model | Valerie Anderson | 2007 | Consists of a three-stage cycle applied at the organizational level. One of the few models that does not necessarily use the course or initiative as the unit of measurement. Anderson also introduced the term “Return on Expectation” as part of her work. | “A new model of value and evaluation” |
| CIPP Model | Daniel Stufflebeam | 1973 | Designed to link evaluation to program decision-making (i.e., decisions about what happens to the program). Has a use case for resource allocation and/or cost-cutting. Evaluates four areas: context, input, process, and product. | “The CIPP Evaluation Model: How to Evaluate for Improvement and Accountability” |
| UCLA Model | Marvin Alkin, Dale Woolley | 1969 | Five kinds (or need areas) of evaluation, each designed to provide and report information useful for making judgments relative to its category. | “A Model for Educational Evaluation” |
| Discrepancy Model | – | 1966 | Used where there is an understanding that a program does not exist in a vacuum but within a complex organizational structure, and evaluates it against a program cycle framework. | “ABCs of Evaluation” |
| Goal Free Evaluation | Michael Scriven | 1991 | Focuses on the actual outcomes of a program rather than only its stated goals. Scriven believed the goals of a particular program should not be taken as a given. | “The ABCs of Evaluation” |
| LTEM: Learning Transfer Evaluation Model | Will Thalheimer | 2018 | Designed to help organizations get feedback to build more effective learning interventions and validate results. | “The learning-transfer evaluation model: Sending messages to enable learning effectiveness.” |
Justification as the goal
Much of the literature reviewed focused on utilizing learning evaluation, measurement, and analytics to either prove L&D’s worth to the organization or to validate L&D’s choices and budget. Words and phrases like “justify” and “show value” were used often.
Interestingly, according to David Wentworth at Brandon Hall Group, the pressure to defend L&D’s decisions and actions appears to be coming from the L&D function itself (44%), rather than other areas of the business (36%),2 which means, while business leaders may not be explicitly asking for “proof”, L&D departments most likely feel the need to quantify employee development in order to have that proverbial seat at the table.
The literature also focused heavily on Return on Investment, or ROI. How-to articles and research in this space continue to attempt to tie the outcomes of a specific program or initiative to financial business results.
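For reference, the arithmetic behind most ROI discussions in this literature is straightforward; the hard part in practice is isolating learning's contribution and converting it to credible monetary terms. The figures below are invented purely for illustration.

```python
# Hypothetical figures for a single program, for illustration only.
program_costs = 120_000       # design, delivery, and participant time
monetary_benefits = 204_000   # estimated value of the resulting performance improvement

# Benefit-cost ratio and ROI in the form most of the how-to literature uses.
benefit_cost_ratio = monetary_benefits / program_costs                    # 1.7
roi_percent = (monetary_benefits - program_costs) / program_costs * 100   # 70%

print(f"BCR: {benefit_cost_ratio:.1f}  ROI: {roi_percent:.0f}%")
```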
Course-focused
Almost all of the literature we reviewed utilized the ‘course’ or ‘program’ as the unit of measurement. While several models address the need to take into account environment and other variables, they appear to do so in order to either control the entire experience or be able to isolate the “learning” from everything else.
To date, we have not been able to find any literature that addresses evaluating or measuring continuous learning as we understand it (i.e., individuals utilizing the environment and all resources available to them to continuously develop and improve). We feel that this is a shortfall of the current research and should be addressed.
Finally, the research focused heavily on learning from the L&D function’s point of view. Few authors appear to be looking at the field of learning evaluation/measurement/analytics from a holistic viewpoint. We expected to see more literature addressing L&D’s role in delivering the business strategy, or at least providing information to other functions that could be helpful to them in making decisions.
Aged ideas
While we do not disparage any of the great work that has been done in the area of learning measurement and evaluation, many of the models and constructs are over 50 years old, and many of the ideas are equally old.
On the whole, the literature on learning measurement and evaluation failed to take into account that the world has shifted – from the attitudes of our employees, to the tools available to develop them, to the opportunities we have to measure. Many articles focused on shoe-horning the new challenges L&D functions face into old constructs and models.
We realize that this last finding may spark a heated conversation – to which we say, GOOD! It’s time to have that conversation.
5 articles you should read
Of the literature we reviewed, several pieces stood out to us. Each of the following authors and their work contained information that we found useful and mind-changing. We learned from their perspectives and encourage you to do the same.
Article 1: Making an Impact3
Laura Overton and Dr. Genny Dixon at Towards Maturity
“96% are looking to improve the way they gather and analyze data on learning impact. However, only 17% are doing it.”
Highlights:
- Points out areas where L&D functions measure and what is important to them
- Provides an interesting discussion on evidence to understanding impact
- Shows compelling data about benefits of those who measure vs. those who guess
- Gives some good hints for getting started
Towards Maturity’s 2016 report provides some interesting statistics about the world of learning metrics / measurement / analytics / evaluation. This article provides a sound platform for continued research on learning measurement and evaluation, as it provides a good summary of how learning leaders are currently thinking about the space.
Article 2: Human Capital Analytics @Work4
Patti Phillips and Rebecca Ray at The Conference Board
“Aspirational organizations use analytics to justify actions. Experienced organizations build on what they learned at the aspirational level and use analytics to guide actions.”
Highlights:
- Outlines an analytics maturity model for organizations to gauge their evolution when it comes to using data.
- Provides a good discussion on “converting data to money”, or utilizing data to provide a comparison of cost savings
- Identifies four key elements to help organizations make analytics work: Frameworks, Process Models, Guiding Principles, and Capability, all of which should be considered when putting together a learning strategy
- Recounts some good examples and case studies
This article broadens the discussion about learning measurement to people analytics in general – something that L&D functions should be considering as they revamp their measurement and evaluation methods.
Article 3: The Learning-Transfer Evaluation Model5
Will Thalheimer at Work-Learning Research, Inc.
“For too long, many in the learning profession have used these irrelevant metrics as indicators of learning.”
Highlights:
- Honest (yet biting) assessment of the current 4-level models and their success to this point in time
- Section about the messages that measurement can send
- Discussion on measuring negative impact, as well as positive impact
- Introduction of the first new model for learning evaluation in about 10 years
Will addresses several points that have evolved our thinking. On top of that, Will is a witty writer who is easy to read and downright entertaining.
Article 4: Making data analytics work for you – instead of the other way around6
Helen Mayhew, Tamim Saleh, and Simon Williams
“Insights often live at the boundaries.”
Highlights:
- Emphasizes the importance of focusing on “purpose-driven” data, or data that will help you meet your specific purpose.
- Introduces the idea that large differences can come from exploiting and amplifying incrementally small improvements.
- States that incomplete information is not useless and should not be treated as garbage – it has value and can be essential in helping people connect the dots
- Provides a good discussion on using feedback loops instead of feedback lines
This article addresses data analytics in general, but provides several applicable points that L&D departments can incorporate.
Article 5: Leading with Next-Generation Key Performance Indicators7
Michael Schrage and David Kiron
“Measurement Leaders look to KPIs to help them lead — to find new growth opportunities for their company and new ways to motivate and inspire their teams.”
Highlights:
- Provides a decent discussion on Key Performance Indicators and what they currently mean in organizations
- Points to Chief Marketing Officers and their increasing accountability for growth-oriented objectives (we think CLOs and L&D in general are close behind).
- Has an excellent discussion on leading versus lagging indicators, and the importance of both in “measuring”.
- Recounts several good case studies that helped us think differently about what a KPI is and how it can be used
We found this article eye-opening. While it is not geared specifically to “learning”, it provides several, adaptable ideas that we feel will be important for next-generation learning measurement and evaluation.
Bonus: 4 Measurement Strategies That Create the Right Incentives for Learning8
“As human beings, we are compelled to adapt our behavior to the metrics we are held against.”
Highlights:
- Has a great discussion on how even the act of measuring learning can be a motivation.
- Introduces several non-traditional ways to “measure” learning
- Makes the point that measurement strategy should be a part of the learning strategy itself, not an afterthought used only to gauge its effectiveness.
Yes, we know it’s a blog, and yes, we realize it was written by a vendor. But this piece made some interesting points – particularly, how what we measure impacts the business.
Overall Impressions
If we were to sum up all we read into a short statement, it would be this: L&D has a long way to go. That said, we are also hopeful. As L&D functions further integrate into the rest of the business, as tools for analytics and measurement get better, and as we begin to define new models that incorporate new ways of learning and new environmental variables, we can imagine a world, in the not too distant future, where we finally – after more than 50 years of trying – maybe crack this nut.
We would love to hear what you think – what did we miss? What else should we be looking at? Comment below.