As part of our next-gen learning study, we invited leaders to participate in an interactive roundtable on learning methods. We were thrilled at the level of participation—both verbal and in chat. Thank you to everyone who took the time to connect with one another and share their experiences!
How does learning happen in orgs?
We opened the session with a discussion of how learning happens in orgs and how L&D functions can enable that learning. Leaders shared their definitions of learning, including:
- Learning is an ongoing process to gain and apply skills to solve business challenges
- Learning happens in the flow of work
- Learning is a combination of experience and reflection
- L&D functions enable learning by creating environments in which individuals and groups can build skills
- L&D functions enable learning through partnership with employees and leaders across the org
We shared RedThread’s Employee Development Framework, which depicts 6 employee behaviors that, together, describe how learning happens in orgs (Figure 1). For more info on the framework and detailed definitions of the 6 behaviors, check out this infographic.
We then transitioned to the specific topic of the day: learning methods. There are lots (lots!) of learning methods that can enable the 6 behaviors in the Employee Development Framework. Figure 2 shows a few of these methods mapped to each of the behaviors. It’s not a comprehensive list of methods, but it’s a decent start.
With so many learning methods available, L&D functions may ask:
How should we choose the learning methods that will work for our org? And how do we manage them all?
This overarching question framed the discussion, which focused on 4 main topics:
- Selection criteria: Which criteria should orgs use to choose the learning methods they invest in? (What works well?)
- Evaluating efficacy: How can learning leaders know if a learning method is working well for their org?
- New learning methods: What are some of the best new ways for employees to learn and develop?
- Exit strategies: When a learning method isn’t working for an org, what should learning leaders do?
The roundtable generated a number of insights we thought worth highlighting. The advice and examples leaders shared spanned the full learning method lifecycle: selecting, evaluating, and offboarding learning methods. Here are our top 5 takeaways.
- Know your org’s business goals and audience
- Correlate business and L&D metrics to see what’s working
- Prioritize learning methods that connect people
- Use gamification … appropriately
- Pause or offboard methods that aren’t working
Know your org's business goals and audience
The leaders at this roundtable agreed that a key criterion for choosing learning methods should be the desired business outcome. They suggested asking questions such as: What’s the problem that needs to be solved or the org goal that should be accomplished? Leaders recommended partnering with leaders in other functions to answer this question in detail.
The other criterion leaders spoke a lot about was “know your audience.” Sometimes a method L&D thinks will work well actually falls flat—so it’s important to ask employees about their needs, challenges, and preferences before introducing a learning method. One leader gave this example:
We shoehorned AR/VR into our learning offerings. Then we asked our audience and they said, “No.” The new method wasn’t valuable. We should have asked the audience first.
Leaders were open about certain practical considerations that also influence how orgs choose learning methods: cost, implementation timeline, existing contracts or relationships with vendors, org tolerance for risk, etc.
Correlate business and L&D metrics to see what's working
Unsurprisingly, our discussion about using business goals as selection criteria flowed nicely into the topic that followed: understanding which learning methods are working and which are not.
Leaders emphasized that, ultimately, the best way to understand whether a learning method is working is to see if it moves the needle on business goals. They recommended partnering with leaders across the business to identify not just the business goal, but also the specific metrics associated with that goal.
Any lack of clarity on business metrics is telling in and of itself. One leader wrote in the chat:
Ask leaders what data they have that shows the issue they want to eliminate or the metric they want to improve. Red flags if they don't have any data!
Leaders said they look for correlations between business metrics and more traditional L&D metrics that indicate uptake of the learning method. They noted that some learning tech can now track business or productivity metrics in some areas—for example, in sales situations, call centers, or manufacturing environments.
In addition, leaders track indicators like engagement, adoption, usage, skill development, and observable behavior change. These metrics give relatively early or in-flight indications of whether a learning method is effective or not.
Prioritize learning methods that connect people
The leaders at this roundtable reported that they’re noticing more and more emphasis on learning methods that encourage people to learn from each other.
Worries about employee disconnection that surfaced during the COVID-19 pandemic have L&D functions thinking hard about how to foster more connection—creating more opportunities for employees to connect and learn with others—in teams, communities of practice, cohorts, etc.
One leader observed:
It’s not the old way of someone telling me something, or even me finding and reading content on my own. It’s more about, how are we helping each other learn?
One leader said that team learning accounts for almost 70% of what her L&D function enables, especially for topics tied to identified business problems, because team members are able to reinforce one another's learning.
Others indicated that they are supporting a lot of collaborative learning as well, both to upskill and to foster connection.
Leaders noted that not all team learning is created equal. They placed a premium on learning methods that help employees learn together in real-world environments. The consensus was that the more a learning method could enable employees to learn in their actual work environments with the people they actually work with, the more effective it was likely to be.
Use gamification … appropriately
Sometimes employees aren’t able to learn in their real work environments—for example, when there are safety concerns, logistical constraints, or difficult or sensitive topics. In these cases, gamification might be an appropriate learning method.
Leaders noted that one reason gamification works well in such situations is that it facilitates smart failures. When it’s too risky or infeasible for employees to learn from failure in the course of their day-to-day work, gamification and simulations—often facilitated through AR/VR—give them opportunities to fail and learn from that failure.
They did, however, make a distinction between old-school gamification—which emphasizes stats and leaderboards—and business-oriented simulation games. One leader observed:
There’s a big difference between gamification to generate usage and gamification to generate critical thinking about a business problem.
Leaders were also careful to warn against introducing gamification or AR/VR simply because it’s a shiny new thing. They referred back to the business goal and the audience, and emphasized: If it doesn’t fit the goal and audience, it’s likely not going to work.
Pause or offboard methods that aren't working
Toward the end of the roundtable, we asked leaders, “What do you do when a learning method isn’t working?” The first thing to do, they said, is understand why it’s not working. Sometimes the problem can be fixed. And sometimes, you just have to let it die.
But letting go of a learning method can be challenging for L&D teams who have worked tirelessly to design, launch, and promote it. Leaders discussed the importance of viewing learning methods as disposable, with a short shelf life. Everything has a lifecycle and it’s best not to get attached, they advised.
One leader said her org implements “strategic pauses” to help offboard learning methods that may not be working:
If a learning method isn’t working, we’ll do a “strategic pause.” We stop providing the program and evaluate if it’s really needed. If so, we rework the offering. If not, we find a way to offboard it.
Another leader said her org took the lifecycle management of learning methods away from the L&D function and handed that responsibility to the various business functions. If a method continues to be useful to a function, they keep it alive. If not, they let it go. Handing keep-or-cut decisions to the business function that uses the learning method struck us as an innovative way to help ensure the right decisions are made.
Finally, leaders noted the importance of planning for offboarding from the beginning. Contracting, sunk costs, culture, inertia, and a host of other reasons can pressure orgs to keep learning methods alive even if they’re not working and/or aren’t used. It’s helpful to have the expectation and processes for exiting learning methods in place from the very start.
This discussion about how to choose, evaluate, and exit learning methods was wonderfully enlightening. Thanks again to those who attended and made it such an enriching conversation. As always, we welcome your suggestions, thoughts, and feedback at firstname.lastname@example.org.