AI is increasingly becoming a key enterprise strategy for leading organizations. From marketing to product engineering to supply chain management, AI is helping business units discover and interpret new insights – fueling smarter and more agile organizations.

Another business unit where AI is increasingly adding value – unearthing new data and turning it into actionable, bottom-line-impacting insights – is learning and development (L&D).

Traditionally, L&D departments have struggled to assess the efficacy and impact of their programs. In fact, 1 in 5 organizations say they don’t measure the impact of learning programs at all. But AI is changing the game – ushering in a new era of accountability, verification and data science for L&D.

The Learning Data Problem

It turns out that proving employees have learned what they need to know and can accurately apply it on the job is hard to do. The tools and technologies that have been the mainstay of L&D (e.g., learning management systems and assessments) are not equipped to provide this level of insight, and delivering this analysis manually at scale is impractical. As a result, L&D has been consistently hamstrung by its inability to demonstrate hard-hitting results and outcomes – weakening the department’s internal reputation and leaving organizations in the dark about the learning needs of their workforces.

But learning outcomes – like knowledge gains, mastery and retention – are just one part of the data set that the C-level wants to see. CEOs, CDOs and CFOs also want to know how learning is impacting business outcomes and delivering ROI. For practitioners who are struggling just to determine what their people learned, demonstrating the link between learning gains and business impact/ROI feels out of the realm of possibility.

How L&D Measures Efficacy Today

For corporate leaders who aren’t familiar with the world of L&D, here’s a look at how training programs are typically measured:

Learning departments have relied on four levels of measurement, known as the Kirkpatrick Model, since the 1950s. The majority of today’s training technologies and tools, however, support only the first two levels of measurement, which provide an imperfect look into learner knowledge.

Level 1: Reaction

The most basic level of measurement is assessing participants’ perceptions of the training. This includes Net Promoter Score (NPS) questions (“would you recommend it to a friend?”) and general feedback questions (“did you feel it was relevant to your job?”). And because this level is based on feedback directly from participants, it’s fairly easy to gather. In fact, 80% of L&D departments measure learner reaction. The problem here is that employee likes and wants tell L&D departments relatively little about learning needs, gaps and outcomes.

Level 2: Learning

This level evaluates what employees learned from the training. Research shows that roughly 49% of L&D departments measure learning – mostly through pre- and post-training assessments. These assessments are static, giving L&D only a snapshot of what their people know at a single point in time. They provide no insight into how well a person will be able to apply what they’ve learned over time. They also do not account for the research that shows how quickly people forget what they’ve learned (as much as 70% just one day post-training), making these assessments obsolete very quickly.
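The forgetting effect mentioned above is often modeled with an exponential forgetting curve. The sketch below is illustrative only: the `stability` parameter is a hypothetical per-learner value, chosen here so that roughly 70% of material is forgotten after one day, matching the figure cited above.

```python
import math

def retention(days_elapsed: float, stability: float = 0.83) -> float:
    """Estimate the fraction of learned material still retained after a
    delay, using a simple exponential forgetting curve: R = e^(-t/s).
    `stability` (in days) is a hypothetical parameter; higher values
    mean slower forgetting. The default of 0.83 is chosen purely for
    illustration, to match ~70% forgetting one day after training."""
    return math.exp(-days_elapsed / stability)

# One day after training, retention under this model is about 30%,
# i.e. roughly 70% of the material is already forgotten.
print(f"Day 1 retention: {retention(1):.0%}")
```

A one-time post-training assessment scores the learner at the top of this curve; it says nothing about where they will be a week later, which is why static assessments lose meaning so quickly.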

The more advanced levels outlined in Kirkpatrick’s Model provide much deeper insights, but are notoriously hard to measure.

Level 3: Behavior

This level is all about application – how well an employee applies the training on the job. Most often, this level is assessed through management evaluations and interviews. Perhaps because it’s a more hands-on type of evaluation, only 25% of organizations measure it.

Level 4: Results

Level 4 evaluates the results of training, or the “degree to which targeted outcomes occur as a result of the training.” Think: business impact (you know, that thing that only 8% of CEOs say they see from training). This involves assessing leading indicators (early measurements that suggest critical behaviors are on track to create a positive impact on results), KPIs and ROI. Armed with the traditional L&D tools, this analysis is nearly impossible. That’s why only between 7% and 13% of organizations claim to measure it.
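The ROI component of Level 4 is conceptually simple – net benefit divided by cost – even though attributing a dollar benefit to training is the hard part. A minimal sketch, with hypothetical numbers:

```python
def training_roi(benefit: float, cost: float) -> float:
    """Classic training ROI: net benefit as a fraction of program cost."""
    return (benefit - cost) / cost

# Hypothetical example: a $50k program credited with $120k of
# measurable productivity gains yields 140% ROI.
print(f"ROI: {training_roi(120_000, 50_000):.0%}")
```

The arithmetic is trivial; what traditional L&D tooling cannot do is produce a defensible `benefit` figure, which is exactly the gap the rest of this article addresses.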

Data Science Skills Gap

The reason Levels 3 and 4 are so hard to measure comes down to a data science skills gap. Many of the L&D technologies available today aren’t sophisticated enough to provide the level of data science needed to evaluate training at these two levels, and most L&D departments don’t employ dedicated data scientists who can pluck value out of raw learner data. As a result, L&D continues to have difficulty demonstrating how its programs improve business performance and outcomes.

So, how can L&D get more valuable insights from their training and development programs? And how can C-level executives get answers to the questions they have about the ROI and business impact of training within their organizations?

The answer is AI.

How AI is Changing Data Capabilities of L&D Departments

Just as other departments are identifying new opportunities and insights from AI, so too can L&D. In fact, learning technologies supported by AI are beginning to provide built-in data science capabilities. (This is a significant departure from non-AI-enabled learning technologies, which capture and share lots of data but leave the onus of interpretation and application on L&D.) This not only makes measuring all levels of the Kirkpatrick Model realistic, but it also eases the burden of analysis for L&D professionals.

AI-powered learning technologies can:

  • Monitor user behavior to deliver insights into learner engagement
  • Combine performance and behavior metrics to verify mastery, predict future application of training and estimate when employees will need to refresh their knowledge
  • Automatically improve the efficacy of content by removing poor-performing assessments and suggesting fixes
  • Identify assessments that are predictive of larger trends or workforce abilities (e.g. if a learner is good at “x”, they’ll likely be good at “y”)
  • Integrate with other systems to link learning data with KPIs and, ultimately, business outcomes.

AI closes L&D’s data science skill gap. It helps L&D departments deliver against the full breadth of Kirkpatrick’s industry-standard metrics. It creates a more complete picture of workforce knowledge that can be used to improve programs, identify future needs and predict workforce performance. And, perhaps most importantly, AI equips L&D departments to quantify the returns and outcomes of their learning expenditures with hard-hitting data. All this equates to a more informed, more accountable training environment – one that answers the ROI and impact questions of the C-suite with training that delivers.