Effective eLearning evaluation is one of the most critical aspects of making sure your learning and training programs do what you need them to do: improve the skills and knowledge of employees in a way that has a positive impact on the company’s bottom line. I spent a good deal of time investigating the Kirkpatrick Model for learning evaluation, which was around long before eLearning appeared. It applies well to any kind of learning and is still recognized by many as the gold standard for evaluation. But there are other eLearning evaluation alternatives available for those who want to try something different.

eLearning Evaluation Alternatives: Beyond Kirkpatrick

If, however, you want to learn more about the Kirkpatrick Model and how to use it, take a look at my series of articles on the subject.

eLearning Evaluation Alternatives: 4 Approaches

Why would you even entertain anything other than the Kirkpatrick Model if it’s considered the go-to model? I think it’s always better to choose a model because you’ve taken the time to investigate the alternatives, not because it’s the only one you know. In the case of the Kirkpatrick Model, it’s more often than not the “default” option because it’s what most instructors and instructional designers learned in school or early in their careers. But the model does have its detractors. If you’d like to read a lively debate about its alleged problems, check out Kirkpatrick Model Good or Bad? The Epic Mega Battle! Other criticisms of the Kirkpatrick Model can be found in Jane Bozarth’s article, Alternatives to Kirkpatrick.

The good news for those interested in exploring eLearning evaluation alternatives is that there aren’t many models to sift through. There are really only a handful, so taking a closer look at them won’t suck up too much of your precious time. Below are four models worth considering, with links to more information about each one:

CIPP: The Context-Inputs-Process-Product Model (Stufflebeam)

Daniel Stufflebeam was a true mover-and-shaker in the field of evaluation and founded the Western Michigan University Evaluation Center (he passed away in 2017). His CIPP Model is a popular choice among eLearning evaluation alternatives because it can be used for all kinds of evaluations, whether of a whole curriculum, a single course, a certification process, a project, or an entire organization. It was originally developed to help the federal government evaluate projects undertaken during the War on Poverty in the 1960s and 1970s. Instead of looking primarily at learning products or results, it approaches evaluation from the perspective of the curriculum development process. The key questions corresponding to each of the four main aspects of the model are as follows:

  • Context: Pre – What needs to be done? Post – Was the program keyed to clear goals based on assessed beneficiary needs?
  • Inputs: Pre – How should it be done? Post – Were the targeted needs addressed by a sound, responsive plan?
  • Process: Pre – Is it being done? Post – Was the program’s plan effectively implemented?
  • Product: Pre – Is it succeeding? Post – Did it succeed?

The “product” element, by the way, has four important sub-aspects: assessments of impact, effectiveness, sustainability and transportability. For more information on this approach, see CIPP Evaluation Model Checklist: A Tool for Applying the CIPP Model to Assess Projects and Programs. Because it can be used for a variety of evaluation needs, the language isn’t geared specifically to learning, but it’s easy to see how it relates.
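
Since the CIPP questions work like a checklist, here’s a minimal Python sketch of one way you might record pre- and post-program answers during a course review. It’s purely illustrative – the CIPPReview class and its fields are my own invention, not part of Stufflebeam’s checklist:

    from dataclasses import dataclass, field

    # The four CIPP components and their pre/post guiding questions, as listed above.
    CIPP_QUESTIONS = {
        "Context": ("What needs to be done?",
                    "Was the program keyed to clear goals based on assessed needs?"),
        "Inputs":  ("How should it be done?",
                    "Were the targeted needs addressed by a sound, responsive plan?"),
        "Process": ("Is it being done?",
                    "Was the program's plan effectively implemented?"),
        "Product": ("Is it succeeding?",
                    "Did it succeed?"),
    }

    @dataclass
    class CIPPReview:
        """Records pre- and post-program answers for one evaluation."""
        program: str
        answers: dict = field(default_factory=dict)  # component -> {"pre"/"post": answer}

        def record(self, component: str, phase: str, answer: str) -> None:
            if component not in CIPP_QUESTIONS:
                raise ValueError(f"Unknown CIPP component: {component}")
            self.answers.setdefault(component, {})[phase] = answer

    # Example: start a review of a (hypothetical) onboarding curriculum.
    review = CIPPReview("New-hire onboarding curriculum")
    review.record("Context", "pre", "New hires need faster time-to-productivity.")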

LOL: The Levels of Learning Model (Kaufman)

Roger Kaufman is recognized as an important figure in the fields of instructional design, performance management and strategic planning. His evaluation model doesn’t feel very different from the Kirkpatrick approach. What he did was take Kirkpatrick’s model and reconfigure it into a five-level model as follows:

  • Level 1 ~ Input: The resources available for the learning/training and their quality.
  • Level 2 ~ Process: What is the best way to deliver the learning/training content?
  • Level 3 ~ Micro: Did the individual (or small group) learn the content and apply it?
  • Level 4 ~ Macro: Did the learning program have the intended impact on the organization?
  • Level 5 ~ Mega: How did the learning program impact a wider group, such as a company’s customers/clients or society as a whole?

LTEM: The Learning-Transfer Evaluation Model (Thalheimer)

Will Thalheimer has a great website called Work-Learning Research, Inc., where his work in the learning-and-performance field has a home. Among these eLearning evaluation alternatives, LTEM is a rethinking of the Kirkpatrick Model, though Thalheimer is emphatic that it is a replacement rather than an extension: “LTEM is NOT an expansion of the Four-Level Model. It is intended to replace the Four-Level Model.” He felt evaluation needed a more robust approach, one based on sound research and a more granular treatment of learning outcomes. His model progresses through eight levels of evaluation, starting with the simplest and most basic and finishing with learning outcomes that include the application of learning on the job:

  1. Attendance: This is your basic signing up, attending and completing a learning experience – which doesn’t necessarily indicate any real learning took place.
  2. Activity: You can track any number of metrics around participation, engagement, interest and attention, and they have their place. But once again, none of them can be taken to mean that any real learning has occurred.
  3. Learner Perceptions: This is where you can create and track various metrics of learner feedback about the course itself, as in whether or not they enjoyed it, found it useful, etc. But even these are not sufficient indicators of true learning success. Some of the metrics here, such as intention to apply what was learned, could be taken as potential precursors indicating learning took place, but need to be supplemented with other measurements.
  4. Knowledge: This is where you evaluate how well the learner knows the content, both right after the learning experience and again several days or more afterward. But knowing something and applying it in real-world situations can be two very different things.
  5. Decision-Making Competence: Here is where you evaluate not only whether the participant can make decisions based on the knowledge or skills learned soon after the learning, but whether they can remember how to do so later on.
  6. Task Competence: In this level, you’re evaluating both decision-making and taking action on the decision, and the ability to do so not just right after learning, but later on as well. The participant has to display that they can decide the time is right to use their knowledge and skills and then execute the task that uses them. This is all in a testing environment so far.
  7. Transfer: It is only here in level 7 that the learning rubber really meets the road, because this is the level where you evaluate whether or how well they are applying the learning in their real-world work environment.
  8. Effects of Transfer: This final level is where you expand the evaluation out to include the impacts of the learning not just in terms of the individual worker’s job performance, but how that impacts others, the company as a whole, and outward from there if needed.

Notice that it is only by going into levels 4-8 that you can really verify true learning has taken place. And if you want to verify it’s having a real impact on your company, you’ve got to take it all the way to level 8. You can learn more about LTEM at Work-Learning Research.
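
If you track learning data in an LMS or a spreadsheet, it can help to tag each metric with the highest LTEM level it supports. Here’s a small Python sketch of that idea – the level names are my own shorthand rather than Thalheimer’s official labels, and the helper functions simply restate the point above:

    from enum import IntEnum

    # The eight LTEM tiers described above.
    class LTEMLevel(IntEnum):
        ATTENDANCE = 1
        ACTIVITY = 2
        LEARNER_PERCEPTIONS = 3
        KNOWLEDGE = 4
        DECISION_MAKING_COMPETENCE = 5
        TASK_COMPETENCE = 6
        TRANSFER = 7
        EFFECTS_OF_TRANSFER = 8

    def verifies_learning(level: LTEMLevel) -> bool:
        """Levels 1-3 track participation and opinion; only levels 4+ verify learning."""
        return level >= LTEMLevel.KNOWLEDGE

    def verifies_business_impact(level: LTEMLevel) -> bool:
        """Only level 8 ties the learning to impact beyond the individual worker."""
        return level == LTEMLevel.EFFECTS_OF_TRANSFER

    # Tag each measurement you collect with the strongest claim it can support:
    measurements = {
        "course completions": LTEMLevel.ATTENDANCE,
        "smile-sheet ratings": LTEMLevel.LEARNER_PERCEPTIONS,
        "delayed post-test (two weeks out)": LTEMLevel.KNOWLEDGE,
        "manager-observed on-the-job use": LTEMLevel.TRANSFER,
    }
    for name, level in measurements.items():
        print(f"{name}: level {level.value}, verifies learning? {verifies_learning(level)}")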

SCM: The Success Case Method (Brinkerhoff)

Robert Brinkerhoff is a well-known expert in the fields of evaluation and learning effectiveness. He created SCM to fill what he saw as a gap in learning evaluation – documenting results in a performance management context. Rather than focusing myopically on the training itself, SCM pays attention to how the training is used and how it impacts performance results. It doesn’t get too far into the weeds of hard statistical proof, focusing instead on gathering compelling evidence. After all, everyone just wants better performance, not necessarily a huge record of stats proving it. SCM is also fast and simple, which makes it very appealing among these eLearning evaluation alternatives. From the big-picture view, here are the questions important to SCM, which is a more qualitative, case-study, storytelling approach to evaluation:

  • How well is the organization using learning to drive needed performance improvement?
  • What is the organization doing to facilitate performance improvement from learning? What needs to be maintained and strengthened?
  • What is the organization doing (or not doing) that impedes performance improvement from learning? What needs to change?
  • What groups/individuals have been successful (or unsuccessful) in applying a learning opportunity to achieve a business result? Why have they been successful (or unsuccessful)?

Applying SCM is a 5-step process that typically looks like the following (a quick code sketch of steps 2 and 3 follows the list):

  1. Impact Model: Identify the relevant business goals, related performance needs, and the learning objectives. In other words, define what success would look like for this learning/training intervention.
  2. Survey Participants: The idea here is to do a brief survey with as many participants as possible to identify how well everyone did in terms of applying the learning and achieving performance expectations.
  3. Analyze the Data: Whittle down what you have to a small group of the most successful participants and a small group of the most unsuccessful participants.
  4. Dig Deeper: Here you go deeper into both groups by conducting interviews (less than an hour; some suggest 20-30 minutes is enough) to find high-quality corroborating evidence (the kind that would stand up in court) of why and how their learning experience and application went so well (or not so well). You’re identifying the factors that supported success or the barriers that stood in the way.
  5. Tell the Story: Write up a compelling narrative that tells the story of the impacts the learning/training intervention achieved, as well as areas for improvement to get even better results next time.
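
To make steps 2 and 3 concrete, here’s a toy Python sketch that turns survey results into the two interview shortlists. The impact-score field and the cut-off fraction are illustrative assumptions on my part, not prescriptions from Brinkerhoff:

    # SCM steps 2 and 3 in miniature: survey scores in, interview shortlists out.
    def select_success_cases(survey, fraction=0.10):
        """Return (most successful, least successful) participants to interview.

        `survey` is a list of (participant, impact_score) pairs, where a higher
        score means more successful application of the learning.
        """
        ranked = sorted(survey, key=lambda pair: pair[1], reverse=True)
        n = max(1, int(len(ranked) * fraction))
        return ranked[:n], ranked[-n:]

    survey = [("Avery", 9), ("Blake", 3), ("Casey", 7), ("Devon", 2),
              ("Emery", 8), ("Finley", 5), ("Gray", 1), ("Harper", 6)]
    successes, non_successes = select_success_cases(survey, fraction=0.25)
    print("Interview for success stories:", successes)   # highest scorers
    print("Interview for barriers:", non_successes)      # lowest scorers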

You can learn more about SCM in this article by Brinkerhoff.

If you’ve ever wished for eLearning evaluation alternatives that are different from the well-worn Kirkpatrick Model, the four I’ve covered in this article are a good starting point. There are others, so keep an eye out for a future article with more options.