In this series of articles about how to evaluate eLearning efforts, I’m making use of the classic Kirkpatrick Four-Levels Model originally developed for training evaluation because it’s also the perfect framework to use in the eLearning environment. The previous two articles dealt with Level 1 evaluation of the audience reaction to the eLearning course or module. Now it’s time to review the essentials of Level 2 Evaluation, which involves assessing the level of learning that took place.

Level 2 Evaluation for eLearning: Learning

The good news is that you don’t have to be an educational expert to design effective learning assessments to find out how much knowledge your audience has gained by participating in your eLearning course or module. Here are three best practices to keep in mind when designing learning assessments:


Alignment with Learning Objectives

Any design for an eLearning course or module would be a largely wasted effort without first clearly establishing what the learning objective is – what it is that you want your audience to learn. These objectives are best developed under a SMART framework. Most of you are probably familiar with that in terms of goal-setting, but it should be applied to learning objectives as well. If your objectives are Specific, Measurable, Achievable, Relevant, and Timely, then designing a learning assessment tool will be that much easier.

Variety in Forms of Assessment

There are many different ways to assess the knowledge your audience has gained. The Eberly Center for Teaching Excellence and Educational Innovation at Carnegie Mellon University has developed a listing of different kinds of assessment tools derived from Bloom’s Taxonomy. The table below is by no means exhaustive, but it represents a more than adequate starting point for achieving variety in assessment tools:

Type of learning objective: Recall, Recognize, Identify
Examples of appropriate assessments: Objective test items such as fill-in-the-blank, matching, labeling, or multiple-choice questions that require students to:
  • recall or recognize terms, facts, and concepts

Type of learning objective: Interpret, Exemplify, Classify, Summarize, Infer, Compare, Explain
Examples of appropriate assessments: Activities such as papers, exams, problem sets, class discussions, or concept maps that require students to:
  • summarize readings, films, or speeches
  • compare and contrast two or more theories, events, or processes
  • classify or categorize cases, elements, or events using established criteria
  • paraphrase documents or speeches
  • find or identify examples or illustrations of a concept or principle

Type of learning objective: Apply, Execute, Implement
Examples of appropriate assessments: Activities such as problem sets, performances, labs, prototyping, or simulations that require students to:
  • use procedures to solve or complete familiar or unfamiliar tasks
  • determine which procedure(s) are most appropriate for a given task

Type of learning objective: Analyze, Differentiate, Organize, Attribute
Examples of appropriate assessments: Activities such as case studies, critiques, labs, papers, projects, debates, or concept maps that require students to:
  • discriminate or select relevant and irrelevant parts
  • determine how elements function together
  • determine bias, values, or underlying intent in presented material

Type of learning objective: Evaluate, Check, Critique, Assess
Examples of appropriate assessments: Activities such as journals, diaries, critiques, problem sets, product reviews, or studies that require students to:
  • test, monitor, judge, or critique readings, performances, or products against established criteria or standards

Type of learning objective: Create, Generate, Plan, Produce, Design
Examples of appropriate assessments: Activities such as research projects, musical compositions, performances, essays, business plans, website designs, or set designs that require students to:
  • make, build, design, or generate something new
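In an eLearning environment, the objective test items in the first row (recall, recognize, identify) are typically delivered as auto-scored quizzes. As a minimal sketch of how that scoring works – the questions and answer key below are invented for illustration, not drawn from any real course:

```python
# Hypothetical multiple-choice quiz for a recall-level objective.
# Question text and answer key are illustrative only.
QUIZ = [
    {"question": "Which Kirkpatrick level assesses learning?",
     "choices": ["Level 1", "Level 2", "Level 3", "Level 4"],
     "answer": "Level 2"},
    {"question": "Level 1 evaluation measures audience ____.",
     "choices": ["reaction", "behavior", "results"],
     "answer": "reaction"},
]

def grade(responses):
    """Return the fraction of questions answered correctly."""
    correct = sum(1 for item, resp in zip(QUIZ, responses)
                  if resp == item["answer"])
    return correct / len(QUIZ)

print(grade(["Level 2", "reaction"]))  # 1.0
```

The point of the sketch is simply that recall-level items have a single right answer, which is what makes them easy to score automatically; the higher rows of the table (analyze, evaluate, create) need human judgment – which is where rubrics come in.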

Create and Use Rubrics

Anytime you have an assignment or piece of work that will feed into your assessment of learning, it’s important to be crystal clear about what will fulfill your expectations for the assignment. A rubric is essentially a scoring tool that lays out what the participant needs to do and describes various levels of quality for that work. A good rubric would include the following:

  • The criteria you will use to assess each aspect of performance.
  • The descriptors that spell out the characteristics you want to see in each aspect of performance.
  • The levels of performance that make up a rating scale for the level of mastery within each criterion.
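The three components above map naturally onto a simple data structure. As a minimal sketch – the criteria, descriptors, and level names here are hypothetical, invented for a discussion-participation rubric, not taken from any published example:

```python
# Hypothetical rubric for online discussion participation.
# Criteria, descriptors, and level names are illustrative only.
RUBRIC = {
    "criteria": {  # criterion -> descriptor of desired performance
        "frequency": "Posts and replies at the expected cadence",
        "relevance": "Contributions relate directly to the topic",
        "depth":     "Posts show analysis, not just agreement",
    },
    # Rating scale shared by every criterion (levels of performance).
    "levels": {1: "Beginning", 2: "Developing",
               3: "Accomplished", 4: "Exemplary"},
}

def score_participant(ratings):
    """Average per-criterion ratings into an overall rubric score."""
    missing = set(RUBRIC["criteria"]) - set(ratings)
    if missing:
        raise ValueError(f"Unrated criteria: {sorted(missing)}")
    return sum(ratings.values()) / len(ratings)

print(score_participant({"frequency": 3, "relevance": 4, "depth": 2}))  # 3.0
```

Whether you average the criteria or weight some more heavily than others is a design choice; the essential feature is that every criterion is rated against the same explicit scale, so participants know in advance what each level of quality looks like.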

For example, if you wanted to create a rubric to assess participation in the online discussion aspect of an eLearning course or module, you might take a look at a solid example such as this one from the University of Wisconsin-Stout.

There’s a lot more that could be said about how to assess learning, but the three best practices listed above are a great starting point. The next article in this series will be about Kirkpatrick’s Level 3 Evaluation: Behavior.