
Putting the Smile Back in Learning and Development Program Measurement

By Andy Tanner - June 24, 2017

As learning and development consultants, we all desire more from our measurement data, but continue to be underwhelmed by the results. Why is that? Is it the type of data we are capturing? Or the way we are capturing it? Possibly the way we are analyzing it? Or even our inability to effectively apply the results?

I am going to venture a guess and say the answer is all of the above. If you agree, partially agree, or are just intrigued by the possibility, please keep reading. This is the first in a series that will share my perspective on the ineffectiveness of outcomes measurement in the training and professional development field, along with a new approach to address these challenges. Let’s begin by exploring the type of data we are capturing.

Leadership development assessments are the default mechanism for capturing the effectiveness of our professional development training programs, but they typically fall well short of that goal. The somewhat ironically named “Smile Sheet” has become a scapegoat for all of us tasked with measuring learning effectiveness and/or the ROI of learning and development programs for our stakeholders. But can we really blame the smile sheet?

I think we would all agree that learner feedback is an important component in assessing learning effectiveness. Most of us would also acknowledge that we have done little to improve course evaluations or to add other forms of measurement. One of my clients provided a fitting anecdote from a curriculum review he is conducting.

During the review, his team identified a course that seemed to miss the mark relative to the primary learning objectives of the program, yet received very high marks in course evaluations. With a little more investigation, they discovered the course had outstanding facilitators and a highly engaging format, and learners loved attending. The problem? The measurement data never assessed learning objectives or skill attainment.

So what data should be captured? Effective learning measurement requires three primary data sets:

  1. Learner Evaluation
  2. Learner Consumption
  3. Learner Performance

Learner evaluation is already at the core of our programs – it’s easy to collect, enables consistency across courses and programs, and provides a direct line to our most important asset, the learner. The key is to go beyond satisfaction and evaluate effectiveness against the learning objectives.

Learner consumption data provides a great view into the historical learning and development training footprint and is readily accessible (typically in the LMS), but is rarely analyzed. The introduction of xAPI will make this data set extremely powerful for learning measurement.
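To make the xAPI point concrete: an xAPI statement records a learning experience as an actor–verb–object triple, which is what lets consumption data from many systems land in one queryable store. Below is a minimal sketch in Python of building such a statement; the learner name, email, course URI, and score are hypothetical examples, though the verb URI comes from the standard ADL vocabulary.

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
# The learner identity, course URI, and score below are hypothetical.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/courses/leading-teams-101",
        "definition": {"name": {"en-US": "Leading Teams 101"}},
    },
    "result": {
        "score": {"scaled": 0.85},  # normalized assessment score, 0.0-1.0
        "completion": True,
    },
}

# Serialize for sending to an LRS (Learning Record Store).
payload = json.dumps(statement)
```

In practice a statement like this would be posted to a Learning Record Store rather than printed, but even this small structure shows why xAPI consumption data is richer than a course-completion flag: it can carry who, what, how well, and whether the activity was completed.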

Finally, learner performance data can be acquired through assessments, knowledge checks, and experiential learning decisions. This provides an objective view into a learner’s technical skills as well as their behavioral capabilities.

Advanced approaches go beyond the learning and development program and assess performance in the field. Designing a measurement strategy that not only includes these data sets, but aligns them with your learning framework, is the foundation for effective measurement.

Stay tuned for my next post, which will cover leading practices for operationalizing the collection and analysis of these data sets.
