When you hear the words “data-driven” and “advanced analytics,” do they excite you? If you’re in the business world, and specifically in the Learning and Development or Talent Development domains, chances are that they do. It seems that everyone wants to jump on the analytics bandwagon because of the possibilities analytics opens up, from using learner data to improve and refine learning and talent development programs to the business “Holy Grail” of demonstrating impact on ROI. However, analytics can only lead to effective insights when the foundation of data used in the analysis is sound, so it is critical to use assessments that will provide the necessary data.
But even though program owners acknowledge this, they are often reluctant to add any assessments to their already-packed programs. On the one hand, companies worry that their employees already suffer from assessment fatigue caused by the evaluations built into their current programs; on the other hand, these same companies are dissatisfied with the limited data-driven insights those numerous existing evaluations actually deliver.
This leads me to ask, “If there are already so many assessments in place, why aren’t they delivering the data value that clients desire? Shouldn’t all this data enable them to determine whether their learners have actually learned and where gaps in learning remain? And shouldn’t it make it possible to examine group learning trends and determine the appropriate course of action based on the story the data is telling?”
In some cases, the type of data collected isn’t to blame; many companies simply underutilize the data they already collect. In many other cases, though, the issue does lie with the type of data being collected. What’s easy to collect (such as “smile sheet” ratings and training program completion records) does provide data, but those measures alone will not give you the level of information you need to answer your questions about participant learning.
To measure actual learning, assessments need to go beyond counting “butts in seats” and collecting smile sheets, which only capture participants’ impressions of the program and their naturally biased opinions of whether it was effective.
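For readers who like to see this distinction concretely, here is a minimal sketch using entirely hypothetical data and field names. It shows what completion and satisfaction data can answer (participation and sentiment) versus what pre- and post-assessment scores by topic can answer (learning gains and remaining gaps). It is an illustration under those assumptions, not a prescribed analysis.

```python
# Minimal sketch with hypothetical learner records: what "easy to collect" data
# can tell you versus what pre/post assessment scores can tell you.
from statistics import mean

# Hypothetical records (illustrative only).
records = [
    {"learner": "A", "completed": True,  "satisfaction": 5,
     "pre": {"coaching": 40, "feedback": 55}, "post": {"coaching": 85, "feedback": 60}},
    {"learner": "B", "completed": True,  "satisfaction": 4,
     "pre": {"coaching": 50, "feedback": 45}, "post": {"coaching": 90, "feedback": 50}},
    {"learner": "C", "completed": False, "satisfaction": 3,
     "pre": {"coaching": 35, "feedback": 50}, "post": {"coaching": 70, "feedback": 55}},
]

# Completion and smile-sheet data answer: who finished, and did they like it?
completion_rate = mean(1 if r["completed"] else 0 for r in records)
avg_satisfaction = mean(r["satisfaction"] for r in records)
print(f"Completion rate: {completion_rate:.0%}, average satisfaction: {avg_satisfaction:.1f}/5")

# Pre/post assessment data answer: did they learn, and where do gaps remain?
for topic in records[0]["pre"]:
    gain = mean(r["post"][topic] - r["pre"][topic] for r in records)
    post_avg = mean(r["post"][topic] for r in records)
    flag = "  <- possible gap" if post_avg < 70 else ""
    print(f"{topic}: average gain {gain:+.1f} points, average post score {post_avg:.1f}{flag}")
```

Even in this toy example, the completion rate and satisfaction average say nothing about what was learned, while the pre/post comparison immediately surfaces a topic where scores barely moved.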
My next post will discuss what kinds of assessments are needed to measure the effectiveness of learning programs and advance your insights.