View from the Edge

Verifying Driver Knowledge - Why Perfect Scores Don't Always Help

In this issue, CarriersEdge co-founder Mark Murrell discusses the problems that arise when fleets require perfect scores from drivers.

A key component of any fleet risk management program is comprehensive driver training. Ensuring drivers have the requisite knowledge and skills before heading out on the road is a critical step in mitigating risk, and demonstrating the thoroughness of those programs is often immensely valuable in court as well.

One step fleets often take in this regard is to require drivers to receive perfect scores on any knowledge tests that accompany that training. The thinking is that by requiring this level of achievement, the company will demonstrate sufficient due diligence and avoid negligence claims.

However, a deeper analysis of this "Mastery" or "Correct-to-100" approach shows not only that it doesn't really help in court, but that it can actually increase risk in several ways. Since there are many reasons why drivers might have poor test results, and just as many reasons why they might do well, understanding what's happening underneath those results is often more important than the results themselves. Here's why.

Defensibility

The main argument in support of this type of testing is legal defence. It's based on the assumption that if you ensure everyone gets 100% in the training, then you've ensured that everyone knows everything they should, and you're on solid legal ground. On the other hand, if you let drivers out on the road with less-than-perfect scores, then you're demonstrating negligence and may have legal exposure. That argument doesn't hold up.

First, perfection isn't the baseline elsewhere in the job, or elsewhere in society for that matter. The federal regulations - 49 CFR 383.135 - specify an 80% passing grade for CDL knowledge tests. On top of that, vocational schools regularly have a 75% passing grade for graduation, law students don't need 100% to pass the bar, and even medical schools don't require 100% grades before letting someone perform surgery. As such, it's not reasonable to expect a driver to know 100% of everything, all the time.

Second, forcing people to keep trying until they get everything right doesn't prove they know the content, it just shows that they (eventually) stumbled onto the right answer. Any moderately competent plaintiff's lawyer will pick up on that and shoot holes in the "mastery" argument pretty fast.

Third, endlessly retrying a question doesn't actually teach people anything. If someone gets a question wrong, it could be because they don't understand the related content. It could also be that they misunderstood the question, that the question is poorly worded, or that the answer options are confusing. Forcing people to keep trying until they stumble onto the right answer doesn't address any of those issues. In fact, it can make the problem worse by deepening their confusion.

User Experience

Another consideration, in the bigger picture, is the participant experience. If they get a question wrong, and have to keep trying options until they find the right one, what's their learning experience like? Is that try-till-you-get-it-right process actually helping them learn the content? Or are they just finding, by process of elimination, the right answer to that particular question?

And how much of that are they even going to remember a week or a month later? Best case scenario, they remember the specific answer to a specific question. But without truly understanding the content, they won't have the context to apply that information in any real-world situation.

What they will remember, though, is the miserable experience they had with a course that was a pain to finish. Not exactly the kind of thing that makes people eager to continue learning!

The net result is that the overall effectiveness of the training is limited, which does nothing to improve the fleet's risk profile, and the overall ROI is reduced as well.

Actionable Intel

The final point is that this kind of testing doesn't provide meaningful insights for management. Online systems that use this model rarely (if ever) track how many times someone tried a question before getting it right, or offer details on which questions are answered wrong most often. They just show that someone finished the module (which means they eventually figured out the answers) at a particular date and time. Applying this approach with paper-based testing isn't much better, since there's little insight into why someone got a question wrong.

Remember that testing is meant to validate that learning objectives have been met. If someone answers questions wrong, then they haven't assimilated enough of the content to fulfill those objectives. Trying the questions over and over until they find the right answer doesn't change that.

For a training manager, there's a lot to learn from looking into test data. Much like the value that comes from watching dashcam footage, being able to see which questions are answered incorrectly is hugely valuable for planning follow-up activities, developing a clearer picture of the participant's overall aptitude, and managing the training process as a whole.

Plus, as noted above, participants could be getting questions wrong because the questions themselves are bad. It may not be their fault at all - the questions could be worded poorly, or perhaps they don't accurately reflect the content presented in the course. If all you get is a confirmation that someone made it to the end then you don't have those insights and won't be able to address the issues.

A Better Option

If all of those things make mastery testing counterproductive for risk management, and potentially of no help in court, then what should fleets do?

A better course of action, and one employed by successful fleets of all sizes, is a learning reinforcement plan that pairs a reasonable passing score for tests with instructor follow-up to close any outstanding gaps. The instructor can review the participant results, discuss any errors and clear up misunderstandings, and provide another level of reinforcement. Having the opportunity to discuss the content also allows drivers to think about it in a different way, further improving the experience.

Plus, if there are issues with test question wording, or mismatches with the course content, those will come out very quickly during those reviews.

The result is more thorough development of driver knowledge and skills, an improved risk profile, and a comprehensive program that tends to stand up very well in court!

View from the Edge is a bi-monthly review of best practices in risk management, driver development, and technology for the trucking industry, produced by CarriersEdge.

CarriersEdge is a leading provider of interactive online training tools for the North American trucking industry. With a comprehensive library of safety courses for drivers, extensive customization tools, extremely awesome reporting, and the first mobile app for driver training, CarriersEdge helps fleets become more amazing every day.
