
Some thoughts on knowledge testing

Mark Murrell

I've been getting some questions recently about how we structure the knowledge testing within CarriersEdge. In our system, people take courses, and at the end of each course they get a test. They have two tries at the test, and if they fail the second attempt they fail the course and have to go back through the content again. Different tests have different passing scores, but generally it's either 80% or 85% to pass.
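
For the technically inclined, that workflow is simple enough to sketch in a few lines of Python. To be clear, this is just an illustration of the logic described above; the names and numbers are hypothetical, not our actual code.

```python
# Illustrative sketch of the two-attempt workflow described above.
# Names, values, and structure are hypothetical, not CarriersEdge code.

PASSING_SCORE = 0.80   # some tests use 0.85
MAX_ATTEMPTS = 2

def grade_course(attempt_scores):
    """Return the course outcome for a participant's test attempts."""
    for attempt, score in enumerate(attempt_scores[:MAX_ATTEMPTS], start=1):
        if score >= PASSING_SCORE:
            return f"passed on attempt {attempt}"
    # Two failed attempts: the participant repeats the course content
    return "failed - review the content and retake the course"

print(grade_course([0.75, 0.85]))  # passed on attempt 2
print(grade_course([0.70, 0.75]))  # failed - review the content...
```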

We set it up that way to strike a balance between providing a good experience for participants and validating that sufficient learning has taken place. We know that someone might fail a test by one point, or miss a couple of questions because they misread them, and we don't want to punish them for that. We've found that in most cases people pass the test the second time; when they don't, there's something they're missing, so reviewing the content makes sense at that point.

That workflow has been in place ever since we first built this system 10+ years ago, and we still believe it balances diligence and reasonableness for both drivers and company administrators.

In the past few months, however, some people have requested a different testing workflow, something variously referred to as "mastery", "correct-to-100", or "all or nothing". With that model, participants have to keep retrying incorrect questions over and over until they get every one of them right. As a result, everyone ends up with 100% at the end, but with different paths to get there.
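
In code terms, the mastery loop looks something like this. Again, this is just an illustrative sketch with hypothetical names, not any vendor's actual implementation:

```python
# Illustrative sketch of the 'correct-to-100' model: every missed question
# goes back into the queue until it's answered correctly, so everyone
# finishes with 100% - only the number of attempts differs.

def run_mastery_test(questions, answer_correctly):
    """Re-ask each question until it's answered correctly.

    answer_correctly(question) -> bool is whatever checks the response.
    Returns a dict of attempt counts per question.
    """
    attempts = {question: 0 for question in questions}
    queue = list(questions)
    while queue:
        question = queue.pop(0)
        attempts[question] += 1
        if not answer_correctly(question):
            queue.append(question)  # retry later, as many times as it takes
    return attempts  # the 'score' is always 100% by construction
```

Notice that the only exit condition is getting everything right, and those per-question attempt counts typically get thrown away at the end.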

I'm not a fan of this testing model AT ALL. People are asking about it, though, so I thought I'd document some of my reasons for disliking it here, and maybe start a discussion.

Defensibility

The main argument I've heard in support of this model is legal defence. It's based on the assumption that if you ensure everyone gets 100% in the training then you've ensured that everyone knows everything they should and you're on solid legal ground. On the other hand, if you let drivers out on the road with less than perfect scores then you're demonstrating negligence and may have legal exposure. That doesn't make sense to me.

First, perfection isn't the baseline elsewhere in the job, or elsewhere in society for that matter. The federal regulations - 49 CFR 383.135 - specify an 80% passing grade for CDL knowledge and skill tests. On top of that, vocational schools regularly have a 75% passing grade for graduation, law students don't need 100% to pass the bar, and even medical schools don't require 100% grades before letting you cut someone open. As such, it doesn't seem reasonable that a driver be expected to know 100% of everything, all the time.

Second, forcing people to keep trying until they get everything right doesn't confirm they know the content, it just shows that they (eventually) stumbled onto the right answer. It seems like any plaintiff's lawyer with half a brain would get that and pick holes in the 'mastery' argument pretty fast.

Third, endlessly retrying a question doesn't actually teach people anything. If someone gets a question wrong, it could be because they don't understand the related content. It could also be that they misunderstood the question, the question is poorly worded, or that the answer options are confusing. Forcing people to keep trying until they stumble on the right answer doesn't address any of those issues. In fact, it can make the problem worse by deepening their confusion.

User Experience

Something else to consider here is the experience the participant has. If they get a question wrong, and have to keep clicking options until they find the right one, what's their learning experience like? Is that try-till-you-get-it-right process actually helping them learn the content? Or are they just finding, by process of elimination, the right answer to that particular question?

And how much of that are they even going to remember a week or a month later? Best case scenario, they remember the specific answer to a specific question. But without truly understanding the content, they won't have the context to apply that information in any real-world situation.

What they will remember, though, is the miserable experience they had with a course that was a pain to finish. Not exactly the kind of thing that makes people eager to come back for more!

That kind of experience is going to limit the effectiveness of the training and provide weaker ROI over the long term.

Actionable Intel

The final point is that this kind of testing doesn't provide meaningful insights for management. I've never seen an 'all or nothing' system that tells you how many times someone tried a question before getting it right, or gives you any insight into which questions are answered wrong most often. It just tells you that someone finished the module (meaning they eventually figured out the answers) on a particular day, at a particular time.

Remember that testing is meant to validate that learning objectives have been met. If someone answers questions wrong, then they haven't assimilated enough of the content to fulfill those objectives. Trying the questions over and over until they find the right answer doesn't change that.

For a training manager, there's a lot to learn from digging into test data. Being able to see which questions people get wrong is hugely valuable for planning follow-up activities, developing a clearer picture of each participant's overall aptitude, and managing the training process as a whole.
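
As a rough illustration, even a simple tally over an attempt log surfaces the problem questions immediately. The data and format here are made up for the example:

```python
from collections import Counter

# Hypothetical attempt log: (participant, question_id, answered_correctly)
attempt_log = [
    ("driver_a", "q3", False),
    ("driver_b", "q3", False),
    ("driver_a", "q7", True),
    ("driver_c", "q3", False),
    ("driver_b", "q7", False),
]

# Count how often each question was missed
misses = Counter(q for _, q, correct in attempt_log if not correct)
for question, count in misses.most_common():
    print(f"{question}: missed {count} times")
# q3 jumps out - time to review that question, or the content behind it
```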

Plus, as noted above, participants could be getting questions wrong because the questions themselves are bad. It may not be their fault at all - the questions could be worded poorly, or perhaps they don't accurately reflect the content presented in the course. If all you get is a confirmation that someone made it to the end then you don't have those insights and won't be able to address the issues.

A Better Option?

So, now that I've ranted about what I don't like, what do I suggest instead?

The best course of action, I think, is a learning reinforcement plan that pairs a reasonable passing score with instructor follow-up to review and close any outstanding gaps. The instructor can review the participant results, discuss any errors and clear up misunderstandings, and provide another level of reinforcement. Having the opportunity to discuss the content also allows the participant to think about it in a different way, which further aids the learning.

Plus, if there are issues with test question wording, or mismatches with the course content, those will come out very quickly during those reviews.

The result is higher quality training, better trained drivers, and more engagement among participants overall.