Some people claim that multiple choice questions are a bad idea. But just because a tool can be misused doesn’t make the tool itself bad. Multiple choice assessments can be very valuable if done right, and in any case they happen to be the standard in nearly every form of testing and surveying.
So let’s make them correctly, shall we? Here’s a quick summary of my 15+ years of experience on how to do that:
- Have 4-5 answer options. No fewer, no more (and NO True/False).
- Link every question to a specific learning objective or behavioral outcome. Know exactly what you are measuring, why, and how.
- Expect the question and correct answer to take 20% of your design time, and writing effective distractors that actually test what you want to measure to take the other 80%. Don’t skimp on this; it will show.
- “All of the above” and “None of the above” are a sign that you’re cutting corners. Take a break, come back, do better. You don’t need to do this in one sitting.
- Be clear and brief. If you can subtract a word and still make sense, do it. The fewer the words, the more you are measuring your actual objective instead of the learner’s reading comprehension. SMEs will tend to add language, but you are the expert, so you get to subtract it.
- Randomize both the questions and the answers whenever appropriate, and draw questions from a larger bank (or multiple question banks, if you want to get fancy). Make it obviously more work to cheat than to learn, and people won’t try to cheat. And if they do, they’ll end up learning the content even better as a byproduct, and at that point, who cares?
- Have a simple passing standard and communicate it clearly. If it’s an 80% pass/fail threshold, give the learner a multiple of 5 questions (10, 15, 20…). If it’s a 75% threshold, use a multiple of 4 (8, 12, 16…). Tell learners upfront how many questions they can miss and still pass, and you reduce both their anxiety and your support needs.
- Don’t repeat yourself. If you have knowledge checks, quizzes, pre-assessments, or other not-the-final-exam-yet kinds of evaluations, don’t use the same (or very similar sounding) questions in the graded exam. It is disorienting and unfair to your learner.
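If your assessment tool lets you script these rules, the randomization and pass-threshold bullets above can be sketched in a few lines. This is only an illustration: the question bank structure and function names here are made up for the example, not from any particular LMS.

```python
import math
import random

# Hypothetical question bank; in practice this would come from your
# authoring tool or LMS, not be hard-coded like this.
BANK = [
    {"stem": f"Question {i}", "options": ["A", "B", "C", "D"], "answer": "A"}
    for i in range(1, 41)
]

def draw_quiz(bank, n, seed=None):
    """Draw n random questions and shuffle each question's answer options."""
    rng = random.Random(seed)
    quiz = []
    for q in rng.sample(bank, n):  # randomize which questions appear
        # Copy the question so the bank itself is left untouched,
        # and shuffle the answer order for this learner.
        quiz.append({**q, "options": rng.sample(q["options"], len(q["options"]))})
    return quiz

def allowed_misses(total_questions, pass_threshold):
    """How many questions a learner can miss and still pass."""
    needed_correct = math.ceil(pass_threshold * total_questions)
    return total_questions - needed_correct

# An 80% threshold on 10 questions lets the learner miss 2;
# a 75% threshold on 12 questions lets them miss 3.
print(allowed_misses(10, 0.80))  # 2
print(allowed_misses(12, 0.75))  # 3
```

Note how choosing question counts that divide evenly into the threshold (multiples of 5 for 80%, multiples of 4 for 75%) makes the "you can miss N" message trivial to state upfront.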
Additionally, I’d recommend that you always write all assessments first. Only once these measurements have been approved by stakeholders and vetted by SMEs should you bother creating the rest of the content that supports them. Yes, it feels a little weird to people, but in practice it’s just easier that way. It focuses the content on business outcomes and prevents scope creep in your project.
If anyone bugs you about teaching to the test, remind them that valuable supplemental resources can be made available to learners through other means, separate from any assessment components. This makes it easy for learners to differentiate “nice to know” from “need to know”: if it’s needed, it’s already on the test.
Now, it’s time for a multiple choice question!
This article was:
- A) Extremely valuable. I can implement it immediately.
- B) Good, but a bit long. I only skimmed it, really.
- C) Good, but too short. I’d like more explanation & examples.
- D) Not very helpful to me. I was expecting something different.
- E) Bad advice. I totally disagree and wasted my time reading this.
Please submit your answer below: