Techniques to reduce mistakes by respondents

What can a firm do to improve the likelihood that respondents answer multiple-choice questions correctly? The substance of their answers is known only to them, but several of the methodological trip-ups have solutions. To address the question, we revisit the failure points one by one.

Reverse the scale. One way to catch a reversal is to ask a second question that confirms the first answer. If the first question asks for a “1” to indicate “wholly ineffective” on up to a “10” to indicate “highly effective,” a later question might present the same choices and ask the respondent to pick the most effective one. If that pick did not receive a high number (8, 9, or 10, most likely) on the first question, you have spotted a potential scale reversal. If you decide to correct it, you can manually revise that respondent’s ratings on the first question. A second safeguard uses different terms for the two poles, although at some cost to consistency and clarity. Thus, the scale might run from a “1” for “wholly ineffective” up to a “10” for “highly productive.” Respondents are more likely to notice the change in wording and get the scale right.
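As an illustration, here is a minimal sketch of the confirmatory-question check described above, assuming the responses sit in a pandas DataFrame. The column names (rating_fixed_fees, most_effective, and so on) and the threshold of 8 are hypothetical, not prescribed by any survey platform.

```python
import pandas as pd

# Hypothetical data: each respondent rated three cost-control actions on a
# 1-10 effectiveness scale, then answered a confirmatory question naming
# the single most effective action.
responses = pd.DataFrame({
    "respondent": [101, 102, 103],
    "rating_fixed_fees": [9, 2, 8],
    "rating_budgets": [6, 7, 5],
    "rating_rfps": [4, 5, 3],
    "most_effective": ["fixed_fees", "fixed_fees", "budgets"],
})

THRESHOLD = 8  # a "most effective" pick would normally rate at least this high

def flag_possible_reversals(df, threshold=THRESHOLD):
    """Return (respondent, pick, rating) where the pick got a suspiciously low rating."""
    flagged = []
    for _, row in df.iterrows():
        rating = row["rating_" + row["most_effective"]]
        if rating < threshold:
            flagged.append((row["respondent"], row["most_effective"], rating))
    return flagged

print(flag_possible_reversals(responses))
# -> [(102, 'fixed_fees', 2)]  respondent 102 may have reversed the scale

# If a reversal is confirmed, flip that respondent's 1-10 ratings: r becomes 11 - r.
rating_cols = [c for c in responses.columns if c.startswith("rating_")]
mask = responses["respondent"] == 102
responses.loc[mask, rating_cols] = 11 - responses.loc[mask, rating_cols]
```

The flip (11 minus the rating) only makes sense once a human has confirmed the reversal with the respondent; the code merely surfaces the candidates.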

Misread the question. Sometimes you can repeat the key word or phrase next to the answer choices. Seeing it again, such as “least expensive,” a respondent is more likely to catch his or her own misreading. As with scale reversals, a second question might confirm an answer or call out an error. Alternatively, a firm might include a text box and ask the respondent to “briefly explain your reasoning.” That text can serve as evidence that the respondent read the question properly.

Misread selections. In addition to the remedies already discussed, a firm can write the selections briefly, clearly, and in positive terms. “Negotiate fixed fees,” therefore, improves on “Don’t enter into billing arrangements based on standard hourly rates.” Also, avoid repeating opening phrases, which can make selections look alike to a participant who is moving fast: “Negotiate fixed fees” might cause a stumble if it is followed by “Negotiate fixed service.”

Misread instructions. The best solution relies on survey software that rejects everything except numbers, which screens out the undesirable additions. The downside is that participants can grow frustrated by error messages that do not state clearly what they did wrong, so the message should be explicit: “Please enter numbers only, not anything else, such as letters or symbols like $.”
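Most survey platforms offer this kind of validation as a built-in setting. For a home-grown form, a rough sketch of the idea might look like the following; the function name and message wording are illustrative only.

```python
import re

# Accept plain numbers such as "2500" or "12.5"; anything else gets a clear message.
NUMERIC_ONLY = re.compile(r"\d+(\.\d+)?")

def validate_numeric_entry(raw):
    """Return (value, None) for a clean number, or (None, error_message)."""
    entry = raw.strip()
    if NUMERIC_ONLY.fullmatch(entry):
        return float(entry), None
    return None, ("Please enter numbers only, not anything else, "
                  "such as letters or symbols like $.")

for raw in ["2500", "$2,000", "3-5", "approx. 40"]:
    value, error = validate_numeric_entry(raw)
    print(repr(raw), "->", value if error is None else error)
```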

Fill in nonsense when answers are required. As mentioned, sophisticated software might detect anomalous selections, but detection leads to dicey decisions about what to do with them. An easier solution is to keep the survey focused, restrict selections to likely choices (and thus fewer of them), and make them interesting. A survey can also include a question or step that reminds participants to pay attention.
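One common heuristic for detecting anomalous selections, offered here as an illustration rather than as the author’s prescription, is to flag “straight-lining”: the same answer given to every item in a battery. A minimal sketch with hypothetical column names:

```python
import pandas as pd

# Hypothetical battery: five effectiveness ratings on a 1-10 scale.
ratings = pd.DataFrame({
    "respondent": [201, 202, 203],
    "q1": [7, 1, 5],
    "q2": [6, 1, 8],
    "q3": [8, 1, 4],
    "q4": [7, 1, 9],
    "q5": [5, 1, 6],
})

item_cols = ["q1", "q2", "q3", "q4", "q5"]

# Zero variance across the battery (every answer identical) suggests the
# respondent clicked through without reading; flag for a closer look.
straight_liners = ratings[ratings[item_cols].std(axis=1) == 0]
print(straight_liners["respondent"].tolist())   # -> [202]
```

What to do with flagged respondents remains a judgment call; the check only narrows the field for review.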

Give contradictory answers. Again, in hopes of trapping contradictions, a firm can structure the question set to include confirmatory questions on key points. The drawback? A longer survey. Alternatively, the firm might email respondents to confirm that they meant to give answers that conflict with each other. Likewise, interviews after the surveys come back may smoke out corrections.
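A rough sketch of such a cross-check, using the tried-versus-rated example from the pitfalls list; the respondent IDs and technique names are hypothetical.

```python
# Hypothetical answers keyed by respondent ID.
tried = {
    301: {"fixed_fees", "budgets"},
    302: {"rfps"},
}
rated = {
    301: {"fixed_fees", "budgets", "rfps"},   # rated a technique never marked as tried
    302: set(),                               # tried a technique but rated nothing
}

# Flag both directions of the contradiction for a follow-up email or interview.
for respondent in tried:
    rated_not_tried = rated[respondent] - tried[respondent]
    tried_not_rated = tried[respondent] - rated[respondent]
    if rated_not_tried or tried_not_rated:
        print(respondent,
              "rated but not tried:", rated_not_tried or "none",
              "| tried but not rated:", tried_not_rated or "none")
```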

Become lazy. Keep the survey short, well crafted, and as interesting as possible for the participant. Perhaps two-thirds of the way through, a firm could ‘bury’ an incentive button: “Click here to get a $15 gift certificate.” Or a progress bar displayed by the survey software can boost flagging attention (“I’m close; let’s do a good job to the end.”).

Too quickly resort to “Other”. Despite the aspiration to achieve MECE (mutually exclusive, collectively exhaustive) selections, keep them short, few, and clear. Pretesting the question might suggest another selection or two. Additionally, a text box that asks what “Other” means can reduce the adverse effects of promiscuous reliance on it.

Ten pitfalls of respondents on multiple-choice questions

Before plunging into the bog of blunders, let’s define respondent as someone who presses submit at the end of an online questionnaire. An alternative term would be participant. Potential respondents who stop before the end of the questionnaire are partial participants. Typically, survey software logs the responses of partial participants. Now, enter the bog, if ye dare!

We have listed below ten things that can go wrong when people tackle multiple-choice questions. The pictorial summarizes the points.

  1. Reverse the scale. With a question that asks for a numeric value, as in a table of actions to be evaluated on their effectiveness, a “1” checked might indicate “wholly ineffective” while a “10” might indicate “highly effective.” Some people confuse the direction of the scale and check a “1” when they mean “highly effective”.
  2. Misread the question. Hardly unique to multiple-choice questions, simple misunderstanding of the inquiry dogs all survey questions. If the question addresses “effective actions” and someone reads it as inquiring about “ineffective actions”, all is lost.
  3. Misread selections. This pitfall mirrors misreading questions, but applies to the multiple selections. Negative constructions especially bedevil people, as in “Doesn’t apply without exception.”
  4. Misread instructions. This mistake commonly appears when questions ask for a number. Careful survey designers can plead with respondents: “Only numerals, not percent signs or the word ‘percent’.” The guidance can state clearly: “Do not write ranges such as ‘3-5’ or ‘4 to 6’, and do not add ‘approx.’ or ‘~’.” Often for naught. People sprinkle in dollar signs or write “2 thousand” or “3K”. Humans have no trouble understanding such entries, but computers give up. If an entry is not in the right format for a number, a computer treats it as a text string, and computers can’t calculate with text strings. Fortunately, computers can be instructed to scrub the answers into a standard format (see the sketch after this list), and sometimes the survey software can check the format of what’s entered and flash a warning message.
  5. Fill in nonsense when answers are required. Some participants can’t be bothered to spend time on questions they see as irrelevant, so they slap in the first selection (or a random one). Unless the analyst takes time to think about the likelihood of a given answer in light of other answers or facts, this mistake eludes detection.
  6. Give contradictory answers. Sometimes a survey has two questions that address a similar topic. For example, the survey might ask respondents to check the cost-management techniques they have tried, while a later question asks them to rate those techniques on effectiveness. What if they rate a technique they didn’t say they had tried, or fail to rate a technique they had tried? Either pattern is a form of contradiction.
  7. Become lazy. When there are too many questions, when the selections for questions go on and on, or when reasonable answers require digging, respondents can throw in the towel and make sloppy selections. Here the fault lies more with the survey designer than with the survey taker.
  8. Too quickly resort to “Other”. A form of laziness: if the selections are many or complex, some people just click “Other” rather than take the time to interpret the morass. If they write a bit about what “Other” means, that text reduces the adverse effects of the lack of discipline.
  9. Mis-click on drop-downs. If you find a “United Arab Emirates” in your corporate headquarters data and nearly everyone else is “United States”, you can suspect that one person made a mistake on the drop-down list.
  10. Pick too many or too few. If respondents pick too many selections, the software might give a warning. Otherwise, if “select no more than three” governs, the software might simply keep the first three even if four or more were checked. Either way, the survey software should be able to warn when this mistake happens.
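Here is the scrubbing sketch promised in pitfall 4. It is a best-effort illustration, not production code: the patterns it handles (“$2,000”, “3K”, “2 thousand”, “approx. 40”) are assumptions drawn from the examples above, and ranges such as “3-5” are deliberately left for human review.

```python
import re

def scrub_numeric(raw):
    """Best-effort conversion of a free-form numeric answer to a float.

    Returns None when the entry cannot be resolved to a single number
    (for example a range such as "3-5"), so a human can review it.
    """
    text = raw.strip().lower()
    text = text.replace("approx.", "").replace("~", "").strip()
    if re.search(r"\d\s*(-|to)\s*\d", text):          # ranges need human review
        return None
    text = text.replace("$", "").replace(",", "")
    text = text.replace("%", "").replace("percent", "").strip()
    multiplier = 1
    if text.endswith("k"):
        multiplier, text = 1_000, text[:-1]
    elif text.endswith("thousand"):
        multiplier, text = 1_000, text[: -len("thousand")]
    try:
        return float(text.strip()) * multiplier
    except ValueError:
        return None

for raw in ["2500", "$2,000", "3K", "2 thousand", "approx. 40", "3-5"]:
    print(repr(raw), "->", scrub_numeric(raw))
```

Any entry the scrubber returns as None goes back to a person, which keeps the automated cleaning from silently guessing at what a respondent meant.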