Techniques to reduce mistakes by respondents

What can a firm do to improve the likelihood that respondents answer multiple-choice questions correctly? The substance of their answers is known only to them, but some methodological trip-ups have solutions. To address the question, we can revisit the failure points presented above.

Reverse the scale. One step to identify a misreading is to ask a second question that confirms the first answer. So, if the first question uses a “1” to indicate “wholly ineffective” on up to a “10” to indicate “highly effective,” a later question might present the same choices and ask the respondent to pick the most effective one. If that choice did not receive a high number (8, 9, or 10, probably) on the first question, you have spotted a potential scale reversal. If you decide to correct it, you can manually revise the ratings on the first question. A second step, using different terms for the two poles, might improve accuracy, although at some cost to consistency and clarity. Thus, the scale might run from a “1” for “wholly ineffective” up to a “10” for “highly productive.” Respondents are more likely to notice the variation in wording and get the scale right.
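
A minimal sketch of that cross-check, assuming the responses arrive as records with hypothetical field names (rating_q1 for the 1-to-10 ratings, best_pick_q2 for the follow-up choice, respondent_id for the person answering):

```python
# Flag respondents whose follow-up "most effective" pick received a low
# rating on the first question -- a hint that they reversed the 1-10 scale.
# Field names (rating_q1, best_pick_q2, respondent_id) are illustrative.

def flag_possible_reversals(responses, threshold=8):
    """Return ids of respondents whose top pick on question 2 scored
    below `threshold` on question 1."""
    flagged = []
    for r in responses:
        pick = r["best_pick_q2"]           # choice named most effective
        rating = r["rating_q1"][pick]      # 1-10 rating given to that choice
        if rating < threshold:
            flagged.append(r["respondent_id"])
    return flagged

sample = [
    {"respondent_id": "R1",
     "rating_q1": {"mediation": 2, "arbitration": 9},   # likely reversed
     "best_pick_q2": "mediation"},
    {"respondent_id": "R2",
     "rating_q1": {"mediation": 3, "arbitration": 8},
     "best_pick_q2": "arbitration"},
]
print(flag_possible_reversals(sample))   # ['R1']
```

A firm would still review the flagged responses by hand before revising any ratings; the check only points out where a reversal is plausible.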

Misread the question. Sometimes you can repeat the key word or phrase next to the answer choices. Seeing the key phrase, such as “least expensive,” a respondent will catch his or her own misreading. As with scale reversals, a second question might confirm or call out an error. Alternatively, a firm might include a text box and ask the respondent to “briefly explain your reasoning.” That text can serve as proof that the respondent read the question properly.
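
One rough, automated follow-up, a sketch that assumes the text-box answer sits in a hypothetical field called reasoning, is to flag explanations that are empty or only a few characters long, since those can hardly serve as proof of a careful reading:

```python
# Flag respondents whose free-text explanation is missing or very short,
# a rough proxy for careless reading. The field name "reasoning" and the
# 10-character cutoff are illustrative assumptions.

def flag_thin_explanations(responses, min_chars=10):
    return [
        r["respondent_id"]
        for r in responses
        if len(r.get("reasoning", "").strip()) < min_chars
    ]

sample = [
    {"respondent_id": "R1", "reasoning": "Cheapest hourly rate by far."},
    {"respondent_id": "R2", "reasoning": "ok"},
]
print(flag_thin_explanations(sample))   # ['R2']
```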

Misread selections. In addition to the remedies already discussed, a firm can write the selections briefly, clearly, and in positive terms. “Negotiate fixed fees,” therefore, improves on “Don’t enter into billing arrangements based on standard hourly rates.” Furthermore, avoid repeating phrases, which can make selections look alike to a participant who is moving fast. “Negotiate fixed fees” might cause a stumble if it is followed by “Negotiate fixed service.”

Misread instructions. The best solution relies on survey software that rejects everything except numbers; that validation screens out the undesirable additions. The downside is that participants can grow frustrated with error messages that do not tell them clearly the cause of their mistake, so the message should be explicit: “Please enter numbers only, not anything else, such as letters or symbols like $.”
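
A sketch of that kind of validation, written as plain Python rather than the built-in feature of any particular survey tool, which accepts a numeric answer and otherwise returns the specific error message:

```python
# Validate a free-text answer that should contain only a number, and give
# the participant a specific error message rather than a generic one.

def parse_numeric_answer(raw):
    """Return (value, error_message); exactly one of the two is None."""
    cleaned = raw.strip()
    try:
        return float(cleaned), None
    except ValueError:
        return None, ("Please enter numbers only, not anything else, "
                      "such as letters or symbols like $.")

print(parse_numeric_answer("250000"))    # (250000.0, None)
print(parse_numeric_answer("$250,000"))  # (None, 'Please enter numbers only, ...')
```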

Fill in nonsense when answers are required. As mentioned, sophisticated software might detect anomalous selections, but that leads to dicey decisions about what to do with them. An easier solution is to keep the survey focused, restrict selections to likely choices (and thus fewer of them), and make them interesting. A survey can also include a question or step that reminds participants to pay attention.
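
One common form of such a step is an instructed-response item (“To show you are paying attention, select ‘Strongly agree’ here”). A sketch of how a firm might flag respondents who miss it, with illustrative field and answer names:

```python
# Flag respondents who miss an instructed-response (attention-check) item.
# The field name "attention_check" and expected answer are assumptions.

def flag_failed_attention_checks(responses,
                                 check_question="attention_check",
                                 expected="Strongly agree"):
    return [
        r["respondent_id"]
        for r in responses
        if r.get(check_question) != expected
    ]

sample = [
    {"respondent_id": "R1", "attention_check": "Strongly agree"},
    {"respondent_id": "R2", "attention_check": "Neutral"},
]
print(flag_failed_attention_checks(sample))   # ['R2']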

Give contradictory answers. Again, in hopes of trapping contradictions, law firms can structure the question set to include confirmatory questions on key points. The drawback? A longer survey. Alternatively, a firm might email respondents to confirm that they meant to give answers that conflict with each other. Likewise, interviews after the surveys come back may smoke out corrections.
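
A sketch of how the paired questions might be compared afterwards; the pairing (the same satisfaction item asked twice on a 1-to-10 scale) and the 4-point gap are assumptions made for illustration:

```python
# Flag respondents whose answers to a key question and its confirmatory
# twin point in opposite directions.

def flag_contradictions(responses, q_a="satisfaction_q3",
                        q_b="satisfaction_q12", max_gap=4):
    return [
        r["respondent_id"]
        for r in responses
        if abs(r[q_a] - r[q_b]) > max_gap
    ]

sample = [
    {"respondent_id": "R1", "satisfaction_q3": 9, "satisfaction_q12": 2},
    {"respondent_id": "R2", "satisfaction_q3": 7, "satisfaction_q12": 6},
]
print(flag_contradictions(sample))   # ['R1']
```

The flagged ids then become the list for the follow-up emails or interviews mentioned above.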

Become lazy. Keep the survey short, well crafted, and as interesting as possible for the participant. Perhaps two-thirds of the way through, a firm could ‘bury’ an incentive button: “Click here to get a $15 gift certificate.” Or a progress bar displayed by the survey software can boost flagging attention (“I’m close, let’s do a good job to the end….”).

Too quickly resort to “Other”. Despite the aspiration to achieve MECE (mutually exclusive, collectively exhaustive) selections, keep them short, few, and clear. Pretesting the question might suggest another selection or two. Additionally, a text box that asks respondents to explain their “Other” choice might reduce the adverse effects of promiscuous reliance on it.
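
As a rough diagnostic once responses come back, a firm might compute the share of “Other” answers per question; a high share suggests the listed selections miss a common answer and that pretesting should surface another choice or two. The data layout below is an assumption for illustration:

```python
# Measure how often each question draws "Other" responses.

from collections import Counter

def other_share_by_question(responses, other_label="Other"):
    totals, others = Counter(), Counter()
    for r in responses:
        for question, answer in r["answers"].items():
            totals[question] += 1
            if answer == other_label:
                others[question] += 1
    return {q: others[q] / totals[q] for q in totals}

sample = [
    {"answers": {"billing": "Other", "staffing": "Associates"}},
    {"answers": {"billing": "Other", "staffing": "Partners"}},
    {"answers": {"billing": "Fixed fee", "staffing": "Other"}},
]
print(other_share_by_question(sample))
# {'billing': 0.666..., 'staffing': 0.333...}
```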
