Advisable to use “Don’t know” or “NA” in multiple-choice questions

Well-crafted multiple-choice questions give respondents a way to say that they don’t know the answer or that no selection applies to their situation. The two non-answers differ: ignorance of the answer (or, possibly, refusal to give a known answer) is something the respondent could in principle remedy, whereas only the survey designer can fix an incomplete set of selections. Firms should not want the people they have invited to take a survey to have to pick the least bad answer because their preferred answer is missing. As we have written before, firms should add an “Other” choice with a text box for elaboration.

From Hogan Lovells Cross-Border 2014 [pg. 19] comes an example of how a multiple-choice question can accommodate respondents who don’t know the answer. It also shows how data from such a question might be reported in a polar graphic. Seven percent of the respondents did not know whether their company’s international contracts include arbitration procedures.

In the jargon of data analysts, a “Don’t know” is an instance of item non-response: the respondent gives no answer to a particular survey item while giving at least one valid answer to some other item, for example by leaving an item on the questionnaire blank, or by answering “I don’t know” to some questions while responding validly to others.
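To make the term concrete, here is a minimal sketch in Python with pandas, on invented data, of how an analyst might measure item non-response per question; the column names and answers are hypothetical.

    import pandas as pd
    import numpy as np

    # Hypothetical toy data: each row is a respondent, each column a question;
    # NaN marks an item left blank or answered "Don't know".
    responses = pd.DataFrame({
        "q1_arbitration_clause": ["Yes", "No", np.nan, "Yes"],
        "q2_disputes_last_5yrs": ["Up", np.nan, "Down", "Flat"],
    })

    # Keep only respondents who answered at least one item, then compute the
    # share of them who skipped each item: the item non-response rate.
    answered_something = responses.notna().any(axis=1)
    item_nonresponse = responses[answered_something].isna().mean()
    print(item_nonresponse)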

Another survey, DLA Piper Compliance 2017 [pg. 15], used a “Does not apply” option. Almost one-third of the respondents checked it. It is conceivable that some respondents did not know the answer and picked “Does not apply” as the least bad of the three choices, even though it did not quite fit their situation.

One more example, this time from Fulbright Jaworski Lit 2009 [pg. 61]. Here, one-fifth of those who took the survey indicated that they did not know the answer to the question reproduced at the top of the plot.

It is easy to include variations of the non-substantive selections described above. In fact, extrapolating from these three instances, firms probably should do so, since significant numbers of respondents might pick them: roughly 7%, 33%, and 20% in the surveys above, which averages to about one respondent in five.

Multiple observations from one multiple-choice plot

This complex, swoosh-crowned graphic spawned many observations. It graces page 34 of Pinsent Masons’ “Pre-empting and Resolving Technology, Media and Telecoms Disputes” (2016). The first several comments address the question as asked; the rest focus on the visualization of the findings.

  1. Time bounded. It is good practice to limit answers to a specified period. This question asked respondents to look back five years, rather than a sloppier version like “Have you ever used any of the following institutions?” Even clearer would have been a phrasing like “in your organization during 2012 through 2016 …”
  2. Large number of selections. We can’t know how all these 28 selections were arrayed for respondents to review, or whether they sat in a drop-down menu. Certainly this multiple-choice question touches the outer bounds of how many selections a question can reasonably offer.
  3. Choose all that apply. It would be better if the question (or the plot or adjacent text) made clear that respondents could tick several institutions. Assuming they could, the percentages become harder to interpret because they need not sum to 100% (see the brief tabulation sketch after this list).
  4. Sizable ‘Other’. Even with 27 specific selections, quite a few others must not have appeared on the list (or respondents did not spot or recognize the name of one that was on the list). ‘Other’ garnered 17%, a larger percentage than 22 of the specific selections. It also appears that the questionnaire did not have a space for respondents to write in institutions that they used but did not see among the selections.
  5. Always want more. Law firms work hard to do the best they can, yet they are usually confronted by readers who wish they had done more. Regarding the topic of this plot, for example, a question might have sought a breakdown between disputes that the respondent’s organization commenced before these institutions and disputes brought against it by some challenger. We also are not told how many TMT (technology, media, and telecoms) disputes the respondents’ organizations engaged in per institution. Or the firm might have woven in data from the institutions themselves about how many disputes they handled during the five years. Finally, the interpretation and elaboration on this set of findings is minimal.
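As promised in point 3, here is a brief sketch, on invented data, of why choose-all-that-apply results read differently from single-choice results: the percentage is computed selection by selection, so the percentages need not sum to 100. The institution names and tick patterns below are hypothetical.

    import pandas as pd

    # Hypothetical tick data: one row per respondent, True where that
    # respondent ticked the institution.
    ticks = pd.DataFrame({
        "ICC":   [True,  True,  False, True],
        "LCIA":  [True,  False, True,  False],
        "Other": [False, True,  True,  False],
    })

    pct_per_selection = ticks.mean() * 100   # share of respondents ticking each
    print(pct_per_selection)
    print("Sum of the percentages:", pct_per_selection.sum())   # well over 100%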

How Pinsent Masons chose to present their findings also leads to several observations.

  1. States the question. Highlighted in a brick-red hue at the top, the question as asked on the online questionnaire lets readers efficiently match it to the answers. Interpretation of the results becomes much easier.
  2. Alphabetical order. Normally, you expect selections to be ordered by decreasing count or percentage. Here, however, with 28 different selections, putting them in alphabetical order enables readers to find institutions more easily. Note also that ‘Other’ appears at the bottom of the stack, out of alphabetical order.
  3. Monochromatic. Many designers would have splashed rainbow hues on the bars. Color just for the sake of color would impose more visual burden but add no information. On the other hand, two colors might have distinguished “institutions” from “types of arbitration.” Alternatively, since many of the selections are based on a country or regional institution, a modest color scheme by continent might inform readers.
  4. Missing information. The report incorporates submissions from an impressive 343 participants. Question number 27 sat midway through an imposing set of 55 questions in the online survey. That said, we do not know how many of those participants tackled this particular question, nor how many total ticks they made. Without those particulars, when the top bar says that 15% checked that institution, we do not know the absolute number of checks.
  5. Dramatic design. Almost all the pages of the report have some variation of the red, curvy leitmotif. We admire this eye-catcher: it breaks up the white expanse, paints a touch of color on the page, and cradles the plot itself. A simple element with aesthetic appeal.
  6. Flipped coordinates. Because the names of the institutions are so long, this plot would not work well if those names had been crowded along the horizontal axis (or rotated steeply). Quite properly, the plot designer rotated the plot (sometimes called “flipping the coordinates”); a minimal plotting sketch follows this list. We also commend the firm for not duplicating the percentages along the bottom axis or cluttering the panel with grid lines.
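Here is the minimal plotting sketch mentioned in point 6. It uses matplotlib with invented numbers (loosely echoing the 15% and 17% quoted above) to reproduce the presentation choices we praised: alphabetical order with ‘Other’ forced to the end, a single color, flipped (horizontal) bars, and bar labels instead of grid lines. The institution names and the hex color are assumptions, not taken from the report.

    import matplotlib.pyplot as plt

    # Invented percentages for a handful of selections.
    data = {"ICC": 15, "LCIA": 12, "SIAC": 9, "Ad hoc arbitration": 7, "Other": 17}

    # Alphabetize the named selections, then append 'Other' last.
    names = sorted(k for k in data if k != "Other") + ["Other"]
    values = [data[k] for k in names]

    fig, ax = plt.subplots(figsize=(6, 3))
    y = range(len(names))
    ax.barh(y, values, color="#9e2b25")      # one brick-red hue, no rainbow
    ax.set_yticks(list(y))
    ax.set_yticklabels(names)
    ax.invert_yaxis()                        # alphabetical list reads top to bottom
    ax.set_xlabel("% of respondents")
    for i, v in enumerate(values):           # label each bar instead of grid lines
        ax.text(v + 0.3, i, f"{v}%", va="center")
    plt.tight_layout()
    plt.show()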

‘Other’ selection often picked more than specific selections

In multiple-choice questions, ‘Other’ should ideally be a last-resort selection. A knowledgeably drafted question should cover all plausible answers, which should leave little need for respondents to check ‘Other’. But that is often not true in research surveys by law firms. To the contrary, ‘Other’ quite often ends up chosen more than at least one of the preceding, specific selections.

Here is an example of plotting the data from a multiple-choice question that included an ‘Other (please specify)’ selection. Unusually, it places ‘Other’ within the ordered ranking by percentage rather than at the bottom, the conventional treatment. Clearly, five specific selections were chosen less frequently than ‘Other’, which suggests that an analysis of what respondents filled in (assuming the questionnaire offered a free-text box) might have carved some of those answers out and named them.

To dig deeper into this inquiry, specifically the ratio between the number of respondents checking ‘Other’ and the number checking the remaining selections, we analyzed four research surveys: Allen & Overy, “Unbundling a market: The appetite for new legal services models” (2014); Berwin Leighton Paisner, “Legal Risk Benchmarking Survey: Results and analysis” (2014); DLA Piper, “DLA Piper’s 2017 Compliance & Risk Report: Compliance Grows Up: Increasing Budgets and Board Access Point toward Greater Prominence, Independence” (2017); and Fulbright & Jaworski, “Fulbright’s Sixth Annual Litigation Trends Survey Report” (2009).

From this small and perhaps unrepresentative sample, we found that in four questions ‘Other’ received fewer checks than any of the specific selections. However, in ten questions ‘Other’ was checked more often than the least-selected specific choice (what we termed the “smallest selection”).[1]

The next plot covers the multiple-choice questions in the four surveys that had an ‘Other’ selection. For each of those questions, the bottom axis tracks the percentage of respondents who selected ‘Other’, and the vertical axis tracks the percentage who marked the smallest of the remaining selections. The diagonal line indicates the balance point where a question’s ‘Other’ percentage would equal that of the smallest selection. Accordingly, the red dots indicate questions where more people chose ‘Other’ than chose at least one specific selection (a few times two or more selections were chosen less often than ‘Other’).
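A minimal sketch of how such a scatter plot might be built, again in matplotlib and again with invented percentages: each point is one question, the dashed diagonal is the balance line, and points where ‘Other’ exceeds the smallest specific selection are drawn in red.

    import matplotlib.pyplot as plt

    # Invented (other %, smallest-selection %) pairs, one per question.
    other_pct    = [2, 4, 5, 8, 10, 12, 17, 20, 25, 28, 31, 6, 3, 9]
    smallest_pct = [5, 7, 9, 3,  2,  6,  4,  8,  5,  3,  2, 10, 6, 1]

    # Red where 'Other' beat the smallest specific selection, grey otherwise.
    colors = ["red" if o > s else "grey" for o, s in zip(other_pct, smallest_pct)]

    fig, ax = plt.subplots(figsize=(5, 5))
    ax.scatter(other_pct, smallest_pct, c=colors)
    lim = max(other_pct + smallest_pct) + 2
    ax.plot([0, lim], [0, lim], "k--", linewidth=1)   # the balance line
    ax.set_xlabel("% of respondents choosing 'Other'")
    ax.set_ylabel("% choosing the smallest specific selection")
    plt.tight_layout()
    plt.show()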

As we suggested at the start, high ‘Other’ percentages indicate that the specific alternatives could and should have been expanded. Alternatively, after the questionnaire submissions have all been collected, the firm could have tried to tease out and code the ‘Other’ mentions so that one or two of them could have been named and given specific percentages.
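A minimal sketch, on invented write-ins, of that coding step: tidy the free-text ‘Other’ answers, tally them, and promote any answer frequent enough to deserve its own named selection. The threshold and the example strings are assumptions for illustration only.

    from collections import Counter

    # Invented free-text answers typed into the 'Other (please specify)' box.
    other_writeins = [
        "mediation", "Mediation ", "expert determination", "mediation",
        "dispute board", "mediation", "expert determination",
    ]

    # Normalize the text, then count each distinct answer.
    counts = Counter(text.strip().lower() for text in other_writeins)

    PROMOTE_AT = 3   # arbitrary cut-off for creating a new named selection
    promoted = {text: n for text, n in counts.items() if n >= PROMOTE_AT}
    leftover = sum(n for text, n in counts.items() if text not in promoted)
    print("Promote to named selections:", promoted)
    print("Responses remaining in 'Other':", leftover)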

Notes:

  1. We noted that one firm sprinkled ‘Other’ liberally among its sets of selections, yet for several multiple-choice questions with complex selections the firm chose not to offer ‘Other’. This inconsistency seems strange.

Law firm’s blog promotes survey, and other observations

Seyfarth Shaw hosts a blog on trade secrets. On behalf of the authors of a book concerning trade secrets, a post in June 2017 urged readers who practice law in-house at companies to take a survey. That blog post stimulates a number of observations.

Seyfarth itself did not conduct this research survey, but the firm’s promotional effort helped someone else collect legal research data. This derivative survey, as we might call it, is commendable because the legal industry needs more and better data if it is to improve its efficiency and productivity. The instance also shows that a law firm’s blog can invite participants to a survey.

We also learned from this example that Qualtrics hosted the authors’ survey. Qualtrics is one of several companies that provide online survey capabilities. Others include SurveyMonkey and NoviSurvey. Undoubtedly there are many more offerings that law firms can choose from.

The invitation on the blog reveals several instructive points: “The survey, hosted by the survey company Qualtrics, is completely anonymous. The survey doesn’t ask for your name or your company, and the survey authors won’t even know if you’ve taken the survey or what your answers are. The survey results will be reported in aggregate form only, and nothing about you or your company can be identified individually. The survey takes less than 5 minutes to complete.” Manifestly, the invitation assures respondents that their individual responses will not be disclosed, only responses in aggregated form. Anonymity is so important, in fact, that the authors would not even receive the individual answers; Qualtrics serves as the intermediary that collects and aggregates the data. That step diminishes the analytical potential for the authors because they will not have the underlying raw data to manipulate. The invitation also emphasizes that the survey is short and won’t take much time to finish.

The survey was still active, so the questionnaire was available. As can be seen in the snippet below, one of the questions allows respondents to “check all that apply”. The question places no upper limit on how many of the four selections could be checked.

Finally, this multiple-choice question has an “other” option at the bottom. Typically that selection comes at the end of the list of selections (or last in the layout). Moreover, the survey asks people who check it to explain what they mean, and the form provides a text box for doing so. We should point out that the size of the text box signals to respondents that they may write a lengthy response if they want and need not limit themselves to a few terse words.

What should a law firm do to tell readers of its report about a question of this style? A survey report by Littler in 2017 (pg. 8) shows not only the question asked (kudos!) but also how to tell readers that more than one selection could have been chosen. The “(check all that apply)” may be in a smaller font and not bold like the rest of the question, but it gets the point across.

Cure ambiguities in selections for multiple-choice questions

When creating the choices for a multiple-choice question, a careful developer will take time to make sure that the choices are as unambiguous as possible. Helping respondents know what each choice means may entail writing a definition of the term. Note that your survey software needs to support such definitions, or you may have to include them in the body text of the question. Additionally, a conscientious developer will ask several people to vet the choices for ambiguity before releasing the survey.

The survey conducted by Berwin Leighton Paisner in 2014 [1] offers an instructive example of the importance of defining terms. Below you can see the graphical results of the question.

However, the report does not include the actual form of the question asked on the survey, so we do not know whether any of the choices were defined. If we assume that the question asked something like “What is your role?”, we might further assume that the position choices were simply those shown as the five labels along the x axis at the bottom of the plot. Is each of them clear?

If a respondent were the general counsel for North America of a global company that has a global chief legal officer, which selection is appropriate? If a lawyer admitted to practice works in the risk or compliance group, should she select that group or “In-house lawyer”? This example admittedly uses titles that are quite commonly included in research surveys, but the important lesson remains: with multiple-choice questions, try to wring out blurriness and varied interpretations of key terms.

A second observation about this particular finding highlights the relatively large share of “Other” responses. If BLP’s survey included a text box for respondents to provide a title not covered by the four given, it would have been better to review those additional titles and create another position or two to account for some or all of them specifically. Without further insight into the positions of respondents who selected “Other,” the category is quite large relative to the remaining four and creates an analytic hole if the law firm wanted to analyze responses by position.

Notes:

  1. Previous posts explain the set of research surveys by 15 law firms of which BLP’s is one.