Order of selections in multiple-choice questions

Since participants are expected to read all the selections of a multiple-choice question, the order in which you list them may seem of little moment. But the consequences of order can be momentous. Respondents might interpret the order as suggesting a priority or “correctness.” For example, if the selection that the firm expects to be chosen most often stands first, that decision can influence the data in a self-fulfilling pattern: the firm thinks that selection is important — or, worse, would prefer to see it picked more often — and therefore puts it first, while respondents infer that its position signals importance and choose it.

Or participants may simply tire of evaluating a long list of selections and deciding which one or more to choose. They may unknowingly favor earlier choices so that they can declare victory and move on to the next question.

Let’s look at a question from the King & Spalding survey on claims professionals (2016) [pg. 15], not in any way to criticize the question but to illustrate the possibility of the skews described above.

We don’t know enough about claims professionals or lines of insurance to detect whether this selection order nudges respondents, but clearly the selections are not in alphabetical order. When selections appear in alphabetical order, the reader can assume the firm chose a neutral ordering to avoid guiding respondents.

Another option for a firm is to prepare multiple versions of the survey. Each version changes the order of selections of the key multiple-choice question or questions. The firm sends those variants randomly to the people invited to take the survey. So long as the text of the selections remains the same, the software that compiles results will not care about variations in selection order.
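The multiple-version approach can be sketched in a few lines of Python. The selection texts and invitee addresses below are hypothetical; the point is that a handful of fixed variants are prepared once and then distributed at random:

```python
import random

# Hypothetical selection texts for the key multiple-choice question.
selections = [
    "Cost of outside counsel",
    "Data security",
    "Regulatory change",
    "Staffing levels",
]

# Prepare a few fixed versions of the survey, each with a different
# selection order. Seeding each shuffle makes the variants reproducible.
variants = []
for seed in range(3):
    rng = random.Random(seed)
    order = list(selections)  # copy so the canonical list is untouched
    rng.shuffle(order)
    variants.append(order)

# Send the variants randomly to the invitees (hypothetical addresses).
invitees = ["a@example.com", "b@example.com", "c@example.com"]
assignment = {email: random.choice(variants) for email in invitees}
```

Because every variant contains the same selection texts, software that tallies answers by text sees no difference between versions.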

A more sophisticated technique to eliminate the risk of framing relies on survey software that presents the selections in random order to each survey taker. In other words, the order in which person A sees the selections differs randomly from the order in which person B sees them.
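A minimal sketch of what such software does under the hood, using hypothetical selection texts: each respondent receives an independently shuffled copy of the list.

```python
import random

# Hypothetical selection texts.
selections = ["Budget pressure", "Headcount", "Outside counsel", "Technology"]

def presentation_order(selections, rng=random):
    """Return an independently shuffled copy of the selections
    for a single respondent."""
    order = list(selections)  # copy; the canonical order is never mutated
    rng.shuffle(order)
    return order

# Person A and person B each see their own random arrangement; the texts
# are identical, so results compiled by selection text are unaffected.
order_a = presentation_order(selections)
order_b = presentation_order(selections)
```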

Published reports infrequently restate the exact question asked and never the arrangement of selections. All the reader has to go by is the data as reported in the text, a table, or a graphic. Because the summary of the data usually starts with the most common selection and discusses the remaining results in declining order, the original arrangement of selections cannot be recovered.

For example, here is one multiple-choice question from the Davies Ward Barometer (2010) [pg. 58]. At the top, the snippet reproduces the text of the report, which gives a clue to the question asked of respondents. Nothing gives a clue about the order of the selections on the survey itself.

As an aside, consider that this survey followed several prior surveys on the same topic. It is possible that the order of the selections reflects prior responses to a similar question. That would be a natural thing to do, but it would be a mistake for the reasons described above.

Order of selections in multiple choice: frequency of being chosen

When you create a multiple-choice question, the order of the selections can make a difference in the results you obtain. For example, if the first selection is the most likely choice, you may suggest to some respondents that the order of the selections reflects a pattern of declining priority. Which ones they pick (and, depending on the restrictions, how many they pick) will be influenced by that perceived priority.

Many surveyors alphabetize the selections to counteract any such bias in their order. Another technique, which we will discuss later, randomizes the order of the selections as presented to each person: high-end survey sites can vary the ordering for each respondent with a randomizer. An intermediate solution is for the surveyor to create a few versions of the survey that each vary the order of the selections.

To test one aspect of whether the order of selections influences respondents (the number of times a selection is picked), I analyzed a recent survey. The data consist of four multiple-choice questions that allowed respondents to check all the selections that applied to them. Think of a whimsical question like “Which desserts do you like best? (check all that apply)” where there are seven different scrumptious delights and some or all of them could be checked.

The plot below shows totals for how many times respondents chose the first selection of a question (the bar with 1 at its base on the x-axis, at a height of 483), the second selection (bar 2, at a height of 465), and so on.

It presents the results from only the first seven selections of each question, since seven was the minimum number of selections across the four questions.[1] As far as I could tell, there was no evident logic to the arrangement of the selections. Each ordering seems plausible and not ranked in any apparent way.
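The tallying step behind the plot can be sketched as follows. The response data here are hypothetical (the real survey's figures are not reproduced); each respondent's answer to a check-all-that-apply question is recorded as the set of 1-based positions they checked, and the counts are truncated to the first seven positions:

```python
from collections import Counter

# Hypothetical responses: for each respondent, the positions (1-based)
# of the selections they checked on one question.
responses = [
    {1, 2, 5},
    {1, 3},
    {2, 5, 7},
    {1, 4, 5, 6},
]

MAX_POSITION = 7  # the minimum number of selections across the questions

position_counts = Counter()
for checked in responses:
    for pos in checked:
        if pos <= MAX_POSITION:  # ignore positions 8+ on longer questions
            position_counts[pos] += 1

# Totals per position, ready to plot as a bar chart.
totals = [position_counts.get(p, 0) for p in range(1, MAX_POSITION + 1)]
```

Plotting `totals` as bars with positions 1 through 7 on the x-axis yields a chart of the kind described above.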

A different form of this empirical inquiry would take multiple-choice questions where only one selection was permitted. Also, the selections cannot be a fixed fact, such as the position or age of the respondent; the selections need to invite independent, subjective judgments, as with dessert preferences.

Returning to the plot, with the exception of the fifth-position selection, the number of times a selection was chosen drops off steadily as its position increases. That is to say, for the most part, respondents checked selections fewer and fewer times as they moved down the seven positions. It’s as if respondents grew fatigued and paid less attention to the later selections.


  1. This decision could throw off the results, since a couple of the questions had 9 or 10 selections.