“Pick the top 3” or “rank the top 3” questions on polls by law firms

The questionnaire that resulted in Shook Hardy AI 2018 listed 11 selections (including “Other”) and asked respondents to “select up to 3.” Respondents could select one, two, or three items from the list by clicking the boxes.

Multiple-choice questions can permit respondents to select up to some number of choices, require a set number of selections, or require a set number with a priority ranking of those selections. Below are examples of each variation.

DLA Piper RE 2016 [pg. 10] asked respondents to choose their top three (no more, no fewer) from a list of cities. We can’t tell from the plot how many cities were available for selection.

In Davies Ward Barometer 2005 [pg. 12], a note at the bottom of a table states the question: “From the list below, rank the three most important functions you perform for the management team as in-house corporate counsel. Rank the three functions in order of most importance by entering 1 (being the most important), 2 or 3 beside the functions.” The final column, “Combined Ranking,” is simply the count of how many times each selection was ranked first, second, or third.
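As a sketch of that arithmetic (the function names and responses below are invented for illustration), the combined ranking is a simple tally:

```python
from collections import Counter

# Hypothetical responses: each respondent names the functions ranked 1, 2, and 3.
responses = [
    {1: "Advising on contracts", 2: "Managing outside counsel", 3: "Compliance"},
    {1: "Compliance", 2: "Advising on contracts", 3: "Litigation management"},
    {1: "Advising on contracts", 2: "Compliance", 3: "Managing outside counsel"},
]

# "Combined Ranking" as the report describes it: the number of times each
# function was ranked first, second, or third, regardless of position.
combined = Counter()
for ranks in responses:
    combined.update(ranks.values())

for function, count in combined.most_common():
    print(f"{function}: in the top three of {count} respondent(s)")
```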

Dykema Gossett MA 2015 [pg. 5] likewise asked respondents not only to pick the top three but also to rank them. The report does not explain what “weighted rank” means.
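One common convention weights a first-place rank more heavily than a second or a third, for instance 3, 2, and 1 points. That is only our guess at what the report means, not Dykema Gossett’s documented method; the sketch below, with invented data, shows such a Borda-style weighting:

```python
from collections import defaultdict

# Assumed weighting: 3 points for a first-place rank, 2 for second, 1 for
# third. The report itself does not define "weighted rank".
WEIGHTS = {1: 3, 2: 2, 3: 1}

# Each respondent's top-three picks, keyed by rank (hypothetical data).
responses = [
    {1: "Due diligence", 2: "Valuation", 3: "Financing"},
    {1: "Valuation", 2: "Due diligence", 3: "Integration"},
]

scores = defaultdict(int)
for ranks in responses:
    for rank, choice in ranks.items():
        scores[choice] += WEIGHTS[rank]

for choice, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(choice, score)
```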


Multiple-choice questions that ask for a ranking can yield deeper insights

If you want to capture more information than simple multiple-choice questions allow, a ranking question might serve you best. For one of its questions, Berwin Leighton Risks (2014) [pg. 17] presented respondents with seven legal risks. The instructions told respondents to rank the risks from 1 to 8 (where 1 was the most serious risk and 8 the least serious). [Right, 8 ranking choices for only 7 items!] Presumably no ties were allowed (which the survey software might have enforced).
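Such a no-ties rule is easy to enforce in software. A minimal sketch, assuming each respondent returns one rank per risk:

```python
def ranks_are_valid(ranks, n_items=7):
    """True if the ranks are distinct integers drawn from 1..8 (no ties).

    The survey offered 8 rank values for 7 risks, so one value goes unused.
    """
    return (
        len(ranks) == n_items
        and len(set(ranks)) == n_items             # no ties
        and all(r in range(1, 9) for r in ranks)   # within the 1-8 scale
    )

print(ranks_are_valid([1, 2, 3, 4, 5, 6, 8]))  # True (7 skipped)
print(ranks_are_valid([1, 2, 2, 4, 5, 6, 7]))  # False (tie on 2)
```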

The report’s plot extracted the distribution of rankings only for the two most serious, 1 or 2. The plot appears to tell us, for example, that 48 respondents ranked “Legislation/regulation” as a 1 or 2 legal risk (most serious). Other plots displayed the distribution of the 3 and 4 rankings and of the less serious rankings.
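Reproducing that kind of tally from raw responses is straightforward; here is a sketch with invented risks and ranks:

```python
# Hypothetical raw data: each row maps a legal risk to the rank (1-8)
# one respondent assigned it.
rankings = [
    {"Legislation/regulation": 1, "Reputational": 2, "Contractual": 5},
    {"Legislation/regulation": 2, "Reputational": 4, "Contractual": 1},
    {"Legislation/regulation": 3, "Reputational": 1, "Contractual": 2},
]

# Count how many respondents ranked each risk as a 1 or a 2 (most serious),
# mirroring the report's plot.
top_two = {}
for response in rankings:
    for risk, rank in response.items():
        if rank in (1, 2):
            top_two[risk] = top_two.get(risk, 0) + 1

print(top_two)  # {'Legislation/regulation': 2, 'Reputational': 2, 'Contractual': 2}
```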

A ranking question, especially one with as many as seven elements to be compared to each other, burdens participants, because to answer it conscientiously they need to consider each element relative to all the others. As a surveyor, you can never completely rely on this degree of respondent carefulness.

But ranking questions can yield fruitful analytics. Rankings are far more insightful than “pick the most serious [or whatever criterion],” which tosses away nearly all comparative measures. Rankings are more precise than “pick all that are serious,” which surrenders most insights into relative seriousness. Yet the infrequency of ranking questions in the law-firm research survey world is striking. Findings would be much more robust if there were more ranking questions.

Some people believe that rankings are difficult to analyze and interpret. The visualization technique of Berwin Leighton, which presents different views of the aggregate rankings, belies that belief. Many other techniques exist to analyze and depict ranking responses.

A ranking question gives a sense of whether a respondent likes one answer choice more than another, but it doesn’t tell how much more. A question that asks respondents to allocate 100 percent among their choices not only ranks the choices but differentiates between them far more precisely than a simple ranking. Proportional distribution questions, however, appear in law firm surveys even less often than ranking questions do; in fact, we could not find one among the hundreds of plots we have examined. Perhaps the reason is that these questions are even more complicated to explain to survey participants.
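To see the extra information a 100-point allocation carries, consider this sketch (the choices and numbers are invented): each response must sum to 100, it implies a full ranking, and it also reveals by how much the leader outpaces the rest.

```python
# Hypothetical allocation of 100 percentage points across four choices.
allocation = {"Compliance": 55, "Litigation": 25, "Contracts": 15, "IP": 5}

# A well-formed response must allocate exactly 100 points.
assert sum(allocation.values()) == 100, "allocations must sum to 100"

# The allocation implies a ranking...
ranking = sorted(allocation, key=allocation.get, reverse=True)
print(ranking)  # ['Compliance', 'Litigation', 'Contracts', 'IP']

# ...but unlike a bare ranking it also says *how much* more: here the top
# choice drew more than twice the weight of the runner-up.
print(allocation["Compliance"] / allocation["Litigation"])  # 2.2
```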

Ranking law firms on disclosure of four demographic attributes

The reports at hand each deal in their own way with the four basic demographic attributes (position of respondent, industry, geography, and revenue). We can visualize the relative levels of disclosure by applying a simple technique.

The technique starts with the category assigned to each law firm for a given demographic attribute. For example, we categorized the firms on how they disclosed the positions of respondents with four shorthand descriptions: “No position information”, “Text no numbers”, “Some numbers”, and “Breakdown and percents”. It is simple to convert each description to a number: in our example, 1 stands for “No position information” on up to 4 for “Breakdown and percents.” We applied the same conversion from text description to integer to the other three demographic attributes, where each time the higher number indicates fuller disclosure of that attribute in the report.

Adding the four integers creates a total score for each firm. The plot below shows the distribution of those total scores by firm.

The firm that did the best on this method of assessment totaled 15 points out of a maximum possible of 15 (three attributes with four levels each plus one attribute with only three levels: 3 × 4 + 3 = 15). At the other end, one firm earned the lowest score possible on each of the four attributes, for a total score of four. [Another plot could break the bar of each firm into four segments corresponding to the four demographic attributes.]
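In code, the scoring amounts to a lookup and a sum. The firms and category assignments below are illustrative, not taken from the reports:

```python
# Integer codes for the position-of-respondent attribute; higher means
# fuller disclosure. The other three attributes get analogous lookups
# (one of them has only three levels).
POSITION_LEVELS = {
    "No position information": 1,
    "Text no numbers": 2,
    "Some numbers": 3,
    "Breakdown and percents": 4,
}

# Hypothetical firms: their position-disclosure category plus the integer
# codes already assigned on the other three attributes.
firms = {
    "Firm A": ("Breakdown and percents", [4, 4, 3]),   # 4 + 11 = 15, the max
    "Firm B": ("Some numbers", [2, 1, 2]),
    "Firm C": ("No position information", [1, 1, 1]),  # 4, the minimum
}

# Total score per firm is simply the sum of its four attribute scores.
totals = {
    firm: POSITION_LEVELS[position] + sum(others)
    for firm, (position, others) in firms.items()
}
for firm, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{firm}: {total}")
```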

Our hope is that someday every law-firm research survey will disclose in its report breakdowns by these fundamental attributes together with the percentage of respondents in each. By then, perhaps another level of demographic disclosure will raise the bar yet again.

For survey ranking questions, a technique to confirm that respondents applied the scale correctly

If you are collecting data with a survey, you might ask the invitees to rank various selections on a scale: “Please rank the following five methods of knowledge management on their effectiveness using a scale of 1 (least) to 5 (most)”, followed by a list of five methods. Ranking yields more useful data than “Pick all that you believe are effective,” since the latter does not differentiate between methods: each one picked appears equally effective.

But ranking carries the risk that respondents will confuse which end of the scale means most effective and which least. They might not read carefully and therefore put the number 1 for their most effective method (after all, being Number 1 is best, right?) and the number 5 for their least effective method.

One method some surveys adopt to guard against respondents misreading the direction of the scale is to add a question after the ranking question. The follow-on question asks them to check the single most effective method. Software can then quickly confirm that the respondent understood and applied the scale correctly: the method ranked 5 on the first question should match the method checked on the second.
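A minimal sketch of that consistency check (the methods and field names are invented for illustration):

```python
# Hypothetical survey response: ranks on a 1 (least) to 5 (most) scale,
# plus the follow-on "check the most effective method" answer.
response = {
    "ranks": {
        "Wiki": 2, "Document management": 5, "Mentoring": 3,
        "Brown bags": 1, "Precedent bank": 4,
    },
    "most_effective": "Document management",
}

# The method ranked 5 should match the method checked as most effective.
# A mismatch suggests the respondent reversed the scale.
top_ranked = max(response["ranks"], key=response["ranks"].get)
if top_ranked == response["most_effective"]:
    print("Scale applied correctly")
else:
    print(f"Possible reversed scale: ranked {top_ranked} highest "
          f"but checked {response['most_effective']}")
```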