Icons strengthen understanding and remembering

Here we use the term icon for a visual element in a survey report that is intended to convey a concept. By this definition, an icon serves more than a decorative purpose; it should link to and strengthen a discussion, plot or topic and add to the reader’s understanding and recall.

In Paul Hastings China 2013 [pg. 21], the firm chose icons to represent regulatory approval, the work done to integrate companies after an acquisition, and the due-diligence deep dive that precedes a potential acquisition. The visual representations of these three concepts, made clearer by the explanatory terms to the left, complement the text, a method that appeals to different cognitive styles of readers. Some people absorb information better by reading; others absorb it better through pictures.[1] Additionally, people store concepts in memory differently depending on the style of presentation.

Hogan Lovells Brexometer 2017 [pg. 9] turns to the well-known images of happiness and sadness. They stand atop a small table of survey results. Ever since the movie Forrest Gump, these stylized faces have become ubiquitous.


On page 9 of Pinsent Masons Infratech 2017, a closed-circuit TV camera films the lower right-hand corner of the page. It is not clear what that icon conveys, but a careful examination of the report shows that it is dotted with meaningful icons. You see a helicopter and an airplane on page 7, a stylized column chart with a trend arrow on page 14, light bulbs throughout the report that signify insights, a trio of humanoids on page 22, four coins in a stack eight pages later, an arrow in a target on page 34, and a trophy on page 38. Icon count no more.

Notes:

  1. The day will come when reports include audio material for those who are aurally inclined.

Challenges choosing categories, e.g., for revenue demographics

When a law firm invites its contacts to take a survey, those who accept probably form an irregular group in terms of the distribution of their corporate revenue. By the happenstance of self-selection and how the invitation list was compiled, their revenue can range from negligible to many billions of dollars. When the firm’s report describes the revenue characteristics of the group, the firm must decide what ranges of revenue to use.

The firm might slice the revenue categories so that each holds roughly equal numbers of participants. Doing this usually means that the largest category spans a wide range of revenue — “$3 billion and up” — whereas the smallest category tends to be narrow — “$0 to $100 million.” Such an imbalance of ranges results from the real-world distribution of companies by revenue: lots and lots of smaller companies and a scattering of huge ones (the distribution is long-tailed to the right). Stated differently, the corporate revenue pyramid displays a very, very broad base.
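A minimal sketch of equal-count binning with pandas, using invented revenue figures (in $ millions), shows how the top bin stretches:

```python
import pandas as pd

# Invented revenue figures (in $ millions) for twelve participants
revenue = pd.Series([40, 75, 120, 250, 400, 650, 900,
                     1500, 2800, 3500, 8000, 22000])

# qcut splits at quantiles, so each bin holds roughly the same number
# of participants; note how wide the top bin's range turns out to be
equal_count_bins = pd.qcut(revenue, q=4)
print(equal_count_bins.value_counts().sort_index())
```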

Alternatively, a law firm might choose to set the revenue ranges at specific values, perhaps “$0-1 billion, $1-2 billion, $2-3 billion” and so on. The categories may make sense a priori, but binning revenue this way can result in very uneven numbers of participants in one or more of the categories, depending on what categories are chosen, how narrow they are, and the vagaries of who responds.
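By contrast with equal-count bins, a sketch of fixed ranges with pandas (same invented figures) shows how uneven the counts can become:

```python
import pandas as pd

# Same invented revenue figures (in $ millions)
revenue = pd.Series([40, 75, 120, 250, 400, 650, 900,
                     1500, 2800, 3500, 8000, 22000])

# cut uses fixed boundaries ($0-1B, $1-2B, $2-3B, $3B+), so the
# number of participants per bin depends on who happened to respond
bins = [0, 1000, 2000, 3000, float("inf")]
labels = ["$0-1B", "$1-2B", "$2-3B", "$3B+"]
fixed_bins = pd.cut(revenue, bins=bins, labels=labels)
print(fixed_bins.value_counts().sort_index())
```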

Davies Ward Barometer (2010) [pg. 10] explained the corporate revenue ranges of its respondents in words. These are unusual ranges; the distribution skews toward remarkably small companies. Note from the last bullet that almost one out of three survey respondents “are not sure of their organization’s annual revenue.” Perhaps they do not want to disclose that revenue because they work for a privately held company. Or perhaps the organization has no “revenue” but rather, as a government agency, a budget allocation.

With a third approach, a firm fits its revenue categories to its available data set so that plots look attractive. You can guess when a firm selects its revenue categories to fit its data set. Consider the plot below from DLA Piper’s compliance survey (2017) [pg. 26]. The largest companies in the first category reported less than $10 million in revenue; the next category included firms with up to ten times more revenue, but about the same percentage of respondents; the third revenue category again spanned companies with up to ten times more revenue, topping out at $1 billion, but drew close to the preceding percentages. Then we see a small category with a narrow range of $400 million, followed by the two on the right with half the percentages of the left three. It appears that someone tried various revenue categories to find a combination that looks sort of good in a graphic.

The fullest way to describe the revenue attributes of participants is a scatter plot. From such a plot, which shows every data point, readers can draw their own conclusions about the distribution of revenue.
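A minimal sketch with matplotlib, again using invented figures; a log scale keeps the many smaller companies from piling up at the bottom:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Invented revenue figures (in $ millions), one per respondent
revenue = pd.Series([40, 75, 120, 250, 400, 650, 900,
                     1500, 2800, 3500, 8000, 22000])

fig, ax = plt.subplots()
ax.scatter(range(len(revenue)), revenue.sort_values())
ax.set_yscale("log")  # long-tailed data reads better on a log scale
ax.set_xlabel("Respondent (ordered by revenue)")
ax.set_ylabel("Revenue ($ millions, log scale)")
plt.show()
```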

Drop-down lists for multiple-choice questions

With a survey question in the style of “In the coming year, how will spending on cybersecurity at your law department likely change?”, it is easiest for the surveyor to make respondents choose from a set of answers devised by the firm (we refer to them as selections). The selections for that question might be “Increase more than 10%”, “Increase 6-10%”, “Increase 1-5%”, and on down to “Decrease more than 10%”.

It is easier for the firm to have pre-defined selections than to give respondents free rein to type in their answer as they see fit. The analyst will endure much pre-processing to clean the inevitable mishmash of styles respondents come up with — even if the questionnaire is laden with explicit instructions. People ignore guidance such as “Enter only numerals, with no percent signs or the word ‘percent’; do not write ranges such as ‘3-5’ or ‘4 to 6’; do not add ‘approx’ or ‘~’.” No matter how clear you are, respondents will often jot in whatever they want.
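A minimal sketch of that pre-processing in Python, assuming the messy answers arrive as raw strings (the examples are invented):

```python
import re

# Invented examples of the mishmash respondents type in
raw_answers = ["5%", "approx 10", "~7", "3-5", "4 to 6", "12 percent"]

def clean_answer(text: str) -> float:
    """Reduce a messy free-text answer to one number (midpoint for ranges)."""
    text = re.sub(r"(?i)approx|percent|[~%]", "", text).strip()
    parts = re.split(r"\s*(?:-|to)\s*", text)  # treat "3-5" or "4 to 6" as a range
    numbers = [float(p) for p in parts if p]
    return sum(numbers) / len(numbers)

print([clean_answer(a) for a in raw_answers])
# [5.0, 10.0, 7.0, 4.0, 5.0, 12.0]
```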

Page 30 of Winston & Strawn’s 2013 report on risk displays the results of this question: “Your parent company’s annual revenues/turnover for the most recent fiscal year are:” Given the plot’s six categories of revenue, the questionnaire likely laid out those categories to choose from. Imagine two rows of three selections each. The selections were likely in order from the largest revenue category to the smallest, and there was probably a check box or circle to click next to each one. See how the plot below displays the data.

With only six selections, the questionnaire can efficiently lay them all out for consideration. Instead of displaying all of the answer choices beneath the question, a drop-down question shows a text box under the question and invites respondents to click on a down arrow to review a scrollable list. They pick their (single) answer from that list and the answer is filled in for them. Drop-down questions tend to appear when there is a long list of selections, such as states of the United States, countries in Europe, or months of the year. Almost all drop-downs can complete a partial entry, so that if you type “Neb”, Nebraska shows up and populates (fills in) the text-entry box.
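Under the hood, that completion is simple prefix matching. A toy sketch (the state list is truncated for brevity):

```python
# Truncated list of drop-down entries
STATES = ["Nebraska", "Nevada", "New Hampshire",
          "New Jersey", "New Mexico", "New York"]

def complete(prefix: str) -> list[str]:
    """Return the entries that start with what the respondent has typed."""
    return [s for s in STATES if s.lower().startswith(prefix.lower())]

print(complete("N"))    # still ambiguous: all six entries match
print(complete("Neb"))  # ['Nebraska'] -- narrow enough to fill in
```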

Specialists in survey design recommend that drop-down questions be used sparingly. The examples above (states, countries, months) make sense because they have numerous choices and respondents are not evaluating which choice is best: one answer and only one answer is the right one. Demographic questions are ripe for drop-down treatment.

For most multiple-choice questions, especially those with concepts and jargon, showing all choices at the same time gives respondents context as they answer the question. What you hope they are doing is considering the selections as a group and evaluating which one (or more) they favor in comparison to the others.

Winston & Strawn might have elected to use a drop-down list for the corporate revenue question. That list could have had many more revenue categories than six, which would have collected revenue more precisely yet enforced a consistent style for the answers. On the other hand, that arrangement would have pushed respondents to think about their company’s revenue, or even to research it, and it would have taken them more time to spot the corresponding category in the drop-down list. Finer categories may also conflict with some respondents’ desire to remain anonymous or not to disclose sensitive information. Someone working at a privately held company might be willing to click on the broad “$1-5 billion” choice but not want to disclose a more specific revenue number.

Challenge of clear selections if they cover complicated ideas

When you write non-numeric selections of a multiple-choice question, you want them to be different from each other and cover the likely choices as completely as possible. Yet at the same time you don’t want too many selections. You also would like them to be close to the same length. We have compiled other good practices.

The selections also need to be quickly and decisively understood by respondents, who do not want to puzzle over the meanings and coverage of terms. Partly that means you need to cure ambiguities, but partly it means choosing terms in selections carefully so that nearly everyone interprets them the same way at first reading.

We found an instructive example in one of the law-firm research surveys. Did the selections in the plot below achieve quick clarity?[1]

We wonder whether most of the general counsel understand “Partner ecosystem”, let alone in the same way. Should two notions be joined, as in “New sources of revenue and new business models”? Some companies might pursue new sources of revenue or a new business model, but not both. Likewise, why pair “Clean energy and environmental regulation”? They could be seen as two separate trends. The selection “Geopolitical shifts” feels so broad that it invites all kinds of interpretations by respondents.

This question presented the survey designers with an impossible task. First, they had to pick the important trends — and what happened to “Demographic changes”, “Big data”, “Urbanization”, and “Taxes”, to pick a few others that could have been included? Second, they had to describe those multifaceted, complex trends in only a few words. Third, those few words needed to fix a clear picture in lots of minds, or else the resulting data represents a blurred and varied clump of subjective impressions.

Notes:

  1. We do not know if the left-hand-side labels mirror the selections on the questionnaire. Some questionnaires include more detail, and even explanations, while the report gives only an abbreviation of the selection.

Mandatory annual disclosure of number of lawyers and revenue by U.S. law firms

Why can’t the American Bar Association (or State Bars) require U.S.-based law firms above some modest size in number of lawyers to report their fiscal-year revenue along with a snapshot of the number of partners, associates, and support staff on the last day of the year? The justification for that disclosure would be that clients, law school graduates, or lawyers considering a job change, among others, would have comprehensive and reliable data on at least two key attributes of firms: size and revenue.

Yes, there are definitional issues, such as what the term “partner” means in the multi-tiered law firms of today and what makes up “revenue”. Yes, there might be no way to confirm the accuracy of the self-reported numbers, but law firms that would have to comply have their books audited or reviewed by accountants, and the accountants could attest to the reasonable accuracy of the four numbers. Yes, I do not know what enforcement mechanisms might be available. And yes, firms may fear that the initial data request slips down the proverbial slope to more and more.

Such concerns would need to be debated, but they can be resolved. If firms with more than 30 lawyers fell under this mandate, then perhaps 1,200 to 1,500 law firms would each year turn in four numbers that they already know. No work would be required except going to an online site and filling in the numbers. The ABA or a third party could consolidate and publish that data, and the legal industry would benefit greatly.

The label “cognitive computing”

“Cognitive computing” may be just another marketing buzzword, but legal managers will encounter it. According to KMWorld (Oct. 2016) and its white paper, “cognitive computing is all about machine learning, with some artificial intelligence and natural language processing.” You can learn more from the Cognitive Computing Consortium, although that group does not yet have a legal sub-group.

In that regard, however, a LinkedIn user group called Artificial Intelligence for Legal Professionals has a couple of hundred members.

Create a choropleth to display data by State, country, or region

When legal managers want to present data by State or by country, they can make good use of what is called a “choropleth”. Choropleths are maps that color their regions in proportion to the count or other statistic of the variable being displayed, such as the number of pending lawsuits per State or amounts spent on outside counsel by country. Darker colors typically indicate higher values in a region; lighter shades indicate lower ones.
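Modern plotting libraries make choropleths straightforward. A minimal sketch with plotly, using invented participant counts (not Exterro’s actual figures):

```python
import plotly.express as px

# Invented counts of survey participants by state (two-letter postal codes)
data = {"state": ["CA", "NY", "TX", "IL", "FL"],
        "participants": [21, 15, 12, 8, 5]}

fig = px.choropleth(data, locations="state", locationmode="USA-states",
                    color="participants", scope="usa",
                    color_continuous_scale="Blues")  # darker shade = more participants
fig.show()
```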

Below is an example of a choropleth that appears in Exterro’s 2016 Law Firm Benchmarking Report at page 8. It shows how many of the 112 survey participants come from each state.

[Choropleth from Exterro’s 2016 Law Firm Benchmarking Report: survey participants by state]

California is the darkest with 21; the grey states had no participants. The table below the map, which is truncated in this screenshot, gives the actual numbers by State, so someone could carp that the choropleth sweetens the eye but adds no nutritional information. Still, it looks pretty good and it is an unusual example of an effective graphical tool.