Footnotes in survey reports

Footnotes are uncommon in law firm research surveys, but they do show up. Two examples: HoganLovells FDI 2014 has nearly 100 footnotes in its 100 pages, and DLA Piper Debt 2015 has 20 in its 30 pages.

Footnotes commonly provide source information, such as note 10 of the DLA Piper snippet below [pg. 14]. The first note illustrates how to include material that the law firm considers secondary to the main text but worth including. Sometimes footnotes present counter-arguments to statements in the text. Whatever their purpose, footnotes add to the visual complexity of the page and consume some of its "real estate." On the other hand, they keep clutter out of the text proper.

Here is the DLA Piper snippet.

Some reports put a border above their footnotes. The HoganLovells example (below) places a column-wide border above the footnotes, whereas DLA Piper (above) uses a partial demarcation line. Typically the footnote's font or size differs from the main text's, and usually the report numbers its footnotes consecutively from the start.

Another design choice would be to put footnote material in the margin next to the text being footnoted. That style demands more sophisticated layout capabilities and is less familiar to readers. As for a third design choice, none of the survey reports found so far use endnotes in lieu of footnotes. Endnotes feel even more scholarly, and they require the reader to flip to the back of the report to peruse them.

Here is a second example of footnote style, from HoganLovells [pg. 12].

Stepping back, footnotes give an academic air to a report. They suggest that the law firm has thoughtfully considered its hierarchy of ideas, promoting some and demoting others. They exude care and intellectual sophistication, but they complicate the demands on readers and might distract them.

Quotes are handled with varying styles in different reports

Regarding report quotations, aside from frequency, their content and layout offer another way to differentiate reports. Choosing five reports at random, we can draw some preliminary observations about both content and design choices. The five reports are Allen Matkins CommlRE 2018, Dykema Gossett MA 2017, Mayer Brown Privacy 2015, Morrison Foerster GC Disruption 2017, and Squire Sanders Retail 2013.

Neither Mayer Brown nor Dykema Gossett made use of any quotations (at least none set off from the main text). One might assume, therefore, that they neither asked for free-text responses in their questionnaires nor interviewed any of the respondents. They also chose not to obtain comments from partners of the firm or to draw on material from other sources.

Taking a different tack, Squire Sanders and Morrison Foerster included in their reports what I have termed "anonymous quotes." The image below shows a Morrison Foerster anonymous quote that is set in a different typeface from the main text, has quotation marks and a border around it, and nestles in a call-out box. It credits no identified person with the quote, hence it is anonymous. That report scatters a dozen anonymous quotes through its 26 pages.

Someone reading the anonymous quotes would be forgiven for wondering whether the comments were manufactured. They seem simply to emphasize points that the firm wanted to stress. When Squire Sanders inserted an anonymous quote, they sometimes used red font to highlight it.

The best use-of-quotes award goes to Allen Matkins. They cite specific individuals at named companies, sometimes placing their remarks alone on a full-page photo or putting a box around them to make them more noticeable. Here is an example of this design-cum-content style.

On one page, the firm placed four boxes that contain quotes from three partners of the firm and one person at the co-sponsoring university.

Widths of borders on report pages

The width of borders varies significantly from report to report, but some regularities can be discerned. To measure the white space around text, plots, and visual elements, which I think of as "border space," I used a crude method on five reports. For each, I fit the report pages to the size of my monitor (using the Foxit "Fit to page" command) and then measured the border widths in centimeters with my handy-dandy ruler. For all of these rough measurements, I ignored headers and footers; my intent was to measure the space between the edge of the document and the main text or plot.

The subjects of this faux-precise empirical investigation were Allen Matkins CommlRE 2018, Dykema Gossett MA 2017, Mayer Brown Privacy 2015, Morrison Foerster GC Disruption 2017, and Squire Sanders Retail 2013. Two of them leave relatively narrow borders on either side, at approximately 6 cm. Two others leave side borders roughly twice as wide. As for borders at the top and bottom of pages, three were in the 12 cm range, while one had a larger top border (about 18 cm), though most of it was filled with a header. The actual widths are larger on an 8.5-inch-wide sheet of paper.

Here is the top of a page from the Allen Matkins report. By my reckoning it uses 6 cm side margins and a 12 cm top margin. By the way, look carefully and you can make out a subtle, colored watermark.

While the top and side borders were generally uniform on each page of a report, there was more variability at the bottom, as plots or design elements sometimes sat there and extended irregularly down the page, even as far as the edge. Also, the Squire Sanders report placed its wider border on the outer side of each page. In other words, the right-hand pages (the odd-numbered pages) had the broader right margin (18 cm by my ruler), while the left-hand pages had the broader left margin. Squire Sanders often placed quotes in red font in those broad margins. Here is an example [pg. 24].

Unusual plots — Part I

Every now and then, amid the clonish hordes of bar and pie charts, an unusual plot breaks the boredom. Unusual plots may not be the optimal way to present data, but they certainly pique interest and suggest that creative thinking about how to communicate data graphically deserves more attention. Herewith a few examples of unusual plots.

Morrison Foerster Consumer 2017 [pg. 5] added a stylized picture that evokes credit cards and identity theft to its column chart.

The tried-and-true plot styles appear repetitively because they are serviceable, familiar, and straightforward to create. Forcing the data into an unexpected plot style may not serve readers well. On the other hand, a bespoke plot style or twist on an old familiar style may be just what is needed. If law firms do not explore new ways to depict the data they collect, they miss an opportunity.

Morrison Foerster Privacy 2017 [pg. 5] hit upon an interesting visualization. The spectrum moves from low values on the left to high values on the right. Note that they use "Figure 2" notation and place the label at the bottom.

Norton Rose Lit 2016 resorted to the Rube Goldberg figure below to explain the meaning of one-tenth of a percent. While you can admire the ingenuity of the effort, you have to wonder whether it adds value for most readers of law-firm research survey reports. If they do not understand percentages, they will have a hard time extracting much from survey reports.

Berwin Leighton Arbappointees 2017 [pg. 7] created a gauge to display its findings. Everyone is familiar with fuel gauges, for example, so the general point is communicated to the reader, albeit not in the most pellucid format.

Decorative elements in survey reports

Four law-firm research surveys include examples of what might be called "decorative elements." Such elements gussy up the pages of the report, attract readers, and contribute visual appeal. A graphical minimalist such as Prof. Edward Tufte might disparage decorative elements as eye-candy without informational nutrition, but others are more aesthetically minded and sensitive to the importance of reader engagement (dare we say, the entertainment value of survey reports?).

Law firms want to leave a good impression: good looks and visual creativity linger pleasingly in the mind. Besides, the designers who lay out a report don't measure themselves simply by the ratio of ink to information. Artistic sensibilities and design values contribute to a report. Lawyers don't think in terms of what catches the eye, but desktop publishers do.

Norton Rose Lit 2016 [pgs. 4-5] nestled a grey piece of a jigsaw puzzle behind its text. The image is hard to spot, but look carefully to the right of the words "litigation trends" in the fourth line of text: a white space and a grey swatch mark part of the slightly tilted image. Like a musical joke tucked in by a composer, a decorative element can amuse and entice a reader who appreciates it.

Later, in the same report, the firm flew a plane across the lower part of a page [pg. 23]. That visual tidbit may add no informational value, but it draws the eye and tickles the fancy.

Morrison Foerster Privacy 2017 headed each page with a complex visual. Critics might take the firm to task on the grounds that the header distracts; it’s blatant and complex. Others may admire the combination of evocative pictorial elements and a bit of lightness amid a cerebral presentation.

Berwin Leighton Arbvenue 2014 [pg. 10] added a cartoon to one plot. Whether or not you connect the small figure and the data being reported, let alone whether the cartoon makes a point any clearer, at least you notice it. If you notice it, you might also pay attention to the data to its left.

Morrison Foerster Compliance 2015 [pg. 12] bordered its pages with a wash of blue. As with all the decorative elements shown here, you could do away with the blue shading on either side and lose nothing, except a soupçon of color that pleases the eye. To rephrase an old saying, "All metrics and no magic makes a survey a dull report."

Organizational methods in survey reports — an overview

Law firms can choose from a plethora of methods to help readers recognize the organization of the firm’s survey report. To catalog some of those many techniques, we reviewed four reports: Clifford Chance Cross-border 2012, DLA Piper Compliance 2016, Norton Rose Lit 2016, and Berwin Leighton Arbvenue 2014. Among the many methods, the four reports employed at least nine.

Table of contents. Each of the four reports begins with a table of contents. For this traditional map, one of them (Clifford Chance) uses color coding for each section in the table of contents and again in the corresponding report section. Another firm (Norton Rose) embellished its table of contents with graphics. The Berwin Leighton table was a model of simplicity, as we show below.

Introductory letter. Three of the four reports followed the table of contents with a letter from one or more partners of the firm. The letters run a page or two and they highlight the context and importance of the survey’s topic and sometimes how the firm decided to present the material garnered from its survey.

Executive summary. All four reports pull together their principal findings into a one-to-three-page executive summary (one referred to it as “Key Findings”). This technique to consolidate and digest the contents of the report helps readers take away the most important points.

Call-out box. Clifford Chance [pg. 24] uses boxed text to emphasize observations (see the graphic below). More generally, whenever a report puts a border around certain text, and perhaps changes the font color or style (and sometimes locates the call-out box partially in the margin), those visual cues point the reader to information that the law firm deems important.

Case studies. Of the reports, only Clifford Chance's made use of lengthy case studies. Each runs about a page; some focus on the findings for particular sectors, while others drill down into a specific company based on an interview with one of its senior executives.

Visualization guides. The Norton Rose report [pgs. 4-5] creatively shows stylized jigsaw puzzle pieces to explain the framework of how the firm presents its findings. We include below a portion of this method.

Divider pages. Norton Rose published the longest report, at 48 pages, and perhaps because of that length it inserted divider pages before three sections [pgs. 8, 22, and 38]. The divider pages stand out because of their color and the few, bold words that introduce the following section.

Conclusion. Clifford Chance poured many methods of presenting its findings into its report, and ended with yet another: a conclusion. A conclusion lets the firm wrap up its main findings in a closing section.

Summarized material. This general method covers many variations. For example, you could think of each graphic plot as a summarization of findings. In Norton Rose [pg. 21], as another example, a table lays out "Drivers of Disputes." DLA Piper Compliance has a prominent section called "What Chief Compliance Officers Need to Know." In a third variation, Berwin Leighton Arbvenue 2014 created an infographic [pg. 4]. Many other ways are available to organize ideas and map what is being presented.

NAICS classification of industries would help surveys four ways

If only there were a standard way to describe survey participants by industry … There is! Law firms could identify, analyze, and report on their participants by the North American Industry Classification System (NAICS) categories. This system has superseded the venerable SIC (Standard Industrial Classification) categories. The NAICS offers a range of two-digit classifications that map well to the proliferation of industry/sector designations seen in law firm reports. Those classifications, together with the three- and four-digit elaborations on them, easily suffice for law-firm research surveys.

If NAICS codes became the convention for law firm research surveys, at least four benefits would follow.

Mash-up data. For data analysts, "mash-up" describes the process of melding two sets of data. If firms used the NAICS, other data would become available for analysis. Longitudinal data sets (those maintained over a period of time) that the U.S. government has collected by NAICS code can supplement a survey with the number of businesses in an industry, more detail about those businesses, the number of their employees, and so forth. Everyone would benefit from the richer, more insightful analyses that various mash-ups make possible.
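To make the mechanics concrete, here is a minimal mash-up sketch in Python with pandas. All column names and figures are hypothetical; the point is that once both tables share a NAICS key, a single join enriches every survey response with population-level data.

```python
import pandas as pd

# Hypothetical survey responses, keyed by two-digit NAICS code.
survey = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "naics2": ["52", "54", "52", "31"],
    "legal_spend_musd": [4.2, 1.1, 7.5, 0.9],
})

# Hypothetical government reference table, keyed the same way.
census = pd.DataFrame({
    "naics2": ["31", "52", "54"],
    "industry": ["Manufacturing", "Finance & Insurance", "Professional Services"],
    "us_firm_count": [248_000, 237_000, 841_000],
})

# The mash-up itself: a left join attaches the population-level
# columns to each survey response for richer analysis.
merged = survey.merge(census, on="naics2", how="left")
print(merged)
```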

Consistency among surveys. If law firms adopted this standard classification system, readers of their reports and researchers would find it far easier to compare results across industries. In the current disorder, so long as each firm defines its industries idiosyncratically, comparisons and meta-analyses are much harder to carry out, if not impossible.

Improving the representativeness of the sample data. Because the NAICS data sets provide law firms with reliable counts of companies by industry, firms could deploy techniques to make their convenience samples more representative of the actual distribution of U.S. businesses. One method of doing this, which we explain elsewhere, is called "raking." As sample data is transformed to more closely resemble population data, deeper statistical analyses become available.
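For the curious, here is a minimal sketch of raking (iterative proportional fitting) using invented respondents and population targets. A real run would draw its industry margins from NAICS-based counts, but the mechanics are the same: repeatedly rescale each respondent's weight until the weighted sample margins match the population margins.

```python
from collections import Counter

# Hypothetical respondents and population targets; a real run would
# use NAICS-based industry shares from government data.
respondents = [
    {"industry": "Finance", "region": "West"},
    {"industry": "Finance", "region": "East"},
    {"industry": "Tech", "region": "West"},
    {"industry": "Tech", "region": "West"},
]
targets = {
    "industry": {"Finance": 0.6, "Tech": 0.4},
    "region": {"West": 0.5, "East": 0.5},
}

weights = [1.0] * len(respondents)
for _ in range(50):  # iterate until the margins settle
    for dim, shares in targets.items():
        # Current weighted total of each category on this dimension.
        totals = Counter()
        for r, w in zip(respondents, weights):
            totals[r[dim]] += w
        grand = sum(totals.values())
        # Rescale each weight so the sample margin matches the target.
        weights = [
            w * shares[r[dim]] * grand / totals[r[dim]]
            for r, w in zip(respondents, weights)
        ]

print([round(w, 3) for w in weights])  # raked weights, one per respondent
```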

Impute missing values. "Imputation" is the term statisticians use for filling in missing values. If a law firm has data about its participants' NAICS codes plus other information, such as revenue, the firm could impute the number of employees at each company. An explanation of that methodology to supplement data can be found elsewhere, but it is available to a firm so long as the industry coding conforms to the NAICS. For example, a firm that collects revenue, industry code, and state can impute a number of employees even more accurately. Fuller data sets enable better analyses.
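As one crude illustration, assuming hypothetical columns, the sketch below fills each missing employee count with the median among respondents sharing the same NAICS code; a fuller implementation might instead regress employees on revenue and state within each code.

```python
import pandas as pd

# Hypothetical participant data; some employee counts are missing.
df = pd.DataFrame({
    "naics2":       ["52", "52", "52", "54", "54"],
    "revenue_musd": [120.0, 95.0, 110.0, 30.0, 28.0],
    "employees":    [800, None, 750, 150, None],
})

# Crude imputation: replace each missing employee count with the
# median employee count of respondents in the same NAICS code.
df["employees"] = df.groupby("naics2")["employees"].transform(
    lambda s: s.fillna(s.median())
)
print(df)
```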

Frequently-used terms

Several terms crop up so frequently here that readers deserve definitions of them as well as mention of alternative phrasings or synonyms that might appear.

  • Survey. Whether online or in hard copy or by electronic voting or during an interview, any questionnaire that a law firm administers to collect information from participants. Sometimes it is referred to as a “poll” or a “straw vote.”
  • Contact. Anyone who is invited to participate in a survey. We may sometimes refer to them as “invitees,” “clients” or “prospects.”
  • Participant. Anyone who starts a survey. A participant who submits answers to a survey is a “respondent.”
  • Respondent. A participant in a survey who submits the survey.
  • Company. The organization of a person who takes a survey. Mostly a company will be an incorporated entity, but the term also applies broadly to partnerships, not-for-profit organizations, governmental entities, and any other entity.
  • Report. The electronic file or hard copy publication that contains a survey’s findings and analysis. Most typically an electronic report is in PDF format. It could, however, be in a Word file, PowerPoint deck or other formats.
  • Text. Whatever is written or listed in the survey’s report.
  • Graphic. A plot or table that displays data. We also refer to them as "graphs." If an element of a report does not convey data, then it would be text or a "design element."
  • Design element. Anything in a report that is neither text nor a graphic, such as borders, images, pictures, lines, shapes, glyphs, or other elements.

Four reasons why demographic questions usually lead off a survey

By convention, the first few pieces of information asked of respondents on a questionnaire concern demographic facts (title, industry, location, revenue). The reasons for this typical order might be termed psychological, motivational, practical, and instrumental.

Psychologically, law firms want to know about the person who is providing them data. Is this person higher or lower in the corporate hierarchy? Does this person work in an industry that matters to the firm or matters to the survey results? They want to know that the person is credible, knowledgeable, and falls into categories that are appropriate for the survey. To satisfy that felt need, designers of questionnaires put demographic questions first.

When a questionnaire starts with questions that are easy to answer, such as the respondent's position, the industry of their company, and its headquarters location, it motivates the respondent to breeze through them and charge on. They sense that the survey is going to be doable and quick. Putting the demographic questions first, therefore, can boost participation rates and reduce attrition.

A practical reason to place the demographic questions at the start is that doing so allows the survey software to filter out or redirect certain respondents. If an early question concerns the respondent's level, and their choice falls below the firm's desired level of authority, the survey can either thank the respondent and close at that point or route them down a different question path (the logic is sketched below). Vendors who conduct surveys often cull out inappropriate participants, but law firms rarely take this step. Rather, they usually want as much data as they can get from as many people as will take part.
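Here is a minimal, hypothetical sketch of that screening logic in Python. Actual survey platforms configure branching through their own rule builders rather than code, and the titles and routing targets here are invented.

```python
# Hypothetical set of titles the firm considers senior enough.
QUALIFYING_TITLES = {"General Counsel", "Deputy General Counsel",
                     "Chief Legal Officer"}

def route_after_demographics(title: str) -> str:
    """Decide where the survey goes once the demographic block is done."""
    if title in QUALIFYING_TITLES:
        return "main_questionnaire"   # continue to the substantive questions
    return "thank_you_and_close"      # screen out, or branch elsewhere

print(route_after_demographics("General Counsel"))  # main_questionnaire
print(route_after_demographics("Paralegal"))        # thank_you_and_close
```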

Fourth, if the demographic questions are at the start of the questionnaire, then even if the participant fails to complete or submit the survey, the survey software may still capture valuable information. This could be thought of as an instrumental reason for kicking off a questionnaire with demographic questions. These days, the law firm particularly wants to know the participant's email address and title. That information probably flows into a customer relationship management (CRM) database.

Four techniques to make selections more clear

When someone creates a multiple-choice question, they should give thought to where and how to explain the question's selections. People spend valuable time wordsmithing the question, but that is not the end of the matter. Even the invitation to survey participants may explain background and key terms that shed light on selections. Beyond that, at least four other options present themselves in the service of selections that can be answered without interpretive difficulty.

First, a firm’s survey software should allow the designer to place an explanatory section before a question or series of related questions. That section can elaborate on what follows and guide readers in choosing among the selections. This technique has been overlooked in many of the questionnaires done for law firm research surveys.

Second, the question itself can be written carefully so that participants more easily understand the selections that follow. [This is not referring to directions such as "check all that apply" or "pick the top 3." The point here pertains to the interpretation and meaning of the multiple choices.] For example, the question might make clear that answers should cover the previous five years. Or the question might define "international arbitration" in a certain way to distinguish it from "domestic arbitration." The overarching definitions and parameters laid out in the question inform each of the selections that follow.

Third, as a supplement to the main question, some survey software enables the designer to add instructions. Using NoviSurvey, for instance, the instructions appear in a box below the question and offer additional explanatory text. Instructions commonly tell participants not to enter dollar signs or text in a numeric field, or to enter dates in a specific format, but they can also explain the selections. For example, the instructions might note that the first four selections pertain to one general topic and the next four to a second topic. Or the instructions might differentiate between two selections that would otherwise be confused or misconstrued.

Finally, even without an explanatory section, guidance from the question itself, or illumination in instructions, the selections themselves can embed explanatory text. Any time a selection includes an "i.e." or an "e.g.," the person picking from the selections should understand it better. Sometimes a question will say "… (excluding a selection shown above)" to delineate two choices.

As a by-product, the more you explain the selections through these means, the more you can abbreviate the selections themselves. The interplay among these four techniques to disambiguate selections, and to present them more directly and clearly, allows careful designers to craft selections more precisely and usefully.