Distribution of type sizes and colors in research reports

To get my arms around patterns or conventions that might appear when survey reports choose type sizes and colors, I looked at 10 reports from different law firms. For this unscientific sample, I chose the first two pages of substantive discussion, meaning I skipped the cover, table of contents, introductory letter, and other front matter to focus on the initial commentary.

This foray counted how many different sizes of type appeared on the two pages, as well as the number of type colors. It is difficult to be sure when typefaces differ, but size is more evident, so the findings here should be directionally correct. Neither count includes any type sizes or colors in plots or tables. Whether the type was bold or italic was irrelevant.

In the plot below, the names of the reports appear on the left. The bar to the right of each name shows, for that firm’s report, the number of font sizes counted (the darker blue segment nearer the axis) and the number of colors (the red segment farther out). For example, the pages of Seyfarth Shaw RE 2017 use seven sizes and three colors. By contrast, the report at the bottom, Berger Singerman SthFlaRE 2017, uses one size and one color.
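
A plot of this kind can be reproduced as a two-segment horizontal stacked bar. Below is a minimal sketch in Python with matplotlib; only the two reports named above are included (the other eight counts are not reproduced here), and the exact colors and file name are arbitrary choices, not the report's.

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display needed
import matplotlib.pyplot as plt

# Counts for the two reports mentioned in the text; the other eight
# reports in the sample are omitted here.
reports = ["Berger Singerman SthFlaRE 2017", "Seyfarth Shaw RE 2017"]
sizes = [1, 7]   # distinct type sizes on the two pages
colors = [1, 3]  # distinct type colors

fig, ax = plt.subplots(figsize=(7, 2))
# Near (darker blue) segment: sizes; far (red) segment stacked after it
# via the `left` offset.
ax.barh(reports, sizes, color="steelblue", label="type sizes")
ax.barh(reports, colors, left=sizes, color="firebrick", label="type colors")
ax.set_xlabel("count on first two substantive pages")
ax.legend()
fig.tight_layout()
fig.savefig("type_counts.png")
```

Stacking the two counts end to end lets one bar per firm carry both measures, which is what makes the plot compact enough to list every report.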


Multiple observations from one multiple-choice plot

This complex, swoosh-crowned graphic spawned many observations. It graces page 34 of Pinsent Masons’ “Pre-empting and Resolving Technology, Media and Telecoms Disputes” (2016). The first several comments address the question asked; the others focus on the visualization of the findings.

  1. Time bounded. It is good practice to limit answers to a specified period. This question asked respondents to look back five years, rather than a sloppier version like “Have you ever used any of the following institutions?” Even clearer would have been a phrasing like “in your organization during 2012 through 2016 …”
  2. Large number of selections. We can’t know how all these 28 selections were arrayed for respondents to review or whether they were in a drop-down menu. Certainly this multiple-choice question touches the outer bounds of fecundity.
  3. Choose all that apply. It would be better if the question (or the plot or adjacent text) made clear that respondents could tick several institutions. Assuming they could, the percentages become harder to interpret, since together they can sum to well over 100%.
  4. Sizable ‘Other’. Even with 27 specific selections, quite a few other institutions must not have appeared on the list (or respondents did not spot or recognize the name of one that was on it). ‘Other’ garnered 17%, a larger percentage than 22 of the specific selections earned. It also appears that the questionnaire did not have a space for respondents to write in institutions that they used but did not see among the selections.
  5. Always want more. Law firms work hard to do the best they can, yet they are usually confronted by readers who wish they had done more. Regarding the topic of this plot, for example, a question might have sought a breakdown between disputes that the respondent’s organization commenced before these institutions and disputes that a challenger brought against it. We also are not told how many TMT (technology, media, and telecoms) disputes the respondents’ organizations engaged in per institution. Or the firm might have woven in data from the institutions themselves about how many disputes they handled during the five years. Finally, the report’s interpretation and elaboration of this set of findings is minimal.
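
The interpretation wrinkle in point 3 can be made concrete with a toy calculation: when respondents may tick several selections, each percentage is a share of respondents, not a share of ticks, so the bars need not sum to 100%. The five responses and institution names below are invented for illustration, not taken from the report.

```python
# Invented multi-select responses: each respondent may tick several
# institutions, as the question presumably allowed.
responses = [
    {"ICC", "LCIA"},
    {"ICC"},
    {"ICC", "SIAC", "Other"},
    {"LCIA"},
    {"Other"},
]
n = len(responses)
institutions = ["ICC", "LCIA", "SIAC", "Other"]

# Percentage of respondents who ticked each institution.
pct = {inst: 100 * sum(inst in r for r in responses) / n
       for inst in institutions}

print(pct)                # {'ICC': 60.0, 'LCIA': 40.0, 'SIAC': 20.0, 'Other': 40.0}
print(sum(pct.values()))  # 160.0 -- the bars total more than 100%
```

The 160% total is not an error; it simply reflects that some respondents ticked more than one institution, which is why knowing the total number of ticks (point 4 under the presentation observations below) matters.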

How Pinsent Masons chose to present their findings also leads to several observations.

  1. States the question. Highlighted in a brick-red hue at the top, the question as asked on the online questionnaire lets readers efficiently match it to the answers. Interpretation of the results becomes much easier.
  2. Alphabetical order. Normally, you expect selections to be ordered in decreasing number or percentage. Here, however, with 28 different selections, putting them in alphabetical order enables readers to find institutions more easily. Note also that ‘Other’ appears at the bottom of the stack, not in alphabetical order.
  3. Monochromatic. Many designers would have splashed rainbow hues on the bars. Color just for the sake of color would impose more visual burden but add no information. On the other hand, two colors might have distinguished “institutions” from “types of arbitration.” Alternatively, since many of the selections are based on a country or regional institution, a modest color scheme by continent might inform readers.
  4. Missing information. The report incorporates submissions from an impressive 343 participants. Question number 27 was midway through an imposing set of 55 questions in the online survey. That said, we do not know how many of them tackled this particular question or how many total ticks they made. Without those particulars, when the top bar says that 15% checked an institution, we do not know the absolute number of checks.
  5. Dramatic design. Almost all the pages of the report have some variation of the red, curvy leitmotif. We admire this eye-catcher as it breaks up the white expanse, paints a touch of color on the page, and cuddles the plot itself. An attractive, simple element with aesthetic appeal.
  6. Flipped coordinates. Because the names of the institutions are so long, this plot would not work well if those names had been crowded on the horizontal axis (or rotated sharply). Quite properly the plot designer rotated the plot (sometimes called “flipping the coordinates”). We also commend the firm for not duplicating the percentages along the bottom axis or cluttering the panel with grid lines.
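
Two of the presentation choices above, alphabetical order with ‘Other’ forced to the bottom (point 2) and flipped coordinates (point 6), can be sketched together. The percentages below are invented, not taken from the report, and matplotlib’s `barh` stands in for whatever tool the designers actually used.

```python
import matplotlib
matplotlib.use("Agg")  # draw off-screen; no display needed
import matplotlib.pyplot as plt

# Invented percentages for a handful of the 28 selections.
data = {"LCIA": 12, "SIAC": 8, "ICC": 15, "Other": 17}

# Alphabetical order for the named institutions, with 'Other' forced
# to the bottom of the stack regardless of its alphabetical position.
order = sorted(k for k in data if k != "Other") + ["Other"]

fig, ax = plt.subplots(figsize=(6, 2.5))
# barh gives long labels a horizontal baseline ("flipped coordinates");
# it draws the first item at the bottom, so reverse to read top-down.
ax.barh(order[::-1], [data[k] for k in order[::-1]], color="firebrick")
ax.set_xlabel("% of respondents")
fig.tight_layout()
fig.savefig("institutions.png")
```

Sorting the keys before plotting, rather than sorting by value, is what lets readers scan 28 long names alphabetically, and appending ‘Other’ after the sort reproduces the report’s choice of keeping it out of the alphabetical run.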