Attributes of infographics, standardized, in surveys by law firms

To inquire further into what law firms include in their infographics, we converted four of the elements (numbers, plots, words, and concepts) into their respective counts divided by the percentage of the page the infographic occupies. Without that standardization, larger infographics would have larger counts, but not necessarily more cognitive density per page.

The next four plots array the six infographics we have been working with from the lowest measure on the left to the highest on the right, with the average of that element shown in a different color.

A first impression from these plots might be that the infographics do not vary all that much on these four elements. However, the range from lowest to highest is around 1-to-2 for words and concepts, whereas it is 1-to-5 for numbers and 1-to-12 for plots.

The quartet of plots relies on numbers of very different magnitudes; words, for example, are much more numerous than concepts. If we standardize all the values after they have been divided by the page percentage (to standardize values, you divide them by their mean), then the absolute values, as adjusted for the amount of the page the infographic occupies, are transformed to the same scale. The result is the next plot, where each survey’s standardized value for each element appears in a separate segment.
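The two-step adjustment described above can be sketched in a few lines of Python. The counts and page shares below are hypothetical, not figures from the six surveys:

```python
# Sketch of the standardization described above (hypothetical data).
# Step 1: divide an element's raw count by the fraction of the page
# the infographic occupies. Step 2: divide each adjusted value by the
# mean across infographics so every element lands on the same scale.

counts = {"Firm A": 24, "Firm B": 40, "Firm C": 12}            # raw counts of one element
page_share = {"Firm A": 0.50, "Firm B": 0.80, "Firm C": 0.30}  # fraction of page occupied

# Step 1: cognitive density per full page
adjusted = {firm: counts[firm] / page_share[firm] for firm in counts}

# Step 2: divide by the mean; values near 1 mean a typical density
mean_adjusted = sum(adjusted.values()) / len(adjusted)
standardized = {firm: adjusted[firm] / mean_adjusted for firm in adjusted}
```

By construction the standardized values average to 1, which is why infographics with "similar profiles" cluster near 1 in the plot.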

What we can conclude from this different perspective is that on words and concepts, all of the infographics have similar profiles (close to 1). On plots, however, two of the surveys are very skinny (Baker McKenzie and McDonald Hopkins). Likewise, on the use of numbers, two of them have an abundance, relatively speaking (McDonald Hopkins and Hogan Lovells).

Research surveys by the ten largest law firms

My initial data set of law-firm research surveys developed serendipitously. As I gathered legal industry surveys over the past couple of years, I found several that were sponsored by law firms. After I started to analyze that set, it occurred to me to look at the largest firms in the world.

According to the American Lawyer, and in order of declining numbers of lawyers, the ten most gargantuan firms are Baker McKenzie, DLA Piper, Norton Rose Fulbright, Hogan Lovells, Jones Day, Latham & Watkins, White & Case, Greenberg Traurig, Morgan Lewis, and Sidley Austin. I searched Google for research surveys sponsored by each of them using a simple term: the first two names of the firm plus the word “survey,” e.g., “Baker McKenzie survey”. I then read over the first five or six pages of results and did my best to spot research surveys.

One could certainly shoot holes in this methodology. I should point out that I treated a series of surveys hosted by a firm over several years as a single survey. I also did not include in my count compilations by any of the firms of laws or regulations, which some law firms call surveys. It might also be that terms like “poll,” “straw vote,” or “questionnaire” would have uncovered other examples.

For several of the firms I already had at least one survey, and I combined what I had with what I found online. The plot below shows the combined results of my poking around online and my preexisting surveys. It plots the number of research surveys found per thousand lawyers at the firm. The per-thousand-lawyers standardization accounts for the likelihood that firms with more lawyers produce more research surveys. With this standardization, a 2,000-lawyer firm with two surveys has outproduced a 4,000-lawyer firm with three surveys on a surveys-per-lawyer basis.
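The per-thousand-lawyers figure is a simple ratio; the function and inputs below are illustrative:

```python
def surveys_per_thousand(num_surveys: int, num_lawyers: int) -> float:
    """Research surveys per 1,000 lawyers at a firm."""
    return num_surveys / (num_lawyers / 1000)

# The 2,000-lawyer firm with two surveys outproduces the
# 4,000-lawyer firm with three surveys, per lawyer.
smaller = surveys_per_thousand(2, 2000)  # 1.0 surveys per thousand lawyers
larger = surveys_per_thousand(3, 4000)   # 0.75 surveys per thousand lawyers
```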

My searches on the three law firms at the lower right (Latham & Watkins, Jones Day, and Greenberg Traurig) turned up no research surveys. If any reader knows of research surveys by the ten largest firms, or by any other law firm, I would appreciate hearing about them.

Disclosure of respondents’ revenue through multiple choice questions

In comparison to the demographic attributes reviewed so far (i.e., the disclosure and explanation of respondents’ positions, geographies, and industries), respondent revenue turns out to be not only the least elaborated but also the least standardized. This relatively poor showing may have happened because respondents did not know, or did not want to disclose, their organization’s revenue, so the surveying law firm felt the data it collected was too fragmented. It might also be that the firms did not think corporate revenue would make a systematic difference in the answers given or aid in the analysis of the data. On the darker side of interpreting the poor showing of revenue categories and percentages, it might be that the firms sensed that their mix of participants displayed unimpressive revenue.

In any event, my examination of 16 survey reports found that three categories cover the variability of disclosure.

Clear and full breakdown: A trio of law firm reports helps readers gauge the annual turnover and distribution of the survey respondents’ organizations by breaking out their revenue into three to six ranges. Across the three firms, the ranges started at less than $500,000 and went up to more than $20 billion. Of the fifteen different ranges used, only one — $5 billion to $10 billion — appeared more than once. For each range, these three firms included the percentage of respondents whose revenue fell within the range.

Some facts but incomplete breakdown: Six firms stated something about revenue in their reports, but unlike the three firms described above they did not provide a full breakout with ranges or percentages. For example, one firm wrote ‘Almost half of the survey respondents work for businesses with annual revenues of $1 billion or more’ and in a footnote added ‘The average respondent in this data set has revenue of $750 million.’ Plots in the report show the firm recognized five revenue categories: Less than $50M, $50M-$500M, $500M-$1BN, $1BN-$6BN, and Over $6BN. Another firm offered, unhelpfully, that the companies represented ‘were of a variety of sizes’ and then broke them out by market capitalization (large cap at 23% [more than $4 billion in market capitalization], mid cap at 21% [$1 billion to $4 billion], and small cap [less than $1 billion]). Two more instances: ‘Survey participants’ companies had an average annual revenue of $18.2 billion and median annual revenue of $4.7 billion’ and ‘A majority of companies (82) had revenues of Euro 1 billion or more.’

No facts about revenue: Disappointingly, the seven remaining reports provided no information whatsoever about the annual revenue of their respondents’ organizations. It is possible, to be sure, that corporate revenue has no bearing on the findings produced by the survey and summarized in the report. But that seems to me unlikely.

The pie chart below visualizes the three categories described above.
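A “clear and full breakdown” amounts to binning each respondent’s revenue into a range and reporting the percentage falling in each. The sketch below uses hypothetical revenues and illustrative ranges, not figures from any firm’s report:

```python
from collections import Counter

# Hypothetical respondent revenues, in $ billions
revenues = [0.3, 0.9, 1.2, 3.1, 4.5, 7.8, 12.0, 25.0]

def revenue_range(r: float) -> str:
    """Assign a revenue figure to an illustrative range."""
    if r < 1:
        return "Less than $1B"
    if r < 5:
        return "$1B-$5B"
    if r < 10:
        return "$5B-$10B"
    if r < 20:
        return "$10B-$20B"
    return "More than $20B"

# Count respondents per range, then convert counts to percentages
counts = Counter(revenue_range(r) for r in revenues)
breakdown = {rng: 100 * n / len(revenues) for rng, n in counts.items()}
```

With these inputs, for instance, 25% of respondents fall in the “Less than $1B” range and 37.5% in “$1B-$5B”.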

Standardize and quantify participants by region

Continuing in the same vein regarding multiple-choice questions and the standardization of some demographic categories, we looked at how the law firm research surveys 1 identified their participants by geographic region. As with positions (and as will be seen with industry sectors), both the completeness of disclosure and the categories used to describe regions were all over the map.

One laggard proffered no information at all about the geographic dispersion of its respondents. Six of the law firms stated (or the reader could infer) that they gathered responses from a single country and they identified that country. Four other firms made general statements in the text of their report about geographic coverage (e.g., Allen & Overy stated that they surveyed companies “around the world” that were “in 27 different countries”) but provided no breakdown in terms of absolute numbers or percentages.

In line with good survey practice, however, five firms broke their participants down by percentages in regions. One firm’s report had two regions, two reports had four regions, one had five, and one had six.

Below is the information in the preceding paragraphs in graphical form.

As to the regions used to categorize participants, the 16 research-survey reports we examined produced a grand total of 16 regions, with very little standardization. The reports used these descriptions: “Africa,” “Americas,” “Asia,” “Asia Pacific,” “Canada,” “Continental Europe,” “EMEA,” “Europe,” “Latin America,” “Middle East and Africa,” “Non-US,” “North America,” “Oceana,” “Other,” “United Kingdom” (or “U.K.”), and “United States”.

The take-aways from this follow the lessons previously learned. First, the legal industry and its data analytics would be stronger if there were a more standard way of naming the regions from which participants come. Second, law firms that conduct research surveys should identify the countries or regions from which their participants came, ideally using standard terminology, along with the percentage breakdown.
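One way to impose that standardization is a simple mapping from each report’s label to a small standard set. The target regions and the mapping below are an assumption for illustration, not a published standard:

```python
# Illustrative mapping from the 16 labels found in the reports to a
# small, standard set of regions (the choice of targets is assumed).
STANDARD_REGION = {
    "Americas": "Americas",
    "North America": "Americas",
    "Latin America": "Americas",
    "Canada": "Americas",
    "United States": "Americas",
    "Europe": "EMEA",
    "Continental Europe": "EMEA",
    "United Kingdom": "EMEA",
    "U.K.": "EMEA",
    "EMEA": "EMEA",
    "Middle East and Africa": "EMEA",
    "Africa": "EMEA",
    "Asia": "Asia Pacific",
    "Asia Pacific": "Asia Pacific",
    "Oceana": "Asia Pacific",
}

def normalize_region(label: str) -> str:
    """Map a report's region label to the standard set; bucket the rest."""
    return STANDARD_REGION.get(label, "Other")
```

Labels that do not fit a standard region (such as “Non-US”) fall through to “Other,” which is itself a signal that the original label was uninformative.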

Notes:

  1. Reports by Allen & Overy, Berwin Leighton Paisner, Carlton Fields, Davies Ward Phillips, Eversheds, Goulston & Storrs, Haynes and Boone, Hogan Lovells, Littler Mendelson, Norton Rose Fulbright, Proskauer Rose, Ropes & Gray, Seyfarth Shaw, White & Case, and Winston & Strawn

Need for standardized position descriptions

The laxity in describing the respondent sample by position and percentage in each position should trouble those who want to rely on law-firm research surveys. How much credibility does data have if you don’t know who provided that data? But even if a firm classifies its respondents by level, the review I carried out raises another troubling point: the position descriptions vary wildly.

More specifically, in my set of law firm research surveys 1, the firms used a grand total of 16 different descriptions for titles in law departments. The position “General Counsel” was most common but after it the terminology was all over the place.

These are the actual terms used for the legal positions, listed roughly in descending order of corporate level: Chief Legal Officer, General Counsel, Chief Legal Officer/General Counsel (or other head of in-house legal function), Region or Division General Counsel (or equivalent), Chief Legal or Associate/Deputy General Counsel, Deputy General Counsel, Assistant General Counsel, Associate/Deputy/Assistant General Counsel, direct reports to general counsel, senior lawyers, Head of litigation, Senior Counsel, in-house counsel, in-house attorneys/corporate counsel, in-house lawyer, and Counsel. Several of the reports also included an “Other” category for respondents in the law department whose position was not among the selections.

The methodology sections of these reports do not say whether the online questionnaire explained what was meant by the selections in the multiple-choice question for positions, such as years of experience or scope of responsibility. Probably they simply listed some number of selections and asked respondents to pick one of them (possibly with “Other”) to answer a seemingly simple question: “What is your level, title, or position?” The firm that gave a common title and then added “or its equivalent” showed thoughtfulness. It recognized that titles have proliferated in law departments, so the firm was using a label to try to convey the level of responsibility of the respondent.

As for titles of respondents outside the law department, those also vary widely. Quoting from the reports, they include CEO/Director, C-suite executives, senior-level executives, Senior management, Executives, Human Resources Professionals, Risk/compliance, other professionals, and other business contacts.

As with the law department positions, the firms had no standard set of positions to draw from, so they came up with their own categories. Going forward, it would help the legal industry and its movement toward more data analytics to have at least a core of standardized position terms for the law department respondents and client respondents to choose from.

Notes:

  1. Reports by Allen & Overy, Berwin Leighton Paisner, Carlton Fields, Davies Ward Phillips, Eversheds, Goulston & Storrs, Haynes and Boone, Hogan Lovells, Littler Mendelson, Norton Rose Fulbright, Proskauer Rose, Ropes & Gray, Seyfarth Shaw, White & Case, and Winston & Strawn