Vertical rulers in survey reports by law firms

Horizontal rulers appear much more frequently than vertical rulers. But vertical rulers appear at times, and here are some examples. DWF London 2017 [pg. 4] places vertical rulers between key statistics, but it is unlikely that these rulers help readers understand or identify the statistics.

King Wood AustraliaDirs 2015 [pg. 3] inserts very thick vertical rulers.

Allen Matkins CommlRE 2018 [pg. 16] covers four varieties of commercial space and names them in the margin, placing a vertical divider between each of them.

CMSNabarro UKRE 2016 [pg. 2] inserts not only vertical rulers between the four columns but also a horizontal ruler under “The UK picture”.

Taft Stettinius Entrep 2018 [pg. 4] eschews the horizontal look in favor of a vertical bar, in black. The bar is less a vertical ruler than a highlighter or a design element as it extends slightly above the material it sets off and not quite as far as the bottom of the material. Vertical rulers typically have something on each side, but this element has nothing on the left.

 

Weighting data from surveys by law firms

Surveyors sometimes weight their data to make the findings more representative of another set of information. For example, a law firm might realize that it has received too few responses from some demographic strata, such as manufacturers or companies with more than $5 billion in revenue. The firm might want to correct for the imbalance so that it can present conclusions about the entire population (remember, the survey captures only a sample from the population). The firm could weight more heavily the responses it did receive from manufacturers or large companies to create a sample more in line with reality.

How might such a transformation apply in surveys for the legal industry? Let’s assume that a firm knows roughly how many companies in the United States have revenue over $100 million in each major industry. Those known proportions enable weighting. If the participants materially under-represent some industry or revenue range, the proportions in the survey data don’t match the proportions that we know to be true. One way to adjust (weight) the data set would be to replicate participants in under-represented industries (or revenue ranges) enough to make the survey data set more like the real data set.
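To make the mechanics concrete, here is a minimal sketch in Python of that kind of adjustment (often called post-stratification weighting). The industry names, shares, and sample counts are invented for illustration; only the technique follows the description above.

```python
# A minimal sketch of post-stratification weighting.
# The industry labels and counts are hypothetical.
from collections import Counter

# Known (true) share of each industry among companies over $100 million in revenue
population_share = {"manufacturing": 0.40, "services": 0.35, "retail": 0.25}

# Industry of each survey participant (hypothetical sample of 100 respondents)
sample = ["services"] * 50 + ["retail"] * 30 + ["manufacturing"] * 20

sample_share = {k: v / len(sample) for k, v in Counter(sample).items()}

# Weight each respondent by how under- or over-represented their stratum is
weights = {k: population_share[k] / sample_share[k] for k in population_share}

for industry, w in weights.items():
    print(f"{industry}: weight {w:.2f}")
# Manufacturing respondents get a weight of 2.0 (0.40 / 0.20), so each one counts
# twice, which mimics "replicating" participants from an under-represented industry.
```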

In a rare example, CMS Nabarro HealthTech 2017 [pg. 19] states explicitly that the analysis applied no weightings.

King Spalding ClaimsProfs 2016 [pg. 10] explains that it calculated the “weighted average experience” for certain employees. This might mean that one company had far fewer employees than the others, so the firm weighted the figures so that no single company would disproportionately affect the average. In other words, the firm might have weighted the average by the number of employees at each company. As a matter of good methodology, it would have been better for the firm to explain how it calculated the weighted average.
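As a sketch of that reading, the snippet below contrasts a simple average of company-level figures with an average weighted by headcount. The company names and numbers are hypothetical.

```python
# A hedged sketch of one plausible reading of "weighted average experience":
# averaging employee experience across companies, weighted by headcount.
companies = [
    {"name": "Alpha", "employees": 500, "avg_experience_years": 12.0},
    {"name": "Beta",  "employees": 50,  "avg_experience_years": 4.0},
]

# A simple average treats each company equally, regardless of size
simple_avg = sum(c["avg_experience_years"] for c in companies) / len(companies)

# Weighting by headcount makes each employee, rather than each company, count equally
total_employees = sum(c["employees"] for c in companies)
weighted_avg = sum(
    c["avg_experience_years"] * c["employees"] for c in companies
) / total_employees

print(round(simple_avg, 2))    # 8.0
print(round(weighted_avg, 2))  # 11.27
```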

White Case Arbitration 2010 [pg. 15] writes that it “weighted the results to reveal the highest ranked influences.” This could mean that a “very important” rating was treated as a four, a “quite important” rating as a three, and so on down to zero. If every respondent had given one of the influences on choice of governing law the highest rating, a four, that would have been the maximum possible weighted score. The sum of the actual ratings could then be expressed as a percentage of that highest possible score. The table lists the responses in decreasing order according to that calculation. This is my supposition of the procedure, but again, it would have been much better had the firm explained how it calculated the “weighted rank.”
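To show how such a calculation might run, here is a short sketch of my supposition. The ratings are hypothetical, and the labels for the middle of the scale are made up; the source only confirms “very important” as a four and “quite important” as a three, running down to zero.

```python
# A sketch of the supposed scoring: each rating maps to points, and the total
# is expressed as a share of the maximum possible score.
points = {
    "very important": 4,       # confirmed by the text
    "quite important": 3,      # confirmed by the text
    "somewhat important": 2,   # hypothetical label
    "slightly important": 1,   # hypothetical label
    "not important": 0,        # hypothetical label
}

# Hypothetical ratings given by four respondents to one influence
ratings = ["very important", "quite important", "very important", "somewhat important"]

actual = sum(points[r] for r in ratings)
maximum = max(points.values()) * len(ratings)  # every respondent gives the top rating

weighted_score = actual / maximum
print(f"{weighted_score:.0%}")  # 81% of the highest possible score
```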

Dykema Gossett MA 2015 [pg. 5] does not explain what “weighted rank” means in the following snippet, but the firm may have applied the same technique.

On one question, Seyfarth Shaw RE 2017 [pg. 10] explained a similar translation: “Question No. 3 used an inverse weighted ranking system to score each response. For example in No. 3, 1=10 points, 2=9 points, 3=8 points, 4=7 points, 5=6 points, 6=5 points, 7=4 points, 8=3 points, 9=2 points, 10=1 point”

Miller Chevalier TaxPolicy 2017 [pg. 6] asked respondents to rank the top three. The firm then used an inverse ranking to treat a 1 as 3 points, a 2 as 2 points, and a 3 as 1 point, and summed the points to reach a weighted rank (score).
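The arithmetic behind both firms’ inverse weighted rankings can be illustrated with a small sketch. The choices and ballots below are hypothetical; the scoring follows the top-three scheme just described, where a first-place vote earns three points and a third-place vote one point.

```python
# A minimal sketch of inverse weighted ranking with a top-3 question.
from collections import defaultdict

TOP_N = 3

# Each ballot maps a choice to the rank a respondent gave it (1 = most important).
# These ballots are hypothetical.
ballots = [
    {"tax reform": 1, "tariffs": 2, "audits": 3},
    {"tariffs": 1, "tax reform": 2, "audits": 3},
    {"tax reform": 1, "audits": 2, "tariffs": 3},
]

scores = defaultdict(int)
for ballot in ballots:
    for choice, rank in ballot.items():
        scores[choice] += TOP_N + 1 - rank  # invert: rank 1 -> 3 points, rank 3 -> 1 point

for choice, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(choice, score)
# tax reform 8, tariffs 6, audits 4
```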

Sometimes surveys use the term “weight” to mean “rank”. Here is an example from Berwin Leighton Risk 2014 [pg. 6].

 

Survey reports that combine the questions in an Appendix

If law firms include in their reports the questions they asked, they usually do so near the relevant plot or table. Every now and then, however, a firm reproduces the questions of the questionnaire in the order they were asked. Here are some examples of that consolidated and comprehensive reporting.

In the five pages of Appendix 2, Berwin Leighton Risk 2014 [pg. 14] states not only all the questions but also all their aggregated responses. Kudos to the firm!

In the image below from HoganLovells FDI 2014 [pg. 62], the Appendix reproduces all the questions (and perhaps what the questionnaire looked like) in a table.

Reed Smith MediaRebates 2012 [pg. 9] includes all the questions in its Appendix.

Browne Jacobson Sleepins 2018 [pg. 32] reproduces all 80 of its survey questions in an Annex.

 

Efforts by law firms to obtain representative survey respondents

Brodies Firm GDPR 2018 [pg. 2] explains that the sample it surveyed resembles U.K. businesses as a whole by industry and by revenue or number of employees.

Osborne Clarke Consumer 2018 [pg. 28] strived to balance its participants within each nation.

White Case Arbitration 2010 [pg. 40] excellently describes its efforts to reach out and obtain a representative group of participants.

It’s very unusual for reports in a series to point out differences in the participant pools. Here is one example, however. Baker McKenzie Cloud 2016 [pg. 5] acknowledges that the 2016 survey has more respondents who are lawyers (“in a legal role”) than previous surveys.

Surveys conducted by law firms twice a year

Almost all of the law firm research surveys are conducted once a year. The effort is considerable, and firms want to allow sufficient time to pass, especially if they are conducting a series, for changes to appear in the data. Annual surveys rule. That said, at least three law firms have conducted surveys twice a year.

Irwin Mitchell Occupiers 2015 [pg. 3] is one of the reports that have gathered data in the spring and fall regarding office rentals.

Morrison Foerster MA Two 2017 [pg. 3] reflects surveys that collect input in April and then in September.

Haynes Boone Borrowing 2016 [pg. 2] represents one of a series that gathered data on borrowing practices in April and September.

Law firm surveys and months held open; month started

It seems likely that the longer a survey is open, the more people will take part. But the data does not support that seemingly commonsense notion. For a group of 34 surveys, selected mostly because they all state the duration of the survey and the number who took it, the correlation between the number of weeks open and the number of participants was a negative 0.2! Shorter open periods were associated with more people taking part!
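For readers who want to check such a figure on their own data, a Pearson correlation is a one-liner in Python. The durations and participant counts below are made up solely to show the calculation; they are not the 34 surveys analyzed here.

```python
# A sketch of computing the correlation between weeks open and participation.
# Requires Python 3.10+ for statistics.correlation.
from statistics import correlation

weeks_open   = [2, 3, 4, 4, 6, 8, 10, 12]                # hypothetical durations
participants = [400, 250, 300, 180, 220, 150, 90, 120]   # hypothetical counts

print(round(correlation(weeks_open, participants), 2))
# A negative value means longer open periods were associated with fewer participants.
```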

More than the length of time open, what drives the number of participants is the quality and size of the email invitation list. By “quality” we mean that the invitees have a reasonable chance of being interested in the survey; the list isn’t some random collection of email addresses. By “size” we mean the sheer number of invitees; all things being equal, if more people receive the invitation, more people will decide to complete it.

Other factors that drive participation rates likely include whether the invitees know and respect the law firm (or co-contributors), the time demands of the survey, the topic, and the level of the invitee (senior executives and general counsel are bombarded with requests to complete surveys, but more junior people may receive invitations rarely and be more willing to participate).

The scatter plot below shows along the bottom axis how many months a survey was open and along the left, vertical axis how many participants completed it. Open periods of one or two months were the most common. For all of these surveys, with 8,500 total participants and 56 total months open, the average number of participants per month was 152.

We will need more surveys to derive dependable numbers on averages per month, and likewise to look at averages of participants per season.

Does seasonality influence participation numbers? Does it make a difference in what month you launch your survey? The next plot tells us that firms showed no particular favoritism, except that none of them launched a survey in the middle of summer, in July.

Institutional calendars, workload, or summer vacation plans may account more for the starting month of a survey than sensitivity to what will maximize participant numbers. Law firms may have budgets based on fiscal years, or they may orient their survey toward a conference or try to catch the wave of heightened interest in a topic.

Co-contributors to law firm surveys — Part V

The more research surveys we locate, the more organizations we identify that worked with the law firms. We call these organizations “co-contributors,” but in fact the roles they play vary from leading the project, with the law firm adding only a bit, to playing a part secondary to the law firm’s dominant role. Some organizations run the survey project; some provide law firms with access to potential participants; some primarily publicize the results of the survey research.

Whatever their role, co-contributors appear in something like half of all research surveys by law firms. Moreover, we have identified 18 surveys that have two or even three co-contributors. Having already listed 73 co-contributors by name, each with at least one survey, we add another 15 in this post.

  • Agenda Consulting: Browne Jacobson Sleepins 2018
  • Allenbridge: MJ Hudson VentureCapital 2017
  • Association of School and College Leaders (ASCL): Browne Jacobson School 2016
  • Canadian Venture Capital Association (CVCA): McCarthy Tetrault VC 2011
  • Censuswide: Freshfields Bruckhaus Whistleblow 2017
  • Esports Observer: Foley Lardner Esports 2018
  • FDU Group: Trowers Hamlin Networking 2014
  • Hong Kong Venture Capital and Private Equity Association: Oldham Li HKVC 2018
  • Infrastructure Intelligence: Burgess Salmon Infrastructure 2018
  • JLL: Baker McKenzie Cloud 2014
  • KPMG: Cliffe Decker PE 2017
  • Opinium Research: CMSNabarro HealthTech 2017
  • Perceptive Insight
  • Research Strategy Group: Gowling WLG Franchisee 2017
  • Royal Institution of Chartered Surveyors: Tughans Surveyors 2018
  • Upload: Perkins Coie VirtualReality 2016

Typical threshold of respondent revenue: $1 billion

Someone interested in research surveys by law firms might expect corporate revenue of more than one billion dollars to be a common threshold for categorizing respondent data. The demographic detail could say things like “Number of respondents reporting less than $1 billion” or “Respondents between $1 billion and $4 billion in revenue.”

Bryan Cave CollectiveLit 2007 [pg. 23] employs the cut-off figure and presents the revenue profile of its 242 respondents simply and elegantly. Readers can add the three largest categories and know that 39.7% of them reported more than $1 billion of global revenue.

DLA Piper Compliance 2017 [pg. 26] also lays out the revenue of its respondent companies. Readers can figure out that 32% of the companies exceeded $1 billion.

Some reports state the number or percentage of their participants whose revenue exceeded one billion dollars. These three did so.

  1. Pinsent Masons Energy 2017 [pg. 5]: all 200 businesses had revenue greater than $1 billion
  2. HoganLovells Brexometer 2017 [pg. 14]: all 210 respondents’ companies had more than $1 billion of revenue, and
  3. Clifford Chance Crossborder 2012 [pg. 36]: All respondents represented companies with annual revenues in excess of $1 billion.

Other reports partially disclose or require some detective work.

  1. Proskauer Rose Empl 2016 [pg. 4]: Almost half of the survey respondents work for businesses with annual revenues of $1 billion or more.
  2. Carlton Fields CA 2012 [pg. 40]: its 322 participants had average annual revenues of $13.1 billion and a median of $3.8 billion. Seventeen percent are Fortune Global 500 companies, and nearly 49 percent are Fortune 1000 companies. Of those, eight percent are Fortune 100, 19 percent are Fortune 101-500, and 21 percent are Fortune 501-1000.
  3. KL Gates GCDisruption 2017 [pg. 19]: a majority of companies (82% of the 200 companies, or 164 companies) had revenues of 1 billion euros or more (at that time a euro was worth about 1.3 dollars).

Unfortunately, many survey reports do not give enough detail about their respondents’ distribution of revenue to say anything regarding the common threshold of one billion dollars of revenue.

Questions stated with plots in law firm survey reports

Readers of survey reports deserve to know the precise wording of the questions that generated the report’s findings. How a question was phrased, the way instructions were added, what selections were available and in what order: all are vital for evaluating the reported data. Many reports don’t restate the question, but appear to summarize it in a plot’s title. Others quote or paraphrase the question asked in their text discussion of the plot. User-friendly reports state the question near the plot or table that summarizes the data.

A superlative treatment comes from Appendix A of HoganLovells FDI 2014 [pgs. 61-75], which lists all 19 questions on the survey (presumably in their order on the questionnaire), as well as nine demographic questions, along with the choices available to respondents for each question. Better still, pages 76 to 95 reproduce the collected data in summary tables. We heartily praise this comprehensive disclosure.

In the snippet below from Reed Smith London 2018 [pg. 19], the question appears above the plot.

Littler Mendelson Employer 2013 [pg. 5] states the question clearly, but considerably above the plot.

Gowling WLG Protectionism 2017 [pg. 14] reveals a style variation: the question sits snugly close to the plot. As compared to Littler Mendelson, the proximity and bold font highlight the connection between the question and the data.

The final example, from Pinsent Masons TMT 2016 [pg. 10], accentuates the question with red font and close proximity. Offsetting the question to the left is an aesthetic move. The subtitle in parentheses tells readers which subset of the data the plot summarizes.

Unusual formats of law-firm survey research reports

Most law firms publish their research reports as PDF documents in portrait orientation. The production benefits of PDF stand out: clear and dramatic photos, ample white space, appealing layouts, compressed size, and a widely accepted format. Some maverick firms, however, chose other modes.

Goulston Storrs Multifamily 2017 created an article, which the firm published in a monthly produced by the same organization that conducted the survey.

Haynes Boone Borrowing 2015 [pg. 4] produced what appears to be a PowerPoint deck. Here is the full-page snippet.

Brodies Firm Housebuilding 2015 published a single-page report, and Baker McKenzie Brexit 2017 did the same with an infographic.