More co-coordinators of survey projects by law firms

As I discover more and more survey reports, I also find more instances of co-coordinators. With this additional group, the total number of co-coordinators has nearly reached 100. Each entry below names the co-coordinator, followed by the law firm, its report topic, and the year.

  1. Aberdeen & Grampian Chamber of Commerce: Aberdein Considine RealEstate 2017
  2. Allen Associates: Royds Withy OBB 2017
  3. Bench Events: Berwin Leighton MENAHotel 2016
  4. ComRes: Resolution Divorce 2012
  5. GlobalData: Foot Anstey Retail 2018
  6. ICAS: Brodies Firm Brexit 2017
  7. iGov Survey: Ashfords Outsource 2017
  8. Independent Publishing Group; Nielson Book Research: Harbottle Lewis Publishing 2017
  9. Longitude: Charles Russel FamilyBus 2017
  10. Loudhouse: Ashfords Retail 2015
  11. SMF Schleus Marktforschung: Pinsent Masons GermanTech 2014
  12. The Climate Change Collaboration: ClientEarth Climate 2018
  13. The Housing LIN: Winckworth Sherwood OldHousing 2017
  14. The Review: Trowers Hamlin Occupiers 2015
  15. The University of Manchester: White Case WhiteCollar 2017

Law firms that produced only PDF reports of surveys, only non-reports, or both

As described previously, in my collection of hundreds of law-firm research surveys, 44 firms have released at least one set of survey results only as a press release, a post on a blog, or an article (a “non-report”). Also, 88 law firms have produced at least one survey report in PDF format (a “formal report”). Some — or perhaps all — of those law firms have produced the results of a research survey in both formats, formal report and non-report, but I would have to confirm each firm's full history of reporting to be sure of both numbers.

Nevertheless, with the data at hand, 24 law firms are in both camps, having published at least one formal report and at least one non-report. Another 64 formal-report firms have not issued a single non-report, while 20 firms have not produced a single formal report.

The pie chart below visualizes these findings. The largest group, in the bottom slice, represents the 64 law firms that have produced only formal reports in PDF. At the upper left, the green (darkest) segment extends only about a third as far as the largest segment, because it represents the 20 law firms that have not released a formal report of their survey findings (20 is roughly one-third of 64). The third segment, in the upper right, represents the remaining 24 firms that have chosen both formats.

These preliminary findings of significant variability in reporting practices may reflect the decentralized style of large law firms. Individual practice groups or countries on their own can launch a research survey and then decide how to release the results they obtain. Then too, it may be that as a firm becomes more familiar with research surveys, it decides to shift how it brings the results to the attention of the world. Marketing functions may have more or less sway over budgets and standards for releasing data results. In the end, however, we must regard these findings as provisional, because further research may shift the composition of the three groups significantly.

Size of law firm correlated to whether it produces a PDF report

As of this writing, I have found 44 law firms that have released at least one set of survey results only as a non-report; at least, I have not located a corresponding report in PDF format. Some of those law firms have produced survey reports in PDF format for other surveys, which I have. As the comparison set, I have found 88 law firms that have produced at least one survey report in PDF format — what I refer to as a “formal report.” Among those firms, some have also produced non-reports.

My goal was to find out whether smaller law firms, when they sponsor a research survey, are more likely to resort to non-reports. Unfortunately, it is not easy to determine the number of lawyers (or solicitors) in each of the firms. Precise data often exists, as is the case for large US law firms, but for firms based in the UK, Ireland and Australia, data sources do not use consistent definitions of who is a practicing lawyer and who is a “legal professional” or “fee earner.” In any event, I did my best to record an appropriate number of lawyers for almost all of the firms noted above.

In short, the underlying data for the following observations is sketchy. So long as we recognize the methodological shortcomings, we can at least look at two summaries.

For the non-report firms, the average number of lawyers in the firm is 1,243; for the firms that produced a formal report, the average fell to 1,024. Considering the median number of lawyers, the non-report firms came in at 700 lawyers while the formal-report firms came in about ten percent lower, at 635. Based on these averages and medians, the two groups of firms seem reasonably similar in size as measured by number of lawyers. If anything, my hypothesis appears to be wrong: the non-report firms are larger!

Increase recently in number of surveys without a published report

Of the more than 400 research surveys by law firms that I have tracked, about one in five are known only from a press release or article. I have previously explained some caveats about that proportion, and I call the information I have located for that group “non-reports.”

In addition to the non-reports, I have learned of another 64 surveys but have categorized them as “Missing.” “Missing” denotes surveys that later surveys refer to but for which I have not located a press release, an article, or any other manifestation — other evidence is missing.

Setting aside the “Missing” reports, has the proportion of published reports to non-reports varied over the past few years? The plot that follows addresses that question. The height of the dark segment on top of each column, which represents the number of non-reports that year, certainly jumped in 2017 and so far in 2018, so it appears that the proportion of non-reports has noticeably increased. To explain one column: the 2016 column shows 40 reports in the bottom, lighter segment and 5 non-reports in the top, dark segment.

Why might that be? Perhaps more firms are undertaking surveys, which has drawn in somewhat smaller firms that lack the resources to invest in a report. Alternatively, experience or instinct has led increasing numbers of firms to conclude that the return on investment from a report is not sufficient. Then again, perhaps I simply haven’t found the published reports.

By the way, we have seen no evidence that firms issue a press release and considerably later a report. Rather, our sense is that the first publicity about the results of the survey comes simultaneously with the publication of the report and other business development efforts.


More participants in surveys with reports than in surveys without formal reports

Of the 420-some research surveys by law firms that I know about, about one in five are known only from a press release or article. I have located a formal report for all the others. To be forthright, the law firm or a co-coordinator may have published the results in a formal report that I simply have not located so far.

Obviously, it costs less to draft a press release or an article than to produce a report. Reports require design decisions, plots, graphics, and higher standards of quality. Moreover, aside from expense, it also seems plausible that firms choose not to go through the effort of preparing a formal report if the number of participants in the survey seems too small.

To test that hypothesis about relative numbers of participants, I looked at nine survey results — let’s call them “non-report surveys” — that disclose the number of survey participants. I also collected data from 204 formal reports that provide participant numbers. The average number of participants in the non-report surveys is 157. Where there is a report, the average is 420, but that number is hugely inflated by one survey that has 16,000 participants. When we omit that survey, the average drops to 343 — still more than twice the average of the non-report surveys.

When we compare the median number of participants, the figures are 157 participants in the non-report surveys versus 203 participants in the reports. Thus, the medians disclose one-third more participants where a report has been published than where a report has not been published.

A statistical test looks at whether the difference in averages between two sets of numbers suggests that the sets are likely to have a meaningful difference — here, in participants. With our data, the crucial test value turns out to be so small that we can confidently reject the hypothesis that no difference exists between the two sets in terms of participants. Larger numbers of participants are strongly associated with reported surveys.

Technical note: For those interested in the statistics, we ran a Welch two-sample t-test and found a p-value of 0.0331, which means that if the reported and non-reported surveys in fact drew from populations with the same average number of participants, then in repeated sampling a difference in averages this large would show up only about 3% of the time. Such a low percentage justifies statisticians concluding that the data comes from meaningfully different populations (the results are “statistically significant”). Bear in mind that I have not looked at all the non-reports in my collection; a few more of them added to the group analyzed above could change the figures materially, and therefore the statistical test. Or there may be a formal report somewhere.
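For readers who want to see the mechanics, the Welch statistic can be sketched in a few lines of Python. The participant counts below are invented for illustration; they are not the survey data analyzed above.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch two-sample t statistic and Welch-Satterthwaite degrees of freedom.

    Unlike Student's t-test, Welch's version does not assume the two
    groups share the same variance -- appropriate when reported and
    non-reported surveys may vary differently in size.
    """
    va, vb = variance(a) / len(a), variance(b) / len(b)  # squared standard errors
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical participant counts, not the actual collection.
non_report = [120, 150, 160, 175, 180]
reported = [210, 250, 320, 400, 515]
t, df = welch_t(non_report, reported)
# Comparing |t| against the t-distribution with df degrees of freedom
# (e.g. scipy.stats.t.sf) yields the p-value, like the 0.0331 above.
```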

Vertical rulers in survey reports by law firms

Horizontal rulers appear much more frequently than vertical rulers. But vertical rulers appear at times, and here are some examples. DWF London 2017 [pg. 4] places vertical rulers between key statistics, but it is unlikely that these rulers help readers understand or identify the statistics.

King Wood AustraliaDirs 2015 [pg. 3] wedges in very thick vertical rulers.

Allen Matkins CommlRE 2018 [pg. 16] covers four varieties of commercial space and names them in the margin, placing a vertical divider between each of them.

CMSNabarro UKRE 2016 [pg. 2] inserts not only vertical rulers between the four columns but also a horizontal ruler under “The UK picture”.

Taft Stettinius Entrep 2018 [pg. 4] eschews the horizontal look in favor of a vertical bar, in black. The bar is less a vertical ruler than a highlighter or a design element as it extends slightly above the material it sets off and not quite as far as the bottom of the material. Vertical rulers typically have something on each side, but this element has nothing on the left.


Weighting data from surveys by law firms

Surveyors sometimes weight their data to make the findings more representative of another set of information. For example, a law firm might realize that it has gotten too few responses from some demographic strata, such as manufacturers or companies with more than $5 billion in revenue. The firm might want to correct for the imbalance so that it can present conclusions respecting the entire population (remember, the survey captures but a sample from the population). The firm could weight more heavily the responses it did receive from manufacturers or large companies, creating a sample more in line with reality.

How might such a transformation apply in surveys for the legal industry? Let’s assume that a firm knows roughly how many companies in the United States have revenue over $100 million by each major industry. Those known proportions enable weighting. If the participants materially under-represent some industry or revenue range, the proportions in each industry don’t match the proportions that we know to be true. One way to adjust (weight) the data set would be to replicate participants in industries (or revenue ranges) enough to make the survey data set more like the real data set.
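With invented industry names and figures, that adjustment might look like the following in Python. Each respondent receives a weight of population share divided by sample share; replicating an under-represented participant twice is just the special case of a weight of 2.

```python
# A sketch of post-stratification weighting with made-up numbers.
population_share = {"manufacturing": 0.40, "services": 0.60}  # known "true" mix
sample_counts = {"manufacturing": 20, "services": 80}         # who actually responded
n = sum(sample_counts.values())

# Weight each respondent by (population share) / (sample share), so
# under-represented groups count for more and over-represented ones for less.
weights = {ind: population_share[ind] / (sample_counts[ind] / n)
           for ind in sample_counts}
# Here each manufacturing respondent gets weight 2.0, each services
# respondent 0.75; the weighted sample then matches the known mix.
```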

In a rare example, CMS Nabarro HealthTech 2017 [pg. 19] states explicitly that the analysis applied no weightings.

King Spalding ClaimsProfs 2016 [pg. 10] explains that it calculated the “weighted average experience” for certain employees. This might mean that one company had fewer employees than the others, so the firm weighted that company’s numbers so that the larger companies would not disproportionately affect the average. In other words, they might have weighted the average by the number of employees in each of the companies. As a matter of good methodology, it would have been better for the firm to explain what they did to calculate the weighted average.
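If my reading is right, the computation would be the familiar weighted mean, sketched here with invented figures:

```python
# Hypothetical figures: average years of claims experience per company,
# weighted by each company's employee count so that a small company does
# not pull the overall average around disproportionately.
avg_experience = [10.0, 20.0]  # per-company average years of experience
employees = [100, 300]         # per-company headcount (the weights)

weighted_avg = (sum(x * w for x, w in zip(avg_experience, employees))
                / sum(employees))
# (10*100 + 20*300) / 400 = 17.5, versus an unweighted average of 15.0
```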

White Case Arbitration 2010 [pg. 15] writes that it “weighted the results to reveal the highest ranked influences.” This could mean that a “very important” rating was treated as a four, a “quite important” rating as a three, and so on down to zero. If every respondent had given one of the influences on choice of governing law the highest rating, a four, that would have been the maximum possible weighted score. Whatever the sum of the actual ratings was could then be calculated as a percentage of that highest possible score. The table lists the responses in decreasing order according to that calculation. This is my supposition of the procedure, but again, it would have been much better had the firm explained how it calculated the “weighted rank.”
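That supposition amounts to the following small calculation, sketched with invented rating labels and response counts:

```python
# Hypothetical tallies for one influence on choice of governing law.
scale = {"very important": 4, "quite important": 3, "important": 2,
         "slightly important": 1, "not important": 0}  # assumed labels
tallies = {"very important": 30, "quite important": 50, "important": 15,
           "slightly important": 4, "not important": 1}

n = sum(tallies.values())
actual = sum(scale[label] * count for label, count in tallies.items())
score = actual / (4 * n) * 100  # percent of the maximum (everyone rates a four)
# Influences would then be sorted in decreasing order of this score.
```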

Dykema Gossett MA 2015 [pg. 5] does not explain what “weighted rank” means in the following snippet, but the firm may have applied the same technique.

On one question, Seyfarth Shaw RE 2017 [pg. 10] explained a similar translation: “Question No. 3 used an inverse weighted ranking system to score each response. For example in No. 3, 1=10 points, 2=9 points, 3=8 points, 4=7 points, 5=6 points, 6=5 points, 7=4 points, 8=3 points, 9=2 points, 10=1 point.”

Miller Chevalier TaxPolicy 2017 [pg. 6] asked respondents to rank the top three. The firm then used an inverse ranking to treat a 1 as 3, a 2 as 2, and a 3 as 1, and summed the points to reach a weighted rank (score).
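Both firms' schemes reduce to the same arithmetic: a rank of r among the top N earns N + 1 − r points. A sketch with invented rankings:

```python
# Each respondent ranks their top three choices; a rank of 1 earns 3
# points, 2 earns 2, and 3 earns 1 (the "inverse" of the rank).
TOP_N = 3
ranks_for_one_option = [1, 1, 2, 3, 2]  # hypothetical ranks from five respondents

weighted_rank = sum(TOP_N + 1 - r for r in ranks_for_one_option)
# 3 + 3 + 2 + 1 + 2 = 11 points for this option; options are then
# listed in decreasing order of their point totals.
```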

Sometimes surveys use the term “weight” to mean “rank”. Here is an example from Berwin Leighton Risk 2014 [pg. 6].


Survey reports that combine the questions in an Appendix

If law firms include in their reports the questions they asked, they usually do so near the relevant plot or table. Every now and then, however, a firm reproduces all of the questionnaire's questions together, in the order they were asked. Here are some examples of that consolidated and comprehensive reporting.

In the five pages of Appendix 2, Berwin Leighton Risk 2014 [pg. 14] states not only all the questions but also all their aggregated responses. Kudos to the firm!

In the image below from HoganLovells FDI 2014 [pg. 62], the Appendix reproduces all the questions (and perhaps what the questionnaire looked like) in a table.

Reed Smith MediaRebates 2012 [pg. 9] includes all the questions in its Appendix.

Browne Jacobson Sleepins 2018 [pg. 32] reproduces all 80 of its survey questions in an Annex.


Efforts by law firms to obtain representative survey respondents

Brodies Firm GDPR 2018 [pg. 2] explains that the sample it surveyed resembles U.K. businesses as a whole by industry and by revenue or number of employees.

Osborne Clarke Consumer 2018 [pg. 28] strived to balance its participants within each nation.

White Case Arbitration 2010 [pg. 40] describes especially well its efforts to reach out and obtain a representative group of participants.

It’s very unusual for reports in a series to point out differences in the participant pools. Here is one example, however. Baker McKenzie Cloud 2016 [pg. 5] acknowledges that the 2016 survey has more respondents who are lawyers (“in a legal role”) than previous surveys.

Surveys conducted by law firms twice a year

Almost all of the law firm research surveys are conducted once a year. The effort is considerable, and firms want to allow sufficient time to pass, especially if they are conducting a series, for changes to appear in the data. Annual surveys rule. That said, at least three law firms have conducted surveys twice yearly.

Irwin Mitchell Occupiers 2015 [pg. 3] is one of a series of reports that has gathered data in the spring and fall regarding office rentals.

Morrison Foerster MA Two 2017 [pg. 3] reflects surveys that collect input in April and then in September.

Haynes Boone Borrowing 2016 [pg. 2] represents one of a series that gathered data on borrowing practices in April and September.