Largest U.S. firms, gross revenue and number of research surveys

From the AMLAW listing for 2017, I looked at the top 25 law firms by gross revenue. To find out which of them have conducted or taken part in data-oriented research surveys, my investigation so far has consisted of searching for the name of the firm and the word “survey” using Google and then scanning the first five or six pages of hits. A better method, which I will eventually pursue, would be to search each firm’s own website.

In any case, at this point it appears that 16 of the 25 highest-grossing U.S. law firms have not been involved in a research survey. In the plot below, they are the firms that have no green bar: Latham Watkins, Kirkland Ellis, Skadden Arps, Jones Day, Sidley Austin (which tried a survey a couple of years ago but didn’t complete it), Morgan Lewis, Gibson Dunn, Greenberg Traurig, Sullivan Cromwell (although I ran across a reference to a survey done in 2010 about Boards of Directors), Simpson Thacher, Cleary Gottlieb, Weil Gotshal, Paul Weiss, Quinn Emanuel, Davis Polk, and Wilmer Cutler.

The other nine firms are known to have sponsored at least one research survey, and six of them have been involved in more than one. The laurel wreath goes to DLA Piper, which at 28 surveys known to me almost equals the combined 32 of the other eight firms.

The plot sorts the law firms in descending order by gross revenue, which shows that five of the top 12 firms have put this tool to use. Overall, however, the majority of these elite, huge U.S. law firms have not seen sufficient reason to take part in or publish a research survey.

Visualize variables in surveys with Sankey diagrams

Let’s say we would like to understand and visualize how survey reports vary in frequency by country, page orientation, and involvement of co-coordinators. A Sankey diagram (also known as a river plot) can reveal such insights by depicting counts as proportionally sized flows.

Consider a data set of 174 research-survey reports. For each report we know the headquarters country of the law firm, or that the firm is a “VereinCLG” (either a Swiss verein or a company limited by guarantee (CLG)). For the surveys by Canadian law firms (9), UK law firms (48), U.S. firms (109), and VereinCLGs (48), we also know whether the report used portrait or landscape orientation and whether the firm teamed with a co-coordinator or surveyed on its own.

Starting at the left of the Sankey diagram below, the heights of the four rectangles indicate the relative proportions of surveys by country. Each rectangle then divides into two streams: the top stream flows into the Portrait orientation rectangle and the bottom stream flows into the Landscape rectangle. In the middle of the plot, the green rectangles indicate by their relative heights the proportions of portrait and landscape reports. Two streams flow from each of the orientation rectangles, the top one indicating the proportion of reports that did not have a co-coordinator (FALSE) and the lower stream the proportion that did (TRUE). Again, the relative heights of the right-most rectangles suggest the proportions.

Consider the reports published by UK law firms. They are mostly portrait: that stream is much thicker than the narrow stream pouring down into the “Landscape” rectangle at the bottom. But the Portrait and Landscape rectangles combine the data of all the countries, so I don’t think it is possible from this Sankey diagram to say what proportion of UK reports involved a co-coordinator. That said, of the portrait reports overall, somewhat fewer had co-coordinators, though the balance was roughly even.

However, by swapping two words in the code behind the first Sankey diagram, we generated the variation below, which shows what proportion of a country’s reports involved a co-coordinator. It appears that the UK reports are roughly evenly divided between having a co-coordinator and not.
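The diagrams themselves were presumably built with plotting code in the author's own environment; as a language-neutral sketch of the underlying bookkeeping, the flow counts that feed such a Sankey diagram can be tallied by grouping records on pairs of fields. The records below are hypothetical examples, not the actual data set.

```python
from collections import Counter

# Hypothetical survey records: (country, orientation, has_co_coordinator).
# Illustrative values only, not the real 174-report data set.
reports = [
    ("UK", "Portrait", False),
    ("UK", "Portrait", True),
    ("UK", "Landscape", False),
    ("US", "Portrait", True),
    ("US", "Landscape", False),
    ("Canada", "Portrait", False),
]

# First set of flows: country -> orientation.
country_to_orientation = Counter((c, o) for c, o, _ in reports)

# Second set of flows: orientation -> co-coordinator status.
orientation_to_co = Counter((o, co) for _, o, co in reports)

# Grouping country directly with co-coordinator status instead
# yields the flows for the variant diagram described above.
country_to_co = Counter((c, co) for c, _, co in reports)

print(country_to_orientation[("UK", "Portrait")])  # 2
```

Swapping which two fields are grouped together is exactly the kind of small code change that turns the first diagram into the second.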

Potential participants of surveys on a logarithmic scale

Among the hundreds of survey reports that I have located, the number of participants varies enormously. The variance is a function of many factors:

  1. the size and quality of the law firm’s contact list
  2. whether there is a co-contributor, and the quality of its contact list
  3. the mix and amount of efforts to publicize the opportunity to take the survey
  4. the topic of the survey
  5. the length, complexity and design of the survey questionnaire
  6. the period of time that the survey stays open
  7. whether a survey is part of a series
  8. inducements offered for participation
  9. reputation of the law firm.

But some variance in participation numbers relates to the total number of potential participants. All things being equal, a survey targeted at a relatively small number of potential participants will not reach the numbers of a broad-based survey. Stated differently, 100 responses might mean a robust response rate, such as 20% or higher, if only a few hundred people qualify to take a survey, whereas the same 100 responses from a huge pool of potentially appropriate people would amount to an anemic rate below 1%.
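The arithmetic behind that point can be made concrete; a minimal sketch, with illustrative pool sizes:

```python
def response_rate(responses, potential_participants):
    """Response rate as a percentage of the potential pool."""
    return 100 * responses / potential_participants

# 100 responses from a pool of 500 qualified people is robust...
small_pool = response_rate(100, 500)     # 20.0 (%)
# ...while the same 100 responses from 50,000 people is anemic.
large_pool = response_rate(100, 50_000)  # 0.2 (%)
print(small_pool, large_pool)
```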

To start a framework for evaluating potential participant numbers, I looked at 16 survey reports that have between 100 and 110 participants. By controlling for the number of actual respondents, I thought I could evaluate the factors that influenced that number. But the other factors became too numerous and the data set was too small.

So, since none of the reports stated even the number of email invitations sent out, I estimated my own figures for how many could have been invited. I chose a base-10 logarithmic scale to roughly categorize the potential total populations. Thus the smallest category was for narrow-gauged surveys aimed at hundreds of potential participants: the ten-squared category (10²). The next category aimed at roughly 10 times more participants: thousands, or ten cubed (10³). Even broader surveys would have had a reachable set of possible participants in the tens of thousands, ten raised to the fourth power (10⁴). At the top end of my very approximate scale are surveys that could conceivably have invited a hundred thousand participants or more (10⁵).
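A banding rule of this kind reduces to taking the base-10 order of magnitude of the estimated pool; a sketch, with a hypothetical `population_band` helper that clamps estimates to the 10² through 10⁵ categories:

```python
import math

def population_band(estimated_pool):
    """Return the base-10 exponent of the estimated pool of potential
    participants, clamped to the 10^2 through 10^5 bands."""
    exponent = math.floor(math.log10(estimated_pool))
    return min(max(exponent, 2), 5)

print(population_band(300))      # 2 -> hundreds
print(population_band(7_500))    # 3 -> thousands
print(population_band(40_000))   # 4 -> tens of thousands
print(population_band(250_000))  # 5 -> a hundred thousand or more
```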

Below, I categorize the surveys by this estimated log scale, in alphabetical order within increasing bands of potential participants. Each quote comes from the report at the page noted; I have shortened the quotes to the core information on which I based my estimate of the survey’s population.

Even though my categorizations are loose and subjective, the point is that the number of respondents as a percentage of the total possible participants can range from significant percentages down to microscopic ones. That is to say, response rates vary enormously in these — and probably all — law firm research surveys.


Ten squared (10²): hundreds of potential participants

Clifford Chance Debt 2010 [pg. 4] “canvassed the opinion of 100 people involved in distressed debt about their views of the Asia-Pacific distressed debt market.”

CMS GCs 2017 [pg. 26] had “a quantitative survey of 100 senior in-house respondents [in] law departments” that were almost half “drawn from FTSE 350 or FTSEurofirst 300 companies. A further 7% represent Fortune 500 companies.”

DWF Food 2018 [pgs. 3, 8] “surveyed 105 C-suite executives from leading food businesses” that are “in the UK.”

Pepper Hamilton PrivateFunds 2016 [pg. 1] “contacted CFOs and industry professionals across the US” who work in private funds.


Ten cubed (10³): thousands of potential participants

CMS Russia 2009 [pg. 3] explains that its co-coordinator “interview[ed] 100 Russian M&A and corporate decision makers.”

Foley Lardner Telemedicine 2017 [pg. 16] “distributed this survey … and received responses from 107 senior-level executives and health care providers at hospitals, specialty clinics, ancillary services and related organizations.”

Reed Smith LondonWomen 2018 [pg. 22] explains that “A survey was launched via social media which was open to women working in the City of London with a job title equivalent to director, partner, head of department or C-level status.”

Technology Law GDPR 2017 [pg. 2] writes that “In-house legal counsel from 100 different organizations (the majority of which had 1,000+ employees) were invited to participate in a survey.”


Ten to the fourth (10⁴): tens of thousands of potential participants

Burgess Salmon Infrastructure 2017 [pg. 3] “drew on the opinions of over 100 [infrastructure] industry experts.”

Dykema Gossett Auto 2016 [pg. 3] “distributed its [survey] via e-mail to a group of senior executives and advisers in the automotive industry including CEOs, CFOs and other company officers.”

Freshfields Bruckhaus Crisis 2013 [pg. 3] “commissioned a survey of 102 senior crisis communications professionals from 12 countries across the UK, Europe, Asia and the US.”

Norton Rose ESOP 2014 [pg. 2] “conducted a survey of 104 [Australian] businesses — from startups to established companies.”

Reed Smith Lifesciences 2015 [pg. 4] commissioned a co-coordinator that “surveyed 100 senior executives (CEO, CIO, Director of Strategy) in biotechnology and pharmaceuticals companies” around the world.


Ten to the fifth (10⁵): a hundred thousand or more potential participants

Berwin Leighton Risk 2014 [pg. 2] researched “legal risk” in financial services organizations around the world. “The survey was submitted to participants in electronic format by direct email and was also hosted online at the BLP Legal Risk Consultancy homepage.”

Dykema Gossett MA 2013 [pg. 10] “distributed its [survey] via e-mail to a group of senior executives and advisors, CFOs and other company officers.”

Proskauer Rose Empl 2016 [pgs. 3-4] retained a co-coordinator that “conducted the survey online and by phone with more than 100 respondents who are in-house decision makers on labor and employment matters.”

More co-coordinators of survey projects by law firms

As I discover more and more survey reports, I also find more instances of co-coordinators. With this additional group, the total number of co-coordinators has nearly reached 100. Below, I name the co-coordinator, followed by the name of the law firm, its report topic, and the date.

  1. Aberdeen & Grampian Chamber of Commerce: Aberdein Considine RealEstate 2017
  2. Allen Associates: Royds Withy OBB 2017
  3. Bench Events: Berwin Leighton MENAHotel 2016
  4. ComRes: Resolution Divorce 2012
  5. GlobalData: Foot Anstey Retail 2018
  6. ICAS: Brodies Firm Brexit 2017
  7. iGov Survey: Ashfords Outsource 2017
  8. Independent Publishing Group; Nielson Book Research: Harbottle Lewis Publishing 2017
  9. Longitude: Charles Russel FamilyBus 2017
  10. Loudhouse: Ashfords Retail 2015
  11. SMF Schleus Marktforschung: Pinsent Masons GermanTech 2014
  12. The Climate Change Collaboration: ClientEarth Climate 2018
  13. The Housing LIN: Winckworth Sherwood OldHousing 2017
  14. The Review: Trowers Hamlin Occupiers 2015
  15. The University of Manchester: White Case WhiteCollar 2017

Law firms that produced only PDF reports of surveys, only non-reports, or both

As described previously, in my collection of hundreds of law-firm research surveys, 44 firms have released at least one set of survey results only as a press release, a post on a blog, or an article (a “non-report”). Also, 88 law firms have produced at least one survey report in PDF format (a “formal report”). Some — or perhaps all — of those law firms have produced the results of a research survey in both formats, formal report and non-report, but I would have to confirm with each firm about its history of reporting to be sure of both numbers.

Nevertheless, with the data at hand, 24 law firms are in both camps, having published at least one formal report and at least one non-report. Another 64 formal-report firms have not issued a single non-report, while 20 firms have not produced a single formal report.
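The three groups follow from simple set arithmetic on the counts just given; a quick sketch:

```python
formal_report_firms = 88   # firms with at least one PDF ("formal") report
non_report_firms = 44      # firms with at least one non-report
both_formats = 24          # firms appearing in both groups

# Firms exclusive to each camp, plus the overall total of distinct firms.
only_formal = formal_report_firms - both_formats          # 64
only_non_report = non_report_firms - both_formats         # 20
total_firms = only_formal + only_non_report + both_formats  # 108
print(only_formal, only_non_report, total_firms)
```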

The pie chart below visualizes these findings. The largest group, in the bottom slice, represents the 64 law firms that have produced only formal reports in PDF. At the upper left, the green (darkest) segment extends only about a third as far as the largest segment, because it represents the 20 law firms that have not released a formal report of their survey findings (20 being roughly one-third of 64). The third segment, in the upper right, represents the remaining 24 firms that have chosen both formats.

These preliminary findings of significant variability in reporting practices may reflect the decentralized style of large law firms. Individual practice groups or countries on their own can launch a research survey and then decide how to release the results they obtain. Then too, it may be that as a firm becomes more familiar with research surveys, it decides to shift how it brings the results to the attention of the world. Marketing functions may have more or less sway over budgets and standards for releasing data results. In the end, however, we must regard these findings as provisional, because further research may shift the composition of the three groups significantly.

Size of law firm correlated to whether it produces a PDF report

As of this writing, I have found 44 law firms that have released at least one set of survey results as a non-report; at least, I have not located a corresponding report in PDF format. Some of those law firms may have produced survey reports in PDF that I have not found. As the comparison set, I have found 88 law firms that have produced at least one survey report in PDF format — what I refer to as a “formal report.” Among those firms, some have also produced non-reports.

My goal was to find out whether smaller law firms, when they sponsor a research survey, are more likely to resort to non-reports. Unfortunately, it is not easy to determine the number of lawyers (or solicitors) in each of the firms. Precise data often exists, as it does for large U.S. law firms, but for firms based in the UK, Ireland, and Australia, data sources do not use consistent definitions of who is a practicing lawyer and who is a “legal professional” or “fee earner.” In any event, I did my best to record an appropriate number of lawyers for almost all of the firms noted above.

In short, the underlying data for the following observations is sketchy. So long as we recognize the methodological shortcomings, we can at least look at two summaries.

For the non-report firms, the average number of lawyers in the firm is 1,243; for the firms that produced a formal report, the average fell to 1,024. Considering the median number of lawyers, the non-report firms came in at 700 lawyers while the formal-report firms came in about ten percent lower, at 635. Based on these averages and medians, the two groups of firms seem reasonably similar in size as measured by number of lawyers. If anything, my hypothesis appears to be wrong: the non-report firms are larger!

Increase recently in number of surveys without a published report

Of the more than 400 research surveys by law firms that I have tracked, about one in five are known only from a press release or an article. I have previously explained some caveats about that proportion, and I call the information I have located for that group “non-reports.”

In addition to the non-reports, I have learned of another 64 surveys but have categorized them as “Missing.” “Missing” denotes surveys that later surveys refer to but for which I have not located a press release, an article, or any other manifestation — other evidence is missing.

Setting aside the “Missing” reports, has the proportion of non-reports to published reports varied over the past few years? The plot that follows addresses that question. The height of the dark segment on top of each column, which represents the number of non-reports that year, certainly jumped in 2017 and so far in 2018, so it appears that the proportion of non-reports has noticeably increased. To take one column as an example, the year 2016 shows 40 reports in the bottom, lighter segment and 5 non-reports in the top, dark segment.

Why might that be? Perhaps more firms are undertaking surveys, bringing in somewhat smaller firms that don’t have the resources to invest in a report. Alternatively, experience or instinct has led increasing numbers of firms to conclude that the return on investment from a report is not sufficient. Then again, perhaps I simply haven’t found the published reports.

By the way, we have seen no evidence that firms issue a press release and considerably later a report. Rather, our sense is that the first publicity about the results of the survey comes simultaneously with the publication of the report and other business development efforts.


More participants in surveys with reports than in surveys without formal reports

Of the 420-some research surveys by law firms that I know about, roughly one in five are known only from a press release or an article. I have located a formal report for all the others. To be forthright, the law firm or a co-coordinator may have published the results of those surveys in a formal report that I have simply not located so far.

Obviously, it costs less to draft a press release or an article than to produce a report. Reports require design decisions, plots, graphics, and higher standards of quality. Moreover, aside from expense, it also seems plausible that firms choose not to go through the effort of preparing a formal report if the number of participants in the survey seems too small.

To test that hypothesis about relative numbers of participants, I looked at nine survey results — let’s call them “non-report surveys” — that disclose the number of survey participants. I also collected data from 204 formal reports that provide participant numbers. The average number of participants in the non-report surveys is 157. Where there is a report, the average is 420, but that number is hugely inflated by one survey that has 16,000 participants. When we omit that survey, the average drops to 343 — still more than twice the average number of participants in the non-report surveys.
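The effect of dropping the outlier can be reproduced from the summary figures alone, using only the counts and averages stated above:

```python
n_reports = 204            # formal reports with participant numbers
mean_with_outlier = 420    # average participants across all 204
outlier = 16_000           # the one unusually large survey

# Total participants implied by the mean, minus the outlier,
# spread over the remaining 203 reports.
total = n_reports * mean_with_outlier
mean_without_outlier = (total - outlier) / (n_reports - 1)
print(round(mean_without_outlier))  # 343
```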

When we compare the median number of participants, the figures are 157 participants in the non-report surveys versus 203 participants in the reports. Thus, the medians disclose one-third more participants where a report has been published than where a report has not been published.

A statistical test looks at whether the difference in averages between two sets of numbers suggests a meaningful difference between the sets — here, in participants. With our data, the crucial test value (the p-value) turns out to be so small that we can confidently reject the hypothesis that no difference exists between the two sets in terms of participants. Larger numbers of participants are strongly associated with reported surveys.

Technical note: For those interested in the statistics, we ran a Welch two-sample t-test and found a p-value of 0.0331, which means that if someone could sample over and over from the universe of reported and non-reported surveys, and if no real difference existed between the two groups, only about 3% of the time would such a large difference in averages show up by chance. Such a low percentage justifies statisticians concluding that the data comes from meaningfully different populations (the results are “statistically significant”). Bear in mind that I have not looked at all the non-reports in my collection and that a few more of them added to the group analyzed above could change the figures, and therefore the statistical test, materially. Or there may be a formal report somewhere.
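For readers who want to replicate the method, Welch’s t-statistic and its Welch–Satterthwaite degrees of freedom can be computed directly from two samples. The participant counts below are made up for illustration; they are not the survey data analyzed above.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch two-sample t-statistic and Welch-Satterthwaite
    degrees of freedom (unequal variances assumed)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se_a, se_b = va / na, vb / nb
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se_a + se_b)
    df = (se_a + se_b) ** 2 / (se_a**2 / (na - 1) + se_b**2 / (nb - 1))
    return t, df

# Made-up participant counts for reported and non-reported surveys.
reported = [210, 180, 350, 400, 290, 520, 160, 305]
non_report = [120, 95, 180, 150, 140]

t, df = welch_t(reported, non_report)
print(round(t, 2), round(df, 1))
```

With real data, `scipy.stats.ttest_ind(a, b, equal_var=False)` would return the same statistic along with the p-value.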

Vertical rulers in survey reports by law firms

Horizontal rulers appear much more frequently than vertical rulers. But vertical rulers appear at times, and here are some examples. DWF London 2017 [pg. 4] places vertical rulers between key statistics. But it is unlikely that these rulers help readers understand or identify the statistics.

King Wood AustraliaDirs 2015 [pg. 3] wads in very thick vertical rulers.

Allen Matkins CommlRE 2018 [pg. 16] covers four varieties of commercial space and names them in the margin, placing a vertical divider between each of them.

CMSNabarro UKRE 2016 [pg. 2] inserts not only vertical rulers between the four columns but also a horizontal ruler under “The UK picture”.

Taft Stettinius Entrep 2018 [pg. 4] eschews the horizontal look in favor of a vertical bar, in black. The bar is less a vertical ruler than a highlighter or a design element as it extends slightly above the material it sets off and not quite as far as the bottom of the material. Vertical rulers typically have something on each side, but this element has nothing on the left.


Weighting data from surveys by law firms

Surveyors sometimes weight their data to make the findings more representative of another set of information. For example, a law firm might realize that it has gotten too few responses from some demographic strata, such as manufacturers or companies with more than $5 billion in revenue. The firm might want to correct for the imbalance so that it can present conclusions respecting the entire population (remember, the survey captures but a sample from the population). The firm could weight more heavily the responses it did receive from manufacturers or large companies, creating a sample more in line with reality.

How might such a transformation apply in surveys for the legal industry? Let’s assume that a firm knows roughly how many companies in the United States have revenue over $100 million by each major industry. Those known proportions enable weighting. If the participants materially under-represent some industry or revenue range, the proportions in each industry don’t match the proportions that we know to be true. One way to adjust (weight) the data set would be to replicate participants in industries (or revenue ranges) enough to make the survey data set more like the real data set.
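A common equivalent to replicating participants is to assign each respondent a weight equal to the population share of its stratum divided by the sample share; a sketch, with hypothetical industry shares:

```python
# Hypothetical known population shares by industry, and the shares
# actually observed among the survey participants.
population_share = {"Manufacturing": 0.30, "Finance": 0.20, "Tech": 0.50}
sample_share = {"Manufacturing": 0.15, "Finance": 0.25, "Tech": 0.60}

# Post-stratification weight: population share / sample share.
# Under-represented strata (Manufacturing here) get weights above 1,
# which has the same effect as replicating those participants.
weights = {ind: population_share[ind] / sample_share[ind]
           for ind in population_share}
print(weights["Manufacturing"])  # 2.0
```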

In a rare example, CMS Nabarro HealthTech 2017 [pg. 19] states explicitly that the analysis applied no weightings.

King Spalding ClaimsProfs 2016 [pg. 10] explains that it calculated the “weighted average experience” for certain employees. This might mean that one company had fewer employees than the others, so the firm weighted that company’s numbers so that the larger companies would not disproportionately affect the average experience. In other words, they might have weighted the average by the number of employees in each of the companies. As a matter of good methodology, it would have been better for the firm to explain how it calculated the weighted average.

White Case Arbitration 2010 [pg. 15] writes that it “weighted the results to reveal the highest ranked influences.” This could mean that a “very important” rating was treated as a four, a “quite important” rating as a three, and so on down to zero. If every respondent had given one of the influences on choice of governing law the highest rating, a four, that would have been the maximum possible weighted score. The sum of the actual ratings could then be calculated as a percentage of that highest possible score. The table lists the responses in decreasing order according to that calculation. This is my supposition of the procedure, but again, it would have been much better had the firm explained how it calculated the “weighted rank.”
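My supposition about that calculation could be sketched as follows; the rating labels and the sample responses are hypothetical:

```python
# Hypothetical mapping from rating labels to points, 4 down to 0.
points = {"very important": 4, "quite important": 3, "neutral": 2,
          "not very important": 1, "not important": 0}

def weighted_percent(responses):
    """Sum of actual points as a percentage of the maximum possible
    score (every respondent giving the top rating)."""
    actual = sum(points[r] for r in responses)
    maximum = 4 * len(responses)
    return 100 * actual / maximum

sample = ["very important", "quite important", "quite important", "neutral"]
print(weighted_percent(sample))  # 75.0
```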

Dykema Gossett MA 2015 [pg. 5] does not explain what “weighted rank” means in the following snippet, but the firm may have applied the same technique.

On one question, Seyfarth Shaw RE 2017 [pg. 10] explained a similar translation: “Question No. 3 used an inverse weighted ranking system to score each response. For example in No. 3, 1=10 points, 2=9 points, 3=8 points, 4=7 points, 5=6 points, 6=5 points, 7=4 points, 8=3 points, 9=2 points, 10=1 point”

Miller Chevalier TaxPolicy 2017 [pg. 6] asked respondents to rank the top three. The firm then used an inverse ranking to treat a 1 as 3 points, a 2 as 2 points, and a 3 as 1 point, and summed the points to reach a weighted rank (score).
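That inverse-ranking scheme is easy to sketch; the ballot contents below are hypothetical:

```python
def inverse_rank_score(rank_lists, top_n=3):
    """Score items from top-N rankings: rank 1 earns top_n points,
    rank 2 earns top_n - 1 points, and so on down to 1 point."""
    scores = {}
    for ranking in rank_lists:
        for position, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0) + (top_n - position + 1)
    return scores

# Two hypothetical respondents, each ranking their top three issues.
ballots = [["tax reform", "tariffs", "audits"],
           ["tariffs", "tax reform", "audits"]]
print(inverse_rank_score(ballots))
```

The same function handles the Seyfarth Shaw scheme above by setting `top_n=10`, so that a rank of 1 earns 10 points and a rank of 10 earns 1 point.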

Sometimes surveys use the term “weight” to mean “rank”. Here is an example from Berwin Leighton Risk 2014 [pg. 6].