Numbers of co-contributors on surveys conducted by law firms

When an organization helps with a law firm’s research survey, the report acknowledges that contribution. For example, as in the snippet below, Burgess Salmon Infrastructure 2018 [pg. 8] gave a shout out to its two co-coordinators (Infrastructure Intelligence and YouGov).

At least 12 law firms have conducted surveys with two different co-contributors. Three firms have worked with four co-contributors (Dentons, Morrison & Foerster, and Reed Smith) and two firms have worked with six co-contributors (CMS and Pinsent Masons).

Interestingly, two law firms have teamed with one or more other law firms: Shakespeare Martineau Brexit 2017 with Becker Büttner Held and Miller Chevalier LatAmCorruption 2016 with 10 regional law firms.

For most co-coordinator surveys, the pairing is one law firm and one co-coordinator. However, Pinsent Masons Infratech 2017 and Clifford Chance Debt 2007 each sought the assistance of three co-coordinators for a research survey.

At this point, at least eleven co-contributors have helped on more than one survey by a law firm: Acritas, Alix Partners, ALM Intelligence (4 surveys), Canadian Corporate Counsel Association (5), the Economist Intelligence Unit, FTI Consulting (3), Infrastructure Intelligence, IPSOS (5), Ponemon Institute, RSG Consulting (3), and YouGov.

How long survey collection continues with law firm sponsors

For 44 research reports I have determined how long the survey was open, i.e., the data collection period. I picked those reports haphazardly over time — making no effort to be random or representative but simply to start calculating some statistics. With that caveat, the average data collection period is 1.5 months with a standard deviation of 0.74, which means that about two-thirds of the periods fell between 0.8 months (~3.5 weeks) and 2.3 months (~10 weeks). The shortest collection period was 0.1 months (3 days) while the longest was 3 months.
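The one-standard-deviation band quoted above can be reproduced in a few lines of Python. The durations below are made-up stand-ins for illustration, not the actual 44-report data set.

```python
# Hypothetical data-collection periods in months (illustrative values only,
# not the actual 44 reports).
periods = [0.1, 0.5, 1.0, 1.0, 1.5, 1.5, 2.0, 2.0, 2.5, 3.0]

n = len(periods)
mean = sum(periods) / n
# Sample standard deviation (divide by n - 1).
sd = (sum((p - mean) ** 2 for p in periods) / (n - 1)) ** 0.5

# Roughly two-thirds of the periods fall within one standard deviation
# of the mean.
low, high = mean - sd, mean + sd
print(f"mean={mean:.2f}, sd={sd:.2f}, one-sigma band: {low:.2f} to {high:.2f}")
```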

The plot shows the distribution of open periods together with the month in which the survey launched. No particular month seems favored.

Here are several reasons why law firms call a halt to collecting survey responses.

  1. New responses have slowed to a trickle
  2. A practice group is eager to start the analysis and find something out!
  3. Staff and partners have been prodded as much as feasible to recruit more participants
  4. The firm has emailed three reminders to potential participants
  5. The co-contributor has done enough or been pushed enough
  6. Qualified responses have hit triple digits, a respectable data set
  7. The participant group is sufficiently representative or filled out
  8. Marketing wants to get out first or early on some current issue
  9. The firm wants to meet the promise it made to participants to send them a report promptly
  10. The budget says it’s time to start the analysis (and usually a report)

The analysis and report preparation can begin before the last survey submission, but that is easier to do with a programming script that lets an analyst read in updated data and re-run the analysis with little additional effort.
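A minimal sketch of that re-runnable workflow, assuming the survey tool can export a CSV at any time; the column name `score` and the helper `summarize` are invented for illustration.

```python
import csv
import io

def summarize(csv_text):
    """Recompute headline statistics from the latest survey export."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    scores = [int(r["score"]) for r in rows]
    return {"n": len(rows), "mean_score": sum(scores) / len(scores)}

# Export pulled mid-collection (hypothetical responses):
early = "id,score\n1,4\n2,5\n"
# The same export a week later, after two more responses arrive:
late = early + "3,3\n4,5\n"

# Re-running the same script on the updated file refreshes every figure.
print(summarize(early))  # {'n': 2, 'mean_score': 4.5}
print(summarize(late))   # {'n': 4, 'mean_score': 4.25}
```

With the analysis in a script, starting before the last submission costs little: the analyst simply re-runs it on the final export.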

Potential participants of surveys on a logarithmic scale

Among the hundreds of survey reports that I have located, the number of participants varies enormously. The variance is a function of many factors:

  1. the size and quality of the law firm’s contact list
  2. whether there is a co-contributor, and the quality of its contact list
  3. the mix and amount of efforts to publicize the opportunity to take the survey
  4. the topic of the survey
  5. the length, complexity and design of the survey questionnaire
  6. the period of time that the survey stays open
  7. whether a survey is part of a series
  8. inducements offered for participation
  9. reputation of the law firm

But some variance in participation numbers relates to the total number of potential participants. All things being equal, a survey targeted at a relatively small number of potential participants will not reach the numbers of a broad-based survey. Stated differently, 100 responses might mean a robust response rate, such as 20% or higher, if only a few hundred people qualify to take a survey, whereas given a huge pool of people who might be appropriate for a survey, the response rate would be an anemic sub-1% response rate.
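The arithmetic behind that contrast is simple; the pool sizes below are invented to match the two scenarios in the text.

```python
def response_rate(responses, pool):
    """Responses received divided by the number of people who could respond."""
    return responses / pool

# 100 responses from a narrow pool of 500 qualified people:
narrow = response_rate(100, 500)     # a robust 20%
# The same 100 responses from a pool of 50,000 possible participants:
broad = response_rate(100, 50_000)   # an anemic 0.2%

print(f"{narrow:.1%} versus {broad:.1%}")  # 20.0% versus 0.2%
```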

To start a framework for evaluating potential participant numbers, I looked at 16 survey reports that have between 100 and 110 participants. By controlling for the number of actual respondents, I thought I could evaluate the factors that influenced that number. But the other factors proved too numerous and the data set too small.

So, since none of the reports stated even the number of email invitations sent out, I estimated my own figures for how many could have been invited. I chose to use a base-10 logarithmic scale to roughly categorize the potential total populations. Thus the smallest category was for narrow-gauged surveys for hundreds of potential participants: the ten-squared category (10²). The next largest category aimed at roughly 10 times more participants: thousands, as ten cubed (10³). Even broader surveys would have had a reachable set of possible participants in the tens of thousands, at ten raised to the fourth power (10⁴). At the top end of my very approximate scale are surveys that could conceivably have invited a hundred thousand participants or more (10⁵).
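Assigning a survey to one of those bands amounts to taking the floor of the base-10 logarithm of the estimated pool. A sketch, with the pool estimates invented:

```python
import math

def log_band(pool_estimate):
    """Return the base-10 band (2 through 5) for an estimated pool size."""
    band = int(math.floor(math.log10(pool_estimate)))
    return max(2, min(band, 5))  # clamp to the 10^2..10^5 scale

assert log_band(400) == 2        # hundreds
assert log_band(7_000) == 3      # thousands
assert log_band(25_000) == 4     # tens of thousands
assert log_band(300_000) == 5    # a hundred thousand or more
```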

Below is how I categorized the surveys by this estimated log scale, in alphabetical order within increasing bands of potential participants. The quotes come from the report at the page noted. I have shortened them to the core information on which I estimated the scope of the survey’s population.

Even though my categorizations are loose and subjective, the point is that the number of respondents as a percentage of the total possible participants can range from significant percentages down to microscopic ones. That is to say, response rates vary enormously in these — and probably all — law firm research surveys.

Hundreds of potential participants (10²)
Clifford Chance Debt 2010 [pg. 4] “canvassed the opinion of 100 people involved in distressed debt about their views of the Asia-Pacific distressed debt market.”

CMS GCs 2017 [pg. 26] had “a quantitative survey of 100 senior in-house respondents [in] law departments” that were almost half “drawn from FTSE 350 or FTSEurofirst 300 companies. A further 7% represent Fortune 500 companies.”

DWF Food 2018 [pgs. 3, 8] “surveyed 105 C-suite executives from leading food businesses” that are “in the UK.”

Pepper Hamilton PrivateFunds 2016 [pg. 1] “contacted CFOs and industry professionals across the US” who work in private funds.

Thousands of potential participants (10³)
CMS Russia 2009 [pg. 3] explains that its co-coordinator “interview[ed] 100 Russian M&A and corporate decision makers.”

Foley Lardner Telemedicine 2017 [pg. 16] “distributed this survey … and received responses from 107 senior-level executives and health care providers at hospitals, specialty clinics, ancillary services and related organizations.”

Reed Smith LondonWomen 2018 [pg. 22] explains that “A survey was launched via social media which was open to women working in the City of London with a job title equivalent to director, partner, head of department or C-level status.”

Technology Law GDPR 2017 [pg. 2] writes that “In-house legal counsel from 100 different organizations (the majority of which had 1,000+ employees) were invited to participate in a survey.”

Tens of thousands of potential participants (10⁴)
Burgess Salmon Infrastructure 2017 [pg. 3] “drew on the opinions of over 100 [infrastructure] industry experts.”

Dykema Gossett Auto 2016 [pg. 3] “distributed its [survey] via e-mail to a group of senior executives and advisers in the automotive industry including CEOs, CFOs and other company officers.”

Freshfields Bruckhaus Crisis 2013 [pg. 3] “commissioned a survey of 102 senior crisis communications professionals from 12 countries across the UK, Europe, Asia and the US.”

Norton Rose ESOP 2014 [pg. 2] “conducted a survey of 104 [Australian] businesses — from startups to established companies.”

Reed Smith Lifesciences 2015 [pg. 4] commissioned a co-coordinator that “surveyed 100 senior executives (CEO, CIO, Director of Strategy) in biotechnology and pharmaceuticals companies” around the world.

A hundred thousand or more potential participants (10⁵)
Berwin Leighton Risk 2014 [pg. 2] researched “legal risk” in financial services organizations around the world. “The survey was submitted to participants in electronic format by direct email and was also hosted online at the BLP Legal Risk Consultancy homepage.”

Dykema Gossett MA 2013 [pg. 10] “distributed its [survey] via e-mail to a group of senior executives and advisors, CFOs and other company officers.”

Proskauer Rose Empl 2016 [pgs. 3-4] retained a co-coordinator that “conducted the survey online and by phone with more than 100 respondents who are in-house decision makers on labor and employment matters.”

Weighting data from surveys by law firms

Surveyors sometimes weight their data to make the findings more representative of another set of information. For example, a law firm might realize that it has received too few responses from some demographic stratum, such as manufacturers or companies with more than $5 billion in revenue. The firm might want to correct for the imbalance so that it can present conclusions about the entire population (remember, the survey captures but a sample from the population). The firm could weight the responses it did get from manufacturers or large companies more heavily to create a sample more in line with reality.

How might such a transformation apply in surveys for the legal industry? Let’s assume that a firm knows roughly how many companies in the United States have revenue over $100 million by each major industry. Those known proportions enable weighting. If the participants materially under-represent some industry or revenue range, the proportions in each industry don’t match the proportions that we know to be true. One way to adjust (weight) the data set would be to replicate participants in industries (or revenue ranges) enough to make the survey data set more like the real data set.
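Rather than literally duplicating rows, the same correction is usually achieved by assigning each stratum a weight equal to its population share divided by its sample share. A sketch with invented percentages:

```python
# Known population percentages by industry (hypothetical figures):
population = {"manufacturing": 40, "services": 60}
# Percentages actually observed among survey participants:
sample = {"manufacturing": 20, "services": 80}

# Weight = population share / sample share; responses from the
# under-represented stratum count proportionally more.
weights = {k: population[k] / sample[k] for k in population}
print(weights)  # {'manufacturing': 2.0, 'services': 0.75}

# Applying the weights restores the population proportions:
adjusted = {k: sample[k] * weights[k] for k in sample}
print(adjusted)  # {'manufacturing': 40.0, 'services': 60.0}
```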

In a rare example, CMS Nabarro HealthTech 2017 [pg. 19] states explicitly that the analysis applied no weightings.

King Spalding ClaimsProfs 2016 [pg. 10] explains that it calculated the “weighted average experience” for certain employees. This might mean that one company had fewer employees than the others, so the firm weighted that company’s numbers so that the larger companies would not disproportionately affect the average. In other words, they might have weighted the average by the number of employees in each of the companies. As a matter of good methodology, it would have been better for the firm to explain how it calculated the weighted average.

White Case Arbitration 2010 [pg. 15] writes that it “weighted the results to reveal the highest ranked influences.” This could mean that a “very important” rating was treated as a four, a “quite important” rating as a three, and so on down to zero. If every respondent had given one of the influences on choice of governing law the highest rating, a four, that would have been the maximum possible weighted score. Whatever the sum of the actual ratings was could then be calculated as a percentage of that highest possible score. The table lists the responses in decreasing order according to that calculation. This is my supposition of the procedure, but again, it would have been much better had the firm explained how it calculated the “weighted rank.”
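If that supposition is right, the calculation would look something like this (the ratings are invented):

```python
# Ratings on a 0-4 scale: "very important" = 4, "quite important" = 3,
# and so on down to 0. Hypothetical ratings from five respondents for
# one influence on choice of governing law:
ratings = [4, 4, 3, 2, 4]

# Maximum possible score: every respondent gives the top rating of 4.
max_possible = 4 * len(ratings)
weighted_score = sum(ratings) / max_possible

print(f"{weighted_score:.0%}")  # 85%
```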

Dykema Gossett MA 2015 [pg. 5] does not explain what “weighted rank” means in the following snippet, but the firm may have applied the same technique.

On one question, Seyfarth Shaw RE 2017 [pg. 10] explained a similar translation: “Question No. 3 used an inverse weighted ranking system to score each response. For example in No. 3, 1=10 points, 2=9 points, 3=8 points, 4=7 points, 5=6 points, 6=5 points, 7=4 points, 8=3 points, 9=2 points, 10=1 point”

Miller Chevalier TaxPolicy 2017 [pg. 6] asked respondents to rank their top three choices. The firm then used an inverse ranking to treat a 1 as 3, a 2 as 2 and a 3 as 1, and summed the points to reach a weighted rank (score).
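That inverse-ranking scheme is easy to reproduce; the ballots below are invented:

```python
# Each respondent ranks their top three concerns; rank 1 earns 3 points,
# rank 2 earns 2, and rank 3 earns 1 (hypothetical ballots):
ballots = [
    {"tax reform": 1, "tariffs": 2, "audits": 3},
    {"tariffs": 1, "tax reform": 2, "audits": 3},
]

scores = {}
for ballot in ballots:
    for choice, rank in ballot.items():
        scores[choice] = scores.get(choice, 0) + (4 - rank)  # invert the rank

print(scores)  # {'tax reform': 5, 'tariffs': 5, 'audits': 2}
```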

Sometimes surveys use the term “weight” to mean “rank”. Here is an example from Berwin Leighton Risk 2014 [pg. 6].


Efforts by law firms to obtain representative survey respondents

Brodies Firm GDPR 2018 [pg. 2] explains that the sample it surveyed resembles U.K. businesses as a whole by industry and by revenue or number of employees.

Osborne Clarke Consumer 2018 [pg. 28] strived to balance its participants within each nation.

White Case Arbitration 2010 [pg. 40] gives an excellent description of its efforts to reach out and obtain a representative group of participants.

It’s very unusual for reports in a series to point out differences in the participant pools. Here is one example, however. Baker McKenzie Cloud 2016 [pg. 5] acknowledges that the 2016 survey has more respondents who are lawyers (“in a legal role”) than previous surveys.

Surveys conducted by law firms twice a year

Almost all of the law firm research surveys are conducted once a year. The effort is considerable, and firms want to allow sufficient time to pass, especially if they are conducting a series, for changes to appear in the data. Annual surveys rule. That said, at least three law firms have conducted surveys twice yearly.

Irwin Mitchell Occupiers 2015 [pg. 3] is one of the reports that has gathered data in the Spring and Fall regarding office rentals.

Morrison Foerster MA Two 2017 [pg. 3] reflects surveys that collect input in April and then in September.

Haynes Boone Borrowing 2016 [pg. 2] represents one of a series that gathered data on borrowing practices in April and September.

Law firm surveys and months held open; month started

It seems likely that the longer a survey is open, the more people will take part. But the data does not support that seemingly commonsense notion. For a group of 34 surveys selected mostly because they all state the duration of the survey and the number who took it, the correlation between the number of weeks open and the number of participants was a negative 0.2! The shorter open periods were associated with more people taking part!
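For readers who want to check such a figure on their own data, the Pearson correlation can be computed without any libraries. The week/participant pairs below are invented to show a negative relationship, not the actual 34-survey data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (weeks open, participants) pairs:
weeks = [2, 4, 6, 8, 10]
participants = [180, 90, 150, 80, 120]

r = pearson(weeks, participants)
print(round(r, 2))  # negative: longer open periods, fewer participants
```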

More than the length of time open, the number of participants depends on the quality and size of the email invitation list. By “quality” we mean that the invitees have a reasonable chance of being interested in the survey; the list isn’t some random collection of email addresses. By “size” we mean the sheer number of invitees; all things being equal, if more people receive the invitation, more people will decide to complete it.

Other factors that drive participation rates likely include whether the invitees know and respect the law firm (or co-contributors), the time demands of the survey, the topic, and the level of the invitee (senior executives and general counsel are bombarded with requests to complete surveys, but more junior people may receive invitations rarely and be more willing to participate).

The scatter plot below shows along the bottom axis how many months a survey was open and along the left, vertical axis how many participants completed it. Open periods of one month or of two months were the most common. For all of these surveys, with 8,500 total participants and 56 total months open, the average number of participants per month was 152.

We will need more surveys to derive dependable numbers on averages per month, and likewise to look at averages of participants per season.

Does seasonality influence participation numbers? Does it make a difference in what month you launch your survey? The next plot tells us that firms showed no particular favoritism, except that none of them launched a survey in the middle of summer, in July.

Institutional calendars, workload, or summer vacation plans may account more for the starting month of the survey than sensitivity to what will maximize participant numbers. Law firms may have budgets based on fiscal years, or they may orient their survey toward a conference or try to catch the wave of heightened interest in a topic.

Co-contributors to law-firm research surveys (Part III)

Twice I have written about instances of co-contributors [18 of them] and [13 more co-contributors] and their respective survey reports. Further digging has uncovered another group of 16 co-contributors.

  1. Achieve — Taft Stettinius Entrep 2018
  2. ANA — Reed Smith MediaRebates 2012
  3. Association of Foreign Banks — Norton Rose Brexit 2017
  4. Australian Institute of Company Directors — King Wood AustralianDirs 2016
  5. Becker Büttner Held — Shakespeare Martineau Brexit 2017
  6. Economist Group — Herbert Smith MandA 2017
  7. Gamesa — Brodies Firm Wind 2013
  8. Institution of Civil Engineers and techUK and Mergermarket — Pinsent Masons Infratech 2017
  9. Ipsos MORI Scotland — Brodies Firm Brexit 2017
  10. IVC Research Center — Meitar Liquornik TechVC 2018
  11. National Foreign Trade Council — Miller Chevalier TaxPolicy 2018
  12. Northern Ireland Chamber of Commerce — Goodbody GDPR 2018
  13. Oxford Analytica — Morrison Foerster ConsumerProd 2018
  14. Ponemon Institute — McDermott Will GDPR 2018, Kilpatrick Townsend CyberSec 2017
  15. Singapore Corp. Counsel — CMS SingaporeGCs 2018
  16. The Lawyer and YouGov — Pinsent Masons Brexit 2017
  17. “an independent consultancy” — Carlton Fields CA 2018

As of this writing, therefore, law firms have teamed on research surveys with at least 47 different organizations. Because some of those organizations have been involved in more than one survey by the firm (and sometimes surveys by more than one firm), the total of surveys with a co-contributor is likely nearly 70. But it is impossible to figure out the percentage that have a co-contributor even of the 309 law firm surveys I know about. First, I have not checked each one. Second, a few dozen of those surveys are known only from a press release, article or later survey report, not from a PDF report. Third, a firm might have worked with another entity without acknowledging that entity in the survey report.

Interviews can supplement the quantitative data gathered by a survey

Several firms combine modes of data gathering. They start with a survey emailed to their invitee list or otherwise publicized. At some point later the firm (or the service provider it retained) seeks interviews with a subset of the invitees. (At least we assume that those who were interviewed also completed a survey, but the reports do not confirm that assumption.)

The survey gathers quantitative data while the interviews gather qualitative insights. Interviews cost money, but what firms learn from conversations deepens, clarifies and amplifies the story told by survey data. Interviews also enable the firm to strengthen its connections to participants who care about the topic.

The reports make little of the interview process and provide almost no detail about the interviews. They show up as quotes and case studies. DLA Piper Debt 2015, for example, states that 18 interviews were conducted; commendably, it lists the names and organizations of those who were interviewed [pg. 30]. We show the first few in the snippet below.

Reed Smith LondonWomen 2018 [pg. 22] mentions that “Several individuals opted to take part in further discussion through email exchange, in-person meetings and telephone interviews.” As a prelude to those discussions, in the invitation to women to take the survey the firm explained: “We will be inviting those who wish to speak on-the-record to take part in telephone or in-person interviews to impart advice and top tips. If you wish to take part in an interview, please fill in the contact details at the end of the survey.” This background tells us about the opt-in process of the firm, although the report itself does not refer to it.

HoganLovells Cross-Border 2014 [pg. 28] explains that interviews were conducted with 140 “general counsel, senior lawyers, and executives.” As with the other examples here, the report adds no detail about how long the interviews lasted or the questions asked during them.

Clifford Chance Debt 2007 [pg. 3] doesn’t say how many interviews were conducted, only that interviews took place during November 2007. It would have been good for the firm to have said something more about how many people they spoke with and how those people were chosen.

Norton Rose Lit 2017 surveyed invitees, “with a telephone interview campaign following” [pg. 5] and adds later in the report [pg. 38] that there was an “interview campaign following [the online survey] across July, August and early September 2017.”

NAICS classification of industries would help surveys four ways

If only there were a standard way to describe survey participants by industry … There is! Law firms could identify, analyze, and report on their participants by the North American Industry Classification System (NAICS) categories. This system has superseded the venerable SIC (Standard Industrial Classification) categories. The NAICS offers a range of two-digit classifications that map well to the proliferation of industry/sector designations seen in law firm reports. Those classifications, together with the three- and four-digit elaborations on them, easily suffice for law-firm research surveys.

If NAICS codes became the convention for law firm research surveys, at least four benefits would follow.

Mash-up data. For data analysts, “mash-up” describes the process of melding two sets of data. If firms used the NAICS, other data would then be available for analysis. Longitudinal data sets, meaning those maintained over a period of time, that the U.S. government has collected by NAICS code can supplement information about the number of businesses in the industry, more detail about those businesses, the number of employees in the businesses, and so forth. Everyone would benefit from richer, more insightful analyses after various mash-ups.
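A sketch of such a mash-up, keyed on two-digit NAICS codes; the survey rows and business counts are invented:

```python
# Survey responses tagged with two-digit NAICS codes (hypothetical):
survey = [
    {"naics": "31", "satisfaction": 4},
    {"naics": "52", "satisfaction": 5},
]

# Counts of businesses by the same codes (hypothetical figures, standing
# in for a government data set):
census = {"31": 250_000, "52": 470_000}

# The shared NAICS key makes the mash-up a simple join:
enriched = [dict(r, businesses_in_industry=census[r["naics"]]) for r in survey]
print(enriched[0])
# {'naics': '31', 'satisfaction': 4, 'businesses_in_industry': 250000}
```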

Consistency among surveys. If law firms adopted this standard classification system, readers of their reports and researchers would be much more able to compare results by industries. In the current disorder, and so long as each firm defines its industries idiosyncratically, comparisons and meta-analyses become much harder to carry out, if not impossible.

Improving the representativeness of the sample data. Because the NAICS data sets provide law firms with reliable counts of companies by industry, they could deploy techniques to make their convenience samples more representative of the actual distribution of U.S. businesses. One method of doing this, which we explain elsewhere, is called “raking.” As sample data is transformed to closely resemble population data, deeper statistical analyses become available.
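A bare-bones sketch of raking (iterative proportional fitting) on a 2x2 cross-tabulation; all counts and margin targets are invented:

```python
# Sample counts cross-tabulated by industry (rows) and company size
# (columns), with known population targets for each margin
# (all figures hypothetical):
counts = [[30.0, 10.0],
          [20.0, 40.0]]
row_targets = [50.0, 50.0]   # population totals for each industry
col_targets = [40.0, 60.0]   # population totals for each size band

# Alternately rescale rows and columns until both margins match.
for _ in range(100):
    for i, row in enumerate(counts):
        scale = row_targets[i] / sum(row)
        counts[i] = [c * scale for c in row]
    for j in range(len(col_targets)):
        scale = col_targets[j] / sum(row[j] for row in counts)
        for row in counts:
            row[j] *= scale

# Both margins now match the population targets.
print([[round(c, 1) for c in row] for row in counts])
```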

Impute missing values. “Imputation” is the term statisticians use for filling in missing values. If a law firm has data about its participants by their NAICS code plus other information such as revenue, the firm could impute the number of employees of that company. An explanation of that methodology to supplement data can be found elsewhere, but it would be available to a firm so long as the industry coding conforms to the NAICS. For example, a firm that collects revenue, industry code, and state can even more accurately impute a number for employees. Fuller data sets enable better analyses.
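A minimal sketch of mean imputation within NAICS groups; the respondents and figures are invented, and a real imputation would use more predictors (such as revenue and state, as noted above) than this:

```python
# Respondents with a NAICS code and revenue ($M); one lacks an
# employee count (all figures hypothetical):
rows = [
    {"naics": "52", "revenue": 900, "employees": 3000},
    {"naics": "52", "revenue": 1100, "employees": 5000},
    {"naics": "31", "revenue": 500, "employees": 2000},
    {"naics": "52", "revenue": 1000, "employees": None},  # missing value
]

# Collect the known employee counts within each NAICS group.
by_code = {}
for r in rows:
    if r["employees"] is not None:
        by_code.setdefault(r["naics"], []).append(r["employees"])

# Impute each missing value with the mean of its industry group.
for r in rows:
    if r["employees"] is None:
        group = by_code[r["naics"]]
        r["employees"] = sum(group) / len(group)

print(rows[-1]["employees"])  # 4000.0
```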