Years of survey series and numbers of participants

Does the longevity of a survey series affect the average number of participants in the series? On its own, this is likely too crude a question, because the target populations of different series differ significantly. Firms might also modify their questions as a series goes along rather than repeat them verbatim, which could affect participation. A series might bring on different co-coordinators or change how it reaches out for participants. If we could control for factors such as these, which might swamp changes in participant numbers arising simply from annual invitations, content, and publicity, we could make some headway on the question, but data at that level of detail is not available. Also, averaging participant numbers over the years of a survey series may conceal material ups and downs.

Moreover, of greater usefulness to law firms would be knowing whether numbers of participants tend to increase over the life of a series as it becomes better known and more relied on.

We plunge ahead anyway. To start, consider the series that have been sponsored by a law firm for four years or more. We know of 21, which are presented in the plot below. The color coding in the legend at the bottom corresponds to how many surveys have been in the series (some of which are ongoing), moving from midnight blue for the four-year series to the lightest color (yellow) for the longest-running series (13 years).

As we speculated above, a regression of the number of years a survey series has run against its average number of participants provides no insight. Factors other than longevity influence the number of participants more.

How many invitees submit answers to law firm surveys?

Of the 464 law firm research surveys located to date, the number of participants is known for 273. Osborne Clarke Consumer 2018 collected an extraordinary 16,000 participants, so we have set it aside for this analysis, along with the next-largest survey, CMS Restaurant 2018 at 5,446 participants, because both materially skew the aggregate calculations for the distribution.

Based on the slightly reduced data set, the average number of participants is 417 while the median is 203. At the extremes, 11 surveys had fewer than 50 participants while six had 2,000 or more. Without the two outliers, the grand (known) total reaches 92,098.
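
The trimming step can be sketched as follows; the participant counts below are hypothetical stand-ins, with the two largest values playing the role of the excluded outliers:

```python
from statistics import mean, median

# Hypothetical participant counts; the two largest stand in for the
# Osborne Clarke and CMS outliers excluded in the text.
counts = [48, 120, 203, 350, 90, 2100, 16000, 5446, 410, 275]

trimmed = sorted(counts)[:-2]  # drop the two largest values
print(f"mean={mean(trimmed):.0f}, median={median(trimmed):.0f}")
```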

The plot that follows shows the total number of participants per year.

The box plot shows more about the distribution of participants each year. The medians have been consistently around 200 participants. Lately, however, some outliers have been significantly above that figure.

Why do people take the time to respond to surveys from law firms?

  1. Most of them have some intrinsic interest in the subject of the survey.
  2. Longer term thinkers appreciate that reliable data about a subject will benefit everyone.
  3. Some respondents may feel flattered. Providing data and views affirms their sense of competence and knowledge.
  4. A survey is a break in the typical flow of work.
  5. Respondents feel grateful or loyal to the law firm that solicits answers.
  6. Many people feel good about being asked a favor and complying.

How long survey collection continues with law firm sponsors

For 44 research reports I have determined how long the survey was open, i.e., the data collection period. I picked those reports haphazardly over time — making no effort to be random or representative but simply to start calculating some statistics. With that caveat, the average data collection period is 1.5 months with a standard deviation of 0.74, which means that about two-thirds of the periods fell between roughly 0.8 months (~3.5 weeks) and 2.2 months (~10 weeks). The shortest collection period was 0.1 months (3 days) while the longest was 3 months.
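
The one-standard-deviation rule of thumb can be reproduced directly; the collection periods below are illustrative, chosen to land near the reported mean of 1.5 months, not the actual 44 reports:

```python
from statistics import mean, stdev

# Illustrative collection periods in months (not the actual data).
periods = [0.1, 0.5, 1.0, 1.0, 1.5, 1.5, 1.5, 2.0, 2.0, 2.5, 3.0]

m, s = mean(periods), stdev(periods)
# For roughly bell-shaped data, about two-thirds of the values fall
# within one standard deviation of the mean.
low, high = m - s, m + s
print(f"mean={m:.2f} sd={s:.2f} one-sigma range=({low:.2f}, {high:.2f})")
```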

The plot shows the distribution of open periods together with the month in which the survey launched. No particular month seems favored.

Here are several reasons why law firms call a halt to collecting survey responses.

  1. New responses have slowed to a trickle
  2. A practice group is eager to start the analysis and find something out!
  3. Staff and partners have been pushed enough to persuade more participants
  4. The firm has emailed three reminders to potential participants
  5. The co-contributor has done enough or been pushed enough
  6. Qualified responses have hit triple digits, a respectable data set
  7. The participant group is sufficiently representative or filled out
  8. Marketing wants to get out first or early on some current issue
  9. The firm wants to meet the promise it made to participants to send them a report promptly
  10. The budget says it’s time to start the analysis (and usually a report)

The analysis and report preparation can begin before the last survey submission, but that is easier to do with a programming script that lets an analyst read in updated data and re-run the analysis with little additional effort.
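
A sketch of that workflow, assuming a simple CSV export with a `participants` column (the column name and data are hypothetical):

```python
import csv
import io
from statistics import mean, median

def summarize(csv_text: str) -> dict:
    """Recompute summary statistics from the latest data export.

    Re-running this on an updated file refreshes the whole analysis
    with little additional effort."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    counts = [int(row["participants"]) for row in rows]
    return {"n": len(counts), "mean": mean(counts), "median": median(counts)}

# Hypothetical export; in practice this would be read from a file.
data = "survey,participants\nA,100\nB,200\nC,360\n"
print(summarize(data))
```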

Potential participants of surveys on a logarithmic scale

Among the hundreds of survey reports that I have located, the number of participants varies enormously. The variance is a function of many factors:

  1. the size and quality of the law firm’s contact list
  2. whether there is a co-contributor, and the quality of its contact list
  3. the mix and amount of efforts to publicize the opportunity to take the survey
  4. the topic of the survey
  5. the length, complexity and design of the survey questionnaire
  6. the period of time that the survey stays open
  7. whether a survey is part of a series
  8. inducements offered for participation
  9. reputation of the law firm.

But some variance in participation numbers relates to the total number of potential participants. All else being equal, a survey targeted at a relatively small number of potential participants will not reach the numbers of a broad-based survey. Stated differently, if only a few hundred people qualify to take a survey, 100 responses might mean a robust response rate of 20% or higher, whereas against a huge pool of potential respondents, the same 100 responses would amount to an anemic rate below 1%.
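
The arithmetic is worth making concrete; both pool sizes below are illustrative:

```python
# The same 100 responses imply very different response rates depending
# on the size of the eligible pool (both pool sizes are illustrative).
responses = 100
for pool in (500, 50_000):
    print(f"pool of {pool}: response rate {responses / pool:.1%}")
```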

To start a framework for evaluating potential participant numbers, I looked at 16 survey reports that have between 100 and 110 participants. By holding the number of actual respondents roughly constant, I thought I could evaluate the factors that influenced participation. But the other factors became too numerous and the data set was too small.

So, since none of the reports stated even the number of email invitations sent out, I estimated my own figures for how many could have been invited. I chose to use a base-10 logarithmic scale to roughly categorize the potential total populations. Thus the smallest category was for narrow-gauged surveys with hundreds of potential participants: the ten-squared category (10²). The next largest category aimed at roughly 10 times more participants: thousands, as ten cubed (10³). Even broader surveys would have had a reachable set of possible participants in the tens of thousands, at ten raised to the fourth power (10⁴). At the top end of my very approximate scale are surveys that could conceivably have invited a hundred thousand participants or more (10⁵).
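
This banding amounts to taking the base-10 order of magnitude of the estimated pool, clamped to the 10²–10⁵ range; a minimal sketch with made-up pool estimates:

```python
import math

def log_band(pool_estimate: int) -> int:
    """Base-10 order of magnitude of an estimated pool, clamped to
    the 10^2 .. 10^5 bands described above."""
    exponent = math.floor(math.log10(pool_estimate))
    return min(max(exponent, 2), 5)

# Hypothetical pool estimates for four surveys.
for pool in (400, 3_000, 45_000, 250_000):
    print(f"{pool} -> 10^{log_band(pool)}")
```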

Below is how I categorized the surveys by this estimated log scale and in alphabetical order within increasing bands of potential participants. The quotes come from the report at the page noted. I have shortened them to the core information on which I estimated the scope of the survey’s population.

Even though my categorizations are loose and subjective, the point is that the number of respondents as a percentage of the total possible participants can range from significant percentages down to microscopic ones. That is to say, response rates vary enormously in these — and probably all — law firm research surveys.


Potential participants on the order of hundreds (10²):
Clifford Chance Debt 2010 [pg. 4] “canvassed the opinion of 100 people involved in distressed debt about their views of the Asia-Pacific distressed debt market.”

CMS GCs 2017 [pg. 26] had “a quantitative survey of 100 senior in-house respondents [from] law departments” that were almost half “drawn from FTSE 350 or FTSEurofirst 300 companies. A further 7% represent Fortune 500 companies.”

DWF Food 2018 [pgs. 3, 8] “surveyed 105 C-suite executives from leading food businesses” that are “in the UK.”

Pepper Hamilton PrivateFunds 2016 [pg. 1] “contacted CFOs and industry professionals across the US” who work in private funds.


Potential participants on the order of thousands (10³):
CMS Russia 2009 [pg. 3] explains that its co-coordinator “interview[ed] 100 Russian M&A and corporate decision makers.”

Foley Lardner Telemedicine 2017 [pg. 16] “distributed this survey … and received responses from 107 senior-level executives and health care providers at hospitals, specialty clinics, ancillary services and related organizations.”

Reed Smith LondonWomen 2018 [pg. 22] explains that “A survey was launched via social media which was open to women working in the City of London with a job title equivalent to director, partner, head of department or C-level status.”

Technology Law GDPR 2017 [pg. 2] writes that “In-house legal counsel from 100 different organizations (the majority of which had 1,000+ employees) were invited to participate in a survey.”


Potential participants on the order of tens of thousands (10⁴):
Burgess Salmon Infrastructure 2017 [pg. 3] “drew on the opinions of over 100 [infrastructure] industry experts.”

Dykema Gossett Auto 2016 [pg. 3] “distributed its [survey] via e-mail to a group of senior executives and advisers in the automotive industry including CEOs, CFOs and other company officers.”

Freshfields Bruckhaus Crisis 2013 [pg. 3] “commissioned a survey of 102 senior crisis communications professionals from 12 countries across the UK, Europe, Asia and the US.”

Norton Rose ESOP 2014 [pg. 2] “conducted a survey of 104 [Australian] businesses — from startups to established companies.”

Reed Smith Lifesciences 2015 [pg. 4] commissioned a co-coordinator that “surveyed 100 senior executives (CEO, CIO, Director of Strategy) in biotechnology and pharmaceuticals companies” around the world.


Potential participants on the order of a hundred thousand or more (10⁵):
Berwin Leighton Risk 2014 [pg. 2] researched “legal risk” in financial services organizations around the world. “The survey was submitted to participants in electronic format by direct email and was also hosted online at the BLP Legal Risk Consultancy homepage.”

Dykema Gossett MA 2013 [pg. 10] “distributed its [survey] via e-mail to a group of senior executives and advisors, CFOs and other company officers.”

Proskauer Rose Empl 2016 [pgs. 3-4] retained a co-coordinator that “conducted the survey online and by phone with more than 100 respondents who are in-house decision makers on labor and employment matters.”

More participants in surveys with reports than in surveys without formal reports

Of the 420-some research surveys by law firms that I know about, about one in five is known only from a press release or article. I have located a formal report for all the others. To be clear, the law firm or a co-coordinator may have published the results of those surveys in a formal report that I simply have not yet located.

Obviously, it costs less to draft a press release or an article than to produce a report. Reports require design decisions, plots, graphics, and higher standards of quality. Beyond expense, it also seems plausible that firms choose not to go through the effort of preparing a formal report if the number of participants in the survey seems too small.

To test that hypothesis about relative numbers of participants, I looked at nine survey results — let’s call them “non-report surveys” — that disclose the number of survey participants. I also collected data from 204 formal reports that provide participant numbers. The average number of participants in the non-report surveys is 157. Where there is a report, the average is 420, but that number is hugely inflated by one survey with 16,000 participants. When we omit that survey, the average drops to 343 — still more than twice the average of the non-report surveys.

When we compare medians, the figures are 157 participants for the non-report surveys versus 203 for the reports. Thus, the median is roughly 30% higher where a report has been published than where one has not.

A statistical test asks whether the difference in averages between two sets of numbers reflects a meaningful difference between the groups — here, in participants. With our data, the crucial test value turns out to be so small that we can confidently reject the hypothesis that no difference exists between the two sets. Larger numbers of participants are strongly associated with surveys that produced reports.

Technical note: For those interested in the statistics, we ran a Welch two-sample t-test and found a p-value of 0.0331, which means that if there were truly no difference between the two populations, repeated sampling from the universe of reported and non-reported surveys would produce a difference in averages this large only about 3% of the time. Such a low percentage justifies concluding that the data come from meaningfully different populations (the results are “statistically significant”). Bear in mind that I have not looked at all the non-reports in my collection, and a few more of them added to the analyzed group could change the figures materially and therefore the statistical test. Or there may be a formal report out there somewhere.
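
For the curious, the Welch statistic itself can be computed with the standard library; the participant counts below are hypothetical, scaled to echo the averages in the text (157 vs. 343), and the p-value step is omitted because it requires a t-distribution CDF (e.g., from scipy.stats):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom.

    The p-value itself requires a t-distribution CDF (e.g. from
    scipy.stats), which the standard library does not provide."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical participant counts for non-report and reported surveys.
non_report = [80, 120, 140, 150, 160, 170, 180, 190, 220]
reported = [180, 200, 250, 300, 320, 350, 380, 400, 450, 600]

t, df = welch_t(reported, non_report)
print(f"t={t:.2f}, df={df:.1f}")
```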

Average number of participants in surveys by law firms

In the data set so far, we know the number of participants for 114 of the surveys. Three obstacles have prevented us from knowing the participant numbers of the other 235 identified surveys. First, when no report has been published (or located by us), a press release or an article referencing the survey often omits participant numbers. Second, even when we have located a PDF of a published report, sometimes it does not provide that crucial fact of methodology. Third, we have not yet taken the time to extract participant numbers from all the existing PDF reports.

Extraordinarily, Osborne Clarke Consumer 2018 obtained 16,000 participants. For this analysis and the associated plot, we have dropped that survey from the data set because otherwise its enormous response would skew the results.

The number of surveys for which participant numbers are available is low in the early years, but in the last decade the average number of participants hovers around the 250 mark. Over this entire set, the median number of participants is 210.

The plot with goldenrod columns divides all of the surveys for which we have participant data into 10 roughly equal groups. The groups are equal in that each contains approximately the same number of surveys, but the ranges of participant counts they cover vary. To pick one for explanatory purposes: in the leftmost range, a dozen surveys collected data from 20 to fewer than 69 participants.
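
Equal-count ranges of this kind are just deciles, and the standard library can compute the nine cut points directly; the counts below are hypothetical stand-ins for the real data:

```python
from statistics import quantiles

# Hypothetical participant counts standing in for the real data.
counts = [20, 35, 50, 69, 90, 110, 140, 170, 203, 240,
          280, 330, 400, 480, 560, 700, 900, 1200, 1800, 2500]

# quantiles(..., n=10) returns the nine boundaries that split the data
# into ten ranges with roughly the same number of surveys in each.
deciles = quantiles(counts, n=10)
print([round(d) for d in deciles])
```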