Numbers of co-contributors on surveys conducted by law firms

When an organization helps on a law firm’s research survey, the report clearly acknowledges that contribution. For example, as in the snippet below, Burgess Salmon Infrastructure 2018 [pg. 8] gave a shout-out to its two co-contributors (Infrastructure Intelligence and YouGov).

At least 12 law firms have conducted surveys with two different co-contributors. Three firms have worked with four co-contributors (Dentons, Morrison & Foerster, and Reed Smith) and two firms have worked with six co-contributors (CMS and Pinsent Masons).

Interestingly, two law firms have teamed with one or more other law firms: Shakespeare Martineau Brexit 2017 with Becker Büttner Held and Miller Chevalier LatAmCorruption 2016 with 10 regional law firms.

For most surveys with a co-contributor, the pairing is one law firm and one co-contributor. However, Pinsent Masons Infratech 2017 and Clifford Chance Debt 2007 each sought the assistance of three co-contributors.

At this point, at least eleven co-contributors have helped on more than one survey by a law firm: Acritas, Alix Partners, ALM Intelligence (4 surveys), Canadian Corporate Counsel Association (5), the Economist Intelligence Unit, FTI Consulting (3), Infrastructure Intelligence, IPSOS (5), Ponemon Institute, RSG Consulting (3), and YouGov.

Double surveys by law firms, with two meanings

Consider two different meanings of “double survey.” One meaning applies to a law firm sending out two surveys, each to a different target audience, and then combining the responses in a report. A second meaning applies to a firm conducting more than one survey in a year, but with the same target audience.

Burgess Salmon Infrastructure 2018 [pg. 8] explains that it simultaneously conducted two separate surveys, one by interviews and the other by an online questionnaire. The report juxtaposes the findings.

Minter Ellison Cybersecurity 2017 [pg. 6] also undertook a double survey. With separate instruments, it reached out to members of boards of directors and also to chief information officers and others. The report combines the data.

Turning to the second meaning of “double survey”, one example started in 2015. Haynes Boone has conducted its energy borrowing survey twice yearly since then, e.g., Haynes Boone Borrowing 2018 [pg. 2].

Other firms that have conducted surveys twice a year on a topic include Morrison Foerster, e.g., Morrison Foerster MA 2018, and Irwin Mitchell, e.g., Irwin Mitchell Occupiers 2014. We also found an instance of quarterly surveys: Brodies Firm Brexit 2017!

Use scale questions, but think about text labels

Quite often law firms ask respondents to answer a question with a value from a scale. Those values should represent balanced positions: the conceptual distance from one point to the next should be equal. For example, researchers have shown that respondents perceive the strongly disagree-disagree-neutral-agree-strongly agree scale as balanced.

Most survey designers set the bottom point as the worst possible situation and the top point as the best possible, then evenly spread the scale points in-between.

The text selected for the spectrum of choices deserves extended discussion. Sometimes survey questions add text only to the polar values of a scale, for example: “Choose from a scale of 1 to 6, where 1 indicates ‘Yes, definitely’ and 6 indicates ‘No, definitely not.’” Alternatively, the question could supply text for the intermediate scale positions as well: 2 indicates “Yes, probably”, 3 indicates “Maybe”, and so on.
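To make the two labeling schemes concrete, here is a minimal Python sketch of each as a mapping; the labels for positions 4 and 5 in the fully labeled version are hypothetical fill-ins, not drawn from any survey:

    # Endpoint-only labeling: respondents see text for 1 and 6 alone.
    endpoints_only = {1: "Yes, definitely", 6: "No, definitely not"}

    # Fully labeled scale: every position gets text. The labels for 4
    # and 5 are invented here to complete the example.
    fully_labeled = {
        1: "Yes, definitely",
        2: "Yes, probably",
        3: "Maybe",
        4: "Probably not",
        5: "No, probably not",
        6: "No, definitely not",
    }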

DLA Piper Compliance 2017 [pg. 6] used a 10-point scale, with text at the extremes and at the middle position:

It is hard to create text descriptions of scale positions that respondents perceive as equally spaced. If you supply only numbers, respondents will unconsciously space the choices evenly, but you will have a less clear window into what each number meant to them. On the other hand, words are inherently ambiguous and introduce all kinds of variability in how respondents interpret them.

Often the responses to a well-crafted scale question come back reasonably “normal,” as in the oft-seen bell-curve normal distribution. The midpoint gets the most responses and the numbers drop off fairly symmetrically on either side. Here is an example from a five-point scale.
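A quick way to generate a comparable example is to simulate one; this minimal Python sketch (made-up, simulated data) produces roughly bell-shaped answers on a 1-5 scale:

    import random
    from collections import Counter

    # Simulate bell-shaped answers on a 1-5 scale: draw from a normal
    # distribution centered on the midpoint and clamp to the scale.
    random.seed(1)
    answers = [min(5, max(1, round(random.gauss(3, 1)))) for _ in range(500)]

    # The midpoint collects the most responses; the tails taper off.
    print(sorted(Counter(answers).items()))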


How many invitees submit answers to law firm surveys?

Of the 464 law firm research surveys located to date, the number of participants is known for 273. Osborne Clarke Consumer 2018 collected an extraordinary 16,000 participants, so we have set it aside for this analysis, along with the next-largest survey, CMS Restaurant 2018 at 5,446 participants, because the two materially skew the aggregate calculations for the distribution.

Based on the slightly reduced data set, the average number of participants is 417 while the median is 203. At the extremes, 11 surveys had fewer than 50 participants while six had 2,000 or more. Without the two outliers, the grand (known) total has reached 92,098.
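As a sketch of this sort of calculation, here is minimal Python using placeholder counts rather than the actual data set:

    import statistics

    # Placeholder participant counts; the real data set has 273 known values.
    participants = [48, 102, 203, 417, 950, 2100, 5446, 16000]

    # Set aside the two largest surveys before summarizing, as in the text.
    trimmed = sorted(participants)[:-2]

    print("mean:", statistics.mean(trimmed))
    print("median:", statistics.median(trimmed))
    print("total:", sum(trimmed))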

The plot that follows shows the total number of participants per year.

The box plot shows more about the distribution of participants each year. The medians have been consistently around 200 participants. Lately, however, some outliers have been significantly above that figure.

Why do people take the time to respond to surveys from law firms?

  1. Most of them have some intrinsic interest in the subject of the survey.
  2. Longer-term thinkers appreciate that reliable data about a subject will benefit everyone.
  3. Some respondents may feel flattered. Providing data and views affirms their sense of competence and knowledge.
  4. A survey is a break in the typical flow of work.
  5. Respondents feel grateful or loyal to the law firm that solicits answers.
  6. Many people feel good about being asked a favor and complying.

How long survey collection continues with law firm sponsors

For 44 research reports I have determined how long the survey was open, i.e., the data collection period. I picked those reports haphazardly over time, making no effort to be random or representative but simply to start calculating some statistics. With that caveat, the average data collection period is 1.5 months with a standard deviation of 0.74 months, which means that about two-thirds of the periods fell between 0.8 months (~3 weeks) and 2.3 months (~10 weeks). The shortest collection period was 0.1 months (3 days) while the longest was 3 months.
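That two-thirds figure is the usual one-standard-deviation rule of thumb for roughly normal data. A minimal Python illustration, with hypothetical periods:

    import statistics

    # Hypothetical collection periods in months (the real sample has 44).
    periods = [0.5, 1.0, 1.0, 1.5, 1.5, 2.0, 2.5, 3.0]

    mean = statistics.mean(periods)
    sd = statistics.stdev(periods)

    # For roughly normal data, about two-thirds of values fall within
    # one standard deviation of the mean.
    print(f"about two-thirds between {mean - sd:.1f} and {mean + sd:.1f} months")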

The plot shows the distribution of open periods together with the month in which the survey launched. No particular month seems favored.

Here are several reasons why law firms call a halt to collecting survey responses.

  1. New responses have slowed to a trickle
  2. A practice group is eager to start the analysis and find something out!
  3. Staff and partners have been pushed enough to persuade more participants
  4. The firm has emailed three reminders to potential participants
  5. The co-contributor has done enough or been pushed enough
  6. Qualified responses have hit triple digits, a respectable data set
  7. The participant group is sufficiently representative or filled out
  8. Marketing wants to get out first or early on some current issue
  9. The firm wants to meet the promise it made to participants to send them a report promptly
  10. The budget says it’s time to start the analysis (and usually a report)

The analysis and report preparation can begin before the last survey submission, but that is easier to do with a programming script that lets an analyst read in updated data and re-run the analysis with little additional effort.
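A minimal sketch of such a script, assuming responses get exported periodically to a CSV; the file name and column here are hypothetical:

    import pandas as pd

    # Re-running this script after each export refreshes the analysis
    # with no manual rework.
    responses = pd.read_csv("survey_responses.csv")

    print("responses so far:", len(responses))
    print(responses["q1_scale"].describe())  # e.g., a 1-5 scale question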

Titles of law firm reports based on research surveys

The most common style of title starts with a few keywords and then adds another line or a few explanatory words after a colon. Here Ashurst GreekNPL 2017 is one of many examples:

Titles on survey reports range from functional to fanciful. “Outsourcing Public Services Across Local Government” [Ashfords Outsource 2017] is about as meat-and-potatoes as it gets; “South Florida Real Estate 2016 Outlook Survey” [Berger Singerman SFlaRE 2016] has a similar matter-of-factness.

Some titles expand: “The Good, the Bad, and the Troubling: Fasken Martineau’s 2017 Employer Occupational Health and Safety Survey Report Legal Compliance Challenges For Canadian Employers” [Fasken Martineau OHS 2017] or “Getting it right from the ground up: A survey on construction disputes: The causes and how to avoid them” [Russell McVeagh ConstrDisp 2018].

Only rarely does a title include both the name of the firm and the year, as in “Baker McKenzie 2016 Cloud Survey” [Baker McKenzie Cloud 2016]. A touch more commonly, the year alone appears: “European Acquisition Finance Debt Report 2011” [DLA Piper EuropeDebt 2011].

Titles that stand out entice the reader. Examples include “Mythbusting the common law marriage” [Mills Reeve CommonLaw 2017] or “The Multichannel High Street: A Nation of Shoppers: but is it a nation of shopkeepers?” [Squire Sanders Retail 2013].

Most titles get the job done with simple language and structure. A few approach complexity: “Finding the balance: human touch versus high tech: Millennials and the future of the hotel and restaurant sector” [CMS Restaurants 2018].

When law firms conduct a series of surveys, the titles usually morph in minor ways as the years pass. For instance:

  1. “Survey Of Office Occupiers: Changing Attitudes To Property Needs” [Irwin Mitchell Occupiers 2014]
  2. “Survey Of Office Occupiers – Part III: Changing Attitudes To Property Needs – Autumn 2015” [Irwin Mitchell Occupiers 2015]
  3. “Survey Of Office Occupiers – Part IV: Changing Attitudes To Property Needs and the Impact of Brexit – Summer 2016” [Irwin Mitchell Occupiers 2016]
  4. “Property Trends in 2018 – Survey of Office Occupiers” [Irwin Mitchell Occupiers 2018]

Types of co-contributors on law firm research surveys

Earlier I identified co-contributors who have teamed with various law firms on research surveys. That is not to say the law firm always leads the survey project and retains the co-contributor. Some research projects happen the other way around: perhaps a group that lacks funds solicits a law firm to help out, or another organization wants legal commentary. This analysis does not differentiate surveys by the respective roles of the law firm and its co-contributor.

Based on the 91 survey reports available in PDF that I have analyzed, approximately 106 co-contributors are named (some more than once). A more precise count would depend on whether the units of larger organizations, e.g., Acuris units and Economist Group units, are categorized separately or collectively.

Based on self-descriptions on their home webpages, I categorized the co-contributors into 15 types. The line between types is loose, as between “Market Research” and “Marketing”, or between “Consulting” and either of those. Be that as it may, knowing that more work needs to be done to confirm all of the matchups and that other research surveys will turn up additional co-contributors, at least a preliminary view can be shared here. The plot below shows the initial results.

Law firms that do not proceed entirely on their own with a survey gravitate toward co-contributors that help them reach the target market. Market research firms, publications that reach a sector or a niche within one, and trade groups whose members share interests are by far the most common partners. Beyond those frequent collaborators, firms team with a wide variety of co-contributors.
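As a sketch of the underlying tally, here is minimal Python that maps a few of the co-contributors named earlier to plausible types; the excerpt and the assignments are illustrative, not my full categorization:

    from collections import Counter

    # A small excerpt of a co-contributor-to-type mapping; the full
    # mapping covers roughly 106 names across 15 types.
    types = {
        "YouGov": "Market Research",
        "IPSOS": "Market Research",
        "Infrastructure Intelligence": "Publication",
        "Canadian Corporate Counsel Association": "Trade Group",
        "FTI Consulting": "Consulting",
    }

    # Tally how many co-contributors fall into each type.
    for category, n in Counter(types.values()).most_common():
        print(category, n)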

Largest U.S. firms, gross revenue and number of research surveys

From the AmLaw listing for 2017, I looked at the top 25 law firms by gross revenue. To learn which of them have conducted or taken part in data-oriented research surveys, my investigation so far has consisted of searching Google for the name of the firm plus the word “survey” and scanning the first five or six pages of hits. The better method, searching the website of each firm itself, will have to come later.

In any case, at this point it appears that 16 of the 25 highest-grossing U.S. law firms have not been involved in a research survey. In the plot below, they are the firms with no green bar: Latham Watkins, Kirkland Ellis, Skadden Arps, Jones Day, Sidley Austin (which started a survey a couple of years ago but did not complete it), Morgan Lewis, Gibson Dunn, Greenberg Traurig, Sullivan Cromwell (although I ran across a reference to a survey it did in 2010 about boards of directors), Simpson Thacher, Cleary Gottlieb, Weil Gotshal, Paul Weiss, Quinn Emanuel, Davis Polk, and Wilmer Cutler.

The other nine firms are known to have sponsored at least one research survey, and six of them have been involved in more than one. The laurel wreath goes to DLA Piper, which at 28 surveys known to me almost equals the combined 32 of the other eight firms.

The plot sorts the law firms in descending order by gross revenue, which shows that five of the top 12 firms have put this tool to use. Overall, however, the majority of these elite, huge U.S. law firms have not seen sufficient reason to take part in or publish a research survey.

Visualize variables in surveys with Sankey diagrams

Let’s say we would like to understand and visualize how survey reports vary in frequency by country, page orientation, and involvement of co-contributors. A Sankey diagram (aka river plot) can reveal such patterns by drawing flows whose widths are proportional to the counts.

Consider a data set of 214 research-survey reports. For each report we know the headquarters country of the law firm, or that the firm is a “VereinCLG” (a Swiss verein or a company limited by guarantee). Thus, for 9 surveys by Canadian law firms, 48 by UK law firms, 109 by U.S. firms, and 48 by VereinCLGs, we also know whether the report was in portrait or landscape orientation and whether the firm teamed with a co-contributor or surveyed on its own.

Starting at the left of the Sankey diagram below, the heights of the four rectangles tell the relative proportions of surveys by country. Each rectangle then divides into two streams: the top stream flows into the Portrait orientation rectangle and the bottom stream flows into the Landscape rectangle. In the middle of the plot, the green rectangles indicate by their relative heights the proportions of portrait and landscape reports. Two streams flow from each of the orientation rectangles, the top one indicating the proportion of reports that did not have a co-contributor (FALSE) and the lower stream the proportion that did (TRUE). Again, the relative heights of the right-most rectangles suggest the proportions.
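For anyone who wants to reproduce a similar diagram, here is a minimal sketch in Python with plotly, not necessarily the tool behind the plots here. The country totals match the text; the splits by orientation and co-contributor are invented for illustration:

    import plotly.graph_objects as go

    labels = ["Canada", "UK", "US", "VereinCLG",
              "Portrait", "Landscape",
              "No co-contributor", "Co-contributor"]

    # Each link flows from a source node to a target node with a width
    # proportional to its value. The split values below are illustrative.
    source = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
    target = [4, 5, 4, 5, 4, 5, 4, 5, 6, 7, 6, 7]
    value = [6, 3, 40, 8, 70, 39, 30, 18, 80, 66, 35, 33]

    fig = go.Figure(go.Sankey(
        node=dict(label=labels),
        link=dict(source=source, target=target, value=value),
    ))
    fig.show()

Re-pointing the links at different nodes reorders the variables, which is essentially the small code change described below.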

Consider the reports published by UK law firms. They are mostly portrait, because that stream is much thicker than the narrow stream pouring down into the Landscape rectangle at the bottom. But the Portrait and Landscape rectangles combine the data of all the countries, so I do not think it is possible from this Sankey diagram to say what proportion of UK reports involved a co-contributor. That said, of the portrait reports, fewer had co-contributors, but the balance was roughly even.

However, by swapping two words in the code that produced the first Sankey diagram, we produced the variation below, which shows what proportion of a country’s reports involved a co-contributor. It appears that the UK reports are divided approximately evenly between having a co-contributor and not.

Potential participants of surveys on a logarithmic scale

Among the hundreds of survey reports that I have located, the number of participants varies enormously. The variance is a function of many factors:

  1. the size and quality of the law firm’s contact list
  2. whether there is a co-contributor, and the quality of its contact list
  3. the mix and amount of efforts to publicize the opportunity to take the survey
  4. the topic of the survey
  5. the length, complexity and design of the survey questionnaire
  6. the period of time that the survey stays open
  7. whether a survey is part of a series
  8. inducements offered for participation
  9. reputation of the law firm

But some of the variance in participation numbers relates to the total number of potential participants. All things being equal, a survey targeted at a relatively small pool will not reach the numbers of a broad-based survey. Stated differently, 100 responses might represent a robust response rate, such as 20% or higher, if only a few hundred people qualify to take a survey, whereas the same 100 responses drawn from a huge pool of eligible people would be an anemic rate of under 1%.
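The arithmetic behind those rates is worth one explicit line of code; a minimal sketch:

    def response_rate(responses, pool):
        """Responses as a percentage of the pool of potential participants."""
        return 100 * responses / pool

    # The same 100 responses look very different against different pools.
    print(f"{response_rate(100, 500):.1f}%")      # 20.0% -- robust
    print(f"{response_rate(100, 100_000):.1f}%")  # 0.1% -- anemic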

To start a framework for evaluating potential-participant numbers, I looked at 16 survey reports that have between 100 and 110 participants. By controlling for the number of actual respondents, I thought I could evaluate the factors that influenced participation. But the other factors proved too numerous and the data set too small.

So, since none of the reports stated even the number of email invitations sent out, I estimated my own figures for how many could have been invited. I chose a base-10 logarithmic scale to roughly categorize the potential total populations. Thus the smallest category covers narrow-gauged surveys with hundreds of potential participants: the ten-squared category (10²). The next largest category aimed at roughly ten times more participants: thousands, or ten cubed (10³). Even broader surveys would have had a reachable set of possible participants in the tens of thousands, at ten raised to the fourth power (10⁴). At the top end of my very approximate scale are surveys that could conceivably have invited a hundred thousand participants or more (10⁵).
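In code, that banding reduces to taking the power of ten of an estimated pool. A minimal Python sketch, with hypothetical pool sizes:

    import math

    def log_band(estimated_pool):
        """Bucket an estimated participant pool by its power of ten."""
        exponent = math.floor(math.log10(estimated_pool))
        names = {2: "Narrow (hundreds)", 3: "Medium (thousands)",
                 4: "Large (tens of thousands)", 5: "Huge (100,000+)"}
        return names.get(exponent, f"10^{exponent}")

    print(log_band(800))     # Narrow (hundreds)
    print(log_band(25_000))  # Large (tens of thousands)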

Below is how I categorized the surveys on this estimated log scale, in alphabetical order within increasing bands of potential participants. The quotes come from the report at the page noted; I have shortened them to the core information on which I estimated the scope of the survey’s population.

Even though my categorizations are loose and subjective, the point is that the number of respondents as a percentage of the total possible participants can range from significant percentages down to microscopic ones. That is to say, response rates vary enormously in these (and probably all) law firm research surveys.

Narrow

Clifford Chance Debt 2010 [pg. 4] “canvassed the opinion of 100 people involved in distressed debt about their views of the Asia-Pacific distressed debt market.”

CMS GCs 2017 [pg. 26] had “a quantitative survey of 100 senior in-house respondents [in] law departments”, almost half of whom were “drawn from FTSE 350 or FTSEurofirst 300 companies. A further 7% represent Fortune 500 companies.”

DWF Food 2018 [pgs. 3, 8] “surveyed 105 C-suite executives from leading food businesses” that are “in the UK.”

Pepper Hamilton PrivateFunds 2016 [pg. 1] “contacted CFOs and industry professionals across the US” who work in private funds.

Medium

CMS Russia 2009 [pg. 3] explains that its co-contributor “interview[ed] 100 Russian M&A and corporate decision makers.”

Foley Lardner Telemedicine 2017 [pg. 16] “distributed this survey … and received responses from 107 senior-level executives and health care providers at hospitals, specialty clinics, ancillary services and related organizations.”

Reed Smith LondonWomen 2018 [pg. 22] explains that “A survey was launched via social media which was open to women working in the City of London with a job title equivalent to director, partner, head of department or C-level status.”

Technology Law GDPR 2017 [pg. 2] writes that “In-house legal counsel from 100 different organizations (the majority of which had 1,000+ employees) were invited to participate in a survey.”

Large

Burgess Salmon Infrastructure 2017 [pg. 3] “drew on the opinions of over 100 [infrastructure] industry experts.”

Dykema Gossett Auto 2016 [pg. 3] “distributed its [survey] via e-mail to a group of senior executives and advisers in the automotive industry including CEOs, CFOs and other company officers.”

Freshfields Bruckhaus Crisis 2013 [pg. 3] “commissioned a survey of 102 senior crisis communications professionals from 12 countries across the UK, Europe, Asia and the US.”

Norton Rose ESOP 2014 [pg. 2] “conducted a survey of 104 [Australian] businesses — from startups to established companies.”

Reed Smith Lifesciences 2015 [pg. 4] commissioned a co-contributor that “surveyed 100 senior executives (CEO, CIO, Director of Strategy) in biotechnology and pharmaceuticals companies” around the world.

Huge

Berwin Leighton Risk 2014 [pg. 2] researched “legal risk” in financial services organizations around the world. “The survey was submitted to participants in electronic format by direct email and was also hosted online at the BLP Legal Risk Consultancy homepage.”

Dykema Gossett MA 2013 [pg. 10] “distributed its [survey] via e-mail to a group of senior executives and advisors, CFOs and other company officers.”

Proskauer Rose Empl 2016 [pgs. 3-4] retained a co-contributor that “conducted the survey online and by phone with more than 100 respondents who are in-house decision makers on labor and employment matters.”