Co-contributors to law-firm research surveys (Part III)

Twice I have written about instances of co-contributors [18 of them] and [13 more co-contributors] and their respective survey reports. Further digging has uncovered another group of 16 co-contributors.

  1. Achieve — Taft Stettinius Entrep 2018
  2. ANA — Reed Smith MediaRebates 2012
  3. Association of Foreign Banks — Norton Rose Brexit 2017
  4. Australian Institute of Company Directors — King Wood AustralianDirs 2016
  5. Becker Büttner Held — Shakespeare Martineau Brexit 2017
  6. Economist Group — Herbert Smith MandA 2017
  7. Gamesa — Brodies Firm Wind 2013
  8. Institution of Civil Engineers and techUK and Mergermarket — Pinsent Masons Infratech 2017
  9. Ipsos MORI Scotland — Brodies Firm Brexit 2017
  10. IVC Research Center — Meitar Liquornik TechVC 2018
  11. National Foreign Trade Council — Miller Chevalier TaxPolicy 2018
  12. Northern Ireland Chamber of Commerce — Goodbody GDPR 2018
  13. Oxford Analytica — Morrison Foerster ConsumerProd 2018
  14. Ponemon Institute — McDermott Will GDPR 2018, Kilpatrick Townsend CyberSec 2017
  15. Singapore Corp. Counsel — CMS SingaporeGCs 2018
  16. The Lawyer and YouGov — Pinsent Masons Brexit 2017
  17. “an independent consultancy” — Carlton Fields CA 2018

As of this writing, therefore, law firms have teamed on research surveys with at least 47 different organizations. Because some of those organizations have been involved in more than one survey by a firm (and sometimes in surveys by more than one firm), the total of surveys with a co-contributor is likely nearly 70. But it is impossible to calculate the percentage with a co-contributor even among the 309 law firm surveys I know about. First, I have not checked each one. Second, a few dozen of those surveys are known only from a press release, article or later survey report, not from a PDF report. Third, a firm might have worked with another entity without acknowledging that entity in the survey report.

Interviews can supplement the quantitative data gathered by a survey

Several firms combine modes of data gathering. They start with a survey emailed to their invitee list or otherwise publicized. At some point later the firm (or the service provider it retained) seeks interviews with a subset of the invitees. (At least we assume that those who were interviewed also completed a survey, but the reports do not confirm that assumption.)

The survey gathers quantitative data while the interviews gather qualitative insights. Interviews cost money, but what firms learn from conversations deepens, clarifies and amplifies the story told by survey data. Interviews also enable the firm to strengthen its connections to participants who care about the topic.

The reports make little of the interview process and provide almost no detail about it; the interviews generally surface only as quotes and case studies. DLA Piper Debt 2015, for example, states that 18 interviews were conducted; commendably, it lists the names and organizations of those interviewed [pg. 30]. We show the first few in the snippet below.

Reed Smith LondonWomen 2018 [pg. 22] mentions that “Several individuals opted to take part in further discussion through email exchange, in-person meetings and telephone interviews.” As a prelude to those discussions, in the invitation to women to take the survey the firm explained: “We will be inviting those who wish to speak on-the-record to take part in telephone or in-person interviews to impart advice and top tips. If you wish to take part in an interview, please fill in the contact details at the end of the survey.” This background tells us about the opt-in process of the firm, although the report itself does not refer to it.

HoganLovells Cross-Border 2014 [pg. 28] explains that interviews were conducted with 140 “general counsel, senior lawyers, and executives.” As with the other examples here, the report adds no detail about how long the interviews lasted or the questions asked during them.

Clifford Chance Debt 2007 [pg. 3] doesn’t say how many interviews were conducted, only that interviews took place during November 2007. It would have been good for the firm to have said something more about how many people they spoke with and how those people were chosen.

Norton Rose Lit 2017 surveyed invitees, “with a telephone interview campaign following” [pg. 5] and adds later in the report [pg. 38] that there was an “interview campaign following [the online survey] across July, August and early September 2017.”

NAICS classification of industries would help surveys four ways

If only there were a standard way to describe survey participants by industry … There is! Law firms could identify, analyze, and report on their participants using the categories of the North American Industry Classification System (NAICS). This system has superseded the venerable SIC (Standard Industrial Classification) categories. The NAICS offers a set of two-digit classifications that map well to the proliferation of industry/sector designations seen in law firm reports. Those classifications, together with the three- and four-digit elaborations on them, easily suffice for law-firm research surveys.

If NAICS codes became the convention for law firm research surveys, at least four benefits would follow.

Mash-up data. For data analysts, “mash-up” describes melding two sets of data. If firms coded participants by NAICS, other data keyed to the same codes would become available for analysis. Longitudinal data sets (those maintained over a period of time) that the U.S. government collects by NAICS code can supplement a survey with the number of businesses in an industry, more detail about those businesses, the number of employees in those businesses, and so forth. Everyone would benefit from the richer, more insightful analyses such mash-ups enable.
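As a rough sketch of such a mash-up (with invented counts standing in for real government figures, and a hypothetical `mash_up` helper), joining survey tallies to population counts on shared NAICS codes might look like this:

```python
# Hypothetical sketch: enriching survey tallies with business counts
# keyed on two-digit NAICS codes. All numbers below are invented for
# illustration, not real Census figures.

survey_counts = {          # survey respondents per NAICS sector
    "52": 41,              # Finance and Insurance
    "54": 87,              # Professional, Scientific, and Technical Services
    "31-33": 25,           # Manufacturing
}

population_counts = {      # invented stand-ins for government counts
    "52": 470_000,
    "54": 880_000,
    "31-33": 250_000,
}

def mash_up(survey, population):
    """Join the two data sets on NAICS code and compute respondents
    per 100,000 businesses in each sector."""
    merged = {}
    for code, n in survey.items():
        pop = population.get(code)
        if pop:  # keep only sectors present in both data sets
            merged[code] = {
                "respondents": n,
                "businesses": pop,
                "per_100k": round(n / pop * 100_000, 2),
            }
    return merged

result = mash_up(survey_counts, population_counts)
```

The join key is what matters here: only because both data sets use the same NAICS codes can the two be melded at all.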

Consistency among surveys. If law firms adopted this standard classification system, readers of their reports and researchers would be much more able to compare results by industries. In the current disorder, and so long as each firm defines its industries idiosyncratically, comparisons and meta-analyses become much harder to carry out, if not impossible.

Improving the representativeness of the sample data. Because NAICS data sets provide reliable counts of companies by industry, law firms could deploy techniques to make their convenience samples more representative of the actual distribution of U.S. businesses. One such method, which we explain elsewhere, is called “raking.” As sample data is weighted to more closely resemble population data, deeper statistical analyses become available.
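A minimal sketch of raking (iterative proportional fitting) follows, with invented sample counts and invented target shares; this is one common implementation of the technique, not a method drawn from any firm's report. The weights are alternately scaled so that the industry margin and the region margin each match their population targets:

```python
# Raking sketch: adjust cell weights so weighted margins for industry
# and region match assumed population shares. All figures are invented.

industries = ["52", "54"]              # two-digit NAICS codes
regions = ["Northeast", "South"]

# Sample counts by (industry, region) pair -- invented.
sample = {("52", "Northeast"): 30, ("52", "South"): 10,
          ("54", "Northeast"): 40, ("54", "South"): 20}

# Target population shares for each margin -- invented.
target_industry = {"52": 0.55, "54": 0.45}
target_region = {"Northeast": 0.40, "South": 0.60}

total = sum(sample.values())
weights = {cell: n / total for cell, n in sample.items()}

for _ in range(50):                    # alternate until margins converge
    for ind in industries:             # scale to match industry targets
        margin = sum(w for (i, _), w in weights.items() if i == ind)
        for cell in weights:
            if cell[0] == ind:
                weights[cell] *= target_industry[ind] / margin
    for reg in regions:                # scale to match region targets
        margin = sum(w for (_, r), w in weights.items() if r == reg)
        for cell in weights:
            if cell[1] == reg:
                weights[cell] *= target_region[reg] / margin

ind_margin = {i: sum(w for (ii, _), w in weights.items() if ii == i)
              for i in industries}
```

After convergence, each respondent in a cell carries that cell's weight, so weighted tabulations reflect the target population rather than the convenience sample.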

Impute missing values. “Imputation” is the term statisticians use for filling in missing values. If a law firm knows each participant's NAICS code plus other information such as revenue, the firm could impute a missing employee count for that participant's company. An explanation of that methodology can be found elsewhere, but it is available to any firm so long as its industry coding conforms to the NAICS. For example, a firm that collects revenue, industry code, and state can impute an employee number even more accurately. Fuller data sets enable better analyses.
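One simple form of imputation, sketched below with invented participant data, fills a missing employee count with the median for that participant's NAICS sector; this group-median approach is offered as an illustration, not as the specific methodology the article refers to elsewhere:

```python
# Group-median imputation sketch: replace missing employee counts with
# the median of the same NAICS sector. All data below are invented.

from statistics import median

participants = [
    {"naics": "52", "revenue": 120.0, "employees": 300},
    {"naics": "52", "revenue": 95.0,  "employees": 250},
    {"naics": "52", "revenue": 110.0, "employees": None},   # missing
    {"naics": "54", "revenue": 40.0,  "employees": 90},
    {"naics": "54", "revenue": 55.0,  "employees": None},   # missing
    {"naics": "54", "revenue": 60.0,  "employees": 110},
]

def impute_employees(rows):
    """Return a copy of rows with None employee counts replaced by the
    median employee count of the same NAICS sector."""
    by_sector = {}
    for r in rows:
        if r["employees"] is not None:
            by_sector.setdefault(r["naics"], []).append(r["employees"])
    medians = {code: median(vals) for code, vals in by_sector.items()}
    return [dict(r, employees=(r["employees"] if r["employees"] is not None
                               else medians[r["naics"]]))
            for r in rows]

filled = impute_employees(participants)
```

Adding further predictors such as revenue or state, as the paragraph above suggests, would refine the estimate; the grouping logic stays the same, only the key gets finer.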

Balancing survey respondents across industries and geographies

When most law firms conduct a research survey, they primarily hope to get enough respondents so that the results are defensible and generalizable. That is to say, they want enough data from desirable respondents to be able to say that their findings make sense and can be extrapolated beyond their particular group of participants.

Ropes & Gray made a very different decision in its 2017 survey on risk management practices in companies. Working with a research group, the firm deliberately balanced the number of participants by five named industries plus “Other” and across four geographical regions. As can be seen in the table below, adapted from page 5 of the report, out of the 300 total respondents, 100 came from each of America and EMEA, while 70 came from Pacific Asia and 30 from Latin America. Moreover, each industry had exactly 50 participants.

| Industry | America | EMEA | Pacific Asia | Latin America | Total |
|---|---|---|---|---|---|
| Private Equity | 17 | 16 | 12 | 5 | 50 |
| Asset Management | 16 | 18 | 11 | 5 | 50 |
| Life Sciences & Healthcare | 17 | 17 | 11 | 5 | 50 |

The very brief description of the survey’s methodology does not explain why the firm chose those industries, those geographies, or the balanced participant numbers within them. Nor does it delve into how FT Remark, the research firm that assisted Ropes & Gray, obtained the desired number of respondents.

One reason for the uneven geographic distribution may be that it proved difficult to obtain equal numbers of respondents for every industry-geography pair. It may also be that the firm felt this geographical weighting more accurately represents companies and their risk-management approaches around the globe. Many other questions arise about the decisions underlying this symmetric data set.

We will close by noting that if a law firm sets out to balance the number of responses proportionally by one or more criteria (company revenue being another possible parameter), it significantly increases the effort required to locate and persuade the requisite number of respondents.