Co-contributors to law-firm research surveys (Part III)

Twice before I have written about co-contributors and their respective survey reports: 18 of them in the first post and 13 more in the second. Further digging has uncovered another group of 16 co-contributors.

  1. Achieve — Taft Stettinius Entrep 2018
  2. ANA — Reed Smith MediaRebates 2012
  3. Association of Foreign Banks — Norton Rose Brexit 2017
  4. Australian Institute of Company Directors — King Wood AustralianDirs 2016
  5. Becker Büttner Held — Shakespeare Martineau Brexit 2017
  6. Economist Group — Herbert Smith MandA 2017
  7. Gamesa — Brodies Firm Wind 2013
  8. Institution of Civil Engineers and techUK and Mergermarket — Pinsent Masons Infratech 2017
  9. Ipsos MORI Scotland — Brodies Firm Brexit 2017
  10. IVC Research Center — Meitar Liquornik TechVC 2018
  11. National Foreign Trade Council — Miller Chevalier TaxPolicy 2018
  12. Northern Ireland Chamber of Commerce — Goodbody GDPR 2018
  13. Oxford Analytica — Morrison Foerster ConsumerProd 2018
  14. Ponemon Institute — McDermott Will GDPR 2018, Kilpatrick Townsend CyberSec 2017
  15. Singapore Corp. Counsel — CMS SingaporeGCs 2018
  16. The Lawyer and YouGov — Pinsent Masons Brexit 2017
  17. “an independent consultancy” — Carlton Fields CA 2018

As of this writing, therefore, law firms have teamed on research surveys with at least 47 different organizations. Because some of those organizations have been involved in more than one survey by a firm (and sometimes in surveys by more than one firm), the total number of surveys with a co-contributor is likely close to 70. But it is impossible to calculate what percentage of the 309 law-firm surveys I know about have a co-contributor, for three reasons. First, I have not checked each one. Second, a few dozen of those surveys are known only from a press release, article, or later survey report, not from a PDF report. Third, a firm might have worked with another entity without acknowledging that entity in the survey report.

Profusion of research surveys on Brexit and the GDPR

Law firms follow current events, especially those that promise major legal repercussions. The United Kingdom’s Brexit vote, for example, has unleashed a torrent of political and legal ramifications. It is not surprising, then, that law firms have launched surveys on aspects of Brexit; that ten or more have already been completed may be more surprising.

The ten studies found so far include Brodies Firm Brexit 2017, CMS Brexit 2017, DLA Piper Brexit 2018, Eversheds Sutherland Brexit 2017, Herbert Smith Brexit 2018, HoganLovells Brexometer 2017, Norton Rose Brexit 2017, Pinsent Masons Brexit 2017, Shakespeare Martineau Brexit 2018, and Simmons Simmons Brexit 2017.

Not surprisingly, all the firms are either UK-based or UK-oriented with a major U.S. presence (DLA Piper, Norton Rose). Of the six Brexit reports available online, the average report runs 23 pages of plots and text.

Likewise, the European Union’s far-reaching data-privacy regulation, the General Data Protection Regulation (GDPR), has ushered in massive economic, political, and legal changes. Law firms are keenly aware of all the work awaiting them, so the GDPR has so far prompted at least six law-firm research surveys.

The GDPR survey research includes Brodies Firm GDPR 2017, Eversheds DublinGDPR 2018, McDermott Will GDPR 2018, Paul Hastings GDPR 2017, and Technology Law GDPR 2017.

On this topic, two UK firms have weighed in, but so have five U.S. firms. It is also quite possible that several other surveys that address cyber-security and hacking include some questions about GDPR.

Co-contributors to law-firm surveys (Part II)

A law firm should decide at the outset whether it wants to conduct the survey on its own or coordinate with others. In the data set we have been examining, many firms teamed on a survey with another organization, often one that shared an interest in the topic. In fact, we spotted nine survey reports in which two other organizations coordinated with the law firm. Organizations aplenty can help develop questions, distribute invitations, publicize findings, and analyze data.

Obviously, assistance such as this sometimes comes at a cost. We do not know how much firms have paid co-contributors, but the amount could be fairly substantial if the services obtained fall into the broad range of consulting. Having a co-contributor also adds complexity and elapsed time, because the firm must manage the external provider (or be managed by it, since sometimes the other company leads the survey process) and adapt to its schedule. There is also the matter of finding the third party in the first place.

The benefit of bringing in outside experience is a superior final product. A co-contributor can also provide the benefits of a consultant: helping to keep the project on track and on time, and bringing experienced talent to bear.

I have previously written about 18 instances of co-contributors; here are another 13 co-contributors and their respective survey reports.

  1. 451 Research — Morrison Foerster MA 2016
  2. Acritas — Proskauer Rose Empl 2016
  3. Bank Polski and Polnisch-Deutsche Industrie- und Handelskammer — CMS Poland 2016
  4. Biopharm Insight and Merger Market Group — Reed Smith Lifesciences 2015
  5. Economist Intelligence Unit — Bryan Cave CollectiveLit 2007
  6. J.D. Power, Univ. of Michigan — Miller Canfield AutoCars 2018
  7. Local Area Property Association — Simpson Grierson Reserve 2016
  8. Local Government New Zealand — Simpson Grierson LocalGov 2015
  9. Meridian West — Allen Overy Models 2014
  10. Oxford Economics — HoganLovells Brexometer 2017
  11. Rafi Smith Research Institute — Gilad Saar Trust 2018
  12. VB/Research and The Lawyer — DLA Piper Debt 2015
  13. WeAreTheCity — Reed Smith London 2018

Interviews can supplement the quantitative data gathered by a survey

Several firms combine modes of data gathering. They start with a survey emailed to their invitee list or otherwise publicized. At some point later the firm (or the service provider it retained) seeks interviews with a subset of the invitees. (At least we assume that those who were interviewed also completed a survey, but the reports do not confirm that assumption.)

The survey gathers quantitative data while the interviews gather qualitative insights. Interviews cost money, but what firms learn from conversations deepens, clarifies and amplifies the story told by survey data. Interviews also enable the firm to strengthen its connections to participants who care about the topic.

The reports make little of the interview process and provide almost no detail about the interviews, which surface mainly as quotes and case studies. DLA Piper Debt 2015, for example, states that 18 interviews were conducted and, commendably, lists the names and organizations of those interviewed [pg. 30]. We show the first few in the snippet below.

Reed Smith LondonWomen 2018 [pg. 22] mentions that “Several individuals opted to take part in further discussion through email exchange, in-person meetings and telephone interviews.” As a prelude to those discussions, in the invitation to women to take the survey the firm explained: “We will be inviting those who wish to speak on-the-record to take part in telephone or in-person interviews to impart advice and top tips. If you wish to take part in an interview, please fill in the contact details at the end of the survey.” This background tells us about the opt-in process of the firm, although the report itself does not refer to it.

HoganLovells Cross-Border 2014 [pg. 28] explains that interviews were conducted with 140 “general counsel, senior lawyers, and executives.” As with the other examples here, the report adds no detail about how long the interviews lasted or the questions asked during them.

Clifford Chance Debt 2007 [pg. 3] doesn’t say how many interviews were conducted, only that interviews took place during November 2007. It would have been good for the firm to have said something more about how many people they spoke with and how those people were chosen.

Norton Rose Lit 2017 states that it surveyed invitees “with a telephone interview campaign following” [pg. 5] and adds later in the report [pg. 38] that there was an “interview campaign following [the online survey] across July, August and early September 2017.”

Broad selections challenge designers of multiple-choice questions

When you write the non-numeric selections for a multiple-choice question, you want them to be distinct from one another and to cover the likely answers as completely as possible. At the same time, you don’t want too many selections, and you would like them to be roughly the same length. We have compiled other good practices elsewhere.

The selections also need to be understood quickly and decisively; respondents don’t want to puzzle over the meaning and coverage of terms. Partly that means curing ambiguities, and partly it means choosing the terms in each selection carefully so that nearly everyone interprets them the same way on first reading.

We found an instructive example in one of the law-firm research surveys. Did the selections in the plot below achieve quick clarity? We do not know whether the left-hand labels mirror the selections on the questionnaire; some questionnaires carry more detail, and even explanations, while the report gives only an abbreviated version of each selection.

I wonder whether most general counsel understand “Partner ecosystem”, let alone understand it the same way. Should two notions be joined, as in “New sources of revenue and new business models”? Some companies might pursue new sources of revenue or a new business model, but not both. Likewise, why pair “Clean energy and environmental regulation”? They could be seen as two separate trends. And the selection “Geopolitical shifts” feels so broad that it invites all kinds of interpretations by respondents.

This question handed the survey designers an impossible task. First, they had to pick the important trends (what happened to “Demographic changes”, “Big data”, “Urbanization”, and “Taxes”, to name a few others that could have been included?). Second, they had to describe those multifaceted, complex trends in only a few words. Third, those few words needed to fix a clear picture in many minds, or else the resulting data represents a blurred and varied clump of subjective impressions.

NAICS classification of industries would help surveys four ways

If only there were a standard way to describe survey participants by industry … There is! Law firms could identify, analyze, and report on their participants by the categories of the North American Industry Classification System (NAICS). This system supersedes the venerable Standard Industrial Classification (SIC) categories. The NAICS offers a range of two-digit classifications that map well to the proliferation of industry/sector designations seen in law-firm reports. Those classifications, together with the three- and four-digit elaborations on them, easily suffice for law-firm research surveys.
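
To make the two-digit scheme concrete, here is a minimal sketch in R of a lookup keyed by NAICS code; the handful of sectors shown is only an illustrative subset of the full classification.

    # A few of the official two-digit NAICS sectors (illustrative subset)
    naics_sectors <- c(
      "22" = "Utilities",
      "23" = "Construction",
      "52" = "Finance and Insurance",
      "54" = "Professional, Scientific, and Technical Services",
      "62" = "Health Care and Social Assistance"
    )
    naics_sectors["54"]   # look up a participant's industry by its code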

If NAICS codes became the convention for law firm research surveys, at least four benefits would follow.

Mash-up data. For data analysts, “mash-up” describes melding two sets of data. If firms coded participants by NAICS, other data would then become available for analysis. Longitudinal data sets (those maintained over a period of time) that the U.S. government collects by NAICS code can supply the number of businesses in an industry, more detail about those businesses, the number of their employees, and so forth. Everyone would benefit from the richer, more insightful analyses that such mash-ups make possible; a sketch of a simple merge follows.
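
As a rough sketch of the mash-up idea, the R snippet below merges a hypothetical table of survey respondents with a hypothetical table of business counts on their shared NAICS key; the figures and column names are invented for illustration.

    # Hypothetical survey respondents, each tagged with a two-digit NAICS code
    respondents <- data.frame(
      respondent_id = 1:4,
      naics2        = c("52", "54", "23", "52"),
      revenue_musd  = c(850, 120, 400, 2300)
    )

    # Hypothetical reference counts keyed by the same codes (figures invented,
    # not actual government statistics)
    reference <- data.frame(
      naics2      = c("23", "52", "54"),
      firms_in_us = c(700000, 240000, 850000)
    )

    # The mash-up itself: merge the two sources on the shared NAICS key
    mashup <- merge(respondents, reference, by = "naics2", all.x = TRUE)
    mashup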

Consistency among surveys. If law firms adopted this standard classification system, readers of their reports and researchers would be far better able to compare results by industry. In the current disorder, so long as each firm defines its industries idiosyncratically, comparisons and meta-analyses are much harder to carry out, if not impossible.

Improving the representativeness of the sample data. Because the NAICS data sets provide reliable counts of companies by industry, law firms could deploy techniques to make their convenience samples more representative of the actual distribution of U.S. businesses. One method of doing this, which we explain elsewhere, is called “raking”; a sketch appears below. As sample data is transformed to more closely resemble population data, deeper statistical analyses become available.
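
Here is a minimal sketch of raking in R using the survey package’s rake() function. The respondent data and the population counts are invented for illustration, and this is one common way to rake, not necessarily how any particular firm would proceed.

    library(survey)   # assumes the survey package is installed

    # Hypothetical convenience sample: finance is over-represented
    resp <- data.frame(
      naics2 = c("52", "52", "52", "54", "54", "23"),
      w      = 1                      # start everyone at equal weight
    )
    des <- svydesign(ids = ~1, weights = ~w, data = resp)

    # Invented population counts of businesses by two-digit NAICS code
    pop_naics <- data.frame(naics2 = c("23", "52", "54"),
                            Freq   = c(700000, 240000, 850000))

    # Rake so the weighted sample matches the population margin by industry
    raked <- rake(des, sample.margins = list(~naics2),
                  population.margins = list(pop_naics))
    summary(weights(raked))           # inspect the adjusted weights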

Impute missing values. “Imputation” is the term statisticians use for filling in missing values. If a law firm knows its participants’ NAICS codes plus other information such as revenue, it could impute the number of employees at each company. We explain that methodology elsewhere, but it is available to any firm so long as the industry coding conforms to the NAICS. For example, a firm that collects revenue, industry code, and state can impute a number for employees even more accurately. Fuller data sets enable better analyses; a sketch of one simple approach appears below.
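
The sketch below shows one simple, generic approach, assuming an invented data frame: fit a linear model on the complete rows and predict the missing employee counts. A firm might well prefer a more sophisticated method (multiple imputation with the mice package, for instance).

    # Hypothetical participant data; employee counts are missing for two rows
    firms <- data.frame(
      naics2    = c("52", "52", "52", "54", "54", "23", "23", "23"),
      revenue   = c(900, 450, 1200, 120, 300, 700, 650, 180),   # $ millions
      employees = c(3200, 1500, 4100, 400, NA, 2600, NA, 800)
    )

    # Fit on the complete rows, then predict employees where they are missing
    fit  <- lm(employees ~ revenue + naics2, data = firms)
    gaps <- is.na(firms$employees)
    firms$employees[gaps] <- predict(fit, newdata = firms[gaps, ])
    firms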

Four reasons why demographic questions usually lead off a survey

By convention, the first few pieces of information asked of respondents on a questionnaire concern demographic facts (title, industry, location, revenue). The reasons for this order might be termed psychological, motivational, practical, and instrumental.

Psychologically, law firms want to know about the person who is providing them data. Is this person higher or lower in the corporate hierarchy? Does this person work in an industry that matters to the firm or matters to the survey results? They want to know that the person is credible, knowledgeable, and falls into categories that are appropriate for the survey. To satisfy that felt need, designers of questionnaires put demographic questions first.

When a questionnaire starts with questions that are easy to answer, such as the respondent’s position, the industry of their company, and its headquarters location, it motivates the respondent to breeze through them and press on. Respondents sense that the survey will be doable and quick. Putting the demographic questions first, therefore, can boost participation and reduce attrition.

A practical reason to place the demographic questions at the start is that doing so allows the survey software to filter out or redirect certain respondents. If an early question concerns the respondent’s level, and their choice falls below the firm’s desired level of authority, the survey can either thank the respondent and close at that point or route them down a different question path (a toy sketch of such a rule appears below). Vendors who conduct surveys often cull out inappropriate participants, but law firms rarely take this step; they usually want as much data as they can get from as many people as will take part.
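
In practice this branching is configured in the survey tool rather than written by hand, but a toy R sketch of such a screening rule, with hypothetical titles standing in for whatever the firm deems senior enough, might look like this:

    # Toy screening rule: proceed only if the respondent is senior enough
    route_respondent <- function(title) {
      senior <- c("General Counsel", "Chief Legal Officer",
                  "Deputy General Counsel")
      if (title %in% senior) "continue" else "thank_and_close"
    }

    route_respondent("General Counsel")   # "continue"
    route_respondent("Paralegal")         # "thank_and_close"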

Fourth, if the demographic questions come at the start of the questionnaire, the survey software may capture valuable information even when a participant fails to complete or submit the survey. This is the instrumental reason for opening with demographic questions. These days a law firm particularly wants the participant’s email address and title; that information probably flows into a customer relationship management (CRM) database.

Four techniques to make selections clearer

When someone creates a multiple-choice question, they should give thought to where and how to explain the question’s selections. Designers spend valuable time wordsmithing the question itself, but that is not the end of the matter. Even the invitation sent to survey participants may explain background and key terms that shed light on the selections. Beyond that, at least four other options help produce selections that respondents can answer without interpretive difficulty.

First, a firm’s survey software should allow the designer to place an explanatory section before a question or series of related questions. That section can elaborate on what follows and guide readers in choosing among the selections. This technique has been overlooked in many of the questionnaires done for law firm research surveys.

Second, the question itself can be written carefully so that participants more easily understand the selections that follow. [This is not referring to directions such as “check all that apply” or “pick the top 3.” The point here pertains to the interpretation and meaning of the multiple choices.] For example, the question might make clear that answers should cover the previous five years, or it might define “international arbitration” in a certain way to distinguish it from “domestic arbitration.” The overarching definitions and parameters laid out in the question shape and inform each of the selections that follow.

Third, as a supplement to the main question, some survey software enables the designer to add instructions. Using NoviSurvey, for instance, the instructions appear in a box below the question and offer additional explanatory text. Instructions commonly urge participants not to enter dollar signs or text in a numeric field, or to enter dates in a specific format, but they can also explain the selections. For example, the instructions might note that the first four selections pertain to one general topic and the next four to a second topic, or they might differentiate between two selections that could otherwise be confused or misconstrued.

Finally, even without an explanatory section, guidance in the question itself, or illumination in instructions, the selections themselves can embed explanatory text. Any time a selection includes an “i.e.,” or an “e.g.,” the person picking from the selections should understand it better. Sometimes a question will say “… (excluding a selection shown above)” to delineate two choices.

As a by-product, the more you explain the selections through these channels, the more you can abbreviate the selections themselves. The interplay among these four techniques for disambiguating selections, and for presenting them more directly and clearly, allows careful designers to craft selections more precisely and usefully.

Advisable to use “Don’t know” or “NA” in multiple-choice questions

Well-crafted multiple-choice questions give respondents a way to say that they don’t know the answer or that no selection applies to their situation. The two non-answers differ: ignorance of the answer (or possibly refusal to give a known answer) lies with the respondent and could in principle be remedied by them, whereas respondents cannot supplement an incomplete set of selections. Firms should not want the people they invite to take a survey to have to pick the least-bad answer when their preferred answer is missing. As we have written before, firms should add an “Other” choice with a text box for elaboration.

From HoganLovells Cross-Border 2014 [pg. 19] comes an example of how a multiple-choice question accommodates respondents who don’t know the answer. Also, it shows how data from such a question might be reported in a polar graphic. Seven percent of the respondents did not know whether their company’s international contracts include arbitration procedures.

In the jargon of data analysts, a “Don’t know” is a form of item non-response: no answer is given to a particular survey item even though the same respondent gave at least one valid answer elsewhere, for example by leaving an item blank or answering “I don’t know” to some questions while responding validly to others.

Another survey, DLA Piper Compliance 2017 [pg. 15], used a “Does not apply” option. Almost one-third of the respondents checked it. It is conceivable that some respondents did not know the answer and resorted to denying its applicability to them as the best of the three choices, although far from optimal.

One more example, this time from Fulbright Jaworski Lit 2009 [pg. 61]. Here, one-fifth of those who took the survey indicated that they didn’t know the answer to the question reproduced on top of the plot.

It is easy to include variations of the non-substantive selections described above. In fact, extrapolating from these three instances, firms probably should do so, since significant numbers of respondents may pick them: on average, almost one out of five did in the surveys above.
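
As a rough check of that figure, reading “almost one-third” as roughly 30 percent since the report gives no exact number:

    # Item non-response rates from the three examples above
    rates <- c(hoganlovells = 0.07, dla_piper = 0.30, fulbright = 0.20)
    mean(rates)   # about 0.19, i.e., nearly one respondent in five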

Multiple-choice questions dominate the formats of questions asked

Having examined more than 100 reports published by law firms based on the surveys they sponsored, I suspected that more than three out of four questions asked on the surveys were multiple choice. Reluctant to confirm that hunch by laboriously categorizing every question in every survey, I asked my trusty R software to select five of the surveys at random.
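
The draw itself takes a single line of R. The sketch below stands in for what was actually run: report_names is a placeholder for the full vector of report titles (only a few are shown), and the seed is arbitrary.

    # report_names stands in for the full vector of 100+ report titles
    report_names <- c("Seyfarth Shaw Future 2017",
                      "Morrison Foerster MA 2014",
                      "Berwin Leighton Arbvenue 2014",
                      "Foley Lardner Telemedicine 2014",
                      "Foley Lardner Cars 2017",
                      "HoganLovells Cross-Border 2014")
    set.seed(123)               # fixing a seed makes the draw reproducible
    sample(report_names, 5)     # pick five reports at random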

Sure enough, all five not only exceeded my estimate of at least 75% multiple-choice questions; every single question that could be identified from the five reports fell into that format! Bear in mind that we cannot be certain of all the questions asked on a survey, but we can glean most of them from the reports. Confirming the finding would require counting questions on the actual questionnaires.

Specifically, Seyfarth Shaw Future 2017 went eight for eight, Morrison Foerster MA 2014 was five out of five, and Berwin Leighton Arbvenue 2014 used multiple-choice questions for all of its at least 14 questions (it is difficult to figure out from the Berwin report exactly how many questions were on the survey). In Foley Lardner Telemedicine 2014, all twelve questions (including three demographic questions) were multiple choice; in Foley Lardner Cars 2017, all 16 questions (including two demographic questions) were multiple choice.

Of those 55 multiple-choice questions, a few presented binary choices, but most presented a list of four to seven selections to pick from. Likert scales appeared rarely, as illustrated in the plot below from Foley Lardner Cars 2017 [pg. 5]. The scale ranges from “Strongly Agree” to “Strongly Disagree.”

Morrison Foerster MA 2014 [pg. 4] also used a Likert scale in a question.