Number and Percentage of Respondents Choosing Each Selection

Let’s remind ourselves of what we are calling “multi-questions.” The next plot, from a research-survey report (Kilpatrick Townsend CyberSec 2016 [pg. 7]), illustrates one. The plot derives from a multiple-choice question that listed seven selections and to which “More than one choice permitted” applied. The plot gives the percentage of the 605 respondents who chose each selection.

You can spot such multi-questions because the percentages in the plot add up to more than one hundred percent. Here they total 237%, which works out to an average of 2.37 selections per respondent.
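To make that arithmetic concrete, here is a minimal R sketch; the seven percentages are hypothetical stand-ins, chosen only so that they sum to 237% like the report's.

    # Hypothetical percentages for the seven selections (not the report's
    # actual figures); they were chosen to total 237%.
    pcts <- c(62, 48, 40, 32, 25, 18, 12)
    sum(pcts)        # 237, i.e., the percentages sum past 100
    sum(pcts) / 100  # 2.37 average selections per respondent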

Now, about presenting the results of multi-questions. Other than prose, the simplest description of the distribution of responses to a multi-choice question is a table. A table succinctly tells how many respondents chose each selection. From the data set we have been using, 91 respondents chose a total of 318 roles from the question’s nine selections. Had every respondent checked every selection, a maximum of 819 selections could have been made (91 × 9). When you know the number of participants in your survey, you can add a column of percentages.
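As a hedged illustration, the R sketch below builds such a table from a hypothetical long-format data frame with one row per respondent-selection pair; the roles and responses are invented, not the actual data set.

    # Tally how many respondents chose each selection, then add percentages.
    n_respondents <- 91  # from the data set described above
    responses <- data.frame(  # invented example rows
      respondent = c(1, 1, 2, 3, 3, 3),
      role       = c("Role1", "Role3", "Role1", "Role1", "Role2", "Role3")
    )
    tab <- as.data.frame(table(responses$role))
    names(tab) <- c("Role", "Selected")
    tab$Percent <- round(100 * tab$Selected / n_respondents, 1)
    tab <- tab[order(-tab$Selected), ]  # sort descending on "Selected"
    tab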

If a table is not sorted by a relevant column, as the table above is sorted on “Selected”, it is harder for readers to compare frequencies. Column charts use bar height to help with comparisons, as the plot below illustrates. We used the data in the table above and printed the frequency of selection inside each bar.
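Continuing the sketch above, base-R graphics can draw such a column chart and print the count inside each bar; this is one plausible way to produce a plot like the one described, not the author's actual code.

    # Column chart of the sorted table, with counts printed inside the bars.
    bp <- barplot(tab$Selected, names.arg = tab$Role,
                  ylab = "Respondents selecting",
                  ylim = c(0, max(tab$Selected) * 1.2))
    text(bp, tab$Selected / 2, labels = tab$Selected)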

Turn Multi-Question Responses into a Dummy-Variable Matrix

One vital step in the analysis of a multi-choice question is to create a variable for each potential selection. The dummy variable for each selection is coded “1” if the respondent checked it and “0” if not.

Think of a spreadsheet where each row holds one person’s answers to the question. If the only question they answered was the multi-choice question, they will have as many columns to the right of their name as there are selections, and in each column a “0” if they did not select that role and a “1” if they did. The sheet would have as many rows as respondents, and each row would have a pattern of “0”s and “1”s corresponding to the selections not chosen or chosen. All those “0”s and “1”s form a matrix, a rectangular array of numbers.
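Here is a minimal R sketch of that conversion; the three roles and three respondents are hypothetical.

    # Convert each respondent's checked selections into a row of 0/1 dummies.
    roles <- c("Role1", "Role2", "Role3")
    answers <- list(r1 = c("Role1", "Role3"),  # respondent 1 checked two roles
                    r2 = c("Role2"),
                    r3 = c("Role1", "Role2", "Role3"))
    dummies <- t(sapply(answers, function(a) as.integer(roles %in% a)))
    colnames(dummies) <- roles
    dummies
    #    Role1 Role2 Role3
    # r1     1     0     1
    # r2     0     1     0
    # r3     1     1     1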

For an example of a “check all that apply” question, a multi-choice question, the snippet below shows the results from respondents checking from six available selections. The percentage inside the top bar tells us that 62% of the respondents picked that selection, so a “1” appears in its dummy-variable column for each of them. For the remaining 38% of the respondents, the column holds a “0”.

It is entirely possible to have software count the number of times each selection was checked, but analysts often convert multi-choice responses into binary matrices, populated only with “0”s and “1”s, so that software can carry out more elaborate calculations. For a simple example, the binary matrix shown below has a “RowSum” column on the far right that adds the “1”s in the columns to its left. The first respondent selected two roles, Role1 and Role3, so “1”s sit in those two cells and the “RowSum” equals 2.
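Using the hypothetical dummies matrix from the sketch above, one line of R appends the “RowSum” column, and a companion call counts how often each selection was checked.

    # Row sums = selections per respondent; column sums = checks per selection.
    cbind(dummies, RowSum = rowSums(dummies))
    colSums(dummies)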

Multi-Answer, Multiple-Choice Questions in Surveys

Research surveys by law firms ask multiple-choice questions much more frequently than any other style of question. They do so because it is easier to analyze the data from answers selected from a list or a drop-down menu. Not only are multiple-choice questions common, they often permit respondents to mark more than one selection. These multi-questions, as we will refer to them, carry instructions such as “Choose all that apply” or “Pick the top three.” The image below, from page 11 of a 2015 survey report by King & Wood Mallesons, states in a footnote that “Survey participants were able to select multiple options.” Thus, participants could have chosen anywhere from a single selection up to all 10 selections.

To get a sense of how many multi-questions show up, we picked four survey reports we recently found and counted how many multi-questions they asked, based on the plots their reports presented. The surveys are Kilpatrick Townsend CyberSec 2016, King Wood AustraliaDirs 2015, Littler Mendelson Employer 2018, and Morrison Foerster ConsumerProd 2018. In that order, they asked 7 multi-questions in 24 non-Appendix pages, 4 in 36 pages, 8 in 28 pages, and 4 in 16 pages. Accordingly, results from at least 23 multi-questions appeared in 104 pages. Bear in mind that each report has a cover and a back page with no plots, and almost always other pages without plots, so the total number of survey questions asked is always less than the number of report pages.

While multi-questions certainly allow more nuanced answers than, for example, “Pick the most important…” questions, and create much more data, those more complicated pools of data challenge the survey analyst to decide how best to interpret and present them.

A number of analytic approaches enable an analyst to describe the results, to glean from the selection patterns deeper insights, and to depict them graphically. We will explore those techniques.

Co-contributors to law-firm research surveys (Part III)

Twice I have written about instances of co-contributors [18 of them] and [13 more co-contributors] and their respective survey reports. Further digging has uncovered another group of 16 named co-contributors, plus one unnamed consultancy, listed below.

  1. Achieve — Taft Stettinius Entrep 2018
  2. ANA — Reed Smith MediaRebates 2012
  3. Association of Foreign Banks — Norton Rose Brexit 2017
  4. Australian Institute of Company Directors — King Wood AustralianDirs 2016
  5. Becker Büttner Held — Shakespeare Martineau Brexit 2017
  6. Economist Group — Herbert Smith MandA 2017
  7. Gamesa — Brodies Firm Wind 2013
  8. Institution of Civil Engineers and techUK and Mergermarket — Pinsent Masons Infratech 2017
  9. Ipsos MORI Scotland — Brodies Firm Brexit 2017
  10. IVC Research Center — Meitar Liquornik TechVC 2018
  11. National Foreign Trade Council — Miller Chevalier TaxPolicy 2018
  12. Northern Ireland Chamber of Commerce — Goodbody GDPR 2018
  13. Oxford Analytica — Morrison Foerster ConsumerProd 2018
  14. Ponemon Institute — McDermott Will GDPR 2018, Kilpatrick Townsend CyberSec 2017
  15. Singapore Corp. Counsel — CMS SingaporeGCs 2018
  16. The Lawyer and YouGov — Pinsent Masons Brexit 2017
  17. “an independent consultancy” — Carlton Fields CA 2018

As of this writing, therefore, law firms have teamed on research surveys with at least 47 different organizations. Because some of those organizations have been involved in more than one survey by a firm (and sometimes in surveys by more than one firm), the total number of surveys with a co-contributor is likely nearly 70. But it is impossible to figure out what percentage have a co-contributor, even among the 309 law-firm surveys I know about. First, I have not checked each one. Second, a few dozen of those surveys are known only from a press release, an article, or a later survey report, not from a PDF report. Third, a firm might have worked with another entity without acknowledging that entity in the survey report.

Number of lawyers in survey firms; merged names

We start with a couple of methodological decisions. First, what number shall we use for the count of practicing lawyers in a firm? To reconstruct the number of lawyers practicing at a firm back in the year of its survey would take much digging. Although we could then analyze our data set much more accurately when firm size has meaning, the effort to obtain the historical, matching data would be daunting.

A second, related issue is how to handle surveying firms that merged after their survey. At least four of the firms in the data set have merged with another major firm during the past few years: BryanCaveBLP, CMS, HoganLovells, and Norton Rose Fulbright. How should we treat their sizes? If we keep the pre-merger name of a firm, we have to figure out both the month and year its merger took effect and the month and year its survey was published. That game’s not worth the candle. If we use the name of the merged firm, we lose the correct name of the firm as of the year the survey was completed.

The convention I have tried to adopt uses the current lawyer headcount of an unmerged firm, the latest name of the merged firm, and the merged firm’s lawyer count. The first two names of the firm, without any punctuation, make up my “firm name”.

With those conventions in place, the average number of lawyers in the 77 law firms for which I have data is 1,047; the median is 753 lawyers. That the average far exceeds the median signals a handful of very large firms pulling the average up. The conclusion is inescapable: very large law firms are the typical sponsors of research surveys.

The range of sizes is also illuminating: 6 lawyers to 4,607 lawyers. The set includes at least three firms with fewer than 200 lawyers along with ten firms of more than 2,000 lawyers. The takeaway? A firm of any size can launch a research survey.

The plot presents aggregate size data from 69 firms based in four “countries”: Canada (6 different law firms), the United Kingdom (20 firms), the United States (38), and “VereinCLG,” five firms that have a legal structure of either a Swiss verein or a “company limited by guarantee” (CLG).

Average number of pages in reports by originating law firm’s geography

For the period from 2013 through now, we have found 154 research surveys where a law firm conducted or sponsored the survey and published a PDF report. That group includes 55 different law firms.

We categorized the firms according to five geographical bases: United States firms, United Kingdom firms, vereins, combinations of U.S. and U.K. firms (“USUK”), and the rest of the world (“RoW” — Australia, Canada, Israel, New Zealand, and South Africa). We thought we would find that the largest firms, either the vereins or the USUK firms, would write the longest reports. Our reasoning was that they could reach more participants and could analyze the more voluminous data more extensively (and perhaps add more marketing pages about themselves).

Quite true! As can be seen in the table below, although the average and median number of pages for the five geographical groupings of firms cluster fairly close together, the two large classes of firms do indeed produce the longest reports. How many surveys fall in each category is shown in the column entitled “Number”. A sketch of one way to compute such a summary in R follows the table.

GeoBase Number AvgPages MedianPages
RoW 13 25.0 20.0
UK 41 24.1 20.0
US 78 22.5 19.0
USUK 17 30.2 22.0
Verein 5 27.6 28.0
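The sketch below shows how a summary like the table above could be computed with dplyr; the data frame of reports here is invented, standing in for the 154 actual reports.

    library(dplyr)
    # Invented report data: a geography label and a page count per report.
    reports <- data.frame(
      GeoBase = c("US", "US", "UK", "UK", "USUK", "RoW", "Verein"),
      Pages   = c(18, 27, 20, 24, 31, 25, 28)
    )
    reports %>%
      group_by(GeoBase) %>%
      summarise(Number      = n(),
                AvgPages    = round(mean(Pages), 1),
                MedianPages = median(Pages))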

We tested the difference between the average number of pages for the USUK reports and average pages for the US reports. We selected those two groups because they had the largest gap [30.2 versus 22.5].

A statistical test called the t-test looks at two averages and the dispersion of the values that make up each average. It yields a p-value: roughly, the probability that, if there were really no difference between the two geographies, random samples of survey reports would show a gap as large as the one observed or larger. By convention, if the p-value falls below 0.05, statisticians call the difference statistically significant, meaning it can be relied on as more than chance; if that threshold is not met, you can’t say the difference is due to anything other than chance. On our data, the t-statistic was 1.2 and the p-value 0.24, far above the 0.05 threshold for statistical significance. The gap between USUK average pages and US average pages may look material, but on the data available, we can’t conclude that something other than random variation accounts for it.
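For the curious, R's built-in t.test() performs this comparison. The page counts below are simulated stand-ins with roughly the reported group means, not the real 17 USUK and 78 US values.

    # Welch two-sample t-test on simulated page counts (17 USUK, 78 US).
    set.seed(1)
    usuk_pages <- round(rnorm(17, mean = 30.2, sd = 15))
    us_pages   <- round(rnorm(78, mean = 22.5, sd = 12))
    t.test(usuk_pages, us_pages)  # prints the t statistic and the p-value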

Profusion of research surveys on Brexit and the GDPR

Law firms follow current events, especially those that suggest major legal repercussions. For example, the United Kingdom’s Brexit vote has unleashed a torrent of political and legal ramifications. Accordingly, it is not surprising that law firms have launched surveys to research aspects of Brexit, but that 10 or more have been completed may be surprising.

The ten studies found so far include Brodies Firm Brexit 2017, CMS Brexit 2017, DLA Piper Brexit 2018, Eversheds Sutherland Brexit 2017, Herbert Smith Brexit 2018, HoganLovells Brexometer 2017, Norton Rose Brexit 2017, Pinsent Masons Brexit 2017, Shakespeare Martineau Brexit 2018, and Simmons Simmons Brexit 2017.

Not surprisingly, all the firms are either UK-based or UK-oriented with a major U.S. presence (DLA Piper, Norton Rose). Of the six Brexit reports available online, the average is 23 pages of plots and text per report.

Likewise, the European Union’s far-reaching regulation of data privacy, the General Data Protection Regulation (GDPR), has ushered in massive economic, political and legal changes. Law firms are keenly aware of all the work awaiting them, so GDPR has resulted thus far in at least six research surveys by law firms.

The GDPR survey research includes Brodies Firm GDPR 2017, Eversheds DublinGDPR 2018, McDermott Will GDPR 2018, Paul Hastings GDPR 2017, and Technology Law GDPR 2017.

On this topic, two UK firms have weighed in, but so have five U.S. firms. It is also quite possible that several other surveys that address cyber-security and hacking include some questions about GDPR.

Co-contributors to law-firm surveys (Part II)

A law firm should decide at the outset whether it wants to conduct the survey on its own or coordinate with others. In the data set we have been examining, many firms teamed on a survey with another group that shared an interest in the topic. In fact, we spotted nine survey reports where two other organizations coordinated with the law firm. Organizations aplenty can help develop questions, distribute invitations, publicize findings, and analyze data.

Obviously, assistance such as this sometimes comes at a cost. We don’t know how much firms have paid co-contributors, but it could be a fairly substantial amount if the services obtained fall into the broad range of consulting. Having a co-contributor also adds complexity and elapsed time, because the firm must manage the external provider (or be managed by it, since sometimes the other company leads the survey process) and adapt to its scheduling. There is also the matter of finding the third party in the first place.

The benefit of bringing in outside experience is that the eventual product will be superior. A co-contributor can also provide the benefits of a consultant: helping to keep the project on track and on time, and bringing experienced talent.

I have previously written about 18 instances of co-contributors; here are another 13 co-contributors and their respective survey reports.

  1. 451 Research — Morrison Foerster MA 2016
  2. Acritas — Proskauer Rose Empl 2016
  3. Bank Polski and Polnisch-Deutsche Industrie- und Handelskammer — CMS Poland 2016
  4. Biopharm Insight and Merger Market Group — Reed Smith Lifesciences 2015
  5. Economist Intelligence Unit — Bryan Cave CollectiveLit 2007
  6. J.D. Power, Univ. of Michigan — Miller Canfield AutoCars 2018
  7. Local Area Property Association — Simpson Grierson Reserve 2016
  8. Local Government New Zealand — Simpson Grierson LocalGov 2015
  9. Meridian West — Allen Overy Models 2014
  10. Oxford Economics — HoganLovells Brexometer 2017
  11. Rafi Smith Research Institute — Gilad Saar Trust 2018
  12. VB/Research and The Lawyer — DLA Piper Debt 2015
  13. WeAreTheCity — Reed Smith London 2018

At least a dozen kinds of software used in a survey project

As for the software that a law firm might use to carry out a survey project, the list is lengthy. The firm itself need not own each of the applications listed below, but from start to finish someone may need to deploy them.

  1. Database software for customer relationship management (CRM) or some other software to provide email addresses of invitees
  2. Bulk email software (e.g., ConstantContact) so that the firm can effectively invite its clients and others to take part in the survey
  3. Word processing software to draft questions, invitations and the text of the report
  4. Survey software (e.g., NoviSurvey, SurveyMonkey, SmartSurvey, Qualtrics, SurveyGizmo) to create an online questionnaire and capture the responses of the participants
  5. Spreadsheet software (e.g., Excel) so that the firm can export the responses from the survey software into a worksheet and manipulate the data
  6. Statistical software (e.g., R, Stata, Excel, Tableau) so that an analyst can calculate various statistics
  7. Data visualization software (e.g., R, Excel or PowerPoint) so that an analyst can prepare plots and graphs
  8. Desktop publishing software (e.g., LaTeX, markdown languages, Adobe InDesign) so that the firm can integrate text, plots, tables and other elements into the report
  9. Presentation software (e.g., PowerPoint) or specialized software to prepare infographics
  10. Graphical design software (e.g., GIMP, Photoshop) so that the firm can work with images and photos and otherwise design the report as it wishes
  11. PDF software (e.g., Foxit, Adobe Acrobat, PScript5, ScanSoft, QuarkXPress) so the firm can save its report in a portable document format [see the plot below for more details]
  12. All kinds of other software are also involved, such as email, instant messaging, social media, website packages, video-conferencing, calendaring, backup software and more.

The plot below examines data from 153 survey reports in PDF format. Of that set, 141 include metadata about the software used to create the report. The firms used nine different software packages, although over the years they used multiple versions of the same package. Adobe InDesign, for example (all versions combined), dominated, with more than 100 reports created with it.
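For readers who want to replicate that harvest, the pdftools package in R can read a PDF's metadata; this is a hedged sketch, the file names are placeholders, and the "Creator" entry may be absent from some PDFs.

    library(pdftools)
    # Pull the "Creator" metadata entry (the creating software) from each PDF.
    files <- c("report1.pdf", "report2.pdf")  # placeholder file names
    creators <- sapply(files, function(f) pdf_info(f)$keys$Creator)
    table(creators)  # tally reports by creating software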

Interviews can supplement the quantitative data gathered by a survey

Several firms combine modes of data gathering. They start with a survey emailed to their invitee list or otherwise publicized. At some point later the firm (or the service provider it retained) seeks interviews with a subset of the invitees. (At least we assume that those who were interviewed also completed a survey, but the reports do not confirm that assumption.)

The survey gathers quantitative data while the interviews gather qualitative insights. Interviews cost money, but what firms learn from conversations deepens, clarifies and amplifies the story told by survey data. Interviews also enable the firm to strengthen its connections to participants who care about the topic.

The reports make little of the interview process and provide almost no detail about the interviews. They show up as quotes and case studies. DLA Piper Debt 2015, for example, states that 18 interviews were conducted; commendably, it lists the names and organizations of those who were interviewed [pg. 30]. We show the first few in the snippet below.

Reed Smith LondonWomen 2018 [pg. 22] mentions that “Several individuals opted to take part in further discussion through email exchange, in-person meetings and telephone interviews.” As a prelude to those discussions, in the invitation to women to take the survey the firm explained: “We will be inviting those who wish to speak on-the-record to take part in telephone or in-person interviews to impart advice and top tips. If you wish to take part in an interview, please fill in the contact details at the end of the survey.” This background tells us about the opt-in process of the firm, although the report itself does not refer to it.

HoganLovells Cross-Border 2014 [pg. 28] explains that interviews were conducted with 140 “general counsel, senior lawyers, and executives.” As with the other examples here, the report adds no detail about how long the interviews lasted or the questions asked during them.

Clifford Chance Debt 2007 [pg. 3] doesn’t say how many interviews were conducted, only that they took place during November 2007. It would have been good for the firm to have said more about how many people it spoke with and how those people were chosen.

Norton Rose Lit 2017 surveyed invitees, “with a telephone interview campaign following” [pg. 5] and adds later in the report [pg. 38] that there was an “interview campaign following [the online survey] across July, August and early September 2017.”