Providers of survey software and hosting

In most survey reports, the law firm does not explain which survey software it used to create its set of online questions and capture participants’ responses. Such software has been available for years. According to one online site, “The first online survey software and questionnaire tools initially surfaced in the late 1990s.” The author goes on to differentiate the capabilities of free software from paid versions.

“Typically, paid versions of online survey software offer added capabilities such as:

Survey logic — paid tools often provide the option to add a follow up question. This is based on the answer you’ve provided to the previous question.
Export data — There are several tools that won’t let you export your survey data, unless you start using the paid version.
Custom logo — Looking to get rid of the survey tool’s logo and make it your own? With most paid versions this is possible.
More question types — Although free survey tools offer plenty of question types, including multiple choice, ratings, drop-downs and radio buttons, paid versions tend to offer even more.”

At various points I have encountered references to the following software:

Explorance (Blue)
NoviSurvey (Fulbright Jaworski used this software)
Qualtrics (in a previous post, Dec. 22, 2017, I mention a survey publicized by Seyfarth Shaw that used Qualtrics to host the survey)
SmartSurvey
Survey Gizmo
Survey Monkey
Survey Planet
Typeform
Zoho

Undoubtedly there are many more offerings that law firms can choose from. In fact, the commentary quoted above claims there are hundreds of offerings, and gives a list of 21.

Surveys by Canada’s largest firms

During the past six or seven years, it appears that all of the largest Canadian law firms have unveiled research surveys.

The plot below shows six of those firms and presents a point above each firm for every survey it conducted in a given year. For example, Fasken Martineau conducted two in 2017 and two in 2016. The dates and the points are slightly jittered so that none of them overlap.
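For readers curious how such a jittered plot can be built, here is a minimal sketch in Python with matplotlib; the firm names, survey years, and jitter amounts are invented for illustration and are not the actual data set.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented example data: one entry per survey a firm conducted.
surveys = {
    "Firm A": [2016, 2016, 2017, 2017],  # two surveys in each of two years
    "Firm B": [2015, 2018],
}

rng = np.random.default_rng(0)
fig, ax = plt.subplots()
for i, (firm, years) in enumerate(surveys.items()):
    # Nudge each point slightly ("jitter") so surveys from the same
    # firm and year do not overlap on the plot.
    x = i + rng.uniform(-0.15, 0.15, size=len(years))
    y = np.asarray(years) + rng.uniform(-0.1, 0.1, size=len(years))
    ax.scatter(x, y)
ax.set_xticks(range(len(surveys)), labels=surveys.keys())
ax.set_ylabel("Year of survey")
plt.show()
```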

We found out about these surveys through a Google search run on March 6, 2018, reviewing the first six pages of hits returned; the search terms were the name of the firm and “survey.” It could well be that the firms conducted other research surveys during the period, as we have not searched the websites of each firm.

Blake Cassels, Borden Ladner, Davies Ward, Fasken Martineau, McCarthy Tetrault, and Miller Thomson are known to have sponsored surveys. It may also be that Gowling Lafleur Henderson carried out one or more surveys before it combined with U.K.-based Wragge Lawrence Graham & Co. to create Gowling WLG in 2015. That merged firm ran at least two surveys in 2017.

Number and Percentage of Respondents Choosing Each Selection

Let’s remind ourselves of what we are calling “multi-questions.” The next plot, from a research-survey report (Kilpatrick Townsend CyberSec 2016 [pg. 7]), illustrates one. The plot derives from a multiple-choice question that listed seven selections and to which “More than one choice permitted” applied. The plot gives the percentage of the 605 respondents who chose each selection.

You can spot such multi-questions because the percentages in the plot add up to more than one hundred. Here they total 237%, which means an average of 2.37 selections per respondent.

Now, about presenting the results of multi-questions. Other than prose, the simplest description of the distribution of responses to a multi-choice question is a table, which succinctly tells how many respondents chose each selection. In the data set we have been using, the question offered nine selections and the 91 respondents selected 318 roles in all; a maximum of 819 selections (91 × 9) would have been possible if each respondent had checked every selection. When you know the number of participants in your survey, you can also add a column of percentages.
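To make this concrete, here is a minimal sketch in Python with pandas. The role names and individual counts are invented (though they sum to 318); only the method of computing the percentage column, dividing each count by the number of respondents, reflects the approach described above.

```python
import pandas as pd

n_respondents = 91  # participants in the survey described above

# Invented counts of how many respondents chose each of nine selections.
tbl = pd.DataFrame(
    {"Selected": [68, 55, 47, 41, 35, 30, 22, 12, 8]},
    index=[f"Role{i}" for i in range(1, 10)],
)

# Percentages use the respondent count, not the selection total.
tbl["Percent"] = (100 * tbl["Selected"] / n_respondents).round(1)
tbl = tbl.sort_values("Selected", ascending=False)
print(tbl)
print("Total selections:", tbl["Selected"].sum())  # 318
```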

Unless a table is sorted by a relevant column (the table above is sorted on “Selected”), readers find it harder to compare frequencies. Column charts use bar height to help with comparisons, as the plot below illustrates. We used the data in the table above and printed the frequency of selection in each bar.
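Here is a sketch of such a column chart in Python with matplotlib, reusing a few of the hypothetical counts from the sketch above; matplotlib’s bar_label prints the frequency inside each bar.

```python
import matplotlib.pyplot as plt

roles = ["Role1", "Role2", "Role3", "Role4"]  # illustrative subset
counts = [68, 55, 47, 41]                     # hypothetical frequencies

fig, ax = plt.subplots()
bars = ax.bar(roles, counts)
ax.bar_label(bars, label_type="center")  # frequency printed inside each bar
ax.set_ylabel("Respondents choosing the selection")
ax.set_title("Column chart of a multi-question (invented data)")
plt.show()
```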

Turn Multi-Question Responses into a Dummy-Variable Matrix

One vital step in the analysis of a multi-choice question is to create a variable for each potential selection. The dummy variable for each selection is coded “1” if the respondent checked it and “0” if not.

Think of a spreadsheet where each row holds a person’s answers. If the only question they answered was the multi-choice question, they will have as many columns to the right of their name as there are selections, with a “0” in each column for a selection they did not check and a “1” for one they did. The sheet would have as many rows as respondents, and each row would have a pattern of “0”s and “1”s corresponding to the selections not checked or checked. All those “0”s and “1”s form a matrix, a rectangular array of numbers.
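In software, one way to build that matrix is pandas’ str.get_dummies, sketched below with invented responses stored as delimited strings; the respondent and role names are placeholders.

```python
import pandas as pd

# Invented responses to a multi-choice question; each respondent's
# checked selections are stored as one semicolon-delimited string.
responses = pd.Series(
    ["Role1;Role3", "Role2", "Role1;Role2;Role3"],
    index=["Resp1", "Resp2", "Resp3"],
)

# One dummy variable per selection: 1 if checked, 0 if not.
matrix = responses.str.get_dummies(sep=";")
print(matrix)
#        Role1  Role2  Role3
# Resp1      1      0      1
# Resp2      0      1      0
# Resp3      1      1      1
```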

For an example of a “check all that apply” question, a multi-choice question, the snippet below shows the results of respondents checking among six available selections. The percentage inside the top bar tells us that 62% of the respondents picked that selection, so a “1” shows up in its dummy variable for each of them; for the remaining 38% of respondents, the column holds a “0”.

It is entirely possible to have software count the number of times each selection was checked, but analysts often convert multi-choice responses into binary matrices, populated only with “0”s and “1”s, so that software can carry out more elaborate calculations. For a simple example, the binary matrix shown below has a “RowSum” column on the far right that adds the “1”s in the columns to its left. The first respondent selected two roles, Role1 and Role3, so “1”s are in those two cells and the “RowSum” equals 2.
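Here is a sketch of that calculation, using the same invented three-by-three dummy matrix as above; the row sums count each respondent’s selections and the column sums count how often each selection was checked.

```python
import pandas as pd

# The invented binary matrix: rows are respondents, columns are roles.
m = pd.DataFrame(
    {"Role1": [1, 0, 1], "Role2": [0, 1, 1], "Role3": [1, 0, 1]},
    index=["Resp1", "Resp2", "Resp3"],
)
m["RowSum"] = m.sum(axis=1)  # selections per respondent; Resp1 -> 2
print(m)
print(m.drop(columns="RowSum").sum())  # times each selection was checked
```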

Multi-Answer, Multiple-Choice Questions in Surveys

Research surveys by law firms ask multiple-choice questions much more frequently than they ask any other style of question. They do so because it is easier to analyze the data from answers selected from a list or from a drop-down menu. Not only are multiple-choice questions common, they often permit respondents to mark more than one selection. These multi-questions, as we will refer to them, have instructions such as “Choose all that apply” or “Pick the top three.” The image below, from page 11 of a 2015 survey report by King & Wood Mallesons, states in a footnote that “Survey participants were able to select multiple options.” Thus, participants could have chosen anywhere from a single selection up to all 10.

To get a sense of how many multi-questions show up, we picked four survey reports we recently found and counted how many multi-questions they asked, based on the plots their reports presented. The surveys are Kilpatrick Townsend CyberSec 2016, King Wood AustraliaDirs 2015, Littler Mendelson Employer 2018, and Morrison Foerster ConsumerProd 2018. In that order, they have 7 multi-questions in 24 non-Appendix pages, 4 in 36 pages, 8 in 28 pages, and 4 in 16 pages. Accordingly, results from at least 23 multi-questions appeared in 104 pages. Bear in mind that each report has a cover and a back page with no plots, and almost always other pages without plots, so the total number of survey questions asked is always less than the number of report pages.

While multi-questions certainly allow more nuanced answers than, for example, “Pick the most important…” questions, and create much more data, those more complicated pools of data challenge the survey analyst to decide how best to interpret and present them.

A number of analytic approaches enable an analyst to describe the results, to glean from the selection patterns deeper insights, and to depict them graphically. We will explore those techniques.

Co-contributors to law-firm research surveys (Part III)

Twice I have written about instances of co-contributors [18 of them] and [13 more co-contributors] and their respective survey reports. Further digging has uncovered another group of 16 co-contributors.

  1. Achieve — Taft Stettinius Entrep 2018
  2. ANA — Reed Smith MediaRebates 2012
  3. Association of Foreign Banks — Norton Rose Brexit 2017
  4. Australian Institute of Company Directors — King Wood AustralianDirs 2016
  5. Becker Büttner Held — Shakespeare Martineau Brexit 2017
  6. Economist Group — Herbert Smith MandA 2017
  7. Gamesa — Brodies Firm Wind 2013
  8. Institution of Civil Engineers and techUK and Mergermarket — Pinsent Masons Infratech 2017
  9. Ipsos MORI Scotland — Brodies Firm Brexit 2017
  10. IVC Research Center — Meitar Liquornik TechVC 2018
  11. National Foreign Trade Council — Miller Chevalier TaxPolicy 2018
  12. Northern Ireland Chamber of Commerce — Goodbody GDPR 2018
  13. Oxford Analytica — Morrison Foerster ConsumerProd 2018
  14. Ponemon Institute — McDermott Will GDPR 2018, Kilpatrick Townsend CyberSec 2017
  15. Singapore Corp. Counsel — CMS SingaporeGCs 2018
  16. The Lawyer and YouGov — Pinsent Masons Brexit 2017
  17. “an independent consultancy” — Carlton Fields CA 2018

As of this writing, therefore, law firms have teamed on research surveys with at least 47 different organizations. Because some of those organizations have been involved in more than one survey by a firm (and sometimes in surveys by more than one firm), the total number of surveys with a co-contributor is likely nearly 70. But even among the 309 law-firm surveys I know about, it is impossible to figure out what percentage have a co-contributor. First, I have not checked each one. Second, a few dozen of those surveys are known only from a press release, article, or later survey report, not from a PDF report. Third, a firm might have worked with another entity without acknowledging that entity in the survey report.

Number of lawyers in survey firms; merged names

We start with a couple of methodological decisions. First, what number shall we use for the count of practicing lawyers in a firm? To reconstruct the number of lawyers practicing at a firm back in the year of its survey would take much digging. Although the historical, matching data would let us analyze our data set more accurately wherever firm size matters, the effort to obtain it would be daunting.

A second, related issue is how to handle surveying firms that merged after the survey. At least four of the firms in the data set have merged with another major firm during the past few years: BryanCaveBLP, CMS, HoganLovells, and Norton Rose Fulbright. How should we treat their sizes? Also, if we keep the pre-merger name of the firm, we have to figure out both the month and year its merger took effect as well as the month and year a survey was published. That game’s not worth the candle. If we use the name of the merged firm, we lose the correct name of the firm as of the year the survey was completed.

The convention I have tried to adopt uses the current lawyer headcount of an unmerged firm, the latest name of the merged firm, and the merged firm’s lawyer count. The first two names of the firm, without any punctuation, make up my “firm name”.

Under that convention, the average number of lawyers in the 77 law firms for which I have data is 1,047. The median is 753 lawyers. The conclusion is inescapable: very large law firms are the typical sponsors of research surveys.

The range of sizes is also illuminating: 6 lawyers to 4,607 lawyers. The set includes at least three firms with fewer than 200 lawyers along with ten with more than 2,000 lawyers. The takeaway? A firm of any size can launch a research survey.

The plot presents aggregate size data from 69 firms based in four “countries”: Canada (6 different law firms), the United Kingdom (20 firms), the United States (38), and “VereinCLG,” five firms that have a legal structure of either a Swiss verein or a “company limited by guarantee” (CLG).

Average number of pages in reports by originating law firm’s geography

For the period from 2013 through now, we have found 154 research surveys where a law firm conducted or sponsored the survey and a PDF report was published. That group includes 55 different law firms.

We categorized the firms according to five geographical bases: United States firms, United Kingdom firms, vereins, combinations of U.S. and U.K. firms (“USUK”), and the rest of the world (“RoW” — Australia, Canada, Israel, New Zealand, and South Africa). We thought we would find that the largest firms, either the vereins or the USUK firms, would write the longest reports. Our reasoning was that they could reach more participants and could analyze the more voluminous data more extensively (and perhaps add more marketing pages about themselves).

Largely true. As can be seen in the table below, the averages and medians for the five geographical groupings of firms do not differ dramatically, but the two large classes of firms, USUK and Verein, do indeed produce the most pages on average. How many surveys are included in each category is shown in the column entitled “Number”. A sketch of how such a summary can be computed appears after the table.

GeoBase   Number   AvgPages   MedianPages
RoW           13       25.0          20.0
UK            41       24.1          20.0
US            78       22.5          19.0
USUK          17       30.2          22.0
Verein         5       27.6          28.0
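For readers who keep such data in a data frame, here is a sketch of how the summary table can be computed with pandas; the reports data frame and its rows are invented stand-ins for the real 154-report data set.

```python
import pandas as pd

# Invented rows for illustration; the real data set has 154 reports.
reports = pd.DataFrame({
    "GeoBase": ["RoW", "UK", "US", "USUK", "Verein", "US", "UK"],
    "Pages":   [25,    24,   20,   31,     28,       18,   16],
})

# Count, average, and median page length per geographical grouping.
summary = reports.groupby("GeoBase")["Pages"].agg(
    Number="count", AvgPages="mean", MedianPages="median"
)
print(summary.round(1))
```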

We tested the difference between the average number of pages for the USUK reports and the average for the US reports. We selected those two groups because they had the largest gap [30.2 versus 22.5].

A statistical test called the t-test compares two averages, taking into account the dispersion of the values that make up each average. It yields a p-value: the probability that, if there were no real difference between the two groups, random samples of survey reports would still show a gap as large as the one observed. By convention, a p-value below 0.05 is required before statisticians call a difference statistically significant; above that threshold, you can’t say that the difference is due to anything other than chance. On our data, the t-statistic was 1.2 and the p-value 0.24, much above the 0.05 threshold for statistical significance. The swing between USUK average pages and US average pages may look material, but on the data available, we can’t conclude that something other than random variation accounts for it.
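For anyone who wants to replicate this kind of comparison, here is a minimal sketch using scipy; the page counts are invented, not the actual data, and Welch’s version of the t-test is used because it does not assume equal variances in the two groups.

```python
from scipy import stats

# Invented page counts for the two groups of reports.
usuk_pages = [44, 22, 35, 18, 40, 24, 30, 28, 36, 25]
us_pages   = [20, 16, 28, 19, 24, 22, 18, 26, 21, 23]

# Welch's two-sample t-test (equal_var=False).
t_stat, p_value = stats.ttest_ind(usuk_pages, us_pages, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
# If p exceeds 0.05, we cannot rule out chance as the explanation.
```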

Profusion of research surveys on Brexit and the GDPR

Law firms follow current events and especially those that suggest major legal repercussions. For example, the Brexit vote of the United Kingdom has unleashed a torrent of political and legal ramifications. Accordingly, it is not surprising that law firms have launched surveys to research aspects of Brexit, but that 10 or more have been completed may be surprising.

The ten studies found so far include Brodies Firm Brexit 2017, CMS Brexit 2017, DLA Piper Brexit 2018, Eversheds Sutherland Brexit 2017, Herbert Smith Brexit 2018, HoganLovells Brexometer 2017, Norton Rose Brexit 2017, Pinsent Masons Brexit 2017, Shakespeare Martineau Brexit 2018, and Simmons Simmons Brexit 2017.

Not surprisingly, all the firms are either UK-based or UK-oriented with a major U.S. presence (DLA Piper, Norton Rose). Of the six Brexit reports available online, the average is 23 pages of plots and text per report.

Likewise, the European Union’s far-reaching regulation of data privacy, the General Data Protection Regulation (GDPR), has ushered in massive economic, political and legal changes. Law firms are keenly aware of all the work awaiting them, so GDPR has resulted thus far in at least six research surveys by law firms.

The GDPR survey research includes Brodies Firm GDPR 2017, Eversheds DublinGDPR 2018, McDermott Will GDPR 2018, Paul Hastings GDPR 2017, and Technology Law GDPR 2017.

On this topic, two UK firms have weighed in, but so have five U.S. firms. It is also quite possible that several other surveys that address cyber-security and hacking include some questions about GDPR.

Co-contributors to law-firm surveys (Part II)

A law firm should decide at the outset whether it wants to conduct a survey on its own or coordinate with others. In the data set we have been examining, many firms teamed on a survey with another organization, typically one that shared an interest in the topic. In fact, we spotted nine survey reports where two other organizations coordinated with a law firm. Organizations aplenty can help develop questions, distribute invitations, publicize findings, and analyze data.

Obviously, assistance such as this sometimes comes at a cost. We don’t know how much firms have paid co-contributors, but it could be a fairly substantial amount if the services obtained fall into the broad range of consulting. Having a co-contributor also adds complexity and elapsed time, because the firm must manage the external provider (or be managed by it, since sometimes the other company leads the survey process) and adapt to its scheduling. There is also the matter of finding the third party in the first place.

The benefit of bringing in outside experience is that the eventual product will be superior. A co-contributor can also provide the benefits of a consultant: helping to keep the project on track and on time and bringing experienced talent.

Having previously written about 18 instances of co-contributors, here are another 13 co-contributors and their respective survey reports.

  1. 451 Research — Morrison Foerster MA 2016
  2. Acritas — Proskauer Rose Empl 2016
  3. Bank Polski and Polnisch-Deutsche Industrie- und Handelskammer — CMS Poland 2016
  4. Biopharm Insight and Merger Market Group — Reed Smith Lifesciences 2015
  5. Economist Intelligence Unit — Bryan Cave CollectiveLit 2007
  6. J.D. Power, Univ. of Michigan — Miller Canfield AutoCars 2018
  7. Local Area Property Association — Simpson Grierson Reserve 2016
  8. Local Government New Zealand — Simpson Grierson LocalGov 2015
  9. Meridian West — Allen Overy Models 2014
  10. Oxford Economics — HoganLovells Brexometer 2017
  11. Rafi Smith Research Institute — Gilad Saar Trust 2018
  12. VB/Research and The Lawyer — DLA Piper Debt 2015
  13. WeAreTheCity — Reed Smith London 2018