The plot below shows participants per year for the lengthy series conducted by five law firms. A facet plot, it gives each firm its own pane (alphabetically from Carlton Fields to White & Case). Within each pane, the left axis scales to that firm's participant numbers. For example, DLA Piper (top right) ranges from below 100 participants to around 300, whereas Davies Ward to its left ranges from 500 to 1,200. White & Case's data is missing participant counts for 2013 and 2014, so its line breaks. The series in this group span nine years at the longest and six years at the shortest.
Generally speaking, the upward slope of the lines confirms that series gain participants as they continue over the years. The exception is Davies Ward, which declined from an initial burst of enthusiasm in 2005 but then began to recover, a trend that lasted until the firm ceased sponsoring the series after 2011.
If a few more series of at least six years' duration had full participant data, we could more confidently assert that brand recognition and appreciation for a series build over time. Certainly this initial view suggests as much.
Does the longevity of a survey series affect the average number of participants in the series? This is likely to be too crude a question, because the target populations of series differ significantly. Then too, firms might modify their questions as the series goes along rather than repeating the same questions, which could affect participation. A series might bring on different co-coordinators or change how it reaches out for participants. If we could control for factors such as these, which might swamp changes in participant numbers arising simply from annual invites, content, and publicity, we could make some headway on the question, but the data for that level of analysis is not available. Also, averaging participant numbers over the years of a survey series may conceal material ups and downs.
Moreover, of greater usefulness to law firms would be knowing whether numbers of participants tend to increase over the life of a series as it becomes better known and more relied on.
We plunge ahead anyway. To start, consider the series that a law firm has sponsored for four years or more. We know of 21, presented in the plot below. The color coding in the legend at the bottom corresponds to the number of surveys in each series (some of which are ongoing), moving from midnight blue for the four-year series to the lightest color (yellow) for the longest-running series (13 years).
As we speculated above, regressing the number of years a survey series has run against its average participants provides no insight. Factors other than a series' longevity influence the number of participants far more.
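The kind of simple regression described above can be sketched in a few lines. The series lengths and participant averages below are invented for illustration; they are not the survey data discussed in the text:

```python
# Hypothetical sketch: ordinary least squares of average participants on
# series length. All numbers below are made up for illustration.
import statistics

years = [4, 5, 6, 7, 8, 9, 10, 13]                    # length of each series
avg_participants = [350, 120, 800, 200, 95, 640, 310, 410]  # invented averages

# Slope and intercept for a one-predictor OLS fit.
mean_x = statistics.fmean(years)
mean_y = statistics.fmean(avg_participants)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, avg_participants))
sxx = sum((x - mean_x) ** 2 for x in years)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# R-squared: share of variance in participants explained by series length.
ss_tot = sum((y - mean_y) ** 2 for y in avg_participants)
ss_res = sum((y - (intercept + slope * x)) ** 2
             for x, y in zip(years, avg_participants))
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.1f}, R^2 = {r_squared:.3f}")
```

With data as noisy as the invented values here, the R-squared comes out low, which is the pattern the text reports: series length alone explains little of the variation in participants.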
Several law firms have conducted (and may still be conducting) two different series. These firms include Baker McKenzie (Brexit and cloud computing), Berwin Leighton (hotels, two different geographies), Clifford Chance (M&A and European debt), Freshfields Bruckhaus (corporate crises and whistle blowers), Herbert Smith (M&A and finance), Jackson Lewis (workplace and dress codes), Miller Chevalier (Latin American corruption and tax policy), and Morrison Foerster (legal industry and M&A).
A few firms have done (or may still be conducting) three surveys on different topics: CMS (the legal industry, Brexit, M&A in Europe), DLA Piper (compliance, debt in Europe, M&A), and Pinsent Masons (Brexit and two on construction in different geographies).
We can also look at the broad topics where one or more firms have coordinated a series of at least five years’ length. We have coded the particular topics into broader meta-topics. The next chart tells us that three meta-topics on industries are included in these long-running series: construction, real estate, and private equity. Second, firms have also run five-plus-year series on disputes (litigation, class actions, and arbitration). Finally, the most popular subject for research surveys has been mergers and acquisitions, with three different meta-topics.
If some organization helps on a law firm’s research survey, the report clearly acknowledges that contribution. For example, as in the snippet below, Burgess Salmon Infrastructure 2018 [pg. 8] gave a shout out to its two co-coordinators (Infrastructure Intelligence and YouGov).
At least 12 law firms have conducted surveys with two different co-contributors. Three firms have worked with four co-contributors (Dentons, Morrison & Foerster, and Reed Smith) and two firms have worked with six co-contributors (CMS and Pinsent Masons).
Interestingly, two law firms have teamed with one or more other law firms: Shakespeare Martineau Brexit 2017 with Becker Büttner Held and Miller Chevalier LatAmCorruption 2016 with 10 regional law firms.
For most co-coordinator surveys, the pairing is one law firm and one co-coordinator. However, Pinsent Masons Infratech 2017 and Clifford Chance Debt 2007 each sought the assistance of three co-coordinators for a research survey.
At this point, at least 11 co-contributors have helped on more than one survey by a law firm: Acritas, Alix Partners, ALM Intelligence (4 surveys), the Canadian Corporate Counsel Association (5), the Economist Intelligence Unit, FTI Consulting (3), Infrastructure Intelligence, IPSOS (5), the Ponemon Institute, RSG Consulting (3), and YouGov.
Consider two different meanings of “double survey.” One meaning applies to a law firm sending out two surveys, each to a different target audience, and then combining the responses in a report. A second meaning applies to a firm conducting more than one survey in a year, but with the same target audience.
Burgess Salmon Infrastructure 2018 [pg. 8] explains that it simultaneously conducted two separate surveys, one by interviews and the other by an online questionnaire. The report juxtaposes the findings.
Minter Ellison Cybersecurity 2017 [pg. 6] also undertook a double survey. With separate instruments, it reached out to members of boards of directors and also to chief information officers and others. The report combines the data.
Turning to the second meaning of “double survey”, one example started in 2015. Haynes Boone has conducted its energy borrowing survey twice yearly since then, e.g., Haynes Boone Borrowing 2018 [pg. 2].
Other firms that have conducted surveys twice a year on a topic include Morrison Foerster, e.g., Morrison Foerster MA 2018, and Irwin Mitchell, e.g., Irwin Mitchell Occupiers 2014. We also found an instance of quarterly surveys: Brodies Firm Brexit 2017!
Quite often law firms ask respondents to answer a question with a value from a scale. Those values should represent balanced positions: the conceptual distance from one point to the next should be equal across the scale. Researchers have shown, for example, that respondents perceive the strongly disagree-disagree-neutral-agree-strongly agree scale as balanced.
Most survey designers set the bottom point as the worst possible situation and the top point as the best possible, then evenly spread the scale points in-between.
The text selected for the spectrum of choices deserves an extended discussion. Sometimes survey questions add text only to the polar values of a scale. For example: "Choose from a scale of 1 to 6 where 1 indicates 'Yes, definitely' and 6 indicates 'No, definitely not.'" Alternatively, the question could supply text for the intermediate positions as well: 2 indicates "Yes, probably", 3 indicates "Maybe", and so on.
DLA Piper Compliance 2017 [pg. 6] used a 10-point scale with text at the extremes and in the middle position:
It is hard to write text descriptions of scale positions that respondents perceive as equally spaced. If you supply only numbers, respondents will space the choices in their own minds, and you will have no clear way to know what they meant. On the other hand, words are inherently ambiguous and introduce all kinds of variability in how respondents interpret them.
Often the responses to a well-crafted scale question come back roughly "normal," as in the familiar bell-curve distribution: the midpoint gets the most responses, and the numbers fall off fairly symmetrically on either side. Here is an example from a five-point scale.
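A quick way to eyeball that bell-curve pattern is to check where the peak falls and whether the mean sits near the midpoint. The five-point counts below are invented for illustration:

```python
# Hypothetical five-point scale counts (invented for illustration):
# 1 = strongly disagree ... 5 = strongly agree
counts = {1: 30, 2: 90, 3: 160, 4: 100, 5: 35}

total = sum(counts.values())
mean_score = sum(score * n for score, n in counts.items()) / total

# A roughly bell-shaped result peaks at the midpoint (3) and has a mean
# close to 3; a lopsided mean would signal skew toward one pole.
peak = max(counts, key=counts.get)
print(f"peak = {peak}, mean = {mean_score:.2f}")
```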
Of the 464 law firm research surveys located to date, the number of participants is known for 273. Osborne Clarke Consumer 2018 collected an extraordinary 16,000 participants, so we have set it aside for this analysis, along with the next largest survey, CMS Restaurant 2018 at 5,446 participants, because the two materially skew the aggregate calculations for the distribution.
Based on the slightly reduced data set, the average number of participants is 417 while the median is 203. At the extremes, 11 surveys had fewer than 50 participants while six had 2,000 or more. Without the two outliers, the grand (known) total has reached 92,098.
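The gap between a mean of 417 and a median of 203 is the classic signature of a right-skewed distribution, and it is why the two giant surveys were set aside. A small sketch makes the effect concrete; the "typical" counts below are invented, while 16,000 and 5,446 are the two real outliers named in the text:

```python
# Sketch of why a couple of huge surveys were excluded: with a long right
# tail, outliers pull the mean far above the median. The "typical" counts
# are invented; 16000 and 5446 are the two outliers named in the text.
import statistics

typical = [60, 110, 150, 200, 210, 320, 450, 900]   # invented counts
with_outliers = typical + [16000, 5446]

print("without outliers:", statistics.mean(typical),
      statistics.median(typical))
print("with outliers:   ", statistics.mean(with_outliers),
      statistics.median(with_outliers))
# The mean jumps dramatically once the outliers are included,
# while the median barely moves.
```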
The plot that follows shows the total number of participants per year.
The box plot shows more about the distribution of participants each year. The medians have been consistently around 200 participants. Lately, however, some outliers have been significantly above that figure.
Why do people take the time to respond to surveys from law firms?
- Most of them have some intrinsic interest in the subject of the survey.
- Longer term thinkers appreciate that reliable data about a subject will benefit everyone.
- Some respondents may feel flattered. Providing data and views affirms their sense of competence and knowledge.
- A survey is a break in the typical flow of work.
- Respondents feel grateful or loyal to the law firm that solicits answers.
- Many people feel good about being asked a favor and complying.
For 44 research reports I have determined how long the survey was open, i.e., the data collection period. I picked those reports haphazardly over time, making no effort to be random or representative, simply to start calculating some statistics. With that caveat, the average data collection period is 1.5 months with a standard deviation of 0.74, which means that about two-thirds of the periods fell between 0.8 months (~3 weeks) and 2.2 months (~10 weeks). The shortest collection period was 0.1 months (3 days) while the longest was 3 months.
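As a quick arithmetic check, the one-sigma interval follows directly from the quoted mean and standard deviation:

```python
# One-sigma interval for the data collection period, from the quoted
# mean (1.5 months) and standard deviation (0.74 months).
mean_months, sd_months = 1.5, 0.74
low, high = mean_months - sd_months, mean_months + sd_months
weeks_per_month = 365.25 / 12 / 7   # ~4.35 weeks in an average month
print(f"{low:.1f} to {high:.1f} months "
      f"(~{low * weeks_per_month:.0f} to ~{high * weeks_per_month:.0f} weeks)")
```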
The plot shows the distribution of open periods together with the month in which the survey launched. No particular month seems favored.
Here are several reasons why law firms call a halt to collecting survey responses.
- New responses have slowed to a trickle
- A practice group is eager to start the analysis and find something out!
- Staff and partners have been pushed enough to persuade more participants
- The firm has emailed three reminders to potential participants
- The co-contributor has done enough or been pushed enough
- Qualified responses have hit triple digits, a respectable data set
- The participant group is sufficiently representative or filled out
- Marketing wants to get out first or early on some current issue
- The firm wants to meet the promise it made to participants to send them a report promptly
- The budget says it’s time to start the analysis (and usually a report)
The analysis and report preparation can begin before the last survey submission, but that is easier to do with a programming script that lets an analyst read in updated data and re-run the analysis with little additional effort.
The most common style of title starts with a few keywords and then adds another line or a few explanatory words after a colon. Here Ashurst GreekNPL 2017 is one of many examples:
Titles on survey reports range from functional to fanciful. “Outsourcing Public Services Across Local Government” [Ashfords Outsource 2017] is about as meat-and-potatoes as it gets; “South Florida Real Estate 2016 Outlook Survey” [Berger Singerman SFlaRE 2016] has a similar matter-of-factness.
Some titles expand: “The Good, the Bad, and the Troubling: Fasken Martineau’s 2017 Employer Occupational Health and Safety Survey Report Legal Compliance Challenges For Canadian Employers” [Fasken Martineau OHS 2017] or “Getting it right from the ground up: A survey on construction disputes: The causes and how to avoid them” [Russell McVeagh ConstrDisp 2018].
Only rarely does a title include both the name of the firm and the year, as in “Baker McKenzie 2016 Cloud Survey” [Baker McKenzie Cloud 2016]. A touch more commonly, the year alone appears: “European Acquisition Finance Debt Report 2011” [DLA Piper EuropeDebt 2011].
Titles that stand out entice the reader. Examples include “Mythbusting the common law marriage” [Mills Reeve CommonLaw 2017] or “The Multichannel High Street: A Nation of Shoppers: but is it a nation of shopkeepers?” [Squire Sanders Retail 2013].
Most titles get the job done with simple language and structure. A few approach complexity: “Finding the balance: human touch versus high tech: Millennials and the future of the hotel and restaurant sector” [CMS Restaurants 2018].
When law firms conduct a series of surveys, the titles usually morph in minor ways as the years pass. For instance:
- “Survey Of Office Occupiers: Changing Attitudes To Property Needs” [Irwin Mitchell Occupiers 2014]
- “Survey Of Office Occupiers – Part III: Changing Attitudes To Property Needs – Autumn 2015” [Irwin Mitchell Occupiers 2015]
- “Survey Of Office Occupiers – Part IV: Changing Attitudes To Property Needs and the Impact of Brexit – Summer 2016” [Irwin Mitchell Occupiers 2016]
- “Property Trends in 2018 – Survey of Office Occupiers” [Irwin Mitchell Occupiers 2018].
Earlier I identified co-contributors who have teamed with various law firms on research surveys. The law firm, however, does not always lead the survey project and retain a co-coordinator. Some research projects happen the other way around: perhaps a group that lacks funds solicits a law firm to help out, or another organization wants legal commentary. This analysis does not differentiate surveys by the respective roles of the law firm and its co-coordinator.
Based on the 91 survey reports available in PDF that I have analyzed, approximately 106 co-contributors are named (some more than once). A more precise count would depend on whether the units of larger organizations, e.g., Acuris units and Economist Group units, are categorized separately or collectively.
Based on self-descriptions on their home webpages, I categorized the co-contributors into 15 types. The lines between types are loose, as between "Market Research" and "Marketing", or between "Consulting" and either of those. Be that as it may, knowing that more work needs to be done to confirm all of the match-ups and that other research surveys will turn up additional co-contributors, at least a preliminary view can be shared here. The plot below shows the initial results.
Law firms that do not proceed entirely on their own with a survey gravitate toward co-coordinators that help them reach the target market. Market research firms, publications that reach a sector or a niche within it, and trade groups whose members share interests are by far the most common match-ups. Beyond those frequent pairings, firms team with a wide variety of co-contributors.