If law firms include in their reports the questions they asked, they usually do so near the relevant plot or table. Every now and then, however, a firm reproduces the questions of the questionnaire in the order they were asked. Here are some examples of that consolidated and comprehensive reporting.
In the five pages of Appendix 2, Berwin Leighton Risk 2014 [pg. 14] states not only all the questions but also all their aggregated responses. Kudos to the firm!
In the image below from HoganLovells FDI 2014 [pg. 62], the Appendix reproduces all the questions (and perhaps what the questionnaire looked like) in a table.
Reed Smith MediaRebates 2012 [pg. 9] includes all the questions in its Appendix.
Browne Jacobson Sleepins 2018 [pg. 32] reproduces all 80 of its survey questions in an Annex.
Brodies Firm GDPR 2018 [pg. 2] explains that the sample it surveyed resembles U.K. businesses as a whole by industry and by revenue or number of employees.
Osborne Clarke Consumer 2018 [pg. 28] strived to balance its participants within each nation.
White Case Arbitration 2010 [pg. 40] describes exceptionally well its efforts to reach out to and obtain a representative group of participants.
It’s very unusual for reports in a series to point out differences in the participant pools. Here is one example, however. Baker McKenzie Cloud 2016 [pg. 5] acknowledges that the 2016 survey has more respondents who are lawyers (“in a legal role”) than previous surveys.
Almost all of the law firm research surveys are conducted once a year. The effort is considerable, and firms want to allow sufficient time to pass, especially if they are conducting a series, for changes to appear in the data. Annual surveys rule. That said, at least three law firms have conducted surveys twice a year.
Irwin Mitchell Occupiers 2015 [pg. 3] is one of a series of reports that gather data in the spring and fall regarding office rentals.
Morrison Foerster MA Two 2017 [pg. 3] reflects surveys that collect input in April and then in September.
Haynes Boone Borrowing 2016 [pg. 2] represents one of a series that gathered data on borrowing practices in April and September.
It seems likely that the longer a survey is open, the more people will take part. But the data does not support that seemingly commonsense notion. For a group of 34 surveys, selected mostly because they all state both the duration of the survey and the number who took it, the correlation between the number of weeks open and the number of participants was negative 0.2! Shorter open periods were associated with more participants!
More than the length of time a survey stays open, participant numbers depend on the quality and size of the email invitation list. By “quality” we mean that the invitees have a reasonable chance of being interested in the survey; the list isn’t some random collection of email addresses. By “size” we mean the sheer number of invitees; all things being equal, if more people receive the invitation, more people will decide to complete it.
Other factors that drive participation rates likely include whether the invitees know and respect the law firm (or co-coordinators), the time demands of the survey, the topic, and the level of the invitee (senior executives and general counsel are bombarded with requests to complete surveys, but more junior people may receive invitations rarely and be more willing to participate).
The scatter plot below shows along the bottom axis how many months a survey was open and along the left, vertical axis how many participants completed it. Open periods of one month or of two months were the most common. For all of these surveys, with 8,500 total participants and 56 total months open, the average number of participants per month was 152.
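The per-month average cited above is simple arithmetic over the pooled figures:

```python
total_participants = 8_500  # summed across all surveys in the scatter plot
total_months_open = 56      # summed open periods, in months

avg_per_month = total_participants / total_months_open
print(round(avg_per_month))  # 152
```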
We will need more surveys to derive dependable numbers on averages per month, and likewise to look at averages of participants per season.
Does seasonality influence participation numbers? Does it make a difference in what month you launch your survey? The next plot tells us that firms showed no particular preference, except that none of them launched a survey in the middle of summer, in July.
Institutional calendars, workload, or summer vacation plans may account for the starting month of a survey more than sensitivity to what will maximize participant numbers. Law firms may have budgets based on fiscal years, or they may orient their survey toward a conference or try to catch the wave of heightened interest in a topic.
The more research surveys we locate, the more organizations we identify that worked with the law firms. We call these organizations “co-coordinators,” but in fact the roles they play vary from leading the project, with the law firm contributing only modestly, to playing a secondary part behind a dominant law firm. Some organizations run the survey project; some provide law firms with access to potential participants; some primarily publicize the results of the survey research.
Whatever their role, co-coordinators appear in something like one half of all research surveys by law firms. Moreover, we have identified 18 surveys that have two or even three co-coordinators. Having already listed the name and at least one survey of 73 co-coordinators, we add another 15 in this post.
- Agenda Consulting: Browne Jacobson Sleepins 2018
- Allenbridge: MJ Hudson VentureCapital 2017
- Association of School and College Leaders (ASCL): Browne Jacobson School 2016
- Canadian Venture Capital Association (CVCA): McCarthy Tetrault VC 2011
- Censuswide: Freshfields Bruckhaus Whistleblow 2017
- Esports Observer: Foley Lardner Esports 2018
- FDU Group: Trowers Hamlin Networking 2014
- Hong Kong Venture Capital and Private Equity Association: Oldham Li HKVC 2018
- Infrastructure Intelligence: Burgess Salmon Infrastructure 2018
- JLL: Baker McKenzie Cloud 2014
- KPMG: Cliffe Decker PE 2017
- Opinium Research: CMSNabarro HealthTech 2017
- Perceptive Insight
- Research Strategy Group: Gowling WLG Franchisee 2017
- Royal Institution of Chartered Surveyors: Tughans Surveyors 2018
- Upload: Perkins Coie VirtualReality 2016
Someone interested in research by law firms might expect corporate revenue of more than one billion dollars to be a common threshold for categorizing that data. The demographics detail could say things like “Number of respondents reporting less than $1 billion” or “Respondents between $1 and $4 billion revenue.”
Bryan Cave CollectiveLit 2007 [pg. 23] employs that cut-off figure and presents the revenue profile of its 242 respondents simply and elegantly. Readers can add the three largest categories and find that 39.7% of respondents reported more than $1 billion of global revenue.
DLA Piper Compliance 2017 also lays out the revenue of its respondent companies. Readers can figure out that 32% of the companies exceeded $1 billion.
Some reports state the number or percentage of their participants whose revenue exceeded one billion dollars. These three did so.
- Pinsent Masons Energy 2017 [pg. 5]: all 200 businesses had revenue greater than $1 billion
- HoganLovells Brexometer 2017 [pg. 14]: all 210 respondents’ companies had more than $1 billion of revenue, and
- Clifford Chance Crossborder 2012 [pg. 36]: all respondents represented companies with annual revenues in excess of $1 billion.
Other reports partially disclose or require some detective work.
- Proskauer Rose Empl 2016 [pg. 4]: almost half of the survey respondents work for businesses with annual revenues of $1 billion or more.
- Carlton Fields CA 2012 [pg. 40]: its 322 participants had average annual revenues of $13.1 billion and a median of $3.8 billion. Seventeen percent are Fortune Global 500 companies, and nearly 49 percent are Fortune 1000 companies. Of those, eight percent are Fortune 100, 19 percent are Fortune 101-500, and 21 percent are Fortune 501-1000.
- KL Gates GCDisruption 2017 [pg. 19]: a majority of companies (82% of 200 companies, or 164 companies) had revenues of 1 billion euros or more (at that time a euro was about 1.3 dollars).
Unfortunately, many survey reports do not give enough detail about their respondents’ distribution of revenue to say anything regarding the common threshold of one billion dollars of revenue.
Readers of survey reports deserve to know the precise wording of the questions that generated the report’s findings. How a question was phrased, the way instructions were added, what selections were available and in what order: all are vital for evaluating the reported data. Many reports don’t restate the question, but appear to summarize it in a plot’s title. Others quote or paraphrase the question asked in their text discussion of the plot. User-friendly reports state the question near the plot or table that summarizes the data.
A superlative treatment comes from Appendix A of HoganLovells FDI 2014 [pgs. 61-75], which lists all 19 questions that were on the survey (and presumably in their order on the questionnaire), as well as nine demographic questions, along with the choices available to respondents for each question. Even more, pages 76 to 95 reproduce in summary tables the data that was collected. We heartily praise this comprehensive disclosure.
In the snippet below from Reed Smith London 2018 [pg. 19], the question appears above the plot.
Littler Mendelson Employer 2013 [pg. 5] states the question clearly, but well above the plot.
Gowling WLG Protectionism 2017 [pg. 14] shows a style variation: the question sits snugly close to the plot. Compared to Littler Mendelson, proximity and bold font highlight the connection between the question and the data.
The final example, from Pinsent Masons TMT 2016 [pg. 10], accentuates the question with red font and close proximity. Offsetting the question to the left is an aesthetic move. The subtitle in parentheses tells readers which subset of the data the plot summarizes.
Most law firms publish their research reports as PDF documents in portrait orientation. The production benefits of PDF stand out: clear and dramatic photos, ample white space, appealing layouts, compressed size, and a widely accepted format. Some maverick firms, however, chose other modes.
Goulston Storrs Multifamily 2017 created an article, which the firm published in a monthly produced by the same organization that conducted the survey.
Haynes Boone Borrowing 2015 [pg. 4] produced what appears to be a PowerPoint deck. Here is the full-page snippet.
Brodies Firm Housebuilding 2015 published a single-page report, and Baker McKenzie Brexit 2017 did the same with an infographic.
The back pages of recent reports often offer social media links as in this half-page snippet from Mills Reeve CommonLaw 2017 [pg. 36]. Twitter and YouTube icons show up.
King Wood AustDirs 2016 [pg. 36] closes with seven social media icons: the firm’s Facebook page, Twitter handle, and LinkedIn group, as well as the China Law Insight blog hosted by the firm, its In Competition blog, and the firm’s Spotlight on Safety Site (not available on July 27, 2018).
Brodies Firm Wind 2013 [pg. 10] has seven buttons: Twitter, LinkedIn, Facebook, Google Plus, YouTube, Instagram and ReciteMe. Its invitation to companies to take the survey also included a link to Tumblr and a share button with many links, including Pinterest.
Carlton Fields CA 2018 [pg. 47] displays two QR codes (quick response), one to view the firm’s blog on class actions and the other to view additional class action resources.
Reports come in two basic orientations: portrait, which is vertical and tall, or landscape, which is horizontal and wide. To categorize the collection of survey reports at hand, we used R and an algorithm that classified reports wider than they are tall as “Landscape” orientation. All the other reports are “Portrait.” Out of 250 reports for which we have PDF formats, 40 are landscape and 210 are portrait. A pie chart conveys the ratio.
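The original classification was done in R; an equivalent rule, sketched in Python and assuming the page dimensions in points are already known, might look like this:

```python
def orientation(width_pts: float, height_pts: float) -> str:
    """Classify a report's page: wider than tall counts as Landscape."""
    return "Landscape" if width_pts > height_pts else "Portrait"

# A US-letter page is 612 x 792 points (8.5 x 11 inches).
print(orientation(612, 792))  # Portrait
print(orientation(792, 612))  # Landscape
```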
Among the landscape reports, we found Mills Reeve Economy 2009, Brodies GDPR 2018, Tughans NIreland 2018, and Pinsent Masons Energy 2017.
For another perspective, we can multiply the width of a report’s pages in points by its height in points (there are 72 points in an inch). That product is “total points,” which creates a distribution over all the PDF reports. The dotplot sorts all the surveys into the two basic orientations and places a dot at the height of each report’s total points. The points are “jittered,” meaning they are spread apart from each other so that the reader can see the concentration of dots at any particular value. Here, the bulk of the portrait reports have about 500,000 total points, and the bulk of the landscape reports have a bit less than 400,000 total points.
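As a worked example of the total-points measure, here is the computation for a standard US-letter page (a hypothetical illustration; the actual reports vary in page size):

```python
POINTS_PER_INCH = 72

def total_points(width_pts: float, height_pts: float) -> int:
    """Page area in square points: width times height."""
    return int(width_pts * height_pts)

# US-letter portrait: 8.5 x 11 inches = 612 x 792 points.
area = total_points(612, 792)
print(area)                       # 484704, near the ~500,000 cluster
print(area / POINTS_PER_INCH**2)  # 93.5 square inches
```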
Visualized with box plots, the median total points for the two orientations are virtually the same, but the portrait reports have fewer outliers. The landscape reports have a handful that are very low in total points.
Finally, a scatter plot shows the distribution of total points and draws ellipses around two groups according to their probable orientation.