Two observations arise from a report published by KPMG, “Through the looking glass: How corporate leaders view the General Counsel of today and tomorrow” (Sept. 2016): one about what constitutes “data from a survey,” and the other about dawning awareness of data analytics among general counsel.
Regarding the first observation, the report states that its conclusions are based on interviews with 34 “CEOs, Chairmen, General Counsel and Heads of Compliance who made themselves available for interviews and kindly agreed to participate in our research.” (p. 27). While you can certainly identify themes from interviews, unless you ask everyone the same question (or some of the same questions), you can’t quantify your findings. Writing that “risk management is top of mind for GCs” is worlds apart from writing that “twenty-six of 34 interviewees mentioned risk management as a significant concern.” Additionally, surveys are designed to gather data that is representative of a larger population. It is unlikely that the particular group of 34 who agreed to speak to the KPMG interviewers is representative of the broader population of global CEOs, Board Chairmen, General Counsel, or Chief Compliance Officers. Subjective interpretations of what a limited group of people say fall short of quantified research, although those interpretations carry whatever credibility a reader assigns them.
The second observation highlights the passing reference (but at least it is a reference) to machine learning software becoming better known to general counsel. “Technology was also cited as an important tool to help the GC improve efficiency, at a time when they are continually being asked to do more with less: ‘New technology helps the GC to be more responsive to the real-time demands of the C-suite of executives,’ says the CEO of a large consumer services company. Companies are making greater use of data analytics and are increasingly moving from descriptive analytics (where technology is used to compress large tranches of data into more user-friendly statistics) to predictive analytics and prescriptive models that extrapolate future trends and behavior. The Office of the GC is being transformed by this process, for example, when performing due diligence on M&A targets or monitoring global compliance.” (p. 14). The sentences that follow direct attention to predictive coding in e-discovery, it is true, but at least the report links awareness of predictive analytics to the transformation of law departments.
To respect and rely on the findings of a legal industry survey, legal managers should be able to find in the survey report the number of people who answered the survey (the sample, respondent count, or sometimes just “N”), the number of people who were invited to answer the survey (the population), and how the surveyor developed that population of invitees.
Focus on that last disclosure, which basically concerns the representativeness of the survey population. If a company that sells time and billing software to law firms writes to its customers and asks them “Do you find software technology valuable for your firm?,” no one should be surprised if the headline of the vendor’s report boasts “Nine out of ten law firms find software technology valuable!” Aside from the binary choice of the question and the vendor’s blatant self-interest in promoting sales of software, the crucial skew in the results arises from the fact that the people invited to complete the survey hardly mirror people in law firms generally. They have licensed or at least know about time and billing software. The deck was stacked; the election was rigged.
Unfortunately, all too often vendor-sponsored surveys go out to invitees who have some connection with the vendor and therefore are hardly representative of law firm lawyers and staff as a whole. The invitees will almost certainly be on the vendor’s contact list, among its newsletter recipients, or among those who visit the vendor’s website and register. Only sometimes will a vendor develop or rent a much larger mailing list and reach out to those names. Even then, respondents will likely be self-selected because they use that kind of software or service or have some level of awareness of it.
Most law departments, when inviting their clients to complete a satisfaction survey, select recipients at or above a certain level, such as all “Managers” or “everyone above comp level 15.” It would be interesting and enlightening for a department to try a “snowball survey.”
Send the questionnaire form (or an email with its online equivalent) to a relatively few, high-level clients. Ask them to complete the form and also to forward the blank form to three colleagues who have worked recently with the law department (or forward the email invitation to those colleagues). Each recipient, in turn, is also invited to extend the survey’s reach, and thus the snowball grows.
A service provider in the legal industry could adopt the same tactic: invite everyone you can reach to take a survey, but urge them to send it on to others they know who would have something to say about the survey’s topic. Now, some surveyors may reject the snowball approach because they want to control who is possibly in the participation group. But a broader-minded and more objective approach would be to sample as many participants as possible and thereby gain a more accurate understanding of the entire population.
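The mechanics of a snowball survey can be sketched in a few lines. This is a minimal simulation, not anyone’s production survey tool: the seed names, the 50% forwarding probability, and the three-colleague fan-out are all assumptions for illustration.

```python
import random

def snowball_survey(seeds, forward_prob=0.5, fan_out=3, max_waves=4, seed=42):
    """Simulate snowball sampling: each respondent may forward the survey
    to up to `fan_out` colleagues. Returns the respondent count per wave."""
    rng = random.Random(seed)
    current = list(seeds)
    reached = set(seeds)
    waves = [len(seeds)]          # wave 0: the hand-picked, high-level seeds
    for _ in range(max_waves):
        nxt = []
        for person in current:
            for i in range(fan_out):
                if rng.random() < forward_prob:
                    colleague = f"{person}.{i}"   # placeholder for a new contact
                    if colleague not in reached:
                        reached.add(colleague)
                        nxt.append(colleague)
        if not nxt:
            break
        waves.append(len(nxt))    # the snowball grows wave by wave
        current = nxt
    return waves

waves = snowball_survey(["GC-1", "GC-2"])
```

Even with only two seeds, later waves can dwarf the initial mailing, which is the whole appeal of the approach.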
What is the difference between a “poll” and a “survey”? One commentator said that polls tend to focus on single questions, like a political referendum that asks for a “yes” or “no,” or a more elaborate question that offers a menu of possible answers; surveys, by contrast, ask sets of questions that can increase the coverage and reliability of the results. Another definition suggests that poll results appeal to a wider public (“Who should be the All-Star first baseman?”), whereas surveys fit the needs of academicians (or, in the legal industry, vendors) who want to emphasize the scientific or scholarly character of their work. A survey regarding machine learning software used by law departments and law firms would be an example.
So in short, a poll is generally used to ask one simple question, while a survey is generally used to ask a wide range of questions. JurisDatoris may poll its readers someday, asking a single question such as “What is the most common target of data analysis in your job?” By contrast, this blog will constantly discuss findings and methodology of surveys that ask many questions.
Survey data provides considerable insight for legal managers, if the survey’s methodology is sound. One of the methodological decisions to be made by the sponsor is whether to weight responses. You weight by adjusting the responses to match the demographic characteristics of the population you have surveyed.
A survey of law departments, as an example, might weight the responses by the size of the law departments. That means you adjust the responses you have in hand so that they more accurately represent the entire population. You might have a category of 1-to-3 lawyers in the department, a second of 4-to-6, a third of 7-to-12, and a fourth category for all law departments that are larger. Demographic data about law departments in the United States suggest that at least a third of them have three lawyers or fewer.
If the survey responses had only ten percent in the smallest category, the surveyor should multiply each of those responses by roughly three (about 33% in the population divided by 10% in the sample) so that the unbalanced sample better represents all U.S. law departments. The few in the sample must count for more if you are going to generalize to all law departments in the population.
The broader the categories, the less the surveyor needs to consider weighting responses, since the responses are more likely to distribute themselves in conformity with the population’s distribution. But with narrow categories, a handful of responses might need to be weighted heavily (multiplied more) and therefore those few will be disproportionately influential in the overall results. One safeguard is to trim the weights, which prevents one or two respondents from being upweighted by more than some cap, such as 5 or 10 times. An article by Nate Cohn in the New York Times, September 13, 2016, helped make this point about survey weighting clear.
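To make the weighting arithmetic concrete, here is a small sketch. The bucket shares and the “yes” percentages are invented for illustration; only the mechanics (population share divided by sample share, with a trimming cap) follow the logic described above.

```python
def weighted_yes(sample_pct, pop_pct, pct_yes, cap=5.0):
    """Reweight bucketed survey answers so the sample matches assumed
    population shares; trim any weight above `cap` to limit the
    influence of a thinly populated bucket."""
    weights = {b: min(pop_pct[b] / sample_pct[b], cap) for b in sample_pct}
    num = sum(weights[b] * sample_pct[b] * pct_yes[b] for b in sample_pct)
    den = sum(weights[b] * sample_pct[b] for b in sample_pct)
    return num / den

# Invented department-size buckets: share of the sample, assumed share
# of the population, and share answering "yes" in each bucket.
sample_pct = {"1-3": 0.10, "4-6": 0.30, "7-12": 0.35, "13+": 0.25}
pop_pct    = {"1-3": 0.35, "4-6": 0.30, "7-12": 0.20, "13+": 0.15}
pct_yes    = {"1-3": 0.40, "4-6": 0.50, "7-12": 0.60, "13+": 0.70}

unweighted = sum(sample_pct[b] * pct_yes[b] for b in sample_pct)  # 0.575
weighted = weighted_yes(sample_pct, pop_pct, pct_yes)             # 0.515
```

In this fabricated data, the smallest departments are under-represented (10% of the sample versus an assumed 35% of the population), so they get a weight of 3.5, and the overall “yes” share drops from 57.5% unweighted to 51.5% weighted.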
Legal managers need to understand how much they can rely on a survey’s results. After all, scores of legal industry surveys are released each year, their findings trumpeted as truths about attitudes of inside counsel, dollars spent on e-discovery, compensation of associates, changes in diversity, or many other management topics. Typically the sponsor of the survey invites a large group of lawyers to provide answers to questions on an online survey site.
To make up an example, a report might state that “Based on over 200 responses, 65% of Managing Partners foresee that London will be less of a global legal center as a result of the Brexit vote.” The sad truth is that many reports do not adequately explain how many people were invited to respond, the source of those invitations, or the percentage who replied.
Picking a recently released survey report at random, consider “Trends and Opportunities in Law Firm Outsourcing,” sponsored by Williams Lea Tag and conducted by Sandpiper Partners.
On page 8, under the heading “Methodology”, readers learn “The survey was sent to Managing Partners, Chairmen and senior business executives of selected Am Law 150 firms, large firms in the United Kingdom and to registrants at Sandpiper Partners’ events.” That’s all. It does not state how many people were invited to take part; it does not state the basis on which certain Am Law 150 firms were “selected” or why firms outside that group were excluded; it does not describe who counts as a “senior business executive” and which of them were invited; it does not define “large firms” in the UK market; and it offers nothing about what kinds of events Sandpiper Partners holds or who attends them. Readers are left in Stygian darkness about the number invited to take part and how that group was selected.
As to the number of survey invitees who actually participated, the report gives no answer: “Response rates for the survey increased more than 20% from 2015 to 2016,” it unhelpfully teases on page 9, and then proceeds with nice graphics to break respondents down with percentages by size of law firm, the respondent’s individual role, and the location of the firms. That’s it. We do not know whether 50 took part, 500, or 5,000.
Given silence on the number of invitees and on the number who took the survey, readers can’t assess another important attribute of any survey: the participation rate (respondents divided by invitees). In general, the higher the participation rate, the more trustworthy the survey’s results.
This survey report, we hasten to emphasize, tells what most legal-industry-survey reports offer regarding these key numbers: “We invited gobs of people in law firms or law departments and a bunch of them responded. Now, off to our headline findings that you should respect and rely on.”
Survey results are data that legal managers often encounter, but they need to be savvy about how much reliance to put on the data. When sophisticated survey results are published, the report typically includes a statement of the margin of error. Let’s think of a scenario. If the ABA invited its members by email to approve or disapprove a nominee for the Supreme Court and 1,000 or so responded, the margin of error would be plus or minus three percent. So, if 60% of respondents approved the nominee, the “real” approval among all members (the population of ABA members), as estimated from that subset (the sample), could be as high as 63% or as low as 57%. As explained in the NY Times, Oct. 6, 2016 at A18, the stated margin of error explains sampling variation: “error that occurs because surveys are based on only a subset of the full population of [ABA members].”
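The ±3% figure in the ABA hypothetical follows from the standard formula for the 95% margin of error of a sample proportion, z·√(p(1−p)/n), which a few lines can verify:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# 60% approval among roughly 1,000 respondents: about plus or minus
# 3 percentage points, matching the scenario in the text.
moe = margin_of_error(0.60, 1000)
```

Note that the formula shrinks only with the square root of n, so quadrupling the respondents merely halves the margin of error.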
But the stated margin of error on a survey misses other important sources of error. “Frame error occurs when there is a mismatch between the people who are possibly included in the poll (the sampling frame) and the true target population.” With the ABA example, members who had not provided a usable email address would fall outside the frame of potential respondents.
A second form of survey error arises from nonresponders. That is when “the likelihood of responding to a survey is systematically related to how one would have answered the survey.” Again with the hypothetical ABA study, members who do not appear in court might skip the survey at a higher rate than the rest of the members, and they might somewhat consistently (that is the meaning of “systematically” in the quote) have favored a judge who has been in the public eye, regardless of merit. That would illustrate non-response bias.
Third, error other than margin of error may result from the analysis of the data. This is a morass of tough decisions and potential mistakes.
Fourth, the wording of the question or the choices permitted to the respondents (as in drop-down menus) may skew the accuracy of the results.
We could go on with other survey traps, but the key point here is that survey data in the legal industry should trigger careful examination of the survey’s methodology.
Senior partners in law firms pour hours into agonizing over how much to pay their partners. Those decisions take into account many factors and cumulatively, over time, shape the culture of the firm. Because those partner compensation decisions are so crucial, and analysis of data so integral, firms seek guidance and input from multiple sources, including surveys. So, when an article in the New York Times, Oct. 14, 2016 at B3, ran under the headline “Men v. Women in Law: A Pay Divide of 44%,” the data analyst in me pored over it.
The headline derived from the finding that a partner’s annual compensation is typically “tied to the amount of business they bring.” Women brought in an average of $1.7 million of business; men averaged $2.6 million. Interestingly, while the average male partner brought in roughly 50% more than the average female partner, the pay gap was smaller, at 44%, so something else went into the determinations.
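A quick arithmetic check of the two headline figures (the $1.7 and $2.6 million averages come from the article; the code is just a sanity check):

```python
men_orig, women_orig = 2.6, 1.7              # average business originated, $M
orig_gap = (men_orig - women_orig) / women_orig   # men's edge over women
# orig_gap comes out near 0.53, so "roughly 50% more" holds, and the
# reported 44% pay gap is indeed smaller than the origination gap.
```

That residual difference between the two gaps is precisely why the follow-up questions below matter.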
My three other points are different. First, it would be very useful if Major, Lindsey & Africa, the preeminent executive search firm in the legal industry and the survey’s sponsor, controlled for the ages of the partners. It seems plausible that the average male partner is older than the average female partner, which may account for a chunk of the origination and compensation differences: more years in practice means more time to accumulate a stable of clients and become known for a specialty.
Second, it would be insightful if the sponsor compared compensation by gender when it holds practice areas constant. If Trusts and Estates partners, to pick one practice, differ significantly on compensation by gender (ideally controlling for age differences), that finding would clarify the value of the survey results for decision-makers.
Last, if the comparison were between the average compensation of male partners whose origination figures roughly match those of female partners, would the gender gap change? Take all the male partners who originated $1.6 to $1.8 million and match their compensation against all the female partners in the same origination range; that would be a sharper test for discrimination.
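The matched-band comparison can be sketched as follows. Every partner row is an invented placeholder; only the method (restricting both genders to the same origination band before averaging compensation) reflects the point above.

```python
# Invented (gender, origination in $M, compensation in $K) rows.
partners = [
    ("F", 1.70, 610), ("F", 1.75, 630), ("F", 1.20, 450),
    ("M", 1.70, 700), ("M", 1.65, 680), ("M", 2.60, 1050),
]

def avg_comp(rows, gender, lo, hi):
    """Average compensation for one gender within an origination band."""
    comps = [c for g, o, c in rows if g == gender and lo <= o <= hi]
    return sum(comps) / len(comps) if comps else None

# Compare only partners originating $1.6M to $1.8M.
f_avg = avg_comp(partners, "F", 1.6, 1.8)   # 620
m_avg = avg_comp(partners, "M", 1.6, 1.8)   # 690
band_gap = (m_avg - f_avg) / f_avg          # about 0.11
```

In this fabricated data, the within-band gap (about 11%) is far smaller than the overall 44%, which is exactly the kind of contrast such a matched comparison would surface, in either direction.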