To learn more from a set of data, you may want to calculate additional variables. Here is an example from a client satisfaction survey.
If you are a general counsel and you ask your clients to assess your department, ask them not only to evaluate your group’s performance on a set of attributes but also to rank those attributes by importance. The more important the attribute – such as timeliness, understanding of the law, responsiveness – the more your clients should expect good performance from the law department. You want to focus on what your clients value.
From the survey data, create an “index of client satisfaction” that divides reality (the performance ratings) by clients’ expectations (the importance ratings) on each attribute. In short, satisfaction equals reality divided by expectations. You can then calculate averages, medians, and so on.
With 1.0 being the absolute best, where the delivered performance fully met the expectations of all your clients, your index will decline to the degree that the law department’s performance fell short of what clients felt was important and expected. By the way, low expectations (importance) that are fully met show up in the index as high satisfaction. Focus on the gap between the highest-ranking attributes and their evaluation ratings.
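The index described above can be sketched in a few lines of Python. The attribute names and scores below are hypothetical illustrations, not actual survey results:

```python
def satisfaction_index(performance, importance):
    """Divide delivered performance by client expectations, attribute by attribute."""
    return {attr: round(performance[attr] / importance[attr], 2)
            for attr in performance}

# Hypothetical average ratings on a 1-to-5 scale
performance = {"timeliness": 3.8, "responsiveness": 4.2, "knowledge of the law": 4.5}
importance  = {"timeliness": 4.6, "responsiveness": 4.4, "knowledge of the law": 4.0}

index = satisfaction_index(performance, importance)
# index["timeliness"] -> 0.83: performance fell short of what clients expected.
# Values above 1.0 mean modest expectations were more than met.
```

Attributes with the lowest index values, especially those clients ranked as most important, are the ones that deserve attention first.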
To make better decisions based on client-satisfaction survey results, break down client scores, such as by the frequency of their legal service use: low, medium, and high. In other words, for the attribute “Knowledge of the business” you might report that infrequent users averaged 3.8 on a scale of 1 (poor) to 5 (good); that medium users (seeking legal advice once a quarter or more often, perhaps) averaged 3.9; and high-volume users (perhaps more than three times a month) averaged 4.1. That would require an additional question on the survey offering three choices for frequency of calling the law department, but it lets you gauge the scores of different tranches of your clients more finely.
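A minimal sketch of that breakdown, assuming each survey response has been reduced to a (tranche, score) pair; the responses here are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical responses: (frequency tranche, score for "Knowledge of the business")
responses = [
    ("low", 4), ("low", 4), ("low", 3),
    ("medium", 4), ("medium", 4),
    ("high", 4), ("high", 5), ("high", 4),
]

# Group scores by how often the respondent uses the law department
by_tranche = defaultdict(list)
for tranche, score in responses:
    by_tranche[tranche].append(score)

averages = {t: round(mean(scores), 1) for t, scores in by_tranche.items()}
# averages["high"] -> 4.3, the average score among heavy users
```

The same grouping works for any segmentation question you add to the survey, such as business unit or geography.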
Heavy users could be thought to be your main clients, thus most deserving of your attention and resources, although some people might argue that infrequent users may be avoiding your department, under-using your expertise, and running unwanted legal risks. This is a complex topic, since a heavy user may be lazy, offloading work to the law department, thick as a brick, or too cautious to make decisions.
To go beyond tabulations of satisfaction ratings by frequency of use, and to introduce another way to weight each individual’s score, you could use the level of the person. A Grade 2 (SVP level, maybe) response would be weighted more than a Grade 3 (AVP level), and so on. Then the calculations of average scores can take into account the position in the company of respondents in a single metric, rather than multiple metrics for senior, medium and junior levels.
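Weighting by level reduces to a weighted average. The grade-to-weight mapping below is an assumption made up for illustration; you would choose weights that reflect your own view of how much each level’s opinion should count:

```python
# Hypothetical weights: a Grade 2 (SVP) response counts three times
# as much as a Grade 4 response, a Grade 3 (AVP) response twice as much.
weights = {2: 3.0, 3: 2.0, 4: 1.0}

# Hypothetical (grade, satisfaction score) pairs from the survey
responses = [(2, 4.0), (3, 3.5), (3, 4.0), (4, 3.0)]

total_weight = sum(weights[grade] for grade, _ in responses)
weighted_avg = sum(weights[grade] * score for grade, score in responses) / total_weight
# weighted_avg -> 3.75, a single metric that leans toward senior respondents
```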
If you are collecting data with a survey, you might ask the invitees to rank various selections on a scale. “Please rank the following five methods of knowledge management on their effectiveness using a scale of 1 (least) to 5 (most)” followed by a list of five methods. Ranking yields more useful data than “Pick all that you believe are effective” since the latter does not differentiate between methods: each one picked appears equally effective.
But ranking spawns the risk that respondents will confuse which end of the scale is most effective and which least. They might not read carefully and therefore put the number 1 for their most effective method – after all, being Number 1 is best, right? – and put the number 5 for their least effective method.
One method some surveys adopt to guard against respondents misreading the direction of the scale is to add a question after the ranking question. The follow-on question asks them to check the most effective method. Software can quickly confirm that the respondent understood and applied the scale correctly since the 5 on the first question matches the checked method on the second question.
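The software check is a simple consistency test: does the method that received the 5 match the method checked in the follow-on question? A sketch, with made-up knowledge-management methods:

```python
def scale_applied_correctly(rankings, most_effective_choice):
    """Return True if the method scored 5 (most effective) in the ranking
    question matches the method checked in the follow-on question."""
    top_ranked = max(rankings, key=rankings.get)
    return rankings[top_ranked] == 5 and top_ranked == most_effective_choice

# One hypothetical respondent's answers
rankings = {"wiki": 2, "mentoring": 5, "precedent bank": 3,
            "brown bags": 1, "checklists": 4}
scale_applied_correctly(rankings, "mentoring")   # True: scale read correctly
```

A respondent who reversed the scale, putting a 1 on their favorite method, would fail this check, and you could either discard or flip their answers.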
The first time a law firm or law department decides to collect a certain kind of data, legal managers should also decide whether to go back in time for past data. We know the number of EEOC charges handled in 2016, but what about in 2015 and 2014? Such retrospective data raises a set of concerns and challenges.
The farther back you go, the harder it is to collect accurate data and to feel comfortable that the data has been consistently collected over the period. For example, to figure out how many summer interns stayed more than six weeks becomes harder the more years you go back because full-time-equivalent information might not have been logged.
Maintaining a consistent definition back through time, where conditions possibly were changing, also limits the reach back. For example, it may be problematic to collect costs of e-discovery going back several years because the technology, staff skills, and procedural rules transformed so much during that time.
As to who should do the retrospective collection, the better practice is to put one person in charge so that definitions and judgment calls are applied consistently across the years.
For a final step, once the older data has been assembled, it is good to graph it and see whether the visual trend line makes sense to a subject-matter expert (SME). Another technique is to have an SME estimate, at the beginning of the project, what they think the numbers for the prior years will be. Yes, those are subjective estimates, but at least they give a basis for testing the numbers collected against someone’s a priori surmise. Obviously, too, the firm or department needs to evaluate whether the value of the data exceeds the cost of reconstructing it.
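That sanity check against SME estimates can itself be automated. The counts, estimates, and 25 percent tolerance below are all assumptions chosen for illustration:

```python
# Hypothetical retrospective counts vs. an SME's a priori estimates, by year
collected = {2014: 18, 2015: 22, 2016: 25}
estimated = {2014: 20, 2015: 21, 2016: 24}

# Flag years where the collected figure strays from the estimate by more than 25%
suspect_years = [year for year in collected
                 if abs(collected[year] - estimated[year]) / estimated[year] > 0.25]
# An empty list suggests the reconstruction is at least plausible;
# any flagged year deserves a second look at how its data was gathered.
```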
One final note: whatever the decisions made during collection and whatever the methods, someone needs to carefully keep track of them so that someone else can audit the process or improve it if that appears appropriate.
Lawyers in firms and legal departments need training on how to recognize and make use of data that improves their decision-making. Over a decade ago General Electric’s law department appreciated the value of its lawyers being numerate and knowledgeable about business. The department “conduct[s] a weeklong advanced business course for lawyers, aimed at 30 of the high-achieving or high-potential individuals and covering such topics as financial analysis, controllership, and GE metrics.”
Ben Heineman, then the General Counsel of GE, described the training in Corp. Counsel, Vol. 13, April 2006 at 89. Others could adopt a similar tactic and organize training sessions on data literacy. The topics could include all the categories on this blog!
Data that leaders of lawyers (managing partners, practice group heads, executive directors, GCs, direct reports to the GC, LDOs, and others) can use to make better decisions are plentiful in law firms and law departments. Challenges to effective data analysis, however, are also plentiful – and the entire area of collection, data cleanup, software tools, data visualization, and interpretation expands, deepens, and changes constantly.
This blog will explain how leaders of lawyers can take advantage of data science and become better managers.