To learn more from a set of data, you may want to calculate additional variables. Here is an example from a client satisfaction survey.
If you are a general counsel and you ask your clients to assess your department, ask them not only to evaluate your group’s performance on a set of attributes but also to rank those attributes by importance. The more important the attribute – such as timeliness, understanding of the law, responsiveness – the more your clients should expect good performance from the law department. You want to focus on what your clients value.
From the survey data, create an “index of client satisfaction” that divides the reality (performance ratings) by the expectations of clients (importance ratings) on each attribute. In short: reality divided by expectations equals client satisfaction. Then you can calculate averages, medians, and other summary statistics on the index.
With 1.0 being the absolute best – delivered performance fully meeting the expectations of all your clients – your index declines to the degree the law department’s performance fell short of what clients felt was important and expected. One caveat: low expectations (importance) fully met show up in the index as high satisfaction, so focus on the gap between the highest-ranked attributes and their performance ratings.
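The index calculation above can be sketched in a few lines of Python. The attributes and ratings here are hypothetical, invented only to illustrate the arithmetic:

```python
# A minimal sketch of the client-satisfaction index: average performance
# rating divided by average importance rating, per attribute.
# All attribute names and numbers are hypothetical examples.
performance = {
    "Timeliness": 4.2,
    "Responsiveness": 3.8,
    "Understanding of the law": 4.5,
}
importance = {
    "Timeliness": 4.8,
    "Responsiveness": 4.9,
    "Understanding of the law": 4.0,
}

# Reality divided by expectations; values near 1.0 mean performance
# met expectations, lower values flag a satisfaction gap.
index = {attr: performance[attr] / importance[attr] for attr in performance}

# Note the caveat from the text: when modest expectations are exceeded,
# the index can rise above 1.0 (here, "Understanding of the law").
for attr, score in sorted(index.items(), key=lambda kv: kv[1]):
    print(f"{attr}: {score:.2f}")
```

Sorting ascending puts the largest gaps – the attributes most in need of attention – at the top of the report.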
To make better decisions based on client-satisfaction survey results, break down client scores, such as by the frequency of their legal service use: low, medium, and high. In other words, for the attribute “Knowledge of the business” you might report that infrequent users averaged 3.8 on a scale of 1 (poor) to 5 (good); that medium users (seeking legal advice once a quarter or more often, perhaps) averaged 3.9; and high-volume users (perhaps more than three times a month) averaged 4.1. That requires an additional survey question offering three choices for frequency of calling the law department, but it lets you gauge more finely the scores of different tranches of your clients.
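Grouping scores by usage tranche is a simple aggregation. A minimal sketch, with made-up responses standing in for real survey data:

```python
from collections import defaultdict

# Hypothetical responses for "Knowledge of the business":
# (usage tranche, rating on a 1-to-5 scale)
responses = [
    ("low", 4), ("low", 4), ("low", 3),
    ("medium", 4), ("medium", 4), ("medium", 4),
    ("high", 4), ("high", 5), ("high", 4),
]

# Collect ratings per tranche, then average each group.
by_tranche = defaultdict(list)
for tranche, rating in responses:
    by_tranche[tranche].append(rating)

averages = {t: sum(r) / len(r) for t, r in by_tranche.items()}
for tranche in ("low", "medium", "high"):
    print(f"{tranche}: {averages[tranche]:.1f}")
```

The same pattern extends to any segmentation variable the survey captures, such as business unit or geography, as long as the questionnaire records it.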
Heavy users might be considered your main clients, and thus most deserving of your attention and resources, although some people would argue that infrequent users may be avoiding your department, under-using your expertise, and running unwanted legal risks. This is a complex topic, since a heavy user may be lazy, offloading work to the law department, thick as a brick, or too cautious to make decisions.
To go beyond tabulations of satisfaction ratings by frequency of use, and to introduce another way to weight each individual’s score, you could use the level of the person. A Grade 2 (SVP level, maybe) response would be weighted more than a Grade 3 (AVP level), and so on. The average scores then account for respondents’ positions in the company in a single metric, rather than separate metrics for senior, mid-level, and junior respondents.
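The grade-weighting idea amounts to a weighted average. A sketch under assumed weights – the grade-to-weight mapping and the responses below are illustrative inventions, not prescribed values:

```python
# Hypothetical weighting scheme: lower grade number = more senior,
# so a Grade 1 response counts three times a Grade 3 response.
weights = {1: 3.0, 2: 2.0, 3: 1.0}

# (respondent grade, satisfaction rating) pairs, made up for illustration.
responses = [(1, 3.5), (2, 4.0), (2, 4.5), (3, 4.8), (3, 4.7)]

# Weighted average: each rating multiplied by its respondent's weight,
# divided by the total weight.
weighted_avg = (
    sum(weights[g] * r for g, r in responses)
    / sum(weights[g] for g, _ in responses)
)
plain_avg = sum(r for _, r in responses) / len(responses)

print(f"weighted: {weighted_avg:.2f}, unweighted: {plain_avg:.2f}")
```

In this invented sample the weighted average comes out below the unweighted one, because the most senior respondent is the least satisfied – exactly the kind of signal the weighting is meant to surface.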