Survey data is more insightful if it controls for key variables (partner compensation and gender)

Senior partners in law firms pour hours into agonizing over how much to pay their partners. Those decisions take many factors into account and, cumulatively over time, shape the culture of the firm. Because partner compensation decisions are so crucial, and analysis of data so integral to them, firms seek guidance and input from multiple sources, including surveys. So when an article in the New York Times (Oct. 14, 2016, at B3) ran under the headline "Men v. Women in Law: A Pay Divide of 44%," the data analyst in me pored over it.

The headline derived from the finding that a partner's annual compensation is typically "tied to the amount of business they bring." Women brought in an average of $1.7 million of business; men averaged $2.6 million. Interestingly, while the average male partner brought in roughly 53% more business than the average female partner ($2.6 million against $1.7 million), the pay gap was smaller, at 44%, so something else went into the determinations.

I have three other points. First, it would be very useful if Major, Lindsey & Africa, the preeminent executive search firm in the legal industry that sponsored the survey, controlled for the ages of the partners. It seems plausible that the average male partner is older than the average female partner, which may account for a chunk of the origination and compensation differences: more years in practice mean more time to accumulate a stable of clients and to become known for a specialty.

Second, it would be insightful if the sponsor compared compensation by gender while holding practice areas constant. If Trusts and Estates partners, to pick one practice, differ significantly in compensation by gender (ideally also controlling for age), that finding would clarify the value of the survey results for decision-makers.
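As a sketch of what holding practice area constant might look like, here is a minimal Python example. The partner records, field names, and dollar figures are all hypothetical, invented for illustration.

```python
from collections import defaultdict

# Hypothetical partner records; fields and figures are invented for illustration.
partners = [
    {"practice": "Trusts & Estates", "gender": "F", "comp": 550_000},
    {"practice": "Trusts & Estates", "gender": "M", "comp": 700_000},
    {"practice": "Trusts & Estates", "gender": "F", "comp": 600_000},
    {"practice": "Litigation",       "gender": "M", "comp": 900_000},
    {"practice": "Litigation",       "gender": "F", "comp": 650_000},
]

def mean_comp_by_practice_and_gender(records):
    """Average compensation within each (practice, gender) cell."""
    totals = defaultdict(lambda: [0, 0])  # (practice, gender) -> [sum, count]
    for r in records:
        key = (r["practice"], r["gender"])
        totals[key][0] += r["comp"]
        totals[key][1] += 1
    return {key: s / n for key, (s, n) in totals.items()}

for (practice, gender), avg in sorted(mean_comp_by_practice_and_gender(partners).items()):
    print(f"{practice:18s} {gender}: ${avg:,.0f}")
```

With real survey data, comparing the F and M averages within each practice row, rather than firm-wide, is what isolates the within-practice gap.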

Last, if the comparison were between the average compensation of male partners whose origination figures roughly match those of female partners, would the gender gap change? Take all the male partners who originated $1.6 to $1.8 million and match their compensation against all the female partners in the same origination range; that would be a much sharper test for discrimination.
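A matched-band comparison of that kind could be sketched in Python as follows. The $1.6 to $1.8 million band comes from the text above, but the partner records and figures are invented for illustration.

```python
# Compare average comp of men vs. women whose origination falls in the same band.
# Records and figures are hypothetical, invented for illustration.
partners = [
    {"gender": "M", "origination": 1_650_000, "comp": 820_000},
    {"gender": "M", "origination": 1_750_000, "comp": 780_000},
    {"gender": "F", "origination": 1_700_000, "comp": 700_000},
    {"gender": "F", "origination": 1_780_000, "comp": 720_000},
    {"gender": "M", "origination": 2_600_000, "comp": 1_200_000},  # outside the band
]

def mean_comp_in_band(records, gender, low=1_600_000, high=1_800_000):
    """Average compensation for one gender within an origination band."""
    comps = [r["comp"] for r in records
             if r["gender"] == gender and low <= r["origination"] <= high]
    return sum(comps) / len(comps) if comps else None

male_avg = mean_comp_in_band(partners, "M")
female_avg = mean_comp_in_band(partners, "F")
print(f"Gap within the band: {male_avg / female_avg - 1:.1%}")
```

If a sizable gap persists even among partners with comparable books of business, origination alone cannot explain the pay divide.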

For survey ranking questions, a technique to ensure that the scale was applied correctly

If you are collecting data with a survey, you might ask the invitees to rank various selections on a scale: "Please rank the following five methods of knowledge management on their effectiveness, using a scale of 1 (least) to 5 (most)," followed by a list of five methods. Ranking yields more useful data than "Pick all that you believe are effective," since the latter does not differentiate between methods: each one picked appears equally effective.

But ranking carries the risk that respondents will confuse which end of the scale means most effective and which least. They might not read carefully and put the number 1 next to their most effective method (after all, being Number 1 is best, right?) and the number 5 next to their least effective method.

One safeguard some surveys adopt against respondents misreading the direction of the scale is to add a question after the ranking question. The follow-on question asks them to check the single most effective method. Software can then quickly confirm that the respondent understood and applied the scale correctly: the method rated 5 on the first question should match the method checked on the second.
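A consistency check of that kind is simple to automate. The Python sketch below, with invented method names and responses, flags respondents whose top-rated method does not match the method they checked as most effective.

```python
# Flag respondents whose highest rating disagrees with their "most effective" pick.
# Method names and responses are hypothetical, invented for illustration.
def scale_reversed(ratings, checked_most_effective):
    """Return True if the method rated highest is NOT the one checked as most
    effective, suggesting the respondent applied the 1-5 scale backwards."""
    top_rated = max(ratings, key=ratings.get)
    return top_rated != checked_most_effective

consistent = {"wiki": 5, "mentoring": 3, "checklists": 1}
reversed_scale = {"wiki": 1, "mentoring": 3, "checklists": 5}  # 1 given to the favorite

print(scale_reversed(consistent, "wiki"))      # False: scale applied correctly
print(scale_reversed(reversed_scale, "wiki"))  # True: likely reversed the scale
```

Responses flagged True can then be either discarded or, if the reversal is unambiguous, recoded by flipping the scale before analysis.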