Demographic attributes, categories, and number of participants

It seems likely that surveys with more participants would cover more demographic attributes and divide those attributes into more categories. Larger numbers of respondents would encourage more slicing and dicing. Or so I thought.

To test that hypothesis, I looked at 10 law-firm research surveys. My less-than-scientific selection method: I started with the last survey on my list alphabetically by firm name and pored over the surveys in reverse order until I found 10 with usable data. The bypassed surveys either did not disclose their number of participants, gave very sketchy demographic information, or both (a few were in a series by the same firm). Having trapped the eligible surveys, I counted how many categories each report included for the four most common demographic attributes: position of the respondent, revenue of the respondents’ companies, location of the companies, and industry (what I have called the “Big 4”).

The first of the two charts shows how the total number of categories across those four demographic attributes compares to the number of participants in the survey. Each red circle stands for one survey’s number of participants (on the bottom axis) and total Big 4 categories (on the left axis). The wavy blue curve is a non-linear trend line. Very non-linear, and not much of a pattern!
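For readers who want to reproduce this kind of plot, here is a minimal sketch in Python. The participant and category counts below are invented placeholders, not the actual survey data, and the cubic polynomial is just one convenient stand-in for whatever smoother drew the wavy blue line.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data -- NOT the actual survey counts.
participants = np.array([110, 150, 200, 260, 300, 380, 450, 520, 650, 800])
big4_categories = np.array([18, 25, 14, 30, 22, 19, 27, 16, 24, 21])

fig, ax = plt.subplots()

# One red circle per survey: participants vs. total Big 4 categories.
ax.scatter(participants, big4_categories, color="red")

# Fit a cubic polynomial as a simple non-linear trend line.
coeffs = np.polyfit(participants, big4_categories, deg=3)
xs = np.linspace(participants.min(), participants.max(), 200)
ax.plot(xs, np.polyval(coeffs, xs), color="blue")

ax.set_xlabel("Number of participants")
ax.set_ylabel("Total Big 4 categories")
plt.show()
```

A LOESS curve would serve just as well as the polynomial; with only 10 points, any smoother will wiggle.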

The second chart displays the same total of categories for the four most common demographics, plus the total number of categories in any other demographic attributes, on the left axis. It reflects the same 10 surveys, so the bottom axis remains the same, but the vertical axis spans a greater range because it includes the counts from those other attributes. The trend line here shows even less of a pattern than the squiggle of the first plot!

Sigh. At least with this set of surveys, we can’t support the hypothesis that more participants mean more demographic attributes and categories. Perhaps if we broadened this particular inquiry to cover more surveys we might eventually discern a clearer relationship, but for the moment, none is apparent.
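One way to put a number on “no apparent relationship,” though not something the charts above did, would be a rank correlation. A sketch, reusing the same invented placeholder counts:

```python
import numpy as np
from scipy.stats import spearmanr

# Same invented placeholder counts as in the plotting sketch above.
participants = np.array([110, 150, 200, 260, 300, 380, 450, 520, 650, 800])
big4_categories = np.array([18, 25, 14, 30, 22, 19, 27, 16, 24, 21])

# Spearman's rank correlation: a rho near zero with a large p-value
# would be consistent with "no apparent relationship."
rho, p_value = spearmanr(participants, big4_categories)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
```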

Provide respondent numbers and positional breakdown

Readers of a law firm’s research-survey report [1] want from the start to assess the credibility of the findings. As that credibility rests in large measure on the quality of the survey’s respondents, readers would like to know how many people responded and what their positions were (aka levels, roles, or titles).

A review of the survey reports I have collected suggests that we can capture the methodological disclosure of respondent positions with five classifications. Here they are, in order of increasing praiseworthiness.

  1. No information. Woefully, two of the reports lack data on both the number of respondents and their positions. How much stock can anyone put in findings from a black box?
  2. Total respondents. One report, by a very eminent law firm, disclosed only how many respondents its survey had collected, but nothing about their positions.
  3. Total respondents and a broad position. As far as I could glean, five firms told how many people had participated (the number of respondents) but gave only the most general description of their positions. How useful is it to know that they were “senior management”?
  4. Total respondents and some position breakdowns. Three firms went a step further and gave some breakdown of the respondents’ positions. One of those firms gave the percentage of total respondents in one broad category, but oddly provided no other quantitative data regarding positions.
  5. Total respondents and a full percentage breakdown. A half dozen firms did well: they laid out how many respondents their survey had, broke them down into three to five positions, and provided the percentage of respondents in each position. These six firms win the coveted Golden Data Analyst Award: Berwin Leighton Paisner, Davies Ward Phillips, Littler Mendelson, Norton Rose Fulbright, White & Case, and Winston & Strawn.

This level of disclosure should be the minimum standard for all research surveys conducted by law firms. Tell your readers who provided data to you, and give a clear, quantitative breakdown of the percentage at each position [2].

Notes:

  1. My set includes reports by Allen & Overy, Berwin Leighton Paisner, Carlton Fields, Davies Ward Phillips, Eversheds, Goulston & Storrs, Haynes and Boone, Hogan Lovells, Littler Mendelson, Norton Rose Fulbright, Proskauer Rose, Ropes & Gray, Seyfarth Shaw, White & Case, and Winston & Strawn.
  2. It is for another time to consider weighting the responses of people by their level of seniority.