If you are collecting data with a survey, you might ask the invitees to rank various selections on a scale. “Please rank the following five methods of knowledge management on their effectiveness using a scale of 1 (least) to 5 (most)” followed by a list of five methods. Ranking yields more useful data than “Pick all that you believe are effective” since the latter does not differentiate between methods: each one picked appears equally effective.
But ranking creates the risk that respondents will confuse which end of the scale is most effective and which is least. They might not read carefully and therefore put the number 1 for their most effective method – after all, being Number 1 is best, right? – and the number 5 for their least effective method.
One safeguard some surveys adopt against respondents misreading the direction of the scale is to add a follow-on question after the ranking question, asking them to check the single most effective method. Software can then quickly confirm that the respondent understood and applied the scale correctly by verifying that the method marked 5 on the first question matches the method checked on the second.
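This consistency check is straightforward to automate. The sketch below is a minimal, hypothetical example: the method names and the `scale_direction_ok` helper are invented for illustration, assuming responses arrive as a mapping from method to its 1–5 ranking plus a separate "most effective" answer.

```python
def scale_direction_ok(rankings, most_effective):
    """Return True if the method ranked 5 (most effective) on the
    ranking question matches the method the respondent checked
    as most effective on the follow-on question."""
    top = max(rankings, key=rankings.get)  # method with the highest number
    return rankings[top] == 5 and top == most_effective

# Hypothetical response: five knowledge-management methods ranked 1 (least) to 5 (most)
response = {
    "mentoring": 5,
    "wikis": 4,
    "after-action reviews": 3,
    "communities of practice": 2,
    "document repositories": 1,
}

print(scale_direction_ok(response, "mentoring"))             # consistent
print(scale_direction_ok(response, "document repositories")) # scale likely reversed
```

A respondent who reversed the scale would check the method they had numbered 1, and the mismatch flags that response for review rather than silently corrupting the averages.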