A client recently wanted to include a question on a B2B survey asking retailers what percentage of their sales of a certain product were accounted for by different brands. I took their request very literally and turned it into a question that directly asked what percentage of total sales each brand accounted for.
There was no problem with respondents understanding the question, but most had a problem answering it, and it interrupted the flow of the survey. These retailers never thought about the percentage of sales accounted for by each brand. They could tell you the sales of each brand individually, but they then needed to calculate percentages from those figures, something that is very difficult to do during the course of an interview.
Fortunately, the survey was conducted by telephone, so interviewers flagged the problem early on and the question was changed to ask about volume sales for each brand, from which we, rather than the respondent, then calculated the percentages.
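Converting reported volumes into percentage shares at the analysis stage is trivial; a minimal sketch with made-up brand names and volumes (all values hypothetical):

```python
# Hypothetical volume sales reported by one retailer (units sold per brand).
volumes = {"Brand A": 120, "Brand B": 45, "Brand C": 35}

total = sum(volumes.values())

# Express each brand's volume as a percentage share of total sales,
# rounded to one decimal place.
shares = {brand: round(100 * v / total, 1) for brand, v in volumes.items()}

print(shares)  # {'Brand A': 60.0, 'Brand B': 22.5, 'Brand C': 17.5}
```

The point is that this arithmetic costs the analyst a few lines of code, whereas asking respondents to do it mid-interview costs data quality.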
Would the question have worked in an online survey? Respondents would feel less pressured without an interviewer, but the question is still onerous. The quality of response would probably still be poor, with some respondents choosing the easier option of entering numbers until they managed to come up with a set that added to 100, rather than taking the time to calculate the percentages exactly. But the problem may never come to light with an online questionnaire: no interviewers are involved who can alert you to potential problems. The data comes out the other end and is treated as definitive.
Regardless of method, this incident shows the value of piloting surveys. Properly testing questionnaires before fieldwork begins would pick up issues like this.
In hindsight, I should have questioned the client to understand why they wanted to know the exact percentage of sales that each brand accounted for. I should have asked what decisions they were planning to make based on this data. It is unlikely that they needed actual percentages. A simpler question, for example "In order of sales, which 3 of these brands do you sell most of?", may well have met their need to understand which brands sold best with the target audience.
There is a general perception that data given in percentages is somehow more accurate. However, questions that are easier for respondents to answer will produce higher-quality data than questions asking respondents to calculate percentages.
As well as learning a lesson about avoiding percentage questions, this episode also acted as a reminder to always gain as full an understanding as possible of how clients are planning to use the resulting data. It is our job as survey designers to marry the client’s objectives with the aim of creating the best possible respondent experience. In this way we maximise data quality and give clients information on which they can confidently base their decisions.