As regular readers of my blog will know, I have recently immersed myself in two books that discuss how randomness affects our everyday lives (Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets by Nassim Nicholas Taleb and The Drunkard’s Walk: How Randomness Rules Our Lives by Leonard Mlodinow).
These books have prompted me to consider how randomness affects the world of market research. As researchers, our raison d’être is to make sense of data, but if there is an element of randomness, as there is bound to be in any dataset, how does it affect what we do?
When reporting data we use significance testing to judge whether the disparity between two figures reflects a real difference or is merely down to chance. However, we commonly test at the 95% confidence level, which means that where no real difference exists, chance alone will still throw up an apparently significant result about 1 time in 20. Run enough comparisons across a large set of crosstabs and a few of these mirages are almost guaranteed to appear, so 1 time in 20 randomness in the data misleads us.
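The 1-in-20 point is easy to demonstrate with a simulation. The sketch below (illustrative only; the survey set-up, sample sizes and the simple two-proportion z-test are my own assumptions, not anything from the books) repeatedly compares two groups that have exactly the same true agreement rate, then counts how often a 95%-level test flags a “significant” difference anyway:

```python
import random
import math

random.seed(42)  # fixed seed so the illustration is repeatable

def two_proportion_z(successes_a, successes_b, n):
    """Two-sided z statistic for the difference between two sample proportions."""
    p1, p2 = successes_a / n, successes_b / n
    pooled = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    return abs(p1 - p2) / se if se > 0 else 0.0

# Simulate 2,000 hypothetical surveys in which BOTH groups have the same
# true agreement rate (50%), so any "significant" gap is pure chance.
TRIALS, N, TRUE_RATE = 2000, 1000, 0.5
false_positives = 0
for _ in range(TRIALS):
    a = sum(random.random() < TRUE_RATE for _ in range(N))
    b = sum(random.random() < TRUE_RATE for _ in range(N))
    if two_proportion_z(a, b, N) > 1.96:  # 1.96 = two-sided 95% threshold
        false_positives += 1

rate = false_positives / TRIALS
print(f"Chance 'significant' differences: {rate:.1%}")  # hovers around 5%
```

Despite there being nothing to find, roughly 5% of the comparisons come back “significant”, which is exactly the 1-in-20 mirage rate the 95% level implies.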
In my agency days I can remember a few occasions when, while preparing a report or presentation, I came across figures that seemed anomalous compared with the rest of the results. We’d verify the data again to make sure it was sound, and check the question wording and its order within the questionnaire for anything that might explain why the figures departed from the norm. When these checks unearthed nothing, we’d discuss possible reasons for the inconsistency and invariably come up with a theory to justify the figures to the client. Although, between ourselves, we sometimes speculated that the strange figure could simply be down to randomness, that isn’t an explanation that tends to go down well with clients. We are paid to explain what data means, not to make excuses, but our reluctance to acknowledge the possibility of randomness in our data can lead to flaky interpretation.
Another danger of randomness is the human tendency to try to make sense of it. People like to feel in control and, since randomness is by definition beyond our control, we construct theories to explain it away. This can be especially dangerous for market researchers. When interpreting a set of data there is a risk that researchers, either consciously or subconsciously, look for patterns that provide evidence for their hypotheses and ignore contradictory data (what psychologists call the “confirmation bias”). This applies to both quantitative and qualitative data. Indeed, it could be argued that qualitative research is more vulnerable to this sort of confirmation bias because of the smaller number of respondents typically involved in a study. Moderators or interviewers may also, during the fieldwork phase, subconsciously seek out information that reinforces their own or their client’s hypotheses about the subject they are researching.
Market researchers therefore need to beware of randomness. It is vitally important that we keep an open mind and recognise that chance events produce patterns that invite misinterpretation. We should question our perceptions and theories, spending as much time looking for evidence that they are wrong as we spend searching for reasons they are right.