Generally, humans are bad at making estimates without some form of context. If someone asks us how loud something is, we can't articulate our answer any better than saying something vague like “quite loud”. But ask us to compare two noises and we can readily say which one is louder, and even estimate by how much (“it sounds twice as loud as the first one”).
Our brains tend to focus on comparisons rather than absolute values. This has implications for survey designers. Sometimes we need to ask respondents to provide estimates. But as humans aren't good at reporting absolute measures, asking someone, for example, how many times they used their mobile phone last Friday isn't likely to generate a particularly accurate response.
Moreover, our estimates can be influenced by seemingly unconnected information. A famous example is an experiment by Dan Ariely, George Loewenstein and Drazen Prelec (the anchoring effect itself was first demonstrated by Kahneman and Tversky, two of the most prominent figures in behavioural economics). They asked people for the last two digits of their US Social Security number, then asked whether they would pay that number in dollars for items whose value they did not know, such as bottles of wine and computer equipment. Finally, participants bid for the items. Those with higher Social Security numbers bid higher amounts than those with lower numbers: the numbers had become their "anchors".
When designing questions asking for estimates, market researchers therefore need to be careful about the environment in which questions are presented. Do earlier questions potentially provide anchors? Is there anything unconnected in the survey experience that could possibly influence respondents’ estimates?
However, researchers can take advantage of the context that anchors provide. For example, when asking how many times someone used their mobile phone on Friday, if we can tell them how many times they used their phone on Thursday, we provide an anchor and their estimates will tend to be more accurate.
The anchor, though, must be personal to the respondent; it can't relate to the behaviour of others. If we tell Jane that Geoff uses his mobile phone 8 times a day on average, she will be sub-consciously influenced to provide a figure in the same ballpark: if she is a light user she might over-estimate her usage, while if she is a heavy user she may under-estimate it.
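The pull described above can be sketched as a toy model. This is purely illustrative, not a published model of anchoring: it assumes the reported figure is a simple weighted average of the respondent's true value and the anchor they were shown, with a hypothetical `anchor_weight` controlling the anchor's influence.

```python
# Toy model of anchoring bias in survey estimates -- purely illustrative.
# Assumes the reported figure blends the respondent's true value with
# the anchor; anchor_weight is a hypothetical parameter, not measured data.

def reported_estimate(true_value: float, anchor: float,
                      anchor_weight: float = 0.3) -> float:
    """Pull the respondent's answer toward the anchor.

    anchor_weight = 0 means the anchor has no influence;
    anchor_weight = 1 means the respondent simply repeats the anchor.
    """
    return (1 - anchor_weight) * true_value + anchor_weight * anchor

# Geoff's average, shown to Jane as an (inappropriate) anchor: 8 uses a day.
anchor = 8

# A light user (2 uses/day) is pulled upward by the anchor...
light = reported_estimate(true_value=2, anchor=anchor)   # 3.8 -> over-estimate

# ...while a heavy user (20 uses/day) is pulled downward.
heavy = reported_estimate(true_value=20, anchor=anchor)  # 16.4 -> under-estimate

print(light, heavy)
```

With a personal anchor (the respondent's own Thursday figure, say), the anchor sits close to the true value, so the same pull distorts the answer far less; that is the intuition behind using personal rather than third-party anchors.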
To give a real-life example, we designed and now manage a regular survey for a retailer asking their customers how much they spend at competitors. As we've seen, simply asking respondents how much they've spent at competing retailers within, say, the last month isn't likely to produce accurate estimates. To combat this we provide respondents with their own personal anchor. The retailer knows how much each customer has spent with them in the last month; we present the customer with this figure during the interview, and it gives them the context to better estimate their equivalent spend at competitors. We have now completed 6 waves of the survey, monitoring the retailer's “share of wallet” and how it is changing over time.
Used correctly, anchors can therefore be a powerful aid in obtaining accurate responses. But beware of unwittingly including anchors in questionnaires that covertly influence responses and contaminate your survey data.