Survey Bias: Knowing The Different Types You Need To Avoid
Survey bias can come from many directions: from the bias of those creating your survey to the preconceived opinions of those taking it. While you may believe you know all there is to know about survey bias, there are probably more types of bias than you realise.
Consequently, survey builders need to be vigilant about their own biases, as well as the biases of the people they’re surveying.
Fortunately, there are ways to minimise the possibility and impact of these types of bias, which we will go on to look at next.
Survey bias types
When it comes to survey bias, there are many different types that can come into play, either during a survey's creation or its completion. These include:
Hidden bias

Probably the trickiest to spot, hidden bias occurs when the words used in a survey's questions or answer options unintentionally influence the respondent's choice.
Adjectives are one of the biggest culprits here, precisely because they're descriptive. Your choice and ordering of adjectives can dramatically impact your survey results.
For example, in a list of adjectives, more extreme adjectives like “amazing” and “awful” can influence the choice of other words around them that might be more neutral. Similarly, when selecting a colour from a list, more people could be influenced to select Tiger’s Eye Orange, rather than dark orange, simply because it sounds more exciting and exotic.
Confirmation bias

This refers to bias caused when a survey creator subconsciously phrases their questions in a one-sided way to support a hypothesis or confirm a belief.
So, rather than asking:

“Do you prefer Product A or Product B?”

a survey creator influenced by confirmation bias might word the question as follows:

“Is Product A better than Product B?”
The trouble with the latter phrasing is that it predisposes the respondent to choose Product A over Product B, helping to confirm the question's implicit claim: that Product A is better than Product B.
Belief bias

This is typically the result of a survey weighed down with confirmation bias that doesn't deliver the expected results.
In this instance, stakeholders often believe the research to be wrong because it doesn't support a sunk cost or investment. So, rather than use the data to change direction, they continue down the wrong path, which can cause them to miss significant market shifts, such as the move from DVD rentals to film streaming services.
Sampling bias

When we talk about sampling bias, we're essentially referring to a survey that fails to reach a representative sample of the target population.
This sort of bias can easily arise if you limit your range of survey distribution options, such as distributing your survey solely via a QR code, which can only be accessed by those with a smartphone. The problem is that there may be many other people in your market who would gladly have taken your survey had you provided alternative distribution options, and you now risk skewing your data as a result.
For more on sampling bias, you might also like to read our ‘Ways to Avoid Sampling Bias in Surveys’ blog piece.
Cultural bias

With this sort of bias there is an assumption that most people think the same way, based on the standards of one's own culture.
Cultural bias can arise when people from different backgrounds miss a cultural reference that the survey creator assumed they would know. For example, using the word Coke, soda, or pop to refer to soft drinks.
Even more importantly, those carrying out the research might miss an obvious category that lies beyond their initial scope. Adding an ‘Other’ option with a free-text box can help to overcome this.
Question order bias

This refers to the bias that you can unwittingly introduce into your survey through the order of your questions and answers. In particular, an earlier question can influence how someone answers a question that appears later in your survey.
For example, if you were to ask respondents a question about football teams, before asking them to name their favourite sport, they may be more influenced to say football in their response.
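One common way to counter question order bias is to randomise the order in which questions (or answer options) are shown, so no single earlier question systematically primes a later one. As a minimal sketch, assuming a simple list of hypothetical questions, the idea looks like this in Python:

```python
import random

# Hypothetical survey questions -- the wording here is illustrative only.
questions = [
    "Which football team do you support?",
    "What is your favourite sport?",
    "How often do you watch live sport?",
]

def randomised_order(questions, seed=None):
    """Return a shuffled copy of the questions, leaving the master list intact."""
    rng = random.Random(seed)       # per-respondent seed makes the order reproducible
    shuffled = questions[:]         # copy so the original ordering is preserved
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees the questions in a different order.
print(randomised_order(questions))
```

Passing a per-respondent seed keeps each ordering reproducible, which helps if you later need to reconstruct exactly what a given respondent saw.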
Recency bias

Recency bias, and its close relative nostalgia bias, can disproportionately affect how respondents answer particular questions. People will either answer with recent people or events that come easily to mind, or with references from a period of time they look back on fondly.
For example, consider people being asked to rate their top 10 greatest songs of all time. No matter how old they might be, people are more likely to recall and reference recent songs over older ones. In addition, some people may be biased towards referencing more songs from a period that holds fond memories for them than from other periods.
Extreme or neutral response bias
Everyone has their own particular way of responding to questions. Unfortunately, some respondents have a tendency to provide only extreme or only neutral responses.
There can be a variety of reasons for this. Extreme response bias can result from cultural and educational factors, while neutral response bias is more likely to happen when respondents don't fully understand the question being asked of them. Either way, both can skew your results if too many people answer this way, so you'll want to do all you can to stop it from happening.
Fortunately, there are a few things you can do to help with this.
Think carefully about the language you use and make sure it's not polarising in tone, which could otherwise elicit an overly strong or emotional response. Make your questions as clear and simple as possible, so they're more likely to be understood by everyone. And finally, you may want to avoid too many rating scale questions, especially those with a five-point scale, whose midpoint can invite more neutral responses.
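If you collect the raw responses, you can also screen for these patterns after the fact. The sketch below is illustrative only: the 80% threshold, the five-point scale, and the sample ratings are assumptions, not a standard method:

```python
# Illustrative check for extreme or neutral response patterns on a
# five-point rating scale (1-5). Threshold and data are assumptions.

def response_pattern(answers, extreme=(1, 5), neutral=3, threshold=0.8):
    """Flag a respondent whose answers are dominated by extreme or neutral picks."""
    n = len(answers)
    if n == 0:
        return "no data"
    extreme_share = sum(a in extreme for a in answers) / n
    neutral_share = sum(a == neutral for a in answers) / n
    if extreme_share >= threshold:
        return "mostly extreme"
    if neutral_share >= threshold:
        return "mostly neutral"
    return "mixed"

# Hypothetical respondents' ratings across ten questions.
print(response_pattern([1, 5, 5, 1, 5, 5, 1, 1, 5, 5]))  # mostly extreme
print(response_pattern([3, 3, 3, 3, 3, 3, 3, 2, 3, 3]))  # mostly neutral
print(response_pattern([2, 4, 3, 5, 2, 3, 4, 1, 3, 4]))  # mixed
```

Respondents flagged this way aren't necessarily giving bad data, but a cluster of them can be a prompt to revisit your question wording or scale design.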
In-person survey taker bias
Sometimes surveys are conducted in person by a trained interviewer to ensure the questions are understood correctly by all the respondents.
While this may enhance respondents' understanding of the survey topic and areas of discussion, it can inadvertently introduce its own biases into the research.
One of the most significant biases in this scenario is response bias. This is where the survey taker changes the answer they provide because they think it will be more acceptable to the interviewer, even if they don’t truly believe it themselves.
Similarly, through desirability bias, respondents are motivated to answer in a way that associates them with behaviours and characteristics they believe to be more desirable. When asked about sensitive subjects such as alcohol consumption or sexual behaviour, for example, this could lead them to give answers that are not entirely honest in order to look better in the eyes of the researcher.
Both these forms of bias could be reduced by simply removing the interviewer and getting people to answer the questions in an online survey instead. And response bias could be further reduced by allowing people to complete their surveys anonymously.
However, depending on the complexity of the survey’s topic, some organisations may still feel that the benefits of having an interviewer outweigh the costs. It really is something that needs to be assessed on a case-by-case basis.
Careful consideration during survey creation can reduce bias
We hope you enjoyed reading this blog and now feel better informed about the range of different biases that exist, which could potentially harm your survey and skew your results.
Ultimately, if you're to reduce the threat of survey bias, you need to be vigilant about your own potential biases and those of the people likely to complete your survey. And if you can craft your survey and its questions in such a way as to limit this, you'll be more likely to receive the reliable and valid data you need to take effective action.