Getting started with online surveys

In this class we’ll be looking at the basics of surveys and highlighting some of the common pitfalls we see so you can avoid them.

Class 1: Survey Basics and Common Pitfalls

Getting started with online surveys can be a daunting process if you’re starting from scratch. While it can be tempting to just get stuck in and learn on the go, it’s a good idea to run over some basic principles of good practice and common pitfalls before you commit to collecting data.


Before we get too far, we’ll quickly define a few words. There’s sometimes a little confusion about particular terms, so when these words come up, this is what they mean.

  • Researcher: someone who is carrying out (or wants to carry out) surveys
  • Survey: the collection of questions that is used to collect responses
  • Respondent: someone who is answering the survey questions
  • Response: a set of answers to survey questions associated with a single respondent

The Golden Rule

The most important thing to bear in mind when you set out to create any survey is this:

Set a clear goal for the survey

This doesn’t mean that you know what the results will be, but rather that you have a clearly defined scope for the data you’re collecting – you’re attempting to answer a specific overall question that is important for your organisation.

So, to start with, focus on one overall issue that you want to investigate. Without a clear goal, survey projects tend to meander, overloaded by the attempt to deal with every possible question in a single survey. That nearly always leads to surveys that are too long and unfocused, which makes it harder to collect good data.

The next step is to come up with your questions. A good rule of thumb is that people can answer around three questions per minute, if they're properly reading each question and considering their answer. Naturally, this varies a lot with question complexity. Another thing to consider is that most online respondents will be answering in their free time, which makes brevity important. Researchers have differing opinions about the ideal length for a survey, but in general 10-15 questions is a good sweet spot that balances detail with the time needed to complete it. Beyond this, you're asking for a lot of your respondents' time, and they may decide it's not worth it. Many researchers use incentives to counteract this and encourage people to complete longer surveys, but incentives can introduce bias – which seems like a good point to talk about that concept.
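The rule of thumb above makes it easy to sanity-check a draft survey's length. Here's a minimal sketch in Python; the three-questions-per-minute figure is just the rough heuristic from this article, not a measured constant:

```python
# Rough heuristic from the rule of thumb above, not a measured constant.
QUESTIONS_PER_MINUTE = 3  # varies a lot with question complexity

def estimated_minutes(num_questions: int) -> float:
    """Back-of-the-envelope estimate of survey completion time."""
    return num_questions / QUESTIONS_PER_MINUTE

# The suggested 10-15 question sweet spot works out to roughly 3-5 minutes:
for n in (10, 15, 30):
    print(f"{n} questions: about {estimated_minutes(n):.1f} minutes")
```

A 30-question survey, by this estimate, asks for around ten minutes of a respondent's free time – which is where drop-off tends to become a real concern.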

You can never totally eliminate bias

No survey will ever be totally free of biases skewing the results. As researchers, we need to be aware of this. We can always seek to minimise bias, but because we can't eliminate it entirely, we need to keep in mind which biases may affect our results:

List potential sources of bias for the survey

One of the most common biases (and one of the hardest to remove in online surveys) is selection bias: you will only collect responses from the people who are interested enough in the subject to be convinced to take part. It's often not safe to assume that the preferences of this group accurately reflect those of the population at large, especially on emotive or controversial subjects.

Question wording can also introduce bias. Often you'll see survey questions that invite respondents to agree or disagree with a statement or proposition. This can skew results (sometimes called acquiescence bias): the social nature of humans makes us more inclined to say we "agree" with something than to "disagree" with it. Disagreement feels negative and we seek to avoid it, so questions should be worded from as neutral a point of view as possible. Even a simple "Yes / No" option can skew slightly towards "Yes". Just be aware of this, especially when your results are very close to 50-50.

In general, the wider you can distribute your survey, the less bias you'll see – but the point is to always think about which biases may be in play whenever you're writing questions or looking at data.

Beware of false precision

Asking people to rate things on a 1-10 scale feels intuitive, but can we really rely on respondents to rate something to that degree of precision? One person's 7 might be another person's 6 or 8.

Keep ratings, ranges and scales specific and unambiguous

Simplifying answer scales helps with this. An even number of options (four is often recommended – very good, slightly good, slightly bad, very bad) means that respondents have to commit one way or the other rather than sit on the fence.

Moving forward

With these principles established, the next article will put them into practice: we'll set up a simple survey, defining its goal and using that goal to decide what the questions should be.