Online surveys are commonly used by marketers, product managers, strategists and others to gather feedback. You’ve probably participated in some of these surveys and I’m sure you’ve noticed that they’re often executed poorly.
Surveys are increasingly becoming a more accepted tool for UX practitioners. Creating a great survey is like designing a great user experience—both are a waste of time and money if the audience, or user, is not at the centre of the process. Designing for your user leads to the gathering of more useful and reliable information.
Let’s take a look at some of the basics of creating and running a useful online survey.
What is a survey?
A survey is a simple tool for gathering information. Surveys typically consist of a set of questions used to assess a participant’s preferences, attitudes, characteristics and opinions on a given topic. As a research method, surveys allow us to count or quantify concepts—a sample or subset of the broader audience is used, the learnings from which can be applied to a broader population.
For example, we might have 100,000 unique users of a website in a given year. If we collect information from 2,000 of those users, we can apply what we learn, with confidence, to the full 100,000.
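As a rough illustration of why a sample of that size inspires confidence, the margin of error for a proportion can be estimated with the standard formula (this sketch assumes simple random sampling and a 95% confidence level; the function name is my own):

```python
import math

def margin_of_error(sample_size, population_size, p=0.5, z=1.96):
    """Approximate margin of error for a proportion at 95% confidence,
    with a finite population correction applied."""
    standard_error = math.sqrt(p * (1 - p) / sample_size)
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * standard_error * fpc

# 2,000 respondents drawn from a population of 100,000 users
print(round(margin_of_error(2000, 100000) * 100, 1))  # roughly 2.2 percentage points
```

In other words, a result of "60% satisfied" from those 2,000 users would plausibly sit within a couple of percentage points of the true figure for all 100,000.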
When it comes to the digital space, we can use surveys for a variety of purposes including:
- Gathering feedback on a live product or during a pilot;
- Exploring the reasons people visit a website and assessing their experience of that visit (such as a True Intent survey);
- Quantifying results from qualitative research activities such as contextual enquiry or interviews; and
- Evaluating usability, such as the System Usability Scale.
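The System Usability Scale mentioned above has a well-defined scoring method: ten items answered on a 1–5 scale, where odd-numbered (positively worded) items score the response minus 1, even-numbered (negatively worded) items score 5 minus the response, and the total is multiplied by 2.5 to give a score out of 100. A minimal sketch:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```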
Surveys can be an effective method of identifying:
- Who your users are;
- What your users want;
- What they purchase;
- Where they shop;
- What they own; and
- What they think of your brand or product.
Surveys can benefit and inform the design process by:
- Providing information to better understand end users to design better products;
- Mitigating risk of designing the wrong, or a poor, solution for users;
- Providing stakeholders with confidence that a design is, or will be, effective. Gathering larger sample sizes, in comparison to qualitative research, often speaks the language of business stakeholders. Whether we like it or not, there is often a perception when it comes to research that more is more.
As with any UX research activity, an effective survey must start with a clear understanding of what the project needs to learn. To create an effective survey, both the business context and the project objectives must be clearly understood. The business context of the interface or product includes why it exists and how it supports the business's objectives. The project objectives cover why the survey is being conducted: is it being run to understand the end user, to inform the direction of a design, or to assess a live website? The project objectives may determine the type of survey, the collection method and the robustness of the evidence required, which in turn can influence the ideal approach.
Furthermore, a set of research questions should be defined around the information that needs to be collected. These research questions can then be used as a framework for ensuring that the required information is collected effectively. Defining the information can also be a mechanism for avoiding any irrelevant questions that could creep into the activity.
The information required also provides a framework for the scope of the research. As a starting point to any project, the information to be collected needs to be agreed by all parties. Without this agreement the research becomes an exercise in guesswork, is likely to miss the mark for stakeholders, and will be frustrating for all.
Creating an effective survey
Effective questions and good survey design are important for generating quality data and maximising completion rates. Poor questions produce poor feedback that cannot be relied upon, and dropout is the enemy of a robust sample. Getting someone to agree to participate in an online survey is a win in itself; it is unforgivable to lose them because they are bored or frustrated.
The following is a guide to creating an effective and engaging survey:
- Logical flow of questions. To make the questions easier and faster to answer, group like questions together and order them logically. Imagine answering questions about your attitude to boat refugees, then being asked about your experience of your favourite fast-food chain. The transition can be jarring. Topic changes are sometimes necessary, but minimising unnecessary shifts, particularly at inappropriate times, will result in a more effective survey.
- Questions need to be easy to understand. Many surveys will be completed without anyone on hand to clear up confusion, so it is important that questions can be readily understood without additional information. Of major concern is that ambiguous or difficult-to-understand questions can be answered incorrectly, which brings the data into question.
- Provide questions appropriate for the audience. People can and will answer just about any question put in front of them. This doesn’t mean that they are qualified to answer, or able to provide insightful feedback. A good way to check is to ask yourself, “Will my audience know the answer to this question?”
- Avoid double negatives. Double negatives, particularly in combination with the available responses, can make questions difficult to answer. Imagine being asked whether you agree that your manager is non-responsive: working out that answering “No” means “my manager is not non-responsive” can be a challenging mental exercise for a participant.
- Avoid questions that contain two concepts. For example: “How would you rate your manager’s leadership and communication skills?”
You may think your manager’s leadership is great but their communication skills could be improved. In that case, how do you decide on an answer? This also gets challenging when analysing the findings. Which skill does the manager need to work on if the rating is poor? All questions should relate to one concept. If required, add an extra question to explore the other concept.
- Use balanced rating scales. Use an equal number of positive and negative options—this relates to probability. With four options the natural spread would be 25% per answer, so if there are more positive options than negative we increase the chances of getting positive feedback. An unbalanced scale might read: Very satisfied / Satisfied / Somewhat satisfied / Dissatisfied (three positive options, one negative). A balanced version would be: Very satisfied / Satisfied / Dissatisfied / Very dissatisfied. With a balanced rating scale there is a greater chance of the results reflecting a participant’s true beliefs.
- Avoid answers that overlap. For example, age ranges of 18–24 and 24–30 overlap: someone who is 24 years of age has two valid options. The same advice applies to overlapping concepts, not just numeric ranges.
- Use open-ended questions. These allow us to better understand what is happening. It works well to use multiple-choice questions to gather proportions and priorities, then ask the more probing “Why?”. A common use is to follow up a satisfaction question: for example, follow “Overall, how satisfied were you with your experience of the website?” with “Why?” as an open text field. This can provide great insight into what was driving the feedback.
- Use writing-for-the-web techniques. Elements such as bolding key words, avoiding unnecessary copy and using a conversational tone go a long way to making your survey more engaging and easier for participants to read and understand. For example, a question about gender can often be simplified considerably by removing unnecessary copy.
(Note: Read Jessica Enders’ article for an in-depth exploration of how, when and why you should—or shouldn’t—ask for someone’s sex or gender in a survey).
- Keep it short. There is often a temptation when writing surveys to add more areas for exploration. The problem is that they can become painfully long. A better approach is to keep the survey succinct and run another in a month or two.
- Avoid asking about behaviour. While there is nothing stopping you from asking people to report their behaviour, there are better techniques for collecting this type of information, and they are more likely to translate into better design decisions. For example, when assessing the effectiveness of myki (Melbourne’s poor public transport ticketing system), observing people buying tickets and travelling throughout the network would yield more accurate and useful feedback than asking people how they had used the system over the last week.
- Include ‘don’t know’ options. There will be cases where participants legitimately don’t have an answer. It is more helpful to know that your audience doesn’t hold an opinion on a topic than to force them into an answer, which can distort the picture by overestimating the positive or negative. The same can be said for neutral options in rating scales.
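One of the points above, avoiding overlapping answer ranges, can even be checked mechanically before launch. A quick sketch of such a check (the age brackets and function name are illustrative):

```python
def check_ranges(ranges):
    """Return a list of problems found in (low, high) inclusive ranges:
    overlaps between adjacent ranges, and gaps that leave values uncovered."""
    problems = []
    ordered = sorted(ranges)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        if lo2 <= hi1:
            problems.append(f"{lo1}-{hi1} overlaps {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            problems.append(f"gap between {hi1} and {lo2}")
    return problems

print(check_ranges([(18, 24), (24, 30), (31, 40)]))  # ['18-24 overlaps 24-30']
print(check_ranges([(18, 24), (25, 30), (31, 40)]))  # []
```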
Once the survey has been written
It is a good idea to test the survey before launching it to your full audience. Initially, pilot it with a colleague or someone else from your organisation. Don’t give them too much background on the survey—only provide the information any potential participant would have. Do give them clear direction on the type of feedback you are looking for, something along the lines of:
- Are there any questions that didn’t make sense to you?
- Are there any questions you couldn’t answer or were missing the answer you wanted to provide?
Once you are happy that the questions are clear and can be answered, launch the survey to a subset of your audience. When using a panel, go out to a subset of the total sample; when using an intercept survey (a pop-up on a live website), limit the proportion of visitors who will see it. Once you have checked that the questions are being completed as expected, go out to the whole sample.
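Limiting an intercept survey to a proportion of visitors can be as simple as a random gate evaluated once per visitor. A minimal sketch, assuming a 10% sampling rate (the function name is invented for the example):

```python
import random

def should_show_survey(rate=0.1, rng=random):
    """Decide, per visitor, whether to show the intercept survey."""
    return rng.random() < rate

# Seeded generator so the illustration is repeatable
rng = random.Random(42)
shown = sum(should_show_survey(0.1, rng) for _ in range(100000))
print(shown / 100000)  # close to 0.1
```

Most survey tools offer this throttling built in, so you would normally configure it rather than code it yourself.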
There are many tools available for scripting and running surveys, ranging from lightweight, inexpensive tools right through to specialist market research tools. The more comprehensive tools offer greater functionality for logic and routing in the survey, as well as more powerful reporting.
For most UX applications, simpler survey tools such as those discussed in the next section should offer adequate functionality. My advice would be to keep surveys simple. A lot of time can be spent creating clever logic and routing within a survey, but the more complex the survey, the greater the amount of testing required (a seemingly exponential increase). Often the benefit gained from the additional complexity does not justify the time taken to set it up.
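To see why routing multiplies the testing burden, consider skip logic expressed as a lookup table: every added branch is another path that has to be tested end to end. A minimal sketch (all question IDs and answers here are invented for the example):

```python
# Map (question_id, answer) to the next question; None means the survey ends.
# A (question_id, None) entry is an answer-agnostic fallback.
routing = {
    ("q1_visited_before", "yes"): "q2_satisfaction",
    ("q1_visited_before", "no"): "q4_how_found_us",
    ("q2_satisfaction", "dissatisfied"): "q3_why",
    ("q2_satisfaction", "satisfied"): None,
    ("q3_why", None): None,
    ("q4_how_found_us", None): None,
}

def next_question(question_id, answer):
    """Look up the next question, falling back to the answer-agnostic rule."""
    return routing.get((question_id, answer), routing.get((question_id, None)))

print(next_question("q1_visited_before", "no"))  # q4_how_found_us
```

Even this tiny survey has three distinct paths to verify; each extra branching question multiplies the combinations.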
Below is a list of some of the survey tools on the market. This is not intended to be an exhaustive list, rather a place to start if you are interested in writing and running a survey.
All tools also offer an annual option (not shown).

| Monthly pricing tiers | Distribution options | Notes |
|---|---|---|
| Free, $23, $75 | Weblink, email, Facebook, or embed on your site or blog; enhanced security (SSL)* | Offers good value. |
| Free, $36, $123 & $218 | Weblink, email*, Facebook*, or embed on your site or blog* | Of these tools it probably offers the most advanced functionality, and this is reflected in the price of its higher-end versions. |
| Free, $14.08, $29.08, $74.08 & $183.25 | Weblink, Facebook, or embed on your site or blog; enhanced security (SSL)* | Isn’t a dedicated survey tool; it can be used for other applications. It only offers 10-point Likert scales, which may be inadequate for some. |

*Only available in paid-for solutions.
Consider the following when choosing a survey tool:
- If you work in a medium to large organisation, someone will already have access to a survey tool. Use it. It will save you money and the time spent trying to choose one. Try marketing, HR or market research teams.
- If you plan to use a research panel for your sample, contact them and see which tools they can integrate with easily.
- For all but the most basic of surveys, expect to pay something for the tool. The costs are fairly low—for example, SurveyMonkey and SurveyGizmo have $19 offerings that remove most restrictions and provide access to much of the functionality required.
Surveys can be a really useful UX tool, providing input to the design process. The key to a successful survey is establishing the objectives and the information required up front, then making sure the questions asked cover them. Keep at the forefront of your mind the importance of creating a good experience for participants by writing appropriate questions. An effectively designed survey will produce the best results.
Keep it short, keep the participant in mind when writing the questions and engage with your audience—good luck!