Finding and scheduling suitable research participants is one of the biggest logistical challenges of UX research – and that’s before you’ve even tried to get those participants to fully engage in research activities.
Plenty has been written about finding UX participants and ensuring they’re at least representative of your users. But I’ve yet to find much good discussion of why participants take part in our research in the first place, and how their motivations affect both their participation and the research results.
Understanding the contexts, motivations, and biases people bring into a study helps you plan it and interpret the results in the most neutral way possible.
There are plenty of exceptions, but the most common ways to find UX research participants are to reach out to existing customers or leads, or to use the panels offered by UX tools like usertesting.com or by recruiting companies. Even if you write a screener and recruit for a well-defined persona, each source attracts participants with different motivations, and that can lead to very different responses to research activities.
Let’s look at each main recruiting source and some of the pros, cons, and things to be aware of while crafting your research plans.
Existing Users
People who already have a relationship with your brand can’t help but bring their preexisting impression of the company – whether positive or negative – to research sessions. Their overarching perception of your brand will sway their impressions of the product you’re investigating.
This is called the halo effect. If you generally like a brand, you’ll be primed to like everything about it. If you dislike the brand, you’ll be primed to think more negatively about every aspect you see.
Let’s say, for example, that you’ve always wanted a BMW, and hold the company in high regard. You get brought in to test a new navigation system and have trouble entering your address.
Your first thought may not be that the system has a usability problem. Without even realising it, you might blame yourself and assume you made a mistake, or write the issue off as a quirk of the prototype.
The information in front of you doesn’t match your previous expectations (a phenomenon known as cognitive dissonance). So you assign the trouble elsewhere, downplay the importance of the issue, or focus your attention on the aspects of the experience that you like (what’s called confirmation bias). That means that, as a UX research participant, you’ve withheld a lot of really important information without even knowing it.
A user’s experience with an overall brand also plays into their motivation to participate in a test. If a person frequently uses a product, they may have a vested interest in seeing the service improve and/or vouching for specific changes or improvements. If they like the product or have a good relationship with someone who works there, they may participate because they want to help out. On the other hand, if they’ve had negative experiences, they may look at a research session as a chance to vent or find an inside connection to get things changed.
Special note: If you work on enterprise tools and/or your users are internal, you’re likely to see exaggerated versions of both the halo effect and confirmation bias, and you’ll probably find yourself battling politics and ulterior motives on top of that. You can’t avoid this, but it’s good to have a heads-up.
Panel members
Participants who actively sign up for a research panel know they’ll be compensated for their time when they participate, and are more likely to view responding as a job.
Many panels allow researchers to “rate” participants, so respondents know that if they give poor-quality feedback, they could lose future opportunities. The upside is that panel members are the group most likely to show up to sessions as scheduled and to respond appropriately and consistently in longitudinal studies. Several studies have shown that monetary incentives increase participation rates.
The downside is that they may view their participation as only a job. They may not be invested in your product or may want to fudge their way into being seen as a representative user.
We’ve all heard of the professional user research participant, who will “frequently supplement their income by participating in user research… and say and do whatever it takes to get into a study.” Writing effective screeners can help keep some of those participants out, but even the most qualified panel respondent is more likely to be motivated by money than by altruism or intrinsic interest in the product.
So how can you make the most of your user research?
Now that we’ve looked at some of the issues, let’s take a look at the steps you can take to get the best possible engagement and data from research sessions. We have tools at our disposal, regardless of the source of our users.
Offer compensation (in a way that participants want to receive it)
Remember that participating in a study is essentially a social exchange. People need to feel they at least come out even. Money, of course, is one of the easiest benefits to provide.
Studies show that monetary incentives – a fixed cash payment, entry into a prize lottery, or a charitable donation made on the participant’s behalf – can make respondents more likely to participate in research. Besides the obvious benefit of getting paid, compensating participants shows you value their time and input.
Furthermore, giving participants an incentive of any kind can trigger the reciprocity principle. Essentially, if you give something (anything) to someone, they will feel compelled to do something in return. This can be particularly powerful for longitudinal studies. Anecdotally, I’ve found I get the best response rates when I give about a third of the incentive after successful setup of a longitudinal study and the rest upon completion.
When choosing compensation, be aware that different monetary incentives work best for different studies and different people. People with strong inclinations toward self-direction, learning new things, or risk-taking respond better to lottery-style incentives than to fixed amounts. People who value close social relations and the betterment of the group over the self prefer money given to a charity in their honour.
So think about the characteristics of your target persona and consider whether you can shift (or at least experiment with shifting!) the type of incentive you offer. Think carefully before offering a discount on your service as an incentive: it can sway people too far towards goodwill, and they might feel uncomfortable saying anything negative.
Also be mindful of the amount of incentive you offer. You want an amount that demonstrates you appropriately value participants’ time without breaking the budget. For instance, I’ve paid doctors much more to participate in a study than general e-commerce shoppers, and I typically pay participants in in-person or ethnographic studies much more than respondents to remote sessions.
Help participants see the importance of their feedback
To tip the social exchange cost/benefit ratio even more, give people context about why their help is useful and what you’ll do with the information. People like to know the feedback they give isn’t just going into a corporate vacuum, never to be seen again.
You can do this by introducing the topic at the beginning of a session – something as simple as, “We’re talking about x today because we’ve noticed some issues and would like to make improvements.” Be careful, though: there are times when it makes sense not to give too much away up front.
I’ve also found that people love hearing about changes we’ve made based on their feedback, especially long-term customers and internal users. It’s not always possible to share, but if you can, highlight specific studies and the lessons learned in release notes or even press releases. Participants appreciate it, and they’re more likely to take part again or encourage others to do the same.
Create expectations through group labels
This last one is a bit tricky, but several studies show that people are more likely to adopt a behaviour suggested by an external label, as long as the label is relatively positive. One study found that when researchers labelled a random group of people as politically active, those people were 15% more likely to vote; other research shows that people generally want to belong to groups and to follow social norms.
My educated guess is that labelling people sets an expectation they’ll behave a certain way. If they don’t follow through, they start to experience the same kind of cognitive dissonance as when you find an issue with a product you love. You can subtly shift language to let people know you expect them to follow through – for example, tell them they’re in the group most likely to respond.
Switch it up when you can
When you know how people can be swayed by the way you recruit them, you can take steps to minimise bias in your results. As we’ve seen, different sources of participants and different incentives affect both the amount and the quality of participation. When possible, use a mix of recruiting methods and experiment with compensation to get the most reliable results.
What are some of the ways you reduce bias from people taking part in UX research? Let us know in the comments!