We live in an incredibly complex world. Different kinds of stimuli and information come at us from different directions at all times. To make sense of it all quickly and keep decision-making efficient, our brain resorts to shortcuts. These shortcuts lead to cognitive biases.
Cognitive bias is the tendency of our brains to favor or oppose something without analyzing the complete data set. We cherry-pick data that confirms the results we want. Factors such as social upbringing, social pressures and emotional states shape our cognitive biases.
Types Of Cognitive Biases
The goal of user research at any stage of product design is collecting data about your target audience. But if you are not careful, cognitive bias can start to influence your user research data set. A compromised data set means you won’t have the full picture of your audience’s behavior. And this can result in ineffective decision-making. There are more than 100 different kinds of cognitive biases that we all are susceptible to. Here are a few of the more common ones:
Confirmation Bias
Confirmation bias is the tendency to favor information that confirms your preexisting beliefs or hypotheses while disregarding or dismissing information that contradicts them. This bias can present itself in a variety of ways: interpreting ambiguous information in a way that confirms one’s beliefs, searching only for information that confirms one’s beliefs, or placing more weight on confirming information when making decisions.
An example of confirmation bias in user research is when a researcher only selects participants who fit specific demographic criteria (e.g., age, gender, profession) that align with their hypothesis about who would be more likely to use their product or feature. For instance, your initial hypothesis might state that 25-to-40-year-old males are more likely to use your product. While that may be true, other demographics could be part of your target audience too.
To mitigate confirmation bias in user research, design research studies that are intended to test your hypotheses, rather than confirm them. In our example above, opening up the research to a more diverse set of demographics can be a way to test our hypotheses.
Framing Bias
Framing bias, also known as the framing effect, refers to the way information is presented, or “framed,” which can influence how an individual perceives that information and, in turn, their decision-making. An example of framing bias in user research is framing user-experience questions in a suggestible way. For example, asking users, “What did you like/dislike about the product?” forces them to think in terms of positive and negative traits. A better question to ask is, “How was your experience using the product?”
As much as possible, ask open-ended questions. “How was your experience using the product?” is a better question than “How satisfied are you with the user experience on a scale of 1 to 10?”
Social Desirability Bias
Social desirability bias refers to when someone participating in a research study answers the survey, interview or questionnaire in a way that they believe is more socially acceptable or desirable than giving their true opinions or behaviors. This is often done to make a positive impression on the person or group conducting the research.
Anonymity in user research surveys is one way to mitigate social desirability bias. Reframing your questions to protect the ego of your research participants is another. For example, middle-aged and older people can be more socially conscious about their ability to use modern technology. When asked, “How easy or difficult do you find it is to use the product?” they might favor a more positive response, even if it isn’t true. A better question in this case could be, “If you were to design this app for your mom, what would you change about it?” The latter shifts the focus away from the participant and onto someone older than them.
How To Minimize Cognitive Bias In User Research
There are other kinds of cognitive bias that can seep into your user-research results. The clustering illusion, for example, can lead you to see patterns in a small sample where none exist. Similarly, recency bias can nudge you to give more weight to recent results: a UX designer might overweight usability problems that show up in the latest round of research data.
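To see why small samples invite false patterns, consider this minimal simulation (my own illustrative sketch, not from any study): even when users are split exactly 50/50 on a preference, a handful of sessions will often look like a strong majority purely by chance.

```python
import random

random.seed(42)  # reproducible simulation

def apparent_majority_rate(sample_size, trials=10_000):
    """Fraction of trials in which a true 50/50 preference looks
    like an 80%+ 'pattern' purely because of sampling noise."""
    lopsided = 0
    for _ in range(trials):
        likes = sum(random.random() < 0.5 for _ in range(sample_size))
        share = likes / sample_size
        if share >= 0.8 or share <= 0.2:
            lopsided += 1
    return lopsided / trials

# Small samples frequently look lopsided; larger ones rarely do.
print(f"n=5:   {apparent_majority_rate(5):.2%}")
print(f"n=100: {apparent_majority_rate(100):.2%}")
```

With five participants, a fair coin produces an 80-20 split more than a third of the time; with a hundred, almost never. Patterns seen in tiny samples deserve skepticism.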
Here are a few ways to minimize cognitive bias in user research:
Use Large Data Sets
Research on usability testing offers some interesting perspectives on optimum sample sizes in user research. But think of largeness not only in terms of numbers but in terms of diversity: include people from different demographics in terms of age, gender, culture and other important parameters.
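One way to make that diversity concrete when recruiting is stratified sampling: draw a fixed number of participants from each demographic group rather than taking whoever signs up first. A minimal sketch, assuming a hypothetical sign-up pool and age-group strata (the names and groups are illustrative only):

```python
import random
from collections import defaultdict

random.seed(7)  # reproducible draw

# Hypothetical sign-up pool; in practice this comes from your screener.
pool = [
    {"name": "P1", "age_group": "18-24"}, {"name": "P2", "age_group": "18-24"},
    {"name": "P3", "age_group": "25-40"}, {"name": "P4", "age_group": "25-40"},
    {"name": "P5", "age_group": "41-60"}, {"name": "P6", "age_group": "41-60"},
    {"name": "P7", "age_group": "60+"},  {"name": "P8", "age_group": "60+"},
]

def stratified_sample(pool, key, per_stratum):
    """Draw the same number of participants from each stratum,
    so no single demographic dominates the study."""
    strata = defaultdict(list)
    for person in pool:
        strata[person[key]].append(person)
    sample = []
    for group in strata.values():
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

participants = stratified_sample(pool, "age_group", per_stratum=1)
print([p["age_group"] for p in participants])
```

The same idea extends to gender, culture or any other parameter you care about: stratify first, then sample within each group.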
Write Down Your Hypotheses
When we are making a website or designing a mobile app, we all have our target audience in mind. However, at the design stage, the target audience is only a hypothesis. The purpose of user research is to either confirm or reject your initial hypotheses. It helps if you write down your initial assumptions and go from there. With your presumptions written down, you are less likely to succumb to cognitive bias.
Combine Qualitative and Quantitative Metrics
Quantitative metrics are less susceptible to bias since they are hard numbers. Bounce rate and the time taken to navigate a webpage or complete a form are some examples. Combine them with qualitative metrics, such as sentiment analysis, for the full picture.
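As a hedged sketch of what those hard numbers can look like, here is how bounce rate and median form-completion time might be computed from a session log (the records and field names are hypothetical, not tied to any particular analytics tool):

```python
from statistics import median

# Hypothetical session records; real data would come from your analytics pipeline.
sessions = [
    {"pages_viewed": 1, "form_seconds": None},   # bounced, never reached the form
    {"pages_viewed": 4, "form_seconds": 95.0},
    {"pages_viewed": 1, "form_seconds": None},   # bounced
    {"pages_viewed": 3, "form_seconds": 120.0},
    {"pages_viewed": 5, "form_seconds": 80.0},
]

def bounce_rate(sessions):
    """Share of sessions that viewed only a single page."""
    bounced = sum(1 for s in sessions if s["pages_viewed"] == 1)
    return bounced / len(sessions)

def median_form_time(sessions):
    """Median time to complete the form, ignoring sessions that never did."""
    times = [s["form_seconds"] for s in sessions if s["form_seconds"] is not None]
    return median(times)

print(f"Bounce rate: {bounce_rate(sessions):.0%}")
print(f"Median form time: {median_form_time(sessions)} s")
```

Numbers like these anchor the qualitative signals: if sentiment is glowing but the bounce rate is high, one of the two is telling you something the other missed.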
At the end of the day, every user research study will have its own set of biases. But if you are smart and careful, you can mitigate these biases to a large degree.