Adobe Creative Cloud

October 27, 2017 / UX/UI Design

A Comprehensive Guide to UX Research Methods

You can’t build a product people will fall in love with if you don’t know your audience. To maximize your chances of success, it’s essential to conduct user research before making any product decisions. User research is the process of understanding user needs, wants, and behaviors through observation and feedback-collection methods. It’s a fundamental part of a UX designer’s job and a core part of the overall UX design process.

In fact, UX research is the starting point for any project–it comes first in the UX design process because, without proper research, product design can only be based on a designer’s own experiences and assumptions, which aren’t objective. Creating a product without good user research is like building a house without a solid foundation.

The field of user experience offers a wide range of research techniques, and quite often it’s not obvious which one you should choose for a given project. It’s not realistic to use the full set of techniques on any single project, which is why UX researchers often face the question “What kind of user research should I do?” This article is intended to help you choose a research technique based on the goals you’re trying to achieve.

Before You Start Your Research, Understand Your Goal

Before choosing an approach to user research, it’s important to answer the following fundamental question: what do I want to know about my users, and why do I want to know it?

Once you know what you’re trying to learn and why, you can start thinking about how to learn it. What you need to know about your users probably falls into one of the following categories:

  • What people do/What problems they face
  • What people need
  • What people want
  • Can people use a certain product?

For each group, I’ll specify the most relevant research method or technique. This is by no means an exhaustive list of user research techniques, and some of these techniques could be used to answer more than one type of question. I suggest thinking of the following list as a good starting point, not as a complete guide for user testing.

What People Do/What Problems They Face

Contextual Inquiry

Contextual inquiry is a variety of field study where a researcher observes people in their natural environment and studies them as they go about their everyday tasks. This method helps researchers obtain information about the context of use: users are first asked a set of standard questions such as “What is the most frequent task you typically do?” and then they are observed and questioned while they work in their own environments.

This technique is generally used at the beginning of the design process and is good for getting rich information about work practices, as well as the tools and features users use on a regular basis. The results of contextual inquiries can be used to define requirements, learn what is important to users, and respond to their needs with informed design solutions. Everyone who works on a design team should participate in a contextual inquiry from time to time.

Visiting a real user/client is an invaluable experience for designers; studying users and tasks in context can inform design decisions. Image credits: assembly

Tips:

  • Identify users. You need to visit the right people and you need to visit enough people. While every project is different, as a rule of thumb you should visit 5-6 people, and at least 2 of them should be from significantly different user groups.
  • Don’t do it alone. Take a colleague with you to discuss your observations (this might be a fellow designer or developer from your team).
  • Don’t just listen to users; observe their behavior. ‘What people say’ can be different from ‘what people do.’ As much as possible, try to observe what users actually do to accomplish their tasks.
  • Minimize interference. When studying the natural use of a product, the goal is to minimize interference from the study in order to understand behavior as close to reality as possible.
  • Use critical incident technique. Make interviewees recall specific critical incidents; ask them about the specific cases where the product worked particularly well or particularly poorly. You can use this information to get an idea of the strengths and weaknesses of your product.

Diary Study

A diary study can be used to see how users interact with a product over an extended period of time (ranging from a few days to even a month or longer). During this period, study participants are asked to keep a diary and log specific information about activities. A diary study helps a researcher find answers to questions like: What are users’ primary tasks? What are their workflows for completing complex tasks? Diary studies can be used as a follow-up to a contextual inquiry: they provide organic behavioral insights and help develop a rich understanding of a participant’s context and environment.

In-situ logging is the most straightforward method to collect data from diaries. Participants are asked to report all important information about relevant activities as they complete them.

In-situ logging: when research participants engage in a relevant activity, they must report all important details about that activity right away.

Tips:

  • Create clear and detailed instructions for logging. Be as specific as possible about what information you need participants to log.
  • Remind study participants about logging. Prompt participants to fill in their diary (for example, through a daily notification).
  • Make it possible to add screenshots to a diary. If you use a digital version of a diary, make it possible for participants to upload screenshots. Screenshots are a nice supplement for user data and will help you with future data analysis.

What People Need

Surveys and Questionnaires

Surveys and questionnaires allow a researcher to gather a larger volume of responses, which opens up the opportunity for more detailed analysis. This type of research can be relatively inexpensive to run. The downside of this method is that there’s no direct interaction with respondents, so it’s impossible to dive deeper into the answers they provide.

There are two types of surveys–quantitative and qualitative:

  • Quantitative surveys count results and show how many people ‘do this’ versus ‘do that.’ This type of survey can be used when the questions can be answered by checkbox or radio button (e.g. “Do you use product A on a daily basis?” with the answers being “Yes” or “No”).
  • Qualitative surveys ask open-ended questions. They ask for comments, feedback, suggestions, and other kinds of responses (e.g. “How could we improve your experience?”)

In most cases, it’s possible to mix the two kinds of surveys. For example, you can start with small qualitative surveys to gather rich feedback. This will help you to discover which questions you need to ask and the best way to ask them for a later quantitative survey.

Tips:

  • Keep it short. If the survey is too long, you may not get as many responses as you’d like. Remember that every extra question reduces your response rate. It’s better to send a few short surveys than to cram everything you want to know into one long survey the average user won’t complete.
  • Test your survey to eliminate problems. Test surveys on paper first to avoid time-consuming reworks. Get your colleagues to attempt to answer the questions before sending it to users.
  • Balance open-ended and closed-ended questions. Open-ended questions yield the richest feedback, but it’s easy to get bogged down in data analysis because every answer requires researcher time to analyze. Users also tire quickly of open-ended questions, which usually require a lot of reading and typing.
  • Don’t make your own tool for surveys. Some product teams try to reinvent the wheel and create their own survey tool. Don’t do that. There are a lot of survey tools out there to choose from such as Google Forms, SurveyGizmo, Survey Hero, Survey Monkey, etc. They can save you lots of time and money.
  • Ensure that partial responses will be recorded. People often quit surveys part way through, and that’s why it’s so important to frontload the most important questions and make sure that partial responses are recorded.

A quantitative survey created using Google Forms.

Interviews

Gathering information through direct dialog is a well-known user research technique that can give a researcher rich information about users. This technique can help the researcher assess user needs and feelings both before a product is designed and long after it’s released. Interviews are typically conducted by one interviewer speaking to one user at a time for 30 minutes to an hour. They can take place face-to-face, over the phone, or via video chat.

Interviews can be a good supplement to surveys and questionnaires; conducting an interview beforehand helps researchers refine the survey questions, while conducting one afterwards lets them explore the survey responses in more depth.

Since interviewing requires a lot of soft skills like active listening, observing, and knowing when and how to probe for more details, it’s recommended to hire a skilled interviewer. If you can’t afford one, the following tips will help.

Tips:

  • Create a friendly atmosphere right from the start. Make sure participants feel comfortable before the actual interview begins and emphasize that their answers will not be judged (say there are no right or wrong answers).
  • Start and finish with easy questions. Start your session with simple, lightweight questions before diving deep into the topics you want to uncover. These might be questions like “What do you do for a living?” or “Can you tell me about your hobbies?”
  • Create a discussion guide. Questions often have to be asked in the right way to get good responses. Write up a discussion guide (or an interview protocol) for all interviewers to follow. This guide should include questions and follow-up questions.
  • Take detailed notes. You never know which details are going to be important during analysis, but you can be sure you won’t be able to reconstruct sufficient detail from memory. Better still, obtain permission from your interviewee to record video–a recording can be analyzed in depth later or shared with the product team.

When possible, it’s best to record the interview so you can refer back to it later.

What People Want

Concept Testing

Concept testing is giving users a rough approximation of a product or service in order to understand whether they would want or need it. A concept should convey the key essence of the product. A researcher shares this approximation with users in order to determine whether it meets the needs of the target audience. Concept testing can be done one-on-one or with larger groups of participants, either in person or online.

Designers often use the rapid prototyping technique to create a concept.

Rapid prototyping is a process of quickly mocking up the future state of a product, be it a website or app, and validating it with a broader team of users.

Tip: Don’t wait for a fully-formed product. It’s possible to test a mock-up or semi-functional prototype (even low-fidelity ones) as long as you can explain to users what’s required from them.

User Groups

User groups (also called focus groups) are structured interviews that quickly (and usually inexpensively) reveal the desires and attitudes of a target audience. A moderator leads a group of 3–6 participants through a discussion of a set of topics, and participants give verbal and written feedback through discussion and exercises.

This research method is helpful when a company needs to gather a lot of insight in a short period of time. It can help researchers find the answer to the following important questions:

  • How users perceive a product
  • What users believe are a product’s most important features

Despite the obvious advantages of this method, user groups are often criticized by researchers. The most serious problems are:

  • People aren’t good at predicting what they want (especially when asked out of context).
  • The ‘Say vs. do’ problem. What people say and what people do are often very different. Users can make up an opinion they don’t actually feel strongly about. They can also talk a lot about things that don’t actually matter to them, which can be misleading.
  • Groupthink. Participants may influence each other’s opinions and preferences, thus introducing bias and producing inaccurate data.

Therefore it would be incorrect to use this method in the hopes of receiving an accurate measurement of behavior.

Tips:

  • Choose a few topics. On average, plan to discuss 3-5 topics during a 90-minute focus group.
  • Don’t use jargon. When talking to respondents, speak in their language. This will help you understand how they truly feel.
  • Don’t draw attention to specific issues that you care about. Doing so causes people to change their behavior and focus their answers on the issues you emphasize. This problem is particularly common for topics related to UI design; when you ask people about a specific design element, they notice it much more than they would have otherwise.
  • Don’t ask leading questions. You don’t want to lead users on because it affects the validity of your study. Asking a question like “What do you like about our improved mobile app?” assumes the user has a positive preference for the product right off the bat.

Can People Use A Certain Product?

Usability testing

Usability testing is evaluating a product or service by testing it with representative users. In usability testing, you recruit some test participants and give them a set of scenarios that lead to usage of specific aspects of a product (or prototype). The main goal of this user experience testing method is to identify usability problems, collect qualitative data, and determine participants’ overall satisfaction with the product. According to the Nielsen Norman Group, if you want to select just one type of user research for your project, it should be qualitative usability testing.

Usability testing is often done formally (where a researcher creates a screener, hires participants, has them come into his or her lab, records the session, etc.).

A lab doing formal usability testing. Image credits: usabilitygeek

It can also be done informally (where a researcher goes to the nearest coffee shop, finds participants, asks them to play with a product for 10 minutes, and then gives them a small treat as a thank you).

Informal guerilla usability testing. Image credit: johnferrigan

Tips:

  • Up to 85% of core usability problems can be found by observing just 5 people using a product. According to Jakob Nielsen’s research, the best results come from testing no more than 5 users and running as many small tests as you can afford.
  • Some people think that usability testing is very costly and should be reserved only for design projects with huge budgets. That’s not true. Despite popular opinion, it’s possible to conduct usability testing on a budget. You can embrace guerilla usability testing. When you have a prototype, step out of your office, find people who are at least similar to your target users, and begin testing! The most important usability problems are easy to spot for people who are new to your product.
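The 5-user figure above comes from the problem-discovery formula found(n) = 1 − (1 − L)^n, where L is the proportion of usability problems a single participant uncovers (about 31% on average in Nielsen’s studies–a published average, not a universal constant). A minimal sketch of the arithmetic:

```python
# Expected fraction of usability problems uncovered after testing n users,
# using the problem-discovery formula: found(n) = 1 - (1 - L)^n.
# L is the share of problems a single participant reveals;
# Nielsen's research puts the average around 0.31.

def problems_found(n, L=0.31):
    """Expected fraction of usability problems uncovered by n test users."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems found")
```

With L = 0.31, five users uncover roughly 84% of problems, which is where the “up to 85%” rule of thumb comes from; additional users add progressively less.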

Card sorting

Card sorting is asking users to organize items (major features or topics related to the product) into groups, and then to assign a category to each of those groups. This method helps create or refine the information architecture of a system (how to label your menus, how to group your content, etc.) according to users’ mental models.

Card sorting can be done as open or closed sorting:

  • Open sorting. Participants put the cards into groups and then name the groups themselves.
  • Closed sorting. A researcher gives users defined categories to sort the cards into.

Card sorting can be done both offline (in person, with index cards), or online (using tools such as Optimal Sort).

An open card sort with physical index cards. Write down the major features or topics for your system on cards, then recruit some users and ask them to organize the cards into categories of their choosing. Image credits: UX Matters

Tip: While you’ll need more users for a card sorting study than for a usability test, it’s better not to go overboard; returns diminish beyond about 15 users.
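A common way to analyze open card sort results is a co-occurrence (similarity) matrix: count how often each pair of cards was placed in the same group across participants, and let the highest-scoring pairs suggest your content groupings. A minimal sketch–the card names and groupings below are invented for illustration, not taken from any real study:

```python
from itertools import combinations
from collections import Counter

# Each participant's sort: a list of groups, each group a set of card labels.
# These cards and groupings are hypothetical example data.
sorts = [
    [{"Pricing", "Plans"}, {"Docs", "Tutorials", "FAQ"}],
    [{"Pricing", "Plans", "FAQ"}, {"Docs", "Tutorials"}],
    [{"Pricing", "Plans"}, {"Docs", "FAQ"}, {"Tutorials"}],
]

# Count how many participants put each pair of cards in the same group.
co_occurrence = Counter()
for participant in sorts:
    for group in participant:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

for (a, b), count in co_occurrence.most_common():
    print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")
```

Pairs grouped together by most participants (here, Pricing and Plans by all three) are strong candidates to live under the same menu or category; tools like Optimal Sort produce this kind of matrix for you.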

Conclusion

Good user research is key to designing a great user experience. It lets you dive beneath the surface of what you think your users want and discover what they actually need. Now you have a few different techniques to choose from.


Join the discussion

  • By Roy Kok - 6:44 AM on October 31, 2017

    Really helpful, thanks!

  • By Ilaria Mauric - 11:38 PM on November 2, 2017

    Hello Nick, I have a question I still struggle to find a proper answer to. We are running research in the banking domain for one of our clients. We have set the goals and divided the research into 2 steps. In the first, we will do 4 rounds of qualitative interviews with 4 different groups of users; by the end of this phase we will have collected insights from around 50 users. In the second, we are thinking of gathering feedback with qualitative surveys, 2 groups of 100 respondents. The survey should help us evaluate what came out of phase 1. The stakeholders we talked with are wondering how they can rely on the final results we’ll share if we have reached an audience of only 250 users, while they have thousands and thousands. It is very hard for me to explain that our research is not statistics or market research; our stakeholders have never seen anything like this, while they are very used to statistics, targets, and market research. Do you have any tips to help us defend and explain our numbers and win the trust we need? The worst thing that could happen is that at the end of our research the stakeholders object and refuse to act on the insights because they don’t think we have a significant sample.
