Usability test plan
Create a usability test plan to ensure everything runs smoothly. Learn how to set research objectives, recruit participants, choose usability testing methods, create tasks, and establish evaluation criteria.
Before you get to the exciting stuff – putting your designs in the hands of your users – it’s a good idea to have a plan in place for what you want to achieve and how you’ll achieve it (not to imply that planning isn’t exciting too!).
A usability test plan helps you define clear goals and objectives for testing, identify who your target audience is, and specify how you’ll recruit them. During the planning stage, you’ll also decide on the testing methods you’ll use, the scenarios and tasks you’ll give your participants, and the success criteria.
By following these tips, you can create comprehensive usability tests that will yield useful insights and help you optimize your product for the needs of your users.
Top tip: Harness quick usability tests for effective design feedback
The advice we share here is intended for any type of usability study, big or small. But we do want to highlight that usability testing doesn't need to be a big undertaking. To see real examples of how quick, iterative testing works in Lyssna, check out these customer stories from Klarna and YNAB.
Define usability testing goals and objectives
An important first step when planning for usability testing is to define your goals and objectives. Here are some simple steps to follow to help you get the most out of your research.
Work out what you want to learn
You don't need to understand every issue or problem in a single study. It’s easier and more productive to run a series of smaller studies with a clear objective for each. This approach means that you’re more likely to get feedback that can guide you toward a solution or support a decision.
Define the scope of your usability test
This helps to make sure that your goals are realistic. You only have so much time and so many resources, so focus on the most important things that you want to find out. Be specific about what you’ll be testing, such as the navigation of your app or website, or the customer onboarding flow. This keeps things focused.
Set your objective
Once you’ve defined the scope of your study, you’ll need to set an objective. It’s important to think about what will have the most business impact, as this will help you gather insights that align with your company’s strategies. Make sure to keep your objective clear and simple – the more complex it is, the more difficult it will be to test and analyze the results.
For example, say you’re the product manager for a fitness tracking app and have noticed that new users are abandoning the app after the first few sessions. You could run a first click test or a five second test to identify any issues that are causing these users to drop off. A clear and specific objective for this usability test could be to:
“Identify usability issues that are causing our users to abandon the app and make recommendations for design and functionality improvements that will increase user engagement and retention.”
By contrast, a goal such as “Evaluate the app's user experience to understand how users interact with it and gather feedback for potential improvements” lacks any focus on abandonment and engagement, making it a more general usability evaluation goal. The first objective is more specific because it directly targets the issue of users abandoning the app and aims to improve user engagement and retention based on the findings.
By setting a clear and specific objective, you'll know what you’re trying to achieve when conducting usability testing. This will help guide your testing method, the number of participants to recruit, and how to analyze your findings.
Recruit usability testing participants
Now that you know what your goals are, the next step is to identify and recruit test participants from your target audience. In this section, we’ll cover how to define your sample size and explore some different approaches to recruiting participants.
Identify your audience
To work out who your audience is for usability testing, think about who your product is designed for and what type of people are likely to use it. If you’re working on an existing product, you should already know who your users are.
Once you have an idea of your target audience, you can start to narrow down specific characteristics and demographics that will be important for testing. For example, if you’re testing a networking app aimed at people in their twenties, you’ll want to make sure that your test participants fall within the age range of your target audience.
You may also want to consider conducting some initial research, such as surveys or focus groups, to get a better understanding of your target audience’s preferences, habits, and behaviors. This information can help you create more targeted and effective usability tests.
Define your sample size
When it comes to the number of test participants you need, there's no right answer. It all depends on what you’re trying to learn.
If you’re researching specific usability problems, the Nielsen Norman Group suggests that testing with just 5 people is enough to uncover almost as many usability issues as you’d find with a much larger group of test participants. This approach is ideal if you have a specific target audience in mind and want to get feedback from a limited number of people who meet your criteria. By running tests with 5–7 users, making improvements to your product or prototype, and testing again, you can efficiently and cost-effectively iterate your design.
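If you’re curious about the reasoning behind the 5-user recommendation, it comes from a simple discovery model (Nielsen and Landauer): the proportion of usability problems found with n participants is roughly 1 − (1 − L)^n, where L is the chance that a single participant encounters a given problem (about 31% on average in their studies). Here’s a minimal sketch of that calculation – note that the 31% figure is an average from their research, so treat it as an assumption rather than a constant for your product:

```python
# Sketch of the Nielsen–Landauer discovery model:
# expected share of usability problems found with n participants,
# assuming each participant reveals a given problem with probability L.
def problems_found(n: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> ~{problems_found(n):.0%} of problems found")

# With L = 0.31, five participants uncover roughly 85% of problems,
# which is why small, iterative rounds of testing are so cost-effective.
```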
If you’re looking for feedback from a wider audience to understand how they perceive your product, it can be helpful to recruit a diverse range of participants. For quantitative studies like this, we recommend testing with around 30 users to get statistically significant results. This is particularly useful when trying to uncover trends and opinions, as a larger sample size will help establish quantitative findings that you can present to stakeholders.
Recruit participants
Once you have a good idea of who your users are as a target audience, you’ll want to aim for your test participants to resemble this target group as closely as possible.
Recruiting participants is one of the more difficult steps in usability testing. You might source from your customer base or, if you’re using a remote testing platform like Lyssna, you can use a participant panel to make recruiting participants easier.
Recruiting from a participant panel
A participant panel can be an effective way to recruit from a diverse audience. These participants have opted in to share their feedback, giving you access to willing testers from around the world. You can select from a variety of demographics and locations in order to target specific personas.
When recruiting, you can also filter by demographics to pinpoint participants based on their profession, hobbies, behaviors, and anything else that’s relevant to your study.
The biggest benefit of using a panel is that it's usually quicker and easier than trying to recruit people on your own. The platform you use may also have the infrastructure to handle payments or incentives and distribute the test itself, streamlining the process.
Lyssna’s research panel calculator allows you to enter your study size, type, and audience to get an estimate of the cost and turnaround time.
Recruiting your own participants
If you’d like to test with your existing customers or prospects, or your product caters to a niche audience, recruiting from your own network might be the way to go. You could source these participants from your customer base, via social media, or through online communities.
If you’re recruiting your own participants, it’s worth considering offering incentives such as monetary compensation, gift cards, or discounts on your product.
Choose a usability testing method
Similar to defining your research participants, the usability testing method you choose depends on what you want to find out. Different methods produce different insights, so consider your needs when deciding which method will work best for you. To help you decide, it’s worth asking the following questions.
Are you looking for quantitative or qualitative data?
Quantitative methods give you numerical data, while qualitative methods give you insights into the behaviors, attitudes, and motivations of your users.
You don’t necessarily have to choose one or the other – you might consider using a combination of quantitative and qualitative approaches to help strengthen the credibility of your findings.
What stage of product development are you up to?
User testing can happen at any stage of the product development process, and even after your product launches, so it’s worth considering where you’re up to and what you want to find out.
Formative usability testing is usually conducted early in the design process. The goal is to identify potential design problems as early as possible, and this helps to make the design process efficient and save you time later trying to fix issues that were preventable. Testing methods tend to be qualitative, like user interviews, focus groups, surveys, and prototype testing.
Summative usability testing is often done later, after the product has been designed. The goal is to measure how well the current iteration of your product performs against previous iterations, or against competitors in the market. Testing methods are typically quantitative, like first click tests and five second tests.
Do you need to be present during usability testing?
Moderated usability testing involves an active, live moderator guiding test participants, and can be done either in-person or remotely. Unmoderated usability testing, on the other hand, is completed by participants remotely and at their own pace. Both approaches have their pros and cons.
To choose between them, consider who your users are. Are they spread around the world? Are they experienced with technology? Different testing methods may be better suited depending on your users’ characteristics. Budget, resourcing, and time constraints can also be other factors to consider.
What are you trying to learn?
Different usability testing methods will be better suited to different research questions or problems. For example, if you’re looking to understand the overall user experience, a navigation test or prototype test might be a good solution. If you’re looking to understand which design users prefer, a preference test could be the way to go.
What budget and resources do you have?
Some methods are more resource-intensive than others, or require more specialized equipment and software. Understanding what budget and resources you have available can also help you decide what methods to choose.
What’s your timeline?
Some usability testing methods require time to plan and execute, while others can be done more quickly. For example, conducting a remote and unmoderated five second test, and recruiting from a participant panel, is going to give you much quicker results compared to an in-person prototype test.
Create usability test scenarios and tasks
Once you’ve decided on the usability testing methods you’ll use, you’ll need to create some test scenarios and tasks.
Test scenarios are a set of conditions or situations that a user would encounter while using your product. They’re designed to be representative of typical use cases, and help to provide a realistic context for usability testing.
Tasks are specific actions that you ask a participant to perform during the usability testing session. You should design them to be clear and concise, and representative of the typical activities that a user would perform while using your product.
Be sure to design tasks in a way that allows you to observe users’ behavior and identify any issues or problems they encounter, even if you’re conducting an unmoderated remote test. It’s important to keep tasks focused on a specific goal, and avoid adding unnecessary complexity that could confuse the user or alter the testing results.
For example, if you’re testing a travel booking website, instead of framing a task as “Plan a vacation,” which is broad and might lead users down various paths, you could design a task like “Book a round-trip flight from New York to Los Angeles for two adults departing next weekend.” This task is specific, goal-oriented, and provides clear instructions, enabling you to precisely observe the user’s interaction and identify any hurdles they encounter during the booking process.
The number of tasks you ask a participant to complete during a test will vary depending on the complexity of the product and the goals of your study, but we recommend keeping it to one or two per session. This will allow you to gather enough insights without overwhelming participants or making the test too long.
What are the benefits of usability test scenarios and tasks?
Creating test scenarios and tasks is important for several reasons:
They provide context: A test scenario provides context for your users. It sets the stage for the tasks they’re being asked to complete and helps them understand the purpose of the test and what’s expected of them.
They standardize your tests: Giving each user the same tasks to perform allows for more accurate and reliable data collection, and makes it easier to compare results between different users and sessions.
They help to identify usability issues: By giving your test participants a specific task to perform, you can identify usability issues related to that task. This can help you work out areas where design or functionality is hindering a user’s ability to complete the task.
They allow for goal-oriented testing: This means that users are completing specific tasks that align with the goals of your product. This allows you to evaluate the product's ability to meet its intended purpose and identify areas for improvement.
Types of usability testing tasks
Usability test tasks can generally be split into two main categories:
Exploratory tasks: These are open-ended tasks that don’t have a right or wrong answer. They help answer broad research questions and reveal how users expect to find information in a product.
Specific tasks: These tasks usually have a right or wrong answer. They ask users to perform a particular action and measure how easily they can complete it.
Let’s see what this looks like with an example. Imagine you’re designing a new online grocery shopping app. An example of an exploratory task could be: “You need to find a recipe for a dinner party you’re hosting this weekend. Please navigate the website to find a recipe to save to your favorites.”
This task is open-ended and doesn’t have a clear right or wrong answer. It allows the test participant to explore the app and give insights into how they’d expect to find and save a recipe. This can help answer questions about the app’s overall usability, navigation, and organization.
Using the same example, a specific task could be: “You need to add 2 cans of tomatoes to your cart. Please find the product and add it to your cart.”
This task has a clear right or wrong answer and requires the user to perform a specific action. It helps you measure how easily the participant can complete the task and if there are any difficulties in the process, so you can identify specific usability issues and areas for improvement.
Write a usability test script
Writing a usability testing script involves preparing the instructions and scenarios/tasks for participants to follow during the usability test. Having a script helps you standardize the testing experience for your participants and allows for more accurate data analysis.
Scripts are most useful for moderated testing, but if you’re running an unmoderated test, you can still provide participants with clear instructions and guidance through written prompts or prompts integrated into the testing tool. These prompts can outline tasks, provide specific instructions, or ask questions to gather feedback.
If you’re preparing a usability test script, here are some things to consider.
Outline the test structure
Divide the script into sections, such as introduction, pre-task questions, task instructions, and post-task questions. This helps maintain a logical flow throughout the usability testing session.
Include an introduction
Having an introduction prepared in your script can help make sure you cover everything you need to. Aim to keep it concise, clear, and focused on making participants feel comfortable and informed.
Begin by welcoming and thanking your participants for their involvement. Provide an overview of the purpose of the test, what you’ll be asking participants to do, what you hope to learn, and how long it will take.
Be sure to highlight the participant’s role as a valuable contributor and encourage honest feedback. Discuss recording and obtain consent, if applicable (more on this below). Clarify that there are no right or wrong answers, and address any questions or concerns the participant may have.
Get consent
You might have sent a consent form to your participants prior to the testing session, but if not, it’s important to include a section in your script about getting informed consent. Informed consent is an ethical requirement to ensure that participants understand the purpose of the test, what’s expected of them, and any potential risks or benefits involved.
Make instructions clear and concise
Write task instructions that are easy to understand and follow. Use simple language and avoid jargon or technical terms. Clearly define the objective of each task and provide any necessary background information.
Define test scenarios and tasks
We covered this above, but be sure to create realistic scenarios that align with the goals of your usability test. Each scenario should represent a typical user goal or action. Break down the scenarios into specific tasks that participants will need to perform to achieve the goal.
Include questions
Along with the tasks, include relevant questions to gather participant feedback and insights. Questions can be asked before, during, or after completing each task to understand their thought process, challenges, and satisfaction levels.
In our usability testing questions article, we share top tips on writing effective usability testing questions, including asking specific, open-ended questions, avoiding leading or biased questions, and focusing on the user’s goals and tasks.
Be flexible
While it’s helpful to have a structured script, allow room for some flexibility during the testing session. Participants may have unique perspectives or encounter unexpected issues, so be prepared to deviate from the script if necessary.
Test the script
Before conducting the usability test with your participants, review and test the script to ensure clarity and coherence. Ask a colleague to review it to help identify any unclear instructions or confusing elements.
Establish evaluation criteria for usability testing
Another important consideration when planning usability testing is deciding which usability metrics you want to collect and what success looks like. Here are some metrics to consider.
Completion rate
Completion rate refers to the percentage of users who successfully complete a task during a testing session, from start to finish, without abandoning it or getting stuck.
For example, if 10 users attempt a task and 8 of them complete it successfully, then the completion rate for that task is 80%.
Task success rate
The task success rate is the percentage of users who are able to complete a specific task successfully, regardless of how long it took them. It measures the percentage of users who can achieve the goal of the task, even if they encounter obstacles along the way.
For example, if 10 users attempted a task and 8 of them achieved the goal of the task, whether they completed it in the expected timeframe or not, then the task success rate would be 80%.
Time to completion
Time to completion is the amount of time it takes for a user to complete a specific task. It measures the duration of the task from the moment the user begins until the moment they finish.
Time to completion is important because it can provide insights into how easy or difficult it is for users to complete a task. If a task takes a long time to complete, it may indicate that the interface is confusing, that the task is too difficult, or that there are usability issues that need to be addressed. On the other hand, if a task can be completed quickly and efficiently, it can indicate that the interface is intuitive and user-friendly.
Error rate
Error rate is the percentage of participants who make errors while completing the usability task. Errors can be organized into critical and non-critical errors:
Critical errors impact the success of the task – they stop participants from successfully completing the task. These are errors that definitely need addressing.
Non-critical errors are minor issues, such as visual inconsistencies, typos, or slightly confusing language. While they might not prevent the user from completing a task, they can create frustration or confusion, and impact overall satisfaction with the product.
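To make the quantitative metrics above concrete, here’s a minimal sketch of how you might calculate them from raw session results. The data structure and field names are hypothetical (tools like Lyssna usually calculate these for you), but the arithmetic is the same:

```python
# Hypothetical per-participant results for a single task.
# Field names are illustrative, not taken from any particular tool.
sessions = [
    {"completed": True,  "reached_goal": True,  "seconds": 42,  "errors": 0},
    {"completed": True,  "reached_goal": True,  "seconds": 61,  "errors": 1},
    {"completed": False, "reached_goal": False, "seconds": 90,  "errors": 2},
    {"completed": True,  "reached_goal": True,  "seconds": 38,  "errors": 0},
    {"completed": False, "reached_goal": True,  "seconds": 120, "errors": 1},
]

n = len(sessions)

# Completion rate: finished the task start to finish without abandoning it.
completion_rate = sum(s["completed"] for s in sessions) / n

# Task success rate: achieved the task goal, regardless of obstacles or time.
task_success_rate = sum(s["reached_goal"] for s in sessions) / n

# Time to completion: average duration for participants who completed the task.
completed_times = [s["seconds"] for s in sessions if s["completed"]]
avg_time_to_completion = sum(completed_times) / len(completed_times)

# Error rate: share of participants who made at least one error.
error_rate = sum(s["errors"] > 0 for s in sessions) / n

print(f"Completion rate:   {completion_rate:.0%}")
print(f"Task success rate: {task_success_rate:.0%}")
print(f"Avg time (s):      {avg_time_to_completion:.0f}")
print(f"Error rate:        {error_rate:.0%}")
```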
User ratings
User ratings are qualitative insights into what your participants think and feel about the usability of your design. They can be useful in understanding what is and isn’t working.
Usability testing logistics
If you’re planning in-person or moderated usability testing, you might need to tick off a few logistical items. Below is a handy list of things to consider before you get stuck in.
Location
If you’re testing in-person, you’ll need to decide on where you’ll conduct your testing sessions. You might also need to arrange for the necessary equipment to be there and make sure the space is comfortable and conducive to testing.
Scheduling
If running a moderated test, you’ll need to schedule testing sessions and make sure that you and your participants are available at the designated time. For in-person testing, you’ll also need to ensure that things like transportation, parking, and refreshments are in place.
Equipment and technology
Depending on the type of tests you’re running, you’ll need to make sure you have all the necessary equipment and technology set up, such as computers, cameras, microphones, recording software, and a stable internet connection.
Communication
If running moderated usability testing, you’ll need to consider how and when you’ll communicate with participants, for example during the recruitment, pre-test, and post-test stages.
Compensation and incentives
If providing compensation or incentives to your test participants, you’ll need to decide how and when these will be shared.
Data collection and analysis
You’ll need to work out how you’ll collect and analyze your data. If you’re using a usability testing tool like Lyssna, this is usually an included feature.
Ethics
You’ll also need to make sure testing is conducted ethically and that the privacy and confidentiality of your participants is protected.