Analyzing tree testing results
Creating and sending your test to participants is just the beginning of the process; the fun stuff happens when you receive your results and can start analyzing them for insights. In this chapter, we'll cover how to analyze your results and use them to inform design decisions.
Organizing responses
If your tree test is moderated and in-person, you'll need a system for capturing both quantitative and qualitative data, and a way to organize it for analysis. We'd recommend organizing the different data types separately and bringing them together afterward for comparison and to build a narrative.
Luckily, if you're using a platform like Lyssna for remote unmoderated testing, your data should be organized for you.
Identifying patterns and trends in navigation choices
With your results organized in a meaningful way, you can start identifying patterns and trends in the data. The metrics you'll likely have at hand include success rate, directness, time to completion, and the common paths your participants took.
Success rate
Your success rate tells you the percentage of participants who found the correct answer. A high success rate indicates fewer (or no) severe issues. According to a study conducted by Bill Albert and Tom Tullis, a success rate of 61–80% can be considered 'good', 80–90% 'very good', and anything above 90% 'excellent'.
However, the best frame of reference is your own previous data. You should also consider the complexity of the task you're asking participants to complete – for mission-critical or revenue-generating tasks, aim for a 'very good' or 'excellent' success rate.
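As a rough illustration, here's how you might calculate a success rate yourself from exported results. The record format is a hypothetical example, not any particular platform's export schema:

```python
# Hypothetical export: one record per participant task attempt.
results = [
    {"participant": "p1", "success": True},
    {"participant": "p2", "success": True},
    {"participant": "p3", "success": False},
    {"participant": "p4", "success": True},
]

def success_rate(records):
    """Percentage of participants who reached a correct destination."""
    successes = sum(1 for r in records if r["success"])
    return 100 * successes / len(records)

print(f"Success rate: {success_rate(results):.0f}%")  # 75%
```

Per-task rates are usually more informative than one overall figure, so in practice you'd group records by task before applying a calculation like this.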
Directness
Directness measures the efficiency of participants in reaching the correct answer without backtracking. It reflects the percentage of users who navigate directly to the correct destination within the tree structure. A high directness percentage signifies a smoother user experience, indicating that users can easily find what they’re looking for without unnecessary detours or confusion.
Even if a task yields a high success rate, a low directness percentage suggests inefficiencies in navigation. For instance, participants may have had to backtrack or explore multiple options before reaching the desired destination, signaling potential usability issues or shortcomings in the information architecture.
What you don’t want is multiple participants selecting an incorrect answer without backtracking. This indicates a flaw in the tree's structure or labeling.
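A minimal sketch of how directness could be computed, again assuming a made-up record format where each attempt notes whether the participant backtracked. It also flags the "confidently wrong" case described above:

```python
# Hypothetical per-participant records: did they succeed, and did they
# backtrack (go back up the tree at any point) along the way?
records = [
    {"success": True,  "backtracked": False},
    {"success": True,  "backtracked": True},
    {"success": False, "backtracked": False},  # direct but wrong: a red flag
    {"success": True,  "backtracked": False},
]

def directness(records):
    """Share of participants who went straight to an answer, no backtracking."""
    direct = sum(1 for r in records if not r["backtracked"])
    return 100 * direct / len(records)

def direct_failures(records):
    """Count of incorrect answers chosen without any backtracking."""
    return sum(1 for r in records if not r["success"] and not r["backtracked"])

print(f"Directness: {directness(records):.0f}%")       # 75%
print(f"Direct failures: {direct_failures(records)}")  # 1
```

A high `direct_failures` count is worth investigating even when directness looks healthy, since it points at labels that confidently lead people to the wrong place.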
Time to completion
Time to completion measures the amount of time it takes for participants to successfully navigate through the tree structure and complete a given task.
Low completion times suggest that users can quickly and effortlessly find the information they need, indicating an intuitive IA design. They reflect clear labeling, logical organization of content, and ease of navigation within the tree.
High completion times can indicate usability issues within the IA, such as confusing labeling, unclear hierarchy, or non-intuitive navigation paths. Participants might take longer to find information or run into obstacles that make the task hard to finish, leaving them frustrated or causing them to give up entirely.
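When summarizing completion times yourself, the median is usually a safer headline figure than the mean, because a single distracted participant can inflate the average. A small sketch with invented times:

```python
from statistics import mean, median

# Hypothetical completion times in seconds for one task's successful attempts.
times = [12.4, 15.1, 9.8, 44.0, 13.2]

# The 44-second outlier drags the mean well above the typical experience;
# the median stays close to what most participants actually saw.
print(f"mean:   {mean(times):.1f}s")    # 18.9s
print(f"median: {median(times):.1f}s")  # 13.2s
```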
Paths
Analyzing the paths your participants take helps you identify both patterns and unexpected choices.
Identifying common paths allows you to recognize prevalent user behaviors and preferences, highlighting areas of the IA that are intuitive and well-structured.
Uncovering unexpected paths can reveal potential areas for optimization within the IA. If a significant number of participants deviate from the expected navigation paths or encounter obstacles during their journey, it suggests usability issues or points of confusion within the IA design.
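One simple way to surface common and unexpected paths is to count identical click sequences. The node labels and expected path below are illustrative only:

```python
from collections import Counter

# Hypothetical click paths, one tuple of node labels per participant.
paths = [
    ("Home", "Products", "Pricing"),
    ("Home", "Products", "Pricing"),
    ("Home", "Support", "Pricing"),
    ("Home", "Products", "Pricing"),
]

# Frequently-taken paths that differ from the expected route deserve
# a closer look: they show where the tree pulls people off course.
expected = ("Home", "Products", "Pricing")
for path, count in Counter(paths).most_common():
    marker = "" if path == expected else "  <- unexpected"
    print(" > ".join(path) + f" ({count}){marker}")
```

Sorting by frequency like this makes the dominant route obvious at a glance, and any runner-up path taken by more than a handful of participants is a candidate for a labeling or structure fix.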
Interpret the findings to inform design decisions
Analyzing the findings from a tree test involves more than just examining numerical metrics; it requires interpreting both quantitative data and qualitative insights gathered from follow-up questions. By combining numerical data with qualitative feedback, you can gain a deeper understanding of user behavior and preferences.
Once you’ve identified patterns and trends in the data, you can draw insights to inform design recommendations. For instance, if your tree test reveals that a significant number of participants consistently took an unexpected path instead of the correct one, this could prompt you to recommend revising the labels or structure of the IA to better align with user expectations.
Similarly, if follow-up questions reveal confusion or frustration related to specific labels, you can use this qualitative feedback to refine the wording or organization of information within the IA.
The key is to use numerical data as a guide, but also to rely on qualitative insights to provide context and depth to the findings. By triangulating both types of data, you can develop actionable recommendations that address usability issues and enhance the overall user experience. This iterative approach ensures that design decisions are grounded in evidence and directly address the needs and preferences of your target audience.