Preference testing guide
Read on to find out everything you need to know about preference testing – including presenting design options, crafting effective questions, and analyzing results – so you can gather valuable insights into user preferences and make informed design decisions.
Unlocking insights with preference testing
When taking a preference test, a participant is shown multiple design options and asked to choose which one they like best. Preference tests are often used to measure aesthetic appeal or desirability, but you can also ask participants to judge designs based on their trustworthiness or how well they communicate a specific message or idea.
You can test a whole range of things – videos, logos, color palettes, icons, website designs, sound files, mock-ups, copy, packaging designs, and more. In Lyssna, you can test up to six design options at the same time.
A benefit of preference testing is that it can be done during the design process, so you can get feedback early and iterate as needed. Preference testing allows you to gather both quantitative and qualitative feedback, which helps you understand the "what" as well as the "why".
How to run a preference test
When you run a preference test, you include three sections:
The instruction.
Your different design options.
Any follow-up questions you want to ask.
Below are some tips on how to set up your preference test for success.
Ready to start preference testing? Check out our library of ready-made preference testing templates to get started.
1. Ask the right questions
You may be tempted to run a preference test simply to settle an argument about two different design versions. Asking a question like, “Which design is best?” might settle it. The problem is that you're asking people to judge the design itself, and they may not be qualified to do that.
Instead, we suggest asking a question that has a more specific lens, like:
Which design communicates the concept of “human-centered” the best?
Which design is easier to understand?
Which do you find easier to read?
This level of specificity allows you to evaluate your hypothesis with more precision. However, if you want to ask a more general question, we suggest asking, “Which design do you like best?” This asks the participant to reflect on their own preferences rather than on which design may be objectively better – people are far more qualified to tell you what they prefer than to judge what is or isn't good design.
2. Include different design options
Participants are shown all the design options side-by-side and have to look at each one individually before they decide which one they like best.
Preference tests are great for comparing different versions of the same design. But remember that your participants will be playing “spot the difference.” If your design options are too similar, they may struggle to identify what they’re being asked to judge. Don’t be afraid to crop a full-page design to focus on a particular area of variance.
Preference tests are compatible with various different asset formats, including JPEGs, GIFs, PNGs, MP3s, and MP4s. You can even mix and match asset formats within one preference test.
Your preference test design options don’t need to be the same size or shape, either.
3. Ask follow-up questions
Follow-up questions help you gather qualitative data. You can use any question type for follow-ups (including single choice, multiple choice, and linear scale), but we suggest using a free-text entry field so participants can explain their decision.
Follow-up questions appear next to the design option the participant has chosen, so they can keep their choice in view while giving you more feedback.
Great follow-up questions can be deceptively simple, but can elicit detailed feedback from your participants. They might include:
Why did you choose that design?
What did you like about that design?
What stands out the most to you about this design?
Once you get your responses, you can tag feedback into themes to get a high-level idea of how many participants shared similar feedback. You can also filter by each design to see follow-up responses from people who chose that option.
4. Recruit your participants
After setting up your preference test, it's time to recruit your participants. In Lyssna, you can either recruit from your own network or the Lyssna panel. If recruiting from the panel, you can filter by demographics to match your target audience.
Conversion rate optimization (CRO) expert Rich Page suggests the following when deciding on how many people to recruit:
"I suggest getting at least 100–200 users, as this will give you a good amount of responses to analyze. Ideally more is better, but it depends on what your budget is. Each tester will cost between $1–5, depending on if you ask further questions or if you have a very specific target audience. If you ask just two questions and have simple demographics for your target audience, this will cost you just $200 for 100 testers, which will be the best money you can spend!
You can also recruit your own testers if you want to save money. For example, you could email your preference test to a random sample of your customers, or to a list of friends. Bear in mind, you'll get different feedback from users that already know your business, so you may want to do a few rounds of preference testing to include users who don't know about your business."
5. Analyze your results
After running a preference test, you’ll be shown the number of participants who preferred each design on the results page.
If you compared two designs, you'll also see an indication of which design is the statistically significant leader. Statistical significance tells you how likely it is that the best-performing design is genuinely preferred, rather than simply coming out ahead by random chance.
The level of significance you can reach depends on your sample size and on how far apart the designs’ results are: larger samples and bigger gaps in performance both make it easier to reach significance.
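To get a feel for how sample size and the margin between designs interact, here's a minimal Python sketch (illustrative only – not Lyssna's internal calculation). It uses SciPy's two-sided binomial test to ask how likely a two-option result at least this lopsided would be if participants had no real preference; the `preference_significance` helper and the vote counts are made up for the example.

```python
# Illustrative only – not Lyssna's internal calculation.
# A two-sided binomial test asks: "If participants had no real preference
# (a 50/50 split), how likely is a result at least this lopsided?"
from scipy.stats import binomtest  # requires SciPy 1.7+

def preference_significance(votes_for_a: int, total_votes: int, alpha: float = 0.05):
    """Return the p-value and whether design A's lead is significant at `alpha`."""
    result = binomtest(votes_for_a, n=total_votes, p=0.5, alternative="two-sided")
    return result.pvalue, result.pvalue < alpha

# The same 60/40 split is inconclusive with 30 participants
# but significant with 200 – which is why sample size matters.
print(preference_significance(18, 30))    # p ≈ 0.36 – not significant
print(preference_significance(120, 200))  # p ≈ 0.006 – significant
```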
Using preference test choices with logic
In Lyssna, preference test choices can be used as conditions with logic. This means that you can hide or show a section or question after the preference test based on how the participant answered. This also applies to the follow-up questions in your preference test.
For example, you could run a preference test to help you understand whether your participants prefer option A or option B. Later on, you might want to show one five second test to participants who preferred A and a different five second test to participants who preferred B. Or, you may want to tweak your follow-up questions based on whether they chose A or B – for example, “Why did you choose A?” and “Why did you choose B?”.
Different ways to compare designs
Preference tests use comparisons as a test structure, but you can compare designs in many ways using Lyssna.
Preference tests are best used when you want participants to look at multiple designs at the same time, but sometimes that isn’t ideal – especially when seeing both could introduce bias.
If you want feedback on two or more versions of a design but don’t want your participants to view them at the same time, consider a variation set instead. Variation sets allow you to send option A to one group of participants and option B to another, so each group's feedback isn't colored by seeing the alternative.
You can also create a test with multiple sections. For example, you could add a design survey to your test, which will allow you to compare different designs without showing them side-by-side.
CRO expert Rich Page is an advocate of using preference testing to boost conversion rates, suggesting it's often a better option than A/B testing. Below, Rich shares his thoughts on why he opts for preference testing.
Use preference testing to boost conversion rates
CRO is now being used by many smart online businesses to increase the percentage of users that convert into sales or leads on their website, and to significantly improve their revenue.
And many of them are doing A/B testing to help them find out which version of their proposed improvements converts better.
However, there are three major problems with A/B testing:
It doesn’t tell you why your visitors preferred the winning version. It only tells you which version won (if any).
It doesn’t give you any feedback from your visitors about the improvements you were A/B testing.
Most websites don’t have enough traffic to run A/B tests at all – or only enough to test their highest-traffic pages.
Do any of these frustrating issues sound familiar to you?
This is why you should start using preference testing instead of A/B testing to get ideas and feedback on what to improve on your website.
Preference test your key web pages
I recommend preference testing each of the key pages on your website, particularly your home page, product pages, and checkout or signup flow.
Comparing two versions of these pages will give you great ideas on how to improve the winning design even further, such as making it easier to use or explaining information more clearly. If many users said they didn't notice the difference between the two designs, make the improvement more noticeable.
If the new design doesn’t get very good feedback, conduct more conversion research (such as customer surveys, web analytics reports, visitor session recordings, user testing, or CRO audits). Then, do another round of preference testing on your new design options.
Once you’ve decided on a final version of the improved page based on the insights and feedback, it’s time to launch your new design.
To make sure the improved page gets good results, you'll need to watch how your conversion rate and revenue change in Google Analytics 4 over the next few weeks, comparing against the previous few weeks and the same period last year.
If you have enough traffic (at least 5,000 visitors per week to the page you're testing), you should also run an A/B test on this winning page to confirm that it has a positive impact on your conversion rate and revenue.
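If you do run that confirmation A/B test, a quick way to sanity-check the outcome is a two-proportion z-test on each variant's conversion counts. The sketch below is a minimal illustration using statsmodels; the visitor and conversion numbers are hypothetical, and in practice you'd pull them from your A/B testing tool or analytics export.

```python
# Illustrative sketch – assumes you've exported visitor and conversion counts
# for the original and improved page from your A/B testing or analytics tool.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical numbers: 5,000 visitors per variant over the test period.
conversions = [150, 195]   # original page, improved page
visitors = [5000, 5000]

# One-sided test: is the improved page's conversion rate higher than the original's?
# alternative="smaller" tests whether the first proportion is below the second.
z_stat, p_value = proportions_ztest(conversions, visitors, alternative="smaller")

rate_original = conversions[0] / visitors[0]   # 3.0%
rate_improved = conversions[1] / visitors[1]   # 3.9%
print(f"Original: {rate_original:.1%}, Improved: {rate_improved:.1%}, p = {p_value:.3f}")
# With these made-up numbers p ≈ 0.007; a p-value below 0.05 suggests
# the lift is unlikely to be down to random chance.
```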