What is a CRO test?
We'll take you through what a test is, the importance of a hypothesis, and what methods of 'testing' you can conduct.
What is a CRO test?
A CRO test is a method of working out which change to your website produces the best outcome for your conversion rate. CRO tests most commonly come in the form of A/B tests, but there are also ways of determining the best outcome that don’t require the traffic A/B tests do.
Any kind of test should be backed by data (analytics, heatmaps, interviews, observations from session recordings, user feedback, etc.) in order for it to be valid, i.e. solving a real problem. Otherwise you’ll just be injecting bias into your testing, and that’s a sure-fire way to get mediocre results.
Let’s talk about A/B testing
A/B testing tends to be an umbrella term covering a few testing methods. The main benefit of testing is that you can see whether a change makes an impact in a controlled environment. Any test should start with a hypothesis to prove or disprove, just like any experiment. A hypothesis should follow this format:
If [VARIABLE], then [RESULT] due to [RATIONALE]
A strong hypothesis helps you think through your reasons for testing and the main goal you are tracking (e.g. conversion rate). The purpose of the test is then to prove or disprove that hypothesis. Here’s an example:
If we make the product page CTA more prominent in the visual hierarchy, then add-to-baskets will increase due to more people noticing the CTA and clicking add to basket.
Let’s talk through each main type of CRO test with this example in mind:
Split testing (A.K.A: A/B testing)
Split testing is when you test one version of a webpage against another, showing the original version to 50% of your audience, and the changed version to the other 50% of your audience, hence the term split test.
Let’s say you are testing one change to the CTA area on your ecommerce product page. You’ll be testing the new CTA area against the original to see which one has the greater impact on conversions.
For split testing, you’ll first need to work out your sample size, because this will determine whether you can run a test at all. Sample size isn’t just based on the number of visitors your page receives, but also on how many conversions it receives. If you get a lot of traffic but few conversions, it will take much longer to build a large enough sample, which risks an invalid result and carries an opportunity cost. If you have good traffic and lots of conversions, your A/B test will take less time to complete and is more likely to give you a valid result.
Let’s say you have 100,000 visitors and 5,000 conversions a month on your product page; each version would then receive 50,000 visitors and 2,500 conversions.
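To make the sample size question concrete, here is a minimal Python sketch using the standard two-proportion formula (at 95% confidence and 80% power). This is an illustration only, not the article's own method, and no substitute for a proper test calculator; the 5% baseline conversion rate and 10% target uplift below are assumed example figures:

```python
from math import ceil

def sample_size_per_variant(baseline_rate, relative_uplift,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant for a two-proportion test.
    Default z-values correspond to 95% confidence and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Assumed example: 5% baseline conversion rate, hoping to detect
# a 10% relative uplift (i.e. 5% -> 5.5%)
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 per variant
```

With the example traffic above (50,000 visitors per variant per month), a requirement of roughly 31,000 visitors per variant would be met within a month; note that detecting smaller uplifts requires dramatically more traffic, because the required sample grows with the square of the effect size.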
A/B/n testing
A/B/n testing is when you test more than one alternative version of your webpage. In this case, you’re testing two new CTA variations against the original.
Because you are testing more than one change, you are splitting your audience into three groups rather than two. This means each group will have a smaller sample size than in a straight A/B test, and the test will take longer to run.
Now your 100,000 visitors and 5,000 conversions a month will be split three ways, and each version would receive roughly 33,333 visitors and 1,666 conversions.
Multivariate testing
Multivariate testing is when you test more than one variable at the same time on your webpage. Multivariate tests are used to see which combination of changes works best for your audience.
Multivariate testing requires more traffic, because testing multiple changes means testing every possible combination of them. Let’s say you want to test 3 new versions of your CTA area plus 3 new versions of a re-ordered information section: including the originals, that’s 4 options for each element, so 4 × 4 = 16 different combinations to test.
With a multivariate test, your 100,000 visitors and 5,000 conversions a month will be split across the sixteen combinations, and each variation would only receive around 6,250 visitors and 312 conversions.
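The split arithmetic in the examples above (one audience divided evenly across however many variations you run) can be sketched in a few lines of Python, using the article's illustrative 100,000 visitors and 5,000 conversions a month:

```python
# Illustrative traffic-split arithmetic, assuming an even split
# across all variants (originals included).
monthly_visitors = 100_000
monthly_conversions = 5_000

def per_variant(n_variants):
    """Visitors and conversions each variant receives per month."""
    return monthly_visitors // n_variants, monthly_conversions // n_variants

print(per_variant(2))   # A/B test: (50000, 2500)
print(per_variant(3))   # A/B/n test: (33333, 1666)

# Multivariate: 3 new CTA versions + original = 4 options, times
# 4 options for the information section = 16 combinations.
combinations = (3 + 1) * (3 + 1)
print(combinations, per_variant(combinations))  # 16 (6250, 312)
```

The point the numbers make: every extra variant or combination shrinks each group's sample, so multivariate tests only make sense on high-traffic, high-conversion pages.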
What goals should you be measuring whilst testing?
You can test to improve the conversion rate for any goal, not just hard sales. Most A/B tests focus on conversion rate for sales and transactions, but you can also test to improve the number of people adding to basket or entering your checkout/funnel, and people even test smaller metrics (known as micro-metrics) like interactions with filters, navigation, and search. Whilst testing for uplifts in micro-metrics, you should always keep an eye on conversion rate for the segments you’re testing on, just to make sure you’re not implementing changes which reduce sales volumes or order value.
What other types of CRO test are there?
- Launching the changes without testing, then measuring the impact
This is what marketers used to do on their websites before they discovered testing. Changes would be launched as a ‘test’ on the live website, then after a couple of weeks to a month the stats would be analysed to determine whether the change impacted conversion rate positively or negatively. This isn’t necessarily the wrong way to do things if you can’t A/B test, but it comes with uncertainties. Seasonality can impact metrics: if you made the change during a peak month, the next month’s stats may not show the impact, and vice versa, the uplift from the peak month might mask any impact the change had. Comparing the month of the change to the same month the previous year is also problematic because of growth, changing traffic, changing buying behaviours, and anything else that may have changed within that year.
- User testing
User testing is when you set tasks or questions for an audience, either recruited or drawn from your own pool, in order to gain insights into user behaviour. User testing can be a relatively quick way to validate your changes, assuming you are asking the right questions.
The two main ways we use user testing are via user testing videos (e.g. via usertesting.com), where people carry out tasks on your website, and via user testing screenshots (e.g. via UsabilityHub), where people answer questions based on screenshots/designs you show them. Both methods are excellent for collecting feedback, validating the changes you are planning to make, and understanding any further changes you may need to make before launch.
Conclusion & takeaways
Now you understand CRO testing, you’re in a better position to take the next step: research, write some hypotheses, and start testing. If you have the traffic and conversions, then you should A/B test your changes. If you don’t have the traffic to A/B test, it’s advisable to run your changes through a phase of user testing to validate them or gather feedback before they are launched and measured.
If you need to start A/B testing but don’t have the in-house expertise or resource, or if you are testing but not seeing results, then Worship can help you start or improve your optimisation programme. Just give us a call on 0161 236 1188 or email [email protected] to get in touch.