The Ins And Outs Of A/B Testing

A/B testing, put simply, is the practice of testing two versions of something and picking a winner based on data. In the world of internet and digital marketing, those things could be your call to action (CTA), the color of your CTA button, your email subject line or anything else that might impact how your audience responds. We use A/B testing to help our clients achieve higher conversion rates and deliver a better user experience for their website visitors. While A/B testing sounds simple in theory, we’ve found that there are finer nuances you need to get right in order to derive value from it.

Should You Be Doing A/B Testing?

While there’s certainly merit to A/B testing for conversion optimization, it might not be the best option for everyone. Unless your website attracts sizable traffic, you can probably skip A/B testing and look at other methods of conversion optimization.

How much traffic is enough? The sample size required for your desired goal is the answer. Calculating the sample size for an A/B test is fairly involved and best left to a statistician, but there are several online sample size calculators that can help you.

These calculators provide an estimate of the sample size required per variation. For example, say my site has a baseline conversion rate of 2% and I choose a minimum detectable effect of 20% (relative). The minimum detectable effect is the smallest relative improvement over the baseline that you want the experiment to be able to detect at a given level of statistical significance. I set the statistical significance to 95%. The tool I used indicated that I need approximately 33,000 users per variation: 33,000 for option A and another 33,000 for option B. You can then use your website analytics to determine how much traffic your site gets per month. Since we want to limit outside factors, we should try to complete the test as quickly as possible. If your site gets only 10,000 visitors a month, you would not be able to test effectively at this minimum detectable effect.
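If you’re curious what such a calculator does under the hood, here’s a minimal sketch in Python using the standard normal-approximation formula for comparing two proportions. It assumes 80% statistical power, a setting most calculators also ask for; different power and formula choices are why its output (roughly 21,000 per variation) is smaller than the 33,000 figure above.

```python
from statistics import NormalDist


def sample_size_per_variation(baseline, relative_mde, alpha=0.05, power=0.80):
    """Approximate visitors needed per variation for a two-sided test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% significance
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return round(numerator / (p1 - p2) ** 2)


# 2% baseline conversion rate, 20% relative minimum detectable effect
print(sample_size_per_variation(0.02, 0.20))  # roughly 21,000 per variation
```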

How Long Should You Run an A/B Test For?

Run your A/B test until the suggested sample size is reached. As a rule of thumb, that should take at least two weeks, so you capture full weekly cycles of visitor behavior, and no more than four. Anything shorter or longer than that may not give you an accurate picture.
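As a quick sanity check, you can estimate the duration from the sample size and your traffic. The weekly visitor count below is an assumption you would replace with the actual number from your analytics:

```python
# Quick duration arithmetic using the sample-size example from earlier.
needed_total = 33_000 * 2     # both variations combined
weekly_visitors = 27_500      # assumption: your site's actual weekly traffic
print(f"{needed_total / weekly_visitors:.1f} weeks")  # 2.4 weeks -- in range
```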

What To A/B Test For?

Dig deep into your analytics reports to discover trends that can help you form a hypothesis that will inform your A/B testing strategy. For example, let’s say your website is experiencing high cart abandonment rates. Based on that trend, you can come up with any of the following hypotheses:

• The checkout form is too long/complex.

• Visitors aren’t sure about the security of their payment information.

• The checkout page introduces unexpected costs, such as shipping charges.

For each of these hypotheses, you can run an A/B test to see whether it holds.
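To make that concrete, here’s a minimal sketch, with made-up counts, of the kind of significance check a testing tool runs for one of these hypotheses, treating a shortened checkout form as variation B. This is one standard approach (a two-proportion z-test), not any particular tool’s implementation:

```python
from statistics import NormalDist


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Made-up counts: A (long form) converts 2.0%, B (short form) converts 2.5%
z, p = two_proportion_z_test(660, 33_000, 825, 33_000)
print(f"z = {z:.2f}, p = {p:.1e}")  # p well below 0.05 -> significant at 95%
```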

What If Your Hypothesis Is Wrong?

There’s always a chance that your hypothesis turns out to be incorrect. More than determining a winner, A/B testing is about gathering insights that you can then apply to other parts of your marketing and online presence.

Let’s say you run an A/B test on the color of your landing page’s CTA button because you were experiencing low conversion rates. The test results, however, show that the color of the button isn’t the problem. While it’s important to go on to A/B test other elements of your landing page, you shouldn’t bin the results of this one. Segment the data from your A/B tests to gather deeper insights. For example, new visitors might have responded better to the new CTA button, or desktop visitors may have converted at a higher rate than mobile visitors.
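Here’s a minimal sketch of that kind of segmentation pass using pandas, assuming a toy visitor log with one row per visitor; in practice you’d export this data from your testing tool:

```python
import pandas as pd

# Toy visitor log (made-up rows) with one record per visitor.
df = pd.DataFrame({
    "variation": ["A", "A", "A", "B", "B", "B"],
    "device":    ["desktop", "mobile", "desktop", "desktop", "mobile", "desktop"],
    "converted": [1, 0, 0, 1, 0, 1],
})

# Conversion rate per variation per device: a flat overall result can hide
# a segment (say, desktop visitors) where the new variation clearly wins.
print(df.groupby(["variation", "device"])["converted"].mean())
```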

Important Tips When A/B Testing

External factors can affect the results of your A/B test. Be sure to account for press mentions (positive or negative), paid campaigns, seasonal changes and other external factors, such as the holiday season. Let’s say an e-commerce retailer starts running a test in November and its sample size is reached by the middle of December. December is the height of the holiday season, when people shop online far more than usual, so the A/B test won’t give an accurate picture; external factors will have influenced customers’ decisions. Running A/B tests frequently is one way to mitigate external factors.

Steer clear of multivariate testing unless you get a huge amount of traffic to your landing page. Multivariate testing splits the audience across several versions of the same page, unlike an A/B test, which splits it between two variations. For multivariate testing to deliver statistically significant results, you need people arriving in droves in a short span of time, because each combination of elements must still reach the full per-variation sample size, as the rough arithmetic below shows.
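The element counts here are made up for illustration, but the multiplication is the point:

```python
# Rough arithmetic: the per-variation sample size applies to every
# combination of elements, so required traffic multiplies quickly.
per_variation = 33_000                       # from the earlier example
options = {"headline": 2, "cta_color": 2, "hero_image": 3}  # hypothetical
combinations = 1
for n in options.values():
    combinations *= n                        # 2 * 2 * 3 = 12 page versions
print(combinations * per_variation)          # 396,000 visitors in total
```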

Choosing An A/B Testing Tool

An A/B testing tool can help you set up and execute your tests effectively. When deciding on a tool, an intuitive user interface is key: it should be easy to navigate and to pull key reports from. Check the statistical model it uses as well. Some A/B testing tools use a Bayesian model, which we’ve found can be easier to understand. If you’re just starting out, the combination of Google Analytics and Google Optimize is a good way to learn the ropes, since both are free.
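To illustrate why the Bayesian framing can be easier to explain, here’s a minimal sketch, with made-up counts, of the general approach such tools take: model each variation’s conversion rate as a Beta distribution and estimate the probability that B beats A. The counts and the uniform prior here are assumptions, not any specific tool’s method:

```python
import random


def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000):
    """Monte Carlo estimate of P(variation B's true rate > variation A's)."""
    wins = 0
    for _ in range(draws):
        # Beta(successes + 1, failures + 1): posterior under a uniform prior
        rate_a = random.betavariate(conv_a + 1, n_a - conv_a + 1)
        rate_b = random.betavariate(conv_b + 1, n_b - conv_b + 1)
        wins += rate_b > rate_a
    return wins / draws


# Made-up counts: A converts 2.0% of 2,000 visitors, B converts 2.5%
print(prob_b_beats_a(40, 2_000, 50, 2_000))  # ~0.86
```

A result like 0.86 reads as "B has roughly an 86% chance of being the better variation," which many people find more intuitive than a p-value.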

A/B testing is worth considering for almost anyone looking to optimize conversions on their website. Just make sure your site has enough traffic to meet the sample size requirements, and don’t let your tests drag on so long that external factors skew the results.
