5 reasons why your A/B test does not work

September 27, 2022 in Digital Marketing Blog Posts

Your website matters to you as a sales channel, so you track conversions and look for ways to improve them. In that case, you or your agency are certainly familiar with the concept of an A/B test. And if you are not, you should be. An A/B test is a very useful tool, irreplaceable in the process of improving conversions: you generate hypotheses, verify them, and if they work, deploy them on the site. And the numbers go up.

So much for theory. In practice, of course, it is not that easy, especially when you are waiting for a test result and none comes, or one arrives but is no real help in your decision-making. “What’s wrong?” you ask. The answer will not please you: it is probably you who is wrong, because you are doing one of the things you should avoid when running an A/B test.

1. Testing without a hypothesis

“Instead of the blue button, let’s use a green one and we’ll see.” Such an assignment for an A/B test is not unusual, and, as in many other cases, it should be followed by a question: “Why?” Ask it yourself whenever you devise a similar change on your site. Why should the green button be better than the blue one?

The explanation should take the user into account, and the user does not care that green is your favorite color. What you need is a hypothesis. “The page contains various elements in blue, and green contrasts with them more than another blue element would,” you say. “Green is a positive color and suggests moving forward, like a traffic light,” adds a colleague with a talent for psychology. Both hypotheses are valid. They may not turn out to be right, but they have a rationale behind them and at least elementary knowledge of user behavior. The A/B test is there to confirm or refute them.

Without a hypothesis, testing is like throwing darts at a target. You may hit something, but the result is mostly a matter of chance. In the context of button color testing, comparing blue with another randomly selected color from a reasonable palette (a light gray button on a white background cannot be called a reasonable option) will give you an outcome just as random as testing the same blue button twice, as the small simulation below shows.
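
Here is a minimal sketch in Python of such an “A/A test”: the same blue button shown to two groups of visitors. The 5% conversion rate and the visitor count are assumed example numbers, not data from a real test.

```python
# A/A test sketch: two groups see the identical button, yet pure chance
# still produces a "difference" between their measured conversion rates.
import random

random.seed(42)
TRUE_RATE = 0.05      # assumed baseline conversion rate
VISITORS = 2_000      # assumed visitors per variant

def simulate_conversions(rate: float, visitors: int) -> int:
    """Count visitors who convert, each with probability `rate`."""
    return sum(random.random() < rate for _ in range(visitors))

a1 = simulate_conversions(TRUE_RATE, VISITORS)
a2 = simulate_conversions(TRUE_RATE, VISITORS)
print(f"Variant A1: {a1 / VISITORS:.2%}, Variant A2: {a2 / VISITORS:.2%}")
# Both groups saw the same button, yet the measured rates differ -- the same
# kind of "winner" a no-hypothesis color change would hand you.
```

Run it a few times with different seeds and the “winner” keeps switching sides, which is exactly what a change made without a hypothesis buys you.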

2. Comparing more than one thing

Let’s return to the blue button. Does changing its color help conversion? We have a hypothesis which claims it does. It might also help to remove a paragraph of text above the form, and we have a good hypothesis for that too, as well as for changing the title size, removing a field from the form, and three more things on the page. So let’s test it all, ideally at the same time. After a week we find out that the new variant converts worse than the original.

You can probably guess what the problem is: the test detects whether a hypothesis is correct or not, but it cannot tell you which one. Testing multiple changes at once is problematic because the changes interact with each other. One bad change can decrease the conversion rate of the whole variant even though the other changes improve it, as the sketch below illustrates.
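
A quick illustration with invented numbers (none of them come from a real test): two helpful changes and one harmful one bundled into a single variant can make the whole bundle look like a loss.

```python
# Bundled changes hide each other's effects: the combined variant loses even
# though two of the three changes would have helped on their own.
baseline = 0.050                     # assumed original conversion rate
effects = {
    "green button": +0.006,         # helpful change (assumed)
    "removed paragraph": +0.004,    # helpful change (assumed)
    "smaller title": -0.015,        # harmful change (assumed)
}

bundled = baseline + sum(effects.values())
print(f"Original: {baseline:.1%}, bundled variant: {bundled:.1%}")  # bundle looks worse

for change, effect in effects.items():
    print(f"  {change} alone: {baseline + effect:.1%}")
# Tested together, the variant loses; tested one at a time, two of the three
# changes would have been kept and only the harmful one discarded.
```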

3. A campaign running during the test

Campaigns are a great way to raise awareness of a service or product and to increase web traffic and conversions. Hand in hand with that, however, comes a significant and mostly negative effect on your statistics, and on the A/B test as well. Why is that?

Campaigns bring many people to the site who come to get your product or decide to buy it during the visit. But they also bring a large group of people who were merely curious about the ad and have no real interest in the product or service. These people leave the site immediately, click around it, or even start the conversion process and quit halfway through. The ratio of conversions to traffic drops, which significantly affects the test: both variants end up with such a low conversion rate that the difference between them shrinks and it becomes difficult to pick a winner, as the toy numbers below show.
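
These figures are entirely made up, but they show the dilution: campaign visitors with no intention to convert lower both variants’ measured rates and shrink the gap the test has to detect.

```python
# Dilution sketch: extra low-intent campaign traffic drags both measured
# conversion rates down and narrows the gap between the variants.
def diluted_rate(conversions: int, organic_visitors: int, campaign_visitors: int) -> float:
    """Conversion rate after adding campaign visitors who do not convert."""
    return conversions / (organic_visitors + campaign_visitors)

organic = 2_000                      # assumed organic visitors per variant
conv_a, conv_b = 100, 120            # assumed conversions: 5.0% vs 6.0%

for campaign in (0, 2_000, 6_000):   # extra low-intent visitors from the campaign
    rate_a = diluted_rate(conv_a, organic, campaign)
    rate_b = diluted_rate(conv_b, organic, campaign)
    print(f"campaign visitors {campaign:>5}: A {rate_a:.2%}, B {rate_b:.2%}, "
          f"gap {rate_b - rate_a:.2%}")
# With 6,000 extra visitors the gap falls from 1.00% to 0.25%, so the test needs
# far more traffic and time before it can tell the variants apart.
```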

4. The conversion rate is too low

This is linked to the previous point, but it shows up even if you are not running a campaign or any other active traffic source. It is a bit of a vicious circle: you want to increase conversions, but the A/B test takes too long because both variants have a very low conversion rate. What to do at such a moment? Consider whether it makes sense to run the test at all and whether such minor modifications even have a chance to improve conversions; the rough calculation below suggests why. The real problem may lie in the product, in the way you offer it, or in an unsuitable traffic source. An A/B test is not a universal savior. When a car is wrecked, a new coat of paint is not enough.
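
For rough orientation, the standard two-proportion sample-size approximation shows how quickly the required traffic grows as the baseline conversion rate falls. The baseline rates and the 10% relative improvement below are assumed example values, not figures from any real test.

```python
# Back-of-the-envelope sample size for an A/B test at 95% confidence and 80% power,
# using the common two-proportion approximation.
from math import ceil

def visitors_per_variant(base_rate: float, relative_lift: float,
                         z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate visitors needed per variant to detect the given relative lift."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

for rate in (0.01, 0.03, 0.10):                          # assumed baseline conversion rates
    n = visitors_per_variant(rate, relative_lift=0.10)   # aiming at a 10% relative improvement
    print(f"baseline {rate:.0%}: ~{n:,} visitors per variant")
# The lower the baseline conversion rate, the more traffic (and time) the test needs;
# at some point fixing the product or the traffic source matters more than testing.
```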

5. Dependent testing

This is less an error than an inherent downside of A/B testing. The result of one test can affect the next, and in the end you may get a variant that is better than the original, but not the best possible one. In fact, you may lose the best one along the way.

Imagine you are testing the blue button against the green one, and the result is that the green button yields a higher conversion rate. Then you come up with a hypothesis that, for some reason, text B could work better than the original text A. You run a new test and find that the hypothesis was correct: text B converts better.

Did you really get the best option? Maybe not. What if text B would work even better on the original blue button? This question should be asked before the first test, and the answer is multivariate testing: instead of two variants you test four against each other, a blue button with text A, a blue button with text B, a green button with text A, and a green button with text B. Such a test obviously takes longer, but the result is the best possible variant of the button.
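
A minimal sketch of that 2 × 2 setup follows; the conversion counts are invented placeholders, chosen so that the blue button with text B ends up on top, i.e. exactly the combination the two sequential tests above would have thrown away.

```python
# Multivariate (2 x 2) sketch: build every color x text combination and compare
# their measured conversion rates side by side.
from itertools import product

colors = ["blue", "green"]
texts = ["text A", "text B"]
variants = list(product(colors, texts))   # 4 combinations tested simultaneously

# assumed observed results: (conversions, visitors) per combination
observed = {
    ("blue", "text A"): (95, 2000),
    ("blue", "text B"): (130, 2000),
    ("green", "text A"): (110, 2000),
    ("green", "text B"): (118, 2000),
}

for variant in variants:
    conv, n = observed[variant]
    print(f"{variant[0]:>5} button + {variant[1]}: {conv / n:.2%}")

best = max(variants, key=lambda v: observed[v][0] / observed[v][1])
print("Best combination:", best)
# Green beats blue with text A, and text B beats text A on the green button,
# yet the overall winner is the blue button with text B -- the variant that
# two sequential A/B tests would never have tried.
```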

How to avoid mistakes like these?

You do not need to be deeply versed in statistics, but you do need a thorough understanding of how an A/B test works. It is important to plan for the longer term and always base the test on realistic assumptions. Ideally, use the services of a web integrator who has experience with testing and who can deliver not just a dry result but also an analysis of why the result is what it is. Then A/B testing will be a real benefit for you.
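
If you want to peek under the hood, one common way to evaluate an A/B test is a two-proportion z-test. The sketch below uses made-up conversion counts and skips the safeguards a real testing tool adds (sequential checks, multiple-comparison corrections), but it shows what “statistically significant” means in practice.

```python
# Two-proportion z-test on invented conversion counts: is the observed difference
# between variants A and B larger than chance alone would plausibly produce?
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal-approximation p-value

p_value = two_proportion_z_test(conv_a=100, n_a=2000, conv_b=135, n_b=2000)
print(f"p-value: {p_value:.3f}")   # below 0.05 -> the difference is unlikely to be chance
```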
