What Is A/B Testing and How To Do It in Programmatic?


Adopting new technologies can be tricky. Each vendor swears that their product is the one that will deliver the highest uplift and increase the viewability of ad units, and, of course, that it will do all this without compromising the user experience. But can these vendors truly deliver on such promises? Or are they simply repeating the well-rehearsed claims of their marketing teams?

You can take all the guesswork out of the equation by implementing A/B testing.

What is A/B testing?

Simply put, A/B testing is a randomized experiment with two variants: A and B. It compares two versions of the same variable and checks which one achieves the desired effect. One crucial aspect of this technique is that you must define your hypothesis and criteria for success before the test begins. In other words, what effect are you looking to achieve? Is it a higher click-through rate, or perhaps a higher CPM? Whatever it is, you need to decide before the test starts.
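To make that concrete, here is a minimal sketch (in Python, with purely hypothetical names and numbers) of what writing down the hypothesis and success criteria before the test might look like:

```python
# A hypothetical experiment definition, pinned down before any traffic is split.
# The metric, uplift threshold, and duration are illustrative, not recommendations.
experiment = {
    "name": "new_header_bidding_wrapper_vs_current_setup",
    "hypothesis": "The new wrapper lifts session CPM by at least 5%.",
    "primary_metric": "session_cpm",      # the one metric that decides the winner
    "minimum_uplift": 0.05,               # smaller differences are treated as noise
    "traffic_split": {"control": 0.5, "variant": 0.5},
    "planned_duration_days": 14,          # fixed up front, not when results look good
}
```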

Why should you perform A/B testing?

The main benefit of A/B testing is that it shifts the perspective from “I think” to “I know”. You can make your decision based on hard data that points to the best solution, as opposed to a “hit and hope” approach.


A/B testing in programmatic advertising

In an ideal world, every time you as a publisher consider implementing new technology, whether it is a new Header Bidding system, an automated pricing management solution, or simply a new ad format that you want to compare to its predecessor or an alternative, you should be able to make an informed decision about what actually works. Unfortunately, few tech vendors consider it best practice to test their solution against the existing setup. If a vendor doesn't want to A/B test, the obvious question is: are they worried that their solution will not live up to the hype?

How to A/B test?

Even though at its core A/B testing is straightforward, there are a number of logical steps you should follow to get the most reliable results:

  1. Look for places to optimize

Collect insights on the efficiency of your Header Bidding, pricing strategy, and other aspects of the Programmatic ecosystem. Look for opportunities to improve your performance. 

  2. Define success

The goals you set are what you will later use to determine which variant has been more successful. Make them very specific, for example "increase average CPM by 5% within two weeks" rather than "improve revenue", so that there is no room for guesswork.

  3. Create a hypothesis

What do you predict will happen? Why? It is important to make some forecasts, even if they turn out not to be supported by your data. It's still a learning opportunity.

  4. Generate variations

Here you split your traffic into an experimental group and a control group (ideally 50/50). Traditionally, the experimental group gets the shiny new solution, technique, or strategy, while the control group keeps what you have been doing so far (see the traffic-split sketch after this list).

  5. Run the experiment

Just do it! At this point, your traffic is split between the control and the experimental variation. Depending on your resources, it is optimal for the two groups to be equally distributed and for the test to run over the same period and within the same geo, so that other variables are kept as constant as possible.

  6. Analyze the results

You can now run a statistical analysis to determine whether there is a statistically significant difference between the two variations (see the significance-test sketch after this list). If there is, then based on your definition of success and the hypothesis you created in steps #2 and #3, you can easily say which variation has emerged victorious.
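For step #4, one common way to split traffic is to bucket each user deterministically, so the same user always lands in the same group. Below is a minimal traffic-split sketch of that idea in Python; the hashing scheme and names are illustrative assumptions, not any particular vendor's method.

```python
import hashlib

def assign_variant(user_id: str, experiment_name: str = "hb_wrapper_test") -> str:
    """Deterministically assign a user to 'control' or 'variant' (roughly 50/50)."""
    # Hashing the user ID together with the experiment name keeps
    # different experiments independent of each other.
    digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash onto 0-99
    return "control" if bucket < 50 else "variant"

# The same user always gets the same assignment:
print(assign_variant("user-12345"))
print(assign_variant("user-12345"))
```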
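And for step #6, the right statistical test depends on your success metric. If the metric were a click-through rate, a two-proportion z-test is one standard choice; here is a minimal significance-test sketch with made-up numbers, using SciPy:

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: clicks and impressions per group.
control_clicks, control_impr = 1_180, 250_000
variant_clicks, variant_impr = 1_310, 250_000

ctr_control = control_clicks / control_impr
ctr_variant = variant_clicks / variant_impr

# Pooled CTR under the null hypothesis that both variations perform the same.
p_pool = (control_clicks + variant_clicks) / (control_impr + variant_impr)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_impr + 1 / variant_impr))

z = (ctr_variant - ctr_control) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test

print(f"control CTR={ctr_control:.4%}, variant CTR={ctr_variant:.4%}")
print(f"z={z:.2f}, p-value={p_value:.4f}")
```

A p-value below whatever significance threshold you committed to up front (0.05 is a common choice) would suggest the difference is unlikely to be noise; for a revenue metric such as CPM you would reach for a t-test or a similar comparison of means instead.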

If your choice was the winner – well done! And if you didn't predict the more successful option, that's not a problem either. A/B testing is always a learning opportunity, even if it means sticking with the existing solution or setup. And if the experiment yields no result, meaning there was no significant difference between the variations, go back to the drawing board. You can always generate new hypotheses or new success metrics, and you can always run a new test.

LET’S GET IN TOUCH!

Do you have any questions regarding optimization with A/B testing?
Please feel free to contact us – our team will be more than happy to discuss it with you!


Zuzanna Zarębińska
Strategy Analyst
publishers@yieldbird.com

