
A/B Testing Ads For Pretotypes Is Nice, But Clarity Is Better

Testing acquisition and value at the same time is tempting, but it often leads to complexity and results that are hard to interpret. This post therefore makes the case for simplicity and its speed and cost benefits.

Published February 5, 2022

What’s great about quantitative experiments for us product folks is that, on top of assessing the value risk of a product or feature, we get insights on acquisition metrics (basically for free). It may therefore be tempting to put some additional effort into testing different ads at the same time as testing our product.

In this post, I argue why it’s important to at least keep these things mentally separate in the early stages of product validation. I will explain that

  • conversion is the more important metric for testing value,
  • reducing complexity leads to results that are easier to interpret, and
  • focusing on one hypothesis optimizes both €-to-data and hours-to-data.

Sounds good? Let’s dive in!

Regard Clicks As Given

When testing value risk, e.g. through a Fake Door MVP with traffic acquired through paid ads, users usually need to perform two actions:

  1. Clicking on the ad to get to the landing page, if interested, and
  2. making the commitment by signing up, indicating interest, paying upfront... you name it.

A well-performing ad with a good CTR usually indicates that it communicates the idea and its value clearly and convincingly. A lot of signups, on the other hand, indicate that people believe the value of the service outweighs the cost of making the commitment.

The key messages of the ad can be highly aligned with those of the landing page, or they can be totally misaligned. If the ad is overpromising or generic, chances are that CTR is high and conversions are low. If the ad is poorly designed or doesn’t fit your target group but leads to a convincing landing page for those who do get there, CTR may be low while conversions are stellar.

The point is: the ad and the landing page are different things altogether. In the end, no matter how badly your ad performs, if the users who do click through sign up in droves because your product hits a nerve, that’s a good sign! It possibly just means that the ad sucks.

That’s why, of the two actions mentioned above, the second one is the meaningful one for assessing value risk. And that action is measured by conversions rather than clicks, which leads me to treat these aspects as separate areas of concern and, ultimately, to treat the CTR as given when interpreting an experiment’s results.
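
To make that separation concrete, here is a minimal sketch with made-up funnel numbers (the figures and the funnel_metrics helper are purely illustrative): the CTR describes the ad, the conversion rate describes the value proposition.

```python
# Minimal sketch with hypothetical numbers: CTR and conversion rate are
# computed from different steps of the funnel and answer different questions.

def funnel_metrics(impressions: int, clicks: int, signups: int) -> dict:
    """CTR reflects how well the ad pulls people in; the conversion rate
    reflects how well the landing page convinces those who arrive."""
    return {
        "ctr": clicks / impressions,
        "conversion_rate": signups / clicks,
    }

# Weak ad, strong landing page: low CTR, stellar conversion.
print(funnel_metrics(impressions=10_000, clicks=40, signups=12))
# -> {'ctr': 0.004, 'conversion_rate': 0.3}

# Overpromising or generic ad, unconvincing page: high CTR, low conversion.
print(funnel_metrics(impressions=10_000, clicks=300, signups=9))
# -> {'ctr': 0.03, 'conversion_rate': 0.03}
```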

It’s Easy To Get Overwhelmed By Data

In a scenario where a lot of different ads run in parallel, leading to different versions of the landing page, there’s simply a lot going on. Untangling the causal relationships between all of these factors often brings not only complexity (and a longer time to get the setup working) but also uncertainty.


Quantitative experiments already have a fair number of degrees of freedom. In fact, they’re often a big bet on a certain set of product qualities (features, pricing, business model, etc.). In such a situation, it helps to reduce complexity as much as possible, e.g. by varying only one aspect of the experiment at a time. Leaving the acquisition channel out of the equation aids this goal.

So rather than A/B testing a lot of different ads, it may make sense to keep the ads constant so that the causal relationships remain traceable.

Splitting Your Budget And Attention

With every additional test you run in parallel, the absolute number of visitors per variant decreases if the budget stays the same. Pretotypes are designed to provide insights fast and on a budget of 10-100€. Spent on ads, this typically buys somewhere between 20 and 80 clicks. That is already low in statistical terms, so additionally testing different variants of a landing page splits users to a point where randomness clouds all insights.
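
To get a feel for how little a split sample can tell you, here is a minimal sketch with hypothetical numbers: roughly 60 clicks spread over three landing page variants, with a 95% Wilson score interval (one common way to gauge the uncertainty of a proportion) around each conversion rate.

```python
# Minimal sketch with hypothetical numbers: 60 clicks split over three
# landing page variants leave confidence intervals so wide that the
# variants are statistically indistinguishable.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (more reliable than the
    plain normal approximation at small sample sizes)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Three variants, ~20 clicks each, with made-up signup counts.
for name, signups, clicks in [("A", 6, 20), ("B", 4, 20), ("C", 8, 20)]:
    low, high = wilson_interval(signups, clicks)
    print(f"Variant {name}: {signups}/{clicks} signups, 95% CI {low:.0%} to {high:.0%}")

# The intervals overlap almost completely, so none of the variants can be
# declared the winner from a sample of this size.
```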

For ads, this is slightly different, but the argument still largely holds: there may be slight indications that one ad performs better than another even with a small number of clicks. And yet, setting this up means a lot more time spent on creating the ad/page variants, which increases the time to actual data.


Focus Pocus

With these three aspects in mind, I feel the magic comes from increasing the number of iterations rather than the complexity. This means keeping as many aspects as possible constant and only changing the ones to be tested. It definitely helps to have a clear hypothesis for each experiment, e.g. by using Strategyzer’s Test Cards.

And while the insights regarding acquisition are useful, they should be treated as a by-product in early product validation. If all goes well, the time will eventually come when acquisition is the more pressing problem. Until then, keep it simple, stupid.

Timothy Krechel

Innovation Consultant
