A/B Testing: Efficiency is the key to a good strategy

Manu Arjó
4 min read · May 24, 2024


I want to talk to you about A/B testing because you have all heard of it and are eager to put it into practice. However, in A/B testing, as in almost everything, efficiency is the key to good execution.

In the world of Product Management, A/B testing is akin to a helmet for a motorcyclist: you may not like wearing it, but when push comes to shove, you’ll be grateful to have it on. It helps us work safely, protecting us when things go wrong.

You all know Barcelona is a “motorcycle” city. Full of them. I have been riding motorcycles for more than 15 years myself. And, of course, the law mandates that you must wear a helmet to ride a bike. However, wearing a helmet only because the law requires it means that we can choose helmets that offer less protection but are more comfortable.

The same applies to A/B testing: doing A/B tests “just because” means we are not implementing the best possible tests. And spending time on tests that are less helpful than others represents an opportunity cost.

But how do we ensure that we are wearing the correct helmet and not one that is going to hinder us more than protect us? This is where efficiency comes into play in our A/B tests.

What is essential in an A/B test?

  • Clear hypothesis: Without a well-defined hypothesis, we are driving without knowing our destination. If you don’t know what you’re testing, don’t attempt it.
  • Control group and test group: We must be clear about what we want to test and what we want to control.
  • Success metrics: You need to know what success looks like in your test. We must have a goal. Otherwise, how will you know if what you’ve implemented is working?

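To make the “success metrics” point concrete, here is a minimal sketch (hypothetical numbers, plain Python standard library) of how you might check whether the difference between your control and test groups is statistically meaningful, using a standard two-proportion z-test:

```python
import math

def ab_test_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B test.

    conv_*: number of conversions; n_*: visitors in each variant.
    Returns (absolute lift, z statistic, p-value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical example: 5.0% vs 5.6% conversion over 10,000 visitors each
lift, z, p = ab_test_significance(conv_a=500, n_a=10_000,
                                  conv_b=560, n_b=10_000)
```

In this made-up example the p-value lands just above 0.05, so despite a visible lift we could not yet declare a winner — exactly why the success metric and the threshold must be agreed on before the test starts.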
The 3 golden rules that we apply in Interflora’s Product and UX team to conduct efficient A/B tests:

  1. The test must affect 100% of the traffic. If we make improvements to our Product that only affect 5% of our traffic, we are wasting our time. I’m confident we can make improvements that will benefit the majority of our users. Don’t divide the audience so much that we end up with samples too small to draw conclusions. Seek balance.
  2. What we test should not be so minor that measuring the changes or improvements takes months. If the changes are very subtle (in terms of user perception), the improvements will be minimal. Identify what really needs improving. Don’t conduct an A/B test just for the sake of it. Find the pain point and address it precisely.
  3. The tests should not be so complex that we cannot interpret the results. If measuring the effects of the A/B test requires us to calculate how much we have invested in acquiring traffic for each variant, how much we have earned in each variant, and then subtract discounts and delivery costs to obtain the net margin… we will not draw clear conclusions, and we will have wasted our time. If the functionality you want to test is so complicated that it needs a manual, remember: Keep It Simple. Simplify until it’s something everyone can understand.
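Rules 1 and 2 ultimately come down to sample size. A rough back-of-the-envelope estimate (the standard two-proportion power calculation; the base rates and lifts below are hypothetical) shows why a subtle change needs vastly more traffic than a substantial one:

```python
import math

def sample_size_per_variant(base_rate, min_lift, ):
    """Approximate visitors needed per variant to detect an absolute lift.

    Uses the standard two-proportion formula with z = 1.96
    (5% significance, two-sided) and z = 0.84 (80% power).
    """
    z_alpha, z_beta = 1.96, 0.84
    p1, p2 = base_rate, base_rate + min_lift
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical 5% base conversion rate:
subtle = sample_size_per_variant(0.05, 0.001)   # detect a 0.1 pp lift
substantial = sample_size_per_variant(0.05, 0.01)  # detect a 1 pp lift
```

Under these assumptions the subtle change needs hundreds of thousands of visitors per variant, while the substantial one needs only a few thousand — which is why testing barely perceptible changes on a sliver of traffic drags on for months.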

Another thing that I have discovered after having worked in various flower delivery e-commerce platforms is that each product is unique, and each user (or target), is a universe. So, before diving into an A/B test, understand your product and your users well. Understand your environment before taking action. Don’t just apply what worked elsewhere; each context is unique.

Additionally, our entire team should know what we are testing and why. Communicate and establish expectations and objectives. And not just our team: our stakeholders must also be informed of what is being tested, when, and why. A test needs to be more than just an idea; we must have a clear strategy, communicate it, and apply it consistently. Clear communication helps us avoid problems.

If our product is user-centric, our tests should reflect that. Embrace the values of our product. Avoid trying things that go against the philosophy of our product.

If we decide to try something, ensure a solid rationale is behind it. And if the data indicates that we are mistaken, accept it and learn from it. Statistics don’t lie (although liars can use statistics!).

A/B testing must be well-prepared, and we must be clear about what we aim to measure. Focus on what will genuinely add value to our product and our users, and we will flourish 💐.


Manu Arjó

Head of Product at Interflora. Previously at Be.green, Colvin, Softonic and Groupalia.