April 18, 2022

The ABCs of A/Bs

Testing is critical to learning about customer needs, but it is rarely conducted with the rigor required to drive more effective marketing. Our latest piece delves into the pitfalls.

Dynamic marketers talk about testing and learning, measuring success, being agile, and adjusting to what works. But often the tests marketers run lack the rigor required to understand what is working. There are so many ways to set up testing that we can only focus on a few of the pitfalls, such as:

Not considering statistical significance.

In A/B tests we need to account for the size of the audience, the expected shift in behavior, and the holdout group’s size. At GALE we have developed a tool to show confidence rates at different holdout sizes, given expected lift in target behaviors.
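As a rough illustration (this is not GALE's tool, just the standard power calculation that sits underneath this kind of analysis), the sketch below computes statistical power at several holdout sizes, given an assumed baseline conversion rate, expected lift, and audience size:

```python
# A minimal sketch: statistical power at different holdout sizes for a
# conversion-rate A/B test. All inputs below are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

BASELINE_RATE = 0.04                      # assumed baseline conversion rate
EXPECTED_LIFT = 0.10                      # assumed 10% relative lift
TREATED_RATE = BASELINE_RATE * (1 + EXPECTED_LIFT)
AUDIENCE = 500_000                        # assumed total addressable audience

effect = proportion_effectsize(TREATED_RATE, BASELINE_RATE)  # Cohen's h
power_calc = NormalIndPower()

for holdout_pct in (0.01, 0.05, 0.10, 0.20):
    n_holdout = int(AUDIENCE * holdout_pct)
    n_treated = AUDIENCE - n_holdout
    power = power_calc.power(effect_size=effect, nobs1=n_holdout,
                             alpha=0.05, ratio=n_treated / n_holdout)
    print(f"holdout {holdout_pct:.0%}: n={n_holdout:,}  power={power:.2f}")
```

The smaller the expected lift, the larger the holdout needs to be before a null result means anything, which is why the holdout size has to be decided before the test rather than after.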

Not monitoring behavior beyond the test.

Often, marketers stop monitoring when the campaign ends and move straight to presenting a PowerPoint with learnings. While this is necessary and often valid, it can understate the impact beyond the campaign. For example, a customer who is saved in a three-week defection campaign provides lift well beyond those three weeks, as their unsaved peers may have stopped transacting altogether.
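One way to avoid this is to keep the holdout intact and re-measure after the campaign window closes. The sketch below assumes a hypothetical data model: a transactions table with customer_id and date columns, and an assignment table mapping customer_id to a test or control group:

```python
# A minimal sketch: does the lift from a defection campaign persist after
# the campaign ends? Table and column names are hypothetical.
import pandas as pd

def post_campaign_lift(txns: pd.DataFrame, assignments: pd.DataFrame,
                       campaign_end: str, weeks_after: int = 8) -> float:
    """Difference in the share of customers still transacting post-campaign."""
    end = pd.Timestamp(campaign_end)
    window = txns[(txns["date"] > end) &
                  (txns["date"] <= end + pd.Timedelta(weeks=weeks_after))]
    still_active = set(window["customer_id"])
    rates = (assignments
             .assign(active=assignments["customer_id"].isin(still_active))
             .groupby("group")["active"]
             .mean())
    return rates["test"] - rates["control"]
```

If the test group is still transacting at a higher rate eight weeks out, the campaign's true value is larger than the in-flight readout suggested.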

Testing without a hypothesis.

The quest for learnings often produces an overzealous set of variables to test. Instead, we should be thinking about which marketing interventions should have a positive impact for a given audience, focusing on priority tests rather than testing for the sake of it. This improves significance and reduces execution complexity.

Stratifying your holdouts.

Selecting audiences at random is generally sound, but not always. For some of our clients there are “whale” customers who have an outsized impact on the business. Defining these whales often involves a behavioral attribute from first-party data (e.g., visits, spend). Often, whales should either be excluded or specific stratification rules applied to holdouts, so that a proportional mix of customer types lands in both test and control, as sketched below.
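As an illustration, here is a minimal sketch of a stratified split in pandas; the 95th-percentile spend cutoff used to flag whales is a hypothetical rule, not a universal one:

```python
# A minimal sketch: stratified holdout assignment so that "whales" appear in
# test and control in proportion. The whale definition here is hypothetical.
import pandas as pd

def stratified_holdout(customers: pd.DataFrame, holdout_frac: float = 0.10,
                       seed: int = 42) -> pd.DataFrame:
    df = customers.copy()
    # Hypothetical rule: top 5% of spenders form their own "whale" stratum.
    df["stratum"] = pd.qcut(df["annual_spend"], q=[0, 0.95, 1.0],
                            labels=["regular", "whale"])
    # Sample the holdout within each stratum so the customer mix matches.
    holdout_idx = (df.groupby("stratum", observed=True, group_keys=False)
                     .sample(frac=holdout_frac, random_state=seed)
                     .index)
    df["group"] = "test"
    df.loc[holdout_idx, "group"] = "control"
    return df
```

Excluding whales before the split works just as well; the point is that test and control end up with the same customer mix, so a handful of outsized accounts cannot swing the readout.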

Failed experiments are learnings.

Avoid succumbing to the pressure of having positive news to share when it comes to testing. You cannot hit home runs all the time. In fact, learning what does not work is a step closer to figuring out what does. Standing on the shoulders of failed experiments may be just as helpful as relying on the tests that succeed. And do not be too quick to judge, as a number of variables may have confounded your test (e.g., sending an email on the Friday before a long weekend).

Testing is often spoken about, but rarely with the specificity required for it to drive more effective marketing. In the digital age, it is more important than ever to learn strategically against hypotheses with credible studies. When done well, every day is an opportunity to learn about your customers.