Now that your test is up and running, don’t fall victim to a dangerous A/B testing mistake…ending your test too soon!
If one of your variations is "underperforming", don't let yourself extrapolate that early data into lost sales or lost email subscribers. There's often a high degree of randomness early on in A/B tests that can mislead you into picking losing variations if you don't let the test keep gathering data.
To avoid this trap, YOU MUST resist the impulse to stop A/B tests early and declare apparent winners before the data is in.
There are 3 testing minimums you should honor when running A/B tests:
Minimum Testing Time: 2 weeks
Minimum Conversions: 100
Minimum Statistical Significance (a.k.a. Chance to Beat Original): 95%
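To make the "Chance to Beat Original" threshold concrete, here is a minimal sketch of one common way that metric is estimated: sampling plausible conversion rates for each page from a Beta distribution and counting how often the variation comes out ahead. The function name and the uniform Beta(1, 1) prior are assumptions for illustration, not how any particular A/B testing tool computes it.

```python
import random

def chance_to_beat_original(orig_conv, orig_visitors,
                            var_conv, var_visitors, draws=20000):
    """Estimate P(variation's true conversion rate > original's).

    Each draw samples a plausible conversion rate for both pages from a
    Beta(1 + conversions, 1 + non-conversions) posterior, then checks
    which page wins. The fraction of wins approximates the 'Chance to
    Beat Original'.
    """
    random.seed(42)  # fixed seed so the illustration is reproducible
    wins = 0
    for _ in range(draws):
        p_orig = random.betavariate(1 + orig_conv,
                                    1 + orig_visitors - orig_conv)
        p_var = random.betavariate(1 + var_conv,
                                   1 + var_visitors - var_conv)
        if p_var > p_orig:
            wins += 1
    return wins / draws

# 80 conversions from 1,000 visitors vs. 50 from 1,000: a clear edge
print(chance_to_beat_original(50, 1000, 80, 1000))
```

With identical results on both pages, the estimate hovers near 50%, which is exactly why an early "lead" of a few conversions tells you almost nothing.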
Next: Forget About Your A/B Test For 2 Weeks
One of the best ways to keep yourself from getting fooled by random A/B test data is to use a “Set-It-and-Forget-It” automatic winner setting.
This A/B testing feature allows you to preset a minimum amount of time, a minimum number of conversions, and a confidence threshold that must all be met before your A/B testing software automatically declares the winning variation.
If you use any of our Thrive Themes A/B testing tools (Thrive Leads, Thrive Optimize, Thrive Headline Optimizer and Thrive Quiz Builder), make sure to use this set-it-and-forget-it automatic winner feature so you never let irrational emotions stop a split test before the statistically significant winner emerges.
After 2 weeks have passed, go back and check whether the test produced a clear winner, still needs more data to reach a statistically significant decision, or was simply inconclusive.
And once your test is over, it's time to reflect on everything your A/B test taught you.
Click the next lesson arrow below to find out how to capture all your results and valuable insights for future A/B tests and marketing efforts.