A/B testing is a scientific process that product teams use to validate hypotheses and improve their product. At a high level, A/B testing helps product managers learn and iterate more quickly by making changes to features, rolling out those changes to some users (while showing other users the original version), and then evaluating the outcome.
As an example, a product manager might change the location of an Add to Cart button, swap out different product images, or tweak copy on a website. By seeing which variations perform better than others, product teams can make countless improvements and also get some signal on why.
Using the right product analytics can help make A/B testing a simple, effective process.
The right way to approach A/B testing
The first step in building a data-driven A/B testing methodology is to recognize all the sources of information it can provide. While many product managers use A/B tests to figure out whether people prefer option A over option B, in fact A/B tests are often more useful when they surface the downstream effects of your tests.
Here’s a good way to think about it: Say you want to optimize your website. You hypothesize that by changing the image of your product on your homepage, more new visitors to the site will click your free trial button. So you set up a test to measure this: half of the new visitors to your site will see the new image, and half will see the existing image.
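In practice, an experimentation tool like Optimizely handles this assignment for you. As a minimal sketch of the idea, here is one common approach: hash each visitor's ID so the same visitor always lands in the same bucket. (The function and experiment names here are hypothetical, for illustration only.)

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a visitor into 'control' or 'treatment'.

    Hashing the user ID together with the experiment name gives each
    visitor a stable assignment, so a returning visitor always sees
    the same image for the life of the test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    # A 50/50 split: buckets 0-49 see the new image, 50-99 the old one.
    return "treatment" if bucket < 50 else "control"

variant = assign_variant("visitor-123", "homepage-image-test")
```

Because the assignment is a pure function of the ID, no per-user state needs to be stored to keep the experience consistent.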
After three weeks, you measure the results: 50% of the people who saw your new image clicked the free trial button, compared to 39% of the people who saw your old image. If you stopped right here, you’d have useful information that would help you improve your product.
But the really useful information—the information that truly tells you what about your test was successful—arrives when you start measuring the downstream effects of your change.
For example, while 50% of people who saw your new image clicked the free trial button (that’s 28% more than the percentage of people who saw the old image), how likely were those 50% to move to your paid plan? What kind of retention rates did they exhibit? How did they use your product in the trial stage?
If 28% more people clicked the free trial button, but that group was 75% less likely to move to the paid version, then in fact you’ve reduced overall revenue for your company. And if they’re less likely to keep using your product six months or a year from now, then you’ve actually increased churn!
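The arithmetic is worth working through. Using the example's click rates (and an assumed, purely illustrative trial-to-paid rate, since the example doesn't give one), the net effect on paid conversions per visitor comes out sharply negative:

```python
# Click rates from the example above; the paid-conversion rate is an
# assumption for illustration, not a figure from a real experiment.
control_click_rate = 0.39        # old image -> free-trial clicks
treatment_click_rate = 0.50      # new image -> free-trial clicks

paid_rate_control = 0.10                         # assumed trial-to-paid rate
paid_rate_treatment = paid_rate_control * 0.25   # "75% less likely"

# Paid conversions per site visitor in each arm.
paid_per_visitor_control = control_click_rate * paid_rate_control
paid_per_visitor_treatment = treatment_click_rate * paid_rate_treatment

lift = paid_per_visitor_treatment / paid_per_visitor_control - 1
print(f"Net change in paid conversions per visitor: {lift:.0%}")  # -68%
```

The more clicks the new image wins are swamped by the drop in downstream conversion, which is exactly why measuring only the local metric is misleading.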
The larger point is that A/B tests aren’t only about optimizing an extremely local target. Done right, they’re about optimizing your business. And the way to make sure you’re doing that is to measure all the data they produce.
How FSAstore.com built a framework for success
Let’s examine what this process looks like in action. In a recent webinar, Rishabh Vig, Product Analyst at FSAstore.com, explained how his team uses their product analytics toolset for experimentation and analysis, and introduced his team’s framework for running A/B tests.
As the largest online marketplace of FSA-eligible products, FSAstore.com provides tools, resources, and products to help people eliminate the guesswork from their FSA expenses. To deliver on its brand promise to provide simple, convenient FSA services, the product team at FSAstore regularly uses A/B testing to improve the user experience.
As Rishabh explained, when FSAstore started running A/B tests, they lacked a defined process for doing so. Indeed, since nearly anything within a product can be subjected to A/B testing, every product team needs to develop a focused way to manage these experiments. Over time, and by combining tools such as Optimizely and Heap, Rishabh and his team built out a data-driven framework for running A/B tests.
At a high level, it looks like this:

[Framework diagram from the webinar]

To hear Rishabh walk through this process at a more detailed level, and learn about some of Optimizely’s best practices for developing an effective A/B process, you can watch the entire webinar on demand here.
An example of using Heap and Optimizely together
One specific use case that Rishabh and his team used Heap and Optimizely for was to up-sell white-label products on their website. (“White-label products” are items produced by one company and rebranded by another.) This involved experimenting with a pop-up box that encouraged website visitors to try out a white-label product. Their hypothesis was that if the site presented this pop-up when visitors were looking at a branded product, more visitors would choose to purchase the white-label product.
To measure these experiments, the product team tracked two main metrics: swap rate, the percentage of customers who swap out a non-white-label product for a white-label product, and overall product penetration, the share of sales on the site that a specific product accounts for across their customer base.
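As a rough sketch of how these two metrics could be computed from order records, here is a minimal example. The record structure and field names are hypothetical, not Heap's actual schema, and the swap-rate definition here is a simplification of the team's metric.

```python
# Hypothetical order records: each order lists its items and whether
# the customer was shown the white-label pop-up.
orders = [
    {"user": "a", "saw_popup": True,  "items": [{"sku": "wl-1", "white_label": True}]},
    {"user": "b", "saw_popup": True,  "items": [{"sku": "br-1", "white_label": False}]},
    {"user": "c", "saw_popup": False, "items": [{"sku": "br-2", "white_label": False}]},
    {"user": "d", "saw_popup": False, "items": [{"sku": "wl-2", "white_label": True}]},
]

def swap_rate(orders):
    """Share of pop-up viewers whose order contained a white-label item."""
    shown = [o for o in orders if o["saw_popup"]]
    swapped = [o for o in shown if any(i["white_label"] for i in o["items"])]
    return len(swapped) / len(shown) if shown else 0.0

def penetration(orders):
    """Share of all orders containing at least one white-label item."""
    wl = [o for o in orders if any(i["white_label"] for i in o["items"])]
    return len(wl) / len(orders)
```

In a real setup these counts would come from Heap events rather than an in-memory list, but the ratios being compared are the same.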
FSAstore ran these tests by setting up Heap snapshots for the pop-up, then measuring pop-up vs non-pop-up against everyone who added white-label products to their cart. To their delight, they measured a swap rate of 21%. They also increased their white-label penetration by 37%.
Delivering on its brand promise
Today, the product team launches one to two A/B tests each week. Nearly 40% of those tests give them information to measurably improve their site. Rishabh notes that they’ve been able to use Heap and Optimizely to achieve a clear segmentation of test performance with different user groups and dimensions. The impact of user behavior across the entire funnel has also become much clearer, and they’ve removed blind spots to aid more targeted iterations.
Most important of all, their A/B testing program has enabled the product team to understand their customers’ behavior, which gives them the insight they need to fulfill their brand promise of an easy shopping experience.
To learn how you can use Optimizely and Heap together to improve the user experience, and hear key strategies for a hypothesis-driven approach to product management, watch our on-demand webinar with FSAstore.com here.