Create the Ultimate User Experience with A/B Testing
Experimentation and testing are not new concepts. As business pressures increase and the need to improve merchandising and marketing efforts comes to the forefront, you need to go beyond assumptions and intuition when making investment decisions for your e-commerce site, mobile applications, and other digital commerce touchpoints. Testing and experimentation allow you to examine everything you do digitally, starting with simple A/B testing methods.
W. Edwards Deming said, "People don't cause defects, systems do." You create the system your visitors must navigate, and systems can be tested, measured, and optimized. You need to know whether the assumptions you made when creating your site meet the needs and expectations of your audience. More important, you need to know how to optimize that system so you can market more effectively.
So, What Is A/B Testing?
A/B testing was popularized by the Coke vs. Pepsi blind taste tests of the 1980s, but the underlying idea dates back to the mid-1700s, when James Lind ran a controlled trial in search of a cure for scurvy. Modern-day A/B testing originates from the statistical method called Design of Experiments (DOE), developed by Sir Ronald A. Fisher in the 1920s and 1930s. It has applications that can help you use response data to do everything from designing a better promotion for the holiday season to creating a better landing page or even developing a more user-friendly tablet kiosk.
A/B tests allow merchants to present different experiences to different users and measure the resulting purchase behavior to identify the "better" experience. In its simplest form, A/B testing compares two variants of the same experience; extending the idea, you can A/B test four different layouts of a landing page, where the element of comparison is still the entire page or the entire experience. Multivariate testing, by contrast, tests each sub-element, such as content, layout, and imagery, to find the combination of all of those elements that yields the most effective landing page.
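To make that distinction concrete, here is a minimal Python sketch, not taken from any particular testing tool; the variant names and element options are hypothetical. An A/B (or A/B/n) test assigns each visitor to one complete page, while a multivariate test assigns each visitor to one combination of sub-elements.

```python
import itertools
import random

# A/B (or A/B/n) test: the unit of comparison is the whole page.
# Four hypothetical landing-page layouts compete as complete experiences.
landing_page_variants = ["layout_a", "layout_b", "layout_c", "layout_d"]

def assign_ab_variant() -> str:
    """Randomly assign a visitor to one complete landing-page layout."""
    return random.choice(landing_page_variants)

# Multivariate test: the unit of comparison is each sub-element, so every
# combination of content, layout, and imagery becomes its own test cell.
elements = {
    "content": ["short_copy", "long_copy"],
    "layout": ["single_column", "two_column"],
    "imagery": ["lifestyle_photo", "product_photo"],
}

# 2 x 2 x 2 = 8 combinations to evaluate against one another.
combinations = list(itertools.product(*elements.values()))

def assign_multivariate_cell() -> dict:
    """Randomly assign a visitor to one combination of sub-elements."""
    return dict(zip(elements.keys(), random.choice(combinations)))
```

The combinatorial growth is why multivariate tests need far more traffic than simple A/B tests: every added element multiplies the number of cells you must fill with visitors.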
In their seminal book Always Be Testing (Wiley Publishing, 2008), authors Bryan Eisenberg and John Quarto-vonTivadar demonstrate the impact that effective A/B testing had at Overstock.com via seemingly small changes to the site. They emphasize that minor changes, carrying minimal opportunity cost to the company, can make a significant difference to the user experience.
As explained in the book, the management team at Overstock.com felt that its Movies landing page was not converting at the hoped-for rate. A/B testing revealed that the banner at the top of the page was deterring users from scanning the page for further product information: despite helpful categories located below it, the banner seemed to suggest that the page's contents were kid-oriented. Upon changing that one page element to something that conveyed a general-purpose page, the team reported a 33 percent decrease in the page abandonment rate.
TMO: (T)est, (M)easure, (O)ptimize
These three distinct activities form the core of any optimization strategy.
Testing, in the context of site optimization, involves comparing elements of your site and campaigns to see which experience persuades your visitors to take the desired action, be it adding an item to the cart or viewing more product detail pages. In this stage, you determine which A/B testing tools and processes are required, and then put them to use.
Measuring is the stage where you capture the actual data that your tests produce. Data capture is performed in a variety of ways, depending on the nature of your tests and the tools you use. For example, if conducting focus groups was part of your "Test" stage, data capture typically includes observing and interviewing focus group participants. If you are using free tools, such as Google Website Optimizer, as part of your overall test plan, measurement will involve some level of automated reporting and manual number crunching. This is analytical, "left-brain" work, so remember to staff appropriately.
You'll need to decide which key performance indicators (KPIs) to track as your success metrics: for example, sessions, bounce rate, or average order value.
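As an illustration of the kind of number crunching involved, here is a minimal Python sketch, assuming all you have is raw visitor and conversion counts per variant (the counts below are hypothetical). It computes each variant's conversion rate and applies a two-proportion z-test, one common way to judge whether an observed difference is statistically significant.

```python
from math import erf, sqrt

def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-tail approx.
    return z, p_value

# Hypothetical counts: variant A converts 200 of 10,000 visitors, B 260 of 10,000.
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"A: {conversion_rate(200, 10_000):.2%}  B: {conversion_rate(260, 10_000):.2%}")
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```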
Optimizing is the stage where you act on your test results, guided by what your KPIs are telling you, and put optimization techniques in place to realize site improvements. This goes beyond declaring which variant won; it means implementing changes that improve conversion rate metrics. This is also the stage where human judgment and insight play their most significant role in your overall site optimization strategy. Returning to our staffing discussion, remember to staff this stage with the right kind of people: those who can translate numbers into actionable business decisions, get buy-in from key stakeholders, and implement changes on the site. This is also the stage where you'll need to initiate cross-functional collaboration across your organization as it relates to the site.
A/B Testing for All Your Digital Commerce Touchpoints
As you can see, A/B testing offers nearly limitless possibilities for improving your Web site in ways that drive sales. It is also extremely helpful in maximizing usability and sales for in-store digital kiosks, mobile, and tablet commerce. The advent of tablet commerce has opened up vast user experience possibilities, and savvy retailers are embracing this new channel by deploying new customer touchpoints and other innovations. Most of these user experiences, however, are based on retailers' understanding of consumers' preferences on the Web, with design decisions driven largely by instinct and anecdotal observation. A/B testing can provide the much-needed quantitative foundation for the decisions that drive innovative user experiences on these new channels.
A/B Testing "Best Practices"
Here are some suggestions to consider as you go down the path of incorporating A/B testing as part of your overall site optimization strategy:
- Begin with a test plan, including defining a specific hypothesis, goals for the test, and success metrics. Resist the urge to jump straight into running ad-hoc tests.
- If multiple tests are running on a site, a customer should participate in only one A/B test at a time; the bucketing sketch after this list shows one way to enforce this. If you are using an external tool, make sure you clarify this aspect with your vendor.
- Test pages that get a good amount of traffic. Without significant traffic (relative to the overall traffic volume on your site), you may not be able to reach statistically significant observations within a reasonable amount of time; the sample-size sketch after this list illustrates why.
- Make the duration of the test directly proportional to the number of items tested. The reasoning is the same as in the previous point: more variations mean the traffic is split into more groups, so each group needs more time to accumulate significant data.
- Choose a success metric tied to an action that happens often on your site; frequent events accumulate the data you need much faster than rare ones.
- Tests served to a higher percentage of traffic yield results faster. If you are trying to minimize customer impact, try to reduce the scope of the experiment instead of the traffic percentage to which it is served.
- Test dramatically different concepts for best results. This is especially true for simple tests. Don't test "Checkout" buttons in slightly different shades of green!
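Two of the points above lend themselves to a short sketch. Below is a minimal Python illustration, assuming each visitor carries a stable customer ID; the test names and traffic numbers are hypothetical. Hashing the ID gives every returning customer the same single test and variant, and a standard two-proportion approximation estimates how many visitors each variant needs before a given lift becomes detectable.

```python
import hashlib
from math import ceil

ACTIVE_TESTS = ["homepage_banner_test", "checkout_flow_test"]  # hypothetical

def _bucket(customer_id: str, salt: str, buckets: int) -> int:
    """Deterministically map a customer ID into one of `buckets` groups."""
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def assign_test_and_variant(customer_id: str):
    """Place a customer in exactly one active test, then in one variant."""
    test = ACTIVE_TESTS[_bucket(customer_id, "test-split", len(ACTIVE_TESTS))]
    variant = "A" if _bucket(customer_id, test, 2) == 0 else "B"
    return test, variant

def visitors_needed_per_variant(base_rate: float, relative_lift: float,
                                z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Rough sample size to detect a relative lift at ~95% confidence
    and ~80% power (standard two-proportion approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / (p2 - p1) ** 2
    return ceil(n)

# A 2% baseline conversion rate and a hoped-for 10% relative lift require
# roughly 80,000 visitors per variant; low-traffic pages take far longer.
print(visitors_needed_per_variant(0.02, 0.10))
```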
A/B testing is a means to an end, not an end in itself. The ability to run A/B tests successfully doesn't mean much unless you can apply the results to effect positive change in site revenue and related performance indicators. Don't hesitate to work with your CRM team to leverage your collective experience in getting the most from these efforts.
There is no substitute for talking with customers. Although you can garner some insight into your customers' mindset through these tests, they are no substitute for direct interaction through your contact centers, surveys, social media destinations, and other channels.
A/B testing is no substitute for usability testing. Plan to conduct, or continue conducting, usability testing as part of your overall site management strategy.
Vinod Kumar is an e-commerce strategist at Demandware, with a primary focus on business analysis and Web analytics.