In September, I outlined the five strategic considerations for successful optimization campaigns. One of these steps is critical yet overlooked by countless marketers: testing. I gave the example of a gym membership—if you don’t leverage the gym (or your testing capabilities) in a consistent and disciplined fashion, it really doesn’t deliver any sort of value.
The most successful organizations are diving into systematic testing built on a well-developed strategic plan, a crystal-clear methodology, and universal goal alignment across stakeholders. This starts with a rock-solid methodology developed and agreed upon by all. What’s involved? For starters, getting those key players on board and soliciting feedback from all sides to ensure a deeply collaborative beginning. From there, it’s essential to fully invest in and carry out all core steps of the process, then “wash, rinse, repeat, and improve” the next time around.
It’s just like the sixth grade science fair—you need a hypothesis. What do you think will happen when you test this messaging or that promotion or the integration of a new purchase path? Although the scientific method for developing a conversion optimization hypothesis is all well and good, it’s also essential to keep an eye on inputs—think qualitative and quantitative data, tacit knowledge, and of course, your gut as a marketer—to put forth an educated guess as to what’s next. Then prove it. Or disprove it.
When you’re A/B testing, the approach and design can often seem like no-brainers: most likely, you’re testing your current site, or some element of it, against a new visual, message, promotion, or content piece. A/B testing, in general, is most effective when you’re comparing two versions of a single variable, such as color, marquee image, or promotional messaging. Keep an eye on sample sizes to ensure the numbers coming back are both solid and statistically significant, and be sure you’re collecting and assessing the data as it accumulates.
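To make “statistically significant” concrete, here’s a minimal sketch of the two-proportion z-test, a standard way to compare conversion rates between two variants. The traffic and conversion numbers below are hypothetical, and in practice your testing tool performs an equivalent calculation for you:

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 5,000 visitors per variant, 200 vs. 240 conversions.
z, p = ab_significance(conv_a=200, n_a=5000, conv_b=240, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers, the p-value lands just above 0.05, so a 20% apparent lift would still not clear the conventional 95% confidence bar. That’s exactly why sample size matters: a difference that looks impressive on a dashboard can be noise.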
For multivariate testing, site traffic should be evenly distributed across all versions being tested, so make sure you’ve got a good amount of traffic. Otherwise, A/B testing is likely the better choice to ensure concrete results in a reasonable amount of time.
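To see why variant count matters so much, here’s a rough back-of-the-envelope sketch using the common 16·p(1−p)/δ² rule of thumb for sample size (roughly 80% power at 5% significance). The base conversion rate, lift target, and variant count are hypothetical:

```python
def required_sample_per_variant(base_rate, min_lift, n_variants=2):
    """Rough per-variant sample size via the 16*p*(1-p)/delta^2
    rule of thumb (~80% power, 5% significance)."""
    delta = base_rate * min_lift  # absolute rate difference we want to detect
    n = 16 * base_rate * (1 - base_rate) / delta ** 2
    per_variant = round(n)
    return per_variant, per_variant * n_variants  # per variant, total traffic

# Hypothetical: 4% base conversion, detect a 10% relative lift, 8 variants.
per, total = required_sample_per_variant(base_rate=0.04, min_lift=0.10, n_variants=8)
print(per, total)
```

With eight variants instead of two, the total traffic requirement quadruples, which is the practical reason a lower-traffic site is usually better served by a simple A/B test.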
Setting success metrics is not only an essential part of the test design process, but a critical step on its own. Once a hypothesis has been defined, you need to define success benchmarks for your testing. What will make A more successful than B? Is it simply a lift in sales? Higher conversion metrics? Engagement? Larger orders? Or something entirely different? Your testing tool will record variations shown to the site visitor and help you get the complete picture and determine what worked and what didn’t.
As with everything else, the stakeholders must agree from the get-go on what “success” means throughout the testing process and beyond. If you don’t understand how your organization defines a successful venture, you won’t know whether you achieved your objective, and securing resources for future testing will be challenging.
Executing a Test
Now do it! Execute the test and watch your metrics, but be careful not to conclude anything too early. Statistical significance and confidence are essential to the testing process, and too few engagements and conversions simply don’t tell the full story. Likewise, be sure you’re keeping an eye on offer performance by individual segment. Looking at all the traffic at once can be misleading. A promotion or specific outreach may have fallen short in one critical market, failed to convert on a certain device or browser, or simply failed to engage female shoppers. It’s not enough to give a thumbs up or thumbs down: dive into the segments and their responses and see how you can improve in the next go-around.
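Here’s a minimal sketch of that kind of segment-level read-out. The segments and visit records are hypothetical; in practice this data would come from your testing or analytics tool, keyed by market, device, browser, or demographic:

```python
from collections import defaultdict

# Hypothetical visit log: (segment, converted) pairs.
visits = [
    ("mobile", True), ("mobile", False), ("mobile", False),
    ("desktop", True), ("desktop", True), ("desktop", False),
    ("tablet", False), ("tablet", False),
]

totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, visits]
for segment, converted in visits:
    totals[segment][0] += converted
    totals[segment][1] += 1

# Print each segment's conversion rate; the blended rate would hide
# that one segment is converting and another isn't at all.
for segment, (conv, n) in sorted(totals.items()):
    print(f"{segment}: {conv}/{n} = {conv / n:.0%}")
```

Even in this toy data, the overall rate looks middling while one segment converts well and another not at all, which is precisely the signal a blended view would bury.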
Beyond segmenting and scale, it’s also critical to carefully outline the duration of your test in the initial planning stages. This should be the first of many tests following the wash, rinse, repeat, and improve format, so keep a close eye on it through all stages of the rollout so you can continue to enhance down the road.
And remember, although your gut is essential to the hypothesis stage, it shouldn’t be part of the actual test execution stage. Numbers tell the story. Guts can get messy.
Reporting and Sharing
Maybe your results will fall in line with your initial hypothesis, maybe they won’t. Or maybe you’ll be completely blown away, for better or for worse. Whatever the outcome, reporting and sharing metrics and analysis are fundamental steps in the testing process. They lay the groundwork for the repeat-and-improve cycle, pull more decision makers and stakeholders into the process, and help make the case for more resources and staff hours in the future.
So it’s over (for now)—and you’re on to the fun part. Socialization is about getting everyone as amped and excited as you are about the testing possibilities. It’s about celebrating your successes whether they’re true achievements or simply newfound knowledge, and building even greater buzz among those around you. Be infectious. Success breeds success—and excitement and enthusiasm breed fervor, support, and a deep commitment to upcoming initiatives. Socialize, socialize, and socialize some more.
Leading organizations don’t stop here, though. For them, testing is an ongoing process driven and refined by real-time successes and less-than-successes. This not only helps a business home in on the critical insights and iterative elements of testing, but also demonstrates significant value to internal decision makers and funding sources. In other words, it appeals to the guy with the checkbook and the “approve” stamp. If you can show true fiscal value, look forward to more testing resources, support, and, in turn, bigger and better campaigns and more in-depth tactics.
For more information, please visit my blog at http://blogs.adobe.com/digitalmarketing/author/kevin-lindsay/