You’ve done the groundwork to create a meaningful hypothesis, designed a challenger to your control, set up your measurable conversion points, and let your test run. You’re excited to check the results and gain meaningful insight into your site’s performance, only to find that they are… inconclusive. Don’t worry, it happens to even the most seasoned marketer. Optimization is about collecting data to inform your decisions, but as we all know, there are a million different ways to interpret any given data set.

First, evaluate what your test results actually mean. This is a good time to review the KPIs you originally laid out in your test plan: What was your hypothesis? Are you measuring the correct conversion points to support it? Once you’ve confirmed your data is sound, there are still many ways test data can surprise you. Even if it’s inconclusive, all test data is still valuable. Yes, I really just said that. It is valuable every time, for varying reasons.

There are several typical ways that tests can yield confusing results. Here are a few common culprits, and how you can make the most of these seemingly “inconclusive” results.

1. When your test has not yet reached statistical significance: Sometimes test results are unclear simply because the test hasn’t run long enough to gather reliable data. We are often tempted to turn off tests too early so we can move on to the next one. Testing one area of your site can tie up important real estate and limit your ability to make changes to that section for the duration of the test, which makes it all too tempting to stop a test before the data is trustworthy. If you still believe your hypothesis is worth proving out, let the test run its natural course so it gathers enough data to be usable. A quick significance check, like the sketch below, can tell you whether the difference you’re seeing could still be noise.
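
Here is a minimal sketch of that check in Python, using a two-proportion z-test. The visitor and conversion counts are hypothetical stand-ins for whatever your testing tool reports.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se             # standardized difference
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 480 conversions from 10,000 control visitors,
# 520 conversions from 10,000 challenger visitors
p = two_proportion_p_value(480, 10_000, 520, 10_000)
print(f"p-value: {p:.3f}")
```

With these made-up numbers the p-value lands well above the common 0.05 threshold, so the sensible move is to keep the test running rather than declare a winner.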

2. When the test data conflicts with your other data sets: Perhaps your testing tool claims you had an increase in an action, but your analytics tool shows a decrease. It’s important to first confirm that you’re measuring the same metrics. For example, is your testing tool measuring page visits while your analytics report shows page views? Are you comparing the same period of time? Maybe your test only measured results on one URL, but you’re viewing your analytics report as a grouped segment. Ensure that you’re comparing apples to apples; re-filtering your analytics data down to the exact scope of the test, as sketched below, is a quick way to do that.
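
The sketch below assumes a hypothetical analytics export with url, timestamp, and visitor_id columns; your own tool’s export and field names will differ, but the idea of narrowing to the test’s exact URL and date range carries over.

```python
import pandas as pd

# Hypothetical analytics export: one row per page view
views = pd.read_csv("pageviews.csv", parse_dates=["timestamp"])

# Narrow to the same scope the testing tool measured:
# a single URL and the exact test window (URL and dates are placeholders)
in_scope = views[
    (views["url"] == "/contact-us")
    & (views["timestamp"] >= "2024-03-01")
    & (views["timestamp"] < "2024-03-15")
]

page_views = len(in_scope)                     # raw page views
visits = in_scope["visitor_id"].nunique()      # unique visitors, closer to "visits"
print(f"{page_views} page views vs. {visits} unique visitors in the test window")
```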

3. When the results surprise you: This can cover a variety of outcomes, such as when the data shows the opposite of what you predicted in your hypothesis, or when the results uncover behavior you weren’t even testing for and reveal something unexpected about your visitors’ actions. A good example of this is when I tested the layout of my webinar registration pages. My hypothesis was that moving the registration form further down the page would give people more detailed info about the webinar before asking them to give up their personal info to register. I tested three scenarios: the control page, which had the registration form midway down the page, and two challengers, one with the form at the bottom and one with the form at the top. I was surprised that the form at the top was the clear winner. I was also surprised to find that more people signed up for our newsletter from the same page, a conversion point I wasn’t initially testing for.

4. When the results only yield an obvious next step to test: Sometimes test results seem confusing because they look blatantly wrong. How could visitors possibly have taken the action the data suggests? These kinds of unclear results are actually a blessing in disguise. Maybe you were testing your Contact Us button, and your KPIs were measured using two conversion points: clicks on the Contact button, and completed form fills. The results show a 200% lift in clicks on the Contact button, but your form fills are at an all-time low. How can that be? This points to a few areas to test next. Look for culprits that would increase clicks but decrease form fills. Maybe your form is too long. Maybe the button copy was misleading, so people clicked and were disappointed when the form wasn’t what they expected. Breaking the funnel into its individual steps, as in the sketch below, shows exactly where the drop-off happens. Instead of being discouraged, use the data to inform your next test and see if you can increase both clicks and form fills.
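
Here’s a small Python sketch of that step-by-step view. The counts are made up to mirror the Contact Us example above; the point is the shape of the calculation, not the numbers.

```python
# Hypothetical funnel counts mirroring the Contact Us example above
variants = {
    "control":    {"visitors": 5_000, "button_clicks": 150, "form_fills": 60},
    "challenger": {"visitors": 5_000, "button_clicks": 450, "form_fills": 30},
}

for name, d in variants.items():
    click_rate = d["button_clicks"] / d["visitors"]     # step 1: visitor -> click
    fill_rate = d["form_fills"] / d["button_clicks"]     # step 2: click -> completed form
    print(f"{name}: {click_rate:.1%} click through, "
          f"{fill_rate:.1%} of clickers complete the form")
```

Seen this way, a big lift in clicks paired with a collapse in form completion points squarely at the second step: the form itself, or the expectation the button copy sets.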

All data is useful; it just depends on how you interpret it. Even results that are confusing at first glance can yield valuable insights. So keep testing and keep learning!