Ever since Google engineers ran an experiment in 2000 to optimize the number of search results per page, A/B testing has been a core optimization strategy for Fortune 500 enterprises and small horticulture blogs alike.

For more sophisticated marketers, basic A/B tests have given way to complex multivariate tests designed to identify even the tiniest opportunities for optimization. Advanced optimization platforms powered by machine learning allow websites to continually run tests and send traffic to the best performing variations, based on any KPI.

However, despite the explosive growth of testing technology, even very advanced companies are still making basic A/B testing mistakes. Below are three common pitfalls of A/B testing and strategies for overcoming them.

Skewed Data from Non-Human Traffic

Already upending online advertising and shaking consumer confidence in the industry, clicks from bots can also wreak havoc on marketers looking to optimize the onsite experience.

In theory, running A/B or multivariate tests and dynamically allocating traffic to the highest performing variations should be a relatively straightforward process. But if non-human clicks overwhelmingly favor one variation, the experience that real consumers prefer will appear less successful.

Marketers can offset the rise of bot traffic by choosing an optimization metric tied to a real human action that bots can’t easily replicate. Build tests around conversions or other “action-oriented” metrics rather than raw clicks (more on this below).
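As a rough illustration, here is a minimal sketch with invented per-variation numbers. It shows how bot clicks can flip the apparent winner when you optimize on clicks alone, but not when you optimize on a human-only action such as a purchase:

    # Hypothetical counts: bots inflate variation B's clicks but never convert.
    variations = {
        # impressions, clicks (humans + bots), conversions (human-only action)
        "A": {"impressions": 10_000, "clicks": 900, "conversions": 180},
        "B": {"impressions": 10_000, "clicks": 1_400, "conversions": 120},
    }

    for name, v in variations.items():
        ctr = v["clicks"] / v["impressions"]
        conversion_rate = v["conversions"] / v["impressions"]
        print(f"{name}: CTR = {ctr:.1%}, conversion rate = {conversion_rate:.1%}")

    # Optimizing on CTR would route traffic to B; optimizing on conversions
    # correctly favors A, because bots click but don't buy.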

Incorrect Testing Framework

Across industries, far too many marketers are still clinging to a basic framework from the infancy of the A/B testing era: classical hypothesis testing. It is inherently pessimistic, starting from the assumption that the alternative variation being tested is no better than the existing one and that any observed differences are merely random noise. The test then tries to reject this hypothesis by showing that the observed difference would be rare enough, if that assumption were true, to count as statistically significant.

Got it? Probably not, because hypothesis testing is a convoluted mess that requires a wealth of data to be meaningful.
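To make that concrete, here is a minimal sketch of the classical approach, a one-sided two-proportion z-test on invented conversion counts (the numbers and the 0.05 cutoff are purely illustrative):

    from math import sqrt
    from statistics import NormalDist

    conv_a, n_a = 200, 10_000   # conversions and visitors on the control
    conv_b, n_b = 245, 10_000   # conversions and visitors on the challenger

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error under the null
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)                        # one-sided p-value

    print(f"z = {z:.2f}, p-value = {p_value:.3f}")
    # Only if p_value < 0.05 (a threshold fixed in advance, along with the
    # sample size) do you get to declare the challenger the winner.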

Instead, A/B tests should be run in a Bayesian framework, which combines prior information with the observed data to assign each variation a probability of being the superior one.

Adopting a Bayesian framework allows marketers to work without a pre-defined sample size, analyze more intuitive metrics, and ultimately make quicker decisions that aren’t bound by restrictive assumptions. In a nutshell, “going Bayesian” is a far simpler and more reliable way of determining whether variation A performed better than variation B and optimizing your onsite experience accordingly.
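As a rough sketch of what that looks like in practice, here is a minimal Beta-Binomial version of the same comparison, assuming uniform Beta(1, 1) priors and the same invented counts as above. Rather than a p-value, it answers the question marketers actually ask: what is the probability that the challenger beats the control?

    import random

    conv_a, n_a = 200, 10_000
    conv_b, n_b = 245, 10_000

    random.seed(42)
    draws = 100_000
    wins_for_b = 0
    for _ in range(draws):
        # Posterior for each conversion rate: Beta(1 + conversions, 1 + non-conversions)
        rate_a = random.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins_for_b += rate_b > rate_a

    print(f"P(B beats A) ≈ {wins_for_b / draws:.1%}")
    # This probability can be checked at any point during the test, and traffic
    # shifted toward B as soon as it clears your risk tolerance.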

Selecting the Wrong Optimization Objective

When running A/B tests, there are three main metrics you can optimize around: clickthrough, goal completion or (for e-commerce) revenue. Each comes with strengths and drawbacks.

In general, I favor revenue-based optimization because it provides data most in line with marketers’ ultimate goals. However, for revenue-based optimization to yield meaningful data, a marketer must run a longer experiment with a fairly large sample size. Furthermore, since purchases often result from many independent events, it is far tougher to ultimately attribute a purchase to any specific optimization event.

Clickthrough is by far the easiest metric to understand, since it can be tied directly to the banner or button where an A/B test is being run. But clickthrough alone often isn’t helpful for a marketer, and it is susceptible to our robot overlords. A happy medium between complex revenue-based optimization and simple CTR is optimizing around a goal completion, such as signing up for a newsletter. A goal completion doesn’t directly drive revenue, but it still allows a marketer to produce actionable results with smaller data samples over shorter periods of time.

Within revenue-based optimization, a common mistake is running A/B tests that optimize for revenue per session. A far more effective metric is revenue per user, which better isolates the effect of your optimization efforts rather than simply reflecting how often your visitors happen to return.
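To see the difference concretely, here is a hypothetical sketch on invented session-level data (no particular analytics schema is assumed):

    from collections import defaultdict

    sessions = [
        # (user_id, revenue for that session)
        ("u1", 0.0), ("u1", 40.0), ("u1", 0.0),   # one buyer who browses across several sessions
        ("u2", 60.0),
        ("u3", 0.0), ("u3", 0.0),
    ]

    revenue_per_session = sum(r for _, r in sessions) / len(sessions)

    revenue_by_user = defaultdict(float)
    for user_id, revenue in sessions:
        revenue_by_user[user_id] += revenue
    revenue_per_user = sum(revenue_by_user.values()) / len(revenue_by_user)

    print(f"revenue per session = {revenue_per_session:.2f}")   # diluted every time someone browses again
    print(f"revenue per user    = {revenue_per_user:.2f}")      # one number per person the test actually influenced

    # Revenue per session drops whenever a variation makes people visit more often,
    # while revenue per user keeps the unit of analysis the same as the unit of
    # randomization: the visitor.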

As more and more aspects of onsite experiences are being A/B tested, ensuring that tests use accurate data and correct methodology has never been more important. Choose a crystal clear objective based on real human actions and optimize happily ever after.