Imagine you set out on a road trip. You packed the car, made a playlist, and planned to drive 600 miles—but you don’t actually know where you’re headed.

You remember to top off the gas tank and pack snacks before you leave. But when you arrive at your destination, it’s not at all what you imagined it would be.

Running an experiment without a hypothesis is like starting a road trip just for the sake of driving, without thinking about where you’re headed and why. You’ll inevitably end up somewhere, but there’s a chance you might not have gained anything from the experience.

“If you can’t state your reason for running a test, then you probably need to examine why and what you are testing.”

—Brian Schmitt, Conversion Optimization Consultant, CROmetrics

Creating a hypothesis is an essential step of running experiments. Although you can set up and execute an experiment without one, we’d strongly advise against it. We’d even argue that a strong hypothesis is as important as understanding the statistical significance of your results.

Hypotheses help you answer the question, why?

‘Hypothesis’ defined

A hypothesis is a prediction you create prior to running an experiment. It states clearly what is being changed, what you believe the outcome will be, and why you think that’s the case. Running the experiment will either support or refute your hypothesis.

Hypotheses are bold statements, not open-ended questions. A hypothesis helps to answer the question: “What are we hoping to learn from this experiment?”, while ensuring that you’ve done due diligence in researching and thinking through the test you’re planning to execute.

In this post, we’ll show you how to craft great hypotheses, how they fit into your experiment planning, and what differentiates a strong hypothesis from a weak one.

The components of a hypothesis

A complete hypothesis has three parts and follows the format: “If ____, then ____, because ____.” The variable, the desired result, and the rationale are the three elements of your hypothesis that should be researched, drafted, and documented prior to building an experiment and setting it live.

Components of an A/B test hypothesis. Image from Building Your Company’s Data DNA.

Let’s look at each component in more detail and walk through an example:

The Variable

A website or mobile app element that can be modified, added, or taken away to produce a desired outcome.

Tips to select a variable: Try to isolate a single variable for an A/B/n test, or a select handful of variables for a multivariate test. Will you test a call to action, visual media, messaging, forms, or other functionality? Website or app analytics can help to zero in on low-performing pages in your website or user acquisition funnels and inform where you should be looking for elements to change.

Example: Your website has a primary call to action that’s above the fold on your homepage. For an experiment, you’re going to modify this variable and move it below the fold to determine if conversions will improve because the visitors have read more information.

The Result

The predicted outcome. This could be more landing page conversions, clicks or taps on a button, or another KPI or metric you are trying to affect.

Tips to decide on the result: Use the data you have available about your current performance to determine what the ideal outcome of your experiment will be. What is the baseline metric that you’ll measure against? Is the change to the variable going to produce an incremental or large-scale effect?

Example: Maybe your desired result is more conversions, but this may not always be the result you’re aiming for. Your result might be to reduce bounce rate by testing a new navigation or recommended content module.

The Rationale

Demonstrate that you have informed your hypothesis with research. What do you know about your visitors from your qualitative and quantitative research that indicates your hypothesis is correct?

Tips to craft the rationale: Show that you’ve done your experiment homework. Numerical or intuition-driven insights help formulate the “why” behind the test and what you think you’ll learn. Maybe you have input from customer interviews that helped formulate the hypothesis. Maybe you’ve seen an application of the change being tested work well in other experiments. Try using qualitative tools like surveys, heat maps, and user testing to determine how visitors interact with your website or app.

Example: The rationale for testing a new homepage headline might be: removing your company’s name from the headline will improve conversions, because surveys we’ve conducted indicate our current language is confusing, and borrowing customer language from those surveys should improve performance.
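
To see how the three components read once they’re written down together, here’s a minimal sketch (Python; the field names and example values are hypothetical) of a hypothesis record built from the headline example above:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One 'If ___, then ___, because ___' statement, documented for later reuse."""
    variable: str   # what is being changed
    result: str     # the predicted, measurable outcome
    rationale: str  # the research that supports the prediction

    def statement(self) -> str:
        return f"If {self.variable}, then {self.result}, because {self.rationale}."

headline_test = Hypothesis(
    variable="we remove the company name from the homepage headline",
    result="homepage conversions will increase",
    rationale="customer surveys indicate our current language is confusing",
)
print(headline_test.statement())
```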

What are the outcomes of a strong hypothesis?

A thoroughly researched hypothesis doesn’t guarantee a winning test. What it does guarantee is a learning opportunity, no matter the outcome (winner, loser, or inconclusive experiment).

Winning variation? Congratulations! Your hypothesis was correct. If your variation lost or the results were inconclusive, the hypothesis wasn’t supported, and that should still tell you something interesting about your audience.

“When a test is based upon thorough research and a clear hypothesis, you learn about your audience with every test. I always segment the testing results by device type, browser, traffic source, and new/returning visitors. Sometimes the average uplift isn’t the best metric to examine. By segmenting the results, you can find the true winner.”

—Gijs Wierda, Website Conversion Specialist, Catchi Limited
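
As an illustration of the segment-level readout Gijs describes, here’s a minimal sketch (Python with pandas; the column names and data are hypothetical) that breaks conversion rate out by device type and variation:

```python
import pandas as pd

# Hypothetical per-visitor export of experiment results.
results = pd.DataFrame({
    "variation":   ["control", "variant", "control", "variant", "control", "variant"],
    "device_type": ["desktop", "desktop", "mobile",  "mobile",  "desktop", "mobile"],
    "converted":   [1, 1, 0, 1, 0, 0],
})

# Conversion rate per variation within each segment; the average uplift
# can hide a segment where the variant clearly wins or loses.
segment_rates = (
    results.groupby(["device_type", "variation"])["converted"]
    .mean()
    .unstack("variation")
)
print(segment_rates)
```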

Maybe you crafted a hypothesis based on ‘conventional wisdom,’ or read an A/B testing case study and decided to replicate it on your own audience. The variation lost, but you and your team learned that what works for other sites and apps doesn’t work for you. Go forth, craft a new hypothesis, and uncover your own best practices!

Tip: Document both your research and your hypotheses. Remember to share a hypothesis along with the key experiment metrics when publicizing experiment results within your team. Your library of experiment hypotheses will become a valuable reference point in creating future tests!

How does a hypothesis fit into your experiment workflow?

Craft hypotheses that bubble up towards company goals.

According to Kyle Rush, Head of Optimization at Optimizely, a hypothesis is a key component of every test and should be tackled right after you identify the goals of the experiment. Here’s his experiment process:

  1. Identify goals and key metrics
  2. Create hypothesis
  3. Estimate test duration with a sample size (see the sketch after this list)
  4. Prioritize experiments with projected ROI
  5. QA the experiment
  6. Set test live
  7. Record and share results
  8. Consider a retest
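
A common rule of thumb for step 3, at roughly 95% confidence and 80% power, is about 16 × p(1 − p) / d² visitors per variation, where p is the baseline conversion rate and d is the absolute lift you want to detect. Here’s a minimal sketch of that estimate (Python; the baseline, lift, and traffic numbers are hypothetical):

```python
import math

def visitors_per_variation(baseline_rate: float, absolute_lift: float) -> int:
    """Rough sample size per variation at ~95% confidence and ~80% power."""
    return math.ceil(16 * baseline_rate * (1 - baseline_rate) / absolute_lift ** 2)

baseline = 0.05        # 5% baseline conversion rate (hypothetical)
lift = 0.01            # detect a 1-point absolute improvement (hypothetical)
daily_visitors = 2000  # traffic split across two variations (hypothetical)

n = visitors_per_variation(baseline, lift)
days = math.ceil(2 * n / daily_visitors)
print(f"~{n} visitors per variation, roughly {days} days to run")
```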

Steps 1 and 4 of this process are just as important as the hypothesis creation. Keep in mind that not all hypotheses are created equal. Your team may have an interesting idea, or there may be a disagreement that you’re trying to settle—but that doesn’t mean it’s the most important thing to test.

Prioritize and test based on parts of your site or app that have high potential for business impact (revenue, engagement, or any other KPI you’re trying to improve). Use your analytics to identify these areas, and craft hypotheses that can support improvements there. Resist the urge to test just for the sake of testing, and focus on high-impact changes to your variables.
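
One way to make “high potential for business impact” concrete is a back-of-the-envelope projection per candidate hypothesis. Here’s a minimal sketch (Python; the candidate names, traffic, and revenue figures are hypothetical) that ranks ideas by projected monthly impact, in the spirit of step 4 above:

```python
# Projected monthly impact = visitors x baseline conversion rate
#                            x expected relative lift x value per conversion.
candidates = [
    {"name": "Homepage CTA below the fold", "visitors": 50_000, "baseline": 0.030,
     "expected_lift": 0.05, "value_per_conversion": 40.0},
    {"name": "Checkout form field removal", "visitors": 8_000, "baseline": 0.250,
     "expected_lift": 0.10, "value_per_conversion": 40.0},
]

for c in candidates:
    c["projected_impact"] = (c["visitors"] * c["baseline"]
                             * c["expected_lift"] * c["value_per_conversion"])

for c in sorted(candidates, key=lambda c: c["projected_impact"], reverse=True):
    print(f'{c["name"]}: ~${c["projected_impact"]:,.0f} per month')
```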

“Everything starts and ends with the hypothesis. Ask, ‘What business or customer experience problems do we think we can solve for mobile and why do we think those changes will impact a certain metric?’ Ultimately, time is the most valuable asset for any company … so we start by crafting hypotheses we believe in and then prioritize those hypotheses against all other opportunities we have to test. [T]he success of your optimization program is most correlated to your ability to identify test hypotheses that will move the needle and your ability to tell the story with the resulting test data.”

—Matty Wishnow, Founder & CEO, Clearhead

How you can get started

Here’s your actionable cheat sheet:

Hypothesize for every outcome: Make every experiment a learning opportunity by thinking one step ahead of your experiment. What will you learn if your hypothesis is proven correct or incorrect, whether the variation wins, loses, or ends in a draw?

Build data into your rationale: You should never be testing just for the sake of testing. Every visitor to your website is a learning opportunity and a valuable resource that shouldn’t be wasted.

Map your experiment outcomes to a high-level goal. If you’re doing a good job choosing tests based on data and prioritizing them for impact, then this step should be easy. You want to make sure that the experiment will produce a meaningful result that helps grow your business. What are your company-wide goals and KPIs? If your experiments and hypotheses are oriented towards improving these metrics, you’ll be able to focus your team on delving into your data and building out strong experiments.

Document your hypotheses. Document all of the experiments you run. This habit ensures that historical hypotheses serve as a reference for future experiments and gives your team a shared record of the context behind every test, past, present, and future.

Crafting great hypotheses is a skill learned over time. The more you do it, the better you’ll get.
