Great marketers of the past relied on sharp instincts. They were gut-driven savants who could hit marketing home runs based on talent and sweat (yes, the Don Drapers of the world wrote A LOT of lines before they got to the right one).

In contrast, it seems like a lot of marketers today who believe in the importance of data skip straight to the reporting. They put something out into the world and then use data solely to report on their successes and failures. They laugh at traditional marketers for being dinosaurs, yet they themselves gloss over most of the truly important steps that data enables: hypothesizing, testing, and iterating.

Using data to report is a step in the right direction, but it's actually not that different from relying on a well-tuned gut. In this post, we'll discuss how to apply the scientific method so that data makes us better marketers with a greater business impact.


The Scientific Method

Think back to your high school science class: the scientific method is a process for investigating why things happen, acquiring new knowledge, and correcting or improving previous assertions.

It starts with an observation. Then, you think of questions related to that observation. Next comes the hypothesis or hypotheses — potential answers to that previous question. Then you test the hypothesis and record results. Based on those results, you can alter your hypothesis, expand it, or reject it. Then you iterate — more testing and gathering of results — until you can come to a conclusion.

The biggest difference between marketers of the past and marketers today is that we have a lot more tools to test and collect relevant data to prove or disprove the hypotheses that we come up with. Marketing (the skill) is now as much a science as it is an art. Some would argue it’s even more of a science.

So if marketing is at least part science, how does one apply the scientific method in a way that improves their results?


Observing and Questioning

Like all good marketers, we're avid readers and listeners — consumers of information. We want to know what our competition is doing, what our industry is doing, and most importantly, we want to know everything about our audience. With that come a lot of observations.

We also must be curious. When we make observations, we want to know why they occur. Was that blog post really effective because the voice resonated or was it because the attached infographic made the content easy to consume? If we bid based on CPM rather than CPC, would we have gotten more leads? If the email had a better metaphor, would it have been more successful? These are all potential questions stemming from observations about our marketing efforts.

At Bizible, we’ve observed that a couple of our pieces of content have been home runs — they’ve driven hundreds or thousands of leads, and more importantly, they have driven a significant amount of revenue. Clearly, we want to create more pieces of content like these, so we’ve come up with questions about them to help reveal insight into why they, in particular, were so successful.



Hypothesizing

This, in my opinion, is the fun part. Coming up with hypotheses takes creativity and intuition. A hypothesis attempts to answer a question posed in the previous step, but it is essential that it is phrased in a way that can be proven or disproven with data. Essentially, it can't be a statement so general that no test could make an argument one way or the other. To be testable, a hypothesis must include one independent variable and one dependent variable. Here's an example:

Hypothesis: If we write article headlines with statistics in the title, the CTR of our paid social ads will increase.

Not A Hypothesis: Headlines with statistics are better than headlines without statistics in the title.

In this case, the article headline is the independent variable (either includes statistics or doesn’t) and the CTR is the dependent variable. In the second statement, there isn’t a testable dependent variable because there isn’t a defined way to measure what “better” is.

It’s also important to note that you should research the topic prior to hypothesizing because your hypothesis shouldn’t go against existing research. However, the B2B marketing landscape is always changing, so something that was true a year ago may not still be true today.

We've come up with and tested a number of other hypotheses following this same pattern.

I’m sure the best traditional marketers of the past went through this very same process of observing, questioning, and hypothesizing; they just did it in a less formal way. However, it’s the next steps in the scientific process that set data-driven marketers apart as consistent and effective producers of business value.



Testing

The next step in the scientific method is testing, which means executing your marketing initiatives in a way that will prove or disprove your hypothesis.

Using one of the hypotheses from earlier as an example, we could test it by looking back at the last year's worth of blog posts, separating the headlines that include statistics from those that don't. We could then run an analysis using our paid social data and compare the CTRs of the two groups. Another way to test this would be to A/B test future blog posts: using the same blog content, create two headlines (one with a statistic and one without), run them against each other, and measure their performance.
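To judge whether a difference in CTR between the two headline groups is real or just noise, a standard two-proportion z-test is one simple option. Here's a minimal sketch in Python; the click and impression counts are made-up numbers for illustration, not real campaign data:

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Test whether variant A's CTR differs from variant B's."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    # pooled click rate under the null hypothesis (no difference)
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF, built from math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# hypothetical results: headline with a statistic vs. headline without
ctr_stat, ctr_plain, z, p = two_proportion_z(180, 4000, 135, 4000)
```

With these invented numbers, the statistic-headline CTR (4.5%) beats the plain one (3.375%) with a p-value below 0.05, so we would treat the hypothesis as supported and keep iterating; with a p-value above 0.05 we would collect more data or revise the hypothesis.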

For some of our other tests, like the lead-to-opp conversion rate by day of the week test, we used our attribution data to see how leads moved through the funnel based on the date of lead creation. We’ll discuss the reports in the next section.


Recording Results, a.k.a. Reporting

Recording results is pretty straightforward, but it does require the right tools. After all, if you can’t measure it, you can’t test it.

For the most part, we want to know how our marketing impacts business results, so our tests’ dependent variables tend to be the number of qualified leads, sales opportunities, and revenue. We measure and report all of this using marketing attribution, which connects our marketing efforts to sales. This way, we can see how specific landing pages (such as blog posts), marketing channels, search keywords, etc. all impact leads, sales opportunities, and revenue.

If you want to test and record results based on how your marketing affects revenue, attribution data is essential.

For our lead-to-opp conversion rate test, we created a lead report and an opportunity report (both organized by lead create date), which let us calculate and analyze conversion rates. For example, 100 leads created on Monday and 10 opportunities from those leads yields a Monday conversion rate of 10%. Based on our attribution data, we were able to see that while volume is lower for leads created on Fridays, the conversion rate wasn't significantly different from that of leads created Monday through Thursday.
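As a sketch, the day-of-week analysis boils down to grouping leads and opportunities by lead-creation weekday and dividing. The counts below are hypothetical, chosen only to mirror the pattern described above (lower Friday volume, similar rate):

```python
# hypothetical lead and opportunity counts by lead-creation weekday
leads_by_day = {"Mon": 100, "Tue": 95, "Wed": 90, "Thu": 85, "Fri": 60}
opps_by_day = {"Mon": 10, "Tue": 9, "Wed": 9, "Thu": 8, "Fri": 6}

# lead-to-opp conversion rate = opportunities / leads, per weekday
conversion_rate = {day: opps_by_day[day] / leads_by_day[day]
                   for day in leads_by_day}

# Friday volume is lower (60 leads vs. 100 on Monday),
# but its conversion rate (10%) matches Monday's
```

The same grouping works for any dependent variable the attribution data exposes (opportunities, revenue), as long as both reports are keyed on the same lead-create date.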



Altering the Hypothesis

This step is often forgotten, but it may be the most important. Using what you learn during the recording phase, the scientific method requires data-driven marketers to go back to their hypothesis and modify it based on the results. That could mean expanding the hypothesis to make it more impactful, tweaking it because the results disproved it, or replacing the original hypothesis altogether.

Of course, when you change your hypothesis, that means you have to test it again, and the cycle starts over.

Without this step in the scientific method, data-driven marketers aren’t using their data to its full potential. This is where marketing insights become actionable. It doesn’t make a whole lot of sense to use data to report on something if you’re not going to take the time to analyze what the results mean and try to optimize based on them.

If you don’t want to be a marketing dinosaur, it’s necessary to apply the entire scientific method. Use your smarts and creativity to come up with interesting and potentially powerful hypotheses, THEN use data to prove/disprove and improve.