Can you tell a good marketing study from a bad one?
What do a pharmaceutical rep, an academic, and a marketer all have in common?

They all understand the power of good statistical information to sell their products, ideas, and services.

The more informed you are about your market and consumer behavior, the better you will be at crafting a marketing message that will attract and engage your ideal client …

…and more clients mean more revenue.

As a marketer or small business owner you probably come across plenty of statistical information, especially online.

But how do you know if a study is valuable to your business?

Here is my list of 7 best practices that will help you distinguish a good study from a bad one.

1. Who funded the study?  Independent studies are best.  However, sometimes a company-sponsored study is all that’s available.  Look to the methodology used. Make sure that the sample size is large enough (see practice #3) and that it wasn’t a survey of their 10 best customers (which, by the way, isn’t a study; it’s a case study).

2. How was the study conducted?  Typical market research studies are done via survey.  But there are other types of studies that use eye-tracking, split (A/B) testing, randomized controlled experiments, online controlled experiments, and aggregate studies—and some of the most interesting studies utilize Google search queries.  My general rule is to be skeptical of a study that doesn’t publish its methodology or sample size.
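To make the split (A/B) testing idea concrete, here is a minimal sketch of the standard two-proportion z-test that such experiments typically rely on.  The function name and the conversion numbers are my own illustration, not from any study mentioned in this post.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score for a simple A/B (split) test.

    conv_a/conv_b: number of conversions in each version.
    n_a/n_b: number of visitors shown each version.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the "no difference" assumption.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: version B converts 120/1000 vs. version A's 100/1000.
z = ab_test_z(100, 1000, 120, 1000)
print(f"z = {z:.2f}")  # |z| above roughly 1.96 would be significant at the 95% level
```

In this made-up example the z-score comes out below 1.96, so the apparent lift would not be statistically significant—exactly the kind of detail a good study should report.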

3. Is the sample size large enough?  What’s better: a survey conducted with 100 participants or one with 1,000?  It stands to reason that the larger the sample size, the more likely you are to obtain statistically significant results.  From a statistical perspective, the larger the sample size, the lower the margin of error and the higher the confidence level.  So if you come across a study that says, “72% of all online adults use social networking sites,” and the sample size was over 2,000 people with a 95% confidence level and a margin of error of (+/-) 2.3 percentage points, you can be confident in the study’s results.
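The relationship between sample size and margin of error can be checked with the standard formula for a proportion.  A quick sketch, assuming simple random sampling (published margins are often a bit larger because of survey weighting); the function name is mine:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion at ~95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative numbers like those above: 72% of roughly 2,000 respondents.
moe = margin_of_error(0.72, 2000)
print(f"Margin of error: +/- {moe * 100:.1f} percentage points")
```

Notice that quadrupling the sample size only halves the margin of error—which is why going from 1,000 to 2,000 respondents helps less than you might expect.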

4. Is the population representative of your market?  This is essential to marketers.

One of my favorite resources for information is the Nielsen Norman Group. They have a terrific library of studies all related to website usability.  Recently, I purchased a study that used eye-tracking to determine how people read on the web—very useful information for a copywriter.

However, when I checked how the study was conducted, I found that the sample size was only 300 people, and that 65% of the participants made over 50k per year, 69% had at least a college degree, 58% were white, and 58% were female.  The study was conducted using a group of people that most likely represented their client base.

If, for example, my ideal market is 18-29 year old Hispanic males with only some college, then this would not be a study that I could readily apply to my market.

In the case where the sample size is small relative to the population size (in this case 300 people out of a population of all internet users), you’ll want to be especially mindful of who is conducting the study, for what purpose, and how the study was conducted.

In the case of the Nielsen study, they were able to track 1.5 million eye fixations and gathered and reviewed 300 GB of data. Since Nielsen has an excellent reputation, used sound methodology, and drew on a sample that was a good representation of my market, this is a study that I could easily use to improve my marketing.

5. Do your own fact checks.  As a marketer or small business owner, you probably come across a lot of statistical information online.  But how do you know if the studies are reliable?  Like a detective, look for corroborating evidence.  While researching an article on bounce rates, I googled the phrase, “what is an average bounce rate.” On several pages I came across a reference to a study called the “Google Analytics Benchmark Averages for Bounce Rate.”  My practice is never to quote a second- or third-party reference, but to go directly to the study itself.  Why? Because sometimes the reporter either misquotes the study or takes facts out of context.  You’ll find, too, that many studies quote the same information without going to the source.  But the larger issue is that oftentimes bloggers merely quote each other, and an actual study was never done.

In the case of the Google Analytics Benchmark, none of the articles linked back to the source.  Moral of the story: never use information from sites that don’t reference their material.

6. Age of the study.  Referring to my bounce rate article: with considerable effort I was able to find a copy of an email that someone had posted on a Google forum with some of the information I’d seen on other sites.  However, the email was dated July 2011, and the information was gathered between November 1, 2009 and February 1, 2010.  Since more than one-third of the world’s population is on the internet, and that number grows every day, I’m not sure how relevant the study is currently.  Since the information regarding the bounce rates is dubious, I’ve decided to table the article for now.

Unless you are interested in analyzing market trends, be sure to check the date of the study you’re using and whether or not there are more recent studies available.

7. Look for corroborating evidence. Not only should you try to find the most up-to-date study; when possible, look for other studies on the same subject.  This will be your best defense against studies that have been skewed to favor the sponsor of the study or any bias on the part of the investigators.  You’ll want to look for similar study methods and outcomes.  Be careful, because you may find that one study was quoted multiple times by various news and blog sources.

Here is a list of my favorite resources:

http://www.nngroup.com/

http://pewinternet.org/

http://comscore.com

http://wsj.com

http://nytimes.com

Comments or questions? I’d love to hear from you.