In part one of this blog I posed the aggravating quandary of the marcom world: we incessantly measure some element of communication performance or customer opinion while learning relatively little. In this second part I'll offer why, and by extension what we must start to change if we want well-intended efforts to truly yield usable knowledge.
This quandary exists for three primary reasons. We:
- measure too frequently, and in doing so ignore the necessity of creative sampling and measurement techniques,
- too heavily rely on measuring consumer attitudes or opinions and too little on actual consumer or organizational behavior, and
- increasingly oversimplify our efforts by trading methodical and objective ways to understand what we wish to measure for anecdotal knowledge.
Points one and two are a little more technical and suitable for discussion elsewhere. But the final point is perhaps the most critical to why we try to measure everything yet know so little.
Buoyed by impressive advances in mobile data collection, many marcom pros fancy themselves would-be analysts, pollsters and the like, and too frequently proceed without sufficient care to make a reliable measurement. All too commonly, the trap doors of the measurement minefield sink one's proverbial ship while still in the harbor: biased questioning, too many "yes/no" questions, poor quality control, insufficient sample size, and a gravitation toward descriptive over numeric data. Later, sailing at sea and poring over the results, it's no wonder the captain's crew becomes swamped with "data" or other forms of information. We measure because we yearn to know something more. Poor or biased measurement (whether intentional or not) does just the opposite, and risks saddling us and our clients with a sort of fool's gold: an incorrect assessment of the issue at hand.
This trend is increasingly dangerous, and hardly limited to marcom pros trying daily to make sense of it all in the vast data galaxy. In fact, a "poster child" example, showing both the good and the not-so-good ways to deal with all our measurement motivation, is readily at hand to make this point.
The folks at Consumers Union have been publishing Consumer Reports since the 1930s. Committed to comprehensive, methodical analysis of all sorts of consumer products, CR deploys rich quantitative measurement combined with extensive consumer surveys, blended to provide comprehensive and practical information about virtually any consumer product. The typical outcome of such an effort is some form of numeric rating or ranking, often supported by additional descriptive commentary, that has aided consumer decision-making for eight decades. To ensure bias is minimized, CR exists strictly as a not-for-profit entity and abides by a strict policy of accepting no advertising, so there is never any confusion in priorities between the business office and the testing lab.
By contrast, the relatively new Angie's List is a periodical on multiple platforms that seeks to blend "peer review" and ratings of local services with display advertising and feature articles. Like CR, its content is partially fueled by subscriber input, but it relies exclusively on referral commentary to summarize service business performance. In addition, the very businesses being evaluated by Angie's subscribers are simultaneously being solicited for display advertising investment. The knowledge yielded by this method is a smattering of subscriber comments accompanied by a conventional scholastic letter grade. Numeric data are largely, if not completely, ignored, and little disclosure is made about what, if any, natural conflicts of interest exist between those being rated and their ad expenditures.
The competing methods yield far different applications. The CR analysis of, say, window air conditioners elicits a thorough, directed advisory of what's best to buy and what's worth the money, in ways that are always objectively supported. Angie's List, on the other hand, lists some descriptive referrals from a handful of subscribers. Whatever important views this group offers are in no way a comprehensive or accurate measure of anything. They are, of course, a handful of opinions from local consumers like you: the same opinions you could rapidly gather during small talk at church, with your pals at the local watering hole, or while idly watching your kid's baseball game.
In a nutshell, CR represents most of what we in commercial marcom measurement should seek to emulate. Methodical, reasoned, and thorough, the CR style of measurement can be adapted for the broadest or narrowest form of commercial measurement where credible data will help yield insights that broaden our understanding. In contrast, the anecdotal approach of Angie's List represents most of what needs to be avoided. This is not to say that consumer referral or "word-of-mouth" is unimportant; far from it. But when such views are assembled with little regard for the methodology or accuracy of measurement, and where objectivity and potential conflicts of interest are not disclosed transparently, the resulting findings are hardly credible. And at a time when marcom measurement has never been more important, why are we wasting our time with that?