It’s all too easy to use your blog or Twitter account to draw attention only to studies that back up what you’re selling.
Among those working in social media in some form, it would probably be a conservative estimate to say that at least half of all shared content informs others, in one way or another, how wonderful social media is.
And yet, I wonder how many people would continue to read a newspaper if half of it was dedicated to how important newspapers are and how everyone should read one.
Which is why I draw your attention to a recent study by Pew Research. It suggests, on the surface at least, that social media research may offer less value than many of us working in the industry claim.
The year-long study concluded that “the reaction on Twitter to major political events and policy decisions often differs a great deal from public opinion as measured by surveys”.
The report stated that “the overall negativity on Twitter over the course of the [US presidential] campaign stood out”, adding: “For both candidates, negative comments exceeded positive comments by a wide margin throughout the fall campaign season.”
Which appears to be a rather damning indictment of social media research’s ability to capture public opinion.
Twitter isn’t a representative sample
Firstly, I think it’s important to acknowledge that those of us who use Twitter are not a representative sample of the population: Twitter skews towards a narrow (but expanding) demographic.
As far as the wider population goes, Twitter is still quite new and the relatively recent media infatuation with it is testament to that.
Its early adopters are more likely to be those interested in being part of and shaping the public sphere.
I’m not sure anyone would be particularly surprised to discover that Twitter’s character limit (in addition to the fact more extreme opinions are frequently rewarded with more attention) lends itself to more polarised views.
So the fact that the study found “the overall negativity… stood out” isn’t especially illuminating. It demonstrates we’re generally capturing the opinions of people who feel strongly about issues.
In other words, probably the kind of non-apathetic people you’d want to listen to.
Social media research doesn’t work in a traditional quantitative sense
Of course this means it doesn’t work as a quantitative methodology in a traditional sense and (as I’ve said time and time again) does not, and cannot (at least in the immediate future), replace surveys.
Nor does it make for a particularly effective brand-tracking service if you’re comparing it to that same traditional approach.
I don’t think it too presumptuous to state that few of us use our social media accounts to reaffirm, month after month, our positive sentiment towards a brand we like.
As we’ve found in our research at Precise, people are probably more likely to use Twitter to complain to companies, particularly when they have a presence there.
Which isn’t to undermine the importance of understanding what’s being said about a brand in social media. Numerous departments within an organisation would want and need to know what their customers have to say. Ignoring one of the channels they’re using to express these opinions simply because you can’t be sure who they are and what their motivation to share is seems somewhat foolish.
It just means it’s not a service that can replace, or is directly comparable to, what the likes of YouGov’s BrandIndex do in a more representative and repeatable way. However, a brand tracker is usually only a starting point, allowing us to hypothesise about the direction in which public perception is moving. Analysing discussions about the brand within social media is one way of helping to understand why opinion may have changed.
Tracking the volume of discussion around a particular brand or issue also provides important context, but only when interpreted in light of where those conversations take place.
What are the opportunities?
None of the above caveats mean the insight gained from social media research is useless. We just need to start thinking about it in a different way to other types of research.
In my opinion, its main value comes from the fact that the comments we’re capturing are spontaneous, not prompted by our questions.
As well as helping to identify those ‘known/unknown unknowns’ about a brand (which can then be built into a more traditional quantitative approach by, for example, ensuring you’re asking better questions in your survey), I believe it’s best used to feed into the creative process; to investigate a particular issue; or to answer a question that’s difficult to answer by other means. Not to predict voting behaviour or to claim it can “accurately gauge public opinion”.
In a research context, social media effectively acts as a giant, disorganised focus group where anyone is free to share their opinion about anything. Its potential lies in offering the chance to do qualitative research at scale. Again, it’s not an opportunity to necessarily do it better than other types of qualitative research, it’s just another methodology and different approach at our disposal.
We shouldn’t dismiss it simply because the results aren’t always representative of the population at large.
I’d also like to point out that a Twitter analysis isn’t the same thing as research conducted across all forms of social media. The former is much quicker and easier to do, but the analysis is far more powerful if you include other sites too.
People tend to underestimate their audience on forums and blogs, so the opinions expressed there are generally more balanced (which also means we need to look at each type of site in context). Too many analyses focus on Twitter alone for no other reason than its character limit makes it quicker and easier to categorise what’s being said. In our experience, most of the interesting and genuine conversation takes place on forums anyway.
Finally, it is worth noting that the conclusion drawn by Pew is based on two important premises:
i) Pew’s survey is an accurate reflection of true public opinion; and
ii) Pew did the Twitter analysis effectively.
Sentiment is a rather subjective thing, and what we write may be interpreted differently from what we mean. Our opinion may also be more nuanced than the box we’re required to tick in a survey allows and, as one PhD student highlighted on Twitter, we need to be particularly careful with the conclusions we draw from quantitative research: “wonder if Twitter liking Obama’s State of the Union less than Public Opinion is more ‘conservative’? maybe Twitter was to the left of Obama?”