On the Wednesday morning of November 9th, people all over the world were asking, “How did the pollsters get it so wrong? Have we lost our ability to understand the true sentiment and intentions of a population using surveys, questionnaires and polls?” For those of us in the customer experience space, these results made us step back and ask why. Such discrepancies could reflect poorly on all survey research and cause companies to wonder whether their own survey results are valid. But don’t fear: political polling is not quite the same process as surveying customers, and a survey researcher can draw on many more techniques to uncover and correct potential problems.

Likelihood to Vote

Normally with surveys, a random sample is taken and the resulting statistics are assumed to generalize to the population. Usually, such an approach works well – unless, for example, a demographic group is under-represented and responds differently from the rest. In that situation, researchers can overweight the group to get a truer estimate of how ALL customers feel.
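As a rough illustration, here is a minimal Python sketch of that kind of post-stratification weighting. All of the numbers are hypothetical: an under-represented age group is reweighted up to its known share of the customer base so the overall score better reflects ALL customers.

```python
# Minimal sketch of post-stratification weighting; all data are hypothetical.
# Each group's responses are reweighted so the sample matches the group's
# known share of the customer population.

# Observed sample: group -> (respondent count, mean satisfaction score)
sample = {"18-34": (50, 7.2), "35-54": (120, 8.1), "55+": (30, 8.6)}

# Known population shares, e.g. from customer records
population_share = {"18-34": 0.40, "35-54": 0.40, "55+": 0.20}

total_n = sum(n for n, _ in sample.values())
unweighted = sum(n * mean for n, mean in sample.values()) / total_n

# Weight each group by (population share / sample share); groups that are
# under-represented in the sample get a weight greater than 1.
weighted = 0.0
for group, (n, mean) in sample.items():
    sample_share = n / total_n
    weight = population_share[group] / sample_share
    weighted += mean * sample_share * weight  # equals mean * population share

print(f"Unweighted mean satisfaction: {unweighted:.2f}")
print(f"Weighted mean satisfaction:   {weighted:.2f}")
```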

That might seem very similar to a poll. Pollsters may weight on several demographics, including age, income, education and race. However, there is one important difference: pollsters are NOT trying to infer how the election would turn out if everyone voted. They must also estimate the likelihood that each group will actually turn out to vote. In a survey, researchers don’t care about the probability of customers participating in some future “grand” survey – only about how the survey results apply to all customers.
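To see why that difference matters, here is a hedged extension of the sketch above, again with entirely hypothetical numbers: once each group is weighted by an estimated turnout probability, the “likely voter” result can land on the other side of 50% from the “everyone votes” result.

```python
# Hypothetical likely-voter adjustment: unlike a customer survey, a poll
# must also weight each group by its estimated probability of voting.

# group -> (population share, estimated turnout probability, support for candidate A)
groups = {
    "college":    (0.35, 0.80, 0.55),
    "no_college": (0.45, 0.60, 0.40),
    "young":      (0.20, 0.35, 0.65),
}

# What a customer-style survey estimates: support if everyone "voted"
everyone = sum(share * support for share, _, support in groups.values())

# What a pollster must estimate: support among those likely to turn out
turnout_mass = sum(share * p_vote for share, p_vote, _ in groups.values())
likely = sum(share * p_vote * support
             for share, p_vote, support in groups.values()) / turnout_mass

print(f"Candidate A if everyone voted:   {everyone:.1%}")  # just over 50%
print(f"Candidate A among likely voters: {likely:.1%}")    # just under 50%
```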

Social Desirability Bias

Another culprit for the poll discrepancies is known as social desirability bias: respondents may decline to participate in a survey, or alter their responses, based on the perceived reaction of the interviewer. Consequently, individuals who feel that their candidate may generate a negative reaction may not state their true intentions. In this election, researchers have speculated that some voters were reluctant to admit they wouldn’t vote for a woman, or that they would vote for a celebrity who had been labeled as disrespectful to women.

So how can one guard against social desirability bias? Sometimes conducting a poll anonymously through a different medium, like the internet, may be enough. However, let’s revisit the presidential race. There is no doubt that these two candidates were polarizing. Pollsters needed to determine whether people were really voting for the candidate or rather for the proposed “product.” A deeper dive into product issues such as jobs, the economy, trade and Supreme Court nominees may have shed some light on voters’ true intentions.[1]

Even in survey research, such biases need to be taken into account. For example, if a survey indicates that customers are most concerned about green initiatives, additional inquiry is warranted. Customers may want to give that impression while actually caring more about quality and price. Certainly, a manufacturer shouldn’t be told that its green initiatives are the most important aspect of its business unless that result has been cross-validated.
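One common way to cross-validate stated importance is to compare it with derived importance – how strongly each attribute actually tracks overall satisfaction. The sketch below uses made-up ratings as an illustration: customers rate green initiatives highly, yet those ratings barely move with overall satisfaction, while quality and price track it closely.

```python
# Hedged sketch of stated vs. derived importance; all ratings are invented.
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length rating lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Per-respondent attribute ratings and overall satisfaction (1-10 scale)
green   = [9, 8, 9, 7, 8, 9, 8, 9]   # stated ratings are uniformly high...
quality = [6, 9, 5, 8, 4, 9, 7, 5]
price   = [7, 8, 5, 9, 4, 8, 6, 5]
overall = [6, 9, 5, 8, 4, 9, 7, 5]

for name, ratings in [("green", green), ("quality", quality), ("price", price)]:
    # ...but derived importance shows what actually drives satisfaction
    print(f"{name:8s} derived importance (r with overall): {pearson(ratings, overall):+.2f}")
```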

Checking Results with Text Analytics

Another way to investigate social desirability bias is with open-ended questions. People are usually more comfortable explaining their positions than settling on a static score or binary answer. Text analytics techniques can process open-ended answers and flag discrepancies with the closed-ended responses. Clients may also find the reason why someone gave a score more insightful than the closed-ended response itself.
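As a simple illustration of the idea (not a production text analytics pipeline), the sketch below scores verbatim comments with a toy keyword lexicon and flags respondents whose open-ended sentiment contradicts their closed-ended score. The lexicon and responses are invented for the example.

```python
import re

# Toy sentiment lexicon and survey responses, invented for illustration.
POSITIVE = {"great", "love", "excellent", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "rude", "expensive", "disappointed"}

def verbatim_sentiment(text):
    """Positive minus negative keyword hits in an open-ended comment."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# (closed-ended score on a 1-10 scale, open-ended comment)
responses = [
    (9, "Great service, the support team was helpful and fast"),
    (8, "Honestly I was disappointed, delivery was slow and the item arrived broken"),
    (3, "Rude staff and expensive repairs, very disappointed"),
]

for score, comment in responses:
    sentiment = verbatim_sentiment(comment)
    # A high score paired with negative verbatim text (or the reverse)
    # suggests the closed-ended answer deserves a second look.
    if (score >= 7 and sentiment < 0) or (score <= 4 and sentiment > 0):
        print(f"Check respondent: score={score}, sentiment={sentiment}: {comment!r}")
```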

Survey Research Methods Are Sound

There is no doubt that these election results will be analyzed ad nauseam. Certainly, there are other sources of error, such as the fact that the national outcome is decided not by the total popular vote but by the Electoral College. However, you should feel comfortable that survey-based research methods are sound.