NPS can be quite controversial. I’ve come across many who feel NPS isn’t right, especially in B2B environments, which have multiple personas in any given account, many touchpoints, and are generally characterized by relationships more than transactions.
Candidly, we’re not big fans of the Net Promoter Score either, but companies are looking for KPIs to report top-line performance, so NPS is often debated. The key question I’d ask those “Net Promoter detractors” is what, specifically, they are objecting to: Is it (1) the “recommend” question, (2) the use of 4 different groups (promoters / passives / detractors / disengaged) in B2B, or (3) the aggregation of data into the Net Promoter Score?
There are pros and cons to every KPI, and that fact shouldn’t slow a company’s desire to improve. I suspect most of the objections to NPS concern the third item above (i.e., the overall score), which frankly is the least important element, and I generally have no heartache about giving it up (with certain caveats):
- In B2B, much of the relationship-strengthening work happens at the account level, not at the segment/aggregation level. This means that equipping account teams with the feedback, so they can use it to drive the right customer outcomes, tends to be the fastest way to benefit from it. Aggregating feedback must always be done with caution, and NPS is no exception, just like CSAT, CLI, CES, or any of the other methods.
- Any good researcher will recognize the need to measure the right outcome as the dependent variable. So if those who oppose the Net Promoter System have a better dependent-variable outcome in mind, I’d be all ears. I don’t see what’s wrong with using “recommend” as the dependent variable: B2B firms grow their business through word of mouth just like consumer businesses do. We could create a question around “customer success” or more specific customer outcomes, but even that is one step removed from a useful outcome for your company’s purposes. And I would definitely steer clear of things like “likelihood to renew” or “likelihood to buy more,” because those smell like sales, not feedback. Also stay away from “satisfaction” and the like, because satisfaction simply isn’t an outcome of action; it’s a sentiment (and one with a very low bar, by the way). So if not “recommend,” then what in its place?
Keeping in mind that “NPS” stands for Net Promoter System (the emphasis hasn’t been on “Score” for at least 10 years), the primary idea is to put customer contacts into the right groups so the company can drive the right treatment strategies. We use the system to categorize customer contacts into 4 groups (promoter / passive / detractor / disengaged — don’t forget those non-respondents, which in B2B environments are a C-R-I-T-I-C-A-L datapoint) that can then drive the right follow-up approach for each. If “Net Promoter detractors” have a better categorization system, then I suppose I’d listen. But since NPS really is an industry standard, know that every minute of explanation about a new methodology or system steals a minute from driving improvements. Personally, I’d rather invest the time in doing the right work, not debating standards (and we all know every standard in the world will have its detractors). This is especially true considering point #1 above: relationship-strengthening work happens at the account level. So if not NPS or Top 2 Box, then what outcome should be measured instead?
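To make the grouping step concrete, here is a minimal sketch in Python. It assumes the standard 0–10 “recommend” scale bands (9–10 promoter, 7–8 passive, 0–6 detractor) and represents non-respondents as `None` so they land in the fourth, disengaged group; the function and variable names are illustrative, not part of any official methodology.

```python
from collections import Counter

def categorize(score):
    """Map a 0-10 recommend score (or None for a non-respondent)
    to one of the four Net Promoter groups."""
    if score is None:
        return "disengaged"   # non-respondent: a critical B2B signal
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def net_promoter_score(scores):
    """Aggregate score: % promoters minus % detractors among respondents."""
    responses = [s for s in scores if s is not None]
    if not responses:
        return None
    counts = Counter(categorize(s) for s in responses)
    return 100 * (counts["promoter"] - counts["detractor"]) / len(responses)

# Example: one account's contacts, including two non-respondents.
scores = [10, 9, 8, 6, None, 3, None]
groups = Counter(categorize(s) for s in scores)
print(groups)                       # counts per group, disengaged included
print(net_promoter_score(scores))   # (2 - 2) / 5 * 100 = 0.0
```

Note how the aggregate score comes out flat (0.0) even though this account has two detractors and two disengaged contacts who each warrant a specific follow-up, which is exactly the point about the score being the least important element.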
I’d enjoy hearing from folks who have a differing point of view. Perhaps I’ve misunderstood the objection, although if it has anything to do with better ways to aggregate data, then I’m not sure we’ll be able to come to an agreement.