Like any other SaaS company, we were dealing with churn: a constant, annoying drip, like a faucet someone didn’t turn off properly. We had a reasonable churn rate of 2.3%. Nothing to write home about, but no reason to burn down the house either.

We set ourselves a target to lower our churn rate. We hadn’t assigned an exact number to this target yet, since we still weren’t clear on a few things:

  1. Why is it that we have churn?

  2. Is the reason internal, external or a combination of both?

  3. Is it even fixable?

  4. Considering all of the above, how long will it take to fix it?

Our first order of business was to research what companies similar to us in the SaaS space had done, successfully or not, to reduce churn. As with any other topic, the breadth of content on How To Reduce Churn is somewhere between overwhelming and nonsensical.

Long story short, we gave a wide array of churn-reducing best practices a try. It didn’t work. It was time to go deeper.

Performance and Satisfaction Deviation

We noticed that, for our company, a major churn crossroads was the one-year contract renewal. This was encouraging and disappointing at the same time: encouraging because it meant customers didn’t proactively jump ship, but disappointing because it told us that in some cases we weren’t attractive enough to garner a renewal.

Now that was, with all humility, odd. If customers are happy with our product during the trial period and the first few months, why does their level of happiness drop with time? Our product is automated and algorithm-based; it performs the same after one month as after ten. Actually, its performance improves over time thanks to the machine-learning elements of the algorithm.

This was our first eureka moment: the realization that satisfaction tends to drift away from performance as time goes by. We sent our Customer Success team to do some sniffing and figure out why customers grow less happy over time.

Customer Success came back from their excavation with an eye-popping haul: customers believed that the impact of our product wasn’t as significant as it had been at the beginning, and that the value it brought them was diminishing with time.

It’s like dating a really beautiful woman (or an exceptionally handsome man, whatever moves your needle). At first you can’t stop marveling at her looks, but after a while the effect wears off; she is obviously still as beautiful as she was a few months back, but you’ve gotten used to it, and once we are used to something, it ceases to impress us.

Perceived vs. True Value: The Gap that Leads to Churn

Here it is, in quasi-equation form: there was a gap between our product’s true value and its perceived value, a gap that widened with time. And this gap was our main reason for churn. So what’s a SaaS to do with this information? Find a way to close the gap.

One of the quotes Customer Success brought us from a customer was, “How can we still know that you give us value?”

And also, “At first we saw the impact of your product, but now we’re not sure anymore.”

Off we ran to fetch the data files of these two companies and checked their performance since they had signed up with us. Here is one:

This immediately and concretely confirmed our value-gap theory. Here was a company benefiting from the consistent performance of our algorithm, yet still not entirely convinced we were worth its money.

At this point we thought we could wrap this up pretty quickly. We sent this very graph to the company, saying: see? Your conversion rate shows a steady increase over the last nine months. This is what we promised our algorithm would do.

Their response took us by surprise, and we’re paraphrasing here: “Yeah, but how do you know this is thanks to your algorithm? We do plenty of marketing stuff ourselves, the conversions can be attributed to our doings.”

They were absolutely, 100% right. How could we know?

Super Quick Explanation of What We Do to Make Things Clearer

We have an algorithm that runs on customers’ sites in order to deepen engagement and increase visitor-to-lead conversion rate. The algorithm first “learns” the site’s content assets (applying textual and semantic analysis) and then, in real time, analyzes visitor behavior, recommending the most relevant content for them at suitable points during the natural browsing flow.
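
For the technically curious, here is a toy sketch of that flow in Python. It is a deliberately simplified stand-in, not our production algorithm: plain TF-IDF plays the role of the textual and semantic analysis, cosine similarity picks the asset closest to what the visitor has browsed so far, and all the asset names and texts are made up for illustration.

```python
# Toy sketch of the recommendation flow -- a simplified stand-in,
# not the production algorithm. TF-IDF plays the role of the textual
# analysis; cosine similarity picks the closest content asset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical content assets "learned" from a customer's site.
assets = {
    "ebook-lead-gen": "A practical guide to generating more B2B leads",
    "webinar-cro": "Webinar recording on conversion rate optimization tactics",
    "case-study-saas": "How a SaaS company doubled its trial signups in a quarter",
}

vectorizer = TfidfVectorizer(stop_words="english")
asset_matrix = vectorizer.fit_transform(assets.values())

def recommend(visited_page_texts):
    """Return the asset most similar to the visitor's browsing so far."""
    profile = vectorizer.transform([" ".join(visited_page_texts)])
    scores = cosine_similarity(profile, asset_matrix)[0]
    best = scores.argmax()
    return list(assets)[best], float(scores[best])

asset_id, score = recommend([
    "pricing page for our marketing automation platform",
    "blog post about improving lead conversion",
])
print(asset_id, round(score, 3))
```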

So we are measured by two things:

  1. Level of engagement

  2. Rate of conversion

It’s a straightforward affair (or so we thought). We insert our line of code into the site’s pages, customers see a boost in engagement and conversion, those levels hold and even climb slightly as time goes by, and everybody’s happy.

Two things we realized upon receiving the reply, “How do you know this is thanks to your algorithm?”:

  1. Once the initial wow of the engagement and conversion boost wears off, customers accept the new numbers as the performance baseline, basically forgetting how their site performed before BrightInfo.

  2. We know it’s thanks to our algorithm, but we fail to prove it to our customers.

Isolating Your Impact and Presenting It

A possible solution to these two problems was to record the customer’s website performance prior to installing our algorithm and, when memories faded, refresh them with a quick reminder. This solution didn’t hold water, for two reasons:

  1. It felt petty, undignified. Were we on our way to developing a reminder app?

  2. It didn’t take into account the passage of time and the effect of the customer’s other marketing and CRO efforts.

We were zooming in on the solution. We just needed to isolate BrightInfo’s impact on the site’s performance and be able to present it to customers. From a value perspective, we had to find a way to prove the value we bring to our customers on an ongoing basis, in a way that is beyond reproach.

The best way to isolate and measure performance on the web is A/B testing; any landing page will tell you that. And if it’s good enough for a landing page, there’s no reason it shouldn’t work on a bundle of pages, a.k.a. a website.
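
To make “isolating performance” concrete, here is a simplified sketch, with made-up numbers, of the comparison an A/B test boils down to: a standard two-proportion z-test asking whether visitors who saw the algorithm converted significantly better than the control group. This illustrates the statistics, not our actual analytics code.

```python
# Simplified sketch of the comparison behind an A/B test -- made-up
# numbers, not real analytics code. Question: did visitors who saw
# the algorithm (A) convert significantly better than the control (B)?
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """One-sided two-proportion z-test: is A's conversion rate higher?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    p_value = 1 - NormalDist().cdf(z)  # chance the lift is just noise
    return p_a, p_b, z, p_value

# Hypothetical month of traffic, split 50-50.
p_a, p_b, z, p = two_proportion_z(600, 10_000, 450, 10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.1e}")
```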

Proof of Value

This is how we would prove the value of our product to our customers: we would run our algorithm in a continuous A/B mode, with and without BrightInfo. This way customers would be able to see, at all times, our impact on their website’s engagement and conversion rates.
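
Mechanically, continuous A/B mode requires each visitor to be assigned to a variant once and to stay there across sessions. Here is a simplified sketch of one common way to do that, deterministic hash-based bucketing; it illustrates the idea rather than reproducing our production code, and the control share shown is hypothetical.

```python
# Simplified sketch of continuous A/B assignment -- illustrative, not
# production code. Hashing the visitor ID gives each visitor a stable
# bucket, so they always see the same variant across sessions.
import hashlib

CONTROL_FRACTION = 0.05  # hypothetical share of traffic held out as "B"

def assign_variant(visitor_id: str) -> str:
    """Map a visitor to 'A' (algorithm on) or 'B' (control, untouched site)."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "B" if bucket < CONTROL_FRACTION else "A"

# The same cookie always lands in the same group, visit after visit.
print(assign_variant("visitor-cookie-8f3a"))
print(assign_variant("visitor-cookie-8f3a"))  # identical result
```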

Two problems with this:

  1. We are exposing only half of the website’s visitors to our algorithm, thus not maximizing the performance of our own product.

  2. What if we found out that our value wasn’t as great as we had assumed, based on customers’ reporting?

To solve the first problem, we allocate the smallest possible control group (the B mode, without BrightInfo) that still maintains statistical significance, so that our algorithm reaches as much of the traffic as possible. The exact share depends on each site’s traffic volume, but it falls in the range of 1.5% to 5% of overall traffic. This way a customer can, at any point in time, see how their website performs without BrightInfo.
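
Here is a back-of-the-envelope calculation showing why the control share shrinks as traffic grows, using the standard two-proportion sample-size formula. The conversion rates, lift, and traffic figures are illustrative assumptions, not our actual thresholds.

```python
# Back-of-the-envelope sizing of the control group -- illustrative
# assumptions, not actual thresholds. The standard two-proportion
# sample-size formula estimates how many control visitors are needed
# to detect a given conversion lift; the bigger the site, the smaller
# that number is as a share of total traffic.
from statistics import NormalDist

def min_control_fraction(monthly_traffic, base_rate, lifted_rate,
                         alpha=0.05, power=0.80):
    """Smallest control-group share that can still detect the lift."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # significance threshold
    z_power = z.inv_cdf(power)          # sensitivity threshold
    variance = (base_rate * (1 - base_rate)
                + lifted_rate * (1 - lifted_rate))
    n_control = ((z_alpha + z_power) ** 2 * variance
                 / (lifted_rate - base_rate) ** 2)
    return n_control / monthly_traffic

# Hypothetical goal: detect a lift from 3.0% to 3.6% conversion.
for traffic in (300_000, 1_000_000):
    share = min_control_fraction(traffic, 0.030, 0.036)
    print(f"{traffic:>9,} visitors/month -> control ~ {share:.1%}")
```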

To put it bluntly, they can see, right in front of their eyes, the immediate drop in engagement and conversion rate that awaits them if they choose to discontinue their contract with us. (This isn’t showing off in any way. It’s simply the way we found to prove, beyond doubt and on an ongoing basis, the value our product brings to our customers.)

This is how it looks in their control panel:

[Screenshot: the customer’s control panel report, August 19 to September 18, 2016]

The second problem proved more elusive and, to be honest, much scarier. It is not every day that a company puts its product to such a rigorous, quantified test, across all of its customers, on an ongoing basis. We would be left with nothing to hide behind.

On the bright side, it would push us as a company to improve our performance. With this level of transparency, you have no choice.

Since you are reading this, you are right to assume it worked. Within four months of running our product in A/B mode, we reduced our churn from 2.3% to 0.5%.

On top of that, the A/B mode became our most effective sales tool and a valuable instrument for showcasing our value inside our customers’ companies. One could say we handed marketing departments a strong argument for justifying the cost of BrightInfo to the CFO… During the trial period we run the A/B test in 50-50 mode, providing the clearest and most straightforward measurement of our product’s impact. As mentioned before, once a customer signs with us, we tune the B group down to the lowest percentage that remains statistically significant, to ensure the full effect of our algorithm.

Running constant, ongoing A/B tests site-wide as well as platform-wide (i.e., across all of our customers) allows our algorithm to improve continuously, both overall and at the level of a specific site. In doing so we provide steadily growing added value to all our current and future customers, who benefit from one another’s data; we’ve created a system that feeds its own growth.

The whole process enabled us to shift the discussion from pricing to value and ROI. And that’s where we want to be, where we feel most comfortable, confident and relevant.