Why We Dropped NPS For a Better Customer Success Metric

As a former head of Customer Success, I’ve spent a lot of time working with the Net Promoter Score (NPS). While I agree it’s a fine referral proxy, I don’t see its connection to growth. That’s why we had to drop it.

An old client of mine (let’s call them Banana Stand Inc.) started collecting NPS when they were launching a new SaaS platform for both B2B and B2C users. Among a small early-adopter audience of 120, Banana Stand got what most believed was a “bad” score: an 8. About six years later (and something like $24 million in sales) Banana Stand scored a 12. The scores themselves and the gap between them probably say something about the product, the service, loyalty, and the company; I just don’t know what that is.

Despite the research, I never quite cracked benchmarking. Some scores were as high as 45, while others hit interesting lows like -35. When I presented Banana Stand’s NPS of 8, proposals from the board and executive team all pointed at Customer Success. We used the opportunity to learn a great deal about our firm and our customers. As we worked to improve both the B2B and B2C experiences, we learned that NPS was often useless. An icebreaker at best. In 2011, “training,” “incentives,” and “engagement” all ended up on the CSM priority list, and I had no idea what to work on first. A Marketing colleague even tried to make the case that an 8 was awesome because of how many negative scores there are out there.

To make things trickier, my team looked at NPS by individual customer and user. Many scores confirmed a “gut” hypothesis we had, but many didn’t. When we investigated, good or bad scores were often based on an “event” or action that had recently occurred. The score was also a poor signal for renewal, upsell, or cross-sell. One client, for instance, gave us straight 10s but ended up dropping us, while another gave us 4s yet increased their contract size by 3X.

Frustrated, I turned to the Customer Effort Score (CES), developed by CEB (now Gartner). I set Banana Stand’s CES questions on a scale of 1-4, with no room for neutrality, and we asked them as often as weekly. Although NPS was seen as a CS metric, we pointed CES at every supporting business function and, eureka: that quarter a single aspect of our firm scored “very difficult” across the board for users. Clients were telling us to “fix a new technology”, something the product team thought was a successful enhancement.
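Mechanically, there isn’t much to the scoring. Here is a minimal sketch of how 1-4 effort responses could be tallied per business function to surface that kind of “very difficult” hotspot; the function names, sample scores, and threshold are my own illustrations, not Banana Stand’s actual categories or tooling.

```python
from collections import defaultdict

# Each response is (business_function, score): 1 = "very difficult", 4 = "very easy".
# Function names and sample scores below are purely illustrative.
responses = [
    ("onboarding", 3), ("billing", 4), ("support", 3),
    ("new_reporting_module", 1), ("new_reporting_module", 1), ("new_reporting_module", 2),
]

def effort_hotspots(responses, hard_share=0.5):
    """Return functions where at least `hard_share` of respondents scored 1 ("very difficult")."""
    tallies = defaultdict(lambda: {"hard": 0, "total": 0})
    for function, score in responses:
        tallies[function]["total"] += 1
        if score == 1:
            tallies[function]["hard"] += 1
    return {
        function: round(t["hard"] / t["total"], 2)
        for function, t in tallies.items()
        if t["hard"] / t["total"] >= hard_share
    }

print(effort_hotspots(responses))  # {'new_reporting_module': 0.67}
```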

Working now with lean scaleups, I’m frustrated to see colleagues struggling with the same issue. This is why NPS, especially for B2B scaleups, has so little utility:

1. Didn’t Foster Commercial Alignment – Despite our focus from day one on commercial teams like Marketing, Sales, and Customer Support, NPS was more of a boardroom icebreaker than an instrument of alignment. The dark side of NPS’s simplicity was that it framed everything in terms of “WE”: “WE got an 8… WE need to do XYZ about it.” But “we” turned out to be hard to hold accountable for specific enhancements, and “we” didn’t do much to change anything. If you want to rate the performance of Customer Success reps, then do that. If you want to rate your customer experience, do that. But please don’t waste customer time assembling an amalgam metric! If NPS could be parsed into the customer service experience versus the overall customer experience, I would find more value in it.

“people have lots of ideas about what “we” should do … but “we” doesn’t do it” ― Brian J. Robertson, Holacracy

 

2. Weak Pulse on Client Health – Another dilemma for me (we collected through SurveyMonkey and Marketo) was that users stopped responding to the NPS question after about 1.5 years, or around 4-6 surveys, of working with us. When I asked why on check-ins, most champions and buyers would say, “I’ve given you guys great feedback a bunch of times, right?” The same was true of low scores. Often the detractor was a user who would say, “I told you guys how irritated I was about that bug, right?”

3. It Is a Fallacious Average – Averages are nice to have, but in commercial situations I need more context. As Nassim Nicholas Taleb warns in The Black Swan, “Don’t cross a river if it is four feet deep on average.” This was a downside of the word “net” in NPS for me. What was missing was the client variability within each license of the service, among colleagues, and from one cycle to the next. NPS was too opaque to give us visibility into the textured reality on the ground. We also found that our “passives” always averaged around 32% of the total, and passives never got figured into anything. They didn’t get included in the NPS math, so they didn’t make it onto our interest list either (the short arithmetic sketch below shows why).
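For anyone who hasn’t run the numbers, the standard NPS arithmetic fits in a few lines. The sample below is made up to roughly mirror our situation (a score of 8 with about a third of respondents passive); it shows how passives affect nothing but the denominator.

```python
def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6); passives (7-8) are ignored."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative only: 120 early adopters -- 45 promoters, 40 passives, 35 detractors.
sample = [10] * 45 + [8] * 40 + [4] * 35
print(nps(sample))  # 8 -- the 40 passives influence nothing except the denominator
```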

4. Narcissistic – NPS is THE vanity metric on a company’s dashboard. Commercial teams already battle an ego stereotype; they don’t need to be collecting a vanity score on top of it. “Hey customer, thanks for buying… now tell us how awesome we are?!” Don’t get me wrong, I want to know who our fans and critics are, but I don’t want to force people into a fan-or-critic box. That’s kind of manipulative (another stereotype we’re hoping to avoid in the commercial world). In the end, it didn’t feel like we were really asking the question for the customer’s sake. We weren’t using the data to the fullest anyway.

“Not adding value is the same as taking it away” – Seth Godin

 

5. Slow – Admittedly, the way I collected NPS is also to blame. We conducted NPS at most four times a year; in the beginning, it was once. That never matched our onboarding schedule, and it didn’t reflect those “event”-based situations. It was as awkward as running lots of team projects during the year but holding only one employee performance review. Delivering NPS this way limited our sensitivity, as a startup, to a changing business.

6. Did Not Inform Growth – This was the coup de grâce for me. The claims for NPS have been that it is “the best predictor of growth” (Reichheld, 2003), “the single most reliable indicator of a company’s ability to grow” (Netpromoter.com, 2007), and that “CSAT lacks a demonstrable connection to… growth” (Reichheld, 2003). As a B2B growth expert, I agree that CSAT is the weaker measure, but the other claims are dubious. Sure, I tried to supplement this question with others, but that just diluted the case that this was “the ultimate question”!

So, why do we use Customer Effort Score (CES)?

1. It Targets Friction – Friction is the enemy of human performance. It’s the proverbial wind in your face. We live in a nonlinear, semi-chaotic world in which our endeavors collide and clash with the wills of others (customers, partners, suppliers, competitors, etc.). It’s unavoidable. Friction is why we see gaps in expected revenue, gaps between how we think people will behave and how they actually behave, and why the information we have is never enough. The secret for us is aligning teamwork to individual aspects of the process; I learned a lot about dealing with team alignment and friction from Stephen Bungay’s book “The Art of Action.” Focusing on the customer’s effort helps us target friction and choose the countermeasures we can put in place.

 

2. It’s Iterative and Adaptive – The firms I work with now are recurring-revenue, subscription, and so-called “platform” businesses. They are often technology-based and running some form of Agile workflow, and Customer Effort is tailor-made for them. CES allows us to pulse clients on all the little things that factor into our product roadmap, weekly stand-ups, campaign updates, and messaging tweaks. We can also focus on those “very difficult” responses and still let users skip a question that doesn’t apply. We ask questions like “How much effort did you have to exert?” Or something like this:
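The exact wording varies, but as a rough sketch, a single pulse question might be defined something like the snippet below; the field names, owner, and phrasing are illustrative assumptions, not our actual survey schema.

```python
# Illustrative only: one weekly CES pulse question, with no neutral midpoint and an explicit owner.
ces_question = {
    "prompt": "How much effort did you have to exert to complete your monthly report?",
    "scale": {
        1: "Very difficult",
        2: "Difficult",
        3: "Easy",
        4: "Very easy",  # four points, so there is no neutral middle option
    },
    "allow_skip": True,   # users can skip a question that doesn't apply to them
    "owner": "Product",   # the business function accountable for this slice of effort
    "cadence": "weekly",
}
```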

3. Customer-Centric – Shifting the focus to client effort and away from our own approval is important. You could say we’re in the dark on overall advocacy, but at least we don’t muddy empathy for the client with our own desire for endorsement.

 

“Obsess about your customer’s VEX and VISION, not your product” 

 

4. It Points to Specific Improvements – CES helps us identify a problem and an owner. NPS, at its finest granularity, only points from a person to the overall company. So while NPS can seem like an initiative killer, CES has an orientation toward action. Our customers’ success is our success (and our failure), so what matters most is identifying a specific project category for someone to work on.

As Nick Toman and team point out in their research, the hidden factor in customer service today is making things easy. Don’t worry so much about delight, promotion, and approval. That’s easier said than done, of course, but it’s a nice rule to follow.

The Real Growth Driver

If Customer Success teams can become friction killers, a fascinating possibility emerges. The CSM becomes a consultant minimizing gaps in data, performance, plans, and actions, both for internal and external stakeholders. And if 70-95% of recurring revenue has the potential to come from Customer Success, removing this friction is central to scale.

Source: Gainsight
