What Is a Net Promoter Score and How Does It Work?

NPS stands for Net Promoter Score, a single-number metric that measures how likely your customers are to recommend your company to someone else. It runs from -100 to +100, and it’s built on one deceptively simple question: “How likely are you to recommend us to a friend or colleague?” Two-thirds of the Fortune 1000 use it, making it one of the most widely adopted customer loyalty metrics in business.

How NPS Works

Customers answer the recommendation question on a scale of 0 to 10. Based on their response, they fall into one of three groups:

  • Promoters (9 or 10): Enthusiastic customers who are likely to keep buying and refer others.
  • Passives (7 or 8): Satisfied but unenthusiastic. They won’t actively promote you and could be swayed by a competitor.
  • Detractors (0 through 6): Unhappy customers who may discourage others from doing business with you.

The formula is straightforward: take the percentage of Promoters and subtract the percentage of Detractors. Passives don’t factor into the math. So if 60% of respondents are Promoters and 20% are Detractors, your NPS is +40. The score can range anywhere from -100 (every single customer is a Detractor) to +100 (everyone is a Promoter).
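The bucketing and subtraction above can be sketched in a few lines of Python; the function name is illustrative, and the sample data reproduces the 60/20/20 example from the text:

```python
def nps(scores):
    """Return the Net Promoter Score (-100 to +100) for a list of 0-10 ratings."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    # Passives (7 or 8) count toward the total but not the score.
    return round(100 * (promoters - detractors) / len(scores))

# 60% Promoters, 20% Passives, 20% Detractors -> +40
responses = [10] * 60 + [7] * 20 + [3] * 20
print(nps(responses))  # 40
```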

Fred Reichheld and a team at Bain & Company developed the metric after researching which single survey question best predicted real customer behavior. High scores on the recommendation question correlated strongly with repurchases, referrals, and long-term revenue growth, which is why Reichheld called it “The Ultimate Question.”

What Counts as a Good Score

Any score above 0 means you have more Promoters than Detractors, which is a positive signal. Beyond that, the general benchmarks break down like this:

  • 0 to 30: Good
  • 31 to 70: Great
  • 71 to 100: World-class

These ranges only tell part of the story, though. What’s “good” depends heavily on your industry. In 2025, the average NPS for SaaS and tech companies is around 42, while e-commerce and retail average 36, financial services sit at 37, and healthcare comes in around 31. Top performers in SaaS reach scores of 60 or higher. A healthcare company scoring 50 would be exceptional, while the same score in tech would simply be solid. Always compare against your own industry and, more importantly, against your own past scores.

The Follow-Up Question Matters More

The number alone tells you where you stand. The follow-up question tells you why. Most companies pair the 0-to-10 rating with an open-ended prompt like “What is the most important reason for your score?” or “What can we do to improve your experience?” Some tailor the follow-up based on the score: Detractors might see “Our apologies for not meeting your needs. Care to tell us why?” while Promoters get “We’re thrilled you feel that way. Would you tell us why?”
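Routing respondents to a score-appropriate follow-up is a simple branch on the same 0-to-10 rating; this sketch reuses the prompt wording quoted above:

```python
def follow_up_prompt(score):
    """Pick an open-ended follow-up question based on the 0-10 rating."""
    if score >= 9:   # Promoter
        return "We're thrilled you feel that way. Would you tell us why?"
    if score >= 7:   # Passive
        return "What is the most important reason for your score?"
    # Detractor (0 through 6)
    return "Our apologies for not meeting your needs. Care to tell us why?"
```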

These qualitative responses are where the real value lives. A score of 25 doesn’t tell you what to fix. But hundreds of written responses mentioning slow shipping times or confusing checkout processes give you something to act on.

Relational vs. Transactional Surveys

There are two ways to deploy NPS, and they answer different questions. Relational NPS measures how customers feel about your company overall. You send it on a regular schedule (quarterly, every six months, or annually) with no specific trigger event. Ideally, the customer isn’t in the middle of a purchase or support interaction when they receive it, since that would color their response.

Transactional NPS captures feedback after a specific moment: a purchase, a support call, onboarding as a new customer, or a major product update. These surveys go out more frequently and produce more actionable data because you can tie the score directly to a particular experience. For product updates or redesigns, it’s worth waiting a few days or weeks before sending the survey so customers have time to adjust rather than reacting to the change itself.

Many companies run both types simultaneously. The relational survey tracks the big picture over time, while transactional surveys pinpoint exactly where the experience breaks down or excels.

Closing the Loop

Collecting scores without acting on them is the most common NPS mistake. The practice of “closing the loop” means following up with customers, especially Detractors, to address their concerns. This might look like a personal email from a support manager, a phone call to understand the issue, or a targeted message letting the customer know their specific complaint led to a change.

Equally important is closing the loop at scale. Companies segment feedback by theme (feature requests, performance complaints, pricing concerns) and route those insights to the teams that can act on them. When changes are made, letting customers know their input shaped the decision reinforces the value of giving feedback in the first place.
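Segmenting feedback by theme can start as simply as keyword tagging. This is a hypothetical sketch: the theme names and keyword lists are illustrative, and real pipelines typically replace the keyword matching with a text classifier.

```python
from collections import Counter

# Illustrative theme -> keyword mapping; a real system would be richer.
THEMES = {
    "shipping": ["shipping", "delivery", "late"],
    "checkout": ["checkout", "cart", "payment"],
    "pricing": ["price", "expensive", "cost"],
}

def tag_themes(comment):
    """Return the list of themes whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, words in THEMES.items()
            if any(w in text for w in words)]

comments = [
    "Shipping was slow and delivery took two weeks",
    "The checkout process was confusing",
    "Love the product but it's too expensive",
]
counts = Counter(t for c in comments for t in tag_themes(c))
print(counts.most_common())
```

Each theme's count can then be routed to the owning team, which is the "at scale" half of closing the loop.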

How NPS Compares to Other Metrics

NPS isn’t the only customer experience metric. Two others show up frequently alongside it, and each serves a different purpose.

Customer Satisfaction Score (CSAT) asks customers to rate their satisfaction with a specific interaction or experience. It’s great for evaluating individual touchpoints, like a support ticket resolution or a checkout process, but it doesn’t capture the broader relationship. A customer might rate a single interaction highly while still planning to leave for a competitor.

Customer Effort Score (CES) measures how easy it was to accomplish something with your company, like resolving an issue or completing a return. It’s a strong predictor of loyalty because customers who have to work hard to get what they need tend not to come back. Like CSAT, though, its scope is narrow. It works best for individual transactions, not overall sentiment.

NPS sits above both of these. It captures general loyalty and willingness to advocate rather than satisfaction with any single moment. Most companies that invest seriously in customer experience use some combination of all three.

Employee NPS

The same methodology applies internally. Employee Net Promoter Score (eNPS) asks workers a parallel question: “How likely are you to recommend this company as a great place to work?” The scoring and calculation are identical. Promoters score 9 or 10, Passives score 7 or 8, Detractors score 0 through 6, and the formula subtracts the Detractor percentage from the Promoter percentage.

Companies that track eNPS over time often find it correlates with their customer-facing NPS. Disengaged employees tend to deliver worse customer experiences. The metric is useful as a pulse check, but it has limitations. A single question can’t capture the full picture of employee motivation, retention risk, or workplace culture. Benchmarking against other companies is also tricky because scores vary significantly across industries and professions. Tracking your own trend over time is more meaningful than comparing to an outside number.

Known Limitations

NPS is popular partly because it’s simple, but that simplicity comes with trade-offs. A score of 8 means “Passive” in the NPS framework, but in many cultures, giving an 8 out of 10 signals strong satisfaction. This cultural variation makes international comparisons unreliable. Research has also found that age affects scoring patterns: people over 70 tend to give lower recommendation scores regardless of their actual satisfaction.

The metric also struggles in situations where customers don’t choose their provider. A systematic review of NPS use in healthcare found that the recommendation question feels odd to patients who can’t easily switch doctors or hospitals. Four studies in that review concluded NPS provided a large volume of data but had minimal impact on actual quality improvement. A simpler “How would you rate your care?” question showed stronger associations with quality indicators and patient experience.

None of this means NPS is useless. It means it works best in competitive markets where customers have genuine choices, and it should be interpreted alongside other data rather than treated as a single source of truth. The score is a starting point for conversation, not the final word.