
by Maryna Paryvai · February 5, 2024 · Updated on April 17, 2026
Ah, unhappy customers. The not-so-silent killer of business.
Your team can deliver, innovate, and grow. But if customers aren’t happy, none of that matters for long. And you can’t fix what you don’t measure.
So how do you actually measure customer happiness?
With customer satisfaction metrics. There are dozens of them, which is both good and bad — good because you can pick the ones that fit your business, bad because it’s easy to get lost in a sea of acronyms. This guide covers the nine that actually matter, when to use each, and how to interpret what the numbers are telling you.
Customer satisfaction metrics are the numbers companies use to understand how happy customers are with their product, service, and overall experience. They give you a feedback loop — without one, you’re just guessing about what customers think.
Some metrics shed light on specific interactions (a support conversation, a product onboarding). Others capture the bigger picture (overall loyalty, retention, revenue impact). The best teams use a mix: one or two moment-to-moment metrics, one or two relationship-level metrics, and one financial metric.
Let’s dig in.
Net Promoter Score (NPS) is a customer satisfaction metric that gauges loyalty based on one question: how likely is a customer to recommend you?
If someone will enthusiastically tell their friends about your product, it’s a strong sign they’re happy with what you’ve built.
NPS is based on a single survey question:
How likely would you be to recommend X to a friend or colleague?
Respondents rate on a scale from 0 to 10. Based on their rating, they fall into one of three groups: promoters (9–10), passives (7–8), and detractors (0–6).
Calculate the percentage of promoters and the percentage of detractors from your total responses. Then subtract detractors from promoters:
NPS = % of promoters − % of detractors
If you got 100 responses — 50 promoters, 30 passives, 20 detractors — your NPS is 30 (50% promoters minus 20% detractors).
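If you're computing this from a raw survey export rather than a dashboard, a minimal Python sketch looks like this:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from raw 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    return (promoters - detractors) / len(scores) * 100

# 50 promoters, 30 passives, 20 detractors -> 30.0
print(nps([10] * 50 + [8] * 30 + [3] * 20))
```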
NPS ranges from −100 (everyone’s a detractor) to 100 (everyone’s a promoter), but real-world scores sit in the middle. Anything above 0 means you have more promoters than detractors. According to recent benchmarking research, the overall NPS benchmark sits around 32. But the bigger signal is the trend — is your NPS rising or falling over time?
NPS works as a KPI for overall customer satisfaction. In practice, product and marketing teams often use it as a headline KPI, while customer success teams use it as an input for churn prediction and customer health scoring.
The good: NPS gives you a single number that summarizes overall loyalty, easy to track over time.
The bad: “Likelihood to recommend” doesn’t always correlate with actual behavior. It also doesn’t tell you why — you need to pair it with open-text feedback to act on it.
Customer satisfaction score (CSAT) measures how happy customers are with a specific interaction — a support conversation, a sales demo, a feature they just used. It's a snapshot, not a long-term relationship metric.
The survey usually asks something like:
How satisfied were you with your recent experience?
Customers answer on a scale (1–5 is most common), and you measure the percentage who gave a positive rating.
Count the number of “satisfied” responses (usually 4s and 5s on a 5-point scale) and divide by total responses:
CSAT = (satisfied ratings / total ratings) × 100%
If 100 people respond and 80 of them rate 4 or 5, your CSAT is 80%.
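The same calculation in Python, assuming a 1–5 scale where 4s and 5s count as satisfied:

```python
def csat(ratings: list[int]) -> float:
    """CSAT: share of 4s and 5s on a 1-5 scale, as a percentage."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied / len(ratings) * 100

print(csat([5] * 60 + [4] * 20 + [2] * 20))  # 80.0
```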
CSAT ranges from 0% to 100%. Under 50% is a red flag — more people are leaving unhappy than satisfied. In competitive industries like SaaS and e-commerce, the benchmark is around 80%. A 95% CSAT is realistic for a high-performing team.
You should also see a 5–20% response rate on CSAT surveys. If fewer people are responding, rethink your survey timing, messaging, or channel.
CSAT works best as a follow-up after specific touchpoints: a resolved support conversation, a completed onboarding, a purchase, a sales demo.
Many teams use CSAT as a KPI for individual agents and entire customer-facing teams. Don’t send CSAT after every single interaction — it gets annoying, response rates crater, and the data becomes unreliable.
The good: captures satisfaction at specific moments, actionable when you tie it to who handled the interaction.
The bad: “Satisfaction” is subjective. Cultural differences affect what a 4 vs. a 5 means. Response rates vary, so results don’t always reflect your full customer base.
Customer effort score (CES) measures how easy it is for customers to do something — get their question answered, complete a task, find what they need. It's based on research showing that reducing effort is a better predictor of loyalty than trying to "delight" customers.
Instead of asking about satisfaction, the survey asks:
How easy was it to [resolve your issue / find what you needed / complete your task]?
CES uses a 7-point scale. Divide the number of responses rating 5, 6, or 7 (easy) by total responses:
CES = (ratings of 5, 6, 7 / total ratings) × 100%
If 100 people respond and 60 rate 5 or higher, your CES is 60%.
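Here's the same idea as a Python sketch, assuming the 7-point scale described above:

```python
def ces(ratings: list[int]) -> float:
    """CES: share of 5-7 ratings on a 7-point scale, as a percentage."""
    easy = sum(1 for r in ratings if r >= 5)
    return easy / len(ratings) * 100

print(ces([7] * 30 + [5] * 30 + [3] * 40))  # 60.0
```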
CES ranges from 0% to 100%; higher is better. Because it's a relatively new metric (CEB, now part of Gartner, introduced it in 2010), benchmarking is less mature than for NPS or CSAT. What matters more is your own trend over time.
CES works well after any interaction where "ease" is the thing you want to optimize: resolving a support ticket, completing a task in your product, or finding an answer in your knowledge base.
Timing matters — send the survey immediately, while the experience is fresh.
The good: highly actionable. If CES is low, you know exactly where the friction is.
The bad: can be misleading without context. A low CES might mean your product is genuinely hard to use, or it might mean you serve technical users working on complex problems.
Churn is the rate at which you lose customers. It’s the ultimate satisfaction metric — if customers are canceling, they’re telling you something, even if they never filled out a survey.
Divide the number of customers lost in a period by the number you had at the start:
Churn rate = (customers lost / customers at start of period) × 100%
If you start the month with 100 customers and lose 20, your monthly churn is 20%.
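One thing worth wiring into any churn calculation: monthly churn compounds, so a scary-looking monthly figure is even scarier annualized. A quick sketch:

```python
def monthly_churn(start: int, lost: int) -> float:
    """Monthly churn as a fraction (0-1)."""
    return lost / start

def annualized_churn(monthly: float) -> float:
    """Compound a monthly churn fraction into an annual one."""
    return 1 - (1 - monthly) ** 12

m = monthly_churn(100, 20)  # 0.20
print(f"{m:.0%} monthly is roughly {annualized_churn(m):.0%} annually")
# 20% monthly is roughly 93% annually
```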
Lower is better. Ideally it should be below your growth rate, and under 7% annually for most subscription businesses. Much higher than that, and you’re filling a leaky bucket — every new customer you add is offset by one walking out the door.
Churn rate is critical for any subscription or recurring-revenue business. It’s also a lagging indicator — by the time you see it rise, customers are already gone. So pair it with leading indicators (CSAT, NPS, support ticket volume) that can warn you before churn happens.
The good: directly ties customer satisfaction to business outcomes.
The bad: not actionable on its own. A high churn rate tells you there’s a problem, but not what the problem is.
Retention is the flip side of churn — instead of measuring who leaves, you measure who stays. For some teams, framing it this way is more motivating and maps more cleanly to customer success work.
Customer retention rate (CRR) is calculated from your customer counts at the start and end of a period, excluding any new customers acquired along the way:
CRR = ((Customers at end of period − New customers acquired) / Customers at start of period) × 100%
If you start with 100 customers, gain 30 new ones, and end with 110, your retention rate is (110 − 30) / 100 = 80%.
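In code, the same calculation:

```python
def retention_rate(start: int, end: int, new: int) -> float:
    """CRR: percentage of the starting cohort still around at period end."""
    return (end - new) / start * 100

print(retention_rate(start=100, end=110, new=30))  # 80.0
```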
Retention rate works especially well for customer success teams and account management teams, where the goal is keeping existing customers happy, expanding accounts, and preventing churn. It also surfaces issues earlier than churn rate alone, because it accounts for the fact that new customer acquisition can mask retention problems.
The good: a positive framing that ties to customer success activity.
The bad: like churn, it’s a lagging indicator. You need leading indicators to act before it moves.
Customer lifetime value (CLV) estimates the total revenue you'll earn from a customer across the entire relationship. It's not a direct satisfaction metric, but it tells you whether your satisfaction efforts are paying off — happy customers stay longer and buy more.
The simple version:
CLV = Average purchase value × Purchase frequency × Customer lifespan
For a SaaS business, it’s often calculated as average revenue per customer divided by churn rate. If your average customer pays $100/month and your monthly churn is 5%, CLV is roughly $2,000.
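Both versions are one-liners; here they are as a Python sketch so the units stay explicit:

```python
def clv_simple(avg_purchase_value: float,
               purchase_frequency: float,
               customer_lifespan: float) -> float:
    """CLV = value per purchase x purchases per period x periods retained."""
    return avg_purchase_value * purchase_frequency * customer_lifespan

def clv_saas(avg_monthly_revenue: float, monthly_churn: float) -> float:
    """SaaS shortcut: average monthly revenue per customer / monthly churn."""
    return avg_monthly_revenue / monthly_churn

print(clv_saas(100, 0.05))  # 2000.0
```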
CLV is most useful for decision-making — how much can you afford to spend acquiring a customer? How much should you invest in customer success? It also helps identify which customer segments are most valuable, so you can focus retention efforts where they’ll have the biggest impact.
The good: connects satisfaction work directly to revenue. Easy to justify budgets.
The bad: calculation gets complex for businesses with varied purchase patterns. Also backward-looking — doesn’t tell you what customers think right now.
First response time (FRT) is how long it takes your team to send the first reply to a customer inquiry. It's not satisfaction itself, but it correlates strongly with it — customers who wait hours or days for a first response are already unhappy by the time you write back.
Measure the time between when a customer contacts you and when your team sends the first human response. Average it across a time period (daily, weekly, monthly).
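If your helpdesk doesn't report this out of the box, it's straightforward to compute from timestamps. A sketch, with hypothetical field names (created_at, first_reply_at) standing in for whatever your export uses:

```python
from datetime import datetime

def avg_first_response_minutes(conversations: list[dict]) -> float:
    """Average minutes between a customer's first message and the
    first human reply, across a list of conversations."""
    waits = [
        (c["first_reply_at"] - c["created_at"]).total_seconds() / 60
        for c in conversations
    ]
    return sum(waits) / len(waits)

convos = [
    {"created_at": datetime(2024, 2, 5, 9, 0),
     "first_reply_at": datetime(2024, 2, 5, 9, 12)},
    {"created_at": datetime(2024, 2, 5, 10, 0),
     "first_reply_at": datetime(2024, 2, 5, 10, 30)},
]
print(avg_first_response_minutes(convos))  # 21.0
```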
Benchmarks vary by channel. As a rough rule, customers expect replies within minutes on live chat, within the hour on social media, and within a few hours on email.
FRT is a headline metric for customer support teams. It’s also a useful operational metric — if it’s getting worse, you need more staff, better automation, or better routing.
The good: easy to measure, directly actionable. If FRT is high, you know what to fix.
The bad: fast responses don’t equal good responses. A two-minute reply that misses the question is worse than a ten-minute reply that solves it.
Resolution time is how long it takes to fully resolve a customer’s issue — from the first message to the last. It captures the full experience, not just the first response.
Measure the time between when a conversation starts and when it’s marked closed/resolved. Average it across conversations.
Varies wildly by issue type. Simple billing questions should resolve in minutes. Complex technical issues might take days. Track by category — a long average resolution time for password resets means something different than a long average for bug reports.
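A sketch of that per-category breakdown, again with hypothetical field names:

```python
from collections import defaultdict
from datetime import datetime

def avg_resolution_hours_by_category(tickets: list[dict]) -> dict[str, float]:
    """Average hours from opened to resolved, grouped by ticket category."""
    buckets: dict[str, list[float]] = defaultdict(list)
    for t in tickets:
        hours = (t["resolved_at"] - t["opened_at"]).total_seconds() / 3600
        buckets[t["category"]].append(hours)
    return {cat: sum(h) / len(h) for cat, h in buckets.items()}

tickets = [
    {"category": "billing", "opened_at": datetime(2024, 2, 5, 9),
     "resolved_at": datetime(2024, 2, 5, 10)},
    {"category": "bug", "opened_at": datetime(2024, 2, 5, 9),
     "resolved_at": datetime(2024, 2, 7, 9)},
]
print(avg_resolution_hours_by_category(tickets))
# {'billing': 1.0, 'bug': 48.0}
```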
Resolution time is useful for identifying systemic issues. Is one category of problem taking dramatically longer than others? Is a particular agent’s resolution time an outlier (either good or bad)? Does resolution time correlate with CSAT scores?
The good: captures the full experience, not just the opening exchange.
The bad: can push teams toward rushed closures. Make sure you’re not optimizing for speed at the expense of actually solving the problem.
Customer health score combines multiple signals into a single indicator of how a customer is doing overall. It’s less a metric and more a framework — you pick inputs that predict churn or expansion for your business and combine them into a score.
Every team calculates it differently. Common inputs include product usage (login frequency, feature adoption), support activity (ticket volume, sentiment), survey responses (NPS, CSAT), and billing signals (payment history, upcoming renewals).
Weight them by how predictive they are of your actual outcomes (churn, expansion), then combine into a 0–100 score or a red/yellow/green indicator.
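As a minimal sketch, assuming you've already normalized each input to a 0–1 range (the signal names and weights below are made up for illustration):

```python
def health_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted 0-100 health score from signals normalized to 0-1.
    Weights should sum to 1."""
    return sum(signals[name] * weights[name] for name in weights) * 100

# Hypothetical, pre-normalized inputs
signals = {"product_usage": 0.9, "support_sentiment": 0.6, "nps_response": 0.7}
weights = {"product_usage": 0.5, "support_sentiment": 0.3, "nps_response": 0.2}
print(round(health_score(signals, weights)))  # 77
```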
Health scores are most useful for customer success teams managing a portfolio of accounts. They help prioritize — which accounts need attention this week? Which ones are ripe for expansion?
The good: early warning system. Health scores catch problems before they show up in churn.
The bad: garbage in, garbage out. If your inputs aren’t actually predictive, the score is just a number. Requires ongoing calibration.
Metrics are great for spotting trends and setting KPIs. But a number on a dashboard never tells you the whole story.
The teams that really understand their customers pair metrics with open-ended feedback. Adding a free-text field to your CSAT survey, for example, often reveals that low ratings have nothing to do with your support team — they’re about a specific product issue that’s easy to fix. Without the text field, you’d be troubleshooting the wrong problem.
The opposite happens too. Customers might rate individual interactions highly while being broadly dissatisfied with your product. Or they might churn without ever leaving a negative review, because they were “too polite” to complain.
So treat metrics as a starting point. The number tells you that something is happening. Conversations with customers tell you why. The best customer satisfaction programs use both.
Missive is a collaborative email client that helps teams handle customer support, gather feedback, and automate follow-ups. If you’re trying to close the loop between metrics and actual customer conversations, Missive might be worth a look. Try it free.