March 7, 2024
Property management email templates (with examples)
Property management email templates for applications, move-ins, maintenance, rent reminders, renewals, and move-outs. Copy, paste, and personalize.
The best property management email templates cover the six moments that repeat every month: applications (auto-reply, approved, declined, waitlist), move-in, maintenance requests, rent reminders, renewals, and move-outs. Save them as canned responses in your email tool so your team can fire off a polished, on-brand reply without rewriting it each time.
Managing properties comes with a lot of communication. Whether you’re emailing potential tenants or resolving maintenance issues, there’s only so much you can handle one-off.
To help you build and maintain good landlord-tenant relationships, we’ve put together email templates that make it quicker to respond to maintenance requests, send rent reminders, follow up with applicants, and more. Let’s jump in.
We’ve pulled together the templates you’ll actually use in daily property management work, with variable placeholders you can fill in.
If you’re using Missive, a collaborative email client for teams, you can copy/paste these directly into your canned responses and share them with your team so everyone sends the same clean message.
First impressions matter, especially when it comes to attracting and retaining quality tenants. The application process sets the tone and often influences whether a tenant decides to move into one of your properties.
You might be thinking:
Wait, I don’t need an email template for applications; most of my leads come from Facebook Marketplace. I don’t do email marketing.
The good news is that these templates save time whether you’re sending via email or Facebook Messenger. With Missive, you can manage your Facebook Messenger account alongside email and reuse the same templates across both.
A template for an auto-reply when you receive a rental application:
When an applicant passes the credit check and is approved, here’s a follow-up email template to use:
Some applicants won’t be approved. Here’s a template to make declining straightforward and respectful:
Last one for applications: for when you need to tell an applicant they’re on the waiting list.
A welcoming email with all the info new tenants need for their move-in kicks off the relationship well. A good message makes them feel valued and cuts down on the “wait, what time?” questions.
Most of the emails filling up your inbox as a property manager are maintenance requests. Replying quickly and letting the resident know you’re taking care of the issue is how you keep them happy.
Acknowledge the request right away so the resident knows it’s on your radar:
Once the issue is fixed, a quick follow-up shows you care about the resident’s experience:
Two messages (acknowledge + resolution follow-up) set the right expectation and cut down on the “any update?” emails.
If a resident is late on rent, a firmer reminder is in order. A few tips:
Here’s a template:
Timely renewal notices are how you retain residents and avoid vacancies. Start at least 90 days before the lease expires (adjust for your local laws). The email should spell out any changes and give a clear deadline for notice to vacate.
A template:
When a resident decides to move out, you’ll need to communicate all the info they need for the process. It can feel like a lot, especially if you manage many properties, but templates handle most of it.
A few tips before the template:
With those in mind, here’s the template:
Being a good property manager isn’t just about caring for brick and mortar; it’s also about nurturing relationships. Whether you’re a manager or a landlord, a few email best practices save time and avoid misunderstandings:
Master property management email communication and you’ll deliver five-star service, operate efficiently, and support your team.
Use a shared inbox tool so your whole team can see every tenant conversation, assign the right person, and avoid duplicate replies. Missive, for example, lets multiple leasing agents and maintenance staff work the same inbox without forwarding or sharing passwords.
Save them as canned responses in your email client. That way, any team member can pull up the right template in one click and personalize the variables before sending. If you’re using Missive, you can share templates across your team so everyone sends the same polished message.
Same-day acknowledgment for anything, even if full resolution takes longer. For maintenance issues, acknowledge within a few hours and set a clear expectation for when you’ll follow up. For applications, reply within 24-48 hours. For response-time SLAs, 4 business hours is a solid benchmark for tenant communication.
Yes, where you can. Rent reminders, late payment notices, and renewal notices all follow the same pattern every month or year, which makes them perfect for automation. Use your property management software’s built-in reminders, or set up rules in your email tool to handle the routine nudges and free your time for the stuff that actually needs a human.
Acknowledge the complaint quickly (within a few hours), validate their frustration, and give a clear timeline for resolution. For anything complex or emotional, switch to a phone call after the first email response. Written channels are better for records; voice is better for tone. Missive lets you track every step of a complaint across email, SMS, and calls in the same thread so nothing falls through the cracks.
February 23, 2024
How to avoid emails going to spam
Emails end up in spam for four main reasons: list management, content, DNS authentication, and reputation monitoring. Here’s how to fix each one and improve your deliverability.
Emails end up in spam for four main reasons: poor list management (sending to unengaged or unconsented addresses), low-quality or keyword-stuffed content, weak DNS authentication (missing or misconfigured SPF, DKIM, DMARC, and BIMI records), and poor sender reputation monitoring. Fix all four and your deliverability improves substantially.
Every day, approximately 350 billion emails are sent and received. Of those, more than 45% end up in spam. That’s a massive hit for businesses: marketing emails don’t reach subscribers, transactional emails don’t inform customers, and teams struggle to communicate effectively.

Email deliverability is something of a black box, much like SEO. The rules change often and aren’t fully disclosed by major Email Service Providers (ESPs) like Google, Apple, and Microsoft.
Sometimes they are disclosed, as with Google and Yahoo’s stricter sender authentication requirements, which took effect in February 2024 and have tightened since. More often, the rules stay opaque.
The good news: even with that uncertainty, you can significantly improve your email deliverability. If your messages are getting lost in the spam folder, read on; we’ll cover why emails end up there and how to prevent it.
Before we dive into why your emails end up in spam, let’s start with a distinction that trips up most people:
Just because your emails show as “delivered” in your sending tool (bounce rate, delivery rate) doesn’t mean they’re actually reaching the recipient’s inbox.
Email deliverability is the odds that your email makes it to the inbox and not to spam. “Delivered” just means the recipient’s server accepted it; spam filtering happens after that.
Emails trigger spam filters for many reasons, but the story usually comes down to four pillars:
Avoid these red flags and your emails will land in the inbox far more often.
How you collect emails and build your subscriber list matters a lot. If you use a deceptive method to grab email addresses and then send unsolicited messages, those recipients will be unhappy, and unhappy recipients mark you as spam.
Use an opt-in form that clearly tells users they’ll receive content from you by checking a box or a similar mechanism. Be clear, not sneaky.
Make it easy to unsubscribe. Don’t hide the link in gray-on-white at the bottom of your template. People who can’t unsubscribe flag your email as spam, which damages your reputation.
Google and Yahoo now require an unsubscribe button directly in the header of bulk email.

Use a third-party tool to remove deactivated or banned accounts from your list. Those create hard bounces, and hard bounces hurt your reputation.
We like Neverbounce for this.
If a group of subscribers hasn’t opened a single email in the last six months, send them a message asking if they’re still interested. Emails that consistently get ignored are likely to be flagged as spam, which isn’t good for your sender reputation and isn’t a great final touchpoint with your brand either. Be kind and warm about it; let them sail into the sunset if that’s their wish.
The content you send matters more than almost anything else. People’s time is valuable, so when you ask for their attention, make sure what you’re sending is worth the read.
A quick checklist before hitting send:
Authenticating and securing your emails is a crucial step in making sure they reach the inbox. It’s often overlooked and one of the easiest wins for deliverability.
There’s a tight link between security and compliance here. ESPs want to reduce spam, scams, and phishing. To support that, they favor domains with well-configured security and authentication protocols in their DNS.
This part is tricky to set up but incredibly valuable. Done well, it can lift open rates by as much as 39% and purchase likelihood by as much as 34%.
So what are these authentication and security protocols?
DNS is the address book of the internet. Computers use DNS to look up domain names and find the corresponding IP addresses needed to connect to websites, servers, and other resources.
DNS is also where ESPs like Google, Apple, and Microsoft check how your emails are secured and authenticated:
Let’s go through each one.
SPF records are like a guest list for sending emails. An SPF record is a line of text that specifies which domains or IP addresses are permitted to send emails on behalf of your domain. It lives in your DNS manager, under TXT records.
Here’s an example SPF record:
v=spf1 ip4:192.0.2.0/24 ip6:2001:db8::/32 include:_spf.example.com ~all
If an email from your domain is sent to a recipient server from an IP not on your SPF list, deliverability takes a hit.
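To make the record’s structure concrete, here’s a minimal Python sketch that splits an SPF record into its mechanisms and its final “all” qualifier. It’s illustrative only, not a full validator: a real checker should follow RFC 7208 and resolve `include:` and `redirect=` terms recursively.

```python
def parse_spf(record: str) -> dict:
    """Split an SPF TXT record into its mechanisms and its 'all' qualifier."""
    parts = record.split()
    if not parts or parts[0].lower() != "v=spf1":
        raise ValueError("not an SPF record")
    mechanisms, all_qualifier = [], None
    for term in parts[1:]:
        if term.lstrip("+-~?").lower() == "all":
            # ~all = softfail, -all = hardfail, ?all = neutral, +all = pass
            all_qualifier = term[0] if term[0] in "+-~?" else "+"
        else:
            mechanisms.append(term)
    return {"mechanisms": mechanisms, "all": all_qualifier}

record = "v=spf1 ip4:192.0.2.0/24 ip6:2001:db8::/32 include:_spf.example.com ~all"
parsed = parse_spf(record)
# parsed["all"] is "~" (softfail); parsed["mechanisms"] lists the allowed senders
```

The `~all` at the end matters: it tells recipient servers to treat mail from any sender not on the list as suspect.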
A quick tip: to check whether your DNS is configured properly and your email has a good chance of reaching the inbox, use Palisade’s free Email Deliverability Score tool. It audits your DNS configuration and suggests improvements.
DKIM records add a digital signature to your emails that proves they’re authentic when they arrive at the recipient server. Think of it like the signature on the back of your credit card.
Each third-party service you use with your domain typically needs its own DKIM key and record.
Here’s an example DKIM record:
v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAgAS4QZzH+/iM5ilpxexFK7uVnX5OasDMW61p7IvUjM+488QnpLqDTlsvGdJtG/oHgwRpXcNSxKKhtX3R4zg0MoSdLJYTEMiirr8UdeuGng/ZKM2XtLa+qGve6kp3H5NBx2uYHVj+E0WANeRT3bK5sMVRTYSAywN/m9ugX5T5PkbvJ2HRTmrX00ov4/VoVFSbfHZzaA/FDX/hyFnWEiOb1JihArP2+cMs+CYgIi7u8t+p0FqR/37kuEh5PLxOct/fnhqjn35XPn8C1s2fAC5J2WZjmmC5QM2qYV90isu03jeCI7Vap9ocKj5P+qJAlooYNujICd84ZmcHeA2UJqj22QIDAQAB
DMARC protects your domain from people trying to send fake emails (phishing, spam) on your behalf.
The DMARC policy is central to deliverability and security. It tells recipient servers what to do if the emails they receive from you aren’t authenticated properly through your SPF or DKIM (often called alignment).
Here’s an example DMARC record:
v=DMARC1; p=none; rua=mailto:dmarc@palisade.email; ruf=mailto:dmarc@palisade.email; fo=1;
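DMARC records (like DKIM and BIMI records) are just semicolon-separated tag=value pairs, so they’re easy to inspect programmatically. A minimal, illustrative Python sketch (a production checker should validate tags per RFC 7489):

```python
def parse_tags(record: str) -> dict:
    """Parse a semicolon-separated tag=value record (DMARC, DKIM, BIMI) into a dict."""
    pairs = (item.split("=", 1) for item in record.split(";") if item.strip())
    return {key.strip(): value.strip() for key, value in pairs}

dmarc = "v=DMARC1; p=none; rua=mailto:dmarc@palisade.email; ruf=mailto:dmarc@palisade.email; fo=1;"
policy = parse_tags(dmarc)
```

The `p` tag is the one to watch: `none` only monitors, `quarantine` sends unaligned mail to spam, and `reject` refuses it outright.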
Google and Yahoo have been enforcing DMARC policies for bulk senders since early 2024, and their requirements have continued to tighten since. If you’re sending any volume of email, DMARC alignment is non-negotiable.
BIMI was adopted by Google, Apple, Yahoo, and most major ESPs (still waiting on Outlook) back in May 2023. It’s now the standard way to verify your identity via email, display your brand logo in the inbox, and get a verified checkmark. You’ll see it rolled out by large brands like LinkedIn and Google.

Here’s an example BIMI record:
v=BIMI1;l=https://images.palisade.email/brand/bimi-logo.svg;a=https://images.palisade.email/brand/certificate.pem
Monitoring your sender reputation is a big part of keeping deliverability high. Sender reputation is like a person’s reputation: it takes time to build and is easy to damage.
There’s no single tool that does it all, but several tools can give you partial visibility into your deliverability health.
One of the best tools available, even if it only monitors your reputation from Google’s perspective, is Google Postmaster.
It gives you three key data points on sender reputation: your user-reported spam rate, your domain reputation, and your IP reputation.
Email deliverability isn’t set-and-forget; it’s ongoing work, but worth it.
Many companies spend significant time A/B testing funnels and producing content but skip the critical step of making sure their emails actually reach the inbox. If your users aren’t seeing your content, what’s the point of investing so much in creating it?
It’s not easy. List management best practices change. Content engagement shifts with trends. DNS compliance evolves. Reputation monitoring is sensitive. After reading this, you should have a better understanding of email deliverability basics (and the difference between “delivered” and “in the inbox”), and know where to focus your attention first.
The most common causes are missing or misconfigured SPF, DKIM, and DMARC records; sending to a dirty list with unengaged or invalid addresses; using content patterns spam filters flag (all caps, misleading subject lines, spam-trigger words); and poor domain or IP reputation. Start with DNS authentication; it’s the most impactful fix and often the most overlooked.
“Delivered” means the recipient’s email server accepted the message. Inbox placement means the message actually made it to the inbox instead of spam, Promotions, or other filtered folders. Your ESP will typically show you delivered rates but not inbox placement. For that you need deliverability tools like Palisade or Google Postmaster.
Yes. Since Google and Yahoo’s 2024 enforcement changes, bulk senders (over 5,000 emails a day to Gmail or Yahoo addresses) that don’t have proper SPF, DKIM, and DMARC alignment see their messages quietly dropped into spam or rejected outright. Even for smaller senders, proper authentication measurably improves inbox placement.
Free tools like Palisade’s Email Deliverability Score, MXToolbox, and Google Postmaster will audit your DNS configuration, flag missing records, and highlight issues with SPF, DKIM, DMARC, and BIMI. Run a check before you send any big campaign.
Early data suggests yes, in the +10% to +39% range for open rates, because a verified logo in the inbox increases trust at a glance. The tradeoff is that BIMI requires a Verified Mark Certificate (VMC) from a certifying authority, which costs several hundred dollars per year and requires a trademarked logo.
Often yes. Proper DNS authentication (SPF, DKIM, DMARC), list cleaning (removing hard bounces and long-inactive subscribers), and consistent sending volume can all improve inbox placement without any content changes at all. If your content is reasonable and your deliverability is bad, the technical setup is usually the culprit.
February 5, 2024
How to set customer service goals that actually stick
Setting customer service goals is more than picking a number and hoping. Here’s how to use SMART goals to build targets your team can actually hit, with examples for departments, managers, and individual agents.
Customer service goals are specific, measurable targets that define what success looks like for a support team. They translate strategic priorities into concrete numbers like first-response time, customer satisfaction (CSAT), ticket deflection rate, or team engagement, so individual agents and managers know what they’re working toward and how they’re being measured.
If you’ve just taken over a new support team, or you’re leading one through a period of change (a reorganization, an acquisition, a leadership transition), one of the hardest parts of the job is setting goals that are actually useful. Especially if your company is trying to become more customer-first, the targets you set will shape what that means in practice.
Too vague, and nobody knows what success looks like. Too ambitious, and the team burns out chasing them. Too safe, and the team coasts. The skill is landing somewhere in between, with goals specific enough to guide daily decisions and realistic enough to feel achievable.
This guide covers what SMART goals look like applied to customer service, how to set them for different levels of the team, and how to adjust when reality doesn’t match the plan.
Running a support team without clear goals is like sailing without a destination: you move a lot, but it’s hard to tell whether you’re getting anywhere. A few specific benefits of real goals:
It connects support to the business. Well-defined goals make it clear how your team is contributing to company outcomes. That helps when budget conversations come around and you need to justify headcount or tools.
It makes customer experience improvements traceable. When you’re targeting a specific CSAT number or response time, you can measure whether the change you made actually moved the needle. Without a target, improvement is vibes.
It makes your team happier. This one gets underestimated. People want to know what “good” looks like. Ambiguity about performance is stressful, and it makes it hard to have meaningful career conversations. Clear goals give individual team members something to work toward and something to be measured fairly against.
You’ve probably seen the SMART framework before. The acronym stands for:
It’s useful because vague goals are the enemy of accountability. “Improve response times” is a wish. “Send a first response to 80% of chat inquiries within 60 seconds by the end of Q2” is a goal.
If you need something to copy and adapt right now:
Let’s walk through each piece in the context of a real support goal.
Get precise. “Answer customers faster” doesn’t tell anyone what channel, what threshold, or what counts as “faster.”
Better: “Send a first response to customers within 60 seconds of their initial chat message.”
Better still: “Send a first response to order-status chat inquiries within 60 seconds during business hours.”
The more specific the goal, the clearer the path.
You need a number you can check against. Our chat goal gets measurable with a percentage: “80% of chat customers will receive a response within 60 seconds.”
Pick metrics you can actually track in your tool. If it takes engineering time to instrument, the goal won’t survive contact with reality.
This is where the assessment work before goal-setting pays off.
“80% in 60 seconds” might be a stretch goal for a small chat team seeing hundreds of conversations a day. Might be easy for a larger team with capacity to spare. Without an honest look at your starting point, you either set goals that demotivate (too hard) or ones that don’t move the team (too easy).
If the realistic starting point is 30% in 90 seconds, a reasonable Q1 goal might be 50% in 90 seconds, and a stretch Q2 goal might be 70%. You’ll get further with escalating goals than with one aspirational target the team gives up on in week three.
The goal has to connect to something bigger. Is it aligned with the customer service values your company operates on? Does it support the company’s strategic priorities?
A chat response time goal makes sense if customer speed is a differentiator. It matters less if your customers are mostly asynchronous and prefer email follow-ups. Matching the goal to the actual priority prevents wasted effort.
Without a deadline, measurement never happens. “By the end of Q2 2026, we’ll be responding to 80% of chat customers within 60 seconds” gives you a specific checkpoint.
Pick deadlines long enough to drive meaningful change but short enough that feedback loops are useful. Quarter-long goals usually work well. Annual goals tend to drift and get revisited only once, too late.
Before you write a goal, spend time understanding where the team actually is. A few questions that help:
Answer those first, and your goals will land somewhere sensible. Skip this step and you’ll end up with goals pulled from industry averages that have nothing to do with your team’s reality. (For teams just starting out, a set of general customer service tips is a fine baseline to work from.)
Customers increasingly want self-service options. A help center with good coverage deflects tickets before they’re ever created.
Example goal: “By end of Q3 2026, launch a help center covering our 15 most frequently asked support questions, with the goal of reducing tickets on those topics by 20%.”
Measure success through help center analytics (view counts, search terms, time on page) and ticket volume trends on the covered topics.
A QA program is one where managers regularly review a sample of agent conversations against a scorecard. It improves consistency, surfaces training opportunities, and gives individual feedback a data foundation.
Example goal: “In Q2 2026, finalize a QA scorecard based on 100 ticket reviews from the previous quarter, and begin monthly calibration sessions with the team in Q3.”
Success is measured by whether the scorecard ships on time and calibration sessions actually happen monthly. Secondary measures include QA score trends once the program is running.
CSAT is a direct customer-voice metric. Moving it is slow work, but improvement shows up in retention and referrals over time.
Example goal: “Maintain an average CSAT of 88% or higher across email and chat each month in 2026, with no month below 85%.”
Collect CSAT through a post-resolution survey. Most modern support tools have this built in. Some teams also use AI rules to route low-scoring surveys to a manager for immediate follow-up.
Engaged support teams stick around longer and do better work. Attrition in support is expensive, both the direct cost of hiring and the indirect cost of losing institutional knowledge.
Example goal: “Hold a monthly 1:1 with each direct report, run one team social event per quarter, and reduce voluntary turnover by 20% year over year.”
Turnover is the measurable outcome. 1:1 cadence and social events are the inputs.
Support managers sit at the clearest vantage point in the company for what customers are actually saying. Translating that into product, engineering, and marketing decisions is a high-leverage part of the job.
Example goal: “Establish a bi-weekly Voice of the Customer meeting with product leadership in Q2 2026, with the goal of influencing at least one product release and one bug fix per quarter based on support insights.”
Measure through meeting cadence and the count of shipped changes attributable to support-surfaced feedback.
Every agent has strengths and growth areas. A good performance system identifies those and builds specific goals around them.
Example goal: “Complete the company’s de-escalation training by end of Q2, and reduce my escalation rate on tier-1 tickets by 15% in Q3.”
Measurable through training completion and escalation-rate data.
The more agents take ownership of their customers’ end-to-end experience, the better the outcomes, for customers and for the agents’ own growth.
Example goal: “Respond to every CSAT rating I receive (positive and negative) within 24 hours for the next quarter, using the responses to identify at least three improvement areas by end of Q2.”
Ownership isn’t always numerical, but activity-based goals like this work well as a way to build habits that compound over time.
The real trap with goal-setting isn’t picking the wrong goals. It’s picking goals once, putting them in a document, and never looking at them again.
Goals need a rhythm:
The teams that run this cycle consistently tend to outperform the teams that treat goal-setting as an annual planning ritual. It’s not magic. It’s just doing the work.
SMART goals are useful because they force specificity. They’re not a replacement for thinking.
If a goal starts pushing the team toward behavior that hurts customers (agents closing tickets too fast to hit handle-time targets, for example), the goal is the problem, not the team. Rewrite it. Goals should serve outcomes, not the other way around.
The teams that do this well treat goals as hypotheses: “we think hitting this number will lead to this outcome.” When the number moves but the outcome doesn’t, they change the goal instead of doubling down.
Missive is a collaborative email client for teams that care about customer experience. Shared inboxes, assignments, internal chat, and rules that work across email, SMS, WhatsApp, and live chat. Free for up to 3 users; give it a try.
February 5, 2024
The 9 customer satisfaction metrics every team should be tracking
You can’t improve what you don’t measure. Here are the nine customer satisfaction metrics that actually tell you something useful — how to calculate each, when to use them, and what to do with the numbers.
Ah, unhappy customers. The not-so-silent killer of business.
Your team can deliver, innovate, and grow. But if customers aren’t happy, none of that matters for long. And you can’t fix what you don’t measure.
So how do you actually measure customer happiness?
With customer satisfaction metrics. There are dozens of them, which is both good and bad — good because you can pick the ones that fit your business, bad because it’s easy to get lost in a sea of acronyms. This guide covers the nine that actually matter, when to use each, and how to interpret what the numbers are telling you.
Customer satisfaction metrics are the numbers companies use to understand how happy customers are with their product, service, and overall experience. They give you a feedback loop — without one, you’re just guessing about what customers think.
Some metrics shed light on specific interactions (a support conversation, a product onboarding). Others capture the bigger picture (overall loyalty, retention, revenue impact). The best teams use a mix: one or two moment-to-moment metrics, one or two relationship-level metrics, and one financial metric.
Let’s dig in.
NPS is a customer satisfaction metric that gauges loyalty based on one question: how likely is a customer to recommend you?
If someone will enthusiastically tell their friends about your product, it’s a strong sign they’re happy with what you’ve built.
NPS is based on a single survey question:
How likely would you be to recommend X to a friend or colleague?
Respondents rate on a scale from 0 to 10. Based on their rating, they fall into one of three groups:
Calculate the percentage of promoters and the percentage of detractors from your total responses. Then subtract detractors from promoters:
NPS = % of promoters − % of detractors
If you got 100 responses — 50 promoters, 30 passives, 20 detractors — your NPS is 30 (50% promoters minus 20% detractors).
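The arithmetic above is easy to script. Here’s a small Python sketch, assuming the standard score buckets (9–10 promoter, 7–8 passive, 0–6 detractor):

```python
def nps(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 50 promoters, 30 passives, 20 detractors -> NPS of 30
scores = [10] * 50 + [7] * 30 + [3] * 20
```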
NPS ranges from −100 (everyone’s a detractor) to 100 (everyone’s a promoter), but real-world scores sit in the middle. Anything above 0 means you have more promoters than detractors. According to recent benchmarking research, the overall NPS benchmark sits around 32. But the bigger signal is the trend — is your NPS rising or falling over time?
NPS works as a KPI for overall customer satisfaction. In practice:
Product and marketing teams often use NPS as a headline KPI. Customer success teams use it as an input for churn prediction and customer health scoring.
The good: NPS gives you a single number that summarizes overall loyalty, easy to track over time.
The bad: “Likelihood to recommend” doesn’t always correlate with actual behavior. It also doesn’t tell you why — you need to pair it with open-text feedback to act on it.
CSAT measures how happy customers are with a specific interaction — a support conversation, a sales demo, a feature they just used. It’s a snapshot, not a long-term relationship metric.
The survey usually asks something like:
How satisfied were you with your recent experience?
Customers answer on a scale (1–5 is most common), and you measure the percentage who gave a positive rating.
Count the number of “satisfied” responses (usually 4s and 5s on a 5-point scale) and divide by total responses:
CSAT = (satisfied ratings / total ratings) × 100%
If 100 people respond and 80 of them rate 4 or 5, your CSAT is 80%.
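In code, the same calculation looks like this (a Python sketch assuming a 5-point scale where 4s and 5s count as satisfied):

```python
def csat(ratings: list[int]) -> float:
    """CSAT = % of ratings that are 'satisfied' (4 or 5 on a 5-point scale)."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

# 100 responses, 80 of them rating 4 or 5 -> CSAT of 80%
ratings = [5] * 45 + [4] * 35 + [3] * 12 + [2] * 5 + [1] * 3
```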
CSAT ranges from 0% to 100%. Under 50% is a red flag — more people are leaving unhappy than satisfied. In competitive industries like SaaS and e-commerce, the benchmark is around 80%. A 95% CSAT is realistic for a high-performing team.
You should also see a 5–20% response rate on CSAT surveys. If fewer people are responding, rethink your survey timing, messaging, or channel.
CSAT works best as a follow-up after specific touchpoints:
Many teams use CSAT as a KPI for individual agents and entire customer-facing teams. Don’t send CSAT after every single interaction — it gets annoying, response rates crater, and the data becomes unreliable.
The good: captures satisfaction at specific moments, actionable when you tie it to who handled the interaction.
The bad: “Satisfaction” is subjective. Cultural differences affect what a 4 vs. a 5 means. Response rates vary, so results don’t always reflect your full customer base.
CES measures how easy it is for customers to do something — get their question answered, complete a task, find what they need. It’s based on research showing that reducing effort is a better predictor of loyalty than trying to “delight” customers.
Instead of asking about satisfaction, the survey asks:
How easy was it to [resolve your issue / find what you needed / complete your task]?
CES uses a 7-point scale. Divide the number of responses rating 5, 6, or 7 (easy) by total responses:
CES = (ratings of 5, 6, 7 / total ratings) × 100%
If 100 people respond and 60 rate 5 or higher, your CES is 60%.
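The CES calculation has the same shape, just on a 7-point scale. A quick Python sketch:

```python
def ces(ratings: list[int]) -> float:
    """CES = % of ratings that count as 'easy' (5, 6, or 7 on a 7-point scale)."""
    easy = sum(1 for r in ratings if r >= 5)
    return 100 * easy / len(ratings)

# 100 responses, 60 rating 5 or higher -> CES of 60%
ratings = [7] * 25 + [6] * 20 + [5] * 15 + [4] * 25 + [2] * 15
```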
CES ranges from 0% to 100%; higher is better. Because it’s a relatively new metric (Gartner introduced it in 2010), benchmarking is less mature than NPS or CSAT. What matters more is your own trend over time.
CES works well after any interaction where “ease” is the thing you want to optimize:
Timing matters — send the survey immediately, while the experience is fresh.
The good: highly actionable. If CES is low, you know exactly where the friction is.
The bad: can be misleading without context. A low CES might mean your product is genuinely hard to use, or it might mean you serve technical users working on complex problems.
Churn is the rate at which you lose customers. It’s the ultimate satisfaction metric — if customers are canceling, they’re telling you something, even if they never filled out a survey.
Divide the number of customers lost in a period by the number you had at the start:
Churn rate = (customers lost / customers at start of period) × 100%
If you start the month with 100 customers and lose 20, your monthly churn is 20%.
Lower is better. Ideally it should be below your growth rate, and under 7% annually for most subscription businesses. Much higher than that, and you’re filling a leaky bucket — every new customer you add is offset by one walking out the door.
Churn rate is critical for any subscription or recurring-revenue business. It’s also a lagging indicator — by the time you see it rise, customers are already gone. So pair it with leading indicators (CSAT, NPS, support ticket volume) that can warn you before churn happens.
The good: directly ties customer satisfaction to business outcomes.
The bad: not actionable on its own. A high churn rate tells you there’s a problem, but not what the problem is.
Retention is the flip side of churn — instead of measuring who leaves, you measure who stays. For some teams, framing it this way is more motivating and maps more cleanly to customer success work.
CRR = ((Customers at end of period − New customers acquired) / Customers at start of period) × 100%
If you start with 100 customers, gain 30 new ones, and end with 110, your retention rate is (110 − 30) / 100 = 80%.
Retention rate works especially well for customer success teams and account management teams, where the goal is keeping existing customers happy, expanding accounts, and preventing churn. It also surfaces issues earlier than churn rate alone, because it accounts for the fact that new customer acquisition can mask retention problems.
The good: a positive framing that ties to customer success activity.
The bad: like churn, it’s a lagging indicator. You need leading indicators to act before it moves.
CLV estimates the total revenue you’ll earn from a customer across the entire relationship. It’s not a direct satisfaction metric, but it tells you whether your satisfaction efforts are paying off — happy customers stay longer and buy more.
The simple version:
CLV = Average purchase value × Purchase frequency × Customer lifespan
For a SaaS business, it’s often calculated as average revenue per customer divided by churn rate. If your average customer pays $100/month and your monthly churn is 5%, CLV is roughly $2,000.
CLV is most useful for decision-making — how much can you afford to spend acquiring a customer? How much should you invest in customer success? It also helps identify which customer segments are most valuable, so you can focus retention efforts where they’ll have the biggest impact.
The good: connects satisfaction work directly to revenue. Easy to justify budgets.
The bad: calculation gets complex for businesses with varied purchase patterns. Also backward-looking — doesn’t tell you what customers think right now.
FRT is how long it takes your team to send the first reply to a customer inquiry. It’s not satisfaction itself, but it correlates strongly with it — customers who wait hours or days for a first response are already unhappy by the time you write back.
Measure the time between when a customer contacts you and when your team sends the first human response. Average it across a time period (daily, weekly, monthly).
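A minimal sketch of that measurement, assuming you have timestamp pairs exported from your support tool (data shape and function name are our own):

```python
from datetime import datetime

def avg_first_response_minutes(tickets):
    """tickets: list of (received_at, first_reply_at) datetime pairs.
    Returns the average wait before the first human reply, in minutes."""
    waits = [(reply - received).total_seconds() / 60
             for received, reply in tickets]
    return sum(waits) / len(waits)

tickets = [
    (datetime(2024, 1, 5, 9, 0),  datetime(2024, 1, 5, 9, 30)),   # 30 min
    (datetime(2024, 1, 5, 10, 0), datetime(2024, 1, 5, 11, 30)),  # 90 min
]
print(avg_first_response_minutes(tickets))  # 60.0
```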
Benchmarks vary by channel:
FRT is a headline metric for customer support teams. It’s also a useful operational metric — if it’s getting worse, you need more staff, better automation, or better routing.
The good: easy to measure, directly actionable. If FRT is high, you know what to fix.
The bad: fast responses don’t equal good responses. A two-minute reply that misses the question is worse than a ten-minute reply that solves it.
Resolution time is how long it takes to fully resolve a customer’s issue — from the first message to the last. It captures the full experience, not just the first response.
Measure the time between when a conversation starts and when it’s marked closed/resolved. Average it across conversations.
Varies wildly by issue type. Simple billing questions should resolve in minutes. Complex technical issues might take days. Track by category — a long average resolution time for password resets means something different than a long average for bug reports.
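Tracking by category is a one-step grouping. A sketch under the same assumptions as above (our own helper, not a support tool’s API):

```python
from collections import defaultdict
from datetime import datetime

def avg_resolution_hours_by_category(conversations):
    """conversations: list of (category, opened_at, closed_at) tuples.
    Returns average resolution time in hours, per category."""
    buckets = defaultdict(list)
    for category, opened, closed in conversations:
        buckets[category].append((closed - opened).total_seconds() / 3600)
    return {cat: sum(hours) / len(hours) for cat, hours in buckets.items()}

conversations = [
    ("billing", datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 10, 0)),
    ("billing", datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 8, 12, 0)),
    ("bug report", datetime(2024, 1, 8, 9, 0), datetime(2024, 1, 10, 9, 0)),
]
print(avg_resolution_hours_by_category(conversations))
# {'billing': 2.0, 'bug report': 48.0}
```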
Resolution time is useful for identifying systemic issues. Is one category of problem taking dramatically longer than others? Is a particular agent’s resolution time an outlier (either good or bad)? Does resolution time correlate with CSAT scores?
The good: captures the full experience, not just the opening exchange.
The bad: can push teams toward rushed closures. Make sure you’re not optimizing for speed at the expense of actually solving the problem.
Customer health score combines multiple signals into a single indicator of how a customer is doing overall. It’s less a metric and more a framework — you pick inputs that predict churn or expansion for your business and combine them into a score.
Every team calculates it differently. Common inputs:
Weight them by how predictive they are of your actual outcomes (churn, expansion), then combine into a 0–100 score or a red/yellow/green indicator.
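One way to sketch that combination in code. The signal names and weights here are invented examples; your own inputs and weighting should come from what actually predicts churn for your business:

```python
def health_score(signals, weights):
    """Combine normalized signals (each 0-1) into a 0-100 score.
    Assumes weights sum to 1."""
    score = sum(signals[name] * weights[name] for name in weights)
    return round(score * 100)

# Hypothetical account: strong usage, middling support sentiment, paid up
signals = {"product_usage": 0.8, "support_sentiment": 0.6, "invoice_health": 1.0}
weights = {"product_usage": 0.5, "support_sentiment": 0.3, "invoice_health": 0.2}
print(health_score(signals, weights))  # 78
```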
Health scores are most useful for customer success teams managing a portfolio of accounts. They help prioritize — which accounts need attention this week? Which ones are ripe for expansion?
The good: early warning system. Health scores catch problems before they show up in churn.
The bad: garbage in, garbage out. If your inputs aren’t actually predictive, the score is just a number. Requires ongoing calibration.
Metrics are great for spotting trends and setting KPIs. But a number on a dashboard never tells you the whole story.
The teams that really understand their customers pair metrics with open-ended feedback. Adding a free-text field to your CSAT survey, for example, often reveals that low ratings have nothing to do with your support team — they’re about a specific product issue that’s easy to fix. Without the text field, you’d be troubleshooting the wrong problem.
The opposite happens too. Customers might rate individual interactions highly while being broadly dissatisfied with your product. Or they might churn without ever leaving a negative review, because they were “too polite” to complain.
So treat metrics as a starting point. The number tells you that something is happening. Conversations with customers tell you why. The best customer satisfaction programs use both.
Missive is a collaborative email client that helps teams handle customer support, gather feedback, and automate follow-ups. If you’re trying to close the loop between metrics and actual customer conversations, Missive might be worth a look. Try it free.
January 31, 2024
Customer experience optimization: a practical guide for growing teams
Customer experience optimization isn’t a one-time project. It’s the ongoing work of making every touchpoint a little better. Here’s how to actually do it — no jargon, no fluff, just what works.
“Customer experience” has become one of those phrases that means everything and nothing. Every company claims to care about it. Every vendor promises to help you improve it. Everyone agrees it matters.
But if you ask ten people at the same company what “customer experience optimization” actually involves, you’ll get ten different answers.
This guide tries to fix that. It covers what customer experience optimization actually is, why it matters for real business reasons, the practical pillars that make it work, and the seven things you can do this quarter to start improving. No abstract frameworks. No buzzwords. Just what works for teams that are trying to get this right.
Customer experience optimization (CXO) is the ongoing process of improving every touchpoint a customer has with your company — from the first ad they see to the last support conversation they have.
The word “optimization” matters here. You’re not building customer experience from scratch each time. You already have one, whether you designed it or not. Every email you send, every signup flow, every support reply, every outage post-mortem — they all shape how customers perceive you.
The question is whether that perception is happening by design or by accident.
In today’s market, there isn’t much room to get this wrong. Research from Adobe shows that 86% of consumers are willing to pay more for a better experience. A study from Qualtrics found that 80% of customers have switched brands after a bad experience. The companies that get experience right pull ahead; the ones that don’t get it right lose customers to competitors who do.
Unlike a one-time project, customer experience optimization is continuous. Customer expectations change. Your product changes. The tools and channels you use change. What counted as “great” three years ago is table stakes now. The teams that do this well treat it as ongoing work, not a quarterly initiative.
Most good customer experience work breaks down into four categories. You don’t have to master all of them at once, but you do have to be deliberate about each.
67% of customers want a personalized experience, according to Adobe’s research. The one-size-fits-all approach stopped being competitive a while ago.
Personalization isn’t just about using someone’s name in an email. It’s about recognizing that different customers have different jobs to do with your product, and designing around that. A first-time user exploring what your tool does needs different onboarding than a power user coming back to recover their account. A small business has different support needs than an enterprise customer.
The practical version of personalization looks like this: your support team sees a customer’s history when they reply. Your onboarding adapts to what the customer said they wanted to do. Your emails reference their actual use of the product, not just their signup date.
We’ve all had this experience: you call support, get transferred, and have to explain your problem from scratch to the next agent. Or you read a help article that contradicts what the salesperson told you last week.
Consistency is hard. You’ve got multiple people replying to emails, multiple writers producing content, multiple channels your customers reach out through. Getting all of them to sound like the same company is a real coordination problem — but it’s the difference between feeling like a cohesive brand and feeling like a random collection of departments.
The teams that do this well invest in shared context — shared style guides, shared canned responses, shared customer history, shared internal documentation. When everyone can see the same picture of the customer, consistency is easier to maintain.
A Forrester study found that 77% of consumers say valuing their time is the single most important thing companies can do to deliver a great experience. Speed matters. Waiting four days for a reply to a simple question doesn’t feel like good customer experience, regardless of how thoughtful the reply eventually is.
Practical responsiveness looks like: a first response within minutes on live chat, within hours on email. A clear path to reach a human when AI or self-service can’t help. Automated confirmations that set expectations, even when a real reply takes longer.
AI has shifted what’s possible here. Tools that can draft first responses, categorize incoming messages, and route to the right person mean you can respond faster without adding staff. But AI that gives the wrong answer fast is worse than a slower human who gets it right — so the goal is speed with accuracy, not speed at all costs.
Accessibility is about making it easy for customers to get what they need. This includes:
Removing friction is often the most impactful CXO activity for growing teams. The smallest annoyances compound into real dissatisfaction.
Good customer experience isn’t just a feel-good initiative. It shows up in the numbers:
Revenue. Deloitte found that customers who have positive experiences spend 140% more than those who have negative ones. Same product, different experience, nearly 2.5x the revenue per customer.
Retention. Happy customers stay longer. In subscription businesses, even a small reduction in churn has a compounding effect on revenue — a 5% improvement in retention can translate to 25–95% more profit over the customer’s lifetime.
Referrals. Every customer tells their friends something about you, one way or another. Make sure it’s something you want repeated.
Resilience. When something goes wrong (and it will), customers with strong experience loyalty give you the benefit of the doubt. Customers who already felt mistreated use the incident as an excuse to leave.
The business case for CXO isn’t soft. It’s some of the hardest-dollar value work your team can do.
Customer experience optimization sounds straightforward on paper. In practice, three things consistently make it hard.
You can’t improve what you can’t measure. But most teams discover that their customer data is scattered across five tools that don’t talk to each other — support tickets in one place, product usage in another, survey responses in a third, revenue data in a fourth.
The fix usually isn’t buying more tools. It’s getting the tools you have to share information. That might mean building a dashboard in Looker or Tableau that pulls from multiple sources. It might mean having your support team log context into your CRM. Or it might mean a weekly meeting where different teams share what they’re seeing in their corner of the customer experience.
Start with the data you actually have. You don’t need perfect data to start spotting trends.
CXO isn’t a project that ends. It’s ongoing work that competes for attention against everything else on your team’s plate. The teams that sustain it do two things: they assign clear ownership (someone whose job includes CXO, not an initiative that’s everyone’s and therefore no one’s), and they build it into regular rhythms (monthly reviews, quarterly goals, standing meetings).
Without those structures, CXO becomes something everyone agrees is important and nobody gets around to.
Tool selection matters more than it seems. The wrong tool creates years of friction. The right tool fades into the background and lets your team focus on the work.
When evaluating CXO tools, involve the teams who’ll actually use them. Your support lead knows what features matter for a shared inbox. Your data lead knows whether the analytics export is usable. Your customer success lead knows whether the CRM integration actually works.
Here’s what to actually do, in the order that usually works best.
Start by drawing out every touchpoint a customer has with your company — from the first ad or blog post they see, through signup, onboarding, regular use, support interactions, renewal, and churn. Include the emotions they likely feel at each stage.
This sounds basic, but most teams have never actually done it. The exercise alone surfaces gaps. “Wait, what happens between day 3 and day 10 of the trial? Nothing?”
Quantitative data tells you what is happening — response times, survey scores, churn rates, feature usage. Qualitative data tells you why — verbatim customer feedback, recorded support calls, user interviews, open-text survey responses.
You need both. The numbers tell you where to look. The words tell you what to do about it.
The Jobs to Be Done framework is a useful lens. Every customer bought your product to do a specific job — make their team more productive, save time on a recurring task, solve a specific problem. When you understand the job, you can design experiences around helping them get it done faster.
Useful questions to ask real customers:
Don’t try to personalize everything. Pick the two or three touchpoints where personalization has the biggest impact:
Small, well-targeted personalization beats broad, shallow personalization every time.
Every web page about your company is a touchpoint. Your help center. Your pricing page. Random blog posts from 2019. Community forum threads. If any of it is outdated or wrong, it’s shaping customer perception.
Pick the top 10 pages by traffic and read them honestly. Update what’s wrong. Archive what’s obsolete. Make sure what’s live reflects your current product, positioning, and policies.
You don’t need permission to run experiments. Try two different onboarding email sequences and see which gets better activation. Try two versions of your out-of-office autoresponder and see which gets fewer angry follow-ups. A/B test your help center search.
The goal isn’t statistical significance. The goal is building a habit of trying things, measuring them, and learning.
Consistency isn’t a feature you add. It’s a cultural norm you build and maintain.
Practical moves:
The best customer experience optimization happens when your team isn’t fighting their tools.
Missive is an email client built for team collaboration. When customer messages — email, SMS, WhatsApp, live chat — all land in one inbox with shared context, a few CXO things get easier:
Consistency becomes the default. Shared canned responses, shared labels, shared history. Anyone can pick up any conversation and continue it like they’ve been handling it all along.
Personalization is trivial. The customer’s full conversation history is right there. The last support issue, the last sales exchange, the internal notes from customer success — all visible to whoever’s replying.
Response time drops. Team inboxes, assignments, and rules get messages to the right person fast. AI-drafted replies for common questions cut response time without sacrificing accuracy.
Cross-channel visibility improves. A customer’s email from last week and their chat from today live in the same conversation. No more asking “did they mention this somewhere else?”
None of this is a replacement for CXO strategy. But good tools remove the friction between strategy and execution.
Ten years ago, “customer-centric” was a differentiator. Today, every company claims to be customer-centric. The ones that mean it are the ones investing in real, ongoing optimization work — mapping journeys, collecting both types of data, personalizing smartly, staying consistent, moving fast.
The companies that don’t do this work eventually lose customers to the ones that do. Not all at once, but steadily — one difficult interaction at a time, one friction point at a time, one missed opportunity at a time.
The good news is you don’t have to do everything today. Pick one or two pillars. Pick two or three tactics from this guide. Make them part of how your team works. Then pick the next ones.
Missive is a collaborative email client that helps teams deliver consistent, personalized customer experiences. Shared inboxes, internal chat, assignments, and AI-powered automation — all in one place. Try it free.
January 30, 2024
What does CC mean in email? CC and BCC explained
CC sends an email copy to recipients who all see each other. BCC hides them. Here’s when to use each, plus team alternatives.
CC (carbon copy) sends a copy of an email to additional recipients who can all see each other. BCC (blind carbon copy) does the same, but hides those recipients from everyone else on the thread. Use CC when you want to keep someone in the loop on a conversation they don’t need to respond to. Use BCC when you want to include a recipient privately, or email a large group without exposing everyone’s address to each other.
Both fields were borrowed from paper-era business letters. “CC” originally referred to a literal carbon copy made on a typewriter. The mechanics have changed, but the etiquette hasn’t: “To” is for action, “CC” is for visibility, “BCC” is for discretion.
CC stands for carbon copy. It’s a field in the email header that lets you send a copy of an email to additional recipients. When someone is CC’d, they can see the full email thread and every other recipient, including other CC’d people.
It’s a common way to keep people informed about a conversation without making them the primary audience. For example, a sales rep at a marketing agency might email a prospect and CC their manager. The manager sees every reply, but isn’t expected to weigh in. They’re there for context.
The technical difference between “To” and “CC” is almost nothing. The difference is convention: “To” is for the people the email is addressed to, CC is for people you want looped in.
BCC stands for blind carbon copy. The mechanics are the same as CC, with one change: BCC’d recipients are invisible to everyone else on the thread. Nobody in the To or CC fields knows a BCC recipient exists.
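That invisibility isn’t magic; it’s just how email is sent. BCC addresses go into the delivery envelope but are never written into the message headers. A sketch using Python’s standard `email` library (the addresses are made up for illustration):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "me@example.com"       # hypothetical addresses throughout
msg["To"] = "client@example.com"
msg["Cc"] = "coworker@example.com"
msg.set_content("Quote attached.")

# No "Bcc" header is ever added. At send time, the hidden recipient is
# simply appended to the envelope recipient list, e.g.:
#   smtp.send_message(msg, to_addrs=visible + ["manager@example.com"])
visible = [msg["To"], msg["Cc"]]
envelope = visible + ["manager@example.com"]  # the BCC'd recipient

# What everyone else receives contains no trace of the BCC address:
print("manager@example.com" in msg.as_string())  # False
```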
This makes BCC useful in two situations:
BCC should be used carefully. Using it to quietly include a third party in a private conversation can erode trust if it comes out later, and in some industries (legal, regulated communications) it raises compliance questions.
The main difference is visibility. CC recipients are visible to everyone; BCC recipients are hidden.
| | CC (carbon copy) | BCC (blind carbon copy) |
|---|---|---|
| Visibility | All recipients can see each other | BCC recipients are hidden from all other recipients |
| Recipient awareness | Everyone knows who else is on the email | Only you and the BCC’d person know they’re included |
| Purpose | Keep additional people informed transparently | Include someone without the other recipients’ knowledge |
| Reply-all behavior | CC’d people receive replies when someone hits Reply All | BCC’d people do not receive replies when someone hits Reply All |
| Best for | Transparency, collaboration, FYI loops | Privacy, mass emails, discreet oversight |
| Risks | Inbox overload, reply-all chaos | Trust and compliance concerns if discovered |
CC is the right tool when you want someone to see the conversation, but you don’t need anything from them. Three common cases:
To share context without demanding a response. If you’re emailing a vendor about a billing issue and your coworker in finance should know it’s happening, CC them. They don’t have to reply; they just have a record.
To introduce a new person to an existing thread. When looping someone new into an ongoing conversation, CC’ing them is standard. They see the history and can jump in if they want.
To build a paper trail inside your company. CC’ing a project lead or manager on client communications keeps them informed and creates a reference for later. This is common at accounting firms like KPMG or law firms where project leads want visibility into junior staff’s client-facing emails without taking over.
BCC has fewer legitimate use cases than CC, and most of them come down to privacy.
Emailing a large group where recipients shouldn’t see each other. Event invites, newsletter blasts, or announcements to a client list. Putting everyone in BCC (and yourself in To) protects privacy and prevents reply-all disasters.
Sending a copy to yourself at another address. If you want a copy of an outgoing email in a separate personal or archive inbox without the recipient seeing it, BCC is the cleanest option.
Quietly looping in a supervisor. Use this one carefully. Occasionally a manager needs visibility on a conversation for oversight reasons (HR, compliance, escalation tracking). BCC keeps them informed without changing the dynamics of the conversation.
There are three situations where reaching for CC or BCC is a mistake.
When you need a reply or action. CC’d recipients usually assume they don’t need to respond. If you actually need input from someone, put them in the To field. Anything else invites confusion and missed responses.
When you don’t have consent. If the thread contains sensitive information, adding new recipients without checking first is a fast way to lose trust. When in doubt, ask the original sender before looping anyone in.
When you’re CC’ing the same people over and over. If you find yourself CC’ing the same three coworkers on every customer email just so they have visibility, CC is the wrong tool. You’re trying to do shared work through a tool built for point-to-point communication, and everyone’s inboxes are paying for it. More on the alternative below.
CC works fine for one-off visibility. It falls apart when “keeping the team in the loop” is a constant, not an exception.
Common symptoms that CC has outgrown its usefulness:
The underlying problem is that CC was designed for individual senders. When a team shares responsibility for an inbox (support@, sales@, info@, or a partner address for a small firm), you need a tool built around shared work, not one that mimics it with CC.
The option most teams settle on is a shared inbox.
In a shared inbox, every member of a team sees the same conversations automatically. Nobody has to CC anyone because everyone already has access. Instead of replying-all to coordinate a response, you discuss the thread internally, where the discussion stays attached to the email itself.
Missive is a collaborative email client built around this pattern. It works like a regular email client for your personal inbox, and then layers shared conversations, internal chat, and assignments on top for the addresses your team handles together.
What that looks like in practice:
The result is that CC goes back to being what it was originally for: an occasional FYI to someone outside the immediate conversation. Day-to-day team communication stops routing through the CC field entirely.
CC (carbon copy) lets you send a copy of an email to additional recipients who don’t need to take action, but benefit from seeing the conversation. It’s a way to keep people informed, create a paper trail inside a company, or loop in someone new without making them the primary audience.
CC and BCC both send a copy of an email to additional recipients. The difference is visibility. If you CC your manager on an email to a client, the client can see your manager is on the thread. If you BCC your manager on the same email, the client has no idea your manager is included.
Example: You email a vendor asking for an updated quote. You CC your procurement lead so they can follow along (the vendor sees them). You BCC your manager so they have a record for budget approvals (the vendor has no idea).
Avoid CC when you actually need a response or action; CC’d people usually assume they don’t have to reply. Also avoid CC’ing the same coworkers repeatedly on routine team communications. That pattern is a sign you need a shared inbox, not more CC chains.
Use CC when transparency matters and you want all recipients to see each other. Use BCC when privacy matters, you’re emailing a large group of people who shouldn’t see each other’s addresses, or you need to discreetly loop someone in.
Their reply goes to the sender and every visible recipient (everyone in To and CC). The BCC’d person’s address still doesn’t appear in the message headers, but the content of the reply can give them away. If they reference something only the original email contained, other recipients will realize someone was quietly copied.
Usually no. When you receive a BCC’d email, it arrives like any other message, but your address doesn’t appear in the To or CC fields. Some email clients show a small note like “bcc: you” in the headers, but only you can see it, not the other recipients. If you don’t see your address anywhere in the visible fields but you still got the email, you were BCC’d.
It depends on context. CC’ing a manager to keep them informed on routine updates is normal at most companies. CC’ing a manager specifically to escalate or pressure someone into responding (sometimes called “CC’ing the boss”) is usually seen as passive-aggressive. If the goal is accountability, a direct conversation is almost always a better move.
Not directly, but very large recipient lists in CC or BCC can trigger spam filters or get flagged by your email provider. For sending to big groups (hundreds of recipients), a proper email marketing tool or mailing list is a better option than stuffing addresses into BCC. It also gives you unsubscribe handling and bounce tracking, both of which matter for staying out of spam folders.
Missive is a collaborative email client built for teams that have outgrown CC. Connect your team’s shared addresses, discuss conversations internally, and handle email, SMS, WhatsApp, and more from one place. Try Missive free.
December 22, 2023
5 examples of bad customer service (and how to fix them)
Bad customer service costs companies customers and trust. Here are 5 clear examples of poor customer service, why they happen, and how to turn them around.
Not long ago, I came across a company whose support team was drowning in tickets.
Their solution to handling the overwhelming volume of customer requests was… particular.
All incoming tickets received outside business hours were automatically closed, with an auto-reply asking the customer to contact the support team again during business hours.
That’s a sure way to a negative experience for your customers, and it reflects horribly on your brand.
Bad customer service is still far more common than it should be, which got us asking: what are some examples of horrible customer service?
Sometimes it’s easiest to learn about what your customer service team should do by looking at times when other teams made the wrong call. Negative examples, if you will.
So if you’re curious to learn how your business can be customer-centric and consistently deliver excellent customer service, read on for examples of terrible customer service interactions, and tips on how to turn them around.
Bad customer service is a support interaction that doesn’t meet a customer’s expectations. Excessive delays in responding to an inquiry, rude or unhelpful behavior from customer service representatives, mishandling customer complaints, and not fully resolving a problem are all examples of inadequate customer support.
That’s a subjective definition, and there’s no way around that. Whether an interaction with a customer service rep is good or bad depends on what a customer expects.
On the other hand, some customer interactions are just flat-out bad. Take obscenely long hold times or rude agents, for example.
These things are bad for business, but they happen all the time.
And that’s despite the considerable impact that customer service has on business. 68% of customers will willingly pay more for products from brands known to offer a great customer experience. Great experiences increase revenue, boost retention, and improve customer satisfaction.
Or look at it the other way: 65% of customers have switched to a different brand after a bad experience. Bad customer support increases churn and hurts your bottom line.
That’s why you need a sound customer service strategy: because in today’s competitive landscape, your company can’t afford poor customer service.
Below, five common examples of poor customer service along with tips on how to make them better:
In an ideal world, customers would ask for exactly what they need in terms your support agents can understand.
That’s not what usually happens in a real interaction.
Customers describe situations based on their own understanding. They share the symptoms as they see them, and your support team has to play the role of a doctor identifying the root cause of their pain.
That’s why learning to ask good questions and read between the lines are key customer service skills.
Here’s an example from a recent support ticket at a bank:
A worried customer contacted her bank’s customer service department. Her card purchases were being declined, despite having a positive balance in her account. She feared her money was blocked or, worse, lost.
In response, the customer service rep shared a knowledge base article about existing limits on the number of card transactions. The article wasn’t exactly wrong (she had exceeded the number of transactions), but the agent completely missed the real pain point. The source of the customer’s concern was whether she’d lost access to her money, and some reassurance would have transformed the interaction.
Train agents to use critical thinking and ask great questions. That’s how they’ll pick up on what customers need, even when they don’t say it directly. In the interaction above, the bank employee should have addressed the primary concern, reassured her that the money wasn’t blocked, and informed her when the transaction limit would reset.
Other tactical tips to improve in this area:
Bruce Lee famously encouraged his students to “Be water, my friend.” He recognized the importance of adapting based on the situation at hand.
Sure, policies and guidelines are there to be followed. They’re crucial in keeping departments on the same page and making operations run smoothly.
But a strict or inflexible process can also be harmful.
Let’s say one of your biggest customers contacts you because they need to make a return, but they happened to miss the deadline by a week. They’ve spent a lot of money with your brand, and they also happen to be an influencer in your industry.
But their call gets routed to a new support rep, who opts to follow the return policy by the book, explaining that the customer is ineligible for a refund. That puts the customer in an awkward spot: they can push for an exception, share the bad experience publicly, or suffer in silence.
A knowledgeable agent would recognize that keeping this particular customer happy is more important than following the standard process.
Empower your frontline staff. Knowledgeable customer service reps can recognize outdated processes that no longer serve the business. They can also identify situations that are the exception to the rule.
Other ideas:
Frontline staff should never demean customers or display brash or sarcastic attitudes. The same goes for showing apathy or simply displaying no interest in solving a customer’s issues.
Unfortunately, it happens.
A full 73% of customers surveyed by chatbot and AI solution provider Netomi reported being on the receiving end of rudeness from a customer service agent.
This actually happened to me personally. My wife ordered a new area rug online. It ended up being the wrong size, so she initiated a return. The rug was so large that it needed to be picked up by a third-party logistics service, and she waited two weeks to hear from them.
Silence.
After calling the logistics service, she was told there was no record of her request. She tried again, and after several more days of silence, she called back the company she’d purchased from.
The customer service rep gave her the runaround, ultimately telling her it was her fault the return had stalled because she had waited too long, even though their system had failed to notify the logistics service of the request.
The moment a customer takes the time to contact your support team, they’re already frustrated. Opening the conversation with empathy and communicating a willingness to resolve their problem goes a long way.
To help with this:
Long wait times are a classic example of subpar customer support. They’re a great way to create frustrated customers and build a negative brand reputation.
If you’re curious how it plays out, there are entire Reddit threads about how long consumers have waited on hold.
You’ll read about a customer trying to cancel their phone company and waiting 85 minutes on the phone. Or the 42 minutes it took to book a doctor’s appointment.
That’s about 84 minutes and 41 minutes longer than customers should be waiting.
An excessive response time is only made worse by having to repeat yourself across multiple agents. In a recent survey, almost two-thirds of US adults said valuing their time is the most important thing a brand can do to provide a good customer service experience.
Reduce your hold times and respond faster. The right approach depends on your situation. A few ideas:
Is anything worse than struggling to reach a business when you need help?
Comcast/Xfinity is infamous for this, as Reddit threads like this show. Here’s a snippet from one user:
“I asked to cancel (which took 4 tries as it ‘accidentally’ kept hanging up on me in the process..) and I said the same to them. They offered $75 at first and I said no. They then offered $45. I thought about it, but they said that’s only for a year then it’s back to the ‘regular rate’ I told them to cancel it then. Had my fiancée sign up immediately after that, and now we are locked in at $30 for two years.”
Is it possible that the phone system hung up on them four times? Technically, yes. But it’s highly unlikely.
Whether it’s unhelpful support agents, a chatbot that gets users caught in a loop, or burying your contact form deep in your help center, situations like these are incredibly frustrating. While offering good self-service is a critical part of a modern customer service strategy, always make it easy for users to get human help when they need it.
Whatever communication channels your support team offers, make them easy to find and access. Customers contact you when they have problems; don't create additional problems by making it hard to reach your team.
Tactical tips:
We’ve seen examples of inadequate customer support and how to improve it. It’s tough to deliver a consistently great experience. It takes hard work and intentionality.
Across the board, there are a number of underlying reasons why bad customer experiences are still so prevalent:
Negative customer experiences are damaging to your business. Your customers are your company’s most important resource, and building out systems that enable you to support them well won’t happen by accident.
At the same time, your customer service processes will always be evolving. This work is never done, so don’t focus on getting across a finish line that doesn’t exist.
Instead, make it a regular part of your routine to audit your customer experience and analyze customer feedback. By creating feedback loops that enable you to continually improve, you’ll build a flexible customer service operation that your customers can rely on.
The most common forms of bad customer service are: long wait times, unhelpful or rude agents, rigid policy enforcement over customer needs, support channels that are hard to find or use, and agents who don’t address the customer’s actual problem. Most specific bad experiences fall into one of these five buckets.
Inadequate training and poor staffing are usually the root causes. Agents can’t deliver great service without the skills to read between the lines, the authority to make judgment calls, or enough time in the day to actually handle the queue well. Most “bad agent” stories are actually “bad system” stories in disguise.
Acknowledge the issue directly, apologize sincerely (without making excuses), fix the root problem, and offer a tangible gesture of goodwill where appropriate. The recovery itself can sometimes produce a more loyal customer than a smooth original experience, but only if it’s handled with genuine accountability.
Research consistently finds that about 65% of customers switch brands after a bad experience. The direct cost is lost revenue from that customer, but the compounding cost is everyone they tell, and increasingly, everything they post online. One bad review often costs more than the revenue from the original interaction.
The highest-leverage moves are: role-playing difficult conversations, coaching on listening skills (not just scripts), empowering agents to make exceptions to policy when warranted, and setting response-time KPIs that don’t penalize spending extra time on genuinely hard cases. Most bad service comes from agents following process correctly; the process itself is usually the problem.
December 22, 2023
8 soft skills proven to improve customer service
Soft skills aren't soft; they're the skills that separate a good customer service team from a great one. Here are the eight that matter most and how to build them.
There’s a saying in the support world: no one majors in customer service.
There’s no single educational or career path that prepares you specifically for this kind of work. Most of us take a winding professional journey before landing in a support role, and we pick up the soft skills that define great customer service along the way, sometimes formally but often through direct exposure to customer-facing jobs.
Whether you’re new to support or a career customer service professional, this piece covers the skills that matter most, why they matter, and how to build them.
Soft skills are usually described as the counterpart to “hard” skills, things like communication, empathy, and emotional intelligence as opposed to technical competencies like coding or accounting. The common framing is that hard skills are measurable and soft skills are abstract.
That framing doesn’t hold up well when you actually look at customer service.
There’s nothing soft about the skills a support agent needs to defuse an angry customer on a phone call, guide a non-technical user through a troubleshooting process without condescension, or stay composed through a sixteen-email thread about a billing error. Many software engineers would not last a single shift on a support queue.
And the measurement argument falls apart too:
Soft skills are measurable. They’re also the skills that most directly shape customer perception of your brand.
A few numbers that hold up consistently across years of customer experience research:
All of these are direct consequences of soft skills in action, or their absence. Great agents calm frustrated customers; mediocre ones inflame them. Skilled listeners catch the real issue; rushed ones miss it.
If you only have time for five, these are the ones to hire for and train on:
The deeper list below goes further, but if you’re setting up hiring criteria or a training curriculum from scratch, these five cover most of what matters.
These three are so tightly linked that separating them is artificial. Customers contacting support are often in moments of real frustration or need. They’re not at their best, and they’re not obligated to be.
Empathy lets you understand the problem from their side. Compassion gives you the patience to stay in that headspace even when the problem is the tenth you've heard that day. Patience lets you guide a customer through a solution without condescension, no matter how wide the technical gap.
The customer doesn’t care about your metrics. They care about whether they feel heard. Empathy is the skill that makes them feel heard.
Most customers aren’t hostile. They’re frustrated, stressed, or dealing with something underneath the surface that you’re not privy to. If they’ve had bad experiences with other companies, they may arrive conditioned to expect the same from you.
De-escalation is the skill of absorbing someone’s emotional state without absorbing the anger. It’s staying calm, acknowledging frustration genuinely, and redirecting toward a path forward without making the customer feel dismissed.
The quieter version of this skill, and maybe the more important one, is not taking it personally. When a customer is sharp with you, it’s almost never about you. Knowing that lets you respond to the situation instead of reacting to the tone.
Customer perception forms fast. When customers are frustrated or a problem can’t be solved immediately, a friendly tone can turn an interaction around more than almost anything else.
A concrete example: reframing from negative to positive.
Instead of “I’m sorry, I can’t offer you a refund,” try “I can get that product replaced for you, would that help?”
The second version offers a path forward and invites collaboration. The first leaves the customer at a dead end with nowhere to go.
The same applies across channels. Warmth in your voice on a phone call, or a well-placed emoji in a chat message, genuinely lands. Customers can tell when you're engaged versus going through the motions.
This is the job. Clear communication means your message gets across without the customer needing to interpret it. It means pacing your explanations to the customer’s level. It means knowing when to use a reply template as a starting point and when to write fresh. It also means following basic email etiquette on the channels where customers expect professionalism.
Whatever channel you’re on (phone, chat, email, SMS, WhatsApp), communication skills are what turn intent into outcome. A technically correct answer delivered poorly will leave the customer more confused than before. A plain-language answer from someone who clearly understood the problem lands cleanly.
There’s a difference between hearing and listening. Active listening means paying attention not just to what the customer says but to what they’re not saying, how they phrase things, and what’s under the surface of the question.
Active listeners ask fewer but better questions. They don’t make customers repeat themselves. They notice when a question is standing in for a bigger problem.
A common tell of passive listening: the agent who immediately starts troubleshooting the wrong thing because they heard a keyword and pattern-matched. Active listeners wait until they actually understand the problem before offering a solution.
Support agents regularly run into situations they’ve never encountered. A new bug, an edge case, a feature used in a way no one thought of. The difference between a good agent and a great one is how they respond to not knowing.
Curious agents dig in. They approach unfamiliar problems with “let me figure this out” rather than “let me deflect this.” Adaptable agents adjust when their first theory turns out to be wrong. Resilient agents stay functional at the end of a long day when a P1 lands in their queue.
You can’t train curiosity exactly, but you can hire for it and reinforce it by celebrating the deep dives.
Curiosity gets you interested in a problem. Problem-solving gets it resolved.
The best agents have a methodology, a reliable way to narrow down what’s happening when something goes wrong. They know which tools to check, which logs to read, which teammate to ask. They know when to try a quick fix versus when to escalate to engineering.
Resourcefulness is the adjacent skill: knowing where information lives, who holds it, and how to get to it without burning an hour on Slack.
Support agents handle dozens of conversations a day. Each customer, though, only interacts with you once (or rarely). What’s routine for you is rare for them.
Ownership means the agent treats each customer’s issue as their issue, not just a ticket. It means following up on escalated issues instead of marking them closed and moving on. It means representing the customer’s experience internally when you share feedback with product or engineering.
Advocacy is the outward version of ownership. Great support teams don’t just solve customer problems, they make sure the company knows about patterns that matter. The agent who flags that ten customers asked about the same bug this week is doing advocacy work. That’s how support teams become engines of product improvement instead of ticket-closing machines.
If you’re looking at this list and thinking “I want to work on some of these,” you’re in good company; everyone at every level of customer service has growth areas here.
A few ways that actually work:
Structured courses and training. Platforms like LinkedIn Learning, Coursera, Udemy, and Skillshare have solid content on empathy, active listening, and conflict resolution. They’re not magic, but they give you vocabulary and frameworks for practicing.
Books that hold up. A few that still get recommended:
Peer feedback and mentorship. The people you work with are often your best resource. Regular peer review of customer conversations, low stakes, high candor, builds pattern recognition fast. Role-playing tough customer scenarios in a safe setting lets you try new approaches without a real customer on the line.
Shadowing and reverse-shadowing. Watching senior agents handle calls or chats teaches what you can’t get from a book. Having senior agents watch you teaches what you can’t see in yourself.
Recording and reviewing your own work. With permission and for training purposes, reviewing your own customer conversations surfaces habits you don’t know you have. Most agents who do this find at least one pattern they wish they hadn’t.
The last word on this: there’s no real boundary between hard and soft skills in customer service. They feed each other constantly.
The best support professionals have both. Looking at any great support team, the people who stand out combine technical fluency with the human skills to deploy that fluency well.
That’s not a nice-to-have. That’s the job.
You’ll occasionally see lists of the “7 Cs of customer service” floating around: clarity, consistency, care, competency, choice, courtesy, and communication. It’s a useful mnemonic if it helps, but don’t get hung up on matching a specific framework. The actual work is building the underlying skills (the ones covered above), not memorizing the acronym. Pick whichever framework helps your team internalize the point.
Missive is a collaborative email client built for teams that take customer service seriously. Shared inboxes, internal chat on every conversation, and multi-channel support across email, SMS, WhatsApp, and live chat. Free for up to 3 users, try it free.

December 19, 2023
66 Most Significant Customer Service Statistics in 2026
These statistics can help you see the direction the customer service industry is heading in—and what you need to do to prepare your business in 2026.
In a recent McKinsey & Company study, customer service leaders were asked: what is your highest priority?
The answer at the top of the list was improving customer experience.
This goal has become the driving force behind change in many aspects of the customer service industry, from the tech we use to how we design omnichannel experiences and even response times.
To highlight the different aspects of customer service and their importance, we have collected 66 key customer service statistics covering the rapid changes, like AI, chatbots, and automation, that are helping customer service teams meet these expectations.
These statistics can help you see the direction the customer service industry is heading in—and what you need to do to prepare your business in 2026.
Let's take a look 👇
Poor customer service directly drives customer churn, negative word-of-mouth, and lost revenue. Research shows 96% of customers will cut ties with a company after bad service, and US businesses risk losing $1.9 trillion annually due to poor experiences.
The data on how customers respond to bad service has been consistent for decades. A White House Office of Consumer Affairs study found unhappy customers tell 9-15 people about their experience—some tell 20 or more. For every customer who complains, 26 others stay silent. As HuffPost noted, the results were... humbling.
Yikes 🥴
Further research from Qualtrics and ServiceNow found that 80% of customers have switched brands because of a poor customer experience, and US companies risk losing $1.9 trillion in spending because of it.
Customers rarely give businesses a second chance after poor service 👇
Interestingly, this sentiment was shared across age brackets. A Propel Software study found a majority of Millennials (57%) will cut ties with a brand after one bad encounter, while 54% of all survey respondents said they would do the same.
What is perhaps most alarming for brands is how unforgiving customers are unless the customer service team can save the day.
These statistics are clear: customer service teams can win people back, even after a rotten experience.
Excellent customer service means fast issue resolution, first-contact problem solving, and consistent empathy. 90% of customers say issue resolution is their top concern, and 83% feel more loyal to brands that respond to and resolve complaints.
Here's what customers actually want 👇
What's interesting is that brand loyalty can be achieved through great customer service. Propel Software found that brands can win over customers for life if they remember customers' birthdays, call them by name, and move swiftly when complaints are made.
Artificial Intelligence (AI) is changing everything we do, from how we write to how we program and yes—how we talk to customers.
Forbes labeled AI as a new industrial revolution, and a 2022 IBM survey found AI adoption rates are steadily increasing across the globe.

For customer service, the emergence of AI has led to monumental shifts:
According to HubSpot, AI is making customer service teams more efficient across the board:

HubSpot: The State of AI in Service, 2023.
There's a gap between how leaders invest in AI and how much customers want to interact with it.

There is still a need for humans in the customer service world. Intercom: The State of AI in Customer Service 2023 Report.
The worries of such disruptive tech are not new. The same thing happened when computers were put into workplaces in the 1980s—many people feared they would lose their jobs. But just as those computers still require a human to run them, Intercom found AI and automation tools will need people to develop chatbots, design AI conversations, and create strategies. The future of customer service and AI looks different—but the progression looks promising.
Chatbots handle up to 80% of routine customer service tasks, offer 24/7 availability, and can cut service costs by up to 30%. They've become essential for meeting modern customer expectations around speed and accessibility.
The chatbot market reflects this shift—it's predicted to reach $15.5 billion by 2028, growing 23.3% annually. (MarketsandMarkets)
The other big bonus of chatbots is they are incredibly beneficial for a company's budget. Not only can chatbots cut customer service costs by up to 30% (IBM), but:
However, there is also a generational divide around chatbot preferences. While 20% of Gen Z customers want to start a customer service experience with a chatbot, that figure drops to just 4% for Boomers. (Simplr)
It also depends on what type of issue the customer has.
Chatbot use is definitely increasing, and more customers are happy to use them. But the stats are clear—a large portion of customers out there still want to talk to a real human 🙋
Omnichannel customer service means customers can switch between channels—chat, email, phone, social—without repeating themselves. 9 out of 10 customers expect this seamless experience, yet 77% of companies struggle to deliver it.
Sound familiar? 👇
Each response is reasonable on its own. The problem starts when customers jump channels and have to explain everything again.
Customers want a painless support experience. In fact, 9 out of 10 customers expect a seamless omnichannel experience no matter what communication method they use. (CX Today)
Brands must decide what communication channels to prioritize, depending on customer preferences.
However, some brands struggle to meet these customer demands.
77% of companies struggle to create a cohesive customer experience across devices and channels, even when 62% of customers say they want to engage over multiple digital channels. The good news is there is a huge opportunity for businesses to let customers self-service a problem 👇
But be warned—self-service doesn't mean forgetting about your customers. 77% of customers say a poor self-service option is worse than not offering any support at all, as it wastes their time!
There is no doubt the way we approach customer service is changing at a rapid pace.
Gartner predicts that by 2025, customer service teams that use AI in their multichannel customer strategy will boost operational efficiency by 25%. And 84% of companies think AI chatbots will become a crucial communication tool for talking to customers (CCW).
What's interesting is how these changes will come about. Research by Boston Consulting Group (BCG) predicts generative AI will be embedded and rolled out across customer service functions until it can provide continuous assistance across all customer journeys:

If this predicted rollout becomes a reality, BCG expects generative AI to increase customer service productivity by anywhere from 30% to 50%.
Gartner also expects that by 2027, chatbots will become the main customer service channel for a quarter of all businesses. If this happens, it will lead to a major shakeup of the entire customer journey, and businesses must start to plan for how AI will work alongside customer service representatives.
According to McKinsey research, an estimated 75% of customers use multiple channels in their ongoing experience. McKinsey has also offered a vision of what a future customer service model could look like if AI were introduced at every customer touchpoint:

While the statistics we have talked about highlight that customers are not quite all in on AI and automated support experiences, they are getting more comfortable.
The best thing your business can do is embrace customer service tech while keeping humans central to complex issues. Customers don't care how they get good service—they just expect it.
Brands that win will leverage every tool in their toolkit:
Follow these trends in 2026—and your customer service team will thrive 🥳
December 19, 2023
The 7 best Zendesk alternatives for 2026
Most teams evaluating Zendesk don't actually need a help desk. Here are the 7 best Zendesk alternatives for 2026, sorted by what your team actually needs.
The best Zendesk alternatives in 2026 are Missive, Help Scout, Freshdesk, HubSpot Service Hub, Zoho Desk, Intercom, and Gorgias. Each one handles the two complaints that push teams off Zendesk (the price and the complexity), but they’re built for different jobs. This guide sorts them by what you actually need, a shared inbox or a full help desk, with current April 2026 pricing for every tool.
There’s a question most Zendesk shoppers don’t ask before they start comparing: do you even need a help desk?
A lot of teams land on Zendesk because it’s the default, then spend months configuring a ticketing system they don’t really need. They reply to email, forward things to coworkers, and chase the occasional WhatsApp message. What they need is a shared inbox with real collaboration, not a platform that turns every customer message into a numbered ticket.
If that’s you, the list below starts with tools built around email and collaboration. If you genuinely need full help-desk features (SLAs, queues, ticket schemas, self-service portals), the second half of the list covers those.
Four reasons come up in almost every conversation with a team switching off Zendesk.
Zendesk advertises Support Team at $19 per agent per month. Most teams quickly realize that tier is an email-only ticketing system with almost nothing else. The one people actually end up on, Suite Team, starts at $55 per agent per month. Suite Professional is $115. Suite Enterprise is $169.
Add the Advanced AI agent at $50 per agent per month, and a 20-person team can spend more than $3,000 a month before a single automation is configured.
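To make that math easy to check against your own headcount, here's a rough cost sketch using the per-agent list prices quoted above. This is a back-of-envelope estimate, not official Zendesk pricing; rates vary by billing term and negotiation, so verify before budgeting.

```python
# Rough monthly cost for a 20-agent team, using the per-agent list
# prices quoted in this article (annual billing). Illustrative only.
SEAT_PRICES = {
    "Suite Team": 55,
    "Suite Professional": 115,
    "Suite Enterprise": 169,
}
ADVANCED_AI = 50  # Advanced AI add-on, per agent per month
AGENTS = 20

for tier, seat in SEAT_PRICES.items():
    monthly = (seat + ADVANCED_AI) * AGENTS
    print(f"{tier}: ${monthly:,}/month")
```

With the AI add-on included, the Professional tier alone clears $3,300 a month for 20 agents, before a single automation is configured.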
Reports from current and former customers consistently put Zendesk’s full deployment at 3+ months for mid-size teams. The feature surface is huge: custom ticket schemas, triggers, workflows, macros, SLAs, routing rules. Most teams end up hiring a consultant or dedicating an admin to keep it running.
That’s fine if you need it. Most teams don’t.
This one hurts more than it should. If you’re evaluating customer support software, and the vendor’s support team is itself hard to reach, that’s a signal. Reddit threads surface the same complaint on repeat: teams spending tens of thousands a year who can’t get a human on the phone. For a customer service tool, that’s a rough look.
This is the one nobody talks about. We’ve spoken with small specialty retailers who had spent $50 per user on Zendesk before realizing the real problem: they didn’t want to automate their customer service, they wanted to collaborate on email. What they needed wasn’t tickets. It was a shared inbox multiple people could work in together, plus separate shared inboxes for accounts receivable and vendor orders. Zendesk solved none of that.
If most of what you’d do in a help desk is answer email while looping a teammate in, you’re paying for a ticketing layer you don’t use.
Most comparison articles treat every tool on the list as a like-for-like Zendesk replacement. That’s the wrong frame for most small and mid-size teams.
A better question: do you need tickets or do you need a shared inbox?
The split matters because the two categories have completely different tools, pricing models, and user experiences. If you pick the wrong category, you’ll pay more, implement slower, and make your customers’ experience worse than before.
Before the list, some context for readers who haven’t come across us: Missive is a collaborative email client. It looks and works like an email app (Gmail, Outlook, Apple Mail), except multiple people can share inboxes, assign conversations to each other, chat inside the thread, and co-write drafts in real time. It handles email, SMS, WhatsApp, Instagram, Messenger, and live chat from the same interface. Teams use it for support, sales, ops, and general shared-inbox work across departments.
The rest of this article is an honest comparison of the main options, including Missive.
Best for: teams that want to collaborate on email (and other channels) without turning every message into a ticket.
Missive is the clearest break from the Zendesk model on this list. Instead of treating customer messages as tickets moving through a pipeline, it treats them as what they are: emails, texts, DMs, and chat conversations that multiple people might need to work on.
You connect your existing email accounts (Gmail, Outlook, Microsoft 365, IMAP) and can start collaborating in minutes. Internal chat lives inside every thread, so discussing a reply never means forwarding the email or switching to Slack. Assignments route a conversation to a specific person or team without duplicating it. Rules handle the repetitive routing work, and AI rules can auto-classify, label, or draft replies using your own OpenAI, Anthropic, or Google API key.
The multichannel story is unusually complete for the price: the same rules engine handles WhatsApp, SMS, Instagram, Messenger, and live chat alongside email, no add-ons.
Where it wins: setup in an afternoon instead of months. Real collaboration on email (live co-drafting, internal chat in-thread). Flat, predictable pricing. Bring-your-own-key AI so you’re not paying a per-agent AI upcharge.
Where it falls short: Missive isn’t a full help desk. If you need strict SLA enforcement, deep ticket schema customization, or a customer-facing help center with community forums, you’ll outgrow Missive faster than a tool like Zendesk or Freshdesk.
Pricing (annual billing):
| Plan | Free | Starter | Productive | Business |
|---|---|---|---|---|
| Price | $0/user | $14/user/mo | $24/user/mo | $36/user/mo |
Free for up to 3 users with 15-day history. Paid plans include unlimited history and a 30-day money-back guarantee.
Try Missive free or book a demo.
Best for: support teams that want a clean, email-style interface with a built-in knowledge base.
Help Scout has built its identity around “support that feels like email, not a ticket number.” It’s a solid shared inbox with a nicely designed knowledge base (Docs), an embeddable widget (Beacon), and a cleaner experience than most help desks. Teams who care about human-feeling customer replies often land here.
The tradeoff is that Help Scout’s shared-inbox pricing isn’t dramatically below Zendesk’s on a seat-for-seat basis once you add AI and extra inboxes, and AI Answers is billed separately at $0.75 per resolution. If you’re choosing Help Scout primarily for cost reasons, run the full math first. If you’re comparing further, we have a full write-up of Help Scout alternatives.
Where it wins: clean UI, genuinely good knowledge base, support for teams that want a warm brand voice.
Where it falls short: limited multichannel (WhatsApp is on Plus and up), AI Answers is per-resolution on top of seat costs, reporting history is capped by tier.
Pricing (annual billing):
| Plan | Free | Standard | Plus | Pro |
|---|---|---|---|---|
| Price | $0 (5 users) | $25/user/mo | $45/user/mo | $65/user/mo (10-user min) |
Best for: teams that want Zendesk-class features at meaningfully lower prices.
Freshdesk is the most direct feature match to Zendesk on this list, just priced lower. Multichannel support, a decent free tier, Freddy AI (priced as an add-on), and scalability into the hundreds of agents are all there. Agent collision detection (which prevents two agents from replying to the same ticket) is especially useful at volume.
The tradeoff is flexibility. Freshdesk offers less customization than Zendesk. If you have a workflow with dozens of custom fields, custom triggers, and conditional logic chained five levels deep, you’ll hit walls faster on Freshdesk than on Zendesk or Salesforce Service Cloud.
Where it wins: the closest thing to “Zendesk without the premium.” Free tier covers small teams. Decent AI at a predictable add-on price.
Where it falls short: Freddy AI Copilot is $29 per agent per month on top of seat cost; Freddy AI Agent sessions are $100 per 1,000 with no rollover. Adds up at scale.
Pricing (annual billing):
| Plan | Free | Growth | Pro | Enterprise |
|---|---|---|---|---|
| Price | $0 (2 agents) | $15/agent/mo | $49/agent/mo | $79/agent/mo |
Best for: teams already on HubSpot CRM or the broader HubSpot platform.
If you’re running HubSpot for sales or marketing, Service Hub is the obvious default for support. Customer records, tickets, deals, and conversations live on the same object, which is genuinely valuable when a support interaction needs sales context or vice versa.
Adopting Service Hub as a standalone support tool (without using the rest of HubSpot) is a harder pitch. The value is in the integration; without it, you’re paying HubSpot-scale prices for a mid-tier help desk.
Where it wins: the deepest native CRM integration of any tool on this list. Clean free tier for very small teams.
Where it falls short: Enterprise requires a 10-seat minimum and a $3,500 onboarding fee. Professional tier jumps from $15 to $90 per seat per month, a hefty cliff. Starter plans have limited support for continuing chat conversations via email, a workflow quirk that trips up new teams.
Pricing (annual billing):
| Plan | Free | Starter | Professional | Enterprise |
|---|---|---|---|---|
| Price | $0 (2 users) | $15/user/mo | $90/user/mo | $150/user/mo |
Professional has a $1,500 onboarding fee. Enterprise has a $3,500 onboarding fee and 10-seat minimum.
Best for: teams that want a full help desk at the cheapest credible price.
Zoho Desk gives you the full multichannel help desk feature set (tickets, email, chat, phone, social, web forms) at a meaningfully lower price than Zendesk Suite equivalents. Zia, Zoho’s AI assistant, drafts replies, categorizes inbound, and surfaces knowledge base articles for self-service. Integration with the rest of the Zoho suite (CRM, Analytics, Campaigns) is tight.
The catch is that Standard is intentionally limited; most teams end up on Professional or Enterprise once they hit the usage caps. Still cheaper than Zendesk, but run the numbers for your actual team size before signing.
Where it wins: best cost-per-feature ratio if you need traditional help-desk capabilities. Plays well with the rest of the Zoho stack.
Where it falls short: UI is dated relative to newer tools. Standard tier is restrictive enough that the “real” starting price is closer to Professional.
Pricing (annual billing):
| Plan | Standard | Professional | Enterprise |
|---|---|---|---|
| Price | $14/user/mo | $23/user/mo | $40/user/mo |
Best for: product-led companies where live chat and in-app messaging are the main channels.
Intercom started as live chat and grew into full customer service, which shapes everything. The UI prioritizes real-time channels. Fin, their AI agent, is genuinely best-in-class for chat deflection. Workflows assume you’re messaging customers actively, not waiting for emails to land.
If most of your support happens in an in-app widget, Intercom is a legitimate best-in-class choice. If most of your support is email, the tool pulls you toward channels you don’t use.
Where it wins: the best chat-based AI agent on the market (Fin). Tight integrations with product analytics and onboarding flows.
Where it falls short: pricing is opaque and can spike fast. Fin charges $0.99 per automated resolution on top of seat costs, and your bill grows as your chatbot improves, which is backwards for most cost models.
Pricing (annual billing, plus Fin at $0.99 per resolution):
| Plan | Essential | Advanced | Expert |
|---|---|---|---|
| Price | $29/seat/mo | $85/seat/mo | $132/seat/mo |
Best for: Shopify (or BigCommerce / Magento) brands.
Gorgias is the clear pick for ecommerce, full stop. The Shopify integration is deeper than anything else on this list. Agents can edit orders, issue refunds, and apply discounts without leaving a ticket. Revenue attribution at the ticket level is unique to Gorgias.
Pricing is ticket-volume-based, which is great when volume is predictable and brutal when it isn’t. Black Friday spikes can double a monthly bill. AI Agent interactions are charged per resolution on top of the base plan.
Where it wins: the best ecommerce integration on the market. Unlimited agent seats on paid plans (you pay for volume, not headcount).
Where it falls short: non-ecommerce businesses pay for integrations they won’t use. Ticket-volume pricing adds variance to the budget every month.
Pricing (annual billing):
| Plan | Starter | Basic | Pro | Advanced |
|---|---|---|---|---|
| Price | $10/mo (50 tickets) | $50/mo (300 tickets) | $300/mo (2,000 tickets) | $750/mo (5,000 tickets) |
AI Agent interactions: $0.90 each on annual billing.
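One way to sanity-check volume-based pricing is to compute the effective cost per ticket at each tier. A quick sketch using the Gorgias list prices above (base price divided by included tickets only; overage and AI charges would sit on top):

```python
# Effective cost per included ticket at each Gorgias tier (annual-billing list prices).
# This ignores overage rates and per-resolution AI charges, which vary by plan.
tiers = {
    "Starter": (10, 50),      # ($/mo, included tickets)
    "Basic": (50, 300),
    "Pro": (300, 2000),
    "Advanced": (750, 5000),
}

for name, (price, tickets) in tiers.items():
    print(f"{name}: ${price / tickets:.3f} per ticket")
# Starter: $0.200, Basic: $0.167, Pro: $0.150, Advanced: $0.150
```

The per-ticket rate flattens out quickly, which is why the pricing feels fair at steady volume and harsh during spikes: a Black Friday surge pushes you into overage, not into a cheaper rate.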
Prices below reflect annual billing unless noted. Monthly billing on most of these tools runs 20-40% higher. Verified April 2026; spot-check current tiers before buying.
| Tool | Starting price | Best for |
|---|---|---|
| Missive | Free for 3 users, then $14/user | Teams that want a shared inbox, not ticketing |
| Help Scout | Free for 5 users, then $25/user | Support teams wanting email-style UX plus a knowledge base |
| Freshdesk | Free for 2 agents, then $15/agent | Teams wanting Zendesk-class features at lower prices |
| HubSpot Service Hub | Free for 2 users, then $15/user | Teams already using HubSpot CRM |
| Zoho Desk | $14/user | Teams that need a full help desk at minimum cost |
| Intercom | $29/seat | Product-led companies leaning on live chat |
| Gorgias | $10/mo (volume-based) | Shopify / ecommerce brands |
Seven questions to work through before you pick a tool.
This is the question most teams skip, and it’s the most important one. If most of your work is replying to email, forwarding to coworkers, and occasionally responding to a text or WhatsApp, you probably need a shared inbox. If you have formal SLAs, a structured support org, and real queue management, you need a help desk. Picking the wrong category makes everything downstream harder.
List the channels your customers use most: email, live chat, WhatsApp, SMS, Instagram, phone. Then compare against each tool’s native channel support. Some tools treat chat as first-class and email as an afterthought, or vice versa. Bolted-on channels usually mean extra costs and a worse experience.
Per-seat pricing (most tools) is clean until you want to add a contractor for a week. Volume-based pricing (Gorgias) is fair when volume’s predictable and harsh when it’s not. Tiered seat pricing with features locked behind higher tiers (Zendesk, Help Scout) is fine until you need the one feature on the next tier up.
Model 12 months of growth before signing anything. A tool that’s cheap at 5 users can turn expensive at 25.
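A rough way to do that modeling, using a hypothetical hiring curve (the headcount numbers below are made up; swap in your own plan) and the annual-billing seat prices from this article:

```python
# Project 12 months of seat cost under a hypothetical growth curve.
# Headcount per month is an illustrative assumption, not a recommendation.
monthly_headcount = [5, 5, 6, 7, 8, 10, 12, 14, 17, 20, 23, 25]

# Annual-billing list prices per seat per month (from the comparison above).
tools = {
    "Zendesk Suite Team": 55,
    "Freshdesk Growth": 15,
    "Zoho Desk Standard": 14,
}

for tool, price in tools.items():
    total = sum(seats * price for seats in monthly_headcount)
    print(f"{tool}: ${total:,} over 12 months")
```

On this curve the Zendesk bill ends up roughly 3.7x the Freshdesk bill over the year, even though both look manageable at month one. That gap is the point of modeling before signing.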
For Missive, Freshdesk Free, or Zoho Standard, you can be running in an afternoon. For Zendesk, Salesforce Service Cloud, or HubSpot Professional, plan on weeks or months. The cost of a long implementation isn’t just consultants; it’s the three months your team isn’t using the new tool.
Three common models, all with different long-term economics:
- Per-resolution pricing (Help Scout AI Answers at $0.75, Intercom Fin at $0.99, Gorgias AI Agent at $0.90): your bill scales with how much the AI actually handles, so success gets more expensive.
- Per-seat add-ons (Freshdesk’s Freddy AI Copilot at $29 per agent per month): predictable, but you pay whether agents use it or not.
- Prepaid usage packs (Freddy AI Agent sessions at $100 per 1,000, no rollover): cheap per unit, wasteful if you overbuy.
“Team inbox” means different things across tools. Test the specific workflows: can two people edit the same draft simultaneously? Can you @mention a teammate inside a conversation without forwarding? Can you assign a conversation and have it show up in that person’s inbox automatically? These questions separate genuine collaboration from single-user inboxes with a thin multiuser layer.
Ironic but relevant. A customer service tool with slow, hard-to-reach support is a red flag. Read G2 and Capterra reviews for response-time patterns, and reach out yourself before buying.
Missive (free for up to 3 users, all features except unlimited history), Freshdesk (free for 2 agents), HubSpot Service Hub (free for 2 users with HubSpot branding), and Help Scout (free for 5 users, 1 inbox) all have credible free tiers. Missive’s free plan is the most feature-complete; the rest trade some feature access for higher user limits.
If you’re a small team that works collaboratively on email (rather than running formal support operations), Missive is the cleanest fit. If you want a traditional help desk on a small-business budget, Freshdesk Growth or Zoho Desk Standard are the value picks. If you already use HubSpot or Shopify, default to the native option (Service Hub or Gorgias). A broader comparison for small businesses lives in our help desk software guide.
For large enterprises, the real shortlist is usually Salesforce Service Cloud (if you’re already standardized on Salesforce), Kustomer (for high-volume consumer brands wanting CRM + support in one), or Intercom (for product-led companies with live chat as the main channel). None of those are on this list because they’re rarely the right answer for small and mid-size teams, which is who most Zendesk shoppers are.
Zendesk Suite Team (the realistic starting tier) is $55 per agent per month. Freshdesk Growth is $15. Zoho Desk Standard is $14. Missive Starter is $14. Help Scout Standard is $25. For a 10-person team, switching from Zendesk Suite Team to Freshdesk or Zoho saves roughly $400 per month on the base plan, before AI and add-ons factor in.
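The 10-person math in plain terms, using the base-plan list prices above:

```python
# Monthly base-plan cost for a 10-person team (annual-billing list prices).
seats = 10
zendesk = seats * 55    # Zendesk Suite Team
freshdesk = seats * 15  # Freshdesk Growth
zoho = seats * 14       # Zoho Desk Standard

print(zendesk - freshdesk)  # savings vs Freshdesk: 400
print(zendesk - zoho)       # savings vs Zoho: 410
```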
It depends on how much you’ve built. If you’re using Zendesk as a shared inbox with a handful of rules, migration takes a day or two. If you’ve built custom workflows, extensive automations, a complex ticket schema, and integrations across your stack, plan on weeks. Most alternatives offer migration tools or paid migration services to move tickets, contacts, and knowledge base articles.
This is the question worth stopping on. If most of your “support” is replying to email, looping in a coworker, and the occasional WhatsApp message, a shared inbox tool like Missive handles the work without the help-desk overhead. The ticket model makes sense when you have real volume, formal SLAs, and a structured support org. For teams smaller than that, it adds cost and friction without adding value.
Missive. Internal chat lives inside every conversation, live drafting shows who’s typing in real time, assignments make ownership explicit, and rules work across every channel. Other tools bolt collaboration onto a ticketing model; Missive is built around collaboration from the start.
Zendesk isn’t a bad product. It’s an expensive, complex, enterprise-focused product. If you’re not an enterprise, one of the seven tools above will serve you better.
For most small and mid-size teams, the answer is one of three: Missive (if you want real collaboration on email and don’t need ticketing), Freshdesk (if you want a traditional help desk at a reasonable price), or HubSpot Service Hub (if you’re already in HubSpot). Try your top two against real customer conversations for a week. The best tool on paper is almost never the best tool in practice.
If you’re leaning toward a shared inbox approach instead of a help desk, try Missive for free. You’ll know within a week whether it fits how your team actually works.