AI FAQ

Common questions about AI features in Missive.

General

Which AI provider should I choose?

All three providers work well with Missive. Choose based on your preferences:

  • OpenAI. Widely used, broad model selection, good all-around performance.

  • Anthropic Claude. Strong at nuanced writing and careful reasoning.

  • Google Gemini. Competitive pricing, large context windows, free tier available.

You can connect multiple providers and switch between them at any time.

Can I use multiple providers at the same time?

Yes. Connect as many providers as you want. The assistant lets you pick which model to use per conversation, and you can configure each prompt and AI rule to use a specific provider.

Will AI providers use my data to train models?

No. All three providers state that data submitted through their APIs is not used to train models.

Missive sends content to your provider only during active AI processing. No data is stored beyond the immediate request.

How do I monitor AI costs?

It depends on how you've set up AI.

Missive AI credits. Organization admins can see their current balance and usage directly in Settings > AI.

Bring your own key (BYOK). Each provider has its own usage dashboard; check usage and costs in your provider account.

Set budget limits in your provider's account to prevent unexpected charges.

Do Missive AI credits work across all providers?

Yes. Credits are a single balance that applies to any supported provider. Spend $5 on Claude, then on GPT, then on Gemini -- it all draws from the same balance. Pick whichever model you want per conversation, prompt, or rule.

Can I use Missive AI credits and bring-your-own-key at the same time?

Yes. Add one of each in Settings > AI. When you pick a model in the assistant, you'll see models from both and can choose per conversation. The same applies to prompts and AI rules.

What happens when my Missive AI credits run out?

Organization admins get an email when the balance is low. When it hits $0, AI features stop working and every user sees an error in the app. An admin can top up from Settings > AI > Buy AI credits to resume. Top-ups are manual -- there's no auto-recharge today.

What does "The AI provider rate limit has been exceeded" mean?

This error comes from your AI provider (OpenAI, Anthropic, or Google), not from Missive. Despite the name, it can surface for reasons beyond actual rate limiting -- a missing payment method, an expired card, exhausted credits, or a spend cap that's been reached on the provider account will all produce the same message.

Work through these checks:

  • Provider account billing. Confirm the account behind the integration has a valid payment method and sufficient funds or credits. Errors typically start the moment the balance runs out or a card fails.

  • Scope of the problem. If every user in the organization hits the error, the cause is almost always on the provider account (billing, expired credits, spend cap). If only one user is affected, it's more likely a true rate limit triggered by their usage volume.

  • Try another provider. Add a second AI integration under Settings > AI -- for example Claude or Gemini -- and run the same action through it. If it works there, the original provider account is the problem.

For provider-specific causes and tier details, see the OpenAI, Anthropic Claude, and Google Gemini sections below.

Who can add or manage AI providers?

Only organization admins. All AI providers are linked to the organization rather than to an individual user.

Once a provider is added, the admin decides who inside the organization can use it: the entire organization, a specific team, a few users, or a single user. Access can be adjusted at any time from Settings > AI.

Why isn't a recently released AI model available yet?

Not every newly announced model is immediately available in Missive. When a provider releases a model in preview or beta, it often comes with significantly lower rate limits and can produce errors under normal usage. We wait until a model is production-ready before offering it.

Once a model is stable and performs reliably at the rate limits real-world usage requires, we add it. We actively monitor new releases from all supported providers.

Using AI

How is quoted email history handled when sending context to AI?

It depends on the context type you're using.

Conversation context (@Current conversation): Missive strips quoted history from all messages except the first. The first message keeps its quoted content to preserve the original context; every subsequent message has quotes removed, since that content is already present earlier in the thread. This means you won't burn extra tokens on repeated quoted history in long threads.

Message context (@Current message): The full message body is sent as-is, including any quoted history below the reply. Quotes are not stripped.
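The quote-stripping behavior for conversation context can be sketched as follows. This is a minimal illustration, not Missive's actual implementation -- the function names and the quote-detection heuristic (lines starting with ">" or an "On ... wrote:" attribution) are assumptions for the example:

```python
import re

# Naive heuristic: quoted history starts at the first ">" line
# or "On ... wrote:" attribution line.
QUOTE_MARKER = re.compile(r"^(>|On .+ wrote:)", re.MULTILINE)

def strip_quoted_history(body: str) -> str:
    """Drop everything from the first quote marker onward."""
    match = QUOTE_MARKER.search(body)
    return body[:match.start()].rstrip() if match else body

def build_conversation_context(messages: list[str]) -> list[str]:
    """Keep the first message intact; strip quotes from later ones,
    since that content already appears earlier in the thread."""
    if not messages:
        return []
    return [messages[0]] + [strip_quoted_history(m) for m in messages[1:]]
```

In a long thread this avoids sending the same quoted text over and over, which is exactly why repeated quoted history doesn't inflate token usage with conversation context.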

How does Missive handle very long conversation threads?

For very long threads, Missive truncates content to fit within the model's context window. You don't need to worry about a 100-email thread exceeding limits -- Missive manages this automatically.
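One common way to fit a long thread into a fixed context window is to keep the most recent messages that fit a token budget. This is a minimal sketch under assumed details (the ~4-characters-per-token heuristic and the newest-first policy are illustrative, not Missive's documented algorithm):

```python
def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def truncate_to_budget(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # newest messages are most relevant
        cost = rough_token_count(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

With a 100-email thread, a scheme like this drops the oldest messages first so the request always fits the model's limit.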

Why does a built-in prompt (Summarize, Translate, etc.) always use the same model?

Missive controls the prompt text sent to AI providers for built-in prompts, and selects the model that works best with those prompts per provider. Because the prompt text is fixed, the model is fixed too.

You can't change the model for built-in prompts. To use a different model, create a custom prompt with the same instructions and select your preferred model there.

Why isn't web search available in the AI assistant?

Web search is not available for any AI model in the assistant. It's something we'd like to add eventually, but there's an important security reason we're being cautious about it.

The risk is prompt injection. Consider this scenario: you receive an email with hidden text (white text on a white background) containing carefully crafted instructions targeting the AI. That hidden text could instruct the AI to make a web request to a specific URL and include sensitive information from your conversation -- calendar details, contact info, email content -- as URL parameters, leaking it to a third party.

The AI would detect and refuse this most of the time, but the risk is real enough that we've chosen not to enable web access for now. A web search tool gives the AI the ability to make outbound requests, and that's exactly the mechanism an attacker would need to exfiltrate data.

We're exploring mitigation techniques and plan to address this in the future once we're confident we can do it securely.

How does replying using canned responses work?

When you reference canned responses in a prompt (using @Responses or @All responses), Missive searches your canned responses using semantic (concept-based) matching rather than keywords. The most relevant matches are sent to your AI provider as context, and the AI drafts a reply based on both the email and your canned responses.

Because the search is concept-based, it works across languages. A canned response written in German about invoices can still match an English customer email asking about their invoice -- the AI understands the underlying concept, not just the literal words.

How do canned responses affect token usage in prompts?

It depends on how you reference them.

Individual canned responses (@Response name): The full text body of each referenced response is added directly to the prompt. Token cost is predictable -- it's the character count of each response you attach.

@All responses: This does not add every canned response to the prompt. Instead, Missive runs a semantic search against your canned responses and passes up to 20 of the best matches to the AI. Token cost varies: a generic prompt or a library of very similar responses will return more matches and use more tokens; a specific prompt against a diverse library will return fewer matches and use fewer.

If token cost is a concern, referencing specific responses is more predictable than @All responses.
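The @All responses flow -- rank canned responses by similarity to the query, keep up to 20 -- can be sketched like this. Real semantic search uses embedding vectors from a model (which is what makes it work across languages); the bag-of-words vectors here are only a stand-in to show the top-k ranking mechanics, and all function names are hypothetical:

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_matches(query: str, responses: dict[str, str], limit: int = 20) -> list[str]:
    """Return up to `limit` response names ranked by similarity to the query."""
    q = embed(query)
    ranked = sorted(responses, key=lambda name: cosine(q, embed(responses[name])), reverse=True)
    return ranked[:limit]
```

Only the top matches are passed to the provider, which is why a diverse response library against a specific prompt costs fewer tokens than a library of near-duplicates.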

How do I make the AI respond in another language?

Create a custom prompt and save it for reuse. For example:

Summarize this conversation in Spanish.

Or for replies:

Draft a reply in Spanish, matching the tone of the original email.

You can also use the built-in Translate prompt from the draft toolbar, which supports multiple languages.

How do I monitor cost usage per user?

Go to Settings > AI. The Usage section shows a per-member breakdown for the last 30 days.

Missive AI credits. You see both AI actions (one action per AI request) and AI usage in dollars per user. The dollar amount includes the 25% Missive markup.

Bring your own key. You see AI actions per user inside Missive. Provider-side dollar costs aren't visible in Missive -- check the provider's own usage dashboard for those.

If you want per-user dollar billing with BYOK, add one BYOK integration per user (each with their own API key) so provider-side usage bills to individual accounts.

After adding a new AI provider, do my existing rules update automatically?

No. AI rules are configured with a specific provider and model. Adding a new integration doesn't change existing rules.

After connecting a new provider, open each AI rule and update the provider and model selection to use the new integration.

MCP

How do I connect HubSpot MCP to Missive?

Connect HubSpot as a Custom MCP integration.

  1. In HubSpot, go to Development > MCP Auth Apps.

  2. Create an MCP auth app named Missive.

  3. Set Redirect URL to https://auth.missiveapp.com/mcp/callback.

  4. Copy the app's Client ID and Client secret.

  5. In Missive, go to Settings > Integrations > MCP > Custom MCP.

  6. Set Name to HubSpot, Server URL to https://mcp.hubspot.com/, and Authentication to OAuth 2.0 (authorization flow after adding).

  7. Expand Advanced settings, paste the HubSpot Client ID and Client secret, then click Add.

  8. Choose the HubSpot account to connect and grant the requested permissions.

If you do not see Development in HubSpot, ask a HubSpot super admin to give you a Developer seat or the Developer tools access permission, or to create the MCP auth app for you.

If HubSpot says the redirect URL is missing or does not match, open the HubSpot MCP auth app and add https://auth.missiveapp.com/mcp/callback under Redirect URLs.

HubSpot permissions still apply. Missive AI can only access the HubSpot data the authorizing user can access.

See MCP integrations for capabilities and examples.

How do I connect a knowledge base or internal documentation to the AI assistant?

Use an MCP integration. Putting large amounts of documentation directly into instructions works, but every instruction is included in every AI request -- which increases token costs even when that content isn't relevant.

MCP gives the assistant on-demand access to external content. It fetches only what it needs for a given conversation, so you're not paying to pass your entire knowledge base on every request.

  • Notion: Connect your Notion workspace to give the assistant access to SOPs, product docs, onboarding guides, or any other internal pages.

  • Custom MCP: If your documentation lives elsewhere -- a help center, a wiki, or a custom platform -- connect it as a Custom MCP integration. Many documentation platforms (including GitBook) offer MCP servers you can point Missive at.

See MCP integrations for setup details.

Does Missive offer an MCP server for external AI tools?

No. Missive does not currently offer an MCP server that external AI tools (such as Claude Desktop, Cursor, or other MCP clients) could connect to. There is no official Missive MCP server you can point those tools at.

What Missive does support is connecting to external MCP servers from within Missive's AI assistant -- for example, Notion, Linear, Stripe, or a custom MCP endpoint. See MCP integrations for details.

If you need Missive exposed as an MCP server today, Relay has a guide on building one using Missive's REST API.

You can upvote the native MCP server feature request here.

OpenAI

Why does the OpenAI integration show a billing setup error?

You need OpenAI API access, not ChatGPT Plus ($20/month). These are different products.

  1. Create an account at platform.openai.com.

  2. Add a payment method or purchase credits in the Billing section.

  3. Create an API key in the API keys section.

  4. Use that API key when adding the OpenAI integration in Missive.

Why did AI suddenly stop working after it was working fine?

Your OpenAI credit balance may have hit $0. When credits run out, API requests fail and AI features stop working -- even if you have a credit card on file.

To fix it: add credits to your OpenAI account and enable auto-recharge in your OpenAI billing settings. Auto-recharge automatically tops up your balance when it falls below a threshold, so you don't have to manage it manually.

Why does Missive show "You exceeded your current quota" for OpenAI?

This error can appear when connecting the integration or any time you use an AI feature. It means OpenAI rejected the request due to a billing or quota issue on your OpenAI account.

Common causes:

  • Expired pre-purchased credits. OpenAI credits have an expiration date. If credits you bought have since expired, your balance will be $0 even if you've never used the integration. Check your balance at platform.openai.com/account/billing/overview. If credits have expired, add a new credit purchase. Enabling auto-recharge prevents this from happening again.

  • Wrong organization or project. If you have multiple organizations in your OpenAI account, the API key you're using in Missive may belong to one that doesn't have active billing configured. Make sure the key is from the organization with billing set up.

  • Monthly spend limit reached. OpenAI accounts have a configurable maximum monthly spend. If you've hit that cap, all requests fail until the billing cycle resets or you raise the limit.

Why does Missive show "The AI provider rate limit has been exceeded" for OpenAI?

This error means OpenAI is throttling requests because your account has exceeded its rate limit -- requests per minute or tokens per minute. The limit comes from OpenAI, not from Missive.

Having billing set up and a positive credit balance does not guarantee high rate limits. OpenAI enforces rate limits based on your usage tier, which is determined by how much you have paid in total over time -- not your current balance.

OpenAI's tier system:

Tier | Total paid | Account age
1    | $5         | Any
2    | $50        | 7+ days since first payment
3    | $100       | 7+ days since first payment
4    | $250       | 14+ days since first payment
5    | $1,000     | 30+ days since first payment

Each tier significantly increases your requests per minute (RPM) and tokens per minute (TPM). Long email threads can consume thousands of tokens per request, which exhausts low-tier limits quickly.

To raise your limits, pay more into your OpenAI account over time to advance to the next tier. Check your current tier and limits in the Limits section of your OpenAI organization settings.
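When a client does hit a 429 rate-limit response, the standard remedy is exponential backoff with jitter. This is a generic sketch of that pattern, not Missive's or OpenAI's implementation -- the `RateLimitError` class and function names are hypothetical:

```python
import random
import time

class RateLimitError(Exception):
    """Raised when the provider returns HTTP 429."""

def call_with_backoff(request_fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a provider call on rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus jitter to avoid synchronized retries.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

Backoff only smooths over brief throttling; if your tier's RPM/TPM limits are consistently too low for your usage, advancing to a higher tier is the real fix.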

Can I keep my OpenAI data in the EU or another region?

Yes. OpenAI offers data residency in the US, EU, and several other regions (Australia, Canada, India, Japan, Singapore, South Korea, UK, UAE). To route requests through OpenAI's EU infrastructure, set the API Base URL to https://eu.api.openai.com/v1 in the integration's Advanced settings.

Non-US regions require approval from OpenAI for abuse monitoring controls (Zero Data Retention or Modified Abuse Monitoring). Contact OpenAI's sales team to check eligibility. Supported regions and requirements change over time -- see OpenAI's data controls documentation for current details.

Does OpenAI data residency cover all my data?

Data residency covers customer content (prompts and responses) stored at rest. System data like metadata, billing, and usage analytics may still be processed outside your selected region. See OpenAI's data controls documentation for what's covered.

Anthropic Claude

Why does my Anthropic API key not work?

The most common cause: your Anthropic account doesn't have billing set up. API access requires prepaid credits or an active billing plan.

A Claude Pro subscription ($20/month) is not the same as API access. You need to add credits separately in the Anthropic Console.

How do I set up Anthropic billing?

Go to console.anthropic.com/settings/billing and add a payment method or purchase prepaid credits.

Why am I hitting rate limits with Anthropic Claude?

Rate limits (such as tokens per minute) are set by Anthropic based on your account's usage tier, not by Missive. Long email threads can consume thousands of input tokens per request, which can exhaust low-tier limits quickly.

To raise your limits, add prepaid credits to your Anthropic account. Anthropic automatically upgrades your usage tier once you've added sufficient credits -- even a small credit purchase ($5) moves your account to a higher tier with significantly raised limits.

Check your current tier and limits in the Anthropic Console.

Google Gemini

Which Google API key do I need?

You need a Google AI Studio API key from aistudio.google.com. This is different from a Google Cloud Vertex AI key.

Google AI Studio is the developer API for Gemini models. You don't need a Google Cloud Platform account or project.

Is there a free tier for Gemini?

Google AI Studio offers a free tier with rate limits. Check Google AI pricing for current limits and paid tier rates.

The free tier may be sufficient for small teams with light usage. For heavier usage or production workloads, you'll need a paid plan.
