AI FAQ

Common questions about AI features in Missive.

General

Which AI provider should I choose?

All three providers work well with Missive. Choose based on your preferences:

  • OpenAI. Widely used, broad model selection, good all-around performance.

  • Anthropic Claude. Strong at nuanced writing and careful reasoning.

  • Google Gemini. Competitive pricing, large context windows, free tier available.

You can connect multiple providers and switch between them at any time.

Can I use multiple providers at the same time?

Yes. Connect as many providers as you want. The assistant lets you pick which model to use per conversation, and you can configure each prompt and AI rule to use a specific provider.

Will AI providers use my data to train models?

No. All three providers state that data submitted through their APIs is not used to train models.

Missive sends content to your provider only during active AI processing. No data is stored beyond the immediate request.

How do I monitor AI costs?

Each provider has its own usage dashboard where you can track API spending.

Set budget limits in your provider's account to prevent unexpected charges.

Why isn't a recently released AI model available yet?

Not every newly announced model is immediately available in Missive. When a provider releases a model in preview or beta, it often comes with significantly lower rate limits and can produce errors under normal usage. We wait until a model is production-ready before offering it.

Once a model is stable and performs reliably at the rate limits real-world usage requires, we add it. We actively monitor new releases from all supported providers.

Using AI

How is quoted email history handled when sending context to AI?

It depends on the context type you're using.

Conversation context (@Current conversation): Missive strips quoted history from all messages except the first. The first message keeps its quoted content to preserve the original context; every subsequent message has quotes removed, since that content is already present earlier in the thread. This means you won't burn extra tokens on repeated quoted history in long threads.

Message context (@Current message): The full message body is sent as-is, including any quoted history below the reply. Quotes are not stripped.
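The conversation-context behavior can be sketched roughly as follows. This is a simplified illustration, not Missive's actual implementation: `strip_quotes` here looks for a naive "On … wrote:" marker, whereas real quote detection is considerably smarter.

```python
def strip_quotes(body: str) -> str:
    """Naive quote stripper: drop everything from the first
    quoted-history marker onward (real detection is smarter)."""
    marker = "\nOn "  # e.g. "\nOn Mon, Jan 6, 2025 ... wrote:"
    idx = body.find(marker)
    return body if idx == -1 else body[:idx].rstrip()

def build_conversation_context(messages: list[str]) -> list[str]:
    """Keep quoted history on the first message only; strip it from
    every later message, since that text already appears earlier
    in the thread."""
    return [
        body if i == 0 else strip_quotes(body)
        for i, body in enumerate(messages)
    ]

thread = [
    "Hi, can you send the invoice?",
    "Sure, attached.\nOn Mon, Jan 6 A wrote:\n> Hi, can you send the invoice?",
    "Thanks!\nOn Mon, Jan 6 B wrote:\n> Sure, attached.",
]
context = build_conversation_context(thread)
```

Only the first message keeps its quoted block; the later replies are reduced to their new content.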

How does Missive handle very long conversation threads?

For very long threads, Missive truncates content to fit within the model's context window. You don't need to worry about a 100-email thread exceeding limits -- Missive manages this automatically.
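One common way to do this kind of truncation is to keep the most recent messages and drop the oldest until the thread fits a token budget. The sketch below illustrates that idea only; the chars-per-token heuristic and the drop-oldest strategy are assumptions, not Missive's documented behavior.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def fit_to_context(messages: list[str], budget_tokens: int) -> list[str]:
    """Walk the thread newest-first, keeping messages until the
    token budget is spent, then restore chronological order."""
    kept: list[str] = []
    used = 0
    for body in reversed(messages):  # newest first
        cost = estimate_tokens(body)
        if used + cost > budget_tokens:
            break
        kept.append(body)
        used += cost
    return list(reversed(kept))

thread = ["old " * 50, "mid " * 50, "newest reply"]
fitted = fit_to_context(thread, budget_tokens=60)
```

With a 60-token budget, the oldest message is dropped and the two most recent survive.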

Why does a built-in prompt (Summarize, Translate, etc.) always use the same model?

For built-in prompts, Missive controls the prompt text sent to AI providers and, for each provider, selects the model that works best with that prompt. Because the prompt text is fixed, the model is fixed too.

You can't change the model for built-in prompts. To use a different model, create a custom prompt with the same instructions and select your preferred model there.

Why isn't web search available in the AI assistant?

Web search is not available for any AI model in the assistant. It's something we'd like to add eventually, but there's an important security reason we're being cautious about it.

The risk is prompt injection. Consider this scenario: you receive an email with hidden text (white text on a white background) containing carefully crafted instructions targeting the AI. That hidden text could instruct the AI to make a web request to a specific URL and include sensitive information from your conversation -- calendar details, contact info, email content -- as URL parameters, leaking it to a third party.

The AI would detect and refuse this most of the time, but the risk is real enough that we've chosen not to enable web access for now. A web search tool gives the AI the ability to make outbound requests, and that's exactly the mechanism an attacker would need to exfiltrate data.

We're exploring mitigation techniques and plan to address this in the future once we're confident we can do it securely.

How does replying using canned responses work?

When you reference canned responses in a prompt (using @Responses or @All responses), Missive searches your canned responses using semantic (concept-based) matching rather than keywords. The most relevant matches are sent to your AI provider as context, and the AI drafts a reply based on both the email and your canned responses.

Because the search is concept-based, it works across languages. A canned response written in German about invoices can still match an English customer email asking about their invoice -- the AI understands the underlying concept, not just the literal words.
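The cross-language matching comes from comparing embeddings (numeric vectors capturing meaning) rather than words. Below is a toy sketch of that idea: the vectors are hand-made stand-ins, whereas a real system would embed each text with a provider's embedding model.

```python
from math import sqrt

# Toy embedding vectors standing in for a real embedding model;
# dimensions here loosely mean (invoices, politeness, scheduling).
EMBEDDINGS = {
    "Rechnung: Ihre Rechnung finden Sie im Anhang.": [0.9, 0.1, 0.0],  # German, invoices
    "Our office hours are 9-5, Monday to Friday.":   [0.0, 0.2, 0.9],  # office hours
    "Where can I find my invoice?":                  [0.8, 0.2, 0.1],  # incoming email
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means same direction (same concept)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm

def best_match(query: str, responses: list[str]) -> str:
    q = EMBEDDINGS[query]
    return max(responses, key=lambda r: cosine(EMBEDDINGS[r], q))

responses = [
    "Rechnung: Ihre Rechnung finden Sie im Anhang.",
    "Our office hours are 9-5, Monday to Friday.",
]
match = best_match("Where can I find my invoice?", responses)
```

The English invoice question matches the German invoice response, because their vectors point the same way even though they share no words.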

How do canned responses affect token usage in prompts?

It depends on how you reference them.

Individual canned responses (@Response name): The full text body of each referenced response is added directly to the prompt. Token cost is predictable -- it's the character count of each response you attach.

@All responses: This does not add every canned response to the prompt. Instead, Missive runs a semantic search against your canned responses and passes up to 20 of the best matches to the AI. Token cost varies: a generic prompt or a library of very similar responses will return more matches and use more tokens; a specific prompt against a diverse library will return fewer matches and use fewer.

If token cost is a concern, referencing specific responses is more predictable than @All responses.
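The cost difference can be made concrete with a rough estimate. The chars-per-token heuristic below is an approximation; the cap of 20 matches comes from the behavior described above.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def individual_cost(attached: list[str]) -> int:
    """@Response name: every attached response is sent in full,
    so cost is simply the sum of their sizes -- predictable."""
    return sum(estimate_tokens(r) for r in attached)

def all_responses_cost(matches: list[str]) -> int:
    """@All responses: only the semantic-search matches are sent,
    capped at 20 -- cost varies with how many responses match."""
    return sum(estimate_tokens(r) for r in matches[:20])

library = [f"Canned response number {i} with some body text." for i in range(100)]

fixed = individual_cost(library[:2])       # two specific responses
broad = all_responses_cost(library[:30])   # a broad query matching many
```

Attaching two specific responses costs a fixed ~22 tokens here, while a generic query that matches many responses sends the 20-match maximum, roughly ten times more.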

How do I make the AI respond in another language?

Create a custom prompt and save it for reuse. For example, an instruction like "Summarize this conversation in French, regardless of the language of the incoming messages." Or for replies: "Draft a reply in French that matches the tone of the original email."

You can also use the built-in Translate prompt from the draft toolbar, which supports multiple languages.

How do I monitor cost usage per user?

Add one integration per user, each with their own API key. Each user's usage bills to their own provider account.

In Missive, go to Settings > Integrations > Add Integration, choose your provider, and add each key separately.

After adding a new AI provider, do my existing rules update automatically?

No. AI rules are configured with a specific provider and model. Adding a new integration doesn't change existing rules.

After connecting a new provider, open each AI rule and update the provider and model selection to use the new integration.

OpenAI

Why does the OpenAI integration show a billing setup error?

You need OpenAI API access, not ChatGPT Plus ($20/month). These are different products.

  1. Sign in at platform.openai.com and add a payment method in the billing settings

  2. Create an API key

  3. Use that API key when adding the OpenAI integration in Missive

Why did AI suddenly stop working after it was working fine?

Your OpenAI credit balance may have hit $0. When credits run out, API requests fail and AI features stop working -- even if you have a credit card on file.

To fix it: add credits to your OpenAI account and enable auto-recharge in your OpenAI billing settings. Auto-recharge automatically tops up your balance when it falls below a threshold, so you don't have to manage it manually.

Anthropic Claude

Why does my Anthropic API key not work?

The most common cause: your Anthropic account doesn't have billing set up. API access requires prepaid credits or an active billing plan.

A Claude Pro subscription ($20/month) is not the same as API access. You need to add credits separately in the Anthropic Console.

How do I set up Anthropic billing?

Go to console.anthropic.com/settings/billing and add a payment method or purchase prepaid credits.

Why am I hitting rate limits with Anthropic Claude?

Rate limits (such as tokens per minute) are set by Anthropic based on your account's usage tier, not by Missive. Long email threads can consume thousands of input tokens per request, which can exhaust low-tier limits quickly.

To raise your limits, add prepaid credits to your Anthropic account. Anthropic automatically upgrades your usage tier once you've added sufficient credits -- even a small credit purchase ($5) moves your account to a higher tier with significantly raised limits.

Check your current tier and limits in the Anthropic Console.
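Rate-limit errors are transient: a request rejected with a 429 will usually succeed if retried after a short wait, which is why these errors often clear up on their own. The standard pattern for handling them is exponential backoff, sketched generically below (this is not Missive's internal retry behavior; the request here is simulated).

```python
import time

class RateLimitError(Exception):
    """Stand-in for a provider's HTTP 429 response."""

def call_with_backoff(request, max_retries: int = 5, base_delay: float = 1.0):
    """Retry a rate-limited call, doubling the wait each time."""
    delay = base_delay
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(delay)
            delay *= 2  # 1s, 2s, 4s, ...

# Simulated request that is rate-limited twice, then succeeds.
attempts = {"n": 0}
def fake_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429: tokens-per-minute limit hit")
    return "draft reply text"

result = call_with_backoff(fake_request, base_delay=0.01)
```

Two short waits are enough here for the simulated per-minute window to clear and the third attempt to go through.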

Google Gemini

Which Google API key do I need?

You need a Google AI Studio API key from aistudio.google.com. This is different from a Google Cloud Vertex AI key.

Google AI Studio is the developer API for Gemini models. You don't need a Google Cloud Platform account or project.

Is there a free tier for Gemini?

Google AI Studio offers a free tier with rate limits. Check Google AI pricing for current limits and paid tier rates.

The free tier may be sufficient for small teams with light usage. For heavier usage or production workloads, you'll need a paid plan.
