AI FAQ
Common questions about AI features in Missive.
General
Which AI provider should I choose?
All three providers work well with Missive. Choose based on your preferences:
OpenAI. Widely used, broad model selection, good all-around performance.
Anthropic Claude. Strong at nuanced writing and careful reasoning.
Google Gemini. Competitive pricing, large context windows, free tier available.
You can connect multiple providers and switch between them at any time.
Can I use multiple providers at the same time?
Yes. Connect as many providers as you want. The assistant lets you pick which model to use per conversation, and you can configure each prompt and AI rule to use a specific provider.
Will AI providers use my data to train models?
No. All three providers state that data submitted through their APIs is not used to train their models.
Missive sends content to your provider only during active AI processing. No data is stored beyond the immediate request.
How do I monitor AI costs?
Each provider has its own usage dashboard:
OpenAI: platform.openai.com/usage
Anthropic: console.anthropic.com/settings/usage
Google AI: aistudio.google.com (usage section)
Set budget limits in your provider's account to prevent unexpected charges.
Why isn't a recently released AI model available yet?
Not every newly announced model is immediately available in Missive. When a provider releases a model in preview or beta, it often comes with significantly lower rate limits and can produce errors under normal usage. We wait until a model is production-ready before offering it.
Once a model is stable and performs reliably at the rate limits real-world usage requires, we add it. We actively monitor new releases from all supported providers.
Using AI
How is quoted email history handled when sending context to AI?
It depends on the context type you're using.
Conversation context (@Current conversation): Missive strips quoted history from all messages except the first. The first message keeps its quoted content to preserve the original context; every subsequent message has quotes removed, since that content is already present earlier in the thread. This means you won't burn extra tokens on repeated quoted history in long threads.
Message context (@Current message): The full message body is sent as-is, including any quoted history below the reply. Quotes are not stripped.
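The stripping rule above can be sketched as follows. This is an illustrative helper, not Missive's actual code; the message structure (`body` plus an optional `quoted` part) is an assumption for the example:

```python
def build_conversation_context(messages):
    """Keep quoted history only on the first message of a thread.

    Each message is a dict with a 'body' and an optional 'quoted' part
    (the reply history below the signature). Quotes on later messages
    are dropped because that text already appears earlier in the thread.
    """
    parts = []
    for i, msg in enumerate(messages):
        if i == 0:
            # The first message keeps its quoted content to preserve
            # the original context of the conversation.
            parts.append(msg["body"] + "\n" + msg.get("quoted", ""))
        else:
            # Later messages send only their own body.
            parts.append(msg["body"])
    return "\n---\n".join(parts)
```

The payoff is that a long thread pays for each message's text once, not once per reply that quotes it.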
How does Missive handle very long conversation threads?
For very long threads, Missive truncates content to fit within the model's context window. You don't need to worry about a 100-email thread exceeding limits -- Missive manages this automatically.
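Missive's exact truncation strategy isn't documented; a common approach is to keep the first message plus as many recent messages as the budget allows. A minimal sketch, using a rough 4-characters-per-token heuristic rather than a real tokenizer:

```python
def truncate_to_budget(messages, max_tokens,
                       count_tokens=lambda s: len(s) // 4):
    """Fit a thread into a token budget.

    Keeps the first message (the original context), then walks from the
    newest message backward, keeping what still fits. Older middle
    messages are the first to be dropped.
    """
    if not messages:
        return []
    kept_head = [messages[0]]
    budget = max_tokens - count_tokens(messages[0])
    tail = []
    for msg in reversed(messages[1:]):
        cost = count_tokens(msg)
        if cost > budget:
            break  # everything older than this is dropped
        tail.append(msg)
        budget -= cost
    return kept_head + list(reversed(tail))
```

With this shape, a 100-email thread degrades gracefully: the model always sees how the conversation started and how it most recently left off.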
Why does a built-in prompt (Summarize, Translate, etc.) always use the same model?
Missive controls the prompt text sent to AI providers for built-in prompts, and selects the model that works best with those prompts per provider. Because the prompt text is fixed, the model is fixed too.
You can't change the model for built-in prompts. To use a different model, create a custom prompt with the same instructions and select your preferred model there.
Why isn't web search available in the AI assistant?
Web search is not available for any AI model in the assistant. It's something we'd like to add eventually, but there's an important security reason we're being cautious about it.
The risk is prompt injection. Consider this scenario: you receive an email with hidden text (white text on a white background) containing carefully crafted instructions targeting the AI. That hidden text could instruct the AI to make a web request to a specific URL and include sensitive information from your conversation -- calendar details, contact info, email content -- as URL parameters, leaking it to a third party.
The AI would detect and refuse this most of the time, but the risk is real enough that we've chosen not to enable web access for now. A web search tool gives the AI the ability to make outbound requests, and that's exactly the mechanism an attacker would need to exfiltrate data.
We're exploring mitigation techniques and plan to address this in the future once we're confident we can do it securely.
How does replying using canned responses work?
When you reference canned responses in a prompt (using @Responses or @All responses), Missive searches your canned responses using semantic (concept-based) matching rather than keywords. The most relevant matches are sent to your AI provider as context, and the AI drafts a reply based on both the email and your canned responses.
Because the search is concept-based, it works across languages. A canned response written in German about invoices can still match an English customer email asking about their invoice -- the AI understands the underlying concept, not just the literal words.
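Conceptually, semantic matching ranks responses by how close their embedding vectors are to the incoming email's vector. A toy sketch with hand-made vectors (in practice the vectors come from an embedding model, which is what makes cross-language matching work):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_matches(query_vec, response_vecs, top_k=3, min_score=0.5):
    """Return the names of the closest canned responses.

    `response_vecs` maps response name -> embedding vector. A German
    invoice response and an English invoice question land near each
    other in the same embedding space, so language doesn't matter.
    """
    scored = sorted(
        ((cosine(query_vec, vec), name)
         for name, vec in response_vecs.items()),
        reverse=True,
    )
    return [name for score, name in scored[:top_k] if score >= min_score]
```

The `top_k` and `min_score` values here are placeholders, not Missive's actual thresholds.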
How do canned responses affect token usage in prompts?
It depends on how you reference them.
Individual canned responses (@Response name): The full text body of each referenced response is added directly to the prompt. Token cost is predictable -- it's the character count of each response you attach.
@All responses: This does not add every canned response to the prompt. Instead, Missive runs a semantic search against your canned responses and passes up to 20 of the best matches to the AI. Token cost varies: a generic prompt or a library of very similar responses will return more matches and use more tokens; a specific prompt against a diverse library will return fewer matches and use fewer tokens.
If token cost is a concern, referencing specific responses is more predictable than @All responses.
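For the predictable case (individually attached responses), you can estimate cost yourself. A rough sketch using the common 4-characters-per-token rule of thumb for English text (real tokenizers vary by model and language):

```python
def estimate_prompt_tokens(prompt, attached_responses, chars_per_token=4):
    """Rough token estimate for a prompt plus attached canned responses.

    `attached_responses` is a list of response bodies referenced with
    @Response name. The estimate is character count divided by an
    assumed characters-per-token ratio.
    """
    total_chars = len(prompt) + sum(len(r) for r in attached_responses)
    return total_chars // chars_per_token
```

An @All responses reference can't be estimated this way, since the number and size of matches depend on the search results.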
How do I make the AI respond in another language?
Create a custom prompt and save it for reuse. For example: "Summarize this conversation in French."
Or for replies: "Write the reply in French, matching the tone of the incoming email."
You can also use the built-in Translate prompt from the draft toolbar, which supports multiple languages.
How do I monitor AI costs per user?
Add one integration per user, each with their own API key. Each user's usage bills to their own provider account.
In Missive, go to Settings > Integrations > Add Integration, choose your provider, and add each key separately.
After adding a new AI provider, do my existing rules update automatically?
No. AI rules are configured with a specific provider and model. Adding a new integration doesn't change existing rules.
After connecting a new provider, open each AI rule and update the provider and model selection to use the new integration.
OpenAI
Why does the OpenAI integration show a billing setup error?
You need OpenAI API access, not ChatGPT Plus ($20/month). These are different products.
Add billing at platform.openai.com/account/billing/overview
Create an API key at platform.openai.com/account/api-keys
Use that API key when adding the OpenAI integration in Missive
Why did AI suddenly stop working after it was working fine?
Your OpenAI credit balance may have hit $0. When credits run out, API requests fail and AI features stop working -- even if you have a credit card on file.
To fix it: add credits to your OpenAI account and enable auto-recharge in your OpenAI billing settings. Auto-recharge automatically tops up your balance when it falls below a threshold, so you don't have to manage it manually.
Anthropic Claude
Why does my Anthropic API key not work?
The most common cause: your Anthropic account doesn't have billing set up. API access requires prepaid credits or an active billing plan.
A Claude Pro subscription ($20/month) is not the same as API access. You need to add credits separately in the Anthropic Console.
How do I set up Anthropic billing?
Go to console.anthropic.com/settings/billing and add a payment method or purchase prepaid credits.
Why am I hitting rate limits with Anthropic Claude?
Rate limits (such as tokens per minute) are set by Anthropic based on your account's usage tier, not by Missive. Long email threads can consume thousands of input tokens per request, which can exhaust low-tier limits quickly.
To raise your limits, add prepaid credits to your Anthropic account. Anthropic automatically upgrades your usage tier once you've added sufficient credits -- even a small credit purchase ($5) moves your account to a higher tier with significantly raised limits.
Check your current tier and limits in the Anthropic Console.
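If you hit rate limits in your own tooling that calls the same API key, retrying with exponential backoff is the standard remedy. A minimal sketch; `request` and `is_rate_limited` are placeholders to adapt to whichever client library you use:

```python
import time

def call_with_backoff(request, max_retries=5, base_delay=1.0,
                      is_rate_limited=lambda e: getattr(e, "status", None) == 429):
    """Retry a rate-limited call with exponential backoff.

    `request` is a zero-argument callable that performs the API call.
    Delays double on each retry: base_delay, 2x, 4x, ...
    Non-rate-limit errors, and the final failed attempt, are re-raised.
    """
    for attempt in range(max_retries):
        try:
            return request()
        except Exception as exc:
            if not is_rate_limited(exc) or attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

Note this only smooths over per-minute limits; it won't help if your account tier is simply too low for your sustained usage.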
Google Gemini
Which Google API key do I need?
You need a Google AI Studio API key from aistudio.google.com. This is different from a Google Cloud Vertex AI key.
Google AI Studio is the developer API for Gemini models. You don't need a Google Cloud Platform account or project.
Is there a free tier for Gemini?
Google AI Studio offers a free tier with rate limits. Check Google AI pricing for current limits and paid tier rates.
The free tier may be sufficient for small teams with light usage. For heavier usage or production workloads, you'll need a paid plan.