# AI FAQ

## General

<details>

<summary>Which AI provider should I choose?</summary>

All three providers work well with Missive. Choose based on your preferences:

* **OpenAI.** Widely used, broad model selection, good all-around performance.
* **Anthropic Claude.** Strong at nuanced writing and careful reasoning.
* **Google Gemini.** Competitive pricing, large context windows, free tier available.

You can connect multiple providers and switch between them at any time.

</details>

<details>

<summary>Can I use multiple providers at the same time?</summary>

Yes. Connect as many providers as you want. The assistant lets you pick which model to use per conversation, and you can configure each prompt and AI rule to use a specific provider.

</details>

<details>

<summary>Will AI providers use my data to train models?</summary>

No. All three providers state that data submitted through their APIs is not used to train models:

* [OpenAI API data usage policy](https://openai.com/security)
* [Anthropic API data usage policy](https://www.anthropic.com/policies)
* [Google AI Studio data usage policy](https://ai.google.dev/terms)

Missive sends content to your provider only during active AI processing. No data is stored beyond the immediate request.

</details>

<details>

<summary>How do I monitor AI costs?</summary>

Each provider has its own usage dashboard:

* **OpenAI:** [platform.openai.com/usage](https://platform.openai.com/usage)
* **Anthropic:** [console.anthropic.com/settings/usage](https://console.anthropic.com/settings/usage)
* **Google AI:** [aistudio.google.com](https://aistudio.google.com/) (usage section)

Set budget limits in your provider's account to prevent unexpected charges.

</details>

<details>

<summary>Why isn't a recently released AI model available yet?</summary>

Not every newly announced model is immediately available in Missive. When a provider releases a model in preview or beta, it often comes with significantly lower rate limits and can produce errors under normal usage. We wait until a model is production-ready before offering it.

Once a model is stable and performs reliably at the rate limits real-world usage requires, we add it. We actively monitor new releases from all supported providers.

</details>

## Using AI

<details>

<summary>How is quoted email history handled when sending context to AI?</summary>

It depends on the context type you're using.

**Conversation context** (`@Current conversation`): Missive strips quoted history from all messages except the first. The first message keeps its quoted content to preserve the original context; every subsequent message has quotes removed, since that content is already present earlier in the thread. This means you won't burn extra tokens on repeated quoted history in long threads.

**Message context** (`@Current message`): The full message body is sent as-is, including any quoted history below the reply. Quotes are not stripped.

</details>

<details>

<summary>How does Missive handle very long conversation threads?</summary>

For very long threads, Missive truncates content to fit within the model's context window. You don't need to worry about a 100-email thread exceeding limits -- Missive manages this automatically.

</details>

<details>

<summary>Why does a built-in prompt (Summarize, Translate, etc.) always use the same model?</summary>

Missive controls the prompt text sent to AI providers for built-in prompts, and selects the model that works best with those prompts per provider. Because the prompt text is fixed, the model is fixed too.

You can't change the model for built-in prompts. To use a different model, create a [custom prompt](https://missiveapp.com/docs/ai/using-ai/prompts) with the same instructions and select your preferred model there.

</details>

<details>

<summary>Why isn't web search available in the AI assistant?</summary>

Web search is not available for any AI model in the assistant. It's something we'd like to add eventually, but there's an important security reason we're being cautious about it.

The risk is prompt injection. Consider this scenario: you receive an email with hidden text (white text on a white background) containing carefully crafted instructions targeting the AI. That hidden text could instruct the AI to make a web request to a specific URL and include sensitive information from your conversation -- calendar details, contact info, email content -- as URL parameters, leaking it to a third party.

The AI would detect and refuse this most of the time, but the risk is real enough that we've chosen not to enable web access for now. A web search tool gives the AI the ability to make outbound requests, and that's exactly the mechanism an attacker would need to exfiltrate data.

We're exploring mitigation techniques and plan to address this in the future once we're confident we can do it securely.

</details>

<details>

<summary>How does replying using canned responses work?</summary>

When you reference canned responses in a prompt (using `@Responses` or `@All responses`), Missive searches your canned responses using semantic (concept-based) matching rather than keywords. The most relevant matches are sent to your AI provider as context, and the AI drafts a reply based on both the email and your canned responses.

Because the search is concept-based, it works across languages. A canned response written in German about invoices can still match an English customer email asking about their invoice -- the AI understands the underlying concept, not just the literal words.

</details>

<details>

<summary>How do canned responses affect token usage in prompts?</summary>

It depends on how you reference them.

**Individual canned responses** (`@Response name`): The full text body of each referenced response is added directly to the prompt. Token cost is predictable -- it's the character count of each response you attach.

**`@All responses`**: This does not add every canned response to the prompt. Instead, Missive runs a semantic search against your canned responses and passes up to 20 of the best matches to the AI. Token cost varies: a generic prompt or a library of very similar responses will return more matches and use more tokens; a specific prompt against a diverse library will return fewer matches and use fewer.

If token cost is a concern, referencing specific responses is more predictable than `@All responses`.

</details>

<details>

<summary>How do I make the AI respond in another language?</summary>

Create a custom [prompt](https://missiveapp.com/docs/ai/using-ai/prompts) and save it for reuse. For example:

```
Translate @Current draft to French.
```

Or for replies:

```
Reply to @Current message in Spanish. Be helpful and professional.
```

You can also use the built-in **Translate** prompt from the draft toolbar, which supports multiple languages.

</details>

<details>

<summary>How do I monitor cost usage per user?</summary>

Add one integration per user, each with their own API key. Each user's usage bills to their own provider account.

In Missive, go to **Settings** > **Integrations** > **Add Integration**, choose your provider, and add each key separately.

</details>

<details>

<summary>After adding a new AI provider, do my existing rules update automatically?</summary>

No. AI rules are configured with a specific provider and model. Adding a new integration doesn't change existing rules.

After connecting a new provider, open each AI rule and update the provider and model selection to use the new integration.

</details>

## MCP

<details>

<summary>Does Missive offer an MCP server for external AI tools?</summary>

No. Missive does not currently offer an MCP server that external AI tools (such as Claude Desktop, Cursor, or other MCP clients) could connect to. There is no official Missive MCP server you can point those tools at.

What Missive does support is connecting *to* external MCP servers from within Missive's AI assistant -- for example, Notion, Linear, Stripe, or a custom MCP endpoint. See [MCP integrations](https://missiveapp.com/docs/ai/using-ai/mcp-integrations) for details.

If you need Missive exposed as an MCP server today, Relay has a [guide](https://docs.relay.app/app-specific-faqs/missive) on building one using Missive's REST API.

You can upvote the native MCP server feature request [here](https://feedback.missiveapp.com/feature-requests/p/model-context-protocol-mcp-for-missive).

</details>

## OpenAI

<details>

<summary>Why does the OpenAI integration show a billing setup error?</summary>

You need OpenAI **API access**, not ChatGPT Plus ($20/month). These are different products.

1. Add billing at [platform.openai.com/account/billing/overview](https://platform.openai.com/account/billing/overview)
2. Create an API key at [platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys)
3. Use that API key when adding the OpenAI integration in Missive

</details>

<details>

<summary>Why did AI suddenly stop working after it was working fine?</summary>

Your OpenAI credit balance may have hit $0. When credits run out, API requests fail and AI features stop working -- even if you have a credit card on file.

To fix it: add credits to your OpenAI account and enable **auto-recharge** in your OpenAI billing settings. Auto-recharge automatically tops up your balance when it falls below a threshold, so you don't have to manage it manually.

</details>

<details>

<summary>Why does Missive show "You exceeded your current quota" for OpenAI?</summary>

This error can appear when connecting the integration or any time you use an AI feature. It means OpenAI rejected the request due to a billing or quota issue on your OpenAI account.

Common causes:

* **Expired pre-purchased credits.** OpenAI credits have an expiration date. If credits you bought have since expired, your balance will be $0 even if you've never used the integration. Check your balance at **platform.openai.com/account/billing/overview**. If credits have expired, add a new credit purchase. Enabling **auto-recharge** prevents this from happening again.
* **Wrong organization or project.** If you have multiple organizations in your OpenAI account, the API key you're using in Missive may belong to one that doesn't have active billing configured. Make sure the key is from the organization with billing set up.
* **Monthly spend limit reached.** OpenAI accounts have a configurable maximum monthly spend. If you've hit that cap, all requests fail until the billing cycle resets or you raise the limit.

</details>

<details>

<summary>Can I keep my OpenAI data in the EU or another region?</summary>

Yes. OpenAI offers data residency in the US, EU, and several other regions (Australia, Canada, India, Japan, Singapore, South Korea, UK, UAE). To route requests through OpenAI's EU infrastructure, set the **API Base URL** to `https://eu.api.openai.com/v1` in the integration's **Advanced settings**.

Non-US regions require approval from OpenAI for abuse monitoring controls (Zero Data Retention or Modified Abuse Monitoring). Contact [OpenAI's sales team](https://openai.com/contact-sales) to check eligibility. Supported regions and requirements change over time -- see [OpenAI's data controls documentation](https://developers.openai.com/api/docs/guides/your-data) for current details.

</details>

<details>

<summary>Does OpenAI data residency cover all my data?</summary>

Data residency covers customer content (prompts and responses) stored at rest. System data like metadata, billing, and usage analytics may still be processed outside your selected region. See [OpenAI's data controls documentation](https://developers.openai.com/api/docs/guides/your-data) for what's covered.

</details>

## Anthropic Claude

<details>

<summary>Why does my Anthropic API key not work?</summary>

The most common cause: your Anthropic account doesn't have billing set up. API access requires prepaid credits or an active billing plan.

A **Claude Pro subscription** ($20/month) is not the same as API access. You need to add credits separately in the [Anthropic Console](https://console.anthropic.com/settings/billing).

</details>

<details>

<summary>How do I set up Anthropic billing?</summary>

Go to [console.anthropic.com/settings/billing](https://console.anthropic.com/settings/billing) and add a payment method or purchase prepaid credits.

</details>

<details>

<summary>Why am I hitting rate limits with Anthropic Claude?</summary>

Rate limits (such as tokens per minute) are set by Anthropic based on your account's usage tier, not by Missive. Long email threads can consume thousands of input tokens per request, which can exhaust low-tier limits quickly.

To raise your limits, add prepaid credits to your Anthropic account. Anthropic automatically upgrades your usage tier once you've added sufficient credits -- even a small credit purchase ($5) moves your account to a higher tier with significantly raised limits.

Check your current tier and limits in the [Anthropic Console](https://console.anthropic.com/settings/limits).

</details>

## Google Gemini

<details>

<summary>Which Google API key do I need?</summary>

You need a **Google AI Studio** API key from [aistudio.google.com](https://aistudio.google.com/). This is different from a Google Cloud Vertex AI key.

Google AI Studio is the developer API for Gemini models. You don't need a Google Cloud Platform account or project.

</details>

<details>

<summary>Is there a free tier for Gemini?</summary>

Google AI Studio offers a free tier with rate limits. Check [Google AI pricing](https://ai.google.dev/pricing) for current limits and paid tier rates.

The free tier may be sufficient for small teams with light usage. For heavier usage or production workloads, you'll need a paid plan.

</details>
