MCP Server
Transparency is central to us at Intric. When an assistant uses an external tool via MCP (Model Context Protocol), specific processes are in place to ensure you have full visibility into what data leaves the platform and where it goes.
The process from your question to a response that includes tool results occurs in an exchange between the Intric platform, the language model selected for the assistant, the MCP server, and the data source(s) the tool uses. Intric always acts as the intermediary — the language model never contacts the MCP server directly.
Step-by-step: How your data is handled
All transfers between Intric and its sub-processors occur over secure, encrypted connections.
Step 1 — User interacts with Intric in the browser
The user writes a message to an assistant that has one or more MCP tools configured.
Data sent to Intric’s server:
- The user’s message
- Chat history
- Any attached files
Step 2 — Intric calls the assistant’s selected language model
In this step, relevant content from Intric’s servers is sent to the assistant’s selected language model.
What happens at the language model: The model determines that a tool should be used.
Reasoning (example): “The user wants to use information available at Eurostat; I need to retrieve it first.”
Response: The model sends a request to Intric to use the Eurostat tool, with a suggested search query.
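The model's decision in this step can be sketched as a structured reply. The field names below follow common chat-completion APIs and the tool name `eurostat_search` is a hypothetical illustration; the exact schema depends on the provider the assistant is configured with.

```python
# Hypothetical shape of the model's reply when it decides to use a tool.
# Field names and the tool name are illustrative, not Intric's actual schema.
model_response = {
    "role": "assistant",
    "content": None,  # no user-facing text yet; the model wants a tool result first
    "tool_calls": [
        {
            "id": "call_001",           # hypothetical call identifier
            "name": "eurostat_search",  # hypothetical tool name
            "arguments": {"query": "EU unemployment rate 2024"},
        }
    ],
}

# Intric's backend reads the suggested call before contacting the MCP server.
call = model_response["tool_calls"][0]
print(call["name"], call["arguments"]["query"])
```

Note that the model only *suggests* the call; nothing has left the platform yet at this point.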
Step 3 — Intric calls the MCP server
Intric’s server receives the response from the language model with a request to call the tool. Intric verifies that the tool exists and that the call is permitted, then makes a call to the MCP server with what the language model has suggested.
Data sent from Intric’s server:
- Tool name and tool arguments (generated by the LLM)
What data is sent depends on the arguments the LLM generated — these are based on the user’s question and the tool’s description. The LLM never contacts the MCP server directly; all communication goes through Intric’s backend.
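MCP uses JSON-RPC 2.0 with a `tools/call` method for tool invocation. A minimal sketch of the verification and the outgoing request, with a hypothetical allowlist standing in for the assistant's tool configuration:

```python
import json

# Sketch of the JSON-RPC 2.0 request a backend sends to an MCP server.
# "tools/call" is the method MCP defines for tool invocation; the tool
# name and allowlist below are hypothetical, not Intric's configuration.
ALLOWED_TOOLS = {"eurostat_search"}  # tools configured for this assistant

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    # Verify the tool exists and the call is permitted before forwarding (step 3).
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not configured for this assistant")
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

print(build_tool_call("eurostat_search", {"query": "EU unemployment rate 2024"}))
```

Only the tool name and the LLM-generated arguments appear in the payload; nothing else from the conversation is forwarded.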
Step 4 — MCP server retrieves data from the data source
The MCP server receives the request and calls the data source(s) the tool is connected to in order to retrieve the needed information. The data source can be, for example, a database, an API, or a document archive.
What happens:
- The MCP server sends a request to the data source with the required parameters.
- The data source returns a response to the MCP server.
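On the server side, a tool handler translates the incoming arguments into a data-source query. The sketch below stubs the data source as a dict; a real MCP server would query a database, an API, or a document archive, as described above.

```python
# Sketch of an MCP server's tool handler: map the incoming arguments onto
# a data-source query and return the raw result. The data source is a
# stubbed dict here; its key and value are placeholders, not real data.
DATA_SOURCE = {
    "EU unemployment": "…example rows returned by the data source…",
}

def handle_tool_call(arguments: dict) -> str:
    query = arguments["query"]
    # The data source returns a response to the MCP server (step 4).
    return DATA_SOURCE.get(query, "no results found")

print(handle_tool_call({"query": "EU unemployment"}))
```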
Step 5 — Response to Intric from the MCP server
The MCP server processes the response from the data source and sends the result back to Intric’s server.
Data sent from the MCP server to Intric:
- The result from the data source query
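A successful `tools/call` result in MCP carries a list of content items plus an `isError` flag. The text payload below is a placeholder:

```python
# Shape of a successful MCP "tools/call" result: a list of content items
# and an isError flag. The text is a placeholder for the real query result.
mcp_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "…result from the data source query…"}
        ],
        "isError": False,
    },
}

# Intric would extract the text content before the second call to the LLM.
texts = [c["text"] for c in mcp_result["result"]["content"] if c["type"] == "text"]
print(texts[0])
```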
Step 6 — Intric processes the response with the language model
Intric sends the tool result and the required context to the assistant’s selected language model so the model can formulate a response to the user based on both the original question and what was retrieved via the MCP server.
Data sent from Intric’s server to the LLM (second call):
- The original prompt and conversation history
- The tool result
What happens at the language model: The model processes the result and sends a response back to Intric.
Both the call to the language model and the response back to Intric are shown in the same step in the diagram.
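The second call can be sketched as the original conversation plus the tool result appended as a new message. The roles and field names follow common chat APIs and are provider-specific; the actual request format depends on the assistant's selected model.

```python
# Sketch of the second LLM call (step 6): original conversation plus the
# tool result, so the model can answer from both. Roles and field names
# follow common chat APIs and are illustrative, not Intric's exact format.
messages = [
    {"role": "user", "content": "What does Eurostat say about unemployment?"},
    {"role": "assistant",
     "tool_calls": [{"id": "call_001", "name": "eurostat_search",
                     "arguments": {"query": "unemployment"}}]},
    {"role": "tool", "tool_call_id": "call_001",
     "content": "…result from the data source query…"},
]

# A real backend would now call the assistant's configured model with
# these messages and return the generated answer to the user (step 7).
print(len(messages), "messages in the second call")
```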
Step 7 — User sees the response in the browser
The response is displayed to the user in Intric in the browser.
Data stored on Intric’s servers:
- The generated response and the history from the user’s interaction with the assistant (according to the assistant’s deletion settings)
- Metadata about tool calls and results in the conversation history
Data sharing and privacy
To protect your and your organization’s privacy, we apply the principle of data minimization: each sub-processor receives only the content strictly necessary to perform the task, and no user identity ever leaves your infrastructure.
When using MCP tools there are two separate privacy considerations, because data is sent to both the LLM and the MCP server. Intric applies data minimization in both cases, but it is important to understand that data privacy during MCP calls depends on where the MCP server is hosted.
In the table below, you can see exactly what data is sent to each external service and what is kept completely private.
| Sent to external service | Not sent to external service |
|---|---|
| To the LLM: the user’s message, chat history, and any attached files | User identity and account details |
| To the MCP server: tool name and tool arguments generated by the LLM | Conversation content not needed for the specific tool call |
If the MCP server is a third-party service, data sent as tool arguments may leave your jurisdiction. What is actually sent depends on the arguments the LLM generates, which are based on the user’s question and the tool’s parameter definition.
Data retention and deletion
Intric stores conversation history — including tool calls and their results — in the same way as other assistant interactions. Storage happens on Intric’s servers in Sweden.
- Conversation history including tool calls and results is governed by the assistant’s configured deletion settings
- The MCP server controls its own storage — Intric has no control over what the external service logs or saves
- The LLM provider is bound by zero-data-retention clauses in Intric’s contracts and stores neither prompts nor responses
Metadata about how users interact with assistants is stored for a longer period and is available to administrators.
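The retention rule above could be sketched as a simple expiry check. The retention field and timestamps are hypothetical illustrations of how an assistant's deletion settings might govern cleanup, not Intric's actual implementation.

```python
from datetime import datetime, timedelta, timezone

# Sketch of how an assistant's deletion settings could govern history
# cleanup. The retention_days parameter is a hypothetical stand-in for
# the assistant's configured deletion settings.
def is_expired(last_activity: datetime, retention_days: int, now: datetime) -> bool:
    # Conversations (including tool calls and results) older than the
    # configured retention window are eligible for deletion.
    return now - last_activity > timedelta(days=retention_days)

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
old = datetime(2025, 1, 1, tzinfo=timezone.utc)
print(is_expired(old, retention_days=90, now=now))  # True: older than 90 days
```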