Assistant
Transparency is central to us at Intric. When you interact with an assistant in our platform, specific processes are in place to ensure your privacy and data sovereignty.
The process from your question to a finished answer occurs through a secure interaction between the Intric platform (where your data is managed) and the LLM provider you have selected (e.g. Berget or Airon).
Step-by-step: How your data is handled
All transfers between Intric and its sub-processors occur over secure, encrypted connections.
Step 1 — User interacts with Intric in the browser
The user writes a message to an assistant (the prompt input).
Data sent to Intric’s server:
- The user’s message
- Chat history
- Any attached files
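The three items above could be modeled as a single request payload. The following is a minimal sketch with assumed field names, not Intric's actual API schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChatRequest:
    """Illustrative shape of the data a browser session sends to Intric's
    server in Step 1. Field names are assumptions for this sketch, not
    Intric's actual API schema."""
    message: str                                            # the user's message
    history: list[dict] = field(default_factory=list)       # prior chat turns
    attachments: list[bytes] = field(default_factory=list)  # any attached files

request = ChatRequest(message="Summarize the attached report")
```

Everything in this payload stays within Intric's infrastructure until the processing described in the next steps decides what, if anything, is forwarded.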
Step 2 — Intric processes the user’s message
The message content is processed by Intric before anything is forwarded to the language model. In this step, Intric collects relevant context, adds instructions, and removes all identifying metadata before sending it on.
Nothing leaves Intric’s server at this stage — all processing happens internally on the platform before any outgoing call to the LLM.
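A minimal sketch of the metadata-stripping part of this step, assuming a dictionary payload and hypothetical field names (the exact fields Intric removes are not specified in this document):

```python
# Hypothetical identifying fields; the real set Intric strips is not listed here.
IDENTIFYING_FIELDS = {"user_id", "email", "organization_id", "ip_address"}

def strip_identifying_metadata(payload: dict) -> dict:
    """Return a copy of the payload with identifying metadata removed."""
    return {key: value for key, value in payload.items()
            if key not in IDENTIFYING_FIELDS}

raw = {"message": "What is our leave policy?", "user_id": "u-123"}
clean = strip_identifying_metadata(raw)
# clean == {"message": "What is our leave policy?"}
```

Only the sanitized result would be eligible for the outgoing call described in Step 3.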
Step 3 — Intric calls the assistant’s selected language model
In this step, relevant content from Intric’s servers is sent to the assistant’s selected language model.
Data sent from Intric’s server:
- The prompt content (the user’s message)
- Relevant information retrieved from attachments, Knowledge, or Tools
What happens at the language model: A response is generated based solely on the information provided. The language model receives no information about the user’s or the organization’s identity.
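The outgoing call in this step can be sketched as follows. The payload builder and message format are assumptions modeled on common chat-completion APIs, not Intric's actual implementation:

```python
def build_llm_payload(prompt: str, context_chunks: list[str]) -> dict:
    """Assemble only the prompt and retrieved context for the outgoing call.

    No identity fields are included, mirroring the data-minimization
    principle: the model sees content, never who sent it.
    """
    return {
        "messages": [
            {"role": "system", "content": "\n\n".join(context_chunks)},
            {"role": "user", "content": prompt},
        ]
    }

payload = build_llm_payload("Summarize Q3", ["Chunk retrieved from Knowledge."])
assert "user_id" not in payload  # identity never leaves Intric's server
```

The design point is that the payload is constructed from scratch per request, so nothing can leak into it except what is explicitly passed in.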
Step 4 — Response
The language model’s response is sent to Intric’s server, which receives and stores the information encrypted in its database.
Data sent from the language model to Intric:
- The language model’s generated response
Immediately after the response is sent back to Intric, both the user’s input and the generated response are deleted from the language model’s server.
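This deletion guarantee can be pictured with a toy provider model. It is purely conceptual, standing in for contractual zero-retention behavior, not real provider code:

```python
class MockLLMProvider:
    """Toy model of a zero-data-retention provider (conceptual only)."""

    def __init__(self) -> None:
        self._in_flight: dict[str, str] = {}  # held only while generating

    def generate(self, request_id: str, prompt: str) -> str:
        self._in_flight[request_id] = prompt
        response = f"Answer to: {prompt}"  # stand-in for model inference
        del self._in_flight[request_id]    # nothing persists after the reply
        return response

provider = MockLLMProvider()
answer = provider.generate("req-1", "What is Intric?")
assert provider._in_flight == {}  # no prompt or response retained
```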
Step 5 — User sees the response in the browser
The response is displayed to the user in Intric in the browser.
Data stored on Intric’s servers:
- The generated response and the history from the user’s interaction with the assistant
- Metadata about who sent the request
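The stored record could be sketched like this. Field names are illustrative assumptions, not Intric's database schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class StoredInteraction:
    """Illustrative record kept on Intric's servers after Step 5."""
    user_id: str      # metadata about who sent the request (never sent to the LLM)
    question: str     # the user's message
    answer: str       # the generated response
    created_at: datetime

record = StoredInteraction(
    user_id="u-123",
    question="What is Intric?",
    answer="An AI platform.",
    created_at=datetime.now(timezone.utc),
)
```

Note the asymmetry: the identity metadata that never reached the language model is stored alongside the conversation inside Intric, where access control applies.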
Data sharing and privacy
To protect your and your organization’s privacy, we apply the principle of data minimization: the sub-processor receives only the content strictly necessary to perform the task, and no user identity ever leaves your infrastructure.
We have strict zero data retention clauses in all our contracts with language model sub-processors. This guarantees that your prompts and the generated responses are never saved by the provider after the response is returned, nor is the information used to train their AI models.
In the table below, you can see exactly what data is sent to the sub-processor and what does not leave Intric’s servers.
| Sent to the language model | Not sent to the language model |
|---|---|
| The prompt content (the user’s message) | The user’s identity |
| Relevant information retrieved from attachments, Knowledge, or Tools | Metadata about who sent the request |
| | The organization’s identity |
Data retention and deletion
History of the user’s interactions with assistants in Intric is managed in two different places, depending on the type of assistant:
- For personal assistants, retention is configured by an Admin for the entire organization
- For assistants in Spaces, retention is configured by a Creator per assistant
Metadata about how users interact with assistants is stored for a longer period and is available to administrators.