Deployment Options

Intric offers three main deployment options to meet different requirements for security, data control and infrastructure. Whichever option you choose, all communication (both between the web server and the application, and with external services) is encrypted with SSL/TLS.

Multi-tenant Cloud

Multi-tenant is the most common deployment model for Intric, where resources are shared to deliver a stable and cost-effective service. Multiple customers share the application server and database, but their data is kept strictly apart through logical separation. The solution runs on virtual servers at Glesys AB, which is ISO 27001-certified, and all data is stored in their data centers.
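Logical separation in a shared database typically means that every row is tagged with a tenant identifier and every query is scoped to it. The sketch below illustrates the principle only; the table and column names are hypothetical and not Intric's actual schema.

```python
import sqlite3

# Illustrative sketch of logical tenant separation: customers share one
# table, but every row carries a tenant_id and every query filters on it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO documents VALUES (?, ?)",
    [("tenant_a", "Policy A"), ("tenant_b", "Policy B")],
)

def documents_for(tenant_id: str) -> list[str]:
    # The WHERE clause scopes the result to a single tenant, so one
    # customer can never see another customer's rows.
    rows = conn.execute(
        "SELECT title FROM documents WHERE tenant_id = ?", (tenant_id,)
    )
    return [title for (title,) in rows]

print(documents_for("tenant_a"))  # ['Policy A']
```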

This option suits organizations that want a convenient, maintenance-free solution without having to manage their own infrastructure.

Dedicated Instance Cloud

For organizations with higher isolation requirements that still want the advantages of the cloud, we offer a dedicated instance. Here, each customer is assigned its own fully isolated database, even though the server infrastructure is shared. This guarantees that your data is never physically mixed with other customers' data at the database layer, which provides an extra level of security. As with the multi-tenant option, the solution runs on virtual servers at Glesys AB.
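The difference from the multi-tenant model can be sketched as connection routing: each customer is directed to its own database, so there is no shared table to filter. This is a conceptual illustration under assumed names, not Intric's actual implementation (in-memory SQLite databases stand in for per-customer databases).

```python
import sqlite3

# Hypothetical sketch of physical separation at the database layer:
# each tenant is routed to its own database instead of a shared one.
databases: dict[str, sqlite3.Connection] = {}

def db_for(tenant_id: str) -> sqlite3.Connection:
    # Lazily create one database per tenant (":memory:" here stands in
    # for a real, dedicated database instance).
    if tenant_id not in databases:
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE documents (title TEXT)")
        databases[tenant_id] = conn
    return databases[tenant_id]

db_for("customer_a").execute("INSERT INTO documents VALUES ('Policy A')")
# customer_b's database contains nothing: the tables are physically distinct.
rows = db_for("customer_b").execute("SELECT title FROM documents").fetchall()
print(rows)  # []
```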

On-prem Managed

On-prem Managed is the option for organizations with very strict security requirements, where policy requires data to remain within the organization's own IT environment. In this case, Intric is installed and run locally on your own servers, and the platform is by default reachable only from your internal network. This gives you full control over both hardware and data flows. External services are reached only if you explicitly open your firewall for them, following the instructions in the section on Language Models.
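Opening the firewall for a specific external service amounts to maintaining an egress allowlist: traffic to approved hosts passes, everything else is blocked. The sketch below models that policy in application terms; the allowed host is an assumption for illustration, and the actual rules live in your firewall, per the Language Models section.

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist, mirroring the firewall openings an
# on-prem installation might make for selected language-model providers.
ALLOWED_HOSTS = {"api.openai.com"}  # assumption: a host you chose to allow

def egress_permitted(url: str) -> bool:
    # Permit outbound traffic only to explicitly allowlisted hosts.
    return urlparse(url).hostname in ALLOWED_HOSTS

print(egress_permitted("https://api.openai.com/v1/chat/completions"))  # True
print(egress_permitted("https://example.com/"))  # False
```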

As a complement to this option, local GPUs can also be installed, allowing the language models themselves to run locally in your data center. This minimizes, and in some cases completely eliminates, the need for external API calls, creating a closed environment for maximum security and privacy.
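In practice, running models on local GPUs means pointing the platform's model client at an endpoint inside your own network instead of an external provider. The sketch below is purely illustrative: both URLs and the configuration flag are hypothetical, not Intric settings.

```python
# Illustrative sketch: with local GPUs installed, inference traffic can be
# routed to an in-datacenter model server so it never leaves the network.
def model_base_url(local_gpu: bool) -> str:
    if local_gpu:
        # Hypothetical address of a model server in your own data center.
        return "http://llm.internal:8000/v1"
    # Otherwise an external, SSL/TLS-encrypted provider endpoint is used
    # (hypothetical URL), subject to the firewall openings described above.
    return "https://api.example-provider.com/v1"

print(model_base_url(local_gpu=True))
```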