AI model view and security classifications
Define hierarchical security classes and assign AI models to each class
Plan for handling different data types and internal communication about security
Discussion of organizational requirements
Review and finalize DPAs as needed
Build a foundation of trust through clear security classifications that give everyone confidence in where and how AI can be used
Provide safe ground to start innovating by showing exactly what data can be used where, enabling your teams to say "yes" to AI
Create different "rooms" for different use cases - work safely with public data in one space, sensitive data in another
Configure the framework once as admins, so your users don't have to think about compliance - it's built into the platform
Different processes have different requirements for functionality, integrations, and security. In Intric, you can create separate spaces that function as distinct rooms, and security classes define the rules that apply in each room.
Highest class at the top: Sweden → EU → Open. Models permitted in a higher class are automatically available in lower classes, but not vice versa.
As admins, we set the rules. Your users simply work in the right room for their task - compliance is automatic.
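The inheritance rule above can be sketched in a few lines of Python. This is an illustrative model only, not Intric's actual API: the class names, model names, and assignments are hypothetical, and the one assumption carried over from the text is that a model registered at a higher class is automatically available to spaces in every lower class.

```python
from enum import IntEnum

# Hypothetical hierarchy matching the example above (Sweden > EU > Open).
# Names and assignments are illustrative, not Intric's real configuration.
class SecurityClass(IntEnum):
    OPEN = 1    # lowest class: open/public data only
    EU = 2      # data may be processed within the EU
    SWEDEN = 3  # highest class: data stays in Sweden

# Each model is registered at the (highest) class it is permitted in.
MODEL_ASSIGNMENTS = {
    "model-hosted-in-sweden": SecurityClass.SWEDEN,
    "model-hosted-in-eu": SecurityClass.EU,
    "public-cloud-model": SecurityClass.OPEN,
}

def available_models(space_class: SecurityClass) -> list[str]:
    """Models permitted in a higher class are automatically available
    in lower classes, but not vice versa."""
    return [name for name, cls in MODEL_ASSIGNMENTS.items()
            if cls >= space_class]
```

With these illustrative assignments, a space classified "Sweden" sees only the Sweden-approved model, while an "Open" space can use all three: the strictest room has the fewest models, and compliance follows from the room the user works in.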
Think both short-term and long-term:
If you only allow "Open information", make sure your security class label clearly informs users about this restriction (e.g., "Open Data Only").
The goal isn't to block AI use - it's to provide clear, safe paths forward. Give IT and Legal the tools to say "yes" to specific use cases.
Intric provides comprehensive DPA guidance and supporting documentation to make legal review straightforward.
Security classifications are your primary risk control - they ensure data is only processed by appropriate AI models in appropriate contexts.