AI Act
With Intric’s platform, you have considerable flexibility and room for collaboration. Municipalities can decide, either case by case or once and for all, how much control they want over how an assistant is used and shared.
To ensure compliance with the EU AI Act, two fundamental assessments are required for each assistant you create:
- Which risk class does the assistant fall into?
- What is the municipality’s role?
Based on these two assessments, the municipality receives a clear list of obligations. The obligations aim to protect the health, safety and fundamental rights of residents.
Step by step: How to assess your assistant
Step 1 — Describe the use case of your assistant
Start by mapping exactly what the assistant will do in practice.
Questions to ask:
- What is the purpose of the assistant?
- Who are the end users (caseworkers, teachers, residents)?
- What types of data will it process?
Step 2 — Assess the risk class of the use case
Based on the description, assess which risk class the assistant falls under according to the regulation.
Categories:
- High-risk: For example case handling, access to welfare services, and recruitment.
- High-risk, but meets exception criteria: Performs only a limited, procedural task (e.g. quality assurance of plain language requirements in a document) without making independent decisions.
- Not high-risk: Use with minimal risk due to clear limitations (e.g. simple rule lookups).
Step 3 — Determine your role
Your role depends on whether you share the assistant outside your own organisation, which in turn determines where in the “AI value chain” your responsibility lies.
Do you share the assistant with another organisation?
- No → Deployer: You build and use the assistant exclusively internally.
- Yes → Downstream provider: You give another municipality direct system access to an assistant you have built (simply sharing a prompt in “Assistant Library” does not count as this).
Step 4 — Map obligations according to risk class and role
Based on the choices in Step 2 and Step 3, you now know exactly which requirements apply to you. Go to the checklist that fits your situation:
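The two assessments can be sketched as a simple lookup. This is an illustrative sketch only: the function and category names are our own shorthand for the scenarios in this guide, not part of the Intric platform or the AI Act’s legal text, and the exception case is mapped to Scenario A on the assumption that a qualifying exception takes the assistant out of the high-risk category.

```python
# Illustrative sketch only: names and the mapping below are shorthand
# for the scenarios in this guide, not legal advice.

def scenario(risk_class: str, role: str) -> str:
    """Map risk class ('high-risk', 'high-risk-exception', 'not-high-risk')
    and role ('deployer', 'downstream-provider') to a checklist scenario."""
    if role == "deployer":
        if risk_class == "high-risk":
            return "B"  # Deployer + High-risk
        # Assumption: a qualifying exception is treated like not high-risk.
        return "A"      # Deployer + Not high-risk (incl. exception cases)
    if role == "downstream-provider" and risk_class == "high-risk":
        return "C"      # Downstream provider + High-risk
    raise ValueError(f"combination not covered here: {risk_class}, {role}")
```

For example, an internally used recruitment-screening assistant would land in Scenario B, while the same assistant shared with a neighbouring municipality would land in Scenario C.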
1. Risk class: What is the assistant used for?
The risk class is determined by the purpose of the AI assistant. The AI Act divides use into different categories:
| Category | Description and typical municipal areas | Important clarifications and exceptions |
|---|---|---|
| High-risk | Use in education, employment and recruitment, access to essential services (incl. welfare), and democratic processes. | Exception: If the assistant performs a limited, procedural task that does not pose a risk to health, safety or fundamental rights, an exception can be claimed. |
| Not high-risk | Use that poses minimal risk. For example tasks that perform a purely limited procedure, or that are not intended to replace or significantly affect a human assessment. | |
| Prohibited use | Systems with unacceptable risk to fundamental rights (e.g. deliberate manipulation, social scoring, assessment of a person’s risk of committing a crime). | Largely irrelevant for municipalities. |
Exception: How to assess risk in practice
Exceptions are very common for municipal use of AI assistants. Even if an assistant is used in an area originally defined as high-risk (such as education or welfare), the use can qualify for an exception from the strictest requirements. This applies particularly if the assistant:
- Is not intended to replace or influence a human assessment.
- Is designed to perform a limited, procedural task.
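The two criteria above can be expressed as a simple check. The flag names are illustrative inputs to the assessment, not a formal legal test.

```python
# Illustrative only: a reminder of the two exception criteria,
# not a formal legal test.

def may_qualify_for_exception(replaces_human_assessment: bool,
                              limited_procedural_task: bool) -> bool:
    """True if the assistant neither replaces/influences a human
    assessment nor goes beyond a limited, procedural task."""
    return not replaces_human_assessment and limited_procedural_task
```

Both conditions must hold: a limited task that nonetheless steers a human decision does not qualify.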
To illustrate that there are many safe and highly useful use cases that are not classified as high-risk, here are two typical examples from municipal day-to-day work:
Example 1: Quality assurance of case documents (Human assessment retained)
An adviser uses an AI assistant to quality-assure case documents before they are sent to the city council. The AI assistant refers to relevant templates and gives feedback on the documents: it checks that previous case reference numbers are cited in the right place, that the content meets minimum requirements, and that the writing style meets plain language requirements.
- Why is this an exception? The AI assistant is only used to improve the quality of work the caseworkers have already done. The adviser reads through the feedback, makes their own assessments, and manually edits the documents before sending. The assistant neither replaces the human assessment nor makes any independent decisions.
Example 2: Rule lookups for schools and after-school programmes (Limited and procedural)
A school department head builds an AI assistant based on the municipality’s own admission regulations, enrolment rules and priority criteria. The purpose is for colleagues to easily look up answers to procedural questions: What documentation is required when applying for an after-school place? What are the deadlines? How are the school catchment zone boundaries defined?
- Why is this an exception? Even though this concerns “education”, the AI assistant only performs a limited, procedural task — explaining rules already published on the municipality’s website. It makes no assessment of an individual child or a specific case. The employee takes the information, assesses it, and writes a reply to the parents.
2. Your role: Do you share the assistant?
Why does your role matter?
The AI Act is largely designed as a product safety regulation. This means that responsibility and obligations are distributed based on where the organisation sits in the AI value chain. This is a deliberate distinction to place responsibility where it can best be managed:
- Providers bear the heaviest responsibility and must document that the AI system itself is safe before it is made available (design, technical documentation, testing, CE marking).
- Deployers have a responsibility directed at the specific use of the system in everyday operations (ensuring human oversight, following instructions for use, and safeguarding residents’ rights).
Your role under the AI Act is determined primarily by whether you share the assistant outside your own organisation:
- Deployer: You are a deployer if your municipality uses its assistants exclusively internally.
- Downstream provider: You become a downstream provider if your municipality shares a pre-built assistant with other municipalities. (You then take on a “Provider” responsibility for what you share.)
- Note: By “sharing” we mean giving another municipality access to an assistant you have built, which they have not built themselves on the platform. If another municipality builds their own assistant based on your instructions/prompts (for example via “Assistant Library”), this does not count as sharing the system, and you remain a deployer.
3. Your obligations
Your obligations are determined by the combination of risk class (how the assistant is used) and role (whether it is shared).
Here are the three most common scenarios for municipalities. The lists below describe the specific requirements you must meet before and during use.
Scenario A: Deployer + Not high-risk
This applies to municipalities that use their own assistants exclusively internally for low-risk tasks.
Example of use: The municipality has created an assistant that helps the HR department draft job postings based on existing templates, or an assistant that summarises long public reports for municipal leadership.
These are your obligations (checklist):
- AI literacy requirement: The municipality must ensure sufficient AI knowledge (“AI literacy”) among users. Employees must have enough competence to use the platform safely and make informed decisions about the content Intric generates.
Scenario B: Deployer + High-risk
This applies to municipalities that use assistants internally in areas defined as high-risk.
Example of use: The municipality uses an AI assistant as part of an automated case handling process to calculate the allocation of social welfare benefits, or to sort and assess candidates in a recruitment process where it makes or significantly influences decisions about individuals.
These are your obligations (checklist):
- Human oversight: Ensure real human oversight by people with the right authority and competence. A human must always have the final say.
- Notification of employees and unions: Inform employee representatives and employees before the system is put into use in the workplace.
- Notification of residents: Affected persons (e.g. applicants) must be informed that they are subject to automated decision-making or that AI is being used as decision support.
- Correct input data: Ensure that only relevant and appropriate data (input) is fed into the assistant.
- Use in accordance with instructions: The assistant must be used exclusively in accordance with the instructions for use provided by the supplier (Intric).
- Monitoring and logging: Monitor the use of the system and log events systematically.
- Registration with authorities: The system must be registered in a public register at the Norwegian Communications Authority (Nkom).
- Privacy (DPIA): Carry out a Data Protection Impact Assessment (DPIA) if the solution processes personal data.
Scenario C: Downstream provider + High-risk
This applies to municipalities that have built an assistant for a high-risk area and choose to share actual system access with other organisations or municipalities.
Example of use: The municipality has built an advanced AI tool for assessing building permits. Instead of just sharing the recipe (the prompt), they give a neighbouring municipality direct system access to the assistant through an inter-municipal collaboration. The municipality thus “delivers” the system onwards.
These are your obligations (checklist):
- Quality management: Establish and maintain a formal quality management system for the AI assistant.
- Risk assessments: Carry out and document formal risk assessments before and during sharing.
- Data governance: Establish strict routines for data management and data quality (“Data governance”).
- Registration with authorities: Register the assistant as a high-risk system at Nkom.
- Monitoring and logging: Implement systematic monitoring and logging of how the system performs and is used by the other municipalities.
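As a recap, the three checklists can be kept as a simple lookup table. The wording is abridged from the scenarios above, and the keys are our own shorthand, not terms from the AI Act.

```python
# Abridged recap of the three scenario checklists; shorthand only,
# not an authoritative statement of the obligations.

OBLIGATIONS = {
    ("not-high-risk", "deployer"): [            # Scenario A
        "AI literacy requirement",
    ],
    ("high-risk", "deployer"): [                # Scenario B
        "Human oversight",
        "Notification of employees and unions",
        "Notification of residents",
        "Correct input data",
        "Use in accordance with instructions",
        "Monitoring and logging",
        "Registration with authorities (Nkom)",
        "Privacy (DPIA)",
    ],
    ("high-risk", "downstream-provider"): [     # Scenario C
        "Quality management",
        "Risk assessments",
        "Data governance",
        "Registration with authorities (Nkom)",
        "Monitoring and logging",
    ],
}
```

A municipality could keep such a table next to its assistant register and look up `OBLIGATIONS[(risk_class, role)]` as part of the assessment in Step 4.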