Your obligations
Your obligations are determined by the combination of risk class (how the assistant is used) and role (whether you only use the assistant yourself or also share it with others).
Here are the three most common scenarios for municipalities. The lists below describe the specific requirements you must meet before and during use.
Scenario A: Deployer + Not high-risk
This applies to municipalities that use internally developed assistants solely within their own organisation for low-risk tasks.
Example of use: The municipality has created an assistant that helps the HR department draft job postings based on existing templates, or an assistant that summarises long public reports for municipal leadership.
These are your obligations (checklist):
- AI literacy requirement: The municipality must ensure sufficient AI knowledge (“AI literacy”) among users. Employees must have enough competence to use the platform safely and make informed decisions about the content Intric generates.
Scenario B: Deployer + High-risk
This applies to municipalities that use assistants internally in areas defined as high-risk.
Example of use: The municipality uses an AI assistant as part of an automated case handling process to calculate the allocation of social welfare benefits, or to sort and assess candidates in a recruitment process where it makes or significantly influences decisions about individuals.
These are your obligations (checklist):
- Human oversight: Ensure real human oversight by people with the right authority and competence. A human must always have the final say.
- Notification of employees and unions: Inform employee representatives and employees before the system is put into use in the workplace.
- Notification of residents: Affected persons (e.g. applicants) must be informed that they are subject to automated decision-making or that AI is being used as decision support.
- Correct input data: Ensure that only relevant and appropriate data (input) is fed into the assistant.
- Use in accordance with instructions: The assistant must be used exclusively in accordance with the instructions for use provided by the supplier (Intric).
- Monitoring and logging: Monitor the use of the system and log events systematically.
- Registration with authorities: The system must be registered in a public register at the Norwegian Communications Authority (Nkom).
- Privacy (DPIA): Carry out a Data Protection Impact Assessment (DPIA) if the solution processes personal data.
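The monitoring and logging obligation above can be supported with simple structured audit records. Below is a minimal sketch in Python, assuming a hypothetical JSON-lines audit log; the `log_event` helper and its field names are illustrative assumptions, not part of Intric or any official standard:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record; field names are illustrative, not an Intric API.
def log_event(path, user_id, assistant_id, action, outcome):
    """Append one audit event as a JSON line: who did what, when, and with what result."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "assistant_id": assistant_id,
        "action": action,    # e.g. "query" or "decision_support"
        "outcome": outcome,  # e.g. "draft_generated" or "flagged_for_review"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record that an HR employee used an assistant as decision support.
log_event("audit.log", "hr-042", "benefits-assistant", "query", "draft_generated")
```

An append-only log like this gives each event a timestamp and an accountable user, which is the minimum needed to review how the system was used after the fact.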
Scenario C: Downstream provider + High-risk
This applies to municipalities that have built an assistant for a high-risk area and choose to share actual system access with other organisations or municipalities.
Example of use: The municipality has built an advanced AI tool for assessing building permit applications. Instead of just sharing the recipe (the prompt), it gives a neighbouring municipality direct system access to the assistant through an inter-municipal collaboration. The municipality thereby passes the system on and takes on provider obligations.
These are your obligations (checklist):
- Quality management: Establish and maintain a formal quality management system for the AI assistant.
- Risk assessments: Carry out and document formal risk assessments before and during sharing.
- Data governance: Establish strict routines for data management and data quality (“Data governance”).
- Registration with authorities: Register the assistant as a high-risk system at Nkom.
- Monitoring and logging: Implement systematic monitoring and logging of how the system performs and is used by the other municipalities.