Step-by-step guide
This help center article walks you through the DPIA Assistant — an interactive tool that guides you step by step through a data protection impact assessment (DPIA) before you introduce an AI service in your organization.
The texts below are used as support panels in the help center, one per step. Each panel explains why the step is carried out and what to consider. In each step, the assistant asks targeted questions, automatically pulls in relevant information from legislation and Intric documentation, and compiles your answers into a DPIA document that fits your organization’s template.
Use the step selector below to work through the process at your own pace.
Introduction and template
Choose a template for the final document and upload any internal policy documents, such as an AI policy or data protection policy. The assistant uses them as context in the legal assessments later on. If you do not have your own policy documents, pick one of the built-in template options and continue without them.
The assistant:
- Reads in the template structure and identifies all fields to be completed
- Asks whether the municipality (or your organization) has internal policies linked to GDPR or AI
- Saves the template and policy documents as reference for the entire assessment process
📌 Example scenario: The examples in this guide are taken from a DPIA on AI-assisted analysis and summarisation of consultation responses, carried out by a municipality using Intric.
Purpose
The purpose is the starting point for the entire DPIA. Be concrete in your description: “AI assistant for teachers” is too vague; “AI assistant that helps teachers draft individual feedback on pupil texts” sets you up properly. Three classification questions then control how thorough the assessment needs to be in the remaining steps.
The assistant:
- Asks for a concrete description of what the AI service should do and for whom
- Asks three classification questions about personal data, impact on individuals, and internal vs external use
- Searches relevant legislation to verify that the purpose is lawful
✏️ Example:
The municipality uses an AI assistant built on Intric to support case officers in analysing and summarising incoming consultation responses. The service is an internal support tool in the case-handling process.
- Personal data processed: Yes
- Affects individual decisions: No
- Internal use
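Conceptually, the three classification answers act as a gate that sets how thorough the remaining steps need to be. The sketch below illustrates that idea in Python; the depth levels and the rule itself are illustrative assumptions, not the assistant's documented logic:

```python
def assessment_depth(personal_data: bool, affects_decisions: bool, external_use: bool) -> str:
    """Map the three classification answers to an assessment depth (illustrative only)."""
    if not personal_data:
        return "basic"      # no personal data: lightweight documentation suffices
    if affects_decisions or external_use:
        return "full"       # higher impact: full DPIA with every step in depth
    return "standard"       # internal support tool that processes personal data

# The example scenario: personal data yes, no individual decisions, internal use
print(assessment_depth(True, False, False))  # prints "standard"
```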
Content and technology
Here you map which personal data is processed, in which systems, and how data flows. State which Intric features are enabled, for example web search or image analysis. Remember that users may still enter sensitive information in free text even if the system is not designed for that.
Read more about platform architecture and data flows per feature in Intric’s documentation.
The assistant:
- Maps processing activities, data sources, and categories of personal data present
- Automatically pulls platform and data-flow information from Intric’s documentation
- Flags warnings for sensitive data, vulnerable data subjects, and third-country transfers
✏️ Example:
Systems involved:
- ACOS Websak — case records and primary storage
- Intric — AI assistant (GleSys AB, Sweden)
- Word — final summary
Features enabled in Intric: tools to search legislation, web search.
Sensitive consultation responses (approx. 1 in 100) are handled manually and are not part of the AI flow.
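The system mapping above can be captured as a simple data structure. The sketch below shows the kind of record this step produces; the field names are our own assumptions for illustration, not the assistant's schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingActivity:
    """One system in the data flow (illustrative record, not the assistant's format)."""
    system: str
    role: str
    data_categories: list = field(default_factory=list)

activities = [
    ProcessingActivity("ACOS Websak", "case records and primary storage",
                       ["name", "address", "opinions"]),
    ProcessingActivity("Intric", "AI assistant (GleSys AB, Sweden)",
                       ["consultation response text"]),
    ProcessingActivity("Word", "final summary", []),
]

# A simple completeness check: every system in the flow is accounted for
assert {a.system for a in activities} == {"ACOS Websak", "Intric", "Word"}
```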
Classification and risk
The assistant performs an automatic risk assessment based on steps 1 and 2, and you confirm that it is correct. Pay particular attention to AI-specific risks such as incorrect outputs (hallucination) and biased results. A tool that is formally only a support aid can quickly become decisive if users rely on it too much.
The assistant:
- Classifies the information’s protection needs across three dimensions: confidentiality, integrity, and availability
- Assesses AI-specific factors such as level of autonomy and impact on individuals
- Produces a risk matrix with likelihood and consequence for each identified risk
✏️ Example:
- Confidentiality: 2 — Consultation responses are public records but contain names, addresses, and opinions
- Autonomy: Support — AI generates proposals; all decisions are taken by the case officer
- Hallucination risk: Likelihood Medium / Impact Low
- Overall risk level: Medium
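A risk matrix of this kind typically scores each risk as likelihood times consequence. The sketch below shows one common scoring scheme; the numeric levels and thresholds are illustrative assumptions, not the assistant's actual formula:

```python
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def risk_score(likelihood: str, consequence: str) -> int:
    """Likelihood x consequence on a 1-3 scale (illustrative scheme)."""
    return LEVELS[likelihood] * LEVELS[consequence]

def risk_level(score: int) -> str:
    """Map a score back to a qualitative level (thresholds are assumptions)."""
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# The hallucination risk from the example: likelihood Medium, impact Low
print(risk_level(risk_score("Medium", "Low")))  # prints "Low"
```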
Legal assessment
The assistant automatically searches the GDPR and relevant national legislation to identify a suitable legal basis. Your task is to confirm that the assessment matches how you actually work. Check that the basis covers all personal data and the entire use case. If special-category data under Article 9 is processed, a separate exemption is also required.
The assistant:
- Automatically searches the GDPR and relevant national laws to propose a suitable legal basis
- Checks that the basis covers all purposes, categories of data, and operations
- Flags whether processing requires an exemption under GDPR Art. 9
✏️ Example:
- Legal basis: GDPR Art. 6(1)(e) — processing is necessary for the performance of a task carried out in the public interest
- Supplementary basis (Swedish municipal example): Förvaltningslagen (Administrative Procedure Act) 37 § (consultation obligation) and Plan- och bygglagen (Planning and Building Act) Ch. 5 § 2
- Article 9: not expected to occur in the AI flow
- The mailroom’s control procedure ensures that sensitive documents are not made public and never enter the AI flow
Data protection principles
The assistant works through each principle in GDPR Article 5 and proposes measures based on your description and the legislation. For each principle, the assistant pulls relevant information on how Intric works — for example how data retention, access control, and purpose limitation are implemented in the platform — and maps this to GDPR requirements. Add your organization’s context so the proposed measures are accurate.
The assistant:
- Automatically pulls information on how Intric fulfills each data protection principle, for example how data retention, access control, and purpose limitation are implemented in the platform
- Maps Intric’s features to GDPR requirements and proposes complementary measures for your organization
- Draws on the GDPR text to justify the assessment for each principle
✏️ Example:
- Transparency: The municipality updates the consultation letter template with: “Consultation responses will be processed with AI tools to produce a summary. Submissions containing sensitive personal data have been removed from AI processing.”
- Purpose limitation: Intric’s sub-processors never use customer data for AI training and apply zero data retention.
- Data minimisation: Names and addresses are kept only where geographic relevance is objectively justified.
Security and access control
Responsibility is shared between your organization and Intric. Both sets of measures need to be in place.
Your organization’s measures
Check that you have documented procedures for incident management and log follow-up, and that relevant training is completed before go-live.
Intric’s measures
Intric’s technical safeguards such as encryption, logging, and certifications are pulled automatically from Intric’s documentation. Your task is to confirm that they are sufficient for your risk level.
The assistant:
- Automatically pulls Intric’s technical and organisational measures (TOMs) from Intric’s documentation
- Documents the municipality’s (or your organization’s) own measures based on your answers
- Checks that the level of protection is proportionate to the risk level from step 3
✏️ Example:
Your organization’s measures:
- MFA enabled via the municipality’s AD/Entra ID
- SSO integration with Intric
- RBAC configured — only relevant case officers have access
- Mandatory training (AI literacy and practical use) completed
Intric’s measures:
- ISO 27001:2022 certified
- TLS 1.2+ in transit, industry-standard encryption at rest
- Zero data retention with all AI model providers — prompts deleted immediately after the response
Deployment and configuration
Turn the assessment into actual settings. A common risk is going live with default settings that do not match the risk assessment — for example, web search left on or retention periods never configured.
Your organization’s measures
Ensure a responsible person is appointed, the DPO has approved, users are trained, and the deletion routine has been tested.
Features available in Intric
Configure RBAC, retention periods, logging, and AI features in line with your assessment. See Intric’s guide to secure configuration for a full go-live checklist.
The assistant:
- Checks that configuration matches the requirements from steps 3 to 5
- Completes the go-live checklist and identifies any gaps
- Verifies retention, RBAC, logging, and AI features against the assessment
✏️ Example:
- Retention rule: 120 days, automatic and permanent deletion
- AI model providers: immediate deletion (zero data retention)
- RBAC configured
- Insights disabled (verified)
- Data processing agreement with Intric AB signed and filed before go-live
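A configuration check of this kind amounts to comparing the intended settings against what the assessment requires and surfacing any mismatch. In the sketch below, the keys and values are assumptions for illustration, not Intric's configuration API:

```python
# Settings the DPIA requires (illustrative names, based on the example above)
required = {
    "retention_days": 120,    # automatic, permanent deletion after 120 days
    "rbac_enabled": True,     # only relevant case officers have access
    "insights_enabled": False,
}

# Settings as actually configured before go-live
actual = {
    "retention_days": None,   # never configured: a typical go-live gap
    "rbac_enabled": True,
    "insights_enabled": False,
}

# Any key whose actual value deviates from the requirement is a gap
gaps = {k: (required[k], actual.get(k)) for k in required if actual.get(k) != required[k]}
print(gaps)  # prints {'retention_days': (120, None)}
```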
Review
The assessment is checked by two independent AI reviewers — one legal and one technical — who do not see each other’s findings. If gaps are found, you go back and complete the relevant step. Final responsibility always rests with your organization and the data protection officer.
The assistant:
- Sends the assessment to two independent AI reviewers (legal and technical) who work without seeing each other’s conclusions
- Produces an overall decision: Approved / Not approved / Requires completion
- Specifies which steps need to be completed when gaps are found
✏️ Example:
The review identified three open items before go-live:
- Model choice and third-country assessment not clarified
- Legal clarification needed on whether AI working material constitutes a public record
- Insights must be verified as disabled on the assistant
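The three possible outcomes can be expressed as a simple decision rule. The sketch below is an illustrative assumption about how the two reviewer verdicts and any open items might combine, not the assistant's documented logic:

```python
def overall_decision(legal_ok: bool, technical_ok: bool, open_items: int) -> str:
    """Combine the two independent reviews into one outcome (illustrative rule)."""
    if legal_ok and technical_ok and open_items == 0:
        return "Approved"
    if open_items > 0:
        return "Requires completion"
    return "Not approved"

# The example review: both reviewers broadly positive, but three open items remain
print(overall_decision(True, True, 3))  # prints "Requires completion"
```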
Final document
The assistant fills in the chosen template with all information from the process. Check that no fields are empty and that the action plan has been communicated to those responsible. If the service changes materially — new model, new purpose, or expanded data — the DPIA must be redone.
The assistant:
- Fills in the chosen template with all information collected in the previous steps
- Marks fields that require manual completion
- Generates an action plan with open items for follow-up
✏️ Example:
Processing may go live after:
- ☐ Data processing agreement concluded with Intric AB
- ☐ Consultation letter template updated with information on AI use
- ☐ Model choice clarified, including third-country assessment
- ☐ DPO comments obtained