AI Compliance Under the EU AI Act: How to Get Ready
Insight · March 03, 2026 · 8 min read
Written by: Walma


The EU AI Act introduces new requirements for organisations that develop or use AI. Here we walk through what compliance means in practice – from risk classification and high-risk requirements to connections with GDPR and NIS2.

Tags: AI, EU AI Act, compliance, high-risk AI, AI regulation, GDPR, NIS2

With the EU AI Act (Regulation (EU) 2024/1689) in place, AI compliance has become a concrete reality for organisations across Europe. The regulation began its phased application during 2025, and in 2026 the most extensive requirements take effect – particularly for high-risk AI systems.

This article provides a practical walkthrough of what compliance means, which requirements apply, and how your organisation can prepare.

What Does AI Compliance Mean?

AI compliance is about ensuring that AI systems developed, deployed, or used within the EU meet the requirements set by the AI Act. This involves:

  • Risk classification – identifying whether your AI system falls under unacceptable, high, limited, or minimal risk.
  • Documentation and transparency – being able to demonstrate how the system works, what data it was trained on, and what decisions it makes.
  • Human oversight – ensuring that human control and intervention are possible.
  • Data quality and governance – ensuring that training and test data meet quality standards.

Risk Classification: The Foundation of Your Obligations

The AI Act is built on a risk-based model with four levels:

Unacceptable Risk (Prohibited)

Certain AI uses have been completely prohibited since February 2025. These include:

  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces (with limited exceptions)
  • AI that exploits vulnerabilities of specific groups
  • Subliminal manipulation that may cause harm

High Risk

High-risk AI systems face the most extensive requirements. This includes systems used in:

  • Critical infrastructure – energy, transport, water
  • Education – admissions, grading
  • Employment – recruitment, dismissal, performance evaluation
  • Law enforcement – risk assessment, evidence analysis
  • Migration and border control – application processing, risk profiling

Requirements include risk management systems, data quality standards, technical documentation, logging, transparency information, human oversight, and cybersecurity.

Limited Risk

Systems with limited risk primarily face transparency obligations – for example, chatbots and deepfakes must be clearly labelled so users know they are interacting with AI or that the content is AI-generated.

Minimal Risk

Most AI systems fall under minimal risk and have no specific requirements, although voluntary codes of conduct are encouraged.
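As a rough illustration, the four tiers can drive an internal triage routine that routes each system to the right review queue. This is a hypothetical sketch, not the legal test: the actual classification must follow the prohibitions in Art. 5 and the high-risk list in Annex III, and the `CATEGORY_TIERS` lookup below is invented for the example.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative first-pass mapping from internal use-case category to
# risk tier. This only routes systems to the right review queue; the
# legal classification must follow Art. 5 and Annex III of the Act.
CATEGORY_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment": RiskTier.HIGH,
    "critical_infrastructure": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(category: str) -> RiskTier:
    """Return a provisional tier; default to HIGH so that unknown
    categories get a manual legal review rather than slipping through."""
    return CATEGORY_TIERS.get(category, RiskTier.HIGH)
```

Defaulting unknown categories to HIGH is a deliberate fail-safe: under-classifying a system is far costlier than an unnecessary review.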

High-Risk AI: The Key Requirements in Practice

If your system is classified as high-risk, you need to meet a number of requirements that become fully applicable in August 2026:

1. Risk Management System (Art. 9)

A risk management system must be established, implemented, and documented throughout the system's lifecycle. It must identify and analyse known and foreseeable risks, and evaluate risks that may arise during intended use and reasonably foreseeable misuse.

2. Data Quality (Art. 10)

Training, validation, and test data must meet quality criteria regarding relevance, representativeness, accuracy, and completeness. Bias in data must be identified and addressed.
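As one illustration of the bias-identification part, a simple representativeness check can surface groups that are underrepresented in a dataset. This is a minimal sketch assuming rows are dicts with a `group` key; real Art. 10 data governance involves far more than group counts.

```python
from collections import Counter

def group_shares(rows, key="group"):
    """Share of each group value in the dataset (illustrative check)."""
    counts = Counter(r[key] for r in rows)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def underrepresented(rows, key="group", min_share=0.1):
    """Groups whose share falls below a chosen threshold, for review."""
    return sorted(g for g, s in group_shares(rows, key).items() if s < min_share)
```

The threshold `min_share` is an assumption of this sketch; what counts as adequately representative depends on the system's intended purpose and must be documented.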

3. Technical Documentation (Art. 11)

Documentation must be drawn up before the system is placed on the market and kept up to date. It must provide sufficient information for authorities to assess the system's conformity.

4. Logging (Art. 12)

High-risk AI systems must have automatic logging that enables traceability. Logs must be retained for an appropriate period.

5. Transparency and Information (Art. 13)

Users must receive clear instructions covering the system's capabilities, limitations, and intended purpose, enabling them to interpret its output correctly.

6. Human Oversight (Art. 14)

The system must be designed so that it can be effectively overseen by humans. There must be the ability to intervene, interrupt, or correct.

7. Cybersecurity (Art. 15)

The system must achieve an appropriate level of accuracy, robustness, and cybersecurity with regard to its intended purpose, and perform consistently throughout its lifecycle.

The Connection to GDPR and NIS2

AI compliance does not exist in a vacuum. Two other key regulatory frameworks have a direct impact:

GDPR

AI systems that process personal data must continue to comply with GDPR. This includes:

  • Legal basis for data processing (e.g. consent, legitimate interest)
  • Data Protection Impact Assessment (DPIA) for high-risk processing
  • Safeguards for solely automated decision-making, including meaningful information about the logic involved (Art. 22 GDPR)
  • Privacy by design in system development

NIS2

Organisations covered by the NIS2 Directive already have obligations around cybersecurity and incident reporting. The AI Act's cybersecurity requirements for high-risk AI overlap with NIS2, creating an opportunity for coordinated compliance.

Timeline: When Does What Apply?

  • February 2025 – Prohibition on unacceptable-risk AI takes effect
  • August 2025 – Rules for general-purpose AI (GPAI) models apply
  • August 2026 – Full requirements for high-risk AI systems take effect
  • August 2027 – Requirements for high-risk AI embedded in regulated products

How to Prepare Your Organisation

Step 1: Inventory Your AI Systems

Map out which AI systems you develop, procure, or use. Identify which may be classified as high-risk.
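The inventory can start as a simple structured register. Below is a minimal sketch with illustrative fields (the Act does not prescribe a schema); the example entries are invented.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory (illustrative fields only)."""
    name: str
    purpose: str
    role: str                         # "provider", "deployer", or both
    vendor: Optional[str] = None
    processes_personal_data: bool = False   # triggers GDPR review
    candidate_high_risk: bool = False       # triggers formal classification
    notes: list = field(default_factory=list)

inventory = [
    AISystemRecord("cv-screener", "rank job applicants", "deployer",
                   vendor="ExampleVendor", processes_personal_data=True,
                   candidate_high_risk=True),
    AISystemRecord("support-chatbot", "answer customer questions", "deployer"),
]

# Systems flagged as candidate high-risk move on to formal classification.
review_queue = [s.name for s in inventory if s.candidate_high_risk]
```

Even a register this simple answers the first audit questions: what do we run, in what role, and which systems need a closer look.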

Step 2: Conduct a Risk Assessment

Analyse the risk level for each system. Document the assessment that led to the classification.

Step 3: Establish a Compliance Framework

Implement processes for documentation, risk management, data quality, human oversight, and incident management.

Step 4: Connect with GDPR and NIS2

Review existing processes for data protection and cybersecurity. Identify synergies and gaps.

Step 5: Train Your Organisation

Ensure that key personnel – developers, legal teams, management – understand the requirements and their role in compliance. The AI Act requires "AI literacy" (Art. 4).

Summary

The EU AI Act changes the playing field for everyone working with AI in Europe. Compliance is not just a legal obligation – it is an opportunity to build trust, reduce risk, and create sustainable AI systems.

Organisations that start preparing now – with risk classification, documentation, and integrated processes – will have a clear advantage when the full requirements take effect in August 2026.

Want to know where you stand? Walma helps organisations map their AI usage, assess risk levels, and build a compliance framework that meets the EU AI Act, GDPR, and NIS2 – in practice, not just on paper.


About the author

Gabriel Lagerström de Jong

CEO, Walma AI

Gabriel is the CEO and founder of Walma AI. With experience from the EU AI Act and secure AI implementation, he helps organisations use AI responsibly and effectively.
