The EU AI Act: History, What Applies Now – and What Happens in 2026–2027
Insight • March 01, 2026 • 10 min read
Written by: Walma


The EU AI Act is the world's first comprehensive regulation for AI. Here we cover the history, risk levels, what applies today, and the full timeline for 2026–2027.

Tags: AI, EU AI Act, compliance, regulation, high-risk AI, GPAI

The EU's AI regulation, commonly known as the AI Act, is the world's first broad "horizontal" regulatory framework governing the development, placing on the market, and use of AI systems in the EU. It is built on a risk-based model: certain AI uses are prohibited, others require transparency or extensive requirements (particularly for high-risk AI), and general-purpose AI models have their own obligations.

This article provides:

  • a historical overview (how we got here),
  • a practical walkthrough of what applies, and
  • a clear timeline for 2026 and 2027.

Brief Background: What Is the AI Act?

The AI Act is Regulation (EU) 2024/1689 ("Artificial Intelligence Act"), published in the Official Journal of the EU on 12 July 2024. It creates a common framework within the EU to:

  • protect health, safety, and fundamental rights,
  • while enabling innovation through measures such as sandboxes and support mechanisms.

The History: Key Milestones (2021–2024)

1. The European Commission's Proposal (2021)

The journey began when the European Commission presented a formal proposal for an AI regulation in April 2021 (COM(2021) 206).

2. Negotiations and "Provisional Agreement" (2023)

After intensive trilogue negotiations, the Council and Parliament reached a provisional political agreement in December 2023.

3. European Parliament Approval (March 2024)

The European Parliament adopted the text at its plenary session on 13 March 2024.

4. The Council's Final "Green Light" (May 2024)

The EU Council gave final approval on 21 May 2024, effectively concluding the legislative process.

5. Publication and Entry into Force (Summer 2024)

The regulation was published in the Official Journal of the EU on 12 July 2024 and entered into force as EU law on 1 August 2024.

Important: entry into force does not mean all obligations apply immediately – the regulation has a phased application.

What Applies in Practice: Risk Levels and Key Requirements

The AI Act is typically summarised across four levels:

Prohibited AI Practices

Certain uses are prohibited, such as certain forms of manipulation, social scoring, and certain biometric scenarios. The European Commission has also published guidelines on prohibited practices to support uniform interpretation.

High-Risk AI

AI systems in certain areas (e.g. critical infrastructure, education, recruitment, and credit scoring) are classified as high-risk and require, among other things:

  • risk management,
  • data quality,
  • technical documentation,
  • logging,
  • human oversight,
  • cybersecurity and robustness,

…plus obligations for both providers and deployers.

Transparency Requirements

Certain AI functions are subject to clear transparency requirements – for example when people interact with AI or in the case of synthetically generated content. The transparency rules in Article 50 take effect later in the rollout.

General-Purpose AI / GPAI

The regulation contains specific rules for general-purpose AI models. This is crucial for providers of broad models and for the ecosystem around them.

The Timeline: What Happens in 2026 and 2027?

The European Commission's AI Act Service Desk describes the law as applying progressively, with "full roll-out" by 2 August 2027 at the latest.

During 2026: "The Majority of Rules" Take Effect

The major milestone is 2 August 2026:

  • The majority of rules begin to apply and supervision/enforcement starts at national and EU level.
  • The high-risk rules for AI systems listed in Annex III begin to apply (high-risk use cases across multiple societal sectors).
  • Transparency rules (Article 50) take effect.
  • Innovation support takes effect, and each member state must have established at least one regulatory AI sandbox.

Implications for businesses in 2026: This is when many organisations "go live" with compliance: classification, risk and quality processes, documentation, supply chain management, incident handling, and governance need to be in place for systems covered by Annex III.

During 2027: Rules for High-Risk AI in Regulated Products Take Effect

The next major milestone is 2 August 2027:

  • Rules for high-risk AI embedded in regulated products begin to apply.
  • This typically concerns AI that is part of product-regulated areas (e.g. certain machinery, vehicles, and medical devices – depending on classification and which EU regulatory framework applies to the product).

Table: History + What Applies 2026–2027

| Date | What Happens | Who Is Most Affected | Practical Implication |
| --- | --- | --- | --- |
| 21 Apr 2021 | Commission's proposal (COM(2021) 206) | Everyone | Start of the legislative process |
| 8 Dec 2023 | Provisional political agreement | Everyone | Text "settles" after trilogue |
| 13 Mar 2024 | European Parliament adopts the text | Everyone | Democratic final step in the EP |
| 21 May 2024 | Council gives final green light | Everyone | The law is formally adopted |
| 2 Feb 2025 | Definitions/AI literacy + prohibitions take effect | Most organisations | Requirements on AI competence and avoiding prohibited practices |
| 2 Aug 2025 | GPAI rules + governance must be in place | Model/platform providers + authorities | Requirements on GPAI providers; national authorities and EU bodies established |
| 2 Aug 2026 | Majority of rules + enforcement starts | High-risk actors, most larger orgs | High-risk Annex III takes effect, transparency (Art. 50), sandboxes, etc. |
| 2 Aug 2027 | High-risk AI in regulated products takes effect | Manufacturers + product ecosystem | Product-embedded AI gets full high-risk requirements in regulated product areas |

Can the Timeline Change?

There are two "moving parts" to watch:

The Commission's Signal Regarding Standards

The AI Act Service Desk states that the Commission (in the context of a "Digital Omnibus package") has proposed linking the application of rules for high-risk AI to the availability of support tools, including harmonised standards. This suggests that practical application may be affected by how quickly standards and support materials are finalised.

Political Discussion on Adjustments

There have been media reports that parts of the AI Act may be subject to adjustments or delays under pressure from industry and geopolitics, particularly concerning the high-risk provisions and the penalty regime. This is not the same as the law "not applying", but it is a reason to follow the European Commission's official updates.

Practical Checklist for 2026

If your organisation is affected by the AI Act – here is what businesses typically need to do during 2025–2026:

  1. Inventory AI uses – including embedded models, third-party tools, and automated decisions.
  2. Classify risk – prohibited, high-risk Annex III, transparency, or other.
  3. Governance & roles – AI policy, ownership, and supplier management.
  4. Documentation & traceability – technical documentation, logging, and data sources.
  5. Incident and change processes – procedures for when the model, data, or usage changes.
  6. AI literacy programme – training adapted to roles: product, IT, legal, procurement, and operations.

The exact requirements depend on role: provider, importer, distributor, or deployer.
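As a rough illustration of steps 1–2 above (inventory and risk classification), the starting point can be sketched as a simple data structure. The systems, names, and classifications below are hypothetical examples for illustration only; actual classification under the AI Act requires a legal assessment.

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """Simplified categories mirroring the AI Act's risk levels."""
    PROHIBITED = "prohibited practice"
    HIGH_RISK_ANNEX_III = "high-risk (Annex III)"
    TRANSPARENCY = "transparency obligations (Art. 50)"
    MINIMAL = "minimal / no specific obligations"


@dataclass
class AISystem:
    name: str
    use_case: str
    role: str            # provider, importer, distributor, or deployer
    category: RiskCategory


# Hypothetical inventory entries -- not legal classifications.
inventory = [
    AISystem("CV screener", "recruitment ranking", "deployer",
             RiskCategory.HIGH_RISK_ANNEX_III),
    AISystem("Support chatbot", "customer interaction", "deployer",
             RiskCategory.TRANSPARENCY),
]

# Systems in this list would need the full high-risk controls
# (risk management, documentation, logging, human oversight, etc.).
high_risk = [s.name for s in inventory
             if s.category is RiskCategory.HIGH_RISK_ANNEX_III]
print(high_risk)
```

Even a minimal register like this makes the later steps (documentation, incident processes, role-based training) much easier to scope, because every system already has an owner role and a provisional category attached.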

FAQ

When does the AI Act "really" take effect?

It is already in force in the EU law sense, but is applied in phases. The key dates for broad business compliance are 2 August 2026 (majority of rules) and 2 August 2027 (high-risk AI in regulated products).

Which businesses are affected by the AI Act?

All organisations that develop, distribute, or use AI systems within the EU are affected. This also applies to companies outside the EU whose AI systems are used within the Union. Particularly relevant are actors in high-risk sectors such as healthcare, education, recruitment, credit scoring, and critical infrastructure.

What counts as high-risk AI?

High-risk AI is defined in Annex III and includes systems used in areas such as biometric identification, critical infrastructure, education and vocational training, employment and personnel management, access to public services, law enforcement, migration management, and the justice system. AI systems embedded in already-regulated products (e.g. medical devices, vehicles) may also be classified as high-risk.

What is most important during 2026?

That high-risk AI in Annex III becomes subject to the regulation and that enforcement begins. Organisations need to have their risk and quality processes, documentation, and governance in place by 2 August 2026 at the latest.

What is the difference in 2027 compared to 2026?

2027 specifically targets high-risk AI embedded in regulated products – for example AI components in machinery, vehicles, or medical devices that are already covered by other EU product legislation.

What do the GPAI rules mean?

General-Purpose AI models (GPAI) have their own obligations that took effect on 2 August 2025. Providers of such models must, among other things, provide technical documentation, comply with copyright rules, and publish summaries of training data. GPAI models with "systemic risk" have additional requirements.

What happens if you don't comply with the AI Act?

Sanctions vary depending on the nature of the violation. Fines can amount to EUR 35 million or 7% of global annual turnover for violations of prohibited AI practices, and up to EUR 15 million or 3% of turnover for other violations. National supervisory authorities are responsible for enforcement.

Does the AI Act apply to Swedish public authorities?

Yes, the AI Act applies to all actors that develop or use AI systems – including the public sector and government authorities. Swedish authorities that use AI systems for decision-making, assessments, or automated processes must follow the same risk classification and requirements as private actors.

Do we need an AI policy?

It is not an explicit requirement in the regulation to have a separate "AI policy", but in practice organisations need documented governance, role allocation, and processes to meet the requirements. An AI policy is often the simplest way to consolidate this. The AI literacy requirement (Article 4) also means that personnel working with AI systems must have sufficient competence.

How can Walma help with the AI Act?

Walma offers AI solutions designed with compliance in mind. Our platform Noda provides full traceability, source references, and Swedish data sovereignty – three key requirements in the AI Act. We also offer AI training and workshops that help organisations build the AI literacy the regulation requires.


About the author

Gabriel Lagerström de Jong

CEO, Walma AI

Gabriel is the CEO and founder of Walma AI. With experience from the EU AI Act and secure AI implementation, he helps organisations use AI responsibly and effectively.

Ready to take the next step?

Get in touch with us at Walma
