The EU AI Act (Regulation (EU) 2024/1689), which entered into force in August 2024 and is being phased in through 2027, is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level (unacceptable, high, limited, minimal), imposes compliance obligations on providers and deployers, and has extraterritorial reach, applying to any AI system used in the EU regardless of where it was built. In parallel, the US, UK, China, and dozens of other countries are implementing their own AI governance frameworks, making AI regulation one of the most consequential and heavily searched topics in the field.
## The EU AI Act's risk-based framework
The Act classifies AI systems into four risk tiers, with obligations escalating from minimal to complete prohibition:
| Risk tier | Definition | Examples | Requirements |
|---|---|---|---|
| Unacceptable risk (BANNED) | AI that poses clear threat to fundamental rights or safety | Social scoring by governments; real-time biometric surveillance in public; subliminal manipulation; emotion recognition in schools/workplaces | Prohibited outright. Violations: up to €35M or 7% global revenue. |
| High risk | AI that affects health, safety, or fundamental rights in high-stakes domains | CV screening tools; credit scoring; medical diagnosis AI; biometric ID; critical infrastructure management; educational assessment AI | Conformity assessment; transparency obligations; human oversight requirements; registration in EU database; CE marking required. |
| Limited risk (transparency) | AI that interacts with users or generates content whose artificial nature may not be obvious | Chatbots; deepfakes; AI-generated content | Users must be informed they are interacting with AI; synthetic content must be disclosed as AI-generated. |
| Minimal risk | All other AI — the vast majority of systems | Spam filters; recommendation systems; video games with AI | No mandatory requirements; voluntary codes of practice encouraged. |
## GPAI: General Purpose AI obligations
The Act also covers general-purpose AI (GPAI) models, including frontier LLMs such as GPT, Claude, and Gemini. All GPAI providers must maintain technical documentation, comply with EU copyright law, and publish summaries of the training data used. GPAI models with "systemic risk" (presumed when training compute exceeds 10²⁵ FLOPs) face additional red-teaming, incident-reporting, and cybersecurity obligations. This directly affects Anthropic, OpenAI, Google, and Meta.
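The 10²⁵ FLOP systemic-risk threshold can be sanity-checked with the common scaling-law estimate that transformer training compute is roughly 6 × parameters × training tokens. A minimal sketch, assuming that rule of thumb (which comes from the scaling-law literature, not from the Act itself); the model sizes below are hypothetical:

```python
# Rough check against the EU AI Act's 10^25 FLOP systemic-risk threshold.
# Uses the common estimate: training FLOPs ~ 6 * params * tokens
# (a rule of thumb from the scaling-law literature, not defined in the Act).

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption of systemic risk for GPAI models

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD

# Hypothetical models (sizes are illustrative, not real figures):
print(presumed_systemic_risk(params=7e9, tokens=2e12))    # 7B model, 2T tokens -> False
print(presumed_systemic_risk(params=1e12, tokens=15e12))  # 1T model, 15T tokens -> True
```

Under this estimate, only very large training runs cross the threshold, which is why the obligation lands mainly on frontier-scale providers.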
## Timeline and current status
| Date | Milestone |
|---|---|
| August 2024 | EU AI Act enters into force |
| February 2025 | Prohibited practices (unacceptable risk) apply — AI social scoring, real-time biometric surveillance in public banned in EU |
| August 2025 | GPAI model obligations apply — LLM providers must comply with transparency and copyright rules |
| August 2026 | Most remaining obligations apply, including for high-risk AI systems listed in Annex III (employment, credit scoring, education, law enforcement) |
| August 2027 | Obligations take effect for high-risk AI embedded in products covered by existing EU product legislation (Annex I: medical devices, vehicles, machinery); full enforcement regime active |
| Ongoing (2026) | US states (California SB-1047 successor bills), UK AI Safety Institute work, China's Generative AI regulations, and the G7 Hiroshima AI Process all progressing in parallel |
## US vs. EU approach in 2026
The US is taking a markedly different path. In December 2025, President Trump signed an executive order aiming to preempt state AI laws, prioritizing innovation over regulation. The White House and US states are now in an active legal and political battle over who governs AI. As of early 2026, the US has no federal AI law, while the EU's Act serves as the global regulatory baseline. Many multinational companies are designing to EU AI Act standards worldwide, since it is the most stringent framework, much as GDPR set a de facto global privacy standard.
## Practical compliance for developers
| If you are building… | Key questions to ask | Likely tier |
|---|---|---|
| A customer service chatbot | Does it tell users it's AI? Does it make significant decisions (credit, insurance)? | Limited risk if disclosure is in place; high risk if it makes consequential decisions about people |
| A CV screening or hiring tool | Is it used for recruitment decisions? In what countries? | High risk — requires conformity assessment, human oversight, bias testing |
| A medical diagnosis tool | Is it a medical device under EU MDR? Does it influence treatment decisions? | High risk — requires clinical validation, CE marking, extensive documentation |
| An LLM-powered general assistant | What is it trained on? Does it generate images/video/audio of real people? | GPAI rules apply; limited-risk transparency rules apply; deepfake labelling required if generating synthetic media of people |
| An AI tutor / educational tool | Does it assess students? Is it used to make decisions about grades or admission? | High risk if used for assessment; limited risk if purely educational support |
| A code generation tool | No decisions about people; no biometric data; no safety-critical functions? | Minimal risk — voluntary compliance only |
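The questions in the table above amount to a first-pass triage: check prohibited practices first, then high-stakes decision-making, then user-facing transparency. A minimal sketch of that ordering, with invented flag names; this is an internal screening aid, not legal advice, and does not replace a conformity assessment:

```python
# First-pass EU AI Act risk-tier triage (illustrative only; the boolean
# flags are invented for this sketch, and real classification needs
# legal review against the Act's annexes).

def triage(
    prohibited_practice: bool,    # e.g. social scoring, real-time public biometrics
    high_stakes_decisions: bool,  # hiring, credit, medical, education, infrastructure
    interacts_with_users: bool,   # chatbot, or generates synthetic media of people
) -> str:
    if prohibited_practice:
        return "unacceptable (banned)"
    if high_stakes_decisions:
        return "high risk (conformity assessment, human oversight)"
    if interacts_with_users:
        return "limited risk (transparency/disclosure obligations)"
    return "minimal risk (voluntary codes of practice)"

print(triage(False, True, True))    # CV screening chatbot -> high risk
print(triage(False, False, False))  # code generation tool -> minimal risk
```

The order of the checks matters: a system can trigger several tiers at once, and the strictest applicable tier governs.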
## Resources for compliance
The EU AI Office (digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence) is the main official source. The AI Act compliance checker at artificialintelligenceact.eu lets you assess your system's risk tier. For GPAI compliance, the EU is developing codes of practice through a multi-stakeholder process with the AI Office.
## Practice questions
- What are the four risk categories in the EU AI Act with examples of each? (Answer: Unacceptable risk (prohibited): social scoring by governments, real-time biometric surveillance in public spaces, subliminal manipulation, exploitation of vulnerabilities. High risk: medical devices, critical infrastructure, employment AI, credit scoring, law enforcement. Limited risk: chatbots (must disclose AI), deepfakes (must be labelled). Minimal risk: spam filters, AI in video games — majority of AI systems fall here with no specific requirements.)
- The EU AI Act requires GPAI (General Purpose AI) providers to comply with copyright law. What does this mean in practice? (Answer: GPAI providers must: (1) Publish a sufficiently detailed summary of the content used for training (which datasets, which sources). (2) Put in place a policy to comply with EU copyright law, including respecting machine-readable opt-outs that rightsholders declare under the text-and-data-mining exception of Directive (EU) 2019/790. Separately, the Act's transparency rules require that AI-generated content be marked as such (e.g. watermarking or labelling). This is why major LLM providers publish training data summaries and honour crawler opt-out signals from rightsholders.)
- A startup in the US develops an AI hiring tool used by European companies. Does the EU AI Act apply? (Answer: Yes — the EU AI Act has extraterritorial reach. It applies to: (1) AI systems placed on the EU market (even if developed outside the EU). (2) AI systems whose outputs are used within the EU. A US startup whose AI tool is used by EU employers for hiring (a High Risk category) must comply with all High Risk requirements: risk management system, bias testing, transparency to candidates, human oversight, and EU market conformity assessment.)
- What is conformity assessment under the EU AI Act and who performs it? (Answer: Conformity assessment: the process of verifying that a High Risk AI system meets the EU AI Act requirements before market placement. For most High Risk systems: self-assessment (the provider audits themselves against the requirements and signs a Declaration of Conformity). For certain critical systems (biometric identification, critical infrastructure): third-party assessment by a notified body. Providers must maintain technical documentation and update assessments when the system changes materially.)
- What penalties does the EU AI Act impose for violations? (Answer: Prohibited AI violations: up to €35 million or 7% of global annual turnover (whichever is higher). High-risk AI non-compliance: up to €15 million or 3% of global turnover. Incorrect information to authorities: up to €7.5 million or 1% of global turnover. For SMEs: proportionally lower maximums. These penalties are significantly higher than GDPR's maximum (€20M or 4% of turnover) — reflecting the EU's intent to make AI regulation enforcement credible.)
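The "whichever is higher" penalty caps in the answer above reduce to a max over a fixed amount and a share of worldwide annual turnover. A quick sketch; the turnover figure is hypothetical:

```python
# EU AI Act maximum fines: the higher of a fixed cap or a percentage of
# worldwide annual turnover. The turnover figure below is hypothetical.

PENALTY_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),      # up to €35M or 7%
    "high_risk_noncompliance": (15_000_000, 0.03),  # up to €15M or 3%
    "incorrect_information": (7_500_000, 0.01),     # up to €7.5M or 1%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    fixed, pct = PENALTY_CAPS[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with €2B global turnover:
print(max_fine("prohibited_practice", 2e9))    # 7% of €2B (€140M) exceeds €35M
print(max_fine("incorrect_information", 2e9))  # 1% of €2B (€20M) exceeds €7.5M
```

Note that for large companies the percentage cap dominates, while for small ones the fixed amount does, which is why the Act provides proportionally lower maximums for SMEs.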