AI governance encompasses the laws, regulations, standards, and voluntary frameworks that shape how AI is developed and deployed. The global governance landscape is fragmented: the EU leads with binding regulation (the AI Act), the US relies on executive orders and agency guidance (largely voluntary), China emphasises state oversight and content control, and industry supplements all of these with self-governance through voluntary commitments and model cards. This fragmentation creates compliance complexity for global AI companies and opens the door to regulatory arbitrage.
Global regulatory landscape
| Region | Key regulation | Approach | Status | Key requirements |
|---|---|---|---|---|
| European Union | EU AI Act (2024) | Risk-based, binding law | In force; obligations phased in 2025-2026 | Prohibit unacceptable-risk AI; audit high-risk AI; transparency for general-purpose AI (GPAI) systems |
| United States | AI Executive Order (Oct 2023) | Agency guidance, voluntary | Active (reversible by admin) | Safety testing for frontier models, NIST AI RMF, watermarking pilots |
| United Kingdom | Pro-innovation approach | Principles-based, sector-specific | Active | Existing regulators apply AI rules in their domains; no dedicated AI law yet |
| China | Generative AI Measures (2023) | State-directed, content focus | In force | Mandatory content filtering; real-name registration; content must align with core socialist values |
| Canada | AIDA (Artificial Intelligence and Data Act) | Risk-based, similar to EU | Proposed | High-impact AI systems must demonstrate safety and fairness |
| India | Digital India Act | Principles-based | Proposed | Advisory approach; startups largely exempt from regulation initially |
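The EU's risk-based approach in the table above can be sketched as a simple use-case classifier. The mapping below is an illustrative, incomplete subset of the AI Act's categories (the real Annex III list is longer and legally nuanced), so treat it as a teaching aid rather than a compliance tool.

```python
# Illustrative subset of EU AI Act risk tiers; NOT an authoritative list.
PROHIBITED = {"social scoring", "real-time public biometric surveillance"}
HIGH_RISK = {"hiring", "credit scoring", "medical diagnosis", "law enforcement"}
LIMITED_RISK = {"chatbot", "deepfake generation"}  # transparency duties only


def classify_eu_risk(use_case: str) -> str:
    """Map a use case to an (approximate) EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "Unacceptable risk (prohibited)"
    if use_case in HIGH_RISK:
        return "High risk (conformity assessment required)"
    if use_case in LIMITED_RISK:
        return "Limited risk (transparency obligations)"
    return "Minimal risk (voluntary codes)"


print(classify_eu_risk("hiring"))       # High risk (conformity assessment required)
print(classify_eu_risk("spam filter"))  # Minimal risk (voluntary codes)
```

Note that the default branch returns minimal risk, mirroring the Act's structure: anything not explicitly prohibited, high-risk, or transparency-relevant falls into the minimal-risk tier where only voluntary codes apply.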
AI governance compliance checklist framework
```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class RiskLevel(Enum):
    UNACCEPTABLE = "Prohibited"
    HIGH = "High Risk — strict requirements"
    LIMITED = "Limited Risk — transparency obligations"
    MINIMAL = "Minimal Risk — voluntary codes"


@dataclass
class AISystemGovernanceAssessment:
    """Structured assessment for EU AI Act + global compliance."""

    system_name: str
    description: str
    # Risk classification (EU AI Act Annex III = high risk)
    risk_level: RiskLevel = RiskLevel.MINIMAL
    # EU AI Act compliance checklist
    eu_compliance: Dict[str, bool] = field(default_factory=lambda: {
        "risk_management_system": False,
        "data_governance_documented": False,
        "technical_documentation": False,
        "transparency_obligations_met": False,
        "human_oversight_mechanisms": False,
        "accuracy_robustness_tested": False,
        "conformity_assessment_done": False,
        "registered_eu_database": False,
    })
    # Model card published?
    model_card_published: bool = False
    # Voluntary commitments
    frontier_ai_commitments: List[str] = field(default_factory=list)

    def compliance_score(self) -> float:
        """Percentage of EU AI Act checklist items satisfied."""
        checks = list(self.eu_compliance.values())
        return sum(checks) / len(checks) * 100

    def generate_report(self) -> str:
        score = self.compliance_score()
        failed = [k for k, v in self.eu_compliance.items() if not v]
        report = [
            f"AI Governance Report: {self.system_name}",
            f"Risk Level: {self.risk_level.value}",
            f"EU AI Act Compliance: {score:.0f}%",
        ]
        if failed:
            report.append(f"Missing requirements: {', '.join(failed)}")
        if not self.model_card_published:
            report.append("⚠️ No model card published")
        return "\n".join(report)


# Example: high-risk AI in recruitment
hiring_ai = AISystemGovernanceAssessment(
    system_name="AutoHire-v2",
    description="AI screening of job applications",
    risk_level=RiskLevel.HIGH,  # EU AI Act Annex III: employment decisions
    eu_compliance={
        "risk_management_system": True,
        "data_governance_documented": True,
        "technical_documentation": True,
        "transparency_obligations_met": False,  # candidates not informed
        "human_oversight_mechanisms": True,
        "accuracy_robustness_tested": False,    # no bias audit done
        "conformity_assessment_done": False,    # required before deployment
        "registered_eu_database": False,        # must register in EU database
    },
    model_card_published=False,
    frontier_ai_commitments=[],
)
print(hiring_ai.generate_report())
```
Industry self-governance and voluntary commitments
- White House Voluntary AI Commitments (July 2023): Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI committed to safety testing before deployment, sharing safety information, investing in cybersecurity, and developing watermarking for AI-generated content. Voluntary, with no enforcement mechanism.
- Model Cards (Google): Standardised documentation, proposed by Google researchers (Mitchell et al., 2019), disclosing model purpose, training data, disaggregated performance metrics, known limitations, and intended use cases. Now widely adopted on Hugging Face as the documentation standard for publicly hosted models.
- NIST AI Risk Management Framework (RMF): US voluntary framework for AI risk assessment, with four functions: Govern, Map, Measure, Manage. Increasingly used as a baseline for government AI procurement.
- Partnership on AI: Industry-civil society coalition setting best practice guidelines for AI development and deployment across domains.
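The NIST AI RMF's four functions lend themselves to a simple coverage tracker. A minimal sketch follows; the activity names are illustrative paraphrases invented for this example, not the framework's official subcategory IDs.

```python
# Minimal NIST AI RMF coverage tracker. The four function names are from
# the framework; the activities under each are illustrative only.
RMF_ACTIVITIES = {
    "Govern":  ["accountability roles assigned", "risk tolerance defined"],
    "Map":     ["context and intended use documented", "impacts identified"],
    "Measure": ["bias metrics evaluated", "robustness tested"],
    "Manage":  ["risks prioritised and treated", "incident response in place"],
}


def rmf_coverage(completed: set) -> dict:
    """Fraction of tracked activities completed, per RMF function."""
    return {
        function: sum(a in completed for a in activities) / len(activities)
        for function, activities in RMF_ACTIVITIES.items()
    }


done = {"accountability roles assigned", "impacts identified",
        "bias metrics evaluated"}
for function, fraction in rmf_coverage(done).items():
    print(f"{function}: {fraction:.0%} of tracked activities complete")
```

A real assessment would track the framework's published subcategories and record evidence for each, but the shape of the exercise, mapping work done against Govern/Map/Measure/Manage, is the same.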
EU AI Act key dates (2024-2027)
- August 2024: EU AI Act entered into force.
- February 2025: Prohibitions apply to unacceptable-risk systems (social scoring, real-time biometric surveillance in public spaces, emotion recognition at work and school).
- August 2025: GPAI (general-purpose AI) rules apply: frontier model providers must publish technical documentation, comply with EU copyright law, and publish summaries of training content.
- August 2026: High-risk AI system requirements fully apply (healthcare, education, employment, justice).
- August 2027: Remaining obligations apply, including rules for high-risk AI embedded in products already covered by EU safety legislation.
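The phased timeline above can be expressed as a simple date lookup, handy for checking which obligations already apply on a given day. The dates are the headline milestones only, simplified from the Act's full transition schedule.

```python
from datetime import date

# Headline EU AI Act milestones (simplified; see the Act for exact scope).
AI_ACT_MILESTONES = [
    (date(2024, 8, 1), "Act in force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI apply"),
    (date(2025, 8, 2), "GPAI transparency and copyright rules apply"),
    (date(2026, 8, 2), "High-risk AI system requirements apply"),
    (date(2027, 8, 2), "Rules for high-risk AI in regulated products apply"),
]


def obligations_in_effect(on: date) -> list:
    """Return milestone descriptions already applicable on a given date."""
    return [desc for milestone, desc in AI_ACT_MILESTONES if milestone <= on]


print(obligations_in_effect(date(2025, 9, 1)))
```

For example, a check run in September 2025 would report that the Act is in force, the prohibitions apply, and the GPAI rules apply, while the high-risk requirements are still pending.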
Practice questions
- What are the four risk categories in the EU AI Act? Give an example of each. (Answer: Unacceptable risk (prohibited): social scoring systems, real-time biometric surveillance in public spaces. High risk (strict requirements): AI in medical devices, hiring, loan decisions, law enforcement. Limited risk (transparency): chatbots must disclose they are AI, deepfakes must be labelled. Minimal risk (voluntary): AI in video games, spam filters — most AI falls here.)
- Why might an AI company based in the US face EU AI Act compliance requirements? (Answer: The EU AI Act has extraterritorial reach — it applies to any AI system placed on the EU market or whose outputs are used in the EU, regardless of where the company is based. A US company whose AI product is used by EU citizens or businesses must comply. This mirrors GDPR's approach.)
- What is regulatory arbitrage in AI governance? (Answer: Moving AI development or deployment to jurisdictions with weaker regulations to avoid compliance requirements. Example: a US company might process EU citizens' biometric data in a country without GDPR-equivalent protections. Global governance fragmentation enables this. Counter-measures: extraterritorial rules (EU AI Act), international cooperation standards (G7 Hiroshima AI Process).)
- Model cards are voluntary documentation. Why are they important for AI governance? (Answer: Model cards enable: accountability (who built it, for what purpose), transparency (what data, what limitations), safety (known failure modes, appropriate use cases), and fairness evaluation (disaggregated performance metrics). They create a paper trail that supports regulatory audits, post-deployment incident investigation, and informed procurement decisions.)
- China's AI governance focuses on content control. How does this differ from the EU approach? (Answer: China: state-directed approach focused on ensuring AI does not produce content deemed harmful to social stability or socialist values. Requires content filtering, real-name registration, and government approval for GPAI systems. EU: rights-based approach focused on individual dignity, non-discrimination, and fundamental rights. Requires technical safety, fairness, and transparency regardless of content ideology.)
On LumiChats
Anthropic actively participates in global AI governance — signing White House safety commitments, engaging with EU AI Act regulatory consultations, and publishing safety policies publicly. Understanding AI governance helps you evaluate whether AI products you use are compliant with applicable regulations and company ethical commitments.