
As the European Union finalizes the world's first comprehensive AI law, companies worldwide are scrambling to adapt to a risk-based framework that mandates transparency and accountability.
The European Parliament has officially passed the Artificial Intelligence Act, marking the world's first comprehensive legal framework for AI. The legislation introduces a tiered risk-based approach, categorizing AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. This structure is designed to foster innovation while ensuring that the rights and safety of citizens are protected from the potential harms of unregulated automation.
Under the new rules, AI systems deemed to pose an 'unacceptable risk'—such as social scoring systems, manipulative techniques that exploit people's vulnerabilities, or most uses of real-time remote biometric identification in public spaces—are strictly prohibited. Meanwhile, 'high-risk' systems used in critical infrastructure, education, and healthcare will be subject to rigorous testing, documentation, and human oversight requirements before they can enter the EU market. For developers of General Purpose AI (GPAI) models such as GPT-4, the act mandates transparency regarding training data and compliance with EU copyright law.
The global impact of the EU AI Act is likely to extend far beyond Europe's borders. Much as the GDPR reshaped data privacy practices worldwide, this legislation is expected to become a de facto global standard. Multinationals are likely to align their internal policies with the EU framework to maintain access to the European market, effectively exporting European digital ethics to the rest of the world. Non-compliance carries heavy penalties, with fines for the most serious violations reaching up to 7% of a company's global annual turnover.
