
As the EU AI Act moves toward full implementation, the US and China are also refining their regulatory frameworks to balance safety and innovation.
The era of 'move fast and break things' in AI is colliding with the reality of global regulation. The European Union's AI Act is now the world's most comprehensive AI framework, categorizing systems by risk level. High-risk applications, such as those used in critical infrastructure or law enforcement, face stringent transparency and security requirements, while 'unacceptable' uses such as social scoring are banned outright.
In the United States, the debate has centered on California's SB 1047, a bill that sought to hold developers liable for catastrophic harms caused by their models. Although it was ultimately vetoed, the conversation it sparked has led many labs to adopt voluntary safety commitments, including 'red-teaming' and watermarking AI-generated content.
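To make 'watermarking' a little more concrete, the toy Python sketch below illustrates the general idea behind one published family of techniques, a statistical 'green list' bias over tokens that a detector can later test for. It is a simplified illustration under assumed parameters, not any lab's actual scheme, and the function names are hypothetical.

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Derive a reproducible "green" subset of the vocabulary from the previous token.
    # A watermarking generator would nudge its sampling toward these tokens.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def looks_watermarked(tokens: list[str], vocab: list[str], threshold: float = 0.7) -> bool:
    # Unwatermarked text should land in the green set roughly `fraction` of the time;
    # a much higher hit rate suggests the generator was deliberately biased.
    if len(tokens) < 2:
        return False
    hits = sum(tok in green_set(prev, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1) >= threshold
```

The point of schemes like this is that detection needs only the text and the (secret or shared) seeding rule, not access to the model that produced it.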
China is also taking a proactive stance, focusing on generative AI services. Its regulations require providers to ensure that content is accurate and adheres to core socialist values, and mandate that synthetic media carry clear labels (see the sketch below). This patchwork of global rules presents a challenge for multinational tech companies trying to navigate differing compliance standards.
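As a loose illustration of what a synthetic-media label can look like at the file level, the sketch below writes an 'AI-generated' disclosure into a PNG's metadata using Pillow. The field names and wording are hypothetical, and real-world compliance is more likely to build on provenance standards such as C2PA content credentials.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    # Attach a plain-text disclosure to the image's metadata (hypothetical field names).
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # machine-readable flag
    meta.add_text("disclosure", "This image was generated by an AI system.")
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)
```

Metadata of this kind is easy to strip, which is one reason regulators and standards bodies also discuss visible labels and cryptographically signed provenance records.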


