Breaking News: European Union's AI Regulation in Effect - Compliance Deadlines Approaching Fast
As of August 1, 2024, the European Union's risk-based regulation of artificial intelligence applications is in force, starting the countdown to a series of compliance deadlines that will affect different AI developers and applications. Most provisions will be fully applicable by mid-2026, but the first deadline arrives in just six months, when bans on certain prohibited uses of AI, such as law enforcement's use of remote biometrics in public places, take effect.
The EU's approach categorizes most AI applications as low or no risk, leaving them outside the regulation's scope. High-risk uses, such as biometrics and facial recognition, or AI deployed in fields like education and employment, must be registered in an EU database, and their developers must comply with risk- and quality-management obligations. A "limited risk" tier covers technologies like chatbots and deepfake tools, which face transparency requirements so that users are not deceived.
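The tier structure described above can be sketched as a simple lookup. This is purely an illustration of how an organization might begin triaging its own systems: the tier names follow the Act's categories as reported here, but the example use-case mapping is hypothetical and is not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act, as described in the article."""
    PROHIBITED = "prohibited"  # banned uses, e.g. public remote biometrics by law enforcement
    HIGH = "high"              # EU database registration plus risk/quality management
    LIMITED = "limited"        # transparency obligations (chatbots, deepfake tools)
    MINIMAL = "minimal"        # low or no risk: outside most of the regulation's scope

# Illustrative mapping from example use cases to tiers (not a legal determination).
EXAMPLE_TIERS = {
    "public remote biometric identification": RiskTier.PROHIBITED,
    "facial recognition": RiskTier.HIGH,
    "ai in education": RiskTier.HIGH,
    "ai in employment": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Look up an example use case; default to MINIMAL if unlisted."""
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

In practice a real classification depends on context of deployment, not the product category alone, which is one reason the article recommends legal counsel for borderline cases.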
The regulation also applies to developers of general-purpose AIs (GPAIs), most of whom face only light transparency requirements; the most powerful models must additionally carry out risk assessment and mitigation. The specifics of GPAI compliance under the AI Act are still being worked out, with Codes of Practice due to be finalized by April 2025 following a consultation process led by the AI Office.
In a recent primer on the AI Act, OpenAI indicated it plans to work closely with EU authorities as the new rules are implemented. It advised organizations to determine which of their AI systems are in scope, identify any GPAI models, and map the obligations that follow from each use case, seeking legal counsel for complex questions about the roles of AI system providers and deployers.
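The triage steps attributed to the primer can be restated as a minimal inventory checklist. The record and function names here are hypothetical, meant only to show the order of the three steps (scope, GPAI status, use-case obligations), not to stand in for an actual compliance tool.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for one AI system under review."""
    name: str
    in_scope: bool          # step 1: is the system in scope of the AI Act?
    is_gpai: bool = False   # step 2: is it a general-purpose AI model?
    obligations: list[str] = field(default_factory=list)  # step 3: per-use-case obligations

def review(record: AISystemRecord) -> list[str]:
    """Apply the three triage steps in order, returning notes for counsel."""
    if not record.in_scope:
        return ["out of scope: no AI Act obligations apply"]
    notes = ["in scope: classify risk tier and register if high-risk"]
    if record.is_gpai:
        notes.append("GPAI: transparency requirements; Codes of Practice pending (April 2025)")
    notes.extend(f"obligation: {o}" for o in record.obligations)
    return notes
```

The early exit on the scope check mirrors the primer's ordering: everything else only matters once a system is actually covered by the Act.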
In conclusion, the EU's AI regulation is a significant development that will impact how AI technologies are developed and deployed. It sets out clear guidelines for high-risk applications, ensuring accountability and transparency in AI use. Organizations working with AI systems must stay informed and take necessary steps to comply with the regulation to avoid penalties and maintain trust with consumers and regulators.