EU’s risk-based AI regulation takes effect on August 1, 2024

  • The EU’s risk-based regulation for AI applications takes effect on August 1, 2024, with staggered compliance deadlines for different types of AI developers and applications.
  • Initial bans on specific AI uses, such as law enforcement’s use of remote biometrics in public places, will apply within six months, with most provisions fully applicable by mid-2026.

OUR TAKE
The EU’s AI Act marks a major step towards regulating AI technologies, aiming to ensure responsible use while mitigating risks. The regulation underscores the importance of compliance for developers, with significant penalties in place for violations. The staggered implementation allows for gradual adaptation, making it essential for AI companies to stay on top of their obligations.

— Zoey Zhu, BTW reporter

What happened

The European Union’s risk-based regulation for applications of artificial intelligence came into force on Thursday, August 1, 2024, starting the clock on a series of staggered compliance deadlines that apply to different types of AI developers and applications. Most provisions will be fully applicable by mid-2026. But the first deadline, which enforces bans on a small number of prohibited uses of AI in specific contexts, such as law enforcement use of remote biometrics in public places, will apply in just six months’ time.

Under the bloc’s approach, most applications of AI are considered to pose low or no risk, so they will not fall in scope of the regulation at all. A subset of potential uses of AI is classified as high risk, such as biometrics and facial recognition, AI-based medical software, or AI used in domains like education and employment. Developers of these systems will need to comply with risk and quality management obligations, including undertaking a pre-market conformity assessment, with the possibility of being subject to regulatory audit. High-risk systems used by public sector authorities or their suppliers will also have to be registered in an EU database.


Why it’s important

A third “limited risk” tier applies to AI technologies such as chatbots or tools that could be used to produce deepfakes. These will have to meet some transparency requirements to ensure users are not deceived. Penalties are also tiered, with fines of up to 7% of global annual turnover for violations of banned AI applications; up to 3% for breaches of other obligations; and up to 1.5% for supplying incorrect information to regulators.
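
To make the tiers concrete, these percentages cap fines relative to a company’s global annual turnover. The Python sketch below illustrates that arithmetic; the category labels and function name are hypothetical, and the Act also pairs each percentage with a fixed-sum ceiling (whichever is higher applies), which this sketch omits.

```python
# Illustrative sketch of the AI Act's tiered, turnover-based penalty ceilings.
# Category labels and the function name are hypothetical, not legal terms.

PENALTY_RATES = {
    "prohibited_ai_use": 0.07,       # violations of banned AI applications: up to 7%
    "other_obligation": 0.03,        # breaches of other obligations: up to 3%
    "incorrect_information": 0.015,  # supplying incorrect info to regulators: up to 1.5%
}

def max_percentage_fine(category: str, global_annual_turnover_eur: float) -> float:
    """Return the turnover-based fine ceiling for a given violation category."""
    return PENALTY_RATES[category] * global_annual_turnover_eur

# Example: a firm with EUR 2 billion in global annual turnover that violates
# a prohibition faces a percentage-based ceiling of EUR 140 million.
print(max_percentage_fine("prohibited_ai_use", 2_000_000_000))  # 140000000.0
```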

Another important strand of the law applies to developers of so-called general purpose AIs (GPAIs). Again, the EU has taken a risk-based approach, with most GPAI developers facing light transparency requirements — though they will need to provide a summary of training data and commit to having policies to ensure they respect copyright rules, among other requirements. Only a subset of the most powerful models will also be expected to undertake risk assessment and mitigation measures. Currently, these GPAIs with the potential to pose a systemic risk are defined as models trained using a total computing power of more than 10^25 FLOPs.
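
In practice, that systemic-risk test reduces to a single numeric threshold on cumulative training compute. Here is a minimal sketch, assuming a hypothetical helper name; the threshold itself comes from the Act and may be revised over time, and models can also be designated on other grounds.

```python
# Illustrative sketch of the compute threshold for systemic-risk GPAIs.
# The function name is hypothetical; only the 10^25 FLOPs figure is from the Act.

SYSTEMIC_RISK_FLOPS = 1e25  # total training compute threshold (10^25 FLOPs)

def is_systemic_risk_gpai(training_flops: float) -> bool:
    """A GPAI model trained with more than 10^25 total floating-point
    operations is presumed to pose a systemic risk under the Act."""
    return training_flops > SYSTEMIC_RISK_FLOPS

print(is_systemic_risk_gpai(3e25))  # True: risk assessment and mitigation expected
print(is_systemic_risk_gpai(5e23))  # False: light transparency requirements only
```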

While enforcement of the AI Act’s general rules is devolved to member-state-level bodies, the rules for GPAIs are enforced at the EU level. What exactly GPAI developers will need to do to comply with the AI Act is still being discussed, as the Codes of Practice have yet to be drawn up. Earlier this week, the AI Office, a strategic oversight and AI-ecosystem-building body, kicked off a consultation and call for participation in this rule-making process, saying it expects to finalise the Codes in April 2025.

Zoey Zhu

Zoey Zhu is a news reporter at Blue Tech Wave media specialising in tech trends. She holds a master’s degree from University College London. Send emails to z.zhu@btw.media.