- The EU AI Act, now published in the bloc’s Official Journal, will come into force on August 1, with full applicability to AI developers by mid-2026.
- This landmark regulation aims to ensure ethical AI development across Europe, but the phased implementation raises concerns about potential industry influence undermining the Act’s protections.
OUR TAKE
This regulation could be a double-edged sword. On one hand, it aims to curb AI’s negative potential, protecting citizens from invasive and unethical AI practices. On the other hand, there is a fear that such stringent rules might stifle innovation and hinder Europe’s ability to compete with global AI giants. The EU must remain vigilant to ensure these rules are enforced effectively without smothering the industry they govern.
–Ashley Wang, BTW reporter
What happened
The full text of the EU AI Act, the European Union’s landmark risk-based regulation for AI, has been published in the bloc’s Official Journal. Set to come into force on August 1, the law will be fully applicable to AI developers by mid-2026, though it will be implemented in phases with various deadlines over the next few years.
The EU AI Act, agreed upon by EU lawmakers in December last year, introduces a comprehensive rulebook for AI, placing different obligations on developers based on use cases and perceived risk. While most AI applications considered low-risk will not be regulated, the law bans a small number of high-risk uses, including China-style social credit scoring and untargeted facial recognition database compilation.
The phased implementation of the EU AI Act begins with the prohibition of unacceptable-risk AI applications, which takes effect six months after the law enters into force, in early 2025. This includes the ban on real-time remote biometric identification by law enforcement in public spaces, with exceptions for cases like missing person searches.
Also read: EU accuses Meta of violating digital competition rules
Also read: Apple’s App Store rules breach EU tech rules, EU regulators say
Why it’s important
The EU AI Act is the first comprehensive regulatory framework for AI, aiming to ensure ethical AI development and use across Europe. By introducing stringent rules, the EU is making a bold statement against the potential dangers of unregulated AI, ensuring that AI technologies are developed responsibly and transparently. These rules reflect the EU’s commitment to protecting individual freedom in the digital age and its insistence on robust data quality and anti-bias standards.
However, the phased implementation of this landmark legislation raises several critical questions. With deadlines extending over the next few years, there is a risk that industry players may unduly influence the drafting of vital guidelines. Concerns are already mounting about the involvement of consultancy firms and AI industry stakeholders in shaping the codes of practice. The EU must remain vigilant to ensure that corporate interests do not dilute these protections, undermining the very purpose of the Act.