The EU’s AI Act is a new starting point for global regulation

  • The EU’s AI Act, which comes into force on 1 August 2024, sets out phased compliance deadlines for different types of AI developers and applications, with high-risk AI systems required to be fully compliant by mid-2026.
  • The Act categorises AI applications by risk level: low/no risk, limited risk and high risk. Each category carries different compliance requirements and transition periods.

OUR TAKE
The EU’s AI Act goes into effect on 1 August 2024, making it the world’s first comprehensive regulation of AI applications. The Act adopts a risk-based approach, categorising AI applications into low/no risk, limited risk and high risk, each with corresponding compliance requirements and transition periods. National regulators will be designated in each of the 27 member states to oversee compliance, with coordination at EU level to ensure the rules are applied consistently across the bloc.

-Rae Li, BTW reporter

What happened

The EU’s AI Act comes into force on 1 August 2024, marking the start of comprehensive regulation of AI applications. The regulation categorises AI applications into low/no risk, limited risk and high risk, with different compliance requirements and transition periods for each. High-risk applications, such as biometric identification systems and medical software, must undergo risk assessments and compliance checks, and may face regulatory audits. General-purpose AI developers must comply with transparency requirements and copyright rules. Violators face fines of up to 7% of annual global turnover.

For high-risk AI systems, developers must be fully compliant by mid-2026. In addition, the EU plans to create a database for registering high-risk AI systems deployed in the public sector. The EU is also placing special emphasis on the regulation of general-purpose AI (GPAI), requiring developers to provide a summary of the data used to train their models and to ensure compliance with copyright rules. National regulators will be responsible for enforcing the general rules of the AI Act, while the rules for GPAI will be enforced at EU level. Discussions on how GPAI developers will comply with the Act’s specific requirements are still ongoing, and the related codes of practice have not yet been drafted. The EU AI Office began consulting on the rule-making process this week and expects to finalise these codes by April 2025.

AI companies such as OpenAI say they will work closely with the EU AI Office and other relevant bodies as the new law is implemented, to ensure that technologies such as their GPT large language models remain compliant. For high-risk AI systems, EU standards bodies are helping to develop the specific requirements, which are due to be finalised by April 2025 and will take effect once approved by the EU.

Also read: CIS embraces post-WRC-23 spectrum and orbital regulation

Also read: Trump advocates cryptocurrency, targeting China and regulation

Why it’s important

The EU’s AI Act not only provides guidance on the innovation and application of AI technologies, but also ensures that these technologies develop in ways that respect individual privacy, safeguard public safety and promote fair competition. By clearly distinguishing between AI applications at different risk levels and setting out the corresponding compliance requirements, the Act helps guide the healthy development of AI while protecting consumers and the public from potential harms.

The Act’s implementation reflects the EU’s leadership in global AI governance and provides a reference point for other countries and regions developing their own AI regulations. It underlines the importance of legal and regulatory frameworks amid the rapid development of AI, and the need to ensure that technological advances evolve in tandem with society’s ethical and legal norms. By setting compliance deadlines and fine mechanisms, the EU aims to promote the responsible use of AI while encouraging innovation and fair competition, with far-reaching implications for the sustainable development of AI worldwide.

Rae Li

Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
