
    The EU’s AI Act is a new starting point for global regulation

By Rae Li | August 2, 2024 | 4 Mins Read
    • The EU’s AI Act, which comes into force on 1 August 2024, sets out phased compliance deadlines for different types of AI developers and applications, with high-risk AI systems required to be fully compliant by mid-2026.
    • The regulation categorises AI applications by risk level: low/no risk, high risk and limited risk, each with its own compliance requirements and transition period.

    OUR TAKE
    The EU’s AI Act goes into effect on 1 August 2024, marking the implementation of the world’s first comprehensive regulation of AI applications. The Act adopts a risk-based approach, categorising AI applications into low/no risk, high risk and limited risk, with corresponding compliance requirements and transition periods. The EU plans to establish national regulators in each of the 27 member states to oversee compliance, and will coordinate their work to ensure the Act is applied consistently across the bloc.

    -Rae Li, BTW reporter

    What happened

    The EU’s AI Act comes into force on 1 August 2024, marking the start of comprehensive regulation of AI applications. The regulation categorises AI applications into low/no risk, high risk and limited risk, each with different compliance requirements and transition periods. High-risk applications, such as biometrics and medical software, must undergo risk assessments and compliance checks, and may face regulatory audits. Developers of general-purpose AI must comply with transparency requirements and copyright rules. Violators face fines of up to 7% of annual global turnover.

    For high-risk AI systems, developers must be fully compliant by mid-2026. In addition, the EU plans to create a database for registering high-risk AI systems deployed in the public sector. The EU is also placing special emphasis on the regulation of General Purpose AI (GPAI), requiring developers to provide a summary of training data and ensure compliance with copyright rules. National regulators will enforce the general rules of the AI Act, while the rules for GPAI will be enforced at the EU level. Discussions on how GPAI developers will meet the Act’s specific requirements are still ongoing, and a related code of conduct has not yet been drawn up. The EU AI Office began consultations on the rule-making process this week and expects to finalise these codes by April 2025.

    AI companies such as OpenAI say they will work closely with the EU AI Office and other relevant bodies to ensure that technologies such as their GPT large language models remain compliant as the new rules take effect. For high-risk AI systems, EU standards bodies are developing the specific requirements, which are due to be finalised by April 2025 and will come into force once approved by the EU.

    Also read: CIS embraces post-WRC-23 spectrum and orbital regulation

    Also read: Trump advocates cryptocurrency, targeting China and regulation

    Why it’s important

    The EU’s AI Act not only provides guidance on the innovation and application of AI technologies, but also ensures that these technologies develop while respecting individual privacy, protecting public safety and promoting fair competition. By clearly distinguishing between AI applications at different risk levels and setting out the corresponding compliance requirements, the Act helps guide the healthy development of AI technologies while protecting consumers and the public from potential risks.

    The implementation of the Act reflects the EU’s leadership in global AI governance and provides a reference for other countries and regions developing AI regulations. It underlines the importance of legal and regulatory frameworks amid the rapid development of AI, and the need to ensure that technological advances evolve in tandem with societal, ethical and legal norms. By setting compliance deadlines and fines, the EU aims to promote the responsible use of AI while encouraging innovation and fair competition, with far-reaching implications for the sustainable development of AI technology globally.

    Rae Li

    Rae Li is an intern reporter at BTW Media covering IT infrastructure and Internet governance. She graduated from the University of Washington in Seattle. Send tips to rae.li@btw.media.
