Blue Tech Wave Media
Internet Governance

US proposes requiring reporting for AI and cloud providers

By Tacy Ding | September 10, 2024
  • The U.S. Department of Commerce proposed new reporting requirements for AI developers and cloud providers to ensure cybersecurity.
  • The rules would mandate reporting on AI model development, cybersecurity efforts, and red-teaming results, aimed at detecting risks like aiding cyberattacks or lowering barriers to developing WMDs.

OUR TAKE
This move reflects growing concern over AI’s impact and the need for rigorous oversight. By mandating detailed reporting, the U.S. government is stepping up efforts to mitigate the risks of advanced AI, such as its misuse in cyberattacks or in the development of dangerous weapons. This proactive approach highlights the balancing act between fostering innovation and safeguarding against potential threats, aiming to keep AI development safe and under control.
-Tacy Ding, BTW reporter

What happened

On Monday, the U.S. Department of Commerce announced a proposal for detailed reporting requirements aimed at developers of advanced artificial intelligence and at cloud computing providers, to ensure these technologies are secure and can withstand cyberattacks.

The proposal from the department’s Bureau of Industry and Security would mandate reporting to the federal government regarding development activities of “frontier” AI models and computing clusters.

It would also mandate reporting on cybersecurity measures, as well as the outcomes of so-called red-teaming efforts, such as testing for dangerous capabilities, including the potential to aid in cyberattacks or reduce barriers for non-experts to develop chemical, biological, radiological, or nuclear weapons.

External red-teaming has been utilised for years in cybersecurity to identify emerging risks, with the term originating from U.S. Cold War simulations, where the adversary was referred to as the “red team.”

Also read: DOJ eyes Google AI plans to tackle search monopoly

Also read: US implements new controls on advanced tech alongside international partners

Why it’s important

Generative AI, which can produce text, images, and videos in response to open-ended prompts, has generated both excitement and concern. There are fears it could render certain jobs obsolete, disrupt elections, and potentially overpower humans, leading to catastrophic consequences.

The Department of Commerce stated that the information gathered under the proposal “will be crucial for ensuring these technologies meet stringent safety and reliability standards, can withstand cyberattacks, and pose minimal risk of misuse by foreign adversaries or non-state actors.”

In October 2023, President Joe Biden signed an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the U.S. government before these systems are released to the public.


Tacy Ding

Tacy Ding is an intern reporter at BTW Media covering networking. She is studying at Zhejiang Gongshang University. Send tips to t.ding@btw.media.

