US proposes requiring reporting for AI and cloud providers

  • The U.S. Department of Commerce proposed new reporting requirements for AI developers and cloud providers to ensure cybersecurity.
  • The rules would mandate reporting on AI model development, cybersecurity efforts, and red-teaming results, aimed at detecting risks like aiding cyberattacks or lowering barriers to developing WMDs.

OUR TAKE
This move reflects growing concerns over AI’s impact and the need for rigorous oversight. By mandating detailed reporting, the U.S. government is stepping up efforts to mitigate risks associated with advanced AI, such as misuse in cyberattacks or dangerous technologies. This proactive approach highlights the balancing act between fostering innovation and safeguarding against potential threats, aiming to keep AI development safe and under control.
-Tacy Ding, BTW reporter

What happened

The proposal for detailed reporting requirements, aimed at developers of advanced artificial intelligence and cloud computing providers to ensure these technologies are secure and capable of withstanding cyberattacks, was announced by the U.S. Department of Commerce on Monday.

The proposal from the department’s Bureau of Industry and Security would mandate reporting to the federal government regarding development activities of “frontier” AI models and computing clusters.

It would also mandate reporting on cybersecurity measures, as well as the outcomes of so-called red-teaming efforts, such as testing for dangerous capabilities, including the potential to aid in cyberattacks or reduce barriers for non-experts to develop chemical, biological, radiological, or nuclear weapons.

External red-teaming has been utilised for years in cybersecurity to identify emerging risks, with the term originating from U.S. Cold War simulations, where the adversary was referred to as the “red team.”

Also read: DOJ eyes Google AI plans to tackle search monopoly

Also read: US implements new controls on advanced tech alongside international partners

Why it’s important 

Generative AI, which can produce text, images, and videos in response to open-ended prompts, has generated both excitement and concern. There are fears it could render certain jobs obsolete, disrupt elections, and potentially overpower humans, leading to catastrophic consequences.

The Department of Commerce stated that the information gathered under the proposal “will be crucial for ensuring these technologies meet stringent safety and reliability standards, can withstand cyberattacks, and pose minimal risk of misuse by foreign adversaries or non-state actors.”

In October 2023, President Joe Biden signed an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the U.S. government before these systems are released to the public.

The rule would introduce reporting requirements for advanced artificial intelligence (AI) models and computing clusters.

Tacy-Ding

Tacy Ding

Tacy Ding is an intern reporter at BTW Media covering network. She is studying at Zhejiang Gongshang University. Send tips to t.ding@btw.media.

Related Posts

Leave a Reply

Your email address will not be published. Required fields are marked *