• The agreement, the first of its kind, comes as the two companies face regulatory scrutiny over the safe and ethical use of AI technology.
  • The institute, part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), will also work with the U.K.’s AI Safety Institute and provide feedback to companies on potential safety improvements.

OUR TAKE
AI startups OpenAI and Anthropic have signed agreements with the U.S. government to conduct research, testing, and evaluation of their AI models, a move aimed at jointly assessing safety risks and mitigating potential problems through regulatory channels before they become catastrophic.
— Iydia Ding, BTW reporter

What happened

Artificial intelligence startups OpenAI and Anthropic have signed agreements with the US government to conduct research, testing and evaluation of their artificial intelligence models, the U.S. AI Safety Institute said on Thursday. Under the agreements, the institute will have access to major new models from OpenAI and Anthropic before and after their public launches. The agreements will also enable collaborative research to assess the capabilities of the models and the risks associated with them. They come as California lawmakers are set to vote as early as this week on a bill that would broadly regulate how artificial intelligence is developed and deployed in the state.

“We believe the institute has a key role to play in defining American leadership in responsibly developing AI, and hope that our collaboration provides a framework that the rest of the world can build upon,” said Jason Kwon, chief strategy officer at OpenAI, the maker of ChatGPT.

Also read: Baidu’s upgraded AI model hits 300 million users

Also read: Character.AI introduces new calling feature

Why it’s important

The agreement, the first of its kind, comes as the two companies face regulatory scrutiny over the safe and ethical use of AI technology.

“These agreements are just a start, but they are an important milestone as we work to help manage the future of AI responsibly,” said Elizabeth Kelly, director of the US AI Safety Institute. The institute, part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), will also work with the U.K.’s AI Safety Institute and provide feedback to companies on potential safety improvements.

The US AI Safety Institute was established last year under an executive order from the Biden administration to assess known and emerging risks of AI models. This collaborative model can help regulators better understand how AI technology is developing, so that more adaptable and effective policies and regulations can be formulated.