Trends
OpenAI, Anthropic sign agreements with US government to test AI

Headline
The agreement, the first of its kind, comes as the two companies face regulatory scrutiny over the safe and ethical use of AI technology.
Context
OUR TAKE: AI startups OpenAI and Anthropic have signed agreements with the U.S. government to conduct research, testing, and evaluation of their AI models, a move aimed at jointly assessing security risks and mitigating potentially catastrophic problems through regulatory channels. — Lydia Ding, BTW reporter

California lawmakers are set to vote as early as this week on a bill that would broadly regulate how artificial intelligence is developed and deployed in the state.

AI startups OpenAI and Anthropic have signed agreements with the U.S. government to conduct research, testing, and evaluation of their AI models, the U.S. AI Safety Institute said on Thursday. Under the agreements, the AI Safety Institute will have access to major new models from OpenAI and Anthropic before and after their public release. The agreements will also enable collaborative research to assess the capabilities of AI models and the risks associated with them.
Evidence
Pending intelligence enrichment.
Analysis
“We believe the Institute has a key role to play in defining American leadership in responsibly developing AI, and we hope that our collaboration provides a framework that the rest of the world can build upon,” said Jason Kwon, chief strategy officer at OpenAI, the maker of ChatGPT.

Also read: Baidu’s upgraded AI model hits 300 million users
Also read: Character.AI introduces new calling feature

The agreement, the first of its kind, comes as the two companies face regulatory scrutiny over the safe and ethical use of AI technology.
Key Points
- The agreement, the first of its kind, comes as the two companies face regulatory scrutiny over the safe and ethical use of AI technology.
- The institute, part of the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), will also work with the U.K.’s AI Safety Institute and provide feedback to companies on potential safety improvements.
Actions
Pending intelligence enrichment.





