Trends
OpenAI’s next model to undergo safety checks by the U.S. Government

Headline
Altman says that the company’s next AI model will be subject to U.S. Government safety checks, marking a significant step in AI development.
Context
OUR TAKE
OpenAI’s initiative to subject its next model to government safety checks sets a precedent for other tech companies and research institutions to prioritise safety, ethics, and accountability in their AI projects. As AI continues to advance rapidly, ensuring the responsible and beneficial use of these technologies will be essential for building trust and confidence among stakeholders and the public.
–Rebecca Xu, BTW reporter

OpenAI has become a resounding name in the AI industry, thanks to ChatGPT and the suite of foundation models the company has developed. Under Altman’s leadership, the lab has actively pushed out new products, but this fast-paced approach has also attracted criticism: several insiders, including its former co-head of safety, claim that the lab has overlooked safety issues in advanced AI research.
Evidence
Pending intelligence enrichment.
Analysis
In light of these concerns, five U.S. Senators recently wrote to Altman, questioning OpenAI’s commitment to safety and raising cases of potential retaliation against former employees who publicly voiced concerns, citing the non-disparagement clauses in their employment contracts.

In a post on X, Sam Altman revealed that OpenAI has been collaborating with the U.S. AI Safety Institute, a federal body, to establish an arrangement granting the institute early access to the company’s upcoming foundation model. The partnership aims to advance the scientific understanding and evaluation of AI technologies.

Altman also noted that the company has revised its non-disparagement policies, now permitting both current and former staff to openly voice concerns about the company and its projects. OpenAI maintains its commitment to allocating at least 20% of its computational resources to AI safety research.

Also read: OpenAI improves AI safety through U.S. AI Safety Institute
Key Points
- OpenAI CEO Sam Altman has said that the next model developed by OpenAI will undergo safety checks and evaluations by the U.S. Government before its release to the public.
- The move is a major stride toward the careful development and deployment of advanced AI, addressing concerns over risks and ethical issues.
Actions
Pending intelligence enrichment.
