- OpenAI CEO Sam Altman has said that the next model developed by OpenAI will undergo safety checks and evaluations by the U.S. Government before its release to the public.
- The move marks a major stride toward the careful development and use of advanced AI, addressing concerns over safety risks and ethical issues.
OUR TAKE
OpenAI’s initiative to subject its next model to government safety checks sets a precedent for other tech companies and research institutions to prioritise safety, ethics, and accountability in their AI projects. As AI continues to advance rapidly, ensuring the responsible and beneficial use of these technologies will be essential for building trust and confidence among stakeholders and the public.
–Rebecca Xu, BTW reporter
What happened
OpenAI has become a household name in the AI industry, thanks to ChatGPT and the suite of foundation models developed by the company. Under Altman’s leadership, the lab has actively promoted the development of new products, but this fast-paced approach has also attracted criticism. Critics, including its former co-head of safety, claim that the lab has overlooked safety issues in advanced AI research.
In light of these concerns, five U.S. senators recently wrote to Altman, questioning OpenAI’s commitment to safety and asking about potential retaliation, based on the non-disparagement clauses in their employment contracts, against former employees who publicly raised concerns.
In a post on X, Sam Altman revealed that OpenAI has been collaborating with the U.S. AI Safety Institute, a federal body, on an agreement to grant the institute early access to its upcoming foundation model. The partnership aims to advance the scientific understanding and evaluation of AI technologies.
Altman also highlighted that the organisation has revised its non-disparagement policies, permitting both current and former staff to openly voice concerns about the company and its projects. OpenAI remains committed to dedicating at least 20% of its computational resources to AI safety research.
Also read: What is OpenAI?
Also read: OpenAI improves AI safety through U.S. AI Safety Institute
Also read: OpenAI supports legislation to shape the future of AI
Why it’s important
OpenAI, known for its cutting-edge AI research and its commitment to promoting safe and beneficial AI for society, has partnered with the U.S. Government to conduct rigorous safety assessments of its upcoming model. The collaboration aims to address potential risks such as unintended biases, security vulnerabilities, and ethical concerns that may arise from the model’s use.
By submitting the model to thorough safety checks by the U.S. Government, OpenAI seeks to demonstrate its commitment to transparency, accountability, and responsible innovation in artificial intelligence. The evaluation process will involve experts from various disciplines, including AI researchers, ethicists, policymakers, and representatives from civil society.
The decision to involve government oversight in the development of the AI model reflects the growing recognition of the importance of regulatory frameworks and safety mechanisms for advanced technologies. It also highlights the need for collaborative efforts between industry, academia, and government to address the complex challenges posed by AI development and deployment.