OpenAI collaborates with US AI Safety Institute for early safety testing

  • OpenAI CEO Sam Altman announced a collaboration with the US AI Safety Institute for early safety testing of its next major generative AI model.
  • The move follows criticism of OpenAI’s safety practices and aims to demonstrate a renewed focus on AI safety.

OUR TAKE
This collaboration highlights OpenAI’s response to growing concerns about AI safety, aiming to reassure critics and demonstrate a commitment to responsible AI development. By working with federal bodies, OpenAI seeks to balance innovation with safety protocols.
— Zoey Zhu, BTW reporter

What happened

OpenAI CEO Sam Altman announced on Thursday evening that the company will give the U.S. AI Safety Institute early access to its next major generative AI model for safety testing. The institute, housed within the Commerce Department’s National Institute of Standards and Technology, is tasked with assessing and addressing risks in AI platforms. The collaboration follows a similar deal OpenAI struck with the U.K.’s AI safety body in June.

OpenAI has faced criticism for deprioritising AI safety in recent months. In May, the company disbanded a unit focused on developing controls to prevent “superintelligent” AI systems from going rogue, shortly after the unit’s co-leads resigned, a sequence that sparked backlash from AI safety advocates. In response, OpenAI has taken steps to address these concerns, including eliminating restrictive non-disparagement clauses from employee exit agreements and committing 20% of its compute resources to safety research.

Also read: OpenAI supports legislation to shape the future of AI

Also read: OpenAI unveils real-time voice mode for ChatGPT

Why it’s important

The collaboration with the U.S. AI Safety Institute signals OpenAI’s effort to rebuild trust and demonstrate a commitment to AI safety. By granting early access to its next major generative AI model, OpenAI aims to have potential risks identified and mitigated before the model is widely released.

This move is also strategically timed with OpenAI’s endorsement of the Future of Innovation Act, a proposed Senate bill that would authorise the AI Safety Institute as an executive body setting standards and guidelines for AI models. OpenAI’s increased lobbying efforts and Altman’s seat on the Department of Homeland Security’s Artificial Intelligence Safety and Security Board further point to the company’s growing influence on AI policymaking at the federal level. Taken together, these steps are meant to reassure stakeholders and critics that OpenAI is serious about balancing the development of powerful generative AI technologies with robust safety protocols.

Zoey Zhu

Zoey Zhu is a news reporter at Blue Tech Wave media specialising in tech trends. She holds a master’s degree from University College London. Send emails to z.zhu@btw.media.
