Trends

OpenAI collaborates with US AI Safety Institute for early safety testing



Headline

OpenAI collaborates with US AI Safety Institute for early safety testing

Context

OUR TAKE: This collaboration highlights OpenAI’s response to growing concerns about AI safety, aiming to reassure critics and demonstrate a commitment to responsible AI development. By working with federal bodies, OpenAI seeks to balance innovation with safety protocols. — Zoey Zhu, BTW reporter

OpenAI CEO Sam Altman announced on Thursday evening that OpenAI is working with the US AI Safety Institute to provide early access to its next major generative AI model for safety testing. The US AI Safety Institute, part of the Commerce Department’s National Institute of Standards and Technology, aims to assess and address risks in AI platforms. This collaboration follows a similar deal with the UK’s AI safety body in June.

Evidence

Pending intelligence enrichment.

Analysis

OpenAI has faced criticism for deprioritising AI safety in recent months. In May, the company disbanded a unit focused on developing controls to prevent “super intelligent” AI systems from going rogue. The move led to the resignation of the unit’s co-leads and sparked backlash from AI safety advocates. In response, OpenAI has taken steps to address these concerns, including eliminating restrictive non-disparagement clauses and committing 20% of its compute resources to safety research.

Also read: OpenAI supports legislation to shape the future of AI

Also read: OpenAI unveils real-time voice mode for ChatGPT

The collaboration with the US AI Safety Institute signifies OpenAI’s effort to rebuild trust and demonstrate a commitment to AI safety. By providing early access to its next major generative AI model for safety testing, OpenAI aims to ensure that potential risks are identified and mitigated before the model is widely released.

Key Points

  • OpenAI CEO Sam Altman announced a collaboration with the US AI Safety Institute for early safety testing of its next major generative AI model.
  • The move follows criticism of OpenAI’s safety practices and aims to demonstrate a renewed focus on AI safety.

Actions

Pending intelligence enrichment.

Author

Zoey Zhu