OpenAI created a team to control ‘superintelligent’ AI

  • OpenAI’s Superalignment team, tasked with overseeing “superintelligent” AI systems, was assured 20% of the company’s computing power. However, they frequently faced refusals when requesting even a portion of this allocation, hindering their progress.
  • Several team members, including co-lead Jan Leike, resigned this week. Leike, a former DeepMind researcher who contributed to the development of ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT, cited the compute-allocation problem among his reasons for departing.
  • Leike went public with some reasons for his resignation on Friday morning. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote in a series of posts on X. 

Members of OpenAI’s Superalignment team, tasked with guiding advanced AI systems, were promised significant compute resources but often had their requests denied, hindering their progress. This led to several resignations, including that of co-lead Jan Leike, who had contributed to the development of ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT. Leike cited disagreements with OpenAI leadership over priorities, emphasizing the need to focus on preparing for future AI models, security, safety, alignment, and societal impact.


Superalignment team

OpenAI formed the Superalignment team last July, led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned from the company this week. The team had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within four years. Joined by scientists and engineers from OpenAI’s previous alignment division, as well as researchers from other organizations across the company, the team was to contribute research informing the safety of both in-house and non-OpenAI models and, through initiatives including a research grant program, solicit work from and share work with the broader AI industry.

The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But, as product launches began to take up an increasing amount of OpenAI leadership’s bandwidth, the Superalignment team found itself having to fight for more upfront investments — investments it believed were critical to the company’s stated mission of developing superintelligent AI for the benefit of all humanity.


Smarter-than-human machines are dangerous

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike said. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

Following Leike’s departure, John Schulman, another OpenAI co-founder, has moved to head up the type of work the Superalignment team was doing. But there will no longer be a dedicated team — instead, a loosely associated group of researchers will be embedded in divisions throughout the company. An OpenAI spokesperson described the change as “integrating [the team] more deeply.”

The fear is that, as a result, OpenAI’s AI development won’t be as safety-focused as it could’ve been.


Aria Jiang

Aria Jiang is an intern reporter at BTW Media covering IT infrastructure. She graduated from Ningbo Tech University. Send tips to
