    Blue Tech Wave Media

    OpenAI created a team to control ‘superintelligent’ AI

By Aria Jiang | May 20, 2024
    • OpenAI’s Superalignment team, tasked with overseeing “superintelligent” AI systems, was promised 20% of the company’s computing power. However, the team frequently had requests for even a fraction of that allocation refused, hindering its progress.
    • Amid these and other concerns, several team members resigned this week, including co-lead Jan Leike, a former DeepMind researcher who contributed to the development of ChatGPT, GPT-4, and their predecessor, InstructGPT. Leike cited issues such as the compute allocation dispute among his reasons for leaving.
    • Leike went public with some of those reasons on Friday morning. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote in a series of posts on X.

    Members of OpenAI’s Superalignment team, tasked with guiding advanced AI systems, were promised significant compute resources but often had requests denied, hindering their progress. This led to several resignations, including co-lead Jan Leike, who had contributed to the development of ChatGPT, GPT-4, and their predecessor, InstructGPT. Leike cited disagreements with OpenAI leadership over priorities, emphasising the need to focus on preparing for future AI models, security, safety, alignment, and societal impact.

    Also read: Reddit and OpenAI announce partnership

    Superalignment team

    OpenAI formed the Superalignment team last July, led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned from the company this week. It had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within the next four years. Joined by scientists and engineers from OpenAI’s previous alignment division as well as researchers from other organisations across the company, the team was to contribute research informing the safety of both in-house and non-OpenAI models and, through initiatives including a research grant programme, solicit from and share work with the broader AI industry.

    The Superalignment team did manage to publish a body of safety research and funnel millions of dollars in grants to outside researchers. But, as product launches began to take up an increasing amount of OpenAI leadership’s bandwidth, the Superalignment team found itself having to fight for more upfront investments — investments it believed were critical to the company’s stated mission of developing superintelligent AI for the benefit of all humanity.

    Also read: OpenAI could announce new AI model to directly take on Google

    Smarter-than-human machines are dangerous

    “Building smarter-than-human machines is an inherently dangerous endeavor,” Leike said. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

    Following Leike’s departure, John Schulman, another OpenAI co-founder, has moved to head up the kind of work the Superalignment team was doing. But there will no longer be a dedicated team — instead, the work will fall to a loosely associated group of researchers embedded in divisions throughout the company. An OpenAI spokesperson described the change as “integrating [the team] more deeply.”

    The fear is that, as a result, OpenAI’s AI development won’t be as safety-focused as it could’ve been.

    Aria Jiang

    Aria Jiang is an intern reporter at BTW Media covering IT infrastructure. She graduated from Ningbo Tech University. Send tips to a.jiang@btw.media
