OpenAI has dismantled its Superalignment team, initially formed to address AI risks, following the resignations of key leaders Ilya Sutskever and Jan Leike.
OpenAI Disbands Superalignment Team Days After Leaders Resign, Sparking Concerns Over AI Safety
According to a Business Insider report, OpenAI formed its Superalignment team in July 2023, led by Ilya Sutskever and Jan Leike. The team was focused on reducing AI dangers, such as the possibility of it "going rogue."
The team was disbanded days after its leaders, Ilya Sutskever and Jan Leike, announced their resignations earlier this week. Sutskever stated in his post that he was "confident that OpenAI will build AGI that is both safe and beneficial" under the present leadership.
He also stated that he was "excited for what comes next," referring to a "very personally meaningful" project. The former OpenAI executive has yet to discuss the project in depth but said he would share more details later.
OpenAI Leaders' Departures Raise Questions About Company Values and AI Safety Priorities
Sutskever, a cofounder and former chief scientist at OpenAI, made waves after announcing his departure. In November, he helped lead the brief removal of CEO Sam Altman. Although Sutskever later said he regretted his role in Altman's ouster, his future at OpenAI had been in question since Altman's reinstatement.
Following Sutskever's announcement, Leike said on X, formerly Twitter, that he would also leave OpenAI. On May 17, the former executive wrote a series of posts explaining his departure, which he said came after disagreeing with the company's core priorities for "quite some time."
Leike stated that his team had been "sailing against the wind" and had struggled to obtain computing resources for its research. The Superalignment team's objective was to use 20% of OpenAI's computing capacity over the next four years to "build a roughly human-level automated alignment researcher," according to OpenAI's announcement of the team last July.
Leike added, "OpenAI must become a safety-first AGI company." He said that building generative AI is "an inherently dangerous endeavor" and that OpenAI had focused more on releasing "shiny products" than on safety.
Jan Leike did not return a request for comment.
The Superalignment team's goal was to "solve the core technical challenges of superintelligence alignment in four years," which the company conceded was "incredibly ambitious." It also acknowledged that success was not guaranteed.
The team addressed risks such as "misuse, economic disruption, disinformation, bias and discrimination, addiction, and overreliance." According to the company's post, the team's work was in addition to OpenAI's existing efforts to improve the safety of current models, such as ChatGPT.
Wired reported that the team's remaining members have been reassigned to other OpenAI teams.
Photo: Jonathan Kemper/Unsplash





