OpenAI founders Sam Altman and Greg Brockman are defending their safety measures after top researchers Ilya Sutskever and Jan Leike resigned, highlighting internal disagreements on AI priorities.
Key OpenAI Safety Researchers Resign Amid Disputes Over AI Development Priorities
In a recent report by Business Insider, Ilya Sutskever, the company's chief scientist and a co-founder, announced on X that he was leaving on May 14. Hours later, his colleague Jan Leike followed suit.
Sutskever and Leike led OpenAI's superalignment team, which worked on keeping AI systems aligned with human interests. That work sometimes put them at odds with members of the company's leadership who pushed for more aggressive development.
"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point," Leike wrote on X on May 17.
Sutskever was one of six board members who attempted to remove Altman as CEO in November, though he later said he regretted the decision.
After their departures, Altman called Sutskever "one of the greatest minds of our generation" and said on X that he was "super appreciative" of Leike's contributions. He also acknowledged that Leike was right: "We have a lot more to do; we are committed to doing it."
However, as public concern grew, Brockman provided more information on Saturday about how OpenAI plans to tackle safety and risk in the future, particularly as it advances artificial general intelligence and builds AI systems that are more advanced than chatbots.
In a nearly 500-word post on X that he and Altman both signed, Brockman discussed the efforts OpenAI has already made to assure the technology's safe development and implementation.
"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," Brockman wrote.
Altman recently stated that the ideal approach to regulating AI would be through an international institution that provides appropriate safety testing. Still, he also raised concerns that government lawmakers may not fully understand the technology they are attempting to regulate.
OpenAI Faces Skepticism Despite Efforts to Ensure Safe Deployment of Advanced AI Systems
According to Brockman, OpenAI has also laid the groundwork for safely deploying AI systems with greater capabilities than GPT-4.
"As we build in this direction, we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines," Brockman stated.
Brockman and Altman concluded in their post that the best way to anticipate threats is through a "very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," as well as collaboration with "governments and many stakeholders on safety."
But not everyone is convinced that the OpenAI team is moving forward with its research in a way that ensures human safety, least of all, it appears, the people who until a few days ago oversaw the company's efforts in this area.
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.
Photo: Andrew Neel/Unsplash

