OpenAI founders Sam Altman and Greg Brockman are defending their safety measures after top researchers Ilya Sutskever and Jan Leike resigned, highlighting internal disagreements on AI priorities.
Key OpenAI Safety Researchers Resign Amid Disputes Over AI Development Priorities
As reported by Business Insider, Ilya Sutskever, the company's chief scientist and co-founder, announced on X on May 14 that he was leaving. Hours later, his colleague Jan Leike followed suit.
Sutskever and Leike led OpenAI's superalignment team, which worked on keeping AI systems aligned with human interests. This sometimes put them at odds with members of the company's leadership who pushed for more aggressive development.
"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point," Leike wrote on X on May 17.
Sutskever was one of six board members who attempted to remove Altman as CEO in November, though he later indicated he regretted the decision.
Altman called Sutskever "one of the greatest minds of our generation" after the departures and said on X that he was "super appreciative" of Leike's contributions. He also acknowledged that Leike had a point: "We have a lot more to do; we are committed to doing it."
However, as public concern grew, Brockman provided more information on Saturday about how OpenAI plans to tackle safety and risk in the future, particularly as it advances artificial general intelligence and builds AI systems that are more advanced than chatbots.
In a nearly 500-word post on X signed by both him and Altman, Brockman outlined the efforts OpenAI has already made to ensure the technology's safe development and deployment.
"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," Brockman wrote.
Altman recently said that the ideal way to regulate AI would be through an international agency that provides appropriate safety testing, though he also raised concerns about lawmakers regulating a technology they may not fully understand.
OpenAI Faces Skepticism Despite Efforts to Ensure Safe Deployment of Advanced AI Systems
According to Brockman, OpenAI has also laid the groundwork for safely deploying AI systems with greater capabilities than GPT-4.
"As we build in this direction, we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines," Brockman stated.
Brockman and Altman concluded in their post that the best way to anticipate threats is through a "very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," as well as collaboration with "governments and many stakeholders on safety."
But not everyone is convinced that OpenAI is moving forward with its research in a way that ensures human safety, least of all, it appears, the people who until a few days ago oversaw the company's efforts in that area.
"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.
Photo: Andrew Neel/Unsplash

