OpenAI Founders Sam Altman and Greg Brockman Defend Safety Measures After Safety Researchers Quit


OpenAI founders Sam Altman and Greg Brockman are defending their safety measures after top researchers Ilya Sutskever and Jan Leike resigned, highlighting internal disagreements on AI priorities.

Key OpenAI Safety Researchers Resign Amid Disputes Over AI Development Priorities

According to a recent report by Business Insider, Ilya Sutskever, the company's chief scientist and a co-founder, announced on X on May 14 that he was leaving. Hours later, his colleague Jan Leike followed suit.

Sutskever and Leike led OpenAI's superalignment team, which worked on keeping AI systems aligned with human interests. This sometimes put them at odds with members of the company's leadership, who pushed for more aggressive development.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point," Leike wrote on X on May 17.

Sutskever was one of the board members who attempted to remove Altman as CEO in November, though he later said he regretted the decision.

After their departures, Altman called Sutskever "one of the greatest minds of our generation" and wrote on X that he was "super appreciative" of Leike's contributions. He also conceded that Leike had a point: "We have a lot more to do; we are committed to doing it."

However, as public concern grew, Brockman provided more information on Saturday about how OpenAI plans to tackle safety and risk in the future, particularly as it advances artificial general intelligence and builds AI systems that are more advanced than chatbots.

In a nearly 500-word post on X that he and Altman both signed, Brockman outlined the steps OpenAI has already taken to ensure the technology's safe development and deployment.

"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," Brockman wrote.

Altman recently stated that the ideal approach to regulating AI would be an international institution that provides appropriate safety testing. Still, he also raised concerns about government lawmakers regulating technology they may not fully understand.

OpenAI Faces Skepticism Despite Efforts to Ensure Safe Deployment of Advanced AI Systems

According to Brockman, OpenAI has also laid the groundwork for safely deploying AI systems with greater capabilities than GPT-4.

"As we build in this direction, we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines," Brockman stated.

Brockman and Altman concluded in their post that the best way to anticipate threats is through a "very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," as well as collaboration with "governments and many stakeholders on safety."

But not everyone is convinced that the OpenAI team is moving forward with its research in a way that ensures human safety, least of all, it seems, the people who until a few days ago oversaw the company's efforts in this area.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.

Photo: Andrew Neel/Unsplash
