OpenAI CEO Sam Altman and Other AI Leaders Join Federal Safety Board, Pledge to Protect Children Online

AI industry leaders join federal board to enhance national AI safety and child protection efforts.

Sam Altman of OpenAI and executives from Microsoft, Google, and Nvidia have joined a new government AI safety board. This initiative, part of a broader effort to regulate AI's deployment in critical sectors, coincides with leading AI firms pledging to safeguard children online.

Tech Titans Join Government AI Safety Board to Secure Critical Infrastructure
According to The Wall Street Journal (via Engadget), OpenAI CEO Sam Altman, Microsoft CEO Satya Nadella, and Alphabet CEO Sundar Pichai will join the government's Artificial Intelligence Safety and Security Board. They are joined by Nvidia's Jensen Huang, Northrop Grumman's Kathy Warden, Delta's Ed Bastian, and other tech and AI industry heavyweights.

The board will collaborate with and advise the Department of Homeland Security on how to deploy AI safely across the country's critical infrastructure. It is also tasked with developing recommendations to help power grid operators, transportation providers, and manufacturers defend their systems against potential AI-related threats.

Last year, the Biden administration established an AI safety board as part of a broad executive order to control AI development. According to the Homeland Security website, the board "includes AI experts from the private sector and government that advise the Secretary and the critical infrastructure community."

Homeland Security Secretary Alejandro Mayorkas told the Journal that using AI in critical infrastructure can greatly improve services, for instance by speeding up illness diagnoses or quickly detecting anomalies in power plants, but that it also carries significant risks, which the agency is working to address.

However, one can't help but wonder whether these AI executives can offer counsel that isn't designed to benefit themselves and their companies. Their work centers on developing and promoting AI technology, whereas the board's role is to ensure that critical infrastructure systems adopt AI responsibly.

Mayorkas, for his part, appears confident they will carry out their roles properly, telling the Journal that the tech executives "understand the mission of this board," which is "not a mission about business development."

AI Industry Leaders Commit to Child Safety, Tackling Abuse in Generative AI

Leading AI companies, including OpenAI, Microsoft, Google, and Meta, have vowed to prevent their AI tools from being used to exploit children or to create child sexual abuse material (CSAM). The campaign was spearheaded by Thorn, a child-safety organization, and All Tech Is Human, a non-profit that promotes responsible technology.

Thorn stated that the pledges made by AI companies "set a groundbreaking precedent for the industry and represent a significant leap in efforts to defend children from sexual abuse" as generative AI becomes a widespread feature.

The initiative's goal is to prevent the creation of sexually explicit material involving children and remove it from social media platforms and search engines. According to Thorn, more than 104 million files containing suspected child sexual abuse content were reported in the United States in 2023 alone. Without collaborative action, generative AI is poised to exacerbate the problem and overburden law enforcement agencies, which are already struggling to identify actual victims.

Thorn and All Tech Is Human published a paper titled "Safety by Design for Generative AI: Preventing Child Sexual Abuse" on April 23. It outlines strategies and recommendations for AI developers, search engines, social media platforms, and hosting companies to prevent generative AI from being used to harm children.

One recommendation, for example, urges companies to carefully vet the data sets used to train AI models, excluding not only instances of CSAM but also adult sexual content, given generative AI's tendency to blend the two.
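As a rough illustration of what that kind of curation can look like (a hypothetical sketch, not a method from the Thorn paper; the paths and blocklist are invented), a pipeline can hash every candidate training file and drop anything that matches a list of known-bad hashes. Production systems typically rely on perceptual hashes such as PhotoDNA, which survive re-encoding, rather than the exact cryptographic hashes used below.

```python
# Hypothetical pre-training filter: exclude any file whose hash appears on a
# blocklist of known-bad material (e.g., one maintained by a child-safety
# clearinghouse). All names and paths here are illustrative only.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large images never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def filter_training_files(data_dir: str, blocked_hashes: set[str]) -> list[Path]:
    """Return only the files whose hashes are absent from the blocklist."""
    return [
        p
        for p in Path(data_dir).rglob("*")
        if p.is_file() and sha256_of(p) not in blocked_hashes
    ]
```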

Thorn is also asking social media networks and search engines to remove links to websites and apps that let users "nudify" photographs of minors, which produces new AI-generated CSAM online. According to the paper, a flood of AI-generated CSAM would make identifying genuine victims of child sexual abuse even harder by exacerbating the "haystack problem," a reference to the sheer volume of content that law enforcement agencies must already sift through.

"This project was intended to make abundantly clear that you don't need to throw up your hands," Thorn's vice president of data science, Rebecca Portnoff, told the Wall Street Journal. "We want to be able to change the course of this technology so that its existing harms are cut off at the knees."

According to Portnoff, some companies have already agreed to separate images, video, and audio involving children from data sets containing adult content, to keep their models from blending the two. Others add watermarks to flag AI-generated content, but this method is not infallible: watermarks and metadata are easily removed.
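For a concrete sense of how fragile metadata-based labeling is, here is a minimal round trip in Python with Pillow (our own illustration; the "provenance" key is invented, not any standard): a PNG text chunk tags an image as AI-generated, and a single ordinary re-save, of the kind platforms routinely perform on uploads, erases it.

```python
# Round-trip sketch showing how easily metadata labels disappear.
# The "provenance" tag is hypothetical; Pillow simply does not carry
# text chunks over when a file is re-saved without them.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a trivial image and tag it as AI-generated via a PNG text chunk.
img = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("provenance", "ai_generated")
img.save("tagged.png", pnginfo=meta)

# The tag is readable as long as the file is untouched...
with Image.open("tagged.png") as tagged:
    print(tagged.info.get("provenance"))  # -> "ai_generated"
    tagged.save("resaved.png")            # an ordinary re-save, no pnginfo

# ...but any intermediary that re-encodes the image silently drops it.
with Image.open("resaved.png") as resaved:
    print(resaved.info.get("provenance"))  # -> None
```

Pixel-level watermarks are harder to remove than metadata, though cropping and re-compression can still degrade them, which is the sense in which the method is not infallible.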

Photo: The Economist/YouTube Screenshot
