OpenAI Forms Child Safety Team Amid Rising Concerns Over AI Misuse by Minors

OpenAI's new child safety team focuses on protecting minors in the digital AI landscape.

In response to criticism from parents and activists, OpenAI has established a new team to study how to prevent children from misusing or being harmed by its AI tools.

OpenAI Creates a New Team to Research Child Protection

OpenAI revealed the creation of a child safety team through a new job posting on its careers page. The company says this team will work with outside partners, as well as OpenAI's legal, investigations, and platform policy groups, to manage "processes, incidents, and reviews" involving underage users.

The team is currently hiring a child safety enforcement specialist to work on review processes connected to "sensitive" (presumably child-related) content and to apply OpenAI's policies to AI-generated content.

Tech vendors of a certain size devote a fair amount of resources to complying with rules such as the U.S. Children's Online Privacy Protection Rule, which restricts the kinds of information businesses can collect about children and what children can and cannot access online.

Therefore, it should not come as a massive surprise that OpenAI is hiring child safety specialists, especially given the company's expectation that a sizable portion of its user base will be underage. (OpenAI's current terms of use prohibit use by children under 13 and require parental consent for children ages 13 to 18.)

Creating the new team also suggests that OpenAI is wary of running afoul of laws on minors' use of AI and of attracting bad press. Notably, the team was formed just weeks after the company announced a collaboration with Common Sense Media to develop kid-friendly AI guidelines and landed its first education customer.

Many children and teenagers are using GenAI products for help with personal as well as academic problems. TechCrunch, citing a survey by the Center for Democracy and Technology, reported that 29% of children say they have used ChatGPT to deal with mental health or anxiety issues, 22% for friend-related issues, and 16% for family disputes.

Some see this as a growing risk. Schools and universities rushed to ban ChatGPT last summer over concerns about plagiarism and misinformation. Some have since lifted their bans. However, others remain skeptical of GenAI's positive effects, citing surveys such as one from the U.K. Safer Internet Centre, which found that over half of children (53%) say they have seen peers use GenAI in a negative way, such as fabricating photos or creating plausible false information intended to upset someone.

OpenAI's Guidelines and UNESCO's Call for Regulation

In September, OpenAI released documentation for using ChatGPT in classrooms, giving educators guidance on applying GenAI as a teaching tool. The documentation included a set of questions and answers.

In one of the support articles, OpenAI acknowledged that ChatGPT "may produce output that isn't appropriate for all audiences or all ages" and advised parents to exercise "caution" when exposing their children to it, even children of eligible age.

Demand for regulation of children's use of GenAI is rising. Late last year, the UN Educational, Scientific, and Cultural Organization (UNESCO) urged governments to impose age restrictions on users of GenAI in education, along with safeguards for user privacy and data security.

UNESCO's director-general, Audrey Azoulay, said in a press release: "Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice. It cannot be incorporated into education without public participation and the required government safeguards and regulations."

Photo: Jonathan Kemper/Unsplash
