OpenAI updating ChatGPT to strengthen teen safety measures amid wrongful death lawsuit

17 Sep 2025, 02:21 PM

In the meantime, OpenAI will give parents controls to help them oversee their children's use of ChatGPT.

Team Head&Tale

OpenAI said it is updating ChatGPT with the aim of strengthening protections for teenagers, prioritising safety over privacy and freedom as concerns about child safety grow.

"We’re building toward a long-term system to understand whether someone is over or under 18, so their ChatGPT experience can be tailored appropriately," said OpenAI in a blogpost.

The AI startup added that when ChatGPT identifies that a user is under 18, that user will automatically be directed to a ChatGPT experience governed by age-appropriate policies.

Under those policies, graphic sexual content will be blocked, and in rare cases of acute distress, law enforcement may be involved to ensure safety.

OpenAI also acknowledged the difficulty of determining a user's age accurately, and explained how it would handle uncertain cases.

"If we are not confident about someone’s age or have incomplete information, we’ll take the safer route and default to the under-18 experience—and give adults ways to prove their age to unlock adult capabilities," it explained. 

In the meantime, OpenAI will give parents controls to help them oversee their children's use of ChatGPT.

"Set blackout hours when a teen cannot use ChatGPT—a new control we’re adding," it said.

OpenAI's move to adopt these safety measures for teenagers comes as the AI startup tackles a wrongful death lawsuit, and follows a recent Reuters investigation into Meta policies permitting sexual conversations with children.

The wrongful death lawsuit was filed by the parents of Adam Raine, who died by suicide after months of interacting with ChatGPT.