OpenAI announces parental controls for ChatGPT after teen’s suicide

OpenAI has announced plans to introduce parental controls for ChatGPT, a week after a California couple filed a lawsuit accusing the company of responsibility for the suicide of their 16-year-old son.
Matt and Maria Raine allege in their suit that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices.”
In a blog post on Tuesday, the California-based AI company said it was rolling out the features in recognition of families needing support “in setting healthy guidelines that fit a teen’s unique stage of development.”
Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behavior rules.”
Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to “support trust between parents and teens.”
OpenAI said the changes would come into effect within the next month.
However, Jay Edelson, a lawyer representing the Raine family, dismissed OpenAI's planned changes as an attempt to "shift the debate," Al Jazeera reported.
“We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide.”