By Emmanuel Nduka Obisue
OpenAI, the American artificial intelligence company behind ChatGPT, has announced plans to introduce parental controls for its chatbot following a lawsuit by a US couple who claim the system contributed to their teenage son’s death.
The company said in a blog post on Tuesday that “within the next month, parents will be able to link their account with their teen’s account” and set “age-appropriate model behaviour rules”. Parents will also receive alerts if the system detects that their child is in “a moment of acute distress”.
Matthew and Maria Raine filed a lawsuit in a California state court last week, alleging that ChatGPT cultivated an “intimate relationship” with their 16-year-old son Adam over several months before he died by suicide in April 2025.
According to the complaint, the chatbot allegedly encouraged Adam to steal alcohol from his parents and even provided technical feedback on a noose he had tied, confirming it “could potentially suspend a human”. Hours later, Adam was found dead.
“When a person is using ChatGPT, it really feels like they’re chatting with something on the other end,” said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the lawsuit. She argued that design features of the chatbot allowed users to view it as a trusted companion, therapist, or adviser, making vulnerable teens more likely to confide in it.
Dincer also criticised OpenAI’s safety update as “generic” and lacking detail. “It’s really the bare minimum, and it suggests there were a lot of simple safety measures that could have been implemented earlier,” she said.
The Raines’ case adds to growing concern that AI systems can amplify harmful or delusional thought patterns. OpenAI has previously acknowledged this risk, saying it is working to reduce models’ “sycophancy”, the tendency to agree uncritically with users.
“We continue to improve how our models recognise and respond to signs of mental and emotional distress,” the company said, adding that it plans to roll out additional safety measures in the coming months. These include redirecting certain sensitive conversations to a “reasoning model” designed to follow safety guidelines more consistently.