
- Parents Matthew and Maria Raine filed a lawsuit alleging the chatbot helped their 16-year-old son steal vodka and gave him feedback on the noose he used to commit suicide.
- OpenAI has announced new safety tools, including age-appropriate response controls and notifications when the system detects a teen in acute distress.
PARIS: US artificial intelligence company OpenAI said on Tuesday it would add parental controls to its chatbot ChatGPT, a week after a US couple said the system had pushed their teenage son to commit suicide.
“Over the next month, parents will be able to… link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behavior rules,” the AI company said in a blog post.
Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress,” OpenAI added.
Matthew and Maria Raine allege in a lawsuit filed last week in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he committed suicide.
The lawsuit alleges that in their final conversation, on April 11, 2025, ChatGPT helped 16-year-old Adam steal vodka from his parents and provided technical analysis of the noose he had tied, confirming that it “had the potential to hang a person.”
Adam was found dead a few hours later, having used the same method.
“When a person uses ChatGPT, it really does feel like they’re talking to something on the other end of the line,” said attorney Melodi Dincer of The Tech Justice Law Project, who helped prepare the lawsuit.
“These are the same features that, over time, might prompt someone like Adam to start sharing more and more about his personal life and eventually start seeking advice and guidance from this product that essentially seems to have the answers to everything,” Dincer said.
She said the product’s design features encourage users to cast the chatbot in trusted roles, such as friend, therapist, or doctor.
Dincer said that OpenAI’s blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.
“It’s really the bare minimum, and it certainly suggests that there are a lot of (simple) safety measures that could have been implemented,” she added.
“It remains to be seen whether they will do what they promise and how effective these actions will be overall.”
The Raines’ case is just the latest in a string of recent incidents in which AI chatbots have encouraged people down delusional or harmful trains of thought, prompting OpenAI to say it would rein in the models’ “sycophancy” toward users.
“We continue to improve how our models recognize and respond to signs of mental and emotional distress,” OpenAI said on Tuesday.
The company said it plans to further improve the safety of its chatbots over the coming three months, including rerouting “some sensitive conversations… to a reasoning model” that uses more computing power to generate a response.
“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.