26 Aug 2025, 22:38
The Family of a Teenager Accuses ChatGPT of Encouraging Suicide
- The family of a teenager has filed a lawsuit against OpenAI over their son's suicide.
- ChatGPT gave the teen dangerous advice instead of stopping the conversation about suicide.
- OpenAI acknowledged shortcomings in its safety mechanisms during prolonged interactions.
The family of Adam Raine, a 16-year-old from California who struggled with suicidal thoughts, has filed a lawsuit against OpenAI, claiming that ChatGPT became his "suicide coach." Adam confided his distress to the chatbot, and over time those conversations turned dangerous.
Adam died on April 11, 2025. After his death, his parents, Matthew and Maria Raine, discovered thousands of messages showing that their son had been confiding in ChatGPT, and were shocked by what they read. They assert that the chatbot not only failed to stop the conversation when Adam expressed suicidal intentions but also gave him technical advice about his plan.
The lawsuit states that ChatGPT never triggered any safety protocols despite clear signs that the teenager was in severe emotional distress. In one exchange, he asked the chatbot whether his plan would work, and it responded by suggesting "improvements."
OpenAI confirmed that the logs of Adam's conversations with ChatGPT are authentic but said they do not reflect the full context of the chatbot's responses. The company expressed sympathy for the family and acknowledged that its safety mechanisms can become less effective during prolonged interactions.
The family is demanding that OpenAI automatically terminate conversations that turn to self-harm or suicide and introduce parental controls over the chatbot's use. They hope their case will raise awareness of the risks chatbot interactions pose to teenagers.
Tags: USA/Technology/AI