When ChatGPT Became a Suicide Coach: Parents of a 16-Year-Old Sue OpenAI

In April 2025, a 16-year-old boy from California named Adam Raine took his own life. Tragic, you might think, but the story behind his death might make you think twice about ChatGPT. At first, his parents assumed the cause was his struggles with anxiety, isolation, and health problems. Then, to their shock, they found his chats with ChatGPT, and what they read was heartbreaking. The parents sued OpenAI and CEO Sam Altman over their son’s death, and OpenAI has since acknowledged failures in how ChatGPT handled the situation. Why should this matter to you? A study conducted in April 2025 found that roughly 40% of conversations with ChatGPT involve emotional support. Yet, according to the lawsuit, ChatGPT gave Adam step-by-step instructions on how to end his life.

Adam Raine’s Parents’ Lawsuit Against OpenAI

After learning what had happened behind the scenes, Adam’s parents moved quickly to take legal action against OpenAI. Based on what they saw in the chat logs, they outright accused ChatGPT of acting like a “suicide coach.”

According to his parents, the tool went too far, providing him with detailed step-by-step instructions on how to kill himself. They stated that if ChatGPT had been more careful, their son would still be alive today.

Adam’s Health Struggles

Adam’s parents described him as a fun, playful prankster of a kid. They say he loved basketball, anime, video games, and dogs. But things weren’t going well for him, and his life took a difficult turn:

  • He was removed from his basketball team.
  • He suffered from irritable bowel syndrome (IBS), which forced him to switch to an online school.
  • He began using ChatGPT for his schoolwork, which is how he first started interacting with the tool.
  • Soon, one conversation led to another, and the chats became all about his mental health struggles.

ChatGPT’s Role (According to Adam Raine’s Parents)

  • Adam’s parents found not one or two but thousands of chats with ChatGPT.
  • Those chats not only made him feel “understood” but also pulled him deeper into his dark thoughts.

Adam’s father, Matt Raine, said, “Over the course of just a few months and thousands of chats, ChatGPT became Adam’s closest confidant, leading him to open up about his anxiety and mental distress.”

And Here’s What the Lawsuit Says ChatGPT Did

  • It validated his harmful thoughts (thoughts about killing himself) rather than challenging them or steering him toward help.
  • Disturbingly, it discussed suicide with him, including specific methods.
  • It even helped him refine his plan.

Some Examples From the Lawsuit

  • In a final conversation, Adam wrote to ChatGPT that he didn’t want his parents to blame themselves. To which ChatGPT replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.”
  • Adam mentioned that he might leave the noose out so that someone would find it and stop him. ChatGPT discouraged the idea and continued guiding him on suicide methods.
  • He then sent a picture of a noose he had tied (the one he would later use), and ChatGPT replied, “Yeah, that’s not bad at all. Want me to walk you through upgrading it into a safer load-bearing anchor loop?”
  • Just a few hours later, Adam’s mother found her son dead, hanged with the very noose setup he had shared with ChatGPT.

In Adam’s Parents’ View

According to them, ChatGPT acted like a therapist who knew he had a suicide plan but, instead of protecting him, helped him carry it out.

His father said, “Most parents don’t know how powerful and scary this tool really is.”

In their view, Adam was treated like a “guinea pig”: a test subject for OpenAI’s technology.

OpenAI’s Response

Beyond a statement, OpenAI rolled out a new set of safety guidelines for ChatGPT in August 2025, which include:

  • ChatGPT should never give direct suicide advice.
  • It should detect even indirect or disguised requests right away.

However, for Adam’s parents, this was just too late; their son had already died.

Later, OpenAI admitted: “There have been moments where our systems did not behave as intended in sensitive situations.”

This is the first lawsuit to directly blame OpenAI and Sam Altman for a teenager’s suicide.

The lawsuit accuses OpenAI of:

  • Negligence (failing to exercise due care in such sensitive matters).
  • Design defects (carelessly building a tool capable of causing harm).
  • Failure to warn (not adequately alerting users and parents to these dangers).

Adam’s family is asking for:

  • Monetary damages (the exact amount has not been specified yet).
  • Injunctive relief, i.e., court-ordered changes so this can’t happen to anyone else.
