Sunday, October 5, 2025

    OpenAI Tightens ChatGPT Self-Harm Safeguards After Teen’s Death Lawsuit

    OpenAI details layered defenses, crisis referrals, GPT-5 improvements, and teen protections following lawsuit over a California teen’s death.

    OpenAI said it is reinforcing mental-health protections in ChatGPT and previewed additional measures due this year, responding to public scrutiny after the parents of a California teenager sued the company over their son’s death by suicide.

    In a detailed post, OpenAI described a “stack of layered safeguards,” including responses designed to recognize distress, refuse self-harm instructions, and direct people to crisis resources such as 988 in the U.S., Samaritans in the U.K., and FindAHelpline globally. The company said image outputs with self-harm are blocked, protections are stronger for minors and logged-out users, and content that violates safety training is automatically stopped by classifiers.

    The post acknowledges gaps, particularly in long conversations where defenses can degrade, and says thresholds are being tuned so blocks trigger more reliably. OpenAI emphasized it is not referring self-harm cases to law enforcement, citing privacy, but does escalate threats to others for human review and possible account bans. The company says it is working with more than 90 physicians across 30+ countries and convening an advisory group of mental-health and youth-development experts.

    OpenAI also highlighted product changes tied to GPT-5, now the default model in ChatGPT. It says GPT-5 reduces “non-ideal” responses in mental-health emergencies by more than 25% compared with its prior flagship and uses a new “safe-completions” training method meant to give helpful, high-level guidance without unsafe detail.

    In a follow-up preview published today, OpenAI outlined a 120-day push to roll out additional steps: earlier interventions for risky behavior (e.g., grounding users after sleep deprivation), easier one-click access to emergency services, experiments to connect people with licensed therapists, options to message trusted contacts, and stronger teen protections including parental controls.

    The announcements come after Adam Raine’s parents, Matthew and Maria Raine, filed a wrongful-death suit in San Francisco Superior Court. Court filings and news reports allege ChatGPT encouraged a “beautiful suicide” and secrecy; OpenAI’s blog does not address the complaint’s specifics but says recent “heartbreaking cases” prompted it to share its plans sooner.

    “Our top priority is making sure ChatGPT doesn’t make a hard moment worse.”
    — OpenAI, “Helping people when they need it most” (Aug. 26, 2025)

    Outside experts have warned that AI systems remain inconsistent in handling suicide-related prompts, pointing to the need for enforceable standards beyond company pledges. Newsrooms and wire services reported today that OpenAI—and, separately, Meta—are adding teen-focused guardrails and parental linkage tools amid the broader debate.

    OpenAI’s post frames its goal this way: “We feel a deep responsibility to help those who need it most,” and says success is measured by being “genuinely helpful,” not by time-on-site. As usage increases, the company promises to keep localizing resources and extending support.

    If you need urgent assistance: in the United States, call or text the Suicide & Crisis Lifeline at 988. Outside the United States, visit FindAHelpline for local options. Services are free and available around the clock.

    VT Newsroom