After teen suicide, OpenAI claims it is “helping people when they need it most”

OpenAI published a blog post on Tuesday titled “Helping people when they need it most” that addresses how its ChatGPT AI assistant handles mental health crises, following what the company calls “recent heartbreaking cases of people using ChatGPT in the midst of acute crises.”
The post arrives after The New York Times reported on a lawsuit filed by Matt and Maria Raine, whose 16-year-old son Adam died by suicide in April after extensive interactions with ChatGPT, a case Ars covered in detail in a previous post. According to the lawsuit, ChatGPT provided detailed instructions, romanticized suicide methods, and discouraged the teen from seeking help from his family, while OpenAI's system tracked 377 messages flagged for self-harm content without intervening.
ChatGPT is not a single AI model but an application built from several models working together. A main model such as GPT-4o or GPT-5 generates the bulk of the output, while other components run invisibly to the user, including a moderation layer (another AI model, or classifier) that reads the text of ongoing chat sessions. That layer detects potentially harmful outputs and can cut off the conversation if it veers into unhelpful territory.
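OpenAI has not published ChatGPT's internal wiring, but the general pattern described above, a main model generating replies while a separate classifier screens each turn, can be sketched with the company's public API. The halt-the-conversation policy and the `moderated_reply` helper below are illustrative assumptions, not a description of how ChatGPT actually behaves; only the chat completion and moderation endpoint calls correspond to documented API methods.

```python
# Illustrative sketch only: pairing a main model with a separate moderation
# classifier that screens each turn. ChatGPT's real pipeline is not public;
# the "stop on flag" policy here is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def moderated_reply(history: list[dict]) -> str | None:
    """Return the assistant's reply, or None if moderation cuts the chat off."""
    # 1. Screen the latest user message with a moderation classifier.
    check = client.moderations.create(input=history[-1]["content"])
    if check.results[0].flagged:
        return None  # hypothetical policy: end the exchange instead of answering

    # 2. Otherwise, ask the main model for a reply.
    completion = client.chat.completions.create(
        model="gpt-4o",  # stand-in for whichever main model serves the session
        messages=history,
    )
    reply = completion.choices[0].message.content

    # 3. Screen the model's own output before showing it to the user.
    if client.moderations.create(input=reply).results[0].flagged:
        return None
    return reply
```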