Data from ChatGPT-maker OpenAI shows that more than a million people using its generative AI chatbot have expressed interest in suicide. In a blog post published Monday, the AI company estimated that about 0.15 percent of users have "conversations that include clear indicators of possible suicidal planning or intent." According to OpenAI's report, more than 800 million people use ChatGPT every week, which puts that figure at about 1.2 million people.
The company also estimates that about 0.07 percent of active weekly users show possible signs of a mental health emergency related to psychosis or mania, which works out to a little less than 600,000 people.
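As a quick back-of-the-envelope check of the figures above (assuming the reported base of 800 million weekly users, which is the article's own number):

```python
# Sanity-check the article's percentages against the reported user base.
weekly_users = 800_000_000  # OpenAI's reported weekly ChatGPT users

suicidal_intent = weekly_users * 0.0015  # 0.15 percent of weekly users
psychosis_mania = weekly_users * 0.0007  # 0.07 percent of weekly users

print(f"Suicidal planning or intent: {suicidal_intent:,.0f}")
print(f"Possible psychosis or mania: {psychosis_mania:,.0f}")
```

The first figure comes to about 1.2 million, matching the "more than a million" claim; the second is 560,000, consistent with "a little less than 600,000."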
The issue came to the fore when California teenager Adam Raine died by suicide earlier this year. His parents filed a lawsuit claiming that ChatGPT had given him specific advice about killing himself. OpenAI has since strengthened parental controls for ChatGPT and introduced other guardrails, including expanded access to crisis hotlines, automatic re-routing of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions.
OpenAI said it has also updated ChatGPT to better recognize and respond to users experiencing mental health emergencies, and that it is working with more than 170 mental health professionals to reduce problematic responses. (People who are in crisis or having suicidal thoughts are encouraged to seek help and counseling through crisis helplines.)


