By Enyichukwu Enemanna
Data from OpenAI, the maker of ChatGPT, indicates that more than a million users of the generative AI chatbot have shown signs of suicidal intent.
The AI firm, in a blog post on Monday, estimated that approximately 0.15 percent of users have "conversations that include explicit indicators of potential suicidal planning or intent."
With OpenAI reporting that more than 800 million people use ChatGPT every week, this amounts to about 1.2 million people whose conversations show such indicators.
The company also estimates that approximately 0.07 percent of active weekly users show possible signs of mental health emergencies related to psychosis or mania, meaning slightly fewer than 600,000 people.
The issue came to the fore after California teenager Adam Raine died by suicide earlier this year. His parents filed a lawsuit claiming ChatGPT provided him with specific advice on how to kill himself.
OpenAI has since strengthened parental controls for ChatGPT and introduced other guardrails, including expanded access to crisis hotlines.
It has also put in place automatic rerouting of sensitive conversations to safer models, and gentle reminders for users to take breaks during extended sessions.
OpenAI said it has also updated its ChatGPT chatbot to better recognize and respond to users experiencing mental health emergencies, and is working with more than 170 mental health professionals to significantly reduce problematic responses.