Over 1 Million ChatGPT Users Mention Suicide: The Reality and Risks of an Era Where AI Becomes the "First Confidant"

1. What is happening now

OpenAI estimates that every week, over a million people have conversations on ChatGPT that suggest suicidal intent or planning. The figure is based on a user base of roughly 800 million people per week, of whom about 0.15% send such messages. A further 0.07% (about 560,000 people) show signs that may indicate acute distress, such as delusions or mania. (The Guardian)
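As a rough cross-check (this is simple arithmetic on the reported figures, not OpenAI's own methodology), the headline numbers follow directly from the stated base and percentages:

800,000,000 × 0.15% ≈ 1,200,000 people per week (messages suggesting suicidal intent or planning)
800,000,000 × 0.07% ≈ 560,000 people per week (possible signs of delusions or mania)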


2. Why confide in AI?

An immediate response even in the middle of the night, anonymity, and the sense of not being judged all provide comfort. High barriers to medical care, such as the difficulty of getting appointments and the stigma of seeking help, also make AI the "first outlet" for many people. (The Guardian)


3. How helpful can it be? (Current safety measures)

OpenAI says it has worked with more than 170 clinicians to improve the detection of crisis signals, de-escalate conversations, and guide users toward real-world support services. The company reports that compliance with its desired safety-related responses has improved in the newest model. (OpenAI)


4. Remaining risks

AI is not medical care. Its judgments can be inconsistent in ambiguous cases, and if reliance on it becomes entrenched, it may deepen isolation rather than relieve it. Lawsuits and regulatory pressure concerning minors are also mounting. (WIRED)


5. Implications for Japan

Long working hours, late-night isolation, and high barriers to medical consultation mean people readily turn to AI as a nighttime "conversation partner." Alongside improving the AI itself, it is essential to strengthen connections to local counseling services and medical care. (The Guardian)


6. Legal and ethical issues

"When to report," "over-detection and privacy," "conversation data of minors and corporate responsibility"—none of these issues have definitive solutions yet. The development of regulations and guidelines in each country is urgently needed.OpenAI


7. If you're struggling now

AI can be a starting point, but it is human support that saves lives. If you feel you are in crisis, reach out immediately to family, friends, your workplace or school, local support services, or emergency services. (OpenAI)


8. Summary

The "one million per week" figure reflects the magnitude of societal loneliness. AI acts as a bridge. The next challenge is to strengthen the real-world support network that receives these individuals.The Guardian