
Over 1 Million ChatGPT Users Mention Suicide: The Reality and Risks of an Era Where AI Becomes the "First Confidant"

October 30, 2025, 17:25

1. What is happening now

OpenAI estimates that every week, more than a million conversations on ChatGPT contain signals of suicidal intent or planning. The figure comes from a weekly user base of roughly 800 million people and an estimated rate of 0.15%. A further 0.07% of users (about 560,000 people) show signs that may indicate acute crisis, such as delusions or mania. (The Guardian)
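
As a rough check of the arithmetic, and assuming both percentages apply to the same 800-million weekly-user base the article cites:

0.15% × 800,000,000 = 1,200,000 (suicide-related signals per week)
0.07% × 800,000,000 = 560,000 (possible acute-crisis signals per week)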


2. Why confide in AI?

Immediate responses even in the middle of the night, anonymity, and the sense of "not being judged" offer comfort. High barriers to medical care, such as the difficulty of getting appointments and the stigma of seeking help, also make AI the "first outlet" for many. (The Guardian)


3. How helpful can it be? (Current safety measures)

OpenAI says it has worked with more than 170 clinicians to improve how the model detects signs of crisis, de-escalates conversations, and guides users toward real-world support services. The company reports that the new model's compliance with desired safety-related responses has improved. (OpenAI)


4. Remaining risks

AI is not healthcare. In ambiguous cases its judgments can be inconsistent, and if dependency takes hold, it may deepen a user's isolation. Tensions are also rising around lawsuits and regulation concerning minors. (WIRED)


5. Implications for Japan

Long working hours, nighttime isolation, and high barriers to medical consultation mean that many people in Japan turn to AI as a late-night "conversation partner." Alongside improving the AI itself, it is essential to strengthen connections to local consultation services and medical care. (The Guardian)


6. Legal and ethical issues

"When to report," "over-detection and privacy," "conversation data of minors and corporate responsibility"—none of these issues have definitive solutions yet. The development of regulations and guidelines in each country is urgently needed.OpenAI


7. If you're struggling now

AI can be a starting point, but it is human support that saves lives. If you feel you are in crisis, please reach out immediately to family, friends, your workplace or school, local consultation services, or emergency services. (OpenAI)


8. Summary

The "one million per week" figure reflects the magnitude of societal loneliness. AI acts as a bridge. The next challenge is to strengthen the real-world support network that receives these individuals.The Guardian
