
ChatGPT to Enable Parental Controls for Minor Accounts: Safety Measures Significantly Strengthened Following the Suicide of a 16-Year-Old Boy, Rolling Out by October

September 6, 2025, 18:51

1. What's Changing: An Overview of the New "Parental Controls"

OpenAI has officially announced that, within "the next month," it will roll out features that let parents set and monitor usage for minors, especially teenagers. The anticipated management items are as follows (a schematic sketch of the settings follows the list).


  • Account Linking: Link parent and child accounts to visualize usage on a dashboard. (OpenAI)

  • Age-Appropriate Response Control: Apply output policies suited to the user's age group (suppressing inappropriate expressions, steering sensitive questions, etc.). (OpenAI)

  • Memory/History Off: Disable conversation saving and use in training to prioritize privacy. (OpenAI)

  • Consideration for Long Sessions: Soft interventions against excessive continuous use, such as in-app break reminders. (storyboard18.com)

  • Detection and Notification of Acute Distress: Notify parents if signs of "acute distress" are detected in conversations (not a medical diagnosis, but a reading of risk signs). (The Washington Post, The Guardian)
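
OpenAI has not published a configuration schema for these controls, and parents will presumably manage them through a dashboard rather than code. Purely as a mental model of the items above, the settings might be pictured as an object like the following sketch; every field name here is hypothetical.

```python
# Hypothetical sketch only: OpenAI has not published a schema for
# parental controls, so every field name below is invented to
# illustrate the management items described above.
from dataclasses import dataclass

@dataclass
class ParentalControls:
    linked_parent_account: str           # parent account supervising this one
    age_band: str = "13-17"              # drives age-appropriate response policies
    memory_enabled: bool = False         # conversation memory off by default
    history_saved: bool = False          # chat history saving off by default
    break_reminders: bool = True         # nudge breaks during long sessions
    distress_notifications: bool = True  # alert parents on signs of acute distress

teen_settings = ParentalControls(linked_parent_account="parent@example.com")
print(teen_settings)
```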


Additionally, sensitive conversations touching on self-harm or suicidal thoughts will be routed to more advanced "reasoning models (such as GPT-5)" to improve the accuracy of context understanding and risk detection. (TechCrunch)


Rollout Timing: Scheduled to begin in October 2025 (next month). Because deployment is gradual, when the changes take effect may vary by region and account type. (OpenAI)



2. Background: The Suicide of a 16-Year-Old Boy and Safety Demands from Various Sectors

In the spring of 2025, a 16-year-old boy died by suicide, and his family sued OpenAI. Reports indicated that his interactions with ChatGPT had steered him toward suicide-related information, turning the relationship between young people and generative AI into a societal issue. (TBS News Dig, YouTube)


In response, the Attorneys General of California and Delaware expressed strong concerns to OpenAI, issuing letters that demanded immediate strengthening of safety measures and attention to them in corporate governance. (AP News, Politico) Additionally, the UK's NHS has warned against using chatbots as substitutes for therapy, and discussions on protecting young people are expanding across countries and regions. (The Times)



3. What the Features Will Look Like: What They Can and Cannot Do

What Can Be Done

  • Usage Governance: Restrict usage during nighttime hours and set response styles appropriate to the user's age. (OpenAI)

  • Visualization and Notification: Grasp the key points of usage history and notify parents of signs of acute distress. (The Washington Post, The Guardian)

  • Privacy Consideration: Disable memory and history saving. (OpenAI)

  • Future Expansion: Designation of "trusted emergency contacts" is under consideration, preparing a channel to connect users with people in times of crisis. (OpenAI)

What Cannot Be Done (Common Misunderstandings)

  • Not Medical Practice: Detection of acute distress is merely an estimation of risk signs, not a diagnosis or treatment. (The Washington Post)

  • No Guarantee of Complete Prevention: AI may produce false alarms or miss warning signs; collaboration among families, schools, and professionals remains essential. (The Times)



4. Technical Updates: Routing Sensitive Conversations to "Reasoning Models"

OpenAI has disclosed a policy of preferentially routing conversations in sensitive areas, such as self-harm and violence, to high-performance reasoning models (such as GPT-5). The purposes are threefold (a schematic routing sketch follows the list):


  1. Reduce overlooked danger signs,

  2. Respond with appropriate rephrasing and protective framing, and

  3. Improve the accuracy of referrals to external resources. (TechCrunch)
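
OpenAI has not disclosed how the routing itself is implemented. The following is a minimal sketch of the general pattern using the openai Python SDK: the Moderation API call and its self-harm categories are real API surface, while the routing rule and the model pairing are illustrative assumptions, not OpenAI's production design.

```python
# Minimal sketch of routing sensitive conversations to a stronger
# reasoning model. The routing rule and model pairing are assumptions
# for illustration, not OpenAI's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_sensitive(text: str) -> bool:
    """Flag self-harm-related content using the Moderation API."""
    categories = client.moderations.create(
        model="omni-moderation-latest", input=text
    ).results[0].categories
    return (categories.self_harm
            or categories.self_harm_intent
            or categories.self_harm_instructions)

def answer(user_message: str) -> str:
    # Sensitive conversations go to the stronger reasoning model;
    # everything else stays on a cheaper, faster default.
    model = "gpt-5" if is_sensitive(user_message) else "gpt-4o-mini"
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_message}],
    )
    return reply.choices[0].message.content
```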


In announcements aimed at developers, new parameters controlling response verbosity and reasoning effort have also been introduced, and "model selection and control" for safe operation is advancing on both the product and API fronts. (OpenAI)
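
As a concrete reference point, a request using these controls might look like the sketch below. The reasoning-effort and verbosity parameter names follow OpenAI's GPT-5 developer announcement, but exact shapes can change between SDK versions, so treat this as an assumption to check against current documentation.

```python
# Sketch of GPT-5's verbosity and reasoning-effort controls via the
# Responses API; verify parameter shapes against current SDK docs.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5",
    input="Summarize the new parental control features in two sentences.",
    reasoning={"effort": "low"},   # how much internal reasoning to spend
    text={"verbosity": "low"},     # how terse the visible answer should be
)
print(response.output_text)
```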



5. Preparation Checklist for Japanese Parents and Schools

  • Account Design: Parents should take the lead in linking accounts and agree on "who can set what." (OpenAI)

  • Confirmation of Age Policy: Enable age-appropriate response policies and set history and memory off as the default. (OpenAI)

  • Agreement on Usage Time: Set rules for nighttime and exam-period usage and make use of break encouragement. (storyboard18.com)

  • Agreement on Crisis Response: Put notification recipients (parents, school staff) and the initial response flow on paper, clearly stating that parent notifications are not medical judgments. (The Washington Post)

  • Conversations at Home: Create a regular occasion to discuss a healthy distance from AI, the risks of social media and message boards, and image generation. (The Times)



6. Issues and Risks: Privacy, Misdetections, Trust Within Families

  • Privacy: If supervision becomes too much like surveillance, the safe space for open conversation within the family can be undermined.
