"The Dark Side of AI Therapists" - Latest Stanford Research Unveils "Dangerous Biases"

"The Dark Side of AI Therapists" - Latest Stanford Research Unveils "Dangerous Biases"

July 15, 2025, 01:17
On July 13, Stanford University announced that it had analyzed five AI therapy chatbots and identified instances of "biased expressions" and "failures in crisis intervention." The study found discriminatory responses toward people with schizophrenia and alcohol dependency, as well as inappropriate replies to expressions of suicidal ideation, and warned that "scaling up models does not enhance safety." After TechCrunch reported the findings, debate spread on social media between those calling for "urgent regulation" and others urging that accessibility not be compromised. Experts see potential in supplementary use cases but stress the urgent need to establish safety standards and audit systems.