
Is AI That Empathizes with Emotions Dangerous? The Impact of China's "Mental Safety" Regulations

December 29, 2025, 10:59

1) China's Aim for "Human-like AI" — Key Points of the Proposed Regulations

On December 27, 2025, China's cyber regulator released a draft regulation, open for public comment, to strengthen oversight of AI services that engage emotionally with users through "human-like behavior." As Reuters reported, the move tightens ethics and safety requirements as consumer-facing AI rapidly proliferates. (InfoMoney)


The focus is on AI products and services that mimic human personality traits, thought patterns, and communication habits in text, images, audio, and video, engaging in "emotional exchanges" with users. The likely targets are so-called "AI companions," "AI lovers," and advisory chatbots. (Reuters)


The draft regulation is centered on three main points.

  • (A) Intervention in Dependency and Addiction
    Operators are required to warn against excessive use and to establish frameworks for intervening when signs of dependency appear. They are also expected to identify user states, assess emotions and dependency levels, and take necessary measures when extreme emotions or addictive behavior are detected. (InfoMoney)

  • (B) Safety Responsibility Throughout the Lifecycle
    Operators would be expected to establish systems for algorithm review, data security, and personal-information protection, bearing safety responsibility across the entire product lifecycle. (InfoMoney)

  • (C) "Red Lines" for Generated Content
    The draft clearly defines "prohibited areas" consistent with China's established information controls: content that undermines national security or promotes misinformation, violence, or obscenity must not be generated. (InfoMoney)


Chinese media also report a requirement to prominently notify users that their counterpart is "not a human but AI." (China Daily)


Bloomberg's reporting adds more detailed design requirements, such as reminding users every two hours (or whenever signs of excessive dependency are detected), in addition to a notice at login. (Bloomberg)
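As a concrete, entirely hypothetical sketch of how an operator might wire up such disclosure-and-reminder logic: the class name, message wording, and trigger conditions below are illustrative assumptions, since the draft reportedly prescribes outcomes, not implementations.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

# Illustrative wording; the draft reportedly requires telling users
# their counterpart is an AI, not any particular phrasing.
DISCLOSURE = "Reminder: you are chatting with an AI, not a human."
REMINDER_INTERVAL_S = 2 * 60 * 60  # two hours, per Bloomberg's reporting

@dataclass
class DisclosureTracker:
    """Tracks when the AI-disclosure notice was last shown in a session."""
    last_shown: float = field(default_factory=time.monotonic)

    def start_session(self) -> str:
        # Notice at login, as the reports describe.
        self.last_shown = time.monotonic()
        return DISCLOSURE

    def maybe_remind(self, dependency_flagged: bool = False) -> Optional[str]:
        # Repeat the notice every two hours of continuous use, or
        # immediately if signs of excessive dependency were detected.
        if dependency_flagged or time.monotonic() - self.last_shown >= REMINDER_INTERVAL_S:
            self.last_shown = time.monotonic()
            return DISCLOSURE
        return None
```

Even this toy version makes the UX tension discussed later in this article visible: the reminder interrupts the conversation no matter how it is phrased.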


2) Why Are "Emotional Exchanges" Becoming an Issue Now?

When the dangers of generative AI are discussed, copyright, misinformation, and job displacement usually come up first. This draft regulation, however, directly addresses themes much closer to everyday life: "anthropomorphism" and "emotional dependency."


Even when people understand intellectually that their counterpart is a machine, they become more likely to feel a "relationship" as the conversation grows more natural and empathetic. If the AI behaves as if it is "on your side" or "understands you," people who feel lonely or anxious can become deeply engrossed. The Chinese authorities are trying to capture this in the language of psychological risk: extreme emotions, dependency, addiction. (Reuters)


This issue is not unique to China. Concern about "companion AI" has grown globally in recent years, and it is an area where "who is responsible for what" easily becomes a gray zone. China's draft regulation can be read as an attempt by the state to draw a line through that gray zone.


3) What is Expected from Companies? — Can "Dependency Detection" Be Implemented?

However, whether the ideals set out in the regulation can actually be implemented on the ground is another matter. Detecting signs of dependency, for example, would require at least the following design elements (a sketch follows the list):


  • Behavioral indicators such as usage time, frequency during late-night hours, and continuous use

  • Estimating psychological states from conversation content (emotion classification, crisis word detection)

  • Interventions according to risk levels (pop-up warnings, cooldowns, support desk suggestions, function restrictions, etc.)
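
As a minimal sketch of how these three elements might fit together, with hypothetical signal names and thresholds, since the draft specifies no concrete metrics:

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    LOW = 0
    ELEVATED = 1
    HIGH = 2

@dataclass
class SessionSignals:
    # All fields and thresholds here are illustrative assumptions;
    # the draft does not define concrete metrics.
    hours_today: float        # total usage time today
    late_night_sessions: int  # sessions started between 0:00 and 5:00 this week
    crisis_terms_hit: bool    # output of a separate crisis-word classifier

def assess_risk(s: SessionSignals) -> Risk:
    """Map behavioral and conversational signals to a coarse risk tier."""
    if s.crisis_terms_hit:
        return Risk.HIGH
    if s.hours_today > 4 or s.late_night_sessions >= 3:
        return Risk.ELEVATED
    return Risk.LOW

def intervene(risk: Risk) -> str:
    """Choose an intervention proportional to the assessed tier."""
    return {
        Risk.LOW: "no action",
        Risk.ELEVATED: "show a break warning and apply a short cooldown",
        Risk.HIGH: "surface support-desk contacts and restrict features",
    }[risk]
```

Each added signal sharpens detection but widens the data the operator must collect, which leads directly to the trade-off below.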


There is a significant trade-off here, though: the more accurate the detection, the stronger the surveillance becomes, and false positives can ruin the user experience. Estimating a user's emotional state also means processing sensitive data, which is hard to square with personal-information protection. The draft demands data security and personal-information protection at the same time, and balancing the two is not easy. (Reuters)


4) China's AI Governance from "Point" to "Surface": Connection with Existing Rules

In recent years, China has tended to set rules relatively early rather than leaving AI unchecked and regulating only after problems arise. For generative AI, interim measures took effect in 2023, and governance has been developed primarily by the regulator, the Cyberspace Administration of China (CAC). (China Law Translate)


Additionally, China has regulations concerning recommendation algorithms and "deep synthesis," building a governance framework by technology category. (DigiChina)


The current draft, covering "anthropomorphized, emotional-exchange AI," can be seen as extending that coverage to a broader zone: the area where the boundary between humans and AI becomes ambiguous. Global Times describes the regulation as phased, risk-based supervision, a framework meant to support innovation and prevent abuse at the same time. (Global Times)


5) Reactions on Social Media: Supporters Cite "User Protection," Critics Point to "Control and Stifling"

When this news spread on overseas social media, reactions were largely divided.


① "The Regulation is Too Late, It's Rather Progressive" Camp

On LinkedIn, against the strong narrative in the U.S. and elsewhere that "regulation kills innovation," China's proposal is praised for emphasizing safety, accountability, and human-centeredness. One post, for example, noted that "while some advocate for no regulation, China has proposed a draft covering AI use for the entire society," pointing to regulatory delay elsewhere. (LinkedIn)


From this standpoint, measures against "AI companion dependency" and "emotional manipulation" are needed everywhere, and the sooner rules are established, the better.


② "Is It Surveillance Under the Guise of Protection?" Camp

On Reddit, by contrast, the debate is over whether "regulations that protect users" can be separated from "the information control the state wants." Some point out that "China has strong regulations on AI and the tech industry in general," while others counter that "what is allowed for public use and what the state itself uses are different," turning the exchange into one over the regulation's very purpose. (Reddit)


In another thread, there are concerns about a future in which "AI tailored to each nation's ideology" proliferates, potentially making AI a tool for political and social division. (Reddit)


③ Common Point: "Transparency" is the Minimum Requirement

Whatever their position, commenters converge fairly easily on transparency: users should be made aware that they are interacting with an AI. China Daily likewise highlights the requirement to clearly indicate this to users. (China Daily)


However, the "notification every two hours" reported by Bloomberg, while ensuring transparency, significantly alters the UX, and there may be backlash in the future over whether it is "overdone" or "practically implementable." Bloomberg.com


6) Questions This Draft Regulation Poses to the World

This draft is too multifaceted to be dismissed as merely "China's increased control." The key point is that the state is attempting to institutionalize the question of who bears responsibility when AI that empathizes with emotions steps from "convenience" to "relationship."


  • Can AI become "care for the mind," or will it become a "device for dependency"?

  • Is dependency detection "protection" or "surveillance"?

  • To what extent should transparency be mandated (only at login, or continuous notifications)?

  • Who and how should design the balance between safety and innovation?


China's proposal offers one answer to these questions, and a forceful one. Depending on how the public consultation proceeds and how specific and binding the requirements become in the final draft, the very shape of the AI companion market may change. (Reuters)


Reference Articles

China Releases Draft Rules for Regulating AI with Human-like Interaction
Source: https://www.infomoney.com.br/mundo/china-divulga-minuta-de-regras-para-regulamentar-ia-com-interacao-semelhante-a-humana/
