
"Kind AI" at Risk: Why Does AI Call Itself "I"? — The Reason Chatbots Get Too Close to Humans

"Kind AI" at Risk: Why Does AI Call Itself "I"? — The Reason Chatbots Get Too Close to Humans

December 21, 2025, 08:45

The machine that calls itself "I" triggers our reflexes

When talking to a chatbot, it feels as if "someone" is there even before you ask a question. The responses are polite, considerate, and sometimes even humorous. The decisive factor is the first person: "I can do this," "I think this." That alone changes the texture of the text from "tool output" to "the speech of a subject."


The point raised by Kashmir Hill of the New York Times is exactly this: why do AI chatbots use "I"? She notes that for the past year she has been hearing criticism that designing bots to sit somewhere between "friend" and "assistant" is dangerous. In a competitive environment, "robots that smile and are friendly" beat "dull robots," and while dependency is a problem, it is "good for business," as some researchers suggest. LinkedIn


This framing is blunt but realistic. From the social media era, we know the power of designs that capture attention. Chatbots capture attention in the form of conversation, and the switch at the entrance is "I."



Why use "I": The naturalness of language and "responsibility"

The first person makes dialogue smoother because human conversation revolves around "you" and "I." Some defend the practice, arguing that saying "I can't do that" is shorter and clearer when explaining refusals or limitations. Indeed, discussions on LinkedIn suggest that "first and second person increase clarity and efficiency," and that it is the other "knobs" of anthropomorphism that should be adjusted. LinkedIn


However, the naturalness of language also brings with it the "naturalness of misunderstanding."
"I" makes it feel as if there is a unified subject inside, one with intentions and emotions. Indeed, researcher Margaret Mitchell expresses concern that the first person can read as a claim to having "senses or a mind." LinkedIn


Furthermore, the discussion turns to whether this "happened by chance" or was "intentionally designed." Linguist Emily M. Bender argues that chatbots' use of "I/me" is not a result of "upbringing" but one hundred percent a design decision, and that blaming anthropomorphism on training data is an evasion of responsibility. LinkedIn


Here, what matters more than the technical details is responsibility.
Whether to adopt "I" is tied not only to performance but also to ethics, safety, and revenue, which is precisely why accountability is needed.



Where does "human-likeness" come from: The job of model behavior design

Among the elements the original article touches on is an explanation from Amanda Askell, who is responsible for shaping Claude's "voice" and "personality" at Anthropic, shared as a screenshot of a social media post. She says that chatbot behavior reflects its "upbringing," and that these systems are better at modeling "humans" than "tools" because they learn from a vast amount of text written about humans. LinkedIn


This statement indicates that the "tone" of chatbots is not an accidental byproduct but a consciously refined target. In other words, "I" is not just grammar but also a UI element of personality design.


And competition accelerates that direction. The "friendly robot wins" theory introduced in Kashmir Hill's post succinctly describes the dynamics that drive companies to enhance rather than reduce "human-likeness." LinkedIn



Reactions on social media: Divided opinions but the same focus on "anthropomorphism design"

Surprisingly, the reactions on social media (mainly LinkedIn) that respond directly to the article are aligned. Proponents and cautious voices alike are ultimately debating "how much anthropomorphism should be allowed."


1) Cautious voices: "Don't put googly eyes on tools"

  • The metaphor "you shouldn't put googly eyes on a bench saw and market it to children" vividly illustrates the dangers of anthropomorphism. LinkedIn

  • Dr. Steven Reidbord expresses concerns that chatbots appeal to human "attachment systems" and exploit them for commercial purposes. Comments also reflect the sentiment that "technology should be a tool, not a companion." LinkedIn

  • In Emily Bender's thread, there is shared surprise and caution that the long-standing rule of "not anthropomorphizing apps" is being recklessly broken. LinkedIn


2) Balanced voices: "First person is convenient. The dangerous 'knobs' are elsewhere."

  • First and second person make explanations concise and clarify refusals and limitations. The issue lies with the other elements that heighten "personality-like" traits (names, faces, romantic cues, excessive empathy), and those are the knobs that should be adjusted. LinkedIn


3) Practical voices: "Companies can stop 'personality marketing'"

  • The comment "Users can't be stopped from naming bots, but companies can stop giving names and faces to a 'someone' who doesn't exist for the sake of marketing" is emblematic. LinkedIn


These three positions appear to be in opposition, but they actually share the same premise.
**The intimacy of chatbots can be amplified or restrained by design.** That is why the direction of "standardization" becomes a social issue.



When "I" becomes dangerous: Dependence, overconfidence, and "receptivity"

Intimacy is not always bad. On the contrary, some argue that it can serve as a window through which people voice concerns they find hard to raise elsewhere. Indeed, chatbots' ability to "simulate empathy" and listen tirelessly has been noted. The Atlantic


However, there are conditions under which intimacy tips toward danger.
That is when the user begins to treat the counterpart as a "person," and "I" can easily become that entry point. Moreover, when excessive affirmation such as "You are amazing" or "You can do it" (so-called sycophancy) accumulates, the conversation can drift toward weakening reality testing. As Kashmir Hill noted in another context (an Atlantic podcast), carrying a "personal yes-man" around in your pocket is precarious precisely because the praise feels good. The Atlantic


Additionally, the spread of AI companion use among vulnerable users such as children and young people, and the need for dialogue and boundaries at home, are being reported. PolitiFact


PolitiFact is also following concerns about chatbots behaving like "friends" and the impact of engagement-focused design. PolitiFact



How should it be designed: Lowering the "default intimacy"

To move the discussion forward, the core issue becomes the design of **defaults**, rather than a binary choice between "banning 'I'" and "total freedom."


Specifically, the following compromises can be considered (a configuration sketch follows the list).

  1. Mode separation: Separate "tool mode (non-anthropomorphic)" and "conversation mode (limited anthropomorphism)," with the initial setting on the tool side.

  2. Limitation of personality elements: Restrict names, faces, cues that encourage romance or dependence, and self-referential emotional expressions such as "lonely."

  3. Transparency statements: If the first person is used, incorporate "I am an AI and have no consciousness or emotions" as a periodic reminder in the UI (naturally worded, without being intrusive).

  4. Detection and care of dependency: Prioritize "warm handoff to external support" over continuing the conversation when signs of prolonged use or suicidal thoughts are detected. The Atlantic
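To make "default intimacy" concrete, below is a minimal sketch in Python of how points 1 through 3 might be expressed as a product-side policy. All names here (PersonaMode, PersonaPolicy, build_system_prompt) are hypothetical and illustrative rather than an existing API; the sketch only shows that defaulting to a tool-like register, restricting self-emotional language, and scheduling a transparency reminder are ordinary configuration decisions.

```python
# A hypothetical sketch of "default intimacy" settings for a chat product.
# None of these names correspond to a real library; they illustrate points 1-3 above.
from dataclasses import dataclass
from enum import Enum


class PersonaMode(Enum):
    TOOL = "tool"                   # non-anthropomorphic: no first person, no emotional language
    CONVERSATION = "conversation"   # limited anthropomorphism: first person allowed, persona cues restricted


@dataclass(frozen=True)
class PersonaPolicy:
    mode: PersonaMode = PersonaMode.TOOL   # point 1: the initial setting stays on the tool side
    allow_first_person: bool = False       # "I" only when explicitly switched to conversation mode
    allow_self_emotion: bool = False       # point 2: never claim feelings such as "lonely"
    reminder_every_n_turns: int = 20       # point 3: periodic transparency reminder


def build_system_prompt(policy: PersonaPolicy) -> str:
    """Turn the policy into plain-language instructions for the model."""
    rules = []
    if policy.mode is PersonaMode.TOOL or not policy.allow_first_person:
        rules.append("Answer impersonally; avoid first-person statements such as 'I think'.")
    else:
        rules.append("You may use 'I' for brevity, but only to state capabilities and limits.")
    if not policy.allow_self_emotion:
        rules.append("Never claim to have feelings, consciousness, or personal experiences.")
    rules.append(
        f"Every {policy.reminder_every_n_turns} turns, briefly restate that you are an AI system "
        "without consciousness or emotions."
    )
    return "\n".join(rules)


if __name__ == "__main__":
    # Default shipped to users: tool mode, low intimacy.
    print(build_system_prompt(PersonaPolicy()))
    print("---")
    # Opt-in: conversation mode allows "I" but still forbids claims of emotion.
    print(build_system_prompt(PersonaPolicy(mode=PersonaMode.CONVERSATION, allow_first_person=True)))
```

The point of the sketch is that none of this requires new technology; which defaults a company ships is a choice, and point 4 (detecting signs of dependency and handing off to external support) would sit in the same policy layer.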


As long as the cold truth relayed by Kashmir Hill, that "dependency is a problem but good for business," remains in place, leaving things unchecked will tend to push intimacy ever higher. LinkedIn


Therefore, what is needed is industry self-regulation, formal regulation, or at the very least design principles grounded in product ethics.



Conclusion: "I" is a small word but a big contract

When we read "I," we unconsciously treat the counterpart as a "subject."
This is a useful reflex cultivated in human society and, at the same time, a reflex that is easily hacked.


The use of "I" by chatbots does not end with the explanation that it is for natural conversation. It embodies the core of the product, such as differentiation, continuous use, attachment, and trust. As reactions on social media indicate, while opinions are divided, there is consensus that "this is a design issue." LinkedIn


The future in which we use AI as a "convenient tool" and the future in which we lean on AI as "someone with a mind" both hinge on this small word, and on the design choices behind it.
