
Why Some People Love AI and Others Hate It: Divisions Revealed by Neuroscience and Social Media - The Design Theory of Transparency, Choice, and Accountability

November 5, 2025, 00:51

1. The article I read yesterday articulated today's confusion

The article republished on Phys.org on November 3, 2025 (local time) explains why people come to "like" or "dislike" AI through the lens of how the brain perceives risk and trust. The argument is straightforward yet sharp: people trust what they can understand. Conversely, a "black box" where the path between input and output is invisible becomes a breeding ground for anxiety. The article surveys the psychology comprehensively, from the harshness people feel towards algorithm failures (so-called "algorithm aversion") and the tendency to anthropomorphize AI, to the discomfort of the uncanny valley. In short, it is often less an "AI issue" than a "human mind issue." phys.org


2. Why does "violation of expectations" hurt?—The root of algorithm aversion

People are tolerant of human error but harsh on machine error. From the perspective of expectation violation theory this is natural: we place implicit expectations on machines to be "logical, fair, and nearly infallible." When they fail, we feel that "at least a human can be questioned." Algorithm aversion is the concept that formalizes this psychology, explaining why machines lose trust more readily even after equivalent errors. The concept is well documented in reference works and aligns strikingly with experience in the field. Wikipedia


3. We project "mind" onto things without a "mind"

The article emphasizes that rationality alone does not reassure people. Tone of voice, facial expression, gaze, timing: people build trust by reading emotional cues outside of words. AI struggles with this. Never getting angry, tired, or hesitant is sometimes read as "cold." In the background is the uncanny valley, the discomfort created by entities that are similar to us but not quite the same. This emotional void fosters a sense of eeriness towards AI. phys.org


4. Threat to identity—Pain beyond "jobs being taken"

In some professions, the sense of being "replaced" invites resistance that goes beyond questions of efficiency. Teachers, writers, lawyers, designers: when something that appears equivalent to skills honed over years suddenly exists, one's self-image is shaken. This is what social psychology calls identity threat. Here, rational evaluation of the technology intertwines with existential anxiety. The article suggests that resistance is not mere conservatism but also a psychological defense. phys.org


5. What does social media amplify? The candid feelings of both welcome and resistance become visible

So what is the mood on social media? Looking at X, Reddit, and news comment sections, two camps clearly coexist: a "welcoming faction" that values efficiency and creativity, and a "concerned faction" that stresses distrust over employment, bias, and governance. Each focuses on different things.


  • Typical of the welcoming faction:
    Sharing experiences such as "the initial speed of research and copywriting is extraordinary" and "AI as a 'partner' alleviates loneliness." Many also draw the line at using it for a first pass in production and development, with humans making the final decision.

  • Typical of the concerned faction:
    "AI failures are inexplicable and scary," "the boundaries around misinformation and copyright are ambiguous," "I can't trust the companies or the regulators." On Reddit especially, generalized distrust along the lines of "we can't rely on tech companies or regulation" surfaces frequently. Reddit

Polling reported in the media also reflects this **tug-of-war between expectations and anxieties**. In the US, reports through the fall of 2025 found that concern about AI had grown compared with 2022. Job risks, aversion to "unnecessary automation," skepticism towards AI summaries: convenience and fear coexist. In the UK, a survey showing that more people see AI as an economic risk than as an opportunity drew wide discussion. People value "making sense" over novelty. The Washington Post, The Guardian


6. Anti-AI but still using AI—The reality of "resistance" in 2025

Interestingly, many people who define themselves as resistant to AI still use it in part. Some students, developers, and creators try to keep their distance over ethics, accuracy, and employment concerns, yet complete avoidance is difficult, and limited, selective use is the reality. Acceptance of technology is not binary. Psychologically, it is the felt loss of choice that fuels distrust. Axios


7. The path of media diffusion: How was the article read?

This Phys.org piece is a reprint of an article from The Conversation. Similar articles were syndicated and reprinted by other media, expanding exposure. Direct links on social media have not been explosive, but, as seen above, discussions in comment sections and at reprint destinations run along both pro and con lines. phys.org uk.news.yahoo.com


8. "Showing the mechanism" is not enough—Three principles of trust design

The article uses the metaphor "from black box to conversation." From there, let's narrow the principles of product and service design down to three; a small code sketch follows the list. phys.org

  1. Transparency
    State clearly, at first contact, "what data is used, how it is handled, and what the limitations are." Instead of a list of technical terms, lead with the conditions under which results fluctuate and the patterns the system handles poorly.

  2. Interrogability
    Leave room to question the results: summaries of evidence, alternatives, self-reporting of errors. A UI where users can ask "why?" eases the pain of violated expectations.

  3. Agency
    Ensure opt-in, fine-grained settings, and easy withdrawal rather than automation by default. Letting users delegate at their own pace is the key to acceptance.
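To make the three principles concrete, here is a minimal sketch in Python of what they might look like as a response contract. Everything here (the `AssistantResponse` class, the `ask` function, the field names) is a hypothetical illustration, not an API from the article or any specific product.

```python
# A minimal sketch of the three principles as a response contract.
# All names (AssistantResponse, ask) are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class AssistantResponse:
    answer: str
    evidence: list[str]      # transparency: where the answer comes from
    confidence: float        # transparency: stated up front, 0.0 to 1.0
    limitations: list[str]   # transparency: known "weak patterns"

    def why(self) -> str:
        """Interrogability: the user can always ask 'why?'."""
        lines = [f"Confidence: {self.confidence:.0%}"]
        lines += [f"- based on: {src}" for src in self.evidence]
        lines += [f"- caveat: {lim}" for lim in self.limitations]
        return "\n".join(lines)


def ask(question: str, automation_opted_in: bool) -> AssistantResponse | None:
    """Agency: nothing runs unless the user has opted in."""
    if not automation_opted_in:
        return None  # the choice to "not use" is respected
    return AssistantResponse(
        answer=f"(model output for: {question})",
        evidence=["internal knowledge base, 2024 snapshot"],
        confidence=0.72,
        limitations=["events after the snapshot may be missed"],
    )


if __name__ == "__main__":
    resp = ask("Summarize this contract", automation_opted_in=True)
    if resp is not None:
        print(resp.answer)
        print(resp.why())  # the result can be interrogated, not just consumed
```

The point of the shape is that transparency (evidence, confidence, limitations) travels with every answer, interrogation is one method call away, and nothing runs without opt-in.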


9. Notes on field implementation—Both product and policy

  • Product side

    • For important tasks, a "double lock": the AI proposes, a human gives final approval (see the sketch after this list).

    • Standardize the visualization of model confidence and uncertainty.

    • Keep audit logs and reproducible explanations (the "same input, same explanation" principle).

    • Preserve the freedom to "not do": avoid always-on AI summaries and imposed suggestions.

  • Organizational and educational side

    • Teach employees and learners digital competencies that include the **side effects of AI dependence** (ceasing to think for oneself, mislearning).

    • Institutionalize disclosure of AI involvement in generated output and a clear locus of human responsibility for important decisions.

    • In the public sector, make explainability and remedies mandatory requirements (for automated decisions in credit, recruitment, medical care, and so on).
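As one way to picture the "double lock" and audit-log points above, here is a minimal Python sketch. The function names (`ai_propose`, `human_review`, `decide`) and the JSONL log format are illustrative assumptions, not a reference to any real product.

```python
# A minimal sketch of the "double lock": the AI proposes, a human approves,
# and every decision leaves an audit trail. All names are illustrative.
import json
import time

AUDIT_LOG = "decisions.jsonl"


def ai_propose(case: dict) -> dict:
    """Stand-in for a model call; returns a proposal with its confidence."""
    return {
        "action": "approve",
        "confidence": 0.64,
        "rationale": "inputs fall within documented policy ranges",
    }


def human_review(proposal: dict) -> bool:
    """Final approval always rests with a person (here, a console prompt)."""
    print(f"AI proposes: {proposal['action']} "
          f"(confidence {proposal['confidence']:.0%})")
    print(f"Rationale: {proposal['rationale']}")
    return input("Approve? [y/N] ").strip().lower() == "y"


def decide(case: dict) -> bool:
    proposal = ai_propose(case)
    approved = human_review(proposal)
    # Audit log: the same input and the same recorded explanation can be
    # replayed later, supporting the "same input, same explanation" principle.
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "ts": time.time(),
            "case": case,
            "proposal": proposal,
            "human_approved": approved,
        }) + "\n")
    return approved


if __name__ == "__main__":
    decide({"applicant_id": "A-1024", "amount": 12000})
```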


10. Beyond polarization—Seeking the "minimum agreement"

There is no need to praise AI, nor to deny it completely. What we need is an agreed threshold for "under what conditions we can delegate." The four conditions below form that threshold, and a small sketch of checking them follows the list.

  • That the source of the data is indicated

  • That remedies are prepared for errors

  • That important decisions can be returned to humans

  • That the choice to "not use" is respected
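Read as preconditions, the four items above can be checked mechanically before any delegation happens. Below is a small Python sketch of that idea; the `DelegationPolicy` name and its fields are hypothetical, chosen only to mirror the checklist.

```python
# A sketch of the "minimum agreement" as an explicit precondition check:
# delegation is allowed only while all four conditions hold.
# DelegationPolicy and its field names are hypothetical.
from dataclasses import dataclass


@dataclass
class DelegationPolicy:
    data_sources_disclosed: bool  # the source of the data is indicated
    remedy_available: bool        # remedies exist when the AI errs
    human_override: bool          # important decisions can return to humans
    opt_out_respected: bool       # the choice to "not use" is respected

    def may_delegate(self) -> bool:
        return all((
            self.data_sources_disclosed,
            self.remedy_available,
            self.human_override,
            self.opt_out_respected,
        ))


policy = DelegationPolicy(
    data_sources_disclosed=True,
    remedy_available=True,
    human_override=True,
    opt_out_respected=False,
)
print(policy.may_delegate())  # False: one unmet condition blocks delegation
```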

When this minimum is met, AI moves from an **invisible black box** closer to a **convincing partner**. The latest reporting and social media sentiment are asking for precisely that. Facing a reality where convenience and anxiety coexist, designing a relationship that can be questioned is the only prescription for overcoming the divide between love and hate. The Washington Post



References and Sources (Referenced in the text)

  • Reprint on Phys.org (November 3, 2025): the psychology shaping likes and dislikes towards AI (black box, anthropomorphism, expectation violation, etc.). phys.org

  • Conceptual overview of algorithm aversion. Wikipedia

  • A glimpse of public opinion on social media (AI distrust threads on Reddit). Reddit

  • Trends in public opinion (growing skepticism in the US). The Washington Post

  • The UK's "risk over opportunity" trend. The Guardian

  • Reprint status of similar articles (Yahoo, etc.). uk.news.yahoo.com


Reference Articles

Why do some people love AI while others hate it? The answer lies in how our brains perceive risk and trust.
Source: https://phys.org/news/2025-11-ai-brains.html

