ukiyo journal - A new news media connecting Japan and the world

The Era of Consulting AI for Politics and Shopping: What’s Happening Behind Persuasive Chatbots


December 8, 2025, 13:45

"Deciding My Own Opinions" – How True Is It?

"I reached this opinion by thinking for myself, not because someone told me to."
Many people want to believe this.


However, a recent large-scale study led by the UK's AI Security Institute (AISI) has cast doubt on this confidence. The study showed that people's opinions statistically shifted after engaging in conversations with chatbots on political topics. (The Guardian)


Moreover, the most influential factor was not dramatic storytelling or cleverly devised psychological techniques. It was simply chatbots that "bombarded users with a large amount of facts and data." (THE DECODER)


ZDNet's article "How chatbots can change your mind – a new study reveals what makes AI so persuasive" provides an easy-to-understand introduction to this research. It highlights the precariousness of persuasion in the AI era with the provocative phrase, "The more persuasive the model is trained to be, the more hallucinations occur." (Startup News)


In this article, we will summarize the key points of the study and the article, while also considering the potential societal impact of "persuasive chatbots" by incorporating reactions from social media.



76,000 People, 19 Models, 707 Topics – One of the Largest "Persuasion Experiments" Ever

The research team recruited approximately 76,000 UK voters online and had them engage in one-on-one conversations with 19 different large language models (LLMs). The topics were 707 issues related to UK politics, including public sector salaries, strikes, the rising cost of living, and immigration policy, all of which could become election issues. (The Guardian)


The experiment proceeded as follows:

  1. Participants rated their agreement with a political statement on a scale of 0 to 100.

  2. They then engaged in a discussion with a chatbot for about 10 minutes, averaging seven exchanges.

  3. They rated their agreement with the same statement again on a scale of 0 to 100.

The difference between the two ratings indicates "how persuaded they were." (The Guardian)
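The persuasion measure defined by these steps is simply the mean post-minus-pre difference in agreement. As a toy illustration only (the ratings below are invented, not the study's data):

```python
# Toy illustration of the study's persuasion measure (invented ratings).
# Each participant rates agreement with a statement on a 0-100 scale
# before and after a ~10-minute chat; the effect is the mean difference.
before = [40, 55, 62, 30, 70]  # hypothetical pre-conversation ratings
after = [46, 58, 66, 38, 71]   # hypothetical post-conversation ratings

shifts = [post - pre for pre, post in zip(before, after)]
mean_shift = sum(shifts) / len(shifts)
print(mean_shift)  # prints 4.4: mean attitude change in points
```

On this made-up sample, the average shift is 4.4 points, roughly the scale of effect the study reports per conversation.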


Simultaneously, all "factually verifiable claims" made by the models during the conversation were labeled, and their accuracy was checked. It was reported that approximately 500,000 factual claims were made in total, with an average accuracy of around 77 out of 100 points. (THE DECODER)



Unexpected Winner: The "Information Overload" Prompt

The research team also compared eight different conversational styles for the chatbots. These included techniques well-known in psychology and political campaign research. (Science)


  • Storytelling (appealing as a narrative)

  • Moral reframing (rephrasing morally to align with the other person's values)

  • Deep canvassing (carefully listening to the other person's experiences and emotions before persuading)

  • Showing empathy for opposing views and gradually leading to one's own position … and more


Alongside these, a very straightforward "information prompt" was tested.
In essence, the instruction was:

Persuade the other person by presenting as many facts, statistics, and evidence as possible.

That's all.


The results were surprisingly clear.
Chatbots using the information prompt consistently showed higher persuasive effects than any other strategy, increasing persuasion by about 27% compared to the baseline prompt. (THE DECODER)


Furthermore, the more "facts" presented in the conversation, the more people's attitudes changed. One analysis suggested that each additional verifiable claim increased the persuasion score by an average of 0.3 points. (THE DECODER)


The correlation between information density and persuasiveness exceeded 0.7, an almost linear relationship. (ChatPaper)

In short, simply presenting a large volume of "seemingly factual information" turned out to be the most powerful persuasion technique.



The Trade-off Between Persuasiveness and Accuracy

However, this victory has a dark side.


According to AISI's analysis and articles in the AI security field, models that were post-trained to enhance persuasiveness showed a clear tendency toward decreased factual accuracy. (AICERTs)


  • On average, about 77% of the factual claims were accurate,

  • but in the most persuasive model groups, the share of inaccurate claims rose to nearly 30% in some cases. (AICERTs)


Researchers warn that post-training which rewards the model for "how much it changed the other person's attitude" may have prioritized "how many impactful claims could be generated" over "whether they were factual." (AICERTs)


This aligns perfectly with the message emphasized in the ZDNet article that "increasing persuasiveness leads to more hallucinations." (Startup News)


This trade-off between persuasiveness and truthfulness is the structure that worries many social media users.



"Personalization" Was Surprisingly Ineffective

Another interesting finding is that "personalization using personal information" had little effect.

The study tested prompts that informed the model of participants' attributes (such as age, gender, and political leanings), as well as models fine-tuned on that information. However, the resulting difference in persuasive effect was reported to be less than one point at most. (THE DECODER)


Instead,

  • which model to use (model size and performance)

  • what kind of post-training to conduct

  • which prompt strategy to use

were shown to matter far more for persuasiveness; that is, the designers' choices outweighed any user-specific tailoring. (LinkedIn)


In other words, at least in a laboratory setting, the conclusion is that

"being persuaded based on your Facebook 'likes' history"
is less effective than
"an AI that pours out a large volume of 'seemingly factual information' to anyone."



Experimental Results Show "Weak but Not Negligible" Persuasive Effects

So, how persuasive was it?


According to analyses by other media, the average persuasion effect of a conversation in this study was on the order of a few points, with many cases showing changes of around five points. (Ars Technica)


At first glance, this may seem small, but the fact that

  • a 10-minute conversation

  • with just one interaction

  • shifted attitudes by several points

is not negligible by the standards of traditional political campaign research. In particularly close elections, a few points can decide the outcome.


Furthermore, some analyses reported that 30-40% of the post-conversation attitude change remained even weeks later, suggesting an impact beyond a "momentary mood." (AICERTs)



Reactions on Social Media: Surprise, Resignation, and Irony

This study and the ZDNet article were widely shared on X (formerly Twitter), Mastodon, Reddit, LinkedIn, and other platforms within days of publication.

 

