More Human Than Human Propaganda: Are AI Bots Faking "Public Opinion"? — The Quiet Erosion of Democracy That Began on Social Media

In the past, when people thought of threats from social media, they often imagined simple bots flooding platforms with identical messages or blatant fake accounts. However, researchers are now on alert for a quieter, more human-like, and harder-to-detect presence. The new generation of bots, integrated with generative AI, doesn't just spread the same message with a single command. Instead, each bot behaves like a separate persona, adapting its language to the interests and flow of conversation, gradually altering the atmosphere. The issue lies not just in the misinformation itself but in the ability to artificially create the impression that "everyone thinks so."


This danger is not a fantasy. In mid-2023, Filippo Menczer and others at Indiana University identified a botnet called "fox8" with over 1,000 bots spreading cryptocurrency scams on X (formerly Twitter). What was notable was how human-like the posts were. In poorly managed instances, self-disclosure phrases like "As an AI language model..." revealed their true nature, but researchers see this as just the "tip of the iceberg." If managed more carefully, these bots would be hard to distinguish from regular users. Moreover, this botnet engaged in mutual conversations, naturally interacted with human posts, and expanded its exposure and influence by riding on recommendation algorithms.
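The "tip of the iceberg" the researchers describe was exposed by a strikingly simple signal: unmanaged bots occasionally leak self-disclosure phrases from the underlying language model. A minimal sketch of that idea follows; the phrase list and function name are illustrative assumptions, not the researchers' actual detection code.

```python
# Illustrative sketch: flag posts containing LLM self-disclosure phrases,
# the same kind of simple signal that exposed the fox8 botnet.
# The phrase list and function are hypothetical, not the study's code.

SELF_DISCLOSURE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i cannot",
]

def flag_suspected_llm_posts(posts):
    """Return the posts whose text contains a known self-disclosure phrase."""
    flagged = []
    for post in posts:
        text = post.lower()
        if any(phrase in text for phrase in SELF_DISCLOSURE_PHRASES):
            flagged.append(post)
    return flagged

posts = [
    "Check out this new coin, it's going to the moon!",
    "As an AI language model, I cannot endorse this token.",
]
print(flag_suspected_llm_posts(posts))
```

As the article notes, this only catches poorly managed bots; a carefully operated swarm leaves no such fingerprint, which is precisely why researchers call these finds the tip of the iceberg.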


What cannot be overlooked here is that the evolution of AI bots is moving beyond the stage of "telling lies more cleverly" to the stage of "disguising the very shape of public opinion." The "malicious AI swarms" warned about in Science in January 2026 operate not individually but as a coordinated group. The research team pointed out that such swarms could infiltrate communities, learn the style and slang of conversations, and rapidly test responsive messages, amplifying only the most effective narratives. In essence, countless AIs are simultaneously "reading the room." And they never stop, 24 hours a day.


The most important keyword here is "synthetic consensus": artificially manufactured agreement. People do not judge whether an opinion is correct based on its content alone; if they sense that many people around them share the same view, they become more likely to believe it themselves. Researchers describe this as an abuse of social proof. If hundreds or thousands of AI personas, blending in with natural-sounding empathy and casual chatter, infiltrate local bulletin boards, parenting communities, sports fan groups, or even election supporter circles, humans can easily be misled into thinking, "This is not a fringe opinion; it is already the common sentiment." The "public opinion" that underpins democracy is supposed to emerge from the clash of diverse individual voices, but that premise is collapsing.


Moreover, what makes this method troublesome is that it is difficult to counter with simple fact-checking. Taken individually, the posts may not be blatant falsehoods. Opinions, sarcasm, nods of agreement, questions, anecdotes, slightly exaggerated anxieties: when these "not entirely false small narratives" accumulate in large numbers, they coalesce into a single story. Even when contradictory information emerges, if multiple apparently independent humans seem aligned in the same direction, the impression does not easily fade. The bot swarm's target is not the truth but the feeling of being in the majority.


Researchers warn that it would not be surprising to see this technology deployed at scale by the 2028 U.S. presidential election. According to a report in The Guardian, a group of researchers and practitioners, including Nobel Peace Prize laureate Maria Ressa, expressed concern that AI swarms could even be used to promote denial of election results or acceptance of election cancellations. Early examples of AI-driven influence operations were also observed in the 2024 elections in Taiwan, Indonesia, and India. In other words, this is not a "future issue" but a method that is already being tested.



On the other hand, reactions on social media are not monolithic. Public posts reveal a prominent sense of crisis, with many saying, "Here it comes." On X, there is a shared view that AI bot swarms create scaled fake consensus, eroding democracy and free discussion, and on LinkedIn, researchers and stakeholders are spreading this as a "clarion call." Particularly in academic and policy-related reactions, there is a strong consensus that "more serious than misinformation is the appearance of independent majority voices."


However, there is another nuance in the reactions of general users. On Reddit, there is a significant sense of "this has been happening for years," with comments suggesting that "old-style operations like Russian propaganda bots have just become more sophisticated with AI." Here, distrust in platforms that have long neglected the issue takes precedence over fear of AI itself. In other words, the real threat is not the new technology but the design that makes fake accounts and provocative posts more likely to spread.


Another strong reaction is the concern that identity verification or strict monitoring, proposed as solutions, could invite dangers of their own. On Reddit, users worry that strong authentication could be overly invasive, eroding anonymity and exposing personal information. This is not merely a technical debate. There is suspicion that measures meant to protect democracy could instead expand surveillance and suppress speech. If countermeasures against AI bots head in the wrong direction, the intention to "protect society from bots" could end up making it harder for humans to speak.


This conflict is actually quite fundamental. If the issue of AI swarms is treated merely as a tech risk, countermeasures quickly lean towards "enhancing detection," "tightening authentication," and "suppressing suspicious statements." But that alone is insufficient and risky. Researchers also acknowledge that there is no single silver bullet. What is needed is not just monitoring individual posts but detecting patterns of coordinated behavior, allowing researchers access to platform data, advancing labeling and provenance display of AI-generated content, and most importantly, cutting off the structure where money circulates through fake engagement. As long as attackers profit, bots will not decrease.
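One of the directions just mentioned, detecting patterns of coordinated behavior rather than judging posts one by one, can be illustrated with a toy sketch: group accounts that publish near-identical text within a short time window. Everything here (the similarity threshold, the time window, the function name) is a hypothetical illustration, not any platform's actual detector.

```python
# Toy sketch of coordination detection: pair up accounts that post
# highly similar text close together in time. The thresholds are
# arbitrary illustrative values, not tuned parameters from a real system.
from difflib import SequenceMatcher

def find_coordinated_pairs(posts, sim_threshold=0.9, window_secs=300):
    """posts: list of (account, timestamp_secs, text) tuples.
    Return sorted pairs of distinct accounts whose posts are
    near-duplicates published within `window_secs` of each other."""
    pairs = set()
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            acct1, t1, text1 = posts[i]
            acct2, t2, text2 = posts[j]
            if acct1 == acct2 or abs(t1 - t2) > window_secs:
                continue
            sim = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
            if sim >= sim_threshold:
                pairs.add(tuple(sorted((acct1, acct2))))
    return sorted(pairs)

feed = [
    ("@bot_a", 100, "Everyone I know is voting NO on this."),
    ("@bot_b", 160, "Everyone I know is voting NO on this!"),
    ("@human", 200, "Does anyone have the full text of the bill?"),
]
print(find_coordinated_pairs(feed))  # → [('@bot_a', '@bot_b')]
```

Real coordination detection operates on far richer signals (posting rhythms, follower graphs, shared infrastructure), but the underlying shift is the same one the researchers call for: looking at relationships between accounts instead of the content of any single post.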


Ultimately, this issue cannot be dismissed with the phrase "AI is dangerous." The more troublesome aspect is that much of what we daily call "public opinion" relies on impressions from screens. Opinions seen repeatedly on timelines, claims that appear to be supported by multiple accounts, the atmosphere in comment sections, trending topics—these have been believed to be the result of collective human interest. However, if a large number of AI personas can blend in, we might not be seeing "other people's thoughts" but rather an optimized performance. The crisis of democracy is not in the moment the ballot box is taken away, but in the process where people's "feelings" are quietly rewritten before voting. What is happening now is the entrance to that process.


Source URL

FlaglerLive
https://flaglerlive.com/ai-bots/

Used for primary verification of the content
https://www.salon.com/2026/02/15/swarms-of-ai-bots-are-threatening-democracy-partner/

The Conversation
https://bibbase.org/network/publication/menczer-swarmsofaibotscanswaypeoplesbeliefsthreateningdemocracy-2026

Used for abstract verification of the Science paper on AI swarms threatening democracy
https://arxiv.org/abs/2506.06299

Summary by research institutions. Used for organizing features, infiltration, optimization, and constancy of AI swarms
https://www.bi.no/en/about-bi/news/2026/01/ai-swarms-threaten-democracy/

Summary by research institutions. Used for confirming synthetic consensus, detection, and policy response directions
https://www.cs.ubc.ca/news/2026/01/ai-swarms

Used for confirming concerns about the 2028 U.S. presidential election, early examples in the 2024 elections, and expert comments
https://www.theguardian.com/technology/2026/jan/22/experts-warn-of-threat-to-democracy-by-ai-bot-swarms-infesting-social-media

Research paper on the "fox8" botnet identified in 2023. Used for confirming the case of an AI botnet with over 1,000 bots
https://arxiv.org/pdf/2307.16336

Report on the fox8 botnet. Used for supplementary confirmation of self-disclosure phrases derived from ChatGPT and scam dissemination on X
https://www.wired.com/story/chat-gpt-crypto-botnet-scam/

Example of reactions on X. Used for confirming perceptions that AI bot swarms threaten democracy and free discussion
https://x.com/SpirosMargaris/status/2014639340547195380
https://x.com/bryan_horrigan/status/2014511410953519369

Example of reactions on LinkedIn. Used for confirming the sharing of warnings within the researcher and practitioner community
https://www.linkedin.com/posts/tangaudrey_how-malicious-ai-swarms-can-threaten-democracy-activity-7420685540989095936-EKqO
https://www.linkedin.com/posts/fmenczer_sciencepolicyforum-scienceresearch-activity-7421363633315577856-F9T9
https://www.linkedin.com/posts/science-magazine_sciencepolicyforum-ai-activity-7424097798943322112-6SKN

Example of reactions on Reddit. Used for confirming reaction trends such as "problems that existed before," "the algorithm is the essence," and "strengthening authentication is dangerous"
https://www.reddit.com/r/technology/comments/1qk5002/experts_warn_of_threat_to_democracy_from_ai_bot/
https://www.reddit.com/r/technology/comments/1r4jxlv/swarms_of_ai_bots_can_sway_peoples_beliefs/
https://www.reddit.com/r/self/comments/1qp8g9w/it_feels_like_we_should_be_turning_off_the/
https://www.reddit.com/r/science/comments/1qqjwz8/swarms_of_ai_personas_mimic_humans_so_well_they/