AI has started writing too much, and it's beginning to disrupt society.

AI is not "too smart." It's "too much."

Discussions about AI often center on performance: how naturally it writes, how human-like it behaves, and how much work it can replace. But the core issue highlighted in the FlaglerLive article is different. What is currently shaking society is not that AI is smarter than humans; it is that AI can produce an overwhelming volume of plausible-looking text instantaneously.

For example, in 2023 the science fiction magazine Clarkesworld was forced to halt new submissions after a flood of AI-generated stories. This event, cited symbolically in the original article, is not unique to publishing. The same phenomenon is emerging in every system where text is the entry point: letters to the editor, academic journals, public comments on policy, court filings, job applications, and social media posts. Previously, the sheer difficulty of writing and thinking acted as a natural rate limiter. Generative AI has abruptly removed that constraint.

The problem with this change is not the quality of the text but the volume, which overwhelms systems. Editors, reviewers, teachers, and judges are not flawless judgment machines; they are humans who read, select, differentiate, and process large amounts of text within limited time. With AI in the mix, these systems now face a battle against processing-capacity limits rather than a competition over quality. In other words, the issue lies not in AI's intelligence but in human bandwidth.

A symbol of this is government solicitation of public opinion. In 2026, it was reported that more than 20,000 opposing comments were sent via AI-assisted platforms during discussions of air regulation in Southern California, and follow-up inquiries confirmed cases in which comments had been submitted without the purported senders' knowledge. Of course, AI assisting citizens in expressing their opinions is not inherently bad. For people who are not confident writers, are short on time, or are unsure of English or technical terms, AI can support political participation. But the same technology lets companies and lobbyists mass-produce "grassroots" voices. The issue at stake is not efficiency but representation and authenticity.

In the courts, the same structure appears in a more serious form. In 2026, the National Center for State Courts warned that AI-generated evidence and citations threaten trust in the judiciary. Particularly in pro se litigation, cases involving nonexistent precedents and statutes and AI-manipulated images, video, and text have increased, with more than 350 instances of false citations by self-represented parties recorded in the U.S. The problem is not just the presence of fakes: even genuine evidence is now suspected of being AI-generated, eroding trust in evidence itself. The system rests not only on correctness but on belief in that correctness.

So can the problem be solved by better AI-detection tools? The original article calls this an "unwinnable arms race": as the generation side grows more sophisticated, so must the detection side. In practice, detectors do not work reliably. Stanford HAI reported that AI detectors frequently misclassify texts written by non-native English speakers, and a 2026 study showed that detectors that look highly accurate on benchmarks degrade sharply when writing styles, generation models, or data distributions change. Detection is necessary, but relying on it alone risks harming innocent people.

Reactions on social media are also revealing. Public posts show three broad responses. First, a practical question of how to balance benefits and harms: the Harvard Ash Center and several LinkedIn posts noted that while AI can aid democratic participation and support expression, bias, errors, and mass submissions can damage institutions. Second, the view that the issue is not detection but the design of trust: without emphasizing human judgment, experience, and context, systems will come to revolve solely around "authenticity." Third, a sense of crisis in education and academic publishing: practitioners described mistrust caused by false detections, AI-written peer reviews, and hallucinated citations as problems already underway.

Crucially, banning AI is unlikely to return things to normal. The original article points out that highly capable AI is already widespread and can run on a laptop. What society needs to consider is therefore not how to negate AI's existence, but how to draw the line between support and deception: how to introduce friction against mass submissions, where to retain human review, and where to delegate to AI assistance. This is a question of institutional design.

To be clear, AI also has a genuine bright side. As the original article emphasizes, those with time and resources could always rely on human assistants for ghostwriting, polishing applications, and editing academic English; AI opens that assistance to everyone. So what truly needs protecting may not be the old purism of whether a human wrote every word unaided. What needs safeguarding is that an individual's intent, experience, and responsibility, the "one voice" that institutions are meant to receive, are not swept away by a mass-produced fake crowd.

Ultimately, the issue of the AI era is not merely determining whether a text is authentic. It is what counts as a genuine opinion, who bears responsibility, and where the line lies between support and deception. Generative AI poses these questions to society relentlessly. The problem is not that AI can write human-like text; it is that human institutions were not designed for a world in which writing is too easy.


Sources

FlaglerLive
https://flaglerlive.com/overwhelming-ai/

Reprint used to verify the content and confirm the main arguments by Bruce Schneier and Nathan Sanders.
https://techxplore.com/news/2026-02-ai-generated-text-overwhelming-arms.html

Another publication of the same piece, used to summarize the full article.
https://www.washingtonpost.com/ripple/2026/02/05/ai-generated-text-is-overwhelming-institutions-setting-off-a-no-win-arms-race-with-ai-detectors/

Report on Clarkesworld halting submissions due to a flood of AI-generated works
https://www.theguardian.com/technology/2023/feb/21/sci-fi-publisher-clarkesworld-halts-pitches-amid-deluge-of-ai-generated-stories

Stanford HAI's explanation that AI detectors are biased against non-native English writers
https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers

2026 study showing that AI text detection's generalization performance is prone to collapse
https://arxiv.org/abs/2603.23146

Report on over 20,000 AI-assisted comments influencing decision-making on California air regulations
https://www.govtech.com/artificial-intelligence/ai-generated-comments-swayed-california-air-decision

Article by the National Center for State Courts addressing AI-generated evidence, false citations, and trust in the judiciary
https://www.ncsc.org/resources-courts/ai-generated-evidence-threat-public-trust-courts

Social media reaction 1. Public post by the Harvard Ash Center on the impact on democracy
https://www.linkedin.com/posts/harvardashcenterfordemocraticgovernanceandinnovation_ai-generated-text-is-overwhelming-institutions-activity-7425269516542185472-ltoL

Social media reaction 2. Public post questioning the balance of AI's benefits and harms
https://www.linkedin.com/posts/dr-david-ngatia_ai-generated-text-is-overwhelming-institutions-activity-7425403059226263552-brcN

Social media reaction 3. Public post discussing "data tsunami" and the need for governance
https://www.linkedin.com/posts/john-gasparovic-0ba26b15_ai-generated-text-is-overwhelming-institutions-activity-7425717144308404224-rThd

Social media reaction 4. Public post discussing misjudgment and trust decline in educational settings
https://www.linkedin.com/posts/william-garrity-b87456112_ai-generated-text-is-overwhelming-institutions-activity-7429204916256038912-abEL
