ukiyo journal - 日本と世界をつなぐ新しいニュースメディア

How Wikipedia Fights "AI Slop" — The Impact of "Speedy Deletion": New Rules to Protect Quality

August 10, 2025, 12:05

Introduction: How Encyclopedias Confront the "Flood of AI"

The rise of generative AI has brought speed, for better or worse. The ability to churn out large volumes of text in a short time is a double-edged sword for collaborative projects such as encyclopedias. Wikipedia now faces a surge of low-quality content known as "AI slop": text inflated with misinformation and fictitious citations. The Verge reports that the community has begun to function like an immune system against it, quickly detecting and eliminating foreign substances and learning how to handle them (an apt metaphor).


Contents of the New Rule: G15 "Pages Generated by LLM Without Human Review"

After much debate, an extension of the "speedy deletion" criteria was adopted. Deletion normally follows a seven-day public discussion, but pages clearly generated by an LLM and posted without human review can now be deleted immediately at the discretion of administrators (the criterion is named "G15"). There are two decisive indicators: (1) "prompt residue" phrases such as "Here is your Wikipedia article…" or "as a large language model…", and (2) fabricated citations, such as non-existent papers, unrelated sources, DOIs that do not resolve, or ISBNs with invalid checksums. (Wikipedia, 404 Media)
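As an illustration only (this is not Wikipedia's actual tooling), both decisive G15 signals lend themselves to mechanical checks: prompt residue is a plain phrase match, and an ISBN-13 can be verified arithmetically, since its digits, weighted alternately by 1 and 3, must sum to a multiple of 10. The phrase list and function names below are hypothetical.

```python
import re

# Illustrative "prompt residue" phrases of the kind G15 treats as decisive.
PROMPT_RESIDUE = [
    r"here is your wikipedia article",
    r"as a large language model",
    r"as an ai language model",
]

def has_prompt_residue(text: str) -> bool:
    """Return True if the text contains an obvious leftover LLM reply fragment."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in PROMPT_RESIDUE)

def isbn13_is_valid(isbn: str) -> bool:
    """Check the ISBN-13 checksum: digits weighted 1,3,1,3,... must sum
    to a multiple of 10. Hyphens and spaces are ignored."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits)) % 10 == 0

print(has_prompt_residue("As a large language model, I cannot browse."))  # True
print(isbn13_is_valid("978-0-13-110362-7"))  # True (checksum holds)
print(isbn13_is_valid("978-0-13-110362-8"))  # False (corrupted check digit)
```

A real patroller would of course weigh context; a phrase match alone flags a page for human attention, not automatic deletion.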


This new criterion targets only cases that are "obviously worthy of immediate deletion." Simply having an AI-like writing style is not enough. The key point is that it is positioned as an "obvious violation" alongside traditional speedy deletions for vandalism, advertising, fictitious articles, harassment, etc. (like G10 or G11). 


Catalog of Detecting "AI-like" Content: WikiProject AI Cleanup

On the community side, "WikiProject AI Cleanup" has been launched to map the "landmines" of AI-specific phrasing and formatting. These include overly promotional tones, strangely polished rhetoric, overuse of "moreover," curly quotation marks (“ ”), and excessive em dashes (—), among others. However, these are merely guidelines; the decisive factors for deletion remain the direct prompt imports and false citations described above.
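A minimal sketch of how such circumstantial markers could be tallied. The marker list is illustrative, not the project's actual catalog, and, as the project itself stresses, a high count alone would never justify deletion:

```python
def ai_style_signals(text: str) -> dict:
    """Count circumstantial "AI-like" markers of the kind catalogued by
    WikiProject AI Cleanup. Guidelines only; never decisive on their own."""
    return {
        "curly_quotes": text.count("\u201c") + text.count("\u201d"),  # “ and ”
        "em_dashes": text.count("\u2014"),                            # —
        "moreover": text.lower().count("moreover"),
    }

sample = "Moreover, the project \u2014 launched recently \u2014 boasts \u201cpolished\u201d rhetoric."
print(ai_style_signals(sample))  # {'curly_quotes': 2, 'em_dashes': 2, 'moreover': 1}
```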


Initially Reported by 404 Media, Then Spread to Global Media

The first to report the policy shift was 404 Media; outlets such as PCWorld and Germany's Heise followed, making the issue visible globally. The key point is that Wikipedia, which prizes its culture of discussion, has now codified the conditions under which speed may exceptionally take priority. (404 Media, PCWorld, heise online)


Foundation's Stance: AI Summaries Halted, but "AI to Assist Humans" Continues

The Wikimedia Foundation, which operates independently of community decisions, halted an experiment in June that displayed AI-generated summaries at the top of articles, after strong community backlash calling for it to stop. However, the rollout of "AI to assist editors," such as vandalism detection and translation support, continues. Edit Check alerts editors to missing citations and checks for neutrality, while Paste Check asks whether a large pasted passage is truly the editor's own writing.


What's Changing: Speed, Burden, Reliability

  • Speed: Obvious AI slop can now be addressed immediately, without waiting for discussion. Quickly removing vandalism and fabricated citations from new articles frees time for other editing work. (404 Media)

  • Reduced Burden: It lightens the cleanup load created by unreviewed AI articles, allowing limited patrolling resources to be reallocated.

  • Reliability: Misquotations and phantom literature shake the very foundation of an encyclopedia. Standardizing easily recognizable, decisive traces is therefore significant. (PCWorld)


However, It's Not a "Cure-All"

Some in the community see this as merely a band-aid. AI-generated drafts can become high-quality if carefully reviewed by humans, and poor articles can be written by humans too. That is why G15 targets only obvious blunders. In the medium to long term, stronger support tools such as Edit Check, consensus-building on talk pages, and workflows for source verification remain indispensable. (PCWorld)


Social Media Reactions: Mostly Supportive, but Concerns About Misjudgment and Chilling Effects



  • X (formerly Twitter): Posts from The Verge and Slashdot served as hubs for dissemination. The general sentiment was approving, but some asked, "Isn't this a rejection of all AI?"

  • Reddit: In r/technology and r/wikipedia, support was prominent ("the direction is right," "this makes me want to donate more"), but there were also warnings about discussions being ghostwritten by AI and concerns about the risk of misjudgment.

  • Hacker News: Long-time readers worry about circular references of AI slop (AI→Wiki→next-generation AI). The standardization is welcomed, but discussion continues on how to balance it with openness.

  • Threads / Bluesky: Scattered voices appreciated Wikipedia's public nature and suggested that other platforms should follow suit.


How to Prevent Misjudgment?

"AI-like" style is merely circumstantial evidence. Wikipedia's guidance states clearly that curly quotation marks or overuse of "moreover" alone are not grounds for deletion; the final decision rests on decisive factors such as direct prompt imports and fabricated citations. This serves as a safeguard against discouraging legitimate writers.


Why Human "Consensus Building" Is Still Needed

The trust placed in encyclopedias rests on the verifiability of sources and the transparency of revision histories. AI can assist in these areas but cannot replace them. The "immune system" metaphor used by the Foundation's representative ultimately means that human consensus is the antibody. To outpace the spread of misinformation, the community must raise its own speed of learning.


Outlook: Because It's Open, Codifying the Boundaries

G15 is a minimal, sanctioned exception to Wikipedia's usual pace, meant to protect reliability without compromising openness. Laying down tracks for exceptional rapid response within a culture that values discussion is no small achievement. Going forward, the keys will be transparency about the creation process, as in Paste Check, and the advancement of AI-assisted vandalism detection. Wikipedia is likely to evolve not by rejecting AI but by distinguishing "AI that assists humans" from "AI that deceives humans."


Reference Articles

How Wikipedia is Fighting Low-Quality AI Content
Source: https://www.theverge.com/report/756810/wikipedia-ai-slop-policies-community-speedy-deletion
