By Paul Wallis
SENIOR EDITOR
DIGITAL JOURNAL
Published March 27, 2026

Wikipedia image — © AFP Can Erok
Wikipedia has announced that it’s banning AI-generated content. It’s not obvious, but this is very good, if unexpected, news for content creators and publishers.
This is also a very practical move for such a baseline reference point. AI’s droning repetitive garbage isn’t information. It’s not even really content, more a sort of filler material. It’s certainly no use at all when you’re trying to get facts. It’s substandard dreck of the worst sort.
Wikipedia has a lot of skin in this game. It’s a global factor in information. It’s a baseline reference for just about everything and everyone.
Wikipedia gets a lot of flak, some justified, some not, for content quality. Most people don’t know, and most critics don’t mention, the discussion tab on each and every page. Wikipedia content is often contested, and sometimes the contests are savage, to the point of being genuinely ferocious.
The critiques are a rough equivalent of peer review. That doesn’t mean the critics are necessarily right, but at least points are made.
There’s something to argue with.
This isn’t the same thing as the unquestioning, unchecked, and indefensibly incorrect pablum put out by some very lazy news media, for instance. Maybe it’s because Wikipedia knows its role?
AI has become a huge global error factory in just about all sectors in a few years. That’s not a sustainable or tolerable situation. It’s appropriate that a general information site like Wikipedia has found at least the start of a fix.
Some points for consideration here:
Wikipedia generates huge amounts of content.
To control AI content on Wikipedia, you need a working system that can operate at that scale.
You also need the critique process in place as an added safeguard.
If you have any level of expertise in your user base, this is about survival, not just cosmetic quality control.
Seems simple, doesn’t it?
Now apply these principles to academia, business reporting, news media, and anyone who doesn’t want to endure more AI slop in their lives.
Banning AI content is really just a coarse first-line filter. You might miss some things. There’s another factor at work here, and Wikipedia may have just found AI’s Achilles heel.
AI has a serious, perhaps fatal weakness that can be easily managed by the critique process.
AI is terrible at continuity.
It often lacks focus and drivels on endlessly.
That’s where the really turgid slop comes from. Everyone notices the repetition. Nobody seems to notice the truly godawful hash it can make out of any subject simply because of the volume of content. The most useless garbage will mindlessly continue its illogical babble indefinitely.
None of this rubbish could survive a critique from a 2-year-old child. The sheer lack of focus and off-topic drift is easily identifiable. One lousy prompt can do that, but it seems to be a default for all AI content generation. It’s worse than a writer paid solely by word count, and that can be pretty gruesome.
Again, oversight is the key, but this time it’s strategic, doctrinal oversight, geared to product standards. It’s backed up by peer-level reviews. It’s a key component of core business.
Wikipedia may have just found the way out of this black hole of utter AI crap.
__________________________________________________________
Disclaimer
The opinions expressed in this Op-Ed are those of the author. They do not purport to reflect the opinions or views of the Digital Journal or its members.