Wikipedia halts AI summaries test after backlash
Editors warned that AI summaries could undermine Wikipedia’s core values, replacing collaborative accuracy with unverified, centralised outputs.

Wikipedia has paused a controversial trial of AI-generated article summaries following intense backlash from its community of volunteer editors.
The Wikimedia Foundation had planned a two-week, opt-in test for mobile users, displaying summaries generated by Aya, an open-weight AI model developed by Cohere.
However, the reaction from editors was swift and overwhelmingly negative. The project's discussion page was flooded with objections from contributors who argued that such summaries risked undermining the site's reputation for neutrality and accuracy.
Some expressed concern that inserting AI content would override Wikipedia's long-standing collaborative approach by effectively installing a single, unverifiable voice atop articles.
Editors warned that AI-generated summaries lacked proper sourcing and could compromise the site’s credibility. Recent AI blunders by other tech giants, including Google’s glue-on-pizza mishap and Apple’s false death alert, were cited as cautionary examples of reputational risk.
For many, the possibility of similar errors appearing on Wikipedia was unacceptable.
Marshall Miller of the Wikimedia Foundation acknowledged the misstep in communication and confirmed the project’s suspension.
While the Foundation remains interested in exploring AI to improve accessibility, it has committed to ensuring any future implementation involves direct participation from the Wikipedia community.