AI bot swarms emerge as a new threat to democracy

Democratic resilience is being tested as AI-driven disinformation evolves beyond traditional bot campaigns.

Researchers and free-speech advocates warn that coordinated swarms of AI agents could soon be deployed to manipulate public opinion at a scale capable of undermining democratic systems worldwide.

According to a consortium of academics from leading universities, advances in generative and agentic AI now enable large numbers of human-like bots to infiltrate online communities and autonomously simulate organic political discourse.

Unlike earlier forms of automated misinformation, AI swarms are designed to adapt to social dynamics, learn community norms and exchange information in pursuit of a shared objective.

By mimicking human behaviour and spreading tailored narratives gradually, such systems could fabricate consensus, amplify doubt around electoral processes and normalise anti-democratic outcomes without triggering immediate detection.

Evidence of early influence operations has already emerged in recent elections across Asia, where AI-driven accounts have engaged users with large volumes of unverifiable information rather than overt propaganda.

Researchers warn that information overload, strategic neutrality and algorithmic amplification may prove more effective than traditional disinformation campaigns.

The authors argue that democratic resilience now depends on global coordination, combining technical safeguards such as watermarking and detection tools with stronger governance of political AI use.

They caution that, without collective action, AI-enabled manipulation risks outpacing existing regulatory and institutional defences.
