AI threats to democracy spark concern in new report
The Alan Turing Institute calls for urgent action to counter AI’s potential threats to electoral integrity.
A report by the Alan Turing Institute warns that AI has fuelled harmful narratives and spread disinformation during a major year for elections. Conducted by the Institute’s Centre for Emerging Technology and Security (CETaS), the study explores how generative AI tools, including deepfake technology and bot farms, have been used to amplify conspiracy theories and sway public opinion. While no concrete evidence links AI directly to changes in election outcomes, the study points to growing concerns over AI’s influence on voter trust.
Researchers observed AI-driven bot farms that mimicked genuine voters and used fake celebrity endorsements to spread conspiracies during key elections. These tactics, they argue, have eroded trust in democratic institutions and heightened public fear of AI’s potential misuse. Lead author Sam Stockwell noted that while evidence that AI has changed electoral results remains limited, there is an urgent need for transparency and better researcher access to social media data.
The Institute has outlined steps to counteract AI’s potential threats to democracy, suggesting stricter deterrents against disinformation, enhanced detection of deepfake content, improved media guidance, and stronger societal defences against misinformation. These recommendations aim to create a safer information environment as AI technology continues to advance.
In response to AI’s growing presence, major AI companies, including those behind ChatGPT and Meta AI, have tightened safeguards to prevent misuse. However, some startups, such as Haiper, still lag behind, with fewer protections in place, raising concerns that potentially harmful AI-generated content could reach the public.