AI threats to democracy spark concern in new report

A report by the Alan Turing Institute warns that AI has fuelled harmful narratives and spread disinformation during a major year for elections. Conducted by the Institute’s Centre for Emerging Technology and Security (CETaS), the study explores how generative AI tools, including deepfake technology and bot farms, have been used to amplify conspiracy theories and sway public opinion. While no concrete evidence links AI directly to changes in election outcomes, the study points to growing concerns over AI’s influence on voter trust.

Researchers observed AI-driven bot farms that mimicked genuine voters and used fake celebrity endorsements to spread conspiracies during key elections. These tactics, they argue, have eroded trust in democratic institutions and heightened public fear of AI’s potential misuse. Lead author Sam Stockwell noted that while evidence that AI has altered electoral results remains limited, there is an urgent need for greater transparency and for better researcher access to social media data.

The Institute has outlined steps to counteract AI’s potential threats to democracy, suggesting stricter deterrents against disinformation, enhanced detection of deepfake content, improved media guidance, and stronger societal defences against misinformation. These recommendations aim to create a safer information environment as AI technology continues to advance.

In response to these risks, major AI companies, including those behind ChatGPT and Meta AI, have tightened safeguards to prevent misuse. However, some startups, such as Haiper, still lag behind, with fewer safeguards in place, raising concerns that potentially harmful AI content could reach the public.

Australia introduces new AI regulations

Australia’s government is advancing its AI regulation framework with new rules focusing on human oversight and transparency. Industry and Science Minister Ed Husic announced that the guidelines aim to ensure AI systems allow for human intervention throughout their lifecycle to prevent unintended consequences or harm. These guidelines, though currently voluntary, are part of a broader consultation to determine whether they should become mandatory in high-risk settings.

The initiative follows rising global concern about the role of AI in spreading misinformation and fake news, fuelled by the growing use of generative AI models such as OpenAI’s ChatGPT and Google’s Gemini. In response, other jurisdictions, such as the European Union, have already enacted more comprehensive AI laws to address these challenges.

Australia’s existing AI regulations, first introduced in 2019, were criticised as insufficient for high-risk scenarios. Husic emphasised that only about one-third of businesses use AI responsibly, underscoring the need for stronger measures to ensure safety, fairness, accountability, and transparency.

Calls for ‘digital vaccination’ of children to combat fake news

A recently published report by the University of Sheffield and its research partners proposes a ‘digital vaccination’ for children to combat misinformation and bridge the digital divide. It sets out recommendations for digital upskilling and for innovative approaches to closing the divide that hampers the opportunities of millions of children in the UK.

The authors warn of severe economic and educational consequences if these issues go unaddressed, highlighting that over 40% of UK children lack access to broadband or a device and that digital skills shortages cost the UK £65 billion annually.

The report calls for adopting the Minimum Digital Living Standards framework to ensure every household has the digital infrastructure it needs. It also stresses the need for improved digital literacy education in schools, teacher training, and new government guidance to mitigate online risks, including fake news.

India blocks 16 YouTube-based news channels for spreading fake news

India’s Ministry of Information and Broadcasting has blocked 16 YouTube-based news channels for spreading fake news related to national security and India’s foreign relations. The blocked accounts include 10 YouTube channels from India and six from Pakistan. A statement from the ministry explained that these digital news channels had failed to provide information requested of them, as required under the country’s new IT rules. Consequently, the government invoked the emergency powers granted to it under those rules.