Fake video claims Nigeria is sending troops to Israel

A video circulating on TikTok falsely claims that Nigeria has announced the deployment of troops to Israel. Shared more than 6,100 times since 17 June, the video presents a fabricated news segment constructed from AI-generated visuals and outdated footage.

No official Nigerian authority has made any such announcement regarding military involvement in the ongoing Middle East crisis.

The video, attributed to a fictitious media outlet called ‘TBC News’, combines visuals of soldiers and aircraft with simulated newsroom graphics. However, no broadcaster by that name exists, and the logo and branding do not correspond to any known or legitimate media source.

Upon closer inspection, several anomalies suggest the use of generative AI. The news presenter’s appearance shifts subtly throughout the segment: changing clothing, facial inconsistencies, and a robotic voiceover all point to non-authentic production.

Similarly, the footage of military activity lacks credible visual markers. For example, a purported official briefing displays a coat of arms inconsistent with Nigeria’s national emblems, and the flags and insignia typically present at such events are absent.

While two brief aircraft clips appear authentic — originally filmed during a May airshow in Lagos — the remainder seems digitally altered or artificially generated.

In reality, Nigerian officials have publicly criticised Israel’s recent military actions in Iran and have given no indication of any intent to provide military support to Israel.

The video in question, therefore, significantly distorts Nigeria’s diplomatic position and risks exacerbating tensions during an already sensitive period in international affairs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Paraguay denies Bitcoin legal tender announcement

Paraguay’s government warned of possible unauthorised access to President Santiago Peña’s X account after a false Bitcoin legal tender claim. The now-deleted message announced a $5 million Bitcoin reserve fund and featured a decree with the national coat of arms.

Officials quickly noted inconsistencies in the statement’s formatting and tone. No matching information was published on government websites or state-run media. These red flags led observers to question the post’s authenticity almost immediately.

Authorities confirmed that the president’s account had shown signs of ‘irregular activity’, suggesting it may have been compromised. Citizens have been urged to ignore the claim and await verified updates through official channels.

Although countries like El Salvador have formally adopted Bitcoin as legal tender, Paraguay has made no such move. At the time of writing, no further details had been released regarding the source or method of the suspected breach.

AI threats to democracy spark concern in new report

A report by the Alan Turing Institute warns that AI has fuelled harmful narratives and spread disinformation during a major year for elections. Conducted by the Institute’s Centre for Emerging Technology and Security (CETaS), the study explores how generative AI tools, including deepfake technology and bot farms, have been used to amplify conspiracy theories and sway public opinion. While no concrete evidence links AI directly to changes in election outcomes, the study points to growing concerns over AI’s influence on voter trust.

Researchers observed AI-driven bot farms that mimicked genuine voters and used fake celebrity endorsements to spread conspiracies during key elections. These tactics, they argue, have eroded trust in democratic institutions and heightened public fear of AI’s potential misuse. Lead author Sam Stockwell noted that while evidence remains limited on AI changing electoral results, the urgent need for transparency and better access to social media data is clear.

The Institute has outlined steps to counteract AI’s potential threats to democracy, suggesting stricter deterrents against disinformation, enhanced detection of deepfake content, improved media guidance, and stronger societal defences against misinformation. These recommendations aim to create a safer information environment as AI technology continues to advance.

In response to AI’s growing presence, major AI companies, including those behind ChatGPT and Meta AI, have tightened security to prevent misuse. However, some startups, like Haiper, still lag behind, with fewer safeguards in place, leading to concerns over potentially harmful AI content reaching the public.

Australia introduces new AI regulations

Australia’s government is advancing its AI regulation framework with new rules focusing on human oversight and transparency. Industry and Science Minister Ed Husic announced that the guidelines aim to ensure that AI systems have human intervention capabilities throughout their lifecycle to prevent unintended consequences or harm. These guidelines, though currently voluntary, are part of a broader consultation to determine if they should become mandatory in high-risk settings.

The initiative follows rising global concerns about the role of AI in spreading misinformation and fake news, fuelled by the growing use of generative AI models such as OpenAI’s ChatGPT and Google’s Gemini. In response, other jurisdictions, such as the European Union, have already enacted more comprehensive AI laws to address these challenges.

Australia’s existing AI regulations, first introduced in 2019, were criticised as insufficient for high-risk scenarios. Husic emphasised that only about one-third of businesses use AI responsibly, underscoring the need for stronger measures to ensure safety, fairness, accountability, and transparency.

Calls for ‘digital vaccination’ of children to combat fake news

A recently published report by the University of Sheffield and its research partners proposes a ‘digital vaccination’ for children to combat misinformation and bridge the digital divide. It sets out recommendations for digital upskilling and innovative approaches to closing the divide that limits the opportunities of millions of children in the UK.

The authors warn of severe economic and educational consequences if these issues go unaddressed, highlighting that over 40% of UK children lack access to broadband or a device and that digital skills shortages cost £65 billion annually.

The report calls for adopting the Minimum Digital Living Standards framework to ensure every household has the digital infrastructure it needs. It also stresses the need for improved digital literacy education in schools, teacher training, and new government guidance to mitigate online risks, including fake news.

India blocks 16 YouTube-based news channels for spreading fake news

India’s information and broadcasting ministry decided to block 16 YouTube-based news channels for spreading fake news related to national security and India’s foreign relations. The blocked accounts include 10 YouTube channels from India and six from Pakistan. A statement by the ministry explained that these digital news channels had failed to provide the information requested of them, as required under the country’s new IT rules. Consequently, the government invoked its emergency powers under those rules.