AI-generated video falsely claims US military to ‘take over’ Nigerian army

A video circulating online, purporting to show a US military officer announcing that the United States would take control of the Nigerian Army, is false.

Independent analysis has revealed that the clip was likely generated or heavily manipulated using AI, and no official announcement or credible source supports this claim.

Fact-checkers ran the clip through AI-detection tools, which flagged high levels of manipulation, while further investigation uncovered inconsistencies in uniform insignia and microphones linked to non-existent media outlets. No verified reports indicate that US military forces are intervening in Nigerian defence operations.

The false claim has spread on platforms including X (formerly Twitter), generating alarm and misinterpretation about foreign military involvement in Nigeria.

Experts warn that deepfakes and AI-generated misinformation are becoming harder to spot without specialised tools and verification.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reshapes media in North Macedonia with new regulatory guidance

A new analysis examines the impact of AI on North Macedonia’s media sector, offering guidance on ethical standards, human rights, and regulatory approaches.

Prepared in both Macedonian and English, the study benchmarks the country’s practices against European frameworks and provides actionable recommendations for future regulation and self-regulation.

The research, supported by the EU and Council of Europe’s PRO-FREX initiative and in collaboration with the Agency for Audio and Audiovisual Media Services (AVMU), was presented during Media Literacy Days 2025 in Skopje.

It highlights the relevance of EU and Council of Europe guidelines, including the Framework Convention on AI and Human Rights, and guidance on responsible AI in journalism.

AVMU’s involvement underlines its role in ensuring media freedom, fairness, and accountability amid rapid technological change. Participants highlighted the need for careful policymaking to manage AI’s impact, protecting media diversity, journalistic standards, and public trust online.

The analysis forms part of broader efforts under the Council of Europe and the EU’s Horizontal Facility for the Western Balkans and Türkiye, aiming to support North Macedonia in aligning media regulation with European standards while responsibly integrating AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US platforms signal political shift in DSA risk reports

Major online platforms have submitted their 2025 systemic risk assessments under the Digital Services Act as the European Commission moves towards issuing its first fine against a Very Large Online Platform.

The reports arrive amid mounting political friction between Brussels and Washington, placing platform compliance under heightened scrutiny on both regulatory and geopolitical fronts.

Several US-based companies adjusted how risks related to hate speech, misinformation and diversity are framed, reflecting political changes in the US while maintaining formal alignment with EU law.

Meta softened enforcement language, reclassified hate speech under broader categories and reduced visibility of civil rights structures, while continuing to emphasise freedom of expression as a guiding principle.

Google and YouTube similarly narrowed references to misinformation, replaced established terminology with less charged language and limited enforcement narratives to cases involving severe harm.

LinkedIn followed comparable patterns, removing references to earlier commitments on health misinformation, civic integrity and EU voluntary codes that have since been integrated into the DSA framework.

X largely retained its prior approach, although its report continues to reference cooperation with governments and civil society that contrasts with the platform’s public positioning.

TikTok diverged from other platforms by expanding disclosures on hate speech, election integrity and fact-checking, likely reflecting its vulnerability to regulatory action in both the EU and the US.

European regulators are expected to assess whether these shifts represent genuine risk mitigation or strategic alignment with US political priorities.

As systemic risk reports increasingly inform enforcement decisions, subtle changes in language, scope and emphasis may carry regulatory consequences well beyond their formal compliance function.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI adds pinned chat feature to ChatGPT apps

The US tech company OpenAI has begun rolling out a pinned chats feature in ChatGPT across web, Android and iOS, allowing users to keep selected conversations fixed at the top of their chat history for faster access.

The function mirrors familiar behaviour from messaging platforms such as WhatsApp and Telegram, sparing users from repeatedly scrolling through past chats.

Users can pin a conversation by selecting the three-dot menu on the web or by long-pressing on mobile devices, ensuring that essential discussions remain visible regardless of how many new chats are created.

The update follows earlier interface changes aimed at helping users explore conversation paths without losing the original discussion thread.

Alongside pinned chats, OpenAI is moving ChatGPT toward a more app-driven experience through an internal directory that allows users to connect third-party services directly within conversations.

The company says these integrations support tasks such as bookings, file handling and document creation without switching applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands AI training for newsrooms worldwide

The US tech company OpenAI has launched the OpenAI Academy for News Organisations, a new learning hub designed to support journalists, editors and publishers adopting AI in their work.

The initiative builds on existing partnerships with the American Journalism Project and The Lenfest Institute for Journalism, reflecting a broader effort to strengthen journalism as a pillar of democratic life.

The Academy goes live with practical training, newsroom-focused playbooks and real-world examples aimed at helping news teams save time and focus on high-impact reporting.

Areas of focus include investigative research, multilingual reporting, data analysis, production efficiency and operational workflows that sustain news organisations over time.

Responsible use sits at the centre of the programme. Guidance on governance, internal policies and ethical deployment is intended to address concerns around trust, accuracy and newsroom culture, recognising that AI adoption raises structural questions rather than purely technical ones.

OpenAI plans to expand the Academy in the year ahead with additional courses, case studies and live programming.

Through collaboration with publishers, industry bodies and journalism networks worldwide, the Academy is positioned as a shared learning space that supports editorial independence while adapting journalism to an AI-shaped media environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark pushes digital identity beyond authentication

Digital identity has long focused on proving that the same person returns each time they log in. That function still matters, yet online representation increasingly happens through faces, voices and mannerisms embedded in media rather than credentials alone.

As synthetic media becomes easier to generate and remix, identity shifts from an access problem to a problem of media authenticity.

Denmark’s ‘Own Your Face’ proposal reflects this shift by treating personal likeness as something that should be controllable in the same way accounts are controlled.

Digital systems already verify who is requesting access, yet lack a trusted middle layer to manage what is being shown when media claims to represent a real person. The proxy model illustrates how an intermediary layer can bring structure, consistency and trust to otherwise unmanageable flows.

Efforts around content provenance point toward a practical path forward. By attaching machine-verifiable history to media at creation and preserving it as content moves, identity extends beyond login to representation.
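As a rough illustration of that mechanism, the sketch below binds a signed provenance claim to a media file’s content hash at creation, so any later modification breaks the binding. It is a minimal stand-in under stated assumptions: the function names and demo key are hypothetical, and it uses a symmetric key where real provenance standards such as C2PA rely on certificate-based asymmetric signatures.

```python
import hashlib
import hmac
import json
import time

# Hypothetical symmetric demo key; real provenance systems (e.g. C2PA) sign
# with asymmetric keys tied to X.509 certificates.
SIGNING_KEY = b"demo-signing-key"

def create_provenance_record(media_bytes: bytes, creator: str) -> dict:
    """Bind a signed claim to the media's content hash at creation time."""
    claim = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "creator": creator,
        "created_at": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Re-derive the hash and check the signature; editing either the media
    or the claim invalidates the record."""
    if hashlib.sha256(media_bytes).hexdigest() != record.get("content_sha256"):
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))

media = b"raw image bytes"
record = create_provenance_record(media, creator="verified-newsroom")
print(verify_provenance(media, record))         # True: untouched media
print(verify_provenance(media + b"x", record))  # False: content was altered
```

The design point is that verification rests on cryptographic evidence carried with the media, not on a viewer’s visual judgement, which is what lets provenance travel as content is shared and remixed.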

Broad adoption would not eliminate deception, yet it would raise the baseline of trust by replacing visual guesswork with evidence, helping digital identity evolve for an era shaped by synthetic media.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated podcasts flood platforms and disrupt the audio industry

Podcasts generated by AI are rapidly reshaping the audio industry, with automated shows flooding platforms such as Spotify, Apple Podcasts and YouTube.

Advances in voice cloning and speech synthesis have enabled the production of large volumes of content at minimal cost, allowing AI hosts to compete directly with human creators in an already crowded market.

Some established podcasters are experimenting cautiously, using cloned voices for translation, post-production edits or emergency replacements. Others have embraced full automation, launching synthetic personalities designed to deliver commentary, biographies and niche updates at speed.

Studios such as Los Angeles-based Inception Point AI have taken the model to scale, producing hundreds of thousands of episodes by targeting micro-audiences and trending searches instead of premium advertising slots.

The rapid expansion is fuelling concern across the industry, where trust and human connection remain central to listener loyalty.

Researchers and networks warn that large-scale automation risks devaluing premium content, while creators and audiences question how far AI voices can replace authenticity without undermining the medium itself.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools enable large-scale monetisation of political misinformation in the UK

YouTube channels spreading fake and inflammatory anti-Labour videos have attracted more than a billion views this year, as opportunistic creators use AI-generated content to monetise political division in the UK.

Research by non-profit group Reset Tech identified more than 150 channels promoting hostile narratives about the Labour Party and Prime Minister Keir Starmer. The study found the channels published over 56,000 videos, gaining 5.3 million subscribers and nearly 1.2 billion views in 2025.

Many videos used alarmist language, AI-generated scripts and British-accented narration to boost engagement. Starmer was referenced more than 15,000 times in titles or descriptions, often alongside fabricated claims of arrests, political collapse or public humiliation.

Reset Tech said the activity reflects a wider global trend driven by cheap AI tools and engagement-based incentives. Similar networks were found across Europe, although UK-focused channels were mostly linked to creators seeking advertising revenue rather than foreign actors.

YouTube removed all identified channels after being contacted, citing spam and deceptive practices as violations of its policies. Labour officials warned that synthetic misinformation poses a serious threat to democratic trust, urging platforms to act more quickly and strengthen their moderation systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit challenges Australia’s teen social media ban

The US social media company Reddit has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open to younger users, such as Roblox, Pinterest and YouTube Kids, while platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under 16.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Adobe brings its leading creative tools straight into ChatGPT

Yesterday, Adobe opened a new chapter for digital creativity by introducing Photoshop, Adobe Express and Adobe Acrobat inside ChatGPT.

The integration gives ChatGPT’s 800 million weekly users direct access to trusted creative and productivity tools through a conversational interface. Adobe aims to make creative work easier for newcomers by linking its technology to simple written instructions.

Photoshop inside ChatGPT offers selective edits, tone adjustments and creative effects, while Adobe Express brings quick design templates and animation features to people who want polished content without switching between applications.

Acrobat adds powerful document controls, allowing users to organise, edit or redact PDFs inside the chat. Each action blends conversation with Adobe’s familiar toolsets, giving users either simple text-driven commands or fine control through intuitive sliders.

The launch reflects Adobe’s broader investment in agentic AI and its Model Context Protocol. Earlier releases such as Acrobat Studio and AI Assistants for Photoshop and Adobe Express signalled Adobe’s ambition to expand conversational creative experiences.

Adobe also plans to extend its upcoming Firefly AI Assistant across multiple apps to support faster movement from an idea to a finished design.

All three apps are now available to ChatGPT users on desktop, web and iOS, with Android support expanding soon. Adobe positions the integration as an entry point for new audiences who may later move into the full desktop versions for deeper control.

The company expects the partnership to widen access to creative expression by letting anyone edit images, produce designs or transform documents simply by describing what they want to achieve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!