AI governance takes focus at UN security dialogue

The UN will mark the fourth International Day for the Prevention of Violent Extremism Conducive to Terrorism on 12 February 2026 with a high-level dialogue focused on AI. The event will examine how emerging technologies are reshaping both prevention strategies and extremist threats.

Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.

A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.

Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU challenges Meta over WhatsApp AI restrictions

The European Commission has warned Meta that it may have breached EU antitrust rules by restricting third-party AI assistants from operating on WhatsApp. A Statement of Objections outlines regulators’ preliminary view that the policy could distort competition in the AI assistant market.

The probe centres on updated WhatsApp Business terms announced in October 2025 and enforced from January 2026. Under the changes, rival general-purpose AI assistants were effectively barred from accessing the platform, leaving Meta AI as the only integrated assistant available to users.

Regulators argue that WhatsApp serves as a critical gateway for consumers to access AI services. Excluding competitors could reinforce Meta’s dominance in communication applications while limiting market entry and expansion opportunities for smaller AI developers.

Interim measures are now under consideration to prevent what authorities describe as potentially serious and irreversible competitive harm. Meta can respond before any interim measures are imposed, while the broader antitrust probe continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Learnovate launches community of practice on AI for learning

The Learnovate Centre, a global innovation hub at Trinity College Dublin focused on the future of work and learning, is spearheading a community of practice on responsible AI in learning. The community brings together educators, policymakers, institutional leaders and sector specialists to discuss safe, effective and compliant uses of AI in educational settings.

This initiative aims to help practitioners interpret emerging policy frameworks, including EU AI Act requirements, share practical insights and align AI implementation with ethical and pedagogical principles.

The community’s early activities include virtual meetings designed to build consensus around AI norms in teaching, compliance strategies and knowledge exchange on real-world implementation.

Participants come from diverse education domains, including schools, higher and vocational education and training, as well as representatives from government and unions, reflecting a broader push to coordinate AI adoption across the sector.

Learnovate plays a wider role in AI and education innovation, supporting research, summits and collaborative programmes that explore AI-powered tools for personalised learning, upskilling and ethical use cases.

It also partners with start-ups and projects (such as AI platforms for teachers and learners) to advance practical solutions that balance innovation with safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated ‘slop’ spreads on Spotify, raising platform integrity concerns

A TechRadar report highlights the growing presence of AI-generated music on Spotify, often produced in large quantities and designed to exploit platform algorithms or royalty systems.

These tracks, sometimes described as ‘AI slop’, are appearing in playlists and recommendations, raising concerns about quality control and fairness for human musicians.

The article outlines signs that a track may be AI-generated, including generic or repetitive artwork, minimal or inconsistent artist profiles, and unusually high volumes of releases in a short time. Some tracks also feature vague or formulaic titles and metadata, making them difficult to trace to real creators.

Readers are encouraged to use Spotify’s reporting tools to flag suspicious or low-quality AI content.

The issue is part of a broader governance challenge for streaming platforms, which must balance open access to generative tools with the need to maintain content quality, transparency and fair compensation for artists.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and only adults will be allowed to speak on community stages instead of sharing the feature with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study questions reliability of AI medical guidance

AI chatbots are not yet capable of providing reliable health advice, according to new research published in the journal Nature Medicine. Findings show users gain no greater diagnostic accuracy from chatbots than from traditional internet searches.

Researchers tested nearly 1,300 UK participants using ten medical scenarios, ranging from minor symptoms to conditions requiring urgent care. Participants were assigned to use OpenAI’s GPT-4o, Meta’s Llama 3, Cohere’s Command R+, or a standard search engine to assess symptoms and determine next steps.

Chatbot users identified their condition about one-third of the time, with only 45 percent selecting the correct medical response. Performance levels matched those relying solely on search engines, despite AI systems scoring highly on medical licensing benchmarks.

Experts attributed the gap to communication failures. Users often provided incomplete information or misinterpreted chatbot guidance.

Researchers and bioethicists warned that growing reliance on AI for medical queries could pose public health risks without professional oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan pledges major investment in AI by 2030

Pakistan plans to invest $1 billion in AI by 2030, Prime Minister Shehbaz Sharif said at the opening of Indus AI Week in Islamabad. The pledge aims to build a national AI ecosystem in Pakistan.

The government in Pakistan said AI education would expand to schools and universities, including remote regions. Islamabad also plans 1,000 fully funded PhD scholarships in AI to strengthen research capacity in Pakistan.

Shehbaz Sharif said Pakistan would train one million non-IT professionals in AI skills by 2030. Islamabad identified agriculture, mining and industry as priority sectors for AI-driven productivity gains in Pakistan.

Pakistan approved a National AI Policy in 2025, although implementation has moved slowly. Officials in Islamabad said Indus AI Week marks an early step towards broader adoption of AI across Pakistan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Super Bowl 2026 ads embrace the power of AI

AI dominated the 2026 Super Bowl advertising landscape as brands relied on advanced models instead of traditional high-budget productions.

Many spots showcased AI as both the creative engine behind the visuals and the featured product, signalling a shift toward technology-centred storytelling during the most expensive broadcast event of the year.

Svedka pursued a provocative strategy by presenting a largely AI-generated commercial starring its robot pair, a choice that reignited arguments over whether generative tools could displace human creatives.

Anthropic went in a different direction by using humour to mock OpenAI’s plan to introduce advertisements to ChatGPT, a jab that led to a pointed response from Sam Altman and fuelled an online dispute.

Meta, Amazon and Google used their airtime to promote their latest consumer offerings, with Meta focusing on AI-assisted glasses for extreme activities and Amazon unveiling Alexa+, framed through a satirical performance by Chris Hemsworth about fears of malfunctioning assistants.

Google leaned toward practical design applications instead of spectacle, demonstrating its Nano Banana Pro system transforming bare rooms into personalised images.

Other companies emphasised service automation, from Ring’s AI tool for locating missing pets to Ramp, Rippling and Wix, which showcased platforms designed to ease administrative work and simplify creative tasks.

Hims & Hers adopted a more social approach by highlighting the unequal nature of healthcare access and promoting its AI-driven MedMatch feature.

The variety of tones across the adverts underscored how brands increasingly depend on AI to stand out, either through spectacle or through commentary on the technology’s expanding cultural power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How early internet choices shaped today’s AI

Two decisions taken on the same day in February 1996 continue to shape how the internet, and now AI, is governed today. That is the central argument of Jovan Kurbalija’s blog ‘Thirty years of Original Sin of digital and AI governance,’ which traces how early legal and ideological choices created a lasting gap between technological power and public accountability.

The first moment unfolded in Davos, where John Perry Barlow published his Declaration of the Independence of Cyberspace, portraying the internet as a realm beyond the reach of governments and existing laws. According to Kurbalija, this vision helped popularise the idea that digital space was fundamentally separate from the physical world, a powerful narrative that encouraged the belief that technology should evolve faster than, and largely outside of, politics and law.

In reality, the blog argues, there is no such thing as a stateless cyberspace. Every online action relies on physical infrastructure, data centres, and networks that exist within national jurisdictions. Treating the internet as a lawless domain, Kurbalija suggests, was less a triumph of freedom than a misconception that sidelined long-standing legal and ethical traditions.

The second event happened the same day in Washington, D.C., when the United States enacted the Communications Decency Act. Hidden within it was Section 230, a provision that granted internet platforms broad immunity from liability for the content they host. While originally designed to protect a young industry, this legal shield remains in place even as technology companies have grown into trillion-dollar corporations.

Kurbalija notes that the myth of a separate cyberspace and the legal immunity of platforms reinforced each other. The idea of a ‘new world’ helped justify why old legal principles should not apply, despite early warnings, including from US judge Frank Easterbrook, that existing laws were sufficient to regulate new technologies by focusing on human relationships rather than technical tools.

Today, this unresolved legacy has expanded into the realm of AI. AI companies, the blog argues, benefit from the same logic of non-liability, even as their systems can amplify harm at a scale comparable to, or even greater than, that of other heavily regulated industries.

Kurbalija concludes that addressing AI’s societal impact requires ending this era of legal exceptionalism and restoring a basic principle that those who create, deploy, and profit from technology must also be accountable for its consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!