EU challenges Meta over WhatsApp AI restrictions

The European Commission has warned Meta that it may have breached EU antitrust rules by restricting third-party AI assistants from operating on WhatsApp. A Statement of Objections outlines regulators’ preliminary view that the policy could distort competition in the AI assistant market.

The probe centres on updated WhatsApp Business terms announced in October 2025 and enforced from January 2026. Under the changes, rival general-purpose AI assistants were effectively barred from accessing the platform, leaving Meta AI as the only integrated assistant available to users.

Regulators argue that WhatsApp serves as a critical gateway for consumers to access AI services. Excluding competitors could reinforce Meta’s dominance in communication applications while limiting market entry and expansion opportunities for smaller AI developers.

Interim measures are now under consideration to prevent what authorities describe as potentially serious and irreversible competitive harm. Meta can respond before any interim measures are imposed, while the broader antitrust probe continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU telecom simplification at risk as Digital Networks Act adds extra admin

The EU’s ambition to streamline telecom rules is facing fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.

The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.

Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.

Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.

The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could reduce vaccine uptake instead of supporting informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allowing misleading material to circulate unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and only adults will be allowed to speak on community stages, a feature previously shared with teens.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When grief meets AI

AI is now being used to create ‘deathbots’, chatbots that use a person’s messages and voice recordings to mimic them after death. The technology is part of a growing digital afterlife industry, with some people using it to maintain a sense of connection with loved ones who have passed away.

Researchers at Cardiff University studied how these systems recreate personalities using digital data such as texts, emails, and audio recordings. The findings described the experience as both fascinating and unsettling, raising questions about memory, identity, and emotional impact.

Tests showed current deathbots often fail to accurately reproduce voices or personalities due to technical limitations. Researchers warned that these systems rely on simplified versions of people, which may distort memories rather than preserve them authentically.

Experts believe the technology could improve, but remain uncertain whether it will become widely accepted. Concerns remain about emotional consequences and whether digital versions could alter how people remember those who have died.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan pledges major investment in AI by 2030

Pakistan plans to invest $1 billion in AI by 2030, Prime Minister Shehbaz Sharif said at the opening of Indus AI Week in Islamabad. The pledge aims to build a national AI ecosystem.

The government said AI education would expand to schools and universities, including in remote regions, and announced plans for 1,000 fully funded PhD scholarships in AI to strengthen research capacity.

Sharif said Pakistan would train one million non-IT professionals in AI skills by 2030, identifying agriculture, mining and industry as priority sectors for AI-driven productivity gains.

Pakistan approved a National AI Policy in 2025, although implementation has moved slowly. Officials said Indus AI Week marks an early step towards broader adoption of AI across the country.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators turn to AI despite platform fatigue

Educators in the US are increasingly using AI to address resource shortages, despite growing frustration with fragmented digital platforms. A new survey highlights rising dependence on AI tools across American schools and universities.

The study found many educators juggle numerous digital systems that fail to integrate smoothly. Respondents said constant switching between platforms adds to workload pressures and burnout.

AI use is focused on boosting productivity, with educators applying tools to research, writing and administrative tasks. Many also use AI to support student learning as budgets tighten.

Concerns remain around data security, ethics and system overload. Educators said better integration between AI and learning tools could ease strain and improve classroom outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New York weighs pause on data centre expansion

Lawmakers in New York have introduced a bill proposing a three-year pause on permits for new data centres. Supporters say rapid expansion linked to AI infrastructure risks straining energy systems in New York.

Concerns in New York focus on rising electricity demand and higher household bills as tech companies scale AI operations. Critics across the US argue local communities bear the cost of supporting large-scale computing facilities.

The New York proposal has drawn backing from environmental groups and politicians in the US who want time to set stricter rules. US Senator Bernie Sanders has also called for a nationwide halt on new data centres.

Officials in New York say the pause would allow stronger policies on grid access and fair cost sharing. The debate reflects wider US tension between economic growth driven by AI and environmental limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Shadow AI becomes a new governance challenge for European organisations

Employees are adopting generative tools at work faster than organisations can approve or secure them, giving rise to what is increasingly described as ‘shadow AI’. Unlike earlier forms of shadow IT, these tools can transform data, infer sensitive insights, and trigger automated actions beyond established controls.

For European organisations, the issue is no longer whether AI should be used, but how to regain visibility and control without undermining productivity. Shadow AI increasingly appears inside approved platforms, browser extensions and developer tools, expanding risks beyond data leakage.

Security experts warn that blanket bans often push AI use further underground, reducing transparency and trust. Instead, guidance from EU cybersecurity bodies increasingly promotes responsible enablement through clear policies, staff awareness, and targeted technical controls.

Key mitigation measures include mapping AI use across approved and informal tools, defining what data may safely enter prompts, and offering sanctioned alternatives. Logging, least-privilege access and approval steps become essential as AI acts across workflows.
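As a rough illustration of how those controls can combine, here is a minimal Python sketch of an internal AI gateway that enforces a tool allow-list, redacts sensitive data from prompts, and logs each use. The tool names and redaction patterns are hypothetical placeholders; a real deployment would draw them from the organisation’s own governance policies.

```python
import logging
import re

# Hypothetical allow-list and redaction rules for illustration only;
# real policies would come from an organisation's governance process.
APPROVED_TOOLS = {"sanctioned-assistant"}
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),           # card-like 16-digit numbers
    re.compile(r"[\w.]+@[\w.]+\.\w+"),   # email addresses
]

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")


def submit_prompt(tool: str, user: str, prompt: str) -> str:
    """Gate a prompt: block unapproved tools, redact sensitive data, log use."""
    if tool not in APPROVED_TOOLS:
        log.warning("blocked unapproved tool %s for user %s", tool, user)
        raise PermissionError(f"{tool} is not an approved AI tool")
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    log.info("user=%s tool=%s prompt_len=%d", user, tool, len(prompt))
    return prompt


# Example: the sanctioned tool is allowed, but the email is redacted first.
cleaned = submit_prompt("sanctioned-assistant", "alice",
                        "summarise the thread with bob@example.com")
```

The design mirrors the measures above: the allow-list steers staff towards sanctioned alternatives, the redaction step defines what data may enter prompts, and the log gives security teams the visibility that blanket bans tend to destroy.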

With the EU AI Act introducing clearer accountability across the AI value chain, unmanaged shadow AI is also emerging as a compliance risk. As AI becomes embedded across enterprise software, organisations face growing pressure to make safe use the default rather than the exception.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Super Bowl 2026 ads embrace the power of AI

AI dominated the 2026 Super Bowl advertising landscape as brands relied on advanced models instead of traditional high-budget productions.

Many spots showcased AI as both the creative engine behind the visuals and the featured product, signalling a shift toward technology-centred storytelling during the most expensive broadcast event of the year.

Svedka pursued a provocative strategy by presenting a largely AI-generated commercial starring its robot pair, a choice that reignited arguments over whether generative tools could displace human creatives.

Anthropic went in a different direction by using humour to mock OpenAI’s plan to introduce advertisements to ChatGPT, a jab that led to a pointed response from Sam Altman and fuelled an online dispute.

Meta, Amazon and Google used their airtime to promote their latest consumer offerings, with Meta focusing on AI-assisted glasses for extreme activities and Amazon unveiling Alexa+, framed through a satirical performance by Chris Hemsworth about fears of malfunctioning assistants.

Google leaned toward practical design applications instead of spectacle, demonstrating its Nano Banana Pro system transforming bare rooms into personalised images.

Other companies emphasised service automation, from Ring’s AI tool for locating missing pets to Ramp, Rippling and Wix, which showcased platforms designed to ease administrative work and simplify creative tasks.

Hims & Hers adopted a more social approach by highlighting the unequal nature of healthcare access and promoting its AI-driven MedMatch feature.

The variety of tones across the adverts underscored how brands increasingly depend on AI to stand out, either through spectacle or through commentary on the technology’s expanding cultural power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!