US freedom.gov and the EU’s DSA in a transatlantic fight over online speech

The transatlantic debate over ‘digital sovereignty’ is also, in no small measure, about whose rules govern online speech. In the EU, digital sovereignty has essentially meant building enforceable guardrails for platforms, especially around illegal content, systemic risks, and transparency, through instruments such as the Digital Services Act (DSA) and its transparency mechanisms for content moderation decisions. In Washington, the emphasis has been shifting toward ‘free speech diplomacy’, framing some EU online-safety measures as de facto censorship that spills across borders when US-based platforms comply with EU requirements.

What is ‘freedom.gov’?

The newest flashpoint is a reported US State Department plan to develop an online portal, widely described as ‘freedom.gov’, intended to help users in the EU and elsewhere access content blocked under local rules. The plan aligns with Trump administration policy and a State Department programme called Internet Freedom. It reportedly includes VPN-like functionality so traffic would appear to originate in the US, effectively sidestepping geographic enforcement of content restrictions. Within the US legal framework, the idea could be seen as a digital-rights tool, but experts warn it would export a US free-speech standard into jurisdictions that regulate hate speech and extremist material more tightly.

The ‘freedom.gov’ portal story sits within a broader escalation that has already moved from rhetoric to sanctions. In late 2025, the US imposed visa bans on several EU figures it accused of pressuring platforms to suppress ‘American viewpoints’, a move EU governments and officials condemned as unjustified and politically coercive. The episode made clear that Washington is treating some foreign content-governance actions not as domestic regulation, but as a challenge to US speech norms and US technology firms.

The EU legal perspective

From the EU perspective, this framing misses the point of the DSA. The Commission argues that the DSA is about platform accountability, requiring large platforms to assess and mitigate systemic risks, explain moderation decisions, and provide users with avenues to appeal. The EU has also built new transparency infrastructure, such as the DSA Transparency Database, to make moderation decisions more visible and auditable. Civil-society groups broadly supportive of the DSA stress that it targets illegal content and opaque algorithmic amplification; critics, especially in US policy circles, argue that compliance burdens fall disproportionately on major US platforms and can chill lawful speech through risk-averse moderation.

That’s where the two sides’ risk models diverge most sharply. The EU rules are shaped by the view that disinformation, hate speech, and extremist propaganda can create systemic harms that platforms must proactively reduce. US critics counter that ‘harm’ categories can expand into viewpoint policing, and that tools like a government-backed portal or VPN could be portrayed as restoring access to lawful expression. Yet the same reporting that casts the portal as a speech workaround also notes it may facilitate access to content the EU considers dangerous, raising questions about whether the initiative is rights-protective ‘diplomacy’, a geopolitical pressure tactic, or something closer to state-enabled circumvention.

Why does it matter?

The dispute has gone from theoretical to practical, reshaping digital alliances, compliance strategies, and even travel rights for policy actors, not to mention digital sovereignty in the governance of online discourse and data. The EU’s approach is to make platforms responsible for systemic online risks through enforceable transparency and risk-reduction duties, while the US approach is increasingly to contest those duties as censorship with extraterritorial effects, using instruments ranging from public messaging to visa restrictions, and, potentially, state-backed bypass tools.

What can we expect, then, if not a more fragmented internet, with platforms pulled between competing legal expectations, users encountering different speech environments by region, and governments treating content policy as an extension of foreign policy, complete with retaliation, countermeasures, and escalating mistrust?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT study finds AI chatbots underperform for vulnerable users

Research from the MIT Centre for Constructive Communication (CCC) finds that leading AI chatbots often provide lower-quality responses to users with lower English proficiency, less education, or who are outside the US.

The models tested, including GPT-4, Claude 3 Opus, and Llama 3, sometimes refused to answer or responded condescendingly. Using the TruthfulQA and SciQ datasets, researchers added user biographies to simulate differences in education, language, and country.

Accuracy fell sharply among non-native English speakers and less-educated users, with the most significant drop among those affected by both; users from countries like Iran also received lower-quality responses.

Refusal behaviour was notable. Claude 3 Opus declined 11% of questions for less-educated, non-native English speakers versus 3.6% for control users. Manual review showed 43.7% of refusals contained condescending language.

Some user profiles were refused answers on specific topics, even though the models answered the same questions correctly for other profiles.

The study echoes human sociocognitive biases, in which non-native speakers are often perceived as less competent. Researchers warn AI personalisation could worsen inequities, providing marginalised users with subpar or misleading information when they need it most.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within 48 hours, rather than allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines amounting to ten percent of their global revenue or risk having their services blocked in the UK.

The move reflects ministers’ commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government now intends victims to report an image once instead of repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools.
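
The hash-matching step described above can be sketched in a few lines. This is a toy illustration, not any platform’s actual system: real deployments typically use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas this sketch uses exact SHA-256 matching, and the blocklist contents here are hypothetical.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of previously flagged images.
# (Real systems favour perceptual hashes that tolerate re-encoding.)
BLOCKED_HASHES = {
    # digest of the placeholder payload b"flagged image bytes"
    hashlib.sha256(b"flagged image bytes").hexdigest(),
}

def is_blocked(upload: bytes) -> bool:
    """Reject an upload whose exact bytes match a previously flagged image."""
    return hashlib.sha256(upload).hexdigest() in BLOCKED_HASHES
```

The appeal of this design is that platforms never need to re-review a flagged image: once its digest is on the shared blocklist, any future upload of the identical file is caught automatically.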

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Summit in India hears call for safe AI

The UN Secretary-General has warned that AI must augment human potential rather than replace it, speaking at the India AI Impact Summit at Bharat Mandapam in New Delhi. He urged investment in workers so that technology strengthens, rather than displaces, human capacity.

He cautioned that AI could deepen inequality, amplify bias and fuel harm if left unchecked, called for stronger safeguards to protect people from exploitation, and insisted that no child should be exposed to unregulated AI systems.

Environmental concerns also featured prominently, with Guterres highlighting the rising energy and water demands of data centres. He urged a shift to clean power and warned against transferring environmental costs to vulnerable communities.

The UN chief proposed a $3 billion Global Fund on AI to build skills, data access and affordable computing worldwide. He argued that broader access is essential to prevent countries from being excluded from the AI age and to ensure AI supports sustainable development goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft outlines challenges in verifying AI-generated media

In an era of deepfakes and AI-manipulated content, determining what is real online has become increasingly complex. Microsoft’s report Media Integrity and Authentication reviews current verification methods, their limits, and ways to boost trust in digital media.

The study emphasises that no single solution can prevent digital deception. Techniques such as provenance tracking, watermarking, and digital fingerprinting can provide useful context about a media file’s origin, creation tools, and whether it has been altered.
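
To make the provenance idea concrete, here is a deliberately simplified sketch. It is not the C2PA format: C2PA uses X.509 certificates and signed manifests embedded in the media file, while this toy version binds a file’s digest and the creating tool’s name with an HMAC under an assumed shared key. All names here are illustrative.

```python
import hashlib
import hmac

# Stand-in for a real signing credential (C2PA would use a certificate).
SIGNING_KEY = b"publisher-secret"

def make_manifest(media: bytes, tool: str) -> dict:
    """Record the file's digest and origin tool, signed so edits are detectable."""
    digest = hashlib.sha256(media).hexdigest()
    payload = f"{digest}:{tool}".encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"sha256": digest, "tool": tool, "signature": signature}

def verify(media: bytes, manifest: dict) -> bool:
    """Check that neither the media nor its provenance record was altered."""
    digest = hashlib.sha256(media).hexdigest()
    if digest != manifest["sha256"]:
        return False  # file bytes changed after signing
    payload = f"{digest}:{manifest['tool']}".encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

Even this stripped-down version shows the core property the report discusses: any change to the pixels or to the claimed origin breaks verification, which is what makes provenance useful as context about a file’s history.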

Microsoft has pioneered these technologies, cofounding the Coalition for Content Provenance and Authenticity (C2PA) to standardise media authentication globally.

The report also addresses the risks of sociotechnical attacks, where even subtle edits can manipulate authentication results to mislead the public.

Researchers explored how provenance information can remain durable and reliable across different environments, from high-security systems to offline devices, highlighting the challenge of maintaining consistent verification.

As AI-generated or edited content becomes commonplace, secure media provenance is increasingly important for news outlets, public figures, governments, and businesses.

Reliable provenance helps audiences spot manipulated content, with ongoing research guiding clearer, practical verification displays for the public.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UNESCO expands multilingual learning through LearnBig

The LearnBig digital application is expanding access to learning, with UNESCO supporting educational materials in national and local languages instead of relying solely on dominant teaching languages.

The project aligns with International Mother Language Day and reflects long-standing research showing that children learn more effectively when taught in languages they understand from an early age.

The programme supports communities along the Thailand–Myanmar border, where children gain literacy and numeracy skills in both Thai and their mother tongues.

Young learners can make more substantial academic progress with this approach, which allows them to remain connected to their cultural identity rather than being pushed into unfamiliar linguistic environments. More than 2,000 digital books are available in languages such as Karen, Myanmar, and Pattani Malay.

LearnBig was developed within the ‘Mobile Literacy for Out-of-School Children’ programme, backed by partners including Microsoft, True Corporation, POSCO 1% Foundation and the Ministry of Education of Thailand.

The UNESCO initiative has reached more than 526,000 learners, with young people in Yala using tablets to access digital books, while learners in Mae Hong Son study content presented in their local languages.

The project illustrates the potential of digital innovation to bridge linguistic, social, and geographic divides.

By supporting children who often fall outside formal education systems, LearnBig demonstrates how technology can help build a more inclusive and equitable learning environment rather than reinforcing existing barriers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Macron calls Europe safe space for AI

French President Emmanuel Macron told the AI Impact Summit in New Delhi that Europe would remain a safe space for AI innovation and investment. He said the European Union would continue shaping global AI rules alongside partners such as India.

Macron pointed to the EU AI Act, adopted in 2024, as evidence that Europe can regulate emerging technologies while encouraging growth. He claimed that oversight would not stifle innovation but ensure responsible development, though he offered little evidence to support the claim.

The French leader said that France is doubling the number of AI scientists and engineers it trains, with startups creating tens of thousands of jobs. He added that Europe aims to combine competitiveness with strong guardrails.

Macron also highlighted child protection as a G7 priority, arguing that children must be shielded from AI-driven digital abuse. Europe, he said, intends to protect society while remaining open to investment and cooperation with India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic seeks deeper AI cooperation with India

The chief executive of Anthropic, Dario Amodei, has said India can play a central role in guiding global responses to the security and economic risks linked to AI.

Speaking at the India AI Impact Summit in New Delhi, he argued that the world’s largest democracy is well placed to become a partner and leader in shaping the responsible development of advanced systems.

Amodei explained that Anthropic hopes to work with India on the testing and evaluation of models for safety and security. He stressed growing concern over autonomous behaviours that may emerge in advanced systems and noted the possibility of misuse by individuals or governments.

He pointed to the work of international and national AI safety institutes as a foundation for joint efforts and added that the economic effect of AI will be significant and that India and the wider Global South could benefit if policymakers prepare early.

Through its Economic Futures programme and Economic Index, Anthropic studies how AI reshapes jobs and labour markets.

He said the company intends to expand information sharing with Indian authorities and bring economists, labour groups, and officials into regular discussions to guide evidence-based policy instead of relying on assumptions.

Amodei said AI is set to increase economic output and that India is positioned to influence emerging global frameworks. He signalled a strong interest in long-term cooperation that supports safety, security, and sustainable growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India unveils MANAV Vision as new global pathway for ethical AI

Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-centred direction for AI.

He described the framework as rooted in moral guidance, transparent oversight, national control of data, inclusive access and lawful verification. He argued that the approach is intended to guide global AI governance for the benefit of humanity.

The Prime Minister of India warned that rapid technological change requires stronger safeguards and drew attention to the need to protect children. He also said societies are entering a period where people and intelligent systems co-create and evolve together instead of functioning in separate spheres.

He pointed to India’s confidence in its talent and policy clarity as evidence of the country’s growing role in AI.

Modi announced that three domestic companies introduced new AI models and applications during the summit, saying the launches reflect the energy and capability of India’s young innovators.

He invited technology leaders from around the world to collaborate by designing and developing in India instead of limiting innovation to established hubs elsewhere.

The summit brought together policymakers, academics, technologists and civil society representatives to encourage cooperation on the societal impact of artificial intelligence.

As the first global AI summit held in the Global South, the gathering aligned with India’s national commitment to welfare for all and the wider aspiration to advance AI for humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit’s human creators remain popular amid surge of AI content

According to reporting by the BBC, Reddit is seeing renewed growth as users seek human interaction in an online environment increasingly filled with AI-generated content.

Reddit reported 116 million daily active users globally, a 19% year-on-year increase, in its most recent quarterly results.

The platform, historically associated with tech-oriented male users, has become more demographically balanced. Women now account for more than 50% of users in both the US and UK, and the platform is reportedly the fastest-growing social network among UK women.

Reddit operates through user-created communities known as subreddits, where posts are ranked by upvotes rather than chronological order. Volunteer moderators manage individual communities, while company administrators can intervene when necessary.
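
The upvote-based ranking mentioned above can be illustrated with the ‘hot’ formula from Reddit’s old open-source codebase; the production ranking today is proprietary, so treat this as a sketch of the general idea rather than the current algorithm. Vote score contributes logarithmically, while newer posts get a steadily growing time bonus.

```python
import math
from datetime import datetime, timezone

# Epoch used in published versions of Reddit's old 'hot' formula.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot(ups: int, downs: int, posted: datetime) -> float:
    """Rank a post: log-scaled vote score plus a recency bonus."""
    score = ups - downs
    order = math.log10(max(abs(score), 1))          # diminishing returns on votes
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (posted - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)  # ~12.5 hours per tenfold votes
```

The log term is why a post with 1,000 upvotes does not bury one with 100: each extra ‘decade’ of votes is worth only about 12.5 hours of recency, which keeps front pages fresh.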

Chief Operating Officer Jen Wong said Reddit has preserved ‘human authenticity’ amid AI-driven content that has crowded the internet. Popular discussion areas include parenting, skincare, reality television, and deeply personal experiences such as pregnancy or hair loss, topics where peer perspectives and lived experience are valued.

However, experts caution that Reddit faces governance challenges. Dr Yusuf Oc of Bayes Business School notes that upvote systems can reward consensus rather than factual accuracy, potentially reinforcing echo chambers, groupthink, and coordinated manipulation tactics such as brigading and astroturfing. Moderation quality may vary across communities due to reliance on volunteers.

Reddit has also signed data licensing agreements with AI companies, including OpenAI, allowing tools such as ChatGPT to access Reddit content. A study commissioned by Reddit found it to be the most cited source across AI search tools, including Google AI Overviews and Perplexity.

Analysts suggest these agreements increase visibility but are not necessarily the primary driver of user growth. The article situates Reddit’s rise within a broader shift toward platforms perceived as offering candid, less polished discussion in contrast to influencer-driven or AI-generated content ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!