US freedom.gov and the EU’s DSA in a transatlantic fight over online speech

The transatlantic debate over ‘digital sovereignty’ is also, in no small measure, about whose rules govern online speech. In the EU, digital sovereignty has essentially meant building enforceable guardrails for platforms, especially around illegal content, systemic risks, and transparency, through instruments such as the Digital Services Act (DSA) and its transparency mechanisms for content moderation decisions. In Washington, the emphasis has been shifting toward ‘free speech diplomacy’, framing some EU online-safety measures as de facto censorship that spills across borders when US-based platforms comply with EU requirements.

What is ‘freedom.gov’?

The newest flashpoint is a reported US State Department plan to develop an online portal, widely described as ‘freedom.gov’, intended to help users in the EU and elsewhere access content blocked under local rules. The plan aligns with Trump administration policy and a State Department programme called Internet Freedom, and reportedly includes VPN-like functionality so traffic would appear to originate in the US, effectively sidestepping geographic enforcement of content restrictions. Within the legal framing advanced in the US House of Representatives, the idea could be seen as a digital-rights tool, but experts warn it would export a US free-speech standard into jurisdictions that regulate hate speech and extremist material more tightly.

The ‘freedom.gov’ portal story comes amid a broader escalation that has already moved from rhetoric to sanctions. In late 2025, the US imposed visa bans on several EU figures it accused of pressuring platforms to suppress ‘American viewpoints,’ a move EU governments and officials condemned as unjustified and politically coercive. The episode made clear that Washington is treating some foreign content-governance actions not as domestic regulation, but as a challenge to US speech norms and US technology firms.

The EU legal perspective

From the EU perspective, this framing misses the point of the DSA. The Commission argues that the DSA is about platform accountability, requiring large platforms to assess and mitigate systemic risks, explain moderation decisions, and provide users with avenues to appeal. The EU has also built new transparency infrastructure, such as the DSA Transparency Database, to make moderation decisions more visible and auditable. Civil-society groups broadly supportive of the DSA stress that it targets illegal content and opaque algorithmic amplification; critics, especially in US policy circles, argue that compliance burdens fall disproportionately on major US platforms and can chill lawful speech through risk-averse moderation.

That’s where the two sides’ risk models diverge most sharply. The EU rules are shaped by the view that disinformation, hate speech, and extremist propaganda can create systemic harms that platforms must proactively reduce. US critics counter that ‘harm’ categories can expand into viewpoint policing, and that tools like a government-backed portal or VPN could be portrayed as restoring access to lawful expression. Yet the same reporting that casts the portal as a speech workaround also notes it may facilitate access to content the EU considers dangerous, raising questions about whether the initiative is rights-protective ‘diplomacy,’ a geopolitical pressure tactic, or something closer to state-enabled circumvention.

Why does it matter?

The dispute has gone from theoretical to practical, reshaping digital alliances, compliance strategies, and even travel rights for policy actors, and, more broadly, digital sovereignty in the governance of online discourse and data. The EU’s approach is to make platforms responsible for systemic online risks through enforceable transparency and risk-reduction duties, while the US approach is increasingly to contest those duties as censorship with extraterritorial effects, using instruments ranging from public messaging to visa restrictions and, potentially, state-backed bypass tools.

What, then, can we expect if not a more fragmented internet, with platforms pulled between competing legal expectations, users encountering different speech environments by region, and governments treating content policy as an extension of foreign policy, complete with retaliation, countermeasures, and escalating mistrust?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK sets 48-hour deadline for removing intimate images

The UK government plans to require technology platforms to remove intimate images shared without consent within forty-eight hours instead of allowing such content to remain online for days.

Through an amendment to the Crime and Policing Bill, firms that fail to comply could face fines amounting to ten percent of their global revenue or risk having their services blocked in the UK.

The move reflects ministers’ commitment to treating intimate image abuse with the same seriousness as child sexual abuse material and extremist content.

The action follows mounting concern after non-consensual sexual deepfakes produced by Grok circulated widely, prompting investigations by Ofcom and political pressure on platforms owned by Elon Musk.

The government now intends victims to report an image once instead of repeating the process across multiple services. Once flagged, the content should disappear across all platforms and be blocked automatically on future uploads through hash-matching or similar detection tools.
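The automatic blocking described above typically relies on hash-matching: each reported image is reduced to a fingerprint that platforms check incoming uploads against. A minimal sketch, assuming a shared blocklist of cryptographic hashes (the function names and blocklist here are illustrative; production systems generally use perceptual hashes such as PDQ or PhotoDNA so that re-encoded or cropped copies still match):

```python
import hashlib

# Hypothetical shared blocklist of fingerprints of reported images.
# In practice this would be an industry-wide service, not a local set.
BLOCKLIST: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a stable fingerprint for the image content.

    SHA-256 only matches byte-identical copies; real deployments use
    perceptual hashing so altered copies still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes: bytes) -> None:
    """Victim reports once; every platform using the list then blocks it."""
    BLOCKLIST.add(fingerprint(image_bytes))

def is_blocked(upload_bytes: bytes) -> bool:
    """Check an incoming upload against the shared blocklist."""
    return fingerprint(upload_bytes) in BLOCKLIST
```

Under this model, a single report propagates to every participating service, and re-uploads of the same file are rejected at upload time without further action from the victim.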

Ministers also aim to address content hosted outside the reach of the Online Safety Act by issuing guidance requiring internet providers to block access to sites that refuse to comply.

Keir Starmer, Liz Kendall and Alex Davies-Jones emphasised that no woman should be forced to pursue platform after platform to secure removal and that the online environment must offer safety and respect.

The package of reforms forms part of a broader pledge to halve violence against women and girls during the next decade.

Alongside tackling intimate image abuse, the government is legislating against nudification tools and ensuring AI chatbots fall within regulatory scope, using this agenda to reshape online safety instead of relying on voluntary compliance from large technology firms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Summit in India hears call for safe AI

The UN Secretary General has warned that AI must augment human potential rather than replace it. Addressing leaders at Bharat Mandapam during the India AI Impact Summit in New Delhi, he urged investment in workers so that technology strengthens, rather than displaces, human capacity.

He cautioned that AI could deepen inequality, amplify bias and fuel harm if left unchecked, called for stronger safeguards to protect people from exploitation, and insisted that no child should be exposed to unregulated AI systems.

Environmental concerns also featured prominently, with Guterres highlighting rising energy and water demands from data centres. He urged a shift to clean power and warned against transferring environmental costs to vulnerable communities.

The UN chief proposed a $3 billion Global Fund on AI to build skills, data access and affordable computing worldwide, arguing that broader access is essential to prevent countries from being excluded from the AI age and to ensure AI supports sustainable development goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Macron calls Europe safe space for AI

French President Emmanuel Macron told the AI Impact Summit in New Delhi that Europe would remain a safe space for AI innovation and investment, and that the European Union would continue shaping global AI rules alongside partners such as India.

Macron pointed to the EU AI Act, adopted in 2024, as evidence that Europe can regulate emerging technologies while encouraging growth. He claimed that oversight would not stifle innovation but ensure responsible development, though he offered little evidence to back this up.

The French leader said that France is doubling the number of AI scientists and engineers it trains, with startups creating tens of thousands of jobs, and added that Europe aims to combine competitiveness with strong guardrails.

Macron also highlighted child protection as a G7 priority, arguing that children must be shielded from AI-driven digital abuse. Europe, he said, intends to protect society while remaining open to investment and cooperation with India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Google plans $15bn AI push in India

Google CEO Sundar Pichai said at the India AI Impact Summit 2026 in New Delhi that he never imagined Visakhapatnam would become a global AI hub. He recalled passing through the coastal city as a student and described its transformation as remarkable.

Pichai announced that Google will establish a full-stack AI hub in Visakhapatnam as part of a $15 billion investment in India. The facility is expected to include gigawatt-scale compute capacity and a new international subsea cable gateway.

The project in Visakhapatnam is set to generate jobs and deliver advanced AI services to businesses and communities across India. Authorities in Andhra Pradesh have allotted more than 600 acres of land near the port city for the proposed hyperscale AI data centre.

Andhra Pradesh IT and HRD Minister Nara Lokesh welcomed the announcement and thanked Pichai for expressing confidence in the city. The development positions Visakhapatnam as a major AI infrastructure hub within India’s expanding technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Anthropic seeks deeper AI cooperation with India

The chief executive of Anthropic, Dario Amodei, has said India can play a central role in guiding global responses to the security and economic risks linked to AI.

Speaking at the India AI Impact Summit in New Delhi, he argued that the world’s largest democracy is well placed to become a partner and leader in shaping the responsible development of advanced systems.

Amodei explained that Anthropic hopes to work with India on the testing and evaluation of models for safety and security. He stressed growing concern over autonomous behaviours that may emerge in advanced systems and noted the possibility of misuse by individuals or governments.

He pointed to the work of international and national AI safety institutes as a foundation for joint efforts. The economic effect of AI, he added, will be significant, and India and the wider Global South could benefit if policymakers prepare early.

Through its Economic Futures programme and Economic Index, Anthropic studies how AI reshapes jobs and labour markets.

He said the company intends to expand information sharing with Indian authorities and bring economists, labour groups, and officials into regular discussions to guide evidence-based policy instead of relying on assumptions.

Amodei said AI is set to increase economic output and that India is positioned to influence emerging global frameworks. He signalled a strong interest in long-term cooperation that supports safety, security, and sustainable growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU turns to AI tools to strengthen defences against disinformation

Institutions, researchers, and media organisations in the EU are intensifying efforts to use AI to counter disinformation, even as concerns grow about the wider impact on media freedom and public trust.

Confidence in journalism has fallen sharply across the EU, a trend made more severe by the rapid deployment of AI systems that reshape how information circulates online.

Brussels is attempting to respond with a mix of regulation and strategic investment. The EU’s AI Act is entering its implementation phase, supported by the AI Continent Action Plan and the Apply AI Strategy, both introduced in 2025 to improve competitiveness while protecting rights.

Yet manipulation campaigns continue to spread false narratives across platforms in multiple languages, placing pressure on journalists, fact-checkers and regulators to act with greater speed and precision.

Within such an environment, AI4TRUST has emerged as a prominent Horizon Europe initiative. The consortium is developing an integrated platform that detects disinformation signals, verifies content, and maps information flows for professionals who need real-time insight.

Partners stress the need for tools that strengthen human judgment instead of replacing it, particularly as synthetic media accelerates and shared realities become more fragile.

Experts speaking in Brussels warned that traditional fact-checking cannot absorb the scale of modern manipulation. They highlighted the geopolitical risks created by automated messaging and deepfakes, and argued for transparent, accountable systems tailored to user needs.

European officials emphasised that multiple tools will be required, supported by collaboration across institutions and sustained regulatory frameworks that defend democratic resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital procurement strengthens compliance and prepares governments for AI oversight

AI is reshaping the expectations placed on organisations, yet many local governments in the US continue to rely on procurement systems designed for a paper-first era.

Sealed envelopes, manual logging and physical storage remain standard practice, even though these steps slow essential services and increase operational pressure on staff and vendors.

The persistence of paper is linked to long-standing compliance requirements, which are vital for public accountability. Over time, however, processes intended to safeguard fairness have created significant inefficiencies.

Smaller businesses frequently struggle with printing, delivery, and rigid submission windows, and the administrative burden on procurement teams expands as records accumulate.

The author’s experience leading a modernisation effort in Somerville, Massachusetts, showed how deeply embedded such practices had become.

Gradual adoption of digital submission reduced logistical barriers while strengthening compliance. Electronic bids could be time-stamped, access monitored, and records centrally managed, allowing staff to focus on evaluation rather than handling binders and storage boxes.

Vendor participation increased once geographical and physical constraints were removed. The shift also improved resilience, as municipalities that had already embraced digital procurement were better equipped to maintain continuity during pandemic disruptions.

Electronic records now provide a basis for responsible use of AI. Digital documents can be analysed for anomalies, metadata inconsistencies, or signs of manipulation that are difficult to detect in paper files.
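One concrete form of the anomaly detection described above is checking system-captured timestamps for inconsistencies, such as a bid file whose metadata shows it was modified after the portal recorded its submission, or a submission accepted past the deadline. A minimal sketch, with hypothetical record fields that stand in for whatever a real procurement portal captures:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BidRecord:
    """Hypothetical electronic bid record with system-captured metadata."""
    vendor: str
    submitted_at: datetime   # server-side timestamp logged by the portal
    last_modified: datetime  # timestamp taken from the uploaded file's metadata

def flag_anomalies(bids: list[BidRecord], deadline: datetime) -> list[str]:
    """Return vendors whose records show timing inconsistencies for review."""
    flagged = []
    for bid in bids:
        # A file modified after the portal recorded the submission, or any
        # submission accepted past the deadline, warrants human review.
        if bid.last_modified > bid.submitted_at or bid.submitted_at > deadline:
            flagged.append(bid.vendor)
    return flagged
```

Such checks only surface candidates for review; a procurement officer still makes the judgment call, consistent with the point that these tools support oversight rather than replace it.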

Rather than replacing human judgment, such tools support stronger oversight and more transparent public administration. Modernising procurement aligns government operations with present-day realities and prepares them for future accountability and technological change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India unveils MANAV Vision as new global pathway for ethical AI

Narendra Modi presented the new MANAV Vision during the India AI Impact Summit 2026 in New Delhi, setting out a human-centred direction for AI.

He described the framework as rooted in moral guidance, transparent oversight, national control of data, inclusive access and lawful verification. He argued that the approach is intended to guide global AI governance for the benefit of humanity.

The Prime Minister of India warned that rapid technological change requires stronger safeguards and drew attention to the need to protect children. He also said societies are entering a period where people and intelligent systems co-create and evolve together instead of functioning in separate spheres.

He pointed to India’s confidence in its talent and policy clarity as evidence of a growing AI future.

Modi announced that three domestic companies introduced new AI models and applications during the summit, saying the launches reflect the energy and capability of India’s young innovators.

He invited technology leaders from around the world to collaborate by designing and developing in India instead of limiting innovation to established hubs elsewhere.

The summit brought together policymakers, academics, technologists and civil society representatives to encourage cooperation on the societal impact of artificial intelligence.

As the first global AI summit held in the Global South, the gathering aligned with India’s national commitment to welfare for all and the wider aspiration to advance AI for humanity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Proposed GDPR changes target AI development

The European Commission has proposed changes to the GDPR and the EU AI Act as part of its Digital Omnibus Package, seeking to clarify how personal data may be processed for AI development and operation across the EU.

A new provision would recognise AI development and operation as a potential legitimate interest under the GDPR, subject to necessity and a balancing test. Controllers in the EU would still need to demonstrate safeguards, including data minimisation, transparency and an unconditional right to object.

The package also introduces a proposed legal ground for processing sensitive data in AI systems where removal is not feasible without disproportionate effort. The Commission maintains that strict conditions would apply, requiring technical protections and documentation throughout the lifecycle of AI models in the EU.

Further amendments would permit biometric data processing for identity verification under defined conditions and expand the rules allowing sensitive data to be used for bias detection beyond high-risk AI systems.

Overall, the proposals aim to provide greater legal certainty without overturning existing data protection principles. EU lawmakers and supervisory authorities continue to debate the proposals before any final adoption.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot