Trump threatens sanctions on EU over Digital Services Act

Only five days after the Joint Statement on a United States-European Union Framework on an Agreement on Reciprocal, Fair and Balanced Trade (‘Framework Agreement’), the Trump administration is weighing an unprecedented step against the EU over its new tech rules.

According to The Japan Times and Reuters, US officials are discussing sanctions on the EU or member state representatives responsible for implementing the Digital Services Act (DSA), a sweeping law that forces online platforms to police illegal content. Washington argues the regulation censors Americans and unfairly burdens US companies.

While governments often complain about foreign rules they deem restrictive, directly sanctioning allied officials would mark a sharp escalation. So far, discussions have centred on possible visa bans, though no decision has been made.

Last week, internal State Department meetings focused on whom such measures might target. Secretary of State Marco Rubio has ordered US diplomats in Europe to lobby against the DSA, urging allies to amend or repeal the law.

Washington insists that the EU is curbing freedom of speech under the banner of combating hate speech and misinformation, while the EU maintains that the act is designed to protect citizens from illegal material such as child exploitation and extremist propaganda.

‘Freedom of expression is a fundamental right in the EU. It lies at the heart of the DSA,’ an EU Commission spokesperson said, rejecting US accusations as ‘completely unfounded.’

Trump has framed the dispute in broader terms, threatening tariffs and export restrictions on any country that imposes digital regulations he deems discriminatory. In recent months, he has repeatedly warned that measures like the DSA, or national digital taxes, are veiled attacks on US companies and conservative voices online. At the same time, the administration has not hesitated to sanction foreign officials in other contexts, including a Brazilian judge overseeing cases against Trump ally Jair Bolsonaro.

US leaders, including Vice President JD Vance, have accused European authorities of suppressing right-wing parties and restricting debate on issues such as immigration. In contrast, European officials argue that their rules are about fairness and safety and do not silence political viewpoints. At a transatlantic conference earlier this year, Vance stunned European counterparts by charging that the EU was undermining democracy, remarks that underscored the widening gap.

The question remains whether Washington will take the extraordinary step of sanctioning officials in Brussels or the EU capitals. Such action could further destabilise an already fragile trade relationship while putting the US squarely at odds with Europe over the future of digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers ranging from socioeconomic disadvantage to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate ways to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet its rapid rise also amounts to a wholesale digitalisation of legal systems across the globe, one that carries risks as well as benefits.

While it may serve as a tool to break down access barriers, AI legal tools could also automate bias in judicial systems, enable unaccountable decision-making, and accelerate a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to overcome legal communication barriers: the system transcribes, translates, and organises refugee queries, helping staff streamline caseload management, which is key to keeping operations smooth even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While these tools are designed to assist rather than replace human legal experts, they are already showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. These tools enhance transparency for victims, especially those seeking emotional closure.

Even though these programmes are still pilots, a UNESCO survey found that 44% of judicial workers across 96 countries already use AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.

AI tools help judges prepare judgments and streamline the drafting of legal documents, allowing faster work in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also offer legal experts predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses serious risks to human rights. The most prominent concerns surround bias and discrimination, as well as a widening digital divide.

Deploying AI without transparency can allow algorithmic systems to perpetuate systemic inequalities, such as racial or ethnic biases. Meanwhile, black-box decision-making, in which AI tools produce unexplainable outputs, can make legal decisions difficult to challenge, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must support human judgment rather than replace it outright. Whether AI is biased by its training data or simply becomes a black box over time, its use demands foresighted governance and meaningful human oversight.

Image via Pixabay / jessica45

Additionally, AI will greatly affect economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively: in many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully, and can lead to misuse of the technology, such as unverified translations that result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or when the technology reflects systemic bias. The judiciary in Texas, US, warned of this in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in it in 2024.

The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or digital literacy may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about the accessibility of justice in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses risks of misuse and even surveillance.

The policy path forward

For AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assistant to human judges, lawyers, and other legal actors; it cannot replace them. To assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

The focus of legal AI education must be improving AI literacy, teaching bias awareness, and informing users of their digital rights. Legal actors must keep pace with the innovation and integration of AI. They belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors also have a part to play in this discussion. A multistakeholder approach centred on existing human rights frameworks, such as the Toronto Declaration, is the path to effective and workable policy. Closing the justice gap with AI hinges on the public’s access to the technology and its understanding of how it is used in their legal systems. Solutions that demystify black-box decisions will be key to maintaining and improving public confidence in legal systems.

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It could be a powerful tool for the application of justice and markedly improve inclusion in our legal systems.

However, it also risks deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, with regulation built on education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centred support.

Image via Pixabay / souandresantana

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk calls Grok’s brief suspension a dumb error

Elon Musk’s AI chatbot Grok was briefly suspended from X, then returned without its verification badge and with a controversial video pinned to its replies. Confusing and contradictory explanations appeared in multiple languages, leaving users puzzled.

English-language posts blamed hateful conduct and comments on the Israel-Gaza conflict, while French and Portuguese messages cited crime statistics or technical bugs. Musk called the situation a ‘dumb error’ and admitted Grok itself was unsure why it had been suspended.

Grok’s suspension follows earlier controversies, including antisemitic remarks and an episode in which it introduced itself as ‘MechaHitler.’ xAI blamed outdated code and internet memes, revealing that Grok often referenced Musk’s public statements on sensitive topics.

The company has updated the chatbot’s prompts and promised ongoing monitoring, amid internal tensions and staff resignations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI news summaries to affect the future of journalism

Generative AI tools like ChatGPT significantly impact traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search results, users click suggested links half as often as in traditional search formats.

Matt Karolian of Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away.’

While some, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, these numbers pale compared to other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so that AI tools surface and cite it more readily. Some have blocked AI crawlers to prevent data harvesting, while others have reopened access to retain visibility.
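In practice, this blocking typically happens in a site’s robots.txt file, which sets crawl rules for named user agents such as OpenAI’s GPTBot, Common Crawl’s CCBot, or Google-Extended, Google’s AI-training token. As a rough illustration only (the domain below is a placeholder, not a real publisher), a short Python sketch using the standard library can check which of these crawlers a given robots.txt currently blocks:

```python
# Minimal sketch: check which AI crawlers a site's robots.txt blocks.
# The domain is a placeholder; GPTBot (OpenAI), CCBot (Common Crawl) and
# Google-Extended (Google's AI-training token) are real user-agent names
# that publishers commonly list when opting out of data harvesting.
from urllib import robotparser

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]
SITE = "https://news-publisher.example"  # placeholder domain

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for agent in AI_CRAWLERS:
    verdict = "allowed" if parser.can_fetch(agent, f"{SITE}/articles/") else "blocked"
    print(f"{agent}: {verdict}")
```

The trade-off publishers face is visible in exactly these rules: disallowing such agents keeps content out of AI harvesting, but it can also remove a site from AI-generated answers that might otherwise cite it.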

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X challenges India’s expanded social media censorship in court

Tensions have escalated between Elon Musk’s social media platform, X, and the Indian government over extensive online content censorship measures.

What began as a seemingly harmless post describing a senior politician as ‘useless’ quickly spiralled into a significant legal confrontation.

X has accused Prime Minister Narendra Modi’s administration of overstepping constitutional bounds by empowering numerous government bodies to issue content-removal orders, significantly expanding the scope of India’s digital censorship.

At the heart of the dispute lies India’s increased social media content regulation since 2023, including launching the Sahyog platform, a centralised portal facilitating direct content-removal orders from officials to tech firms.

X rejected participating in Sahyog, labelling it a ‘censorship portal,’ and subsequently filed a lawsuit in Karnataka High Court earlier this year, contesting the legality of India’s directives and website, which it claims undermine free speech.

Indian authorities justify their intensified oversight by pointing to the need to control misinformation, safeguard national security, and prevent societal discord. They argue that the measures have broad support within the tech community. Indeed, major players like Google and Meta have reportedly complied without public protest, though both companies have declined to comment on their stance.

However, the court documents reveal that the scope of India’s censorship requests extends far beyond misinformation.

Authorities have reportedly targeted satirical cartoons depicting politicians unfavourably, criticism of the government’s preparedness for natural disasters, and even media coverage of serious public incidents, such as a deadly stampede at a railway station.

While Musk and Prime Minister Modi maintain an outwardly amicable relationship, the conflict presents significant implications for X’s operations in India, one of its largest user bases.

Musk, a self-proclaimed free speech advocate, finds himself at a critical juncture, navigating between principles and the imperative to expand his business ventures within India’s substantial market.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises UK information by filtering what users see, based on algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may make news consumption more convenient, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI faces no such scrutiny, even when errors have real consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump pushes for ‘anti-woke’ AI in US government contracts

Tech firms aiming to sell AI systems to the US government will now need to prove their chatbots are free of ideological bias, following a new executive order signed by Donald Trump.

The measure, part of a broader plan to counter China’s influence in AI development, marks the first official US attempt to shape the political behaviour of AI services.

It places a new emphasis on ensuring AI reflects so-called ‘American values’ and avoids content tied to diversity, equity and inclusion (DEI) frameworks in publicly funded models.

The order, titled ‘Preventing Woke AI in the Federal Government’, does not outright ban AI that promotes DEI ideas, but requires companies to disclose if partisan perspectives are embedded.

Major providers like Google, Microsoft and Meta have yet to comment. Meanwhile, firms face pressure to comply or risk losing valuable public sector contracts and funding.

Critics argue the move forces tech companies into a political culture war and could undermine years of work addressing AI bias, harming fair and inclusive model design.

Civil rights groups warn the directive may sideline tools meant to support vulnerable groups, favouring models that ignore systemic issues like discrimination and inequality.

Policy analysts have compared the approach to China’s use of state power to shape AI behaviour, though Trump’s order stops short of requiring pre-approval or censorship.

Supporters, including influential Trump-aligned venture capitalists, say the order restores transparency. Marc Andreessen and David Sacks were reportedly involved in shaping the language.

The move follows backlash to an AI image tool released by Google, which depicted racially diverse figures when asked to generate the US Founding Fathers, triggering debate.

Developers claimed the outcome resulted from attempts to counter bias in training data, though critics labelled it ideological overreach embedded by design teams.

Under the directive, companies must disclose model guidelines and explain how neutrality is preserved during training. Intentional encoding of ideology is discouraged.

Former FTC technologist Neil Chilson described the order as light-touch: it does not ban political outputs, but only calls for transparency about how they are generated.

OpenAI said its objectivity measures align with the order, while Microsoft declined to comment. xAI praised Trump’s AI policy but did not mention specifics.

The firm, founded by Elon Musk, recently won a $200M defence contract shortly after its Grok chatbot drew criticism for generating antisemitic and pro-Hitler messages.

Trump’s broader AI orders seek to strengthen American leadership and reduce regulatory burdens to keep pace with China in the development of emerging technologies.

Some experts caution that ideological mandates could set a precedent for future governments to impose their political views on critical AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanitarian, peace, and media sectors join forces to tackle harmful information

At the WSIS+20 High-Level Event in Geneva, a powerful session brought together humanitarian, peacebuilding, and media development actors to confront the growing threat of disinformation, more broadly reframed as ‘harmful information.’ Panellists emphasised that false or misleading content, whether deliberately spread or unintentionally harmful, can have dire consequences for already vulnerable populations, fuelling violence, eroding trust, and distorting social narratives.

The session moderator, Caroline Vuillemin of Fondation Hirondelle, underscored the urgency of uniting these sectors to protect those most at risk.

Hans-Peter Wyss of the Swiss Agency for Development and Cooperation presented the ‘triple nexus’ approach, advocating for coordinated interventions across humanitarian, development, and peacebuilding efforts. He stressed the vital role of trust, institutional flexibility, and the full inclusion of independent media as strategic actors.

Philippe Stoll of the ICRC detailed an initiative that focuses on the tangible harms of information—physical, economic, psychological, and societal—rather than debating truth. That initiative, grounded in a ‘detect, assess, respond’ framework, works from local volunteer training up to global advocacy and research on emerging challenges like deepfakes.

Donatella Rostagno of Interpeace shared field experiences from the Great Lakes region, where youth-led efforts to counter misinformation have created new channels for dialogue in highly polarised societies. She highlighted the importance of inclusive platforms where communities can express their own visions of peace and hear others’.

Meanwhile, Tammam Aloudat of The New Humanitarian critiqued the often selective framing of disinformation, urging support for local journalism and transparency about political biases, including the harm caused by omission and silence.

The session concluded with calls for sustainable funding and multi-level coordination, recognising that responses must be tailored locally while engaging globally. Despite differing views, all panellists agreed on the need to shift from a narrow focus on disinformation to a broader and more nuanced understanding of information harm, grounded in cooperation, local agency, and collective responsibility.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

UNESCO pushes for digital trust at WSIS+20

At the WSIS+20 High-Level Event in Geneva, UNESCO convened a timely session exploring how to strengthen global information ecosystems through responsible platform governance and smart technology use. The discussion, titled ‘Towards a Resilient Information Ecosystem’, brought together international regulators, academics, civil society leaders, and tech industry representatives to assess digital media’s role in shaping public discourse, especially in times of crisis.

UNESCO’s Assistant Director General Tawfik Jelassi emphasised the organisation’s longstanding mission to build peace through knowledge sharing, warning that digital platforms now risk becoming breeding grounds for misinformation, hate speech, and division. To counter this, he highlighted UNESCO’s ‘Internet for Trust’ initiative, which produced governance guidelines informed by over 10,000 global contributions.

Speakers called for a shift from viewing misinformation as an isolated problem to understanding the broader digital communication ecosystem, especially during crises such as wars or natural disasters. Professor Ingrid Volkmer stressed that global monopolies like Starlink, Amazon Web Services, and OpenAI dominate critical communication infrastructure, often without sufficient oversight.

She urged a paradigm shift that treats crisis communication as an interconnected system requiring tailored regulation and risk assessments. France’s digital regulator Frédéric Bokobza outlined the European Digital Services Act’s role in enhancing transparency and accountability, noting the importance of establishing direct cooperation with platforms, particularly during elections.

The panel also spotlighted ways to empower users. Google’s Nadja Blagojevic showcased initiatives like SynthID watermarking for AI-generated content and media literacy programmes such as ‘Be Internet Awesome,’ which aim to build digital critical thinking skills across age groups.

Meanwhile, Maria Paz Canales from Global Partners Digital offered a civil society perspective, sharing how AI tools protect protestors’ identities, preserve historical memory, and amplify marginalised voices, even amid funding challenges. She also called for regulatory models distinguishing between traditional commercial media and true public interest journalism, particularly in underrepresented regions like Latin America.

The session concluded with a strong call for international collaboration among regulators and platforms, affirming that information should be treated as a public good. Participants underscored the need for inclusive, multistakeholder governance and sustainable support for independent media to protect democratic values in an increasingly digital world.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.