Deepfakes in campaign ads expose limits of Texas election law

AI-generated political advertisements are becoming increasingly visible in Texas election campaigns, highlighting gaps in existing laws designed to regulate deepfakes in political messaging.

Texas was the first state in the United States to adopt legislation restricting the use of deepfakes in campaign advertisements. However, the law applies only to state-level races. It does not cover federal contests, including the US Senate race that has dominated advertising spending in Texas and featured several AI-generated campaign ads.

Some lawmakers and experts warn that the growing use of AI-generated political content could complicate election campaigns. During recent primary contests, campaign advertisements featuring manipulated or synthetic images of political figures circulated widely across media platforms.

State Senator Nathan Johnson, who has proposed legislation to strengthen the state’s rules regarding deepfakes, said the rapid evolution of AI technology makes the issue increasingly urgent. Johnson argues that voters should be able to make decisions based on accurate information rather than manipulated media.

The current Texas law, adopted in 2019, contains several limitations. It applies only to video content, requires proof of intent to deceive or harm a candidate, and covers only material distributed within 30 days of an election. Critics say these restrictions make the law difficult to enforce and limit its practical impact.

Lawmakers from both parties attempted to address some of these issues during the most recent legislative session. Proposed reforms included removing the 30-day restriction, requiring clear disclosure when AI is used in political advertising, and allowing candidates to pursue legal action to block misleading ads. Although both chambers of the Texas legislature passed versions of the legislation, the proposals ultimately failed to become law.

Supporters of stricter regulation argue that the rapid advancement of generative AI tools is making it harder to distinguish synthetic media from authentic content. Some political leaders warn that increasingly realistic deepfakes could eventually influence election outcomes.

Others, however, caution that regulating political content raises constitutional concerns. Some lawmakers argue that many AI-generated political ads resemble satire or parody, forms of political speech protected by the First Amendment.

At the federal level, regulation of congressional campaign advertising falls under the Federal Election Commission’s authority. In 2024, the agency declined to begin a formal rulemaking process on AI-generated political ads, leaving states and policymakers to continue debating how to address the emerging issue.

Experts warn that as AI tools continue to improve, distinguishing authentic political messaging from deepfakes and other forms of synthetic content will likely become more complex.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU updates voluntary code for labelling AI-generated content

The European Commission has released a second draft of its voluntary Code of Practice on marking and labelling AI-generated content, designed to support compliance with transparency rules under the Artificial Intelligence Act.

Published on 5 March, the updated draft reflects feedback from hundreds of stakeholders, including industry groups, academic researchers, policymakers, and civil society organisations.

Revisions follow consultations held in early 2026 as part of the broader rollout of the EU’s AI regulatory framework.

The proposed code outlines technical approaches for identifying AI-generated material. A two-layered system using secure metadata and digital watermarking is recommended, with optional fingerprinting, logging, and verification to improve detection.

Guidelines also address how platforms and publishers should label deepfakes and AI-generated text related to matters of public interest. Public feedback is open until 30 March, with the final code expected in early June before transparency rules take effect on 2 August 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Writers publish protest book to challenge AI use of copyrighted works

Thousands of writers have joined a symbolic protest against AI companies by publishing a book that contains no traditional content.

The work, titled “Don’t Steal This Book,” lists only the names of roughly 10,000 contributors who oppose the use of their writing to train AI systems without their permission.

The initiative was organised by composer and campaigner Ed Newton-Rex, and the book was distributed during the London Book Fair. Contributors include prominent authors such as Kazuo Ishiguro, Philippa Gregory and Richard Osman, along with thousands of other writers and creative professionals.

Campaigners argue that generative AI systems are trained on vast collections of copyrighted material gathered from the internet without authorisation or compensation.

According to organisers, such practices allow AI tools to compete with the creators whose works were used to develop them.

The protest arrives as the UK Government prepares an economic assessment of potential copyright reforms related to AI. Proposals under discussion include allowing AI developers to use copyrighted material unless rights holders explicitly opt out.

Many writers and artists oppose that approach and demand stronger copyright protections. In parallel, the publishing sector is preparing a licensing initiative through Publishers’ Licensing Services to provide AI developers with legal access to books while ensuring authors receive compensation.

The dispute reflects a growing global debate over how copyright law should apply to generative AI systems that rely on massive datasets to develop chatbots and other digital tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces challenges in curbing digital abuse against women

Researchers and policymakers are raising concerns about how new technologies may put women at risk online, despite existing EU rules designed to ensure safer digital spaces.

AI-powered tools and smart devices have been linked to incidents of harassment and the creation of non-consensual sexualised imagery, highlighting gaps in enforcement and compliance.

The European Commission’s Gender Equality Strategy 2026–2030 noted that women are disproportionately targeted by online gender-based violence, including harassment, doxing, and AI-generated deepfakes.

Investigations into tools such as Elon Musk’s Grok AI and Meta’s Ray-Ban smart glasses have drawn attention to how digital platforms and wearable technologies can be misused, even where legal frameworks like the Digital Services Act (DSA) are in place.

Experts emphasise that while the EU’s rules offer a foundation to regulate online content, significant challenges remain. Advocates and lawmakers say enforcement gaps let harmful AI functions like nudification persist.

Commissioners have stressed ongoing cooperation with tech companies, along with upcoming guidelines that would prioritise content flagged by independent organisations, as ways to address gender-based cyber violence.

Authorities are also monitoring new technologies closely. In the case of wearable devices, regulators are considering how users and bystanders are informed about recording features.

Ongoing discussions aim to strengthen compliance under existing legislation and ensure that digital spaces become safer and more accountable for all users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Council of Europe issues new guidance on AI and gender equality

Ahead of International Women’s Day on 8 March, the Council of Europe adopted two new recommendations addressing gender equality and the prevention of violence against women in the context of emerging technologies.

One recommendation targets the design and use of AI to prevent discrimination, while the other focuses on accountability for technology-facilitated violence against women and girls.

The AI recommendation advises member states on preventing discrimination throughout the lifecycle of AI systems, from development to deployment and retirement. It highlights risks like gender bias while promoting transparency, explainability, and safeguards.

Special attention is given to discrimination based on gender, race, and sexual orientation, gender identity and expression, and sex characteristics (SOGIESC).

The second recommendation sets the first international standard for addressing technology-facilitated violence against women. It outlines strategies to overcome impunity, including clearer legal frameworks, accessible reporting systems, and victim-centred approaches.

Emphasis is placed on multistakeholder engagement, trauma-informed policies, and safety-by-design in technology products to prevent digital harm.

Both recommendations reinforce the importance of combining regulation, institutional support, and public awareness to ensure technology advances equality rather than perpetuating harm.

The formal launch is scheduled for 10 June 2026 at the Palais de l’Europe in Strasbourg during an event titled ‘From standards to action: making accountability for technology-facilitated violence against women and girls a reality.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI faces legal action in South Korea from top networks

South Korea’s leading terrestrial broadcasters have filed a lawsuit against OpenAI, claiming that the company trained its ChatGPT model using their news content without permission. KBS, MBC, and SBS are seeking an injunction to halt the alleged infringement and to recover damages.

The Korea Broadcasters Association said OpenAI generates significant revenue from its GPT services and has licensing agreements with media organisations worldwide.

Despite this, the company has refused to negotiate with the South Korean networks, leaving them without recourse to ensure proper use of their content.

The lawsuit emphasises the protection of intellectual property and creators’ rights, arguing that domestic copyright holders face high legal costs and barriers when confronting global technology companies. It also raises broader questions about South Korea’s data sovereignty in the age of AI.

An earlier action against Naver set a precedent for copyright enforcement in AI applications.

Although KBS subsequently partnered with Naver for AI-driven media solutions, the current case underscores continuing disputes over lawful access to broadcast content for generative AI training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US freedom.gov and the EU’s DSA in a transatlantic fight over online speech

The transatlantic debate over ‘digital sovereignty’ is also, in no small measure, about whose rules govern online speech. In the EU, digital sovereignty has essentially meant building enforceable guardrails for platforms, especially around illegal content, systemic risks, and transparency, through instruments such as the Digital Services Act (DSA) and its transparency mechanisms for content moderation decisions. In Washington, the emphasis has been shifting toward ‘free speech diplomacy’, framing some EU online-safety measures as de facto censorship that spills across borders when US-based platforms comply with EU requirements.

What is ‘freedom.gov’?

The newest flashpoint is a reported US State Department plan to develop an online portal, widely described as ‘freedom.gov’, intended to help users in the EU and elsewhere access content blocked under local rules; the plan aligns with Trump administration policy and a State Department programme called Internet Freedom. It reportedly includes VPN-like functionality so traffic would appear to originate in the US, effectively sidestepping geographic enforcement of content restrictions. Framed in US legal terms, the idea could be presented as a digital-rights tool, but experts warn it would export a US free-speech standard into jurisdictions that regulate hate speech and extremist material more tightly.

The ‘freedom.gov’ portal story sits within a broader escalation that has already moved from rhetoric to sanctions. In late 2025, the US imposed visa bans on several EU figures it accused of pressuring platforms to suppress ‘American viewpoints’, a move EU governments and officials condemned as unjustified and politically coercive. The episode made clear that Washington is treating some foreign content-governance actions not as domestic regulation, but as a challenge to US speech norms and US technology firms.

The EU legal perspective

From the EU perspective, this framing misses the point of the DSA. The Commission argues that the DSA is about platform accountability, requiring large platforms to assess and mitigate systemic risks, explain moderation decisions, and provide users with avenues to appeal. The EU has also built new transparency infrastructure, such as the DSA Transparency Database, to make moderation decisions more visible and auditable. Civil-society groups broadly supportive of the DSA stress that it targets illegal content and opaque algorithmic amplification; critics, especially in US policy circles, argue that compliance burdens fall disproportionately on major US platforms and can chill lawful speech through risk-averse moderation.

That’s where the two sides’ risk models diverge most sharply. The EU rules are shaped by the view that disinformation, hate speech, and extremist propaganda can create systemic harms that platforms must proactively reduce. On the other side, US critics counter that ‘harm’ categories can expand into viewpoint policing, and that tools like a government-backed portal or VPN could be portrayed as restoring access to lawful expression. Yet the same reporting that casts the portal as a speech workaround also notes it may facilitate access to content the EU considers dangerous, raising questions about whether the initiative is rights-protective ‘diplomacy’, a geopolitical pressure tactic, or something closer to state-enabled circumvention.

Why does it matter?

The dispute has gone from theoretical to practical, reshaping digital alliances, compliance strategies, and even travel rights for policy actors, not to mention digital sovereignty in the governance of online discourse and data. The EU’s approach is to make platforms responsible for systemic online risks through enforceable transparency and risk-reduction duties, while the US approach is increasingly to contest those duties as censorship with extraterritorial effects, using instruments ranging from public messaging to visa restrictions, and, potentially, state-backed bypass tools.

What should we expect, then, if not a more fragmented internet, with platforms pulled between competing legal expectations, users encountering different speech environments by region, and governments treating content policy as an extension of foreign policy, complete with retaliation, countermeasures, and escalating mistrust?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT study finds AI chatbots underperform for vulnerable users

Research from the MIT Center for Constructive Communication (CCC) finds that leading AI chatbots often provide lower-quality responses to users with lower English proficiency or less education, and to users outside the US.

The models tested include GPT-4, Claude 3 Opus, and Llama 3, which sometimes refused to answer or responded condescendingly. Using the TruthfulQA and SciQ datasets, researchers added user biographies to prompts to simulate differences in education, language, and country.
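The study’s exact prompts and scoring rules are not reproduced here; a minimal sketch of this kind of persona-conditioned evaluation, with query_model as a placeholder for a real chat-completion call and the persona texts and refusal heuristic as assumptions, could look like this:

```python
# Hedged reconstruction of the study's setup: personas, questions and the
# refusal heuristic are assumptions; query_model stands in for a real API.
from collections import Counter

PERSONAS = {
    "control": "",
    "non_native_less_educated": (
        "The user is a non-native English speaker who left school at 14. "
    ),
}

QUESTIONS = [  # stand-ins for TruthfulQA / SciQ items
    ("What is the boiling point of water at sea level in Celsius?", "100"),
]

def query_model(prompt: str) -> str:
    """Placeholder for a real chat-completion call."""
    return "100"

def evaluate(persona_key: str) -> Counter:
    """Prepend the biography to each question; tally refusals and accuracy."""
    stats = Counter()
    bio = PERSONAS[persona_key]
    for question, gold in QUESTIONS:
        answer = query_model(bio + question)
        if "i cannot" in answer.lower() or "i won't" in answer.lower():
            stats["refusals"] += 1        # the study logged refusals separately
        elif gold in answer:
            stats["correct"] += 1
        stats["total"] += 1
    return stats

for key in PERSONAS:
    s = evaluate(key)
    print(key, "refusal rate:", s["refusals"] / s["total"],
          "accuracy:", s["correct"] / s["total"])
```

Comparing the per-persona refusal rates is how figures like the 11% versus 3.6% gap reported below would be derived.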

Accuracy fell sharply among non-native English speakers and less-educated users, with the most significant drop among those affected by both; users from countries like Iran also received lower-quality responses.

Refusal behaviour was notable. Claude 3 Opus declined 11% of questions for less-educated, non-native English speakers versus 3.6% for control users. Manual review showed 43.7% of refusals contained condescending language.

The models sometimes withheld answers on specific topics from these users, even though they answered the same questions correctly for others.

The study echoes human sociocognitive biases, in which non-native speakers are often perceived as less competent. Researchers warn AI personalisation could worsen inequities, providing marginalised users with subpar or misleading information when they need it most.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Summit in India hears call for safe AI

The UN Secretary General has warned that AI must augment human potential rather than replace it, speaking at the India AI Impact Summit at Bharat Mandapam in New Delhi. Addressing assembled leaders, he urged investment in workers so that technology strengthens, rather than displaces, human capacity.

He also cautioned that AI could deepen inequality, amplify bias and fuel harm if left unchecked, calling for stronger safeguards to protect people from exploitation and insisting that no child should be exposed to unregulated AI systems.

Environmental concerns also featured prominently, with Guterres highlighting rising energy and water demands from data centres. He urged a shift to clean power and warned against transferring environmental costs to vulnerable communities.

The UN chief proposed a $3 billion Global Fund on AI to build skills, data access and affordable computing worldwide. He argued that broader access is essential to prevent countries from being excluded from the AI age and to ensure AI supports sustainable development goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft outlines challenges in verifying AI-generated media

In an era of deepfakes and AI-manipulated content, determining what is real online has become increasingly complex. Microsoft’s report Media Integrity and Authentication reviews current verification methods, their limits, and ways to boost trust in digital media.

The study emphasises that no single solution can prevent digital deception. Techniques such as provenance tracking, watermarking, and digital fingerprinting can provide useful context about a media file’s origin, creation tools, and whether it has been altered.
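The report describes these techniques at the level of principle rather than code. As a toy illustration of the core idea behind provenance tracking, binding a signed manifest to a hash of the content, consider the following sketch. It uses a shared HMAC key for brevity, whereas real systems such as C2PA rely on public-key certificates, and the manifest fields are assumptions:

```python
# Minimal sketch of hash-bound provenance, not the C2PA format itself.
# A real deployment would sign with a certificate, not a shared HMAC key.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(media: bytes, tool: str) -> dict:
    """Bind the creating tool and a content hash into a signed manifest."""
    digest = hashlib.sha256(media).hexdigest()
    payload = json.dumps({"tool": tool, "sha256": digest}, sort_keys=True)
    tag = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify(media: bytes, manifest: dict) -> bool:
    """Check the manifest signature, then check the hash still matches."""
    expected = hmac.new(SIGNING_KEY, manifest["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False                      # manifest forged or tampered with
    claimed = json.loads(manifest["payload"])["sha256"]
    return claimed == hashlib.sha256(media).hexdigest()

original = b"frame data"
m = make_manifest(original, tool="ExampleCam 1.0")
print(verify(original, m))                # True
print(verify(original + b" edit", m))     # False: even a subtle edit breaks the binding
```

The last line shows why such schemes flag any alteration: the verdict is binary, which is also why the report stresses that verification results need careful presentation to avoid misleading the public.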

Microsoft has pioneered these technologies, cofounding the Coalition for Content Provenance and Authenticity (C2PA) to standardise media authentication globally.

The report also addresses the risks of sociotechnical attacks, where even subtle edits can manipulate authentication results to mislead the public.

Researchers explored how provenance information can remain durable and reliable across different environments, from high-security systems to offline devices, highlighting the challenge of maintaining consistent verification.

As AI-generated or edited content becomes commonplace, secure media provenance is increasingly important for news outlets, public figures, governments, and businesses.

Reliable provenance helps audiences spot manipulated content, with ongoing research guiding clearer, practical verification displays for the public.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot