EU privacy watchdogs warn over US plans to expand traveller data collection

European privacy authorities have raised concerns about proposed changes to the Electronic System for Travel Authorisation that could require travellers to the US to disclose extensive personal information, including social media activity.

The European Data Protection Board, which coordinates national data protection authorities across the EU, sent a letter to the European Commission asking whether the institution plans to intervene or respond to the updated requirements.

The proposal would apply to visitors entering the US through the visa-waiver programme for short stays of up to 90 days.

Under the proposed changes, travellers may be required to provide details about their social media accounts covering the previous five years.

Authorities could also request personal data about family members, including addresses, phone numbers and dates of birth. Privacy regulators argue this information is unrelated to travel authorisation.

Watchdogs also questioned how EU citizens could exercise their data protection rights once such information is transferred to US authorities, particularly regarding storage periods and potential misuse.

Parallel negotiations between the EU and the US have also attracted attention.

Discussions around a potential Enhanced Border Security Partnerships framework could allow US authorities to seek access to biometric databases held by European countries, including facial scans and fingerprint records.

European privacy regulators warned that such measures could raise significant concerns regarding fundamental rights and personal data protection for travellers from the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cambridge researchers warn AI toys misread children’s emotions

AI toys for young children may misread emotions and respond inappropriately, according to a study by researchers at the University of Cambridge. Developmental psychologists observed interactions between children aged three to five and conversational AI-powered toys.

Findings showed the toys often struggled with pretend play and emotional cues. In several cases, children attempted to express sadness or initiate imaginative scenarios, while the AI responded with unrelated or overly scripted replies, leaving emotional signals unrecognised.

Researchers warned that such limitations could affect children’s emotional development and imaginative play. Early years practitioners also raised concerns about how toy-collected conversation data may be used and whether children could start treating the devices as trusted companions.

The study calls for stronger regulation and the introduction of safety certification for AI toys aimed at young children. Toy developer Curio stated that improving AI interactions and maintaining parental controls remain priorities as the technology continues to develop.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes in campaign ads expose limits of Texas election law

AI-generated political advertisements are becoming increasingly visible in Texas election campaigns, highlighting gaps in existing laws designed to regulate deepfakes in political messaging.

Texas was the first state in the United States to adopt legislation restricting the use of deepfakes in campaign advertisements. However, the law applies only to state-level races. It does not cover federal contests, including the US Senate race that has dominated advertising spending in Texas and featured several AI-generated campaign ads.

Some lawmakers and experts warn that the growing use of AI-generated political content could complicate election campaigns. During recent primary contests, campaign advertisements featuring manipulated or synthetic images of political figures circulated widely across media platforms.

State Senator Nathan Johnson, who has proposed legislation to strengthen the state’s rules regarding deepfakes, said the rapid evolution of AI technology makes the issue increasingly urgent. Johnson argues that voters should be able to make decisions based on accurate information rather than manipulated media.

The current Texas law, adopted in 2019, contains several limitations. It only applies to video content, requires proof of intent to deceive or harm a candidate, and covers material distributed within 30 days of an election. Critics say these restrictions make the law difficult to enforce and limit its practical impact.

Lawmakers from both parties attempted to address some of these issues during the most recent legislative session. Proposed reforms included removing the 30-day restriction, requiring clear disclosure when AI is used in political advertising, and allowing candidates to pursue legal action to block misleading ads. Although both chambers of the Texas legislature passed versions of the legislation, the proposals ultimately failed to become law.

Supporters of stricter regulation argue that the rapid advancement of generative AI tools is making it harder to distinguish synthetic media from authentic content. Some political leaders warn that increasingly realistic deepfakes could eventually influence election outcomes.

Others, however, caution that regulating political content raises constitutional concerns. Some lawmakers argue that many AI-generated political ads resemble satire or parody, forms of political speech protected by the First Amendment.

At the federal level, regulation of congressional campaign advertising falls under the Federal Election Commission’s authority. In 2024, the agency declined to begin a formal rulemaking process on AI-generated political ads, leaving states and policymakers to continue debating how to address the emerging issue.

Experts warn that as AI tools continue to improve, distinguishing authentic political messaging from deepfakes and other forms of synthetic content will likely become more complex.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Biased AI suggestions shift societal attitudes

AI-powered writing tools may do more than speed up typing: they can influence the way people think. A Cornell study found that biased autocomplete suggestions can subtly shift users’ opinions on issues like the death penalty, fracking, GMOs, and voting rights.

Experiments with over 2,500 participants revealed that users’ views gravitated toward the AI’s predetermined bias. Attempts to warn participants about the AI’s bias, either before or after writing, did not prevent the shifts.

Researchers noted that the effect occurs because users effectively write biased viewpoints themselves, a process psychology research shows can alter personal attitudes.

The influence was consistent across political topics and participants of all leanings. Compared with simply providing pre-written arguments, biased AI suggestions had a stronger effect on shaping opinions.

Researchers warn that as autocomplete and generative AI tools become increasingly prevalent, covert persuasion through AI may pose serious societal risks.

The study, led by Sterling Williams-Ceci and Mor Naaman of Cornell Tech, shows how AI can shape beliefs without users noticing. The findings underline the need for oversight as AI writing assistants enter everyday communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively trained using adversarial machine learning methods, such as Generative Adversarial Networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU platform law expands data access rights

European regulators are examining how the Digital Markets Act interacts with the General Data Protection Regulation across major digital platforms. The EU rules apply to designated gatekeepers that operate core platform services used by millions of users.

Policy specialists in the EU say the Digital Markets Act complements GDPR protections by strengthening user control over personal data. The framework also supports rights related to data access, portability and transparency for both consumers and business users.

The regulatory overlap affects areas including consent requirements, third-party software installation and interoperability between services. Authorities are also coordinating enforcement between competition and data protection regulators.

Analysts say the combined application of both laws could reshape the responsibilities of major technology platforms. Policymakers aim to increase user choice while reinforcing safeguards for the integrity and confidentiality of personal data under the GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Leading tech companies deepen AI competition with new capabilities

Competition among leading AI developers intensified in early 2026 as major companies expanded their models, platforms, and partnerships. Companies including Google, OpenAI, Anthropic, and xAI are introducing new capabilities and integrating AI systems into broader ecosystems.

Google has continued to expand its Gemini model family with updates to Gemini 3.1 Pro and 3.1 Flash, designed to support complex tasks across applications. The company is also integrating Gemini into services such as Docs, Sheets, Slides, and Drive, allowing users to generate documents and analyse data across multiple Google services.

Gemini has also been embedded into the Chrome browser and integrated with Samsung’s Galaxy devices, expanding its distribution across consumer platforms as AI competition among major developers accelerates.

Anthropic has focused on advancing the Claude model family while positioning the system for enterprise and professional use. Recent updates include Claude Sonnet 4.6, which introduces improvements in reasoning and coding capabilities alongside an expanded context window currently in beta. The company has also launched a limited preview of the Claude Marketplace, allowing organisations to use third-party tools built on Claude through partnerships with several software companies.

OpenAI has continued to update ChatGPT with the release of the GPT-5 series, including GPT-5.2 and GPT-5.4. The newer models combine reasoning, coding, and agent-based workflows, while also introducing computer-use capabilities that allow the system to interact with applications directly.

OpenAI has also introduced additional services, including ChatGPT Health and integrations designed to assist with spreadsheet modelling and data analysis, further intensifying AI competition across enterprise and consumer tools.

Meanwhile, xAI has expanded development of its Grok models while increasing computing infrastructure. The company has reported growth in Grok usage through integration with the X platform and other applications. Recent announcements include upgrades to Grok’s voice and multimodal capabilities, as well as continued training of future models.

Across the industry, developers are increasingly positioning their systems not only as conversational assistants but also as tools integrated into enterprise workflows, creative production, and software development. New releases in 2026 reflect a broader shift toward multimodal systems, agent-based capabilities, and deeper integration with existing digital platforms, highlighting how competition is shaping the next phase of AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spain expands digital oversight of online hate

Spain has launched a digital system designed to track hate speech and disinformation across social media platforms. Prime Minister Pedro Sánchez presented the tool in Madrid as part of a wider effort to improve oversight of online platforms.

The platform, known as HODIO, will analyse public posts and measure the spread and reach of hateful content. Authorities say the project will publish regular reports examining how platforms respond to harmful material.

The monitoring initiative is managed by Spain’s Observatory on Racism and Xenophobia. Officials say the data will help citizens understand the scale of online hate and assess how social networks address abusive content.

The initiative forms part of Spain’s broader digital policy agenda, which also includes measures to protect minors online. Policymakers have discussed proposals such as restrictions on social media use by children under 16.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU updates voluntary code for labelling AI-generated content

The European Commission has released a second draft of its voluntary Code of Practice on marking and labelling AI-generated content, designed to support compliance with transparency rules under the Artificial Intelligence Act.

Published on 5 March, the updated draft reflects feedback from hundreds of stakeholders, including industry groups, academic researchers, policymakers, and civil society organisations.

Revisions follow consultations held in early 2026 as part of the broader rollout of the EU’s AI regulatory framework.

The proposed code outlines technical approaches for identifying AI-generated material. A two-layered system using secure metadata and digital watermarking is recommended, with optional fingerprinting, logging, and verification to improve detection.

Guidelines also address how platforms and publishers should label deepfakes and AI-generated text related to matters of public interest. Public feedback is open until 30 March, with the final code expected in early June before transparency rules take effect on 2 August 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!