EU privacy watchdogs warn over US plans to expand traveller data collection

European privacy authorities have raised concerns about proposed changes to the Electronic System for Travel Authorisation that could require travellers to the US to disclose extensive personal information, including social media activity.

The European Data Protection Board, which coordinates national data protection authorities across the EU, sent a letter to the European Commission asking whether the institution plans to intervene or respond to the updated requirements.

The proposal would apply to visitors entering the US through the visa-waiver programme for short stays of up to 90 days.

Under the proposed changes, travellers may be required to provide details about their social media accounts covering the previous five years.

Authorities could also request personal data about family members, including addresses, phone numbers and dates of birth, information that privacy regulators argue is unrelated to travel authorisation.

Watchdogs also questioned how EU citizens could exercise their data protection rights once such information is transferred to US authorities, particularly regarding storage periods and potential misuse.

Parallel negotiations between the EU and the US have also attracted attention.

Discussions around a potential Enhanced Border Security Partnerships framework could allow US authorities to seek access to biometric databases held by European countries, including facial scans and fingerprint records.

European privacy regulators warned that such measures could raise significant concerns regarding fundamental rights and personal data protection for travellers from the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection through memory-based decryption and by blocking analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, providing full control over devices. Capabilities include automatic permission granting, background persistence, keylogging, GPS tracking, camera access, and screen-lock credential capture.

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.


AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively refined using adversarial machine learning methods, such as generative adversarial networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.
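This iterative refinement can be understood as black-box optimisation against the agent's feedback. The toy "suspicion score" and page features below are invented purely for illustration; real attacks exploit far richer signals than a handful of named features:

```python
import random

# Toy illustration of black-box iterative refinement: an attacker mutates a
# page's features and keeps changes that lower a defender's suspicion score.
# The feature set and scoring rule are invented for illustration only.

SUSPICIOUS_FEATURES = {"urgent_banner": 3, "lookalike_domain": 2,
                       "hidden_form": 2, "typos": 1}

def suspicion_score(page: set) -> int:
    """A stand-in for the signal leaked by an agent's verbalised reasoning."""
    return sum(weight for feature, weight in SUSPICIOUS_FEATURES.items()
               if feature in page)

def refine(page: set, threshold: int = 1, max_rounds: int = 20,
           seed: int = 0) -> set:
    """Greedily drop features until the score falls to the threshold."""
    rng = random.Random(seed)
    page = set(page)
    for _ in range(max_rounds):
        if suspicion_score(page) <= threshold or not page:
            break
        page.discard(rng.choice(sorted(page)))
    return page

# Usage: start from a maximally suspicious page and refine it.
final = refine(set(SUSPICIOUS_FEATURES))
# `final` now scores at or below the threshold the defender tolerates.
```

The point of the sketch is that each round only needs the defender's reaction, not its internals, which is exactly the feedback an AI browser leaks when it explains its reasoning.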

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.


AI agents face growing prompt injection risks

AI developers are working on new defences against prompt-injection attacks that aim to manipulate AI agents. Security specialists warn that attackers are increasingly using social engineering techniques to influence AI systems that interact with online content.

Researchers say AI agents that browse the web or handle user tasks face growing risks from hidden instructions embedded in emails or websites. Experts in the US note that attackers often attempt to trick AI into revealing sensitive information.

Engineers are responding by designing systems that limit the impact of manipulation attempts. Developers in the US say AI tools must include safeguards preventing sensitive data from being transmitted without user approval.

Security teams are also introducing technologies that detect risky actions and prompt users for confirmation. Specialists argue that strong system design and user oversight will remain essential as AI agents gain more autonomy.
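The confirmation-gating approach described above amounts to a policy layer between an AI agent and the actions it proposes. The action names and risk categories in this sketch are hypothetical, not any vendor's actual API:

```python
# Illustrative sketch of a human-in-the-loop guard for agent actions.
# The action types and risk rules are hypothetical examples, not a real API.

RISKY_ACTIONS = {"send_email", "transfer_funds", "share_credentials",
                 "delete_data"}

def execute_with_oversight(action: str, payload: dict, confirm) -> str:
    """Run an agent action, pausing for user confirmation when it is risky.

    `confirm` is a callable that asks the user and returns True or False.
    """
    if action in RISKY_ACTIONS:
        if not confirm(f"Agent wants to perform '{action}' "
                       f"with {payload}. Allow?"):
            return "blocked: user declined"
    # In a real system this would dispatch to the tool implementation.
    return f"executed: {action}"

# Usage: a demo confirm callback that declines everything risky.
print(execute_with_oversight("fetch_page", {"url": "https://example.org"},
                             lambda prompt: False))
print(execute_with_oversight("transfer_funds", {"amount": 100},
                             lambda prompt: False))
```

The design choice here is that low-risk actions pass through silently, so oversight cost is concentrated on the small set of actions that can leak data or move money.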


Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defence’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defence, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.


Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.
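In the abstract, such lifecycle gating checks both the user's query and the model's draft response. The keyword lists and fallback messages below are invented placeholders for illustration, not Google's actual classifiers or policies:

```python
# Toy sketch of lifecycle safety gating: classify the user's query before the
# model runs, then screen the model's draft response before it is shown.
# All keyword lists here are invented placeholders, not real policy.

BLOCKED_QUERY_TERMS = {"self-harm", "extremism"}
BLOCKED_OUTPUT_TERMS = {"graphic violence"}

def classify_query(query: str) -> bool:
    """Return True if the query looks safe to pass to the model."""
    return not any(term in query.lower() for term in BLOCKED_QUERY_TERMS)

def screen_response(draft: str) -> str:
    """Replace a draft response that trips the output classifier."""
    if any(term in draft.lower() for term in BLOCKED_OUTPUT_TERMS):
        return "I can't help with that topic."
    return draft

def answer(query: str, model) -> str:
    """Gate on the way in and on the way out; `model` is any callable."""
    if not classify_query(query):
        return "This question isn't something I can help with."
    return screen_response(model(query))
```

Production systems replace the keyword checks with trained classifiers, but the two-sided structure, filtering inputs and outputs independently, is the same.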

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.


AI-driven adaptive malware highlights new cyber threat landscape

Google’s cybersecurity division, Mandiant, has warned about the growing threat of AI-driven adaptive malware, highlighting how AI is reshaping the cyber threat landscape.

According to a recent report, adaptive malware can modify its behaviour and code in response to the environment it encounters, thereby evading traditional security tools. By analysing the security systems protecting a target, the malware can rewrite parts of its code to bypass detection.

Unlike traditional malware, which typically follows fixed instructions, adaptive malware can adjust its behaviour during an attack. This capability makes it more difficult for conventional cybersecurity tools to detect and block malicious activity.

Mandiant noted that such malware is increasingly associated with advanced persistent threat (APT) groups that conduct long-term, targeted cyber operations. These groups often pursue espionage objectives or financial gain while maintaining prolonged access to compromised systems.

AI is also being used to automate elements of cyberattacks. Machine learning algorithms allow malicious software to anticipate defensive measures and adjust its behaviour in real time. In some cases, attackers are integrating AI into broader automated attack chains. AI-driven malware can gather information, adapt its strategy, and continue operating with minimal human intervention.

Security researchers say autonomous AI agents may be capable of managing multiple stages of an attack, including reconnaissance, exploitation, and persistence, while remaining undetected.

To address these evolving threats, Mandiant recommends that organisations strengthen their cybersecurity strategies by deploying advanced detection and response tools, including AI-based systems that can identify anomalous behaviour. As AI capabilities continue to develop, cybersecurity experts say understanding adaptive malware and automated attack techniques will be essential for organisations seeking to protect their systems and data.


AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, a growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.


EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.
