Google patches critical Chrome bugs enabling code execution

Chrome security update fixes six flaws that could enable arbitrary code execution. Stable channel 139.0.7258.127/.128 (Windows, Mac) and .127 (Linux) ships high-severity patches that protect user data and system integrity.

CVE-2025-8879 is a heap buffer overflow in libaom’s video codec. CVE-2025-8880 is a V8 race condition reported by Seunghyun Lee. CVE-2025-8901 is an out-of-bounds write in ANGLE.

Detection methods included AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, and AFL. Further fixes address CVE-2025-8881 in File Picker and CVE-2025-8882, a use-after-free in Aura.

Successful exploitation could allow code to run with browser privileges through overflows and race conditions. The automatic rollout is staged; users should update it manually by going to Settings > About Chrome.

Administrators should prioritise rapid deployment in enterprise fleets. Google credited external researchers, anonymous contributors, and the Big Sleep project for coordinated reporting and early discovery.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI agents face prompt injection and persistence risks, researchers warn

Zenity Labs warned at Black Hat USA that widely used AI agents can be hijacked without interaction. Attacks could exfiltrate data, manipulate workflows, impersonate users, and persist via agent memory. Researchers said knowledge sources and instructions could be poisoned.

Demos showed risks across major platforms. ChatGPT was tricked into accessing a linked Google Drive via email prompt injection. Microsoft Copilot Studio agents leaked CRM data. Salesforce Einstein rerouted customer emails. Gemini and Microsoft 365 Copilot were steered into insider-style attacks.

Vendors were notified under coordinated disclosure. Microsoft stated that ongoing platform updates have stopped the reported behaviour and highlighted built-in safeguards. OpenAI confirmed a patch and a bug bounty programme. Salesforce said its issue was fixed. Google pointed to newly deployed, layered defences.

Enterprise adoption of AI agents is accelerating, raising the stakes for governance and security. Aim Labs, which had previously flagged similar zero-click risks, said frameworks often lack guardrails. Responsibility frequently falls on organisations deploying agents, noted Aim Labs’ Itay Ravia.

Researchers and vendors emphasise layered defence against prompt injection and misuse. Strong access controls, careful tool exposure, and monitoring of agent memory and connectors remain priorities as agent capabilities expand in production.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over data uploads. Critics fear YouTube’s tool could invite hackers. Past scandals over AI-generated content have already hurt creator trust.

Users refer to it on X as a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews says creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance, prioritizing people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI browsers accused of harvesting sensitive data, according to new study

A new study from researchers in the UK and Italy found that popular AI-powered browsers collect and share sensitive personal data, often in ways that may breach privacy laws.

The team tested ten well-known AI assistants, including ChatGPT, Microsoft’s Copilot, Merlin AI, Sider, and TinaMind, using public websites and private portals like health and banking services.

All but Perplexity AI showed evidence of gathering private details, from medical records to social security numbers, and transmitting them to external servers.

The investigation revealed that some tools continued tracking user activity even during private browsing, sending full web page content, including confidential information, to their systems.

Sometimes, prompts and identifying details, like IP addresses, were shared with analytics platforms, enabling potential cross-site tracking and targeted advertising.

Researchers also found that some assistants profiled users by age, gender, income, and interests, tailoring their responses across multiple sessions.

According to the report, such practices likely violate American health privacy laws and the European Union’s General Data Protection Regulation.

Privacy policies for some AI browsers admit to collecting names, contact information, payment data, and more, and sometimes storing information outside the EU.

The study warns that users cannot be sure how their browsing data is handled once gathered, raising concerns about transparency and accountability in AI-enhanced browsing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ukraine pioneers Starlink satellite-to-phone network

Ukraine has completed its first successful field test of Starlink’s direct-to-cell satellite technology, marking a breakthrough for mobile connectivity in Eastern Europe.

The trial, carried out by the country’s largest mobile operator Kyivstar in the Zhytomyr region, saw CEO Oleksandr Komarov and Ukraine’s digital transformation minister Mykhailo Fedorov exchange messages using standard smartphones.

The system connects directly to phones via satellites equipped with advanced cellular modems, functioning like cell towers in space.

The technology is designed to keep communications running when terrestrial networks are damaged or inaccessible.

Telecom companies worldwide are exploring satellite-based solutions to remove coverage gaps instead of relying solely on costly or impractical land-based networks.

Starlink, owned by SpaceX, has already signed direct-to-cell service deals in 10 countries, with Kyivstar set to be the first European operator to adopt it.

A commercial rollout in Ukraine is planned for late 2025, starting with messaging. Broader mobile satellite broadband access is expected in early 2026.

Kyivstar’s parent company, VEON, is also discussing with other providers, such as Amazon’s Project Kuiper, the extension of similar services beyond Ukraine.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto crime unit expands with Binance

Tron, Tether, and TRM Labs have announced the expansion of their T3 Financial Crime Unit (T3 FCU) with Binance as the first T3+ partner. The unit has frozen over $250 million in illicit crypto assets since its launch in September 2024.

The T3 FCU works with global law enforcement to tackle money laundering, investment fraud, terrorism financing, and other financial crimes. The new T3+ programme unites exchanges and institutions to share intelligence and tackle threats in real time.

Recent reports highlight the urgency of these efforts. Over $3 billion in crypto was stolen in the first half of 2025, with some hacks laundering funds in under three minutes. Only around 4% of stolen assets were recovered during this period, underscoring the speed and sophistication of modern attacks.

Debate continues over the role of stablecoin issuers and exchanges in freezing funds. Tether’s halt of $86,000 in stolen USDt highlights fast recovery but raises concerns over decentralised principles amid calls for stronger industry-wide security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU targets eight members states over cybersecurity directive implementation delay

Eight EU countries, including Ireland, Spain, France, Bulgaria, Luxembourg, the Netherlands, Portugal, and Sweden, have been warned by the European Commission for failing to meet the deadline on the implementation of the NIS2 Directive.

What is the NIS2 Directive about?

The NIS2 Directive, adopted by the EU in 2022, is an updated legal framework designed to strengthen the cybersecurity and resilience of critical infrastructure and essential services. Essentially, this directive replaces the 2016 NIS Directive, the EU’s first legislation to improve cybersecurity across crucial sectors such as energy, transport, banking, and healthcare. It set baseline security and incident reporting requirements for critical infrastructure operators and digital service providers to enhance the overall resilience of network and information systems in the EU.

With the adoption of the NIS2 Directive, the EU aims to broaden the scope to include not only traditional sectors like energy, transport, banking, and healthcare, but also public administration, space, manufacturing of critical products, food production, postal services, and a wide range of digital service providers.

NIS2 introduces stricter risk management, supply-chain security requirements, and enhanced incident reporting rules, with early warnings due within 24 hours. It increases management accountability, requiring leadership to oversee compliance and undergo cybersecurity training.

It also imposes heavy penalties for violations, including up to €10 million or 2% of global annual turnover for essential entities. The Directive also aims to strengthen EU-level cooperation through bodies like ENISA and EU-CyCLONe.

Member States were expected to transpose NIS2 into national law by 17 October 2024, making timely compliance preparation critical.

What is a directive?

There are two main types of the EU laws: regulations and directives. Regulations apply automatically and uniformly across all member states once adopted by the EU.

In contrast, directives set specific goals that member states must achieve but leave it up to each country to decide how to implement them, allowing for different approaches based on each member state’s capacities and legal systems.

So, why is there a delay in implementing the NIS2 Directive?

According to Insecurity Magazine, the delay is due to member states’ implementation challenges, and many companies across the EU are ‘not fully ready to comply with the directive.’ Six critical infrastructure sectors are facing challenges, including:

  • IT service management is challenged by its cross-border nature and diverse entities
  • Space, with limited cybersecurity knowledge and heavy reliance on commercial off-the-shelf components
  • Public administrations, which “lack the support and experience seen in more mature sectors”
  • Maritime, facing operational technology-related challenges and needing tailored cybersecurity risk management guidance
  • Health, relying on complex supply chains, legacy systems, and poorly secured medical devices
  • Gas, which must improve incident readiness and response capabilities

The deadline for the implementation was 17 October 2024. In May 2025, the European Commission warned 19 member states about delays, giving them two months to act or risk referral to the Court of Justice of the EU. It remains unclear whether the eight remaining holdouts will face further legal consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach hits cervical cancer screening programme

Hackers have stolen personal and medical information from nearly 500,000 participants in the Netherlands’ cervical cancer screening programme. The attack targeted the NMDL laboratory in Rijswijk between 3 and 6 July, but authorities were only informed on 6 August.

Data includes names, addresses, birth dates, citizen service numbers, possible test results and healthcare provider details. For some victims, phone numbers and email addresses were also stolen. The lab, owned by Eurofins Scientific, has suspended operations while a security review occurs.

The Dutch Population Screening Association has switched to a different laboratory to process future tests and is warning those affected of the risk of fraud. Local media reports suggest hackers may also have accessed up to 300GB of data on other patients from the past three years.

Security experts say the breach underscores the dangers of weak links in healthcare supply chains. Victims are now being contacted by the authorities, who have expressed regret for the distress caused.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Turkish authorities detain Ethereum developer amid legal probe

Ethereum developer Federico Carrone, known as Fede’s Intern, was detained in Turkey over allegations of helping misuse the Ethereum network. The incident happened at Izmir airport, where authorities informed him of a pending criminal charge likely linked to his privacy protocol work.

After intervention from the Ethereum community and legal support, Carrone was released and allowed to leave. The case seems tied to blockchain privacy tools, which face rising government scrutiny.

Carrone’s team previously came under attention for Tutela, a study on Ethereum and Tornado Cash user privacy. He emphasised that creating privacy code does not make developers criminals, comparing it to blaming software creators for misuse.

Growing legal challenges face developers building privacy and self-custody tools. Tornado Cash co-founder Alexey Roman recently received a criminal conviction and may face prison.

Crypto advocates warn lawsuits against developers risk stifling innovation and highlight ongoing legal uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot