Netherlands regulator presses tech firms over election disinformation

The Netherlands’ competition authority will meet with 12 major online platforms, including TikTok, Facebook and X, on 15 September to address the spread before the 29 October elections.

The session will also involve the European Commission, national regulators and civil society groups.

The Authority for Consumers and Markets (ACM), which enforces the EU’s Digital Services Act in the Netherlands, is mandated to oversee election integrity under the law. The vote was called early in June after the Dutch government collapsed over migration policy disputes.

Platforms designated as Very Large Online Platforms must uphold transparent policies for moderating content and act decisively against illegal material, ACM director Manon Leijten said.

In July, the ACM contacted the platforms to outline their legal obligations, request details for their Trust and Safety teams and collect responses to a questionnaire on safeguarding public debate.

The September meeting will evaluate how companies plan to tackle disinformation, foreign interference and illegal hate speech during the campaign period.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Con artists pose as lawyers to steal from crypto scam victims

A new fraud tactic is emerging, with con artists posing as lawyers to target cryptocurrency scam victims. They exploit desperation by promising to recover lost funds, using elaborate ruses like fabricated government partnerships and forged documents.

Sophisticated tactics, including fake websites and staged WhatsApp chats, pressure people into paying additional fees.

The US Federal Bureau of Investigation has issued a warning about the scam. Fake law firms use detailed knowledge of a victim’s prior losses to appear credible, knowing the exact amounts and dates of fraudulent transactions.

The scheme often escalates when victims are directed to deposit money into what appear to be foreign bank accounts, which are sophisticated facades designed to steal more funds.

The FBI recommends a ‘Zero Trust’ approach to combat fraud. Any unsolicited recovery offer should be met with immediate scepticism. A major red flag is if a representative refuses to appear on camera or provide their licensing details.

The bureau also advises keeping detailed records of all interactions, like emails and video calls, as documentation could prove invaluable for investigators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto wallet apps must now comply with new Google Play rules

Google Play is introducing new policies for cryptocurrency wallet applications. The new rules will require them to be licensed in more than fifteen countries, including the United States and the European Union.

The changes, which come into effect on 29 October, will require providers in the US to register as a money services business or money transmitter. Those in the EU, meanwhile, must register as a crypto-asset service provider.

The updated rules, which aim to ensure compliance with industry standards, will not apply to non-custodial wallets. Following initial concerns from the crypto community, Google clarified the policy on X, stating that non-custodial apps are not in scope.

The new regulations could lead to a broader adoption of Know Your Customer checks and other anti-money laundering measures for the affected apps.

Google Play has a mixed history with cryptocurrency, having previously banned crypto mining apps in 2018 and removed several crypto news and video games. In 2021, the company removed several deceptive apps for allegedly tricking users into paying for an illegitimate cloud service.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Cyber-crime group BlackSuit crippled by $1 million crypto seizure

Law enforcement agencies in the United States and abroad have coordinated a raid to dismantle the BlackSuit ransomware operation, seizing servers and domains and approximately $1 million in cryptocurrency linked to ransom demands.

The action, led by the Department of Justice, Homeland Security Investigations, the Secret Service, the IRS and the FBI, involved cooperation with agencies across the UK, Germany, France, Canada, Ukraine, Ireland and Lithuania.

BlackSuit, a rebranded successor to the Royal ransomware gang and connected to the notorious Conti group, has been active since 2022. It has targeted over 450 US organisations across healthcare, government, manufacturing and education sectors, demanding more than $370 million in ransoms.

The crypto seized was traced back to a 2023 ransom payment of around 49.3 Bitcoin, valued at approximately $1.4 million. Investigators worked with cryptocurrency exchanges to freeze and recover roughly $1 million of those funds in early 2024.

While this marks a significant blow to the gang’s operations, officials warn that without arrests, the threat may persist or re-emerge under new identities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google patches critical Chrome bugs enabling code execution

Chrome security update fixes six flaws that could enable arbitrary code execution. Stable channel 139.0.7258.127/.128 (Windows, Mac) and .127 (Linux) ships high-severity patches that protect user data and system integrity.

CVE-2025-8879 is a heap buffer overflow in libaom’s video codec. CVE-2025-8880 is a V8 race condition reported by Seunghyun Lee. CVE-2025-8901 is an out-of-bounds write in ANGLE.

Detection methods included AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, and AFL. Further fixes address CVE-2025-8881 in File Picker and CVE-2025-8882, a use-after-free in Aura.

Successful exploitation could allow code to run with browser privileges through overflows and race conditions. The automatic rollout is staged; users should update it manually by going to Settings > About Chrome.

Administrators should prioritise rapid deployment in enterprise fleets. Google credited external researchers, anonymous contributors, and the Big Sleep project for coordinated reporting and early discovery.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tesla seeks robotaxi operators in New York without permits

Tesla is recruiting drivers in Queens, New York, to operate vehicles fitted with automated driving systems. The company has not yet secured the permits required for autonomous vehicle testing in the city.

The roles involve driving specially equipped cars for extended periods while collecting audio and video data to train Tesla’s self-driving technology. However, the New York City Department of Transportation confirmed Tesla has yet to apply for the necessary authorisation.

By law, any approved operator must have a trained safety driver ready to take control at all times.

Tesla is advertising similar roles in Dallas, Houston, Tampa, Orlando, Miami and Palo Alto, hinting at a nationwide expansion of its Full Self-Driving trials. In Texas, the company has approval to run a driverless robotaxi service, now limited to Austin employees but likely opening to the public soon.

The company’s push into autonomous driving faces significant hurdles, including regulatory scrutiny, legal disputes and safety concerns. Meanwhile, electric vehicle sales have slowed, with critics blaming product missteps and CEO Elon Musk’s divisive remarks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI agents face prompt injection and persistence risks, researchers warn

Zenity Labs warned at Black Hat USA that widely used AI agents can be hijacked without interaction. Attacks could exfiltrate data, manipulate workflows, impersonate users, and persist via agent memory. Researchers said knowledge sources and instructions could be poisoned.

Demos showed risks across major platforms. ChatGPT was tricked into accessing a linked Google Drive via email prompt injection. Microsoft Copilot Studio agents leaked CRM data. Salesforce Einstein rerouted customer emails. Gemini and Microsoft 365 Copilot were steered into insider-style attacks.

Vendors were notified under coordinated disclosure. Microsoft stated that ongoing platform updates have stopped the reported behaviour and highlighted built-in safeguards. OpenAI confirmed a patch and a bug bounty programme. Salesforce said its issue was fixed. Google pointed to newly deployed, layered defences.

Enterprise adoption of AI agents is accelerating, raising the stakes for governance and security. Aim Labs, which had previously flagged similar zero-click risks, said frameworks often lack guardrails. Responsibility frequently falls on organisations deploying agents, noted Aim Labs’ Itay Ravia.

Researchers and vendors emphasise layered defence against prompt injection and misuse. Strong access controls, careful tool exposure, and monitoring of agent memory and connectors remain priorities as agent capabilities expand in production.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over data uploads. Critics fear YouTube’s tool could invite hackers. Past scandals over AI-generated content have already hurt creator trust.

Users refer to it on X as a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews says creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance, prioritizing people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK minister defends use of live facial recognition vans

Dame Diana Johnson, the UK policing minister, has reassured the public that expanded use of live facial recognition vans is being deployed in a measured and proportionate manner.

She emphasised that the tools aim only to assist police in locating high-harm offenders, not to create a surveillance society.

Addressing concerns raised by Labour peer Baroness Chakrabarti, who argued the technology was being introduced outside existing legal frameworks, Johnson firmly rejected such claims.

She stated that UK public acceptance would depend on a responsible and targeted application.

By framing the technology as a focused tool for effective law enforcement rather than pervasive monitoring, Johnson seeks to balance public safety with civil liberties and privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!