Parliament deadlock leaves EU chat-scanning extension in doubt

The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.

Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.

The report concerns a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).

The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.

The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.

With trilogue talks finally underway, the institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose the ability to scan their services under EU law.

The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe turns to satellite networks as Deutsche Telekom expands Starlink collaboration

Deutsche Telekom is turning to satellite connectivity to address Europe’s persistent mobile coverage gaps, rather than relying solely on terrestrial networks.

The company announced a partnership with Starlink during the Mobile World Congress in Barcelona, arguing that non-terrestrial networks can help reach remote forests, mountains and islands that remain underserved despite broad coverage elsewhere.

The collaboration aims to support direct-to-device satellite links by 2028, enabling future smartphones to connect to Starlink satellites over MSS spectrum without additional hardware.

Telecommunications leaders describe the plan as a step toward an ‘everywhere network’, extending reliable service to areas long constrained by topographical and conservation barriers. The partnership follows earlier joint work with SpaceX to eliminate dead zones.

Deutsche Telekom is also increasing its use of agentic AI, integrating autonomous network-enhancing systems intended to improve translation, search and service features across devices.

Executives say these capabilities work even on older phones, reducing dependence on apps and creating a more inclusive digital environment.

Although committed to European digital sovereignty, the company insists that global collaboration remains necessary for long-term competitiveness.

Leadership argues that precise regulation and controlled data environments aligned with European standards can balance international cooperation with privacy protection. They remain confident that European technology firms and start-ups will continue driving meaningful innovation across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ClawJacked flaw let attackers hijack AI agents through the browser

A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.

The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.

The attack exploited OpenClaw’s local WebSocket gateway, which assumed that traffic from localhost could be trusted. Because no rate limiting was applied to local connections, a malicious website could open a WebSocket connection to the gateway, brute-force the password at hundreds of guesses per second, and then silently register as a trusted device without any user prompt.
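The missing controls are generic hardening for any local gateway: verify the browser’s Origin header and rate-limit authentication attempts even when they come from localhost. A minimal sketch of both checks in TypeScript, with illustrative names and thresholds rather than OpenClaw’s actual code:

```typescript
// Sketch of two defences the attack relied on being absent: an Origin
// allow-list and rate limiting on auth attempts. All names, origins and
// limits here are illustrative, not taken from OpenClaw's codebase.

const ALLOWED_ORIGINS = new Set(["http://localhost:3000"]); // hypothetical UI origin
const MAX_ATTEMPTS = 5;   // password guesses allowed...
const WINDOW_MS = 60_000; // ...per one-minute window

const attempts = new Map<string, { count: number; windowStart: number }>();

// Browsers always send an Origin header on WebSocket handshakes started
// from a web page, so an unknown origin means a foreign site.
function originAllowed(originHeader: string | undefined): boolean {
  return originHeader === undefined || ALLOWED_ORIGINS.has(originHeader);
}

// Count password attempts per client in a fixed window; crucially,
// connections from localhost get no exemption.
function authAttemptAllowed(clientId: string, now = Date.now()): boolean {
  const entry = attempts.get(clientId);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    attempts.set(clientId, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}
```

With checks like these in place, a hostile page fails the origin test before it can authenticate, and even a permitted client cannot guess hundreds of passwords per second.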

Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.

Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why detecting deepfakes is no longer enough to stay secure

Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.

Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.

Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.

According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.

Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
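As a rough illustration of what whole-session validation means (every field name, weight and threshold below is hypothetical, not a vendor’s API), the decision can combine the biometric match with device and behaviour signals instead of trusting the face check alone:

```typescript
// Hypothetical sketch: approve an identity-verification session only if
// several independent signals agree. Names and thresholds are invented.

interface SessionSignals {
  faceMatchScore: number;     // biometric comparison result, 0..1
  deepfakeRisk: number;       // synthetic-media detector output, 0..1
  deviceIntegrityOk: boolean; // e.g. no virtual camera in the capture path
  behaviourScore: number;     // interaction patterns during the session, 0..1
}

function approveSession(s: SessionSignals): boolean {
  if (!s.deviceIntegrityOk) return false; // catches injection attacks
  if (s.deepfakeRisk > 0.5) return false; // catches synthetic faces
  return s.faceMatchScore > 0.9 && s.behaviourScore > 0.6;
}
```

The point of this structure is that an injection attack which bypasses the camera can defeat the face check but not the device-integrity signal, and vice versa.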

Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data breach at Cloud Imperium sparks outrage among players

A data breach at British game studio Cloud Imperium has angered players worldwide after the company quietly announced the incident. Users criticised the slow disclosure and the minimal information provided about what was accessed.

The breach, which occurred on 21 January, exposed names, contact details and dates of birth from backup systems. Cloud Imperium insists no passwords, financial information or game data were compromised.

Players have expressed frustration over the company’s reassurances, arguing that even basic personal details could be used in phishing campaigns. Forums and social media quickly filled with criticism describing the announcement as buried and inadequate.

Cloud Imperium said it acted quickly to contain the breach, refresh security settings, and monitor systems for further incidents. The studio maintains that the issue should not affect gameplay or user safety, but some users remain sceptical.

The company’s flagship game, Star Citizen, is crowdfunded and boasts millions of players. However, Cloud Imperium has not disclosed the total number of accounts affected, leaving the community uneasy about the transparency of the response.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake scams target Indian and global executives

A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.

Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.

In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.

Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Free plan users can now transfer data to Claude

Anthropic has enhanced its Claude AI chatbot to make switching from other platforms easier. Users on the free plan can now activate Claude’s memory feature, which allows them to import data from other AI platforms using a new dedicated tool.

The update ensures that users don’t have to start over when transferring context and history from competitors like OpenAI’s ChatGPT or Google’s Gemini.

The memory import option, first introduced in October for paid subscribers, now appears under ‘settings’ → ‘capabilities’ for all users. The tool lets users copy a prompt from their previous AI and paste the output into Claude, seamlessly transferring past interactions.

The recent popularity of Claude has been driven by tools such as Claude Code and Claude Cowork, as well as the launch of the Opus 4.6 and Sonnet 4.6 models. Upgrades enhance Claude’s coding, spreadsheet, and complex task capabilities, boosting its appeal to new users.

Anthropic’s visibility has also increased amid debates with the Pentagon, as the company refuses to loosen AI safeguards for military use, drawing ‘red lines’ around mass surveillance and autonomous weapons.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chrome unveils 3-phase quantum-resistant HTTPS upgrade with Merkle Tree Certificates

Google has outlined a plan to strengthen Chrome’s HTTPS security against future quantum-computing threats. Rather than expanding traditional X.509 certificate chains in Chrome with post-quantum cryptography, the company is developing a new model based on Merkle Tree Certificates (MTCs).

The proposal from the PLANTS working group seeks to modernise the web public key infrastructure. Under the MTC model, a Certification Authority signs a single ‘Tree Head’ covering many certificates. Browsers receive a lightweight proof instead of a full certificate chain.

Google said this structure reduces authentication data exchanged during TLS handshakes while supporting post-quantum algorithms. By decoupling cryptographic strength from certificate size, the approach seeks to preserve performance as stronger security standards are adopted.
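The size saving follows from standard Merkle-tree mathematics: proving that one certificate is covered by a signed Tree Head takes only about log2(n) sibling hashes, however many certificates the tree covers. A generic sketch of inclusion-proof verification, not the actual MTC wire format:

```typescript
import { createHash } from "node:crypto";

// Generic Merkle inclusion check: recompute the root from a leaf and its
// sibling hashes, then compare with the tree head the CA signed. The real
// MTC encoding differs; this only illustrates the data structure.
function sha256(data: Buffer): Buffer {
  return createHash("sha256").update(data).digest();
}

function verifyInclusion(
  leaf: Buffer,                                       // hash of the certificate entry
  proof: { sibling: Buffer; nodeOnRight: boolean }[], // ~log2(n) siblings
  signedTreeHead: Buffer                              // root hash signed by the CA
): boolean {
  let node = leaf;
  for (const { sibling, nodeOnRight } of proof) {
    node = nodeOnRight
      ? sha256(Buffer.concat([sibling, node]))  // our node is the right child
      : sha256(Buffer.concat([node, sibling])); // our node is the left child
  }
  return node.equals(signedTreeHead);
}
```

For a tree covering a million certificates, the proof is roughly 20 hashes, which is why the authentication data stays small even as post-quantum signatures grow.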

The company is already testing MTCs with real internet traffic. Phase one involves feasibility studies with Cloudflare, while phase two, in early 2027, will invite selected Certificate Transparency log operators to support initial public deployment.

By the third quarter of 2027, Google plans to establish requirements for onboarding certificate authorities to the quantum-resistant Chrome Root Store, which exclusively supports MTCs. The company described the initiative as foundational to maintaining long-term web security resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam AI Law establishes comprehensive risk-based governance framework

Vietnam’s Law on Artificial Intelligence has entered into force, establishing the first dedicated AI legal framework in Southeast Asia. The law centralises oversight and replaces earlier AI provisions in the 2025 Law on Digital Technology Industry.

The framework closely mirrors the AI Act adopted by the European Union. It promotes accountability, transparency, and safety in response to risks such as misinformation, copyright infringement, and deepfakes.

At the same time, Vietnam places a stronger emphasis on digital sovereignty and domestic AI capacity. While remaining open to international integration, the law prioritises national strategic interests.

The legislation introduces a tiered risk classification system. AI systems considered to pose unacceptable risks, including threats to national security or human dignity, are banned, while low-risk applications such as spam filters face lighter obligations.

Vietnam’s Ministry of Science and Technology will lead implementation. A national AI database will support monitoring and registration, and a dedicated AI development fund will invest in data centres and research capacity as part of Vietnam’s broader technology strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cybersecurity stability framework unlocks advanced Non-Human Identity management

AI is increasingly positioned as a key driver of cybersecurity stability. By analysing large volumes of data and detecting anomalies in real time, AI helps organisations strengthen defence systems and respond faster to evolving digital threats.

Modern cybersecurity challenges are closely linked to the rise of Non-Human Identities (NHIs), including machine accounts, tokens, and automated credentials. These identities require continuous monitoring and secure lifecycle management to prevent unauthorised access and data breaches.

The integration of AI with NHI management enables a more proactive security approach. AI improves visibility into access permissions and system behaviour, helping organisations reduce risks and maintain stronger control over their digital environments.

Automation powered by AI enhances operational efficiency across cybersecurity processes. Tasks such as credential rotation, access monitoring, and policy enforcement can be automated, allowing security teams to prioritise strategic decision-making.
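To make that concrete, a rotation policy for machine credentials can be expressed as a simple rule over credential age and a behavioural score. A hypothetical sketch, with invented names and thresholds:

```typescript
// Illustrative sketch of automated rotation for non-human identities.
// The interface and limits are hypothetical, not a specific product's API.

interface MachineCredential {
  id: string;
  issuedAt: Date;
  lastAnomalyScore: number; // 0..1, e.g. from an AI behaviour model
}

const MAX_AGE_DAYS = 30;       // rotate on age...
const ANOMALY_THRESHOLD = 0.8; // ...or on suspicious behaviour

function needsRotation(cred: MachineCredential, now = new Date()): boolean {
  const ageDays = (now.getTime() - cred.issuedAt.getTime()) / 86_400_000;
  return ageDays > MAX_AGE_DAYS || cred.lastAnomalyScore > ANOMALY_THRESHOLD;
}
```

A scheduler can run such a check continuously and rotate flagged credentials without waiting for human review.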

AI also strengthens threat intelligence capabilities by identifying patterns and predicting potential attacks before they occur. This predictive capacity helps close security gaps, particularly between development, operations, and security teams.

Across sectors such as finance, healthcare, and technology, AI-driven cybersecurity solutions support compliance and data protection requirements. These systems contribute to building resilient infrastructures capable of adapting to increasingly sophisticated cyber threats.

Finally, combining AI capabilities with structured identity management creates a foundation for long-term cybersecurity resilience. Organisations adopting this approach can improve incident response, enhance adaptability, and secure future digital operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!