AgentKit enables ID verification for AI-powered online commerce

Tools for Humanity has introduced a new verification system to strengthen trust in online transactions, as demand for reliable ID verification tools grows in AI-driven environments. The update builds on its World project, which aims to prove that real humans, rather than automated systems, are behind digital activity.

The company’s latest release, AgentKit, is designed to support agentic commerce by allowing websites to verify that AI agents are acting on behalf of authenticated users. As AI programs increasingly browse websites and make purchases autonomously, ID verification tools are becoming essential to prevent fraud, spam, and misuse.

AgentKit relies on World ID, a system that generates a secure digital identity through biometric verification. Users obtain a verified ID by scanning their iris using a dedicated device, which converts the scan into an encrypted digital code. These ID verification tools are then used to confirm that transactions initiated by AI agents are linked to a real and unique individual.

The system integrates with the x402 protocol, a blockchain-based standard developed by Coinbase and Cloudflare that enables automated transactions between systems. By combining this protocol with ID verification tools, websites can confirm that a human user has authorised an AI agent before completing a purchase.

‘AgentKit is built as a complementary extension to the x402 v2 protocol, in coordination with Coinbase,’ the company said. ‘The integration is designed so that any website already using x402 can enable proof of unique human verification alongside (or instead of) micropayments.’
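The flow described above — a site accepting either a micropayment or a proof that a unique human stands behind the agent — can be sketched as a simple server-side gate. This is an illustrative sketch only: the header names, return values, and `verify_human_proof` helper are hypothetical and do not reflect the actual x402 v2 or World ID wire formats.

```python
# Hypothetical sketch of a server-side gate for agentic requests.
# Header names and the proof format are illustrative, not the real
# x402 v2 / World ID protocol fields.

def verify_human_proof(proof: str) -> bool:
    # Placeholder: a real deployment would verify a zero-knowledge
    # proof against the World ID verifier service.
    return proof == "valid-demo-proof"

def gate_agent_request(headers: dict) -> str:
    """Decide how to handle a request from a (possibly AI) agent."""
    if "X-Human-Proof" in headers:
        if verify_human_proof(headers["X-Human-Proof"]):
            return "allow"          # proof of a unique human holds
        return "reject"             # proof supplied but invalid
    if "X-Payment" in headers:
        return "settle-payment"     # fall back to the x402 payment path
    return "require-payment-or-proof"  # HTTP 402-style challenge
```

The point of the design is the fallback order: a site can treat proof of humanity as an alternative to, or a prerequisite for, the payment path, which matches the "alongside (or instead of) micropayments" framing in the announcement.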

According to the company, the approach functions similarly to delegating authority to an AI agent, allowing platforms to decide whether to trust automated actions. These ID verification tools provide a layer of accountability, helping ensure that AI-driven transactions remain secure and traceable.

AgentKit is currently available in beta, with developers encouraged to test and refine the system. However, access depends on users obtaining a verified World ID, reinforcing the central role of biometric-based ID verification tools in the company’s ecosystem.

As agentic commerce expands across platforms such as Amazon and Mastercard, the need for trusted identity systems is becoming more urgent. By positioning its ID verification tools at the centre of this emerging market, the company aims to establish itself as a key provider of trust infrastructure for AI-powered digital transactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK announces £2.5 billion investment in AI and quantum technologies

Plans to accelerate technological leadership have been outlined by HM Treasury and the Department for Science, Innovation and Technology, with a £2.5 billion investment targeting AI and quantum computing.

The ambition was reinforced by Rachel Reeves, who positioned AI as a central driver of economic growth, alongside closer European ties and regional development. The strategy aims to secure the fastest adoption of AI in the G7 while supporting domestic innovation ecosystems.

Significant funding in the UK will be directed towards a Sovereign AI initiative, quantum infrastructure and research capacity. Plans include procurement of large-scale quantum systems and targeted investment in startups, helping companies scale while strengthening national capabilities in advanced technologies.

Expectations surrounding quantum computing are framed as transformative, with potential to reshape industries from healthcare to energy. Combined investment reflects a broader effort to align innovation policy with long-term economic growth and global competitiveness.


Stryker cyberattack wipes devices via Microsoft environment without malware

A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.

Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.

As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.

Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.

Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.

The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.


EU calls on US tech firms to respect rules on handling staff data

Concerns over data protection have intensified as the European Commission calls on major technology companies to apply EU standards when handling sensitive staff information linked to digital regulation.

Pressure follows requests from the US House Judiciary Committee seeking access to communications between US firms and EU officials involved in enforcing laws such as the Digital Services Act and Digital Markets Act.

EU officials emphasise that formal exchanges with companies take place through official channels, including documented correspondence, rather than informal messaging platforms. Internal communication practices may involve encrypted tools, reflecting growing concerns about data security and external scrutiny.

Debate surrounding the issue reflects wider tensions between the EU and the US over digital governance, privacy protections and regulatory authority. Questions over jurisdiction and access to sensitive communications are likely to remain central as transatlantic tech policy evolves.


GDPR changes debated as EU seeks balance on data protection rules

Debate over potential updates to the GDPR is intensifying, as Marina Kaljurand advocates a focused ‘fitness check’ rather than sweeping legislative changes in an omnibus package.

Concerns raised in the European Parliament highlight risks associated with altering foundational elements of the regulation, particularly its definition of personal data. Preserving these core principles is seen as essential to maintaining the integrity of the EU's data protection framework.

Ongoing discussions reflect broader policy tensions within the EU, where efforts to reduce regulatory complexity must be balanced against the need to uphold strong privacy safeguards. Proposals for simplification are therefore facing scrutiny from lawmakers prioritising stability and legal clarity.

Future developments are likely to shape how the EU adapts its data protection rules to evolving digital markets, while ensuring that existing protections remain effective in a rapidly changing technological environment.


EU moves to strengthen digital resilience with subsea cable funding

Efforts to improve the security of Europe’s digital infrastructure have advanced as the European Commission opens a €180 million funding call to support backup systems for subsea internet cables.

Investment by the EU will focus on developing alternative routes and redundancy mechanisms, ensuring continuity of connectivity in the event of disruptions affecting critical undersea networks that carry global data traffic.

Growing concerns around infrastructure vulnerability have increased attention on subsea cables, which play a central role in international communications. Strengthening resilience is therefore becoming a priority within broader European strategies on technological sovereignty and security.

Planned projects are expected to enhance reliability across the region, reducing risks associated with outages or potential external threats to essential telecommunications infrastructure.


NSA warns of AI supply chain risks in new cybersecurity guidance

The National Security Agency has released new guidance on managing risks across the AI supply chain, highlighting growing cybersecurity concerns tied to AI and machine learning systems. The joint information sheet outlines how organisations can better assess vulnerabilities when deploying or sourcing AI technologies.

The document defines the AI and machine learning supply chain as a combination of key components, including training data, models, software, infrastructure, hardware, and third-party services. Each element can introduce risks affecting confidentiality, integrity, or availability, particularly as advanced tools such as large language models and AI agents become more widely adopted.

Security risks associated with data include bias, poisoning attacks, and exposure via techniques such as model inversion and data extraction. For models, the guidance warns of hidden backdoors, malware, evasion attacks, and model manipulation. Organisations are advised to use trusted sources, perform integrity checks, and maintain verified model registries to mitigate such threats.
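The "integrity checks against a verified model registry" recommendation can be sketched in a few lines: before loading a model artefact, compare its cryptographic digest with the digest pinned in a trusted registry. The registry contents and model names below are hypothetical; a real registry would also sign its entries.

```python
# Minimal sketch of integrity-checking a model artefact against a
# verified registry before loading it. Registry values are illustrative.

import hashlib

# model name -> expected SHA-256 digest (hypothetical pinned values)
TRUSTED_REGISTRY = {
    "sentiment-v1.bin": hashlib.sha256(b"model-weights-v1").hexdigest(),
}

def is_model_trusted(name: str, data: bytes) -> bool:
    """Return True only if `data` matches the digest pinned for `name`."""
    expected = TRUSTED_REGISTRY.get(name)
    return expected is not None and hashlib.sha256(data).hexdigest() == expected
```

A download pipeline built on this would refuse to deserialise any artefact that fails the check; in practice, large model files would be hashed in chunks from disk rather than loaded into memory whole.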

The paper also highlights software and infrastructure vulnerabilities, noting that AI systems often rely on complex dependencies that expand the attack surface. Recommended measures include malware scanning, testing, patching, and maintaining software bills of materials. Additional risks arise from third-party services, which may introduce weaknesses through their own supply chains or shared environments.

To manage these risks, organisations are urged to improve visibility across their AI ecosystems, identify suppliers and subcontractors, and require documentation such as AI and software bills of materials. The guidance aligns with frameworks from the National Institute of Standards and Technology and MITRE, reinforcing the need for coordinated approaches to AI supply chain security.
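The bill-of-materials advice above boils down to keeping machine-readable inventories of what a system contains. As a hedged illustration, the sketch below walks a CycloneDX-style SBOM document (JSON) and lists component names and versions; the sample document is abbreviated, not a complete CycloneDX file.

```python
# Sketch: extract component names/versions from a CycloneDX-style SBOM
# so suppliers and dependencies stay visible. Sample data is abbreviated.

import json

SAMPLE_SBOM = json.dumps({
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "torch", "version": "2.3.0"},
        {"name": "tokenizer-lib", "version": "0.9.1"},
    ],
})

def list_components(sbom_json: str) -> list:
    """Return (name, version) pairs for every component in the SBOM."""
    doc = json.loads(sbom_json)
    return [(c.get("name", "?"), c.get("version", "?"))
            for c in doc.get("components", [])]
```

An organisation could diff such inventories against vulnerability feeds, which is the practical payoff of requiring AI and software bills of materials from suppliers.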


Exchange Online outage affecting Outlook access resolved by Microsoft

Microsoft has addressed an Exchange Online outage that disrupted access to email and calendar services for users worldwide. The issue affected multiple connection methods, including Outlook on the web, Outlook desktop, and Exchange ActiveSync.

The company first acknowledged the problem early in the day, saying it was investigating reports of users being unable to access their mailboxes. According to a Microsoft 365 admin centre update, several Exchange Online connection protocols were impacted during the outage.

Although Microsoft later reported that telemetry indicated the issue was no longer occurring for most users, some customers continued to experience access problems. At one point, the Office.com portal also displayed an error message, preventing users from logging in.

Microsoft linked the disruption to an issue within its supporting network infrastructure, which affected how traffic was processed. Engineers implemented configuration changes to restore normal service and continue monitoring the platform to ensure stability.

In a later update, Microsoft confirmed that the Exchange Online outage had been mitigated and that services had been restored. The company said it is still investigating the root cause and will provide further details in a post-incident report, while a separate issue affecting Microsoft 365 Copilot web access remains under review.


UK quantum ambitions get a boost as Cambridge joins forces with IonQ

The University of Cambridge has announced its largest-ever corporate research partnership: US quantum technology company IonQ will install a 256-qubit quantum computer at the Cavendish Laboratory, which will be the most powerful quantum computer in the UK upon installation.

The system will be housed in the newly created IonQ Quantum Innovation Centre at the Ray Dolby Centre, Cambridge’s new physics home.

As part of the collaboration, Innovate UK will provide UKRI's National Quantum Computing Centre with access and computing time over three years, enabling researchers and early-stage companies across the UK to use the first commercial-scale quantum computer installed at a British university.

The centre’s research portfolio will span quantum computing, networking, sensing, and security.

The partnership aligns with the UK Government’s National Quantum Strategy and its five ‘Quantum Missions’, which set milestones for investment and research to secure the UK’s position as a world leader in quantum technology.

IonQ has been rapidly expanding its capabilities through acquisitions, including Oxford Ionics for $1.08 billion in September 2025 and chipmaker SkyWater Technology for $1.8 billion in January 2026.


Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.
