The UK government has announced a new investment in London-based Isomorphic Labs through its Sovereign AI Fund, strengthening national efforts to support homegrown AI companies developing strategic technologies.
The company focuses on using frontier AI systems to redesign how medicines are discovered and developed. Isomorphic Labs builds on the scientific foundations of AlphaFold, the DeepMind system capable of predicting protein structures with high accuracy, while expanding into broader AI-driven drug design models across multiple therapeutic areas.
The investment forms part of a wider fundraising round as the company scales efforts to accelerate medicine development and reduce the time traditionally required for pharmaceutical research. British officials described the initiative as part of a broader strategy to strengthen sovereign AI capabilities, support domestic innovation, and ensure future AI breakthroughs remain anchored in the UK economy.
The Sovereign AI programme, launched in 2026, combines venture capital investment with government-backed support for promising UK AI firms. Officials say supported companies must maintain a meaningful British presence while contributing to domestic economic growth, technological leadership, and high-skilled employment.
Why does it matter?
AI is increasingly moving beyond consumer applications and into strategic sectors such as biotechnology, pharmaceuticals, and healthcare infrastructure. The UK’s backing of Isomorphic Labs reflects growing international competition to secure sovereign AI capabilities tied to scientific research, intellectual property, and future economic advantage.
European Commission officials are examining whether Meta’s policy on access to WhatsApp for AI providers may raise competition concerns in the European Economic Area.
Changes to the WhatsApp Business Solution terms are at the centre of the investigation, particularly as they affect how third-party AI providers can offer services on the platform. The Commission is assessing whether the policy could limit access for competing AI services and reduce choice for users and businesses.
Messaging platforms are becoming important distribution channels for AI-powered services. As chatbots and AI assistants become more integrated into everyday communication tools, access to widely used platforms such as WhatsApp may become an important factor in competition between providers.
Commission officials have said they will examine whether Meta’s conduct complies with the EU competition rules. Opening an investigation does not mean that the Commission has reached a conclusion or found an infringement.
The broader EU scrutiny of large digital platforms is increasingly focused on how access to infrastructure, services and user ecosystems is managed as AI tools become more widely adopted.
Why does it matter?
Competition questions are expanding into AI distribution channels. Messaging platforms can shape which AI services reach users and businesses at scale, making access rules an important part of the emerging AI market. The outcome could influence how major platforms design access policies for third-party AI providers while regulators seek to preserve competition and user choice.
As AI technologies expand, agentic AI is rapidly moving from experimentation to deployment on a larger scale than ever before. These systems are being granted far greater autonomy to perform tasks with limited human input, much to the delight of enterprise leaders.
Companies such as Microsoft, Google, Anthropic, and OpenAI are increasingly developing agentic AI systems capable of automating vulnerability detection, incident response, code analysis, and other security tasks traditionally handled by human teams.
The appeal of using agentic AI as a first line of defence is palpable, as cybersecurity teams face mounting pressure from the growing volume of attacks. According to the Microsoft Digital Defense Report 2025, the company now detects more than 600 million cyberattacks daily, ranging from ransomware and phishing campaigns to identity attacks. The International Monetary Fund has also warned that cyber incidents have more than doubled since the COVID-19 pandemic, potentially triggering institutional failures and causing enormous financial losses.
To make matters worse, threat actors such as the ransomware groups Conti and LockBit and the state-linked Salt Typhoon have shown increased activity from 2024 through early 2026, targeting critical infrastructure and global communications, as if racing to inflict as much damage as possible before defences harden.
In such circumstances, fully embracing agentic AI may seem like an ideal answer to the cybersecurity challenges looming on the horizon. Systems capable of autonomously detecting threats, analysing vulnerabilities, and accelerating response times could significantly strengthen cyber resilience.
Yet the same autonomy that makes these systems attractive to defenders could also be exploited by malicious actors. If agentic AI becomes a defining feature of cyber defence, policymakers and companies may soon face a more difficult question: how can they maximise its benefits without creating an entirely new layer of cyber risk?
Why cybersecurity is turning to agentic AI
The growing interest in agentic AI is not simply driven by the rise in cyber threats. It is also a response to the operational limitations of modern security teams, which are often overwhelmed by repetitive tasks that consume time and resources.
Security analysts routinely handle phishing alerts, identity verification requests, vulnerability assessments, patch management, and incident prioritisation — processes that can become difficult to manage at scale. Many of these tasks require speed rather than strategic decision-making, creating a natural opening for AI systems to operate with greater autonomy.
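To make the pattern concrete, here is a deliberately minimal sketch of what autonomous alert triage might look like in practice. The scoring heuristic, thresholds, and alert fields are invented for illustration and do not reflect any vendor’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    sender_domain: str
    url_count: int
    reported_by_user: bool

def risk_score(alert: Alert) -> float:
    """Toy heuristic standing in for a trained classifier."""
    score = 0.0
    if alert.sender_domain.endswith((".top", ".xyz")):  # crude domain heuristic
        score += 0.4
    score += min(alert.url_count * 0.15, 0.45)          # many links raise suspicion
    if alert.reported_by_user:
        score += 0.15                                   # user reports carry weight
    return min(score, 1.0)

def triage(alerts: list[Alert]) -> None:
    """Close low-risk alerts autonomously; escalate everything else to humans."""
    for alert in alerts:
        score = risk_score(alert)
        if score < 0.2:
            print(f"{alert.alert_id}: auto-closed (score={score:.2f})")
        elif score < 0.7:
            print(f"{alert.alert_id}: quarantined, queued for analyst review")
        else:
            print(f"{alert.alert_id}: escalated to incident response")

triage([
    Alert("A-001", "example.com", 1, False),
    Alert("A-002", "login-verify.top", 6, True),
])
```

Even in this toy form, the design choice is visible: the agent acts alone only on the lowest-risk tier, and anything ambiguous is routed back to a human queue.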
Microsoft has aggressively moved into this space. In March 2025, the company introduced Security Copilot agents designed to autonomously handle phishing triage, data security investigations, and identity management. Rather than replacing human analysts, Microsoft positioned the tools to reduce repetitive workloads and enable security teams to focus on more complex threats.
Google has approached the issue through vulnerability research. With Project Naptime, the company demonstrated how AI systems could replicate parts of the workflow traditionally handled by human security researchers by identifying vulnerabilities, testing hypotheses, and reproducing findings.
Anthropic introduced another layer of complexity through Claude Mythos, a model built for high-risk cybersecurity tasks. While the company presented the model as a controlled release for defensive purposes, the announcement also highlighted how advanced cyber capabilities are becoming increasingly embedded in frontier AI systems.
Meanwhile, OpenAI has expanded partnerships with cybersecurity organisations and broadened access to specialised tools for defenders, signalling that major AI firms increasingly view cybersecurity as one of the most commercially viable applications for autonomous systems.
Together, these developments show that agentic AI is gradually becoming embedded in the cybersecurity infrastructure. For many companies, the question is no longer whether autonomous systems can support cyber defence, but how much responsibility they should be given.
When agentic AI tools become offensive weapons
The same capabilities that make agentic AI valuable to defenders also make it attractive to malicious actors. Systems designed to identify vulnerabilities, analyse code, automate workflows, and accelerate decision-making can be repurposed for offensive cyber operations.
Anthropic offered one of the clearest examples of that risk when it disclosed that malicious actors had used Claude in cyber campaigns. The company said attackers were not simply using the model for basic assistance, but were integrating it into broader operational workflows. The incident showed how agentic AI can move cyber misuse beyond advice and into execution.
The risk extends beyond large-scale cyber operations. Agentic AI systems could make phishing campaigns more scalable, automate reconnaissance, accelerate vulnerability discovery, and reduce the technical expertise needed to launch certain attacks. Tasks that once required specialist teams could become easier to coordinate through autonomous systems.
Security researchers have repeatedly warned that generative AI is already making social engineering more convincing through realistic phishing emails, cloned voices, and synthetic identities. More autonomous systems could further push those risks by combining content generation with independent action.
The concern is not that agentic AI will replace human hackers, but that cybercrime could become faster, cheaper, and more scalable, mirroring the same efficiencies that organisations hope to achieve through AI-powered defence.
The agentic AI governance gap
The governance challenge surrounding agentic AI is no longer theoretical. As autonomous systems gain access to internal networks, cloud infrastructure, code repositories, and sensitive datasets, companies and regulators are being forced to confront risks that existing cybersecurity frameworks were not designed to manage.
Policymakers are starting to respond. In February 2026, the US National Institute of Standards and Technology (NIST) launched its AI Agent Standards Initiative, focused on identity verification and authentication frameworks for AI agents operating across digital environments. The aim is simple but important: organisations need to know which agents can be trusted, what they are allowed to do, and how their actions can be traced.
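NIST has not yet published final specifications, but the underlying idea, a verifiable agent identity bound to a scoped set of permissions, can be sketched in a few lines. The token format, signing scheme, and action names below are assumptions made purely for illustration.

```python
import hashlib
import hmac
import json

SECRET = b"org-signing-key"  # hypothetical organisation-wide signing key

def issue_token(agent_id: str, allowed_actions: list[str]) -> str:
    """Bind an agent identity to its permitted actions with an HMAC signature."""
    payload = json.dumps({"agent": agent_id, "actions": allowed_actions}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify(token: str, action: str) -> bool:
    """Reject forged tokens, then check the requested action is in scope."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    return action in json.loads(payload)["actions"]

token = issue_token("triage-agent-01", ["read_alerts", "close_alert"])
print(verify(token, "close_alert"))       # True: within the agent's scope
print(verify(token, "disable_account"))   # False: out of scope, refused traceably
```

The point of such a scheme is exactly what the NIST initiative describes: every action can be attributed to a named agent, and anything outside an agent’s declared scope fails verifiably.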
Governments are also becoming more cautious about deployment risks. In May 2026, the Cybersecurity and Infrastructure Security Agency (CISA) joined cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom in issuing guidance on the secure adoption of agentic AI services. The warning was clear: autonomous systems become more dangerous when they are connected to sensitive infrastructure, external tools, and internal permissions.
The private sector is adjusting as well. Companies are increasingly discussing safeguards such as restricted permissions, audit logs, human approval checkpoints, and sandboxed environments to limit the degree of autonomy granted to AI agents.
The questions facing businesses are becoming practical. Should an AI agent be allowed to patch vulnerabilities without approval? Can it disable accounts, quarantine systems, or modify infrastructure independently? Who is held accountable when an autonomous system makes the wrong decision?
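One way companies are beginning to answer those questions is to encode the rules as an explicit policy gate: low-impact actions run automatically, destructive ones require a named human approver, and every decision lands in an audit log. A minimal sketch follows; the severity tiers and action names are hypothetical, not drawn from any published framework.

```python
import datetime

# Hypothetical severity tiers; real deployments would derive these from policy.
AUTO_ALLOWED = {"collect_logs", "scan_host"}
NEEDS_APPROVAL = {"patch_vulnerability", "quarantine_host", "disable_account"}

audit_log: list[dict] = []

def request_approval(action: str, target: str) -> bool:
    """Stand-in for a real ticketing or paging integration."""
    answer = input(f"Approve '{action}' on {target}? [y/N] ")
    return answer.strip().lower() == "y"

def dispatch(agent_id: str, action: str, target: str) -> str:
    """Gate an agent's requested action and record the outcome."""
    if action in AUTO_ALLOWED:
        outcome = "executed"
    elif action in NEEDS_APPROVAL:
        outcome = "executed" if request_approval(action, target) else "blocked"
    else:
        outcome = "denied"  # default-deny anything not explicitly listed
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "target": target,
        "outcome": outcome,
    })
    return outcome

print(dispatch("patch-agent-02", "collect_logs", "web-01"))        # runs unattended
print(dispatch("patch-agent-02", "patch_vulnerability", "web-01")) # pauses for a human
```

The default-deny branch is the accountability answer in miniature: an agent cannot take an action nobody thought to classify, and the audit log records who, or what, decided each step.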
Agentic AI may become one of cybersecurity’s most effective defensive tools. Its success, however, will depend on whether governance frameworks evolve quickly enough to keep pace with the technology itself.
How companies are building guardrails around agentic AI
As concerns around autonomous cyber systems grow, companies are increasingly experimenting with safeguards designed to prevent agentic AI from becoming an uncontrolled risk. Rather than granting unrestricted access, many organisations are limiting what AI agents can see, what systems they can interact with, and what actions they can execute without human approval.
Anthropic has restricted access to Claude Mythos over concerns about offensive misuse, while OpenAI has recently expanded its Trusted Access for Cyber programme to provide vetted defenders with broader access to advanced cyber tools. Both approaches reflect a growing consensus that powerful cyber capabilities may require tiered access rather than unrestricted deployment.
The broader industry is moving in a similar direction. CrowdStrike has increasingly integrated AI-driven automation into threat intelligence and incident response workflows while maintaining human oversight for critical decisions. Palo Alto Networks has also expanded its AI-powered security automation tools designed to reduce response times without fully removing human analysts from the decision-making process.
Cloud providers are also becoming more cautious about autonomous access. Amazon Web Services, Google Cloud, and Microsoft Azure have increasingly emphasised zero-trust security models, role-based permissions, and segmented access controls as enterprises deploy more automated tools across sensitive infrastructure.
Meanwhile, sectors such as finance, healthcare, and critical infrastructure remain particularly cautious about fully autonomous deployment due to the potential consequences of false positives, accidental shutdowns, or disruptions to essential services.
As a result, security teams are increasingly discussing safeguards such as audit logs, sandboxed environments, role-based permissions, staged deployments, and human approval checkpoints to balance speed with accountability. For now, many companies seem ready to embrace agentic AI, but not without keeping one hand on the emergency brake.
The future of cybersecurity may be agentic
Agentic AI is unlikely to remain a niche experiment for long. The scale of modern cyber threats, combined with the mounting pressure on security teams, means organisations will continue to look for faster and more scalable defensive tools.
That shift could significantly improve cybersecurity resilience. Autonomous systems may help organisations detect threats earlier, reduce response times, address workforce shortages, and manage the growing volume of attacks that human teams increasingly struggle to handle alone.
At the same time, the technology’s long-term success will depend as much on restraint as on innovation. Without clear governance frameworks, operational safeguards, and human oversight, the same tools designed to strengthen cyber defence could introduce entirely new vulnerabilities.
The future of cybersecurity may increasingly belong to agentic AI. Whether that future becomes safer or more volatile may depend on how responsibly governments, companies, and security teams manage the transition.
The US Economic Development Administration has announced approximately $25 million in funding for a new AI Upskill Accelerator Pilot Program to support AI workforce training.
The programme will fund industry-driven partnerships that design and implement AI training models for workers and businesses in sectors considered important to regional economies. EDA says the initiative is intended to support workforce development approaches that can scale, adapt and become self-sustaining as AI technologies continue to evolve.
The funding opportunity links the programme to the Trump administration’s 2025 Artificial Intelligence Action Plan, which includes goals to accelerate AI development, support adoption across industries and strengthen US leadership in the technology. EDA says the programme is part of efforts to empower American workers to use AI tools and support industries tied to regional growth.
Deputy Assistant Secretary and Chief Operating Officer Ben Page said AI is becoming ‘a core driver of productivity and growth across industries’ and that workers need AI skills so regions can attract investment, adopt advanced technologies and sustain long-term economic growth.
The pilot will support workforce development in an emerging technology area while helping businesses and workers build the skills needed to use AI in the workplace. Applications for the programme are open until 10 July 2026.
Why does it matter?
The programme shows how AI policy is increasingly being linked to regional economic development and workforce readiness, not only research or infrastructure. By funding industry-driven training models, the EDA is trying to prepare workers and local economies for AI adoption while helping businesses close skills gaps that could affect productivity, investment and competitiveness.
Bhutan’s Gelephu Mindfulness City has launched an accelerated pathway for crypto and fintech firms already regulated in major financial hubs such as Singapore, Hong Kong and Abu Dhabi.
The system is intended to reduce duplication in compliance checks while allowing eligible companies to incorporate in Gelephu, seek local regulatory approval, and open corporate bank accounts through a coordinated process involving DK Bank, the city’s official banking partner. Standard Know Your Customer and Anti-Money Laundering checks will still apply.
Officials said foreign licences will not replace local supervision, but will instead help streamline due diligence. The framework also differs from passporting models used in regions such as the European Union, as each firm must still meet Gelephu’s own regulatory requirements.
Gelephu Mindfulness City also rejected speculation linking recent Bitcoin transfers flagged by analytics platforms to reserve sales. Officials said Bitcoin held under the country’s ‘Bitcoin Development Pledge’ remains part of strategic reserves allocated for the long-term development of the city.
Why does it matter?
The move shows how smaller jurisdictions are competing for digital asset and fintech firms by offering faster market entry while trying to preserve regulatory credibility. By recognising existing licences without replacing local supervision, Gelephu is positioning itself as a controlled gateway for firms seeking access to a new crypto and fintech jurisdiction.
The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.
According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.
The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.
Texas’s complaint quoted a statement from Netflix co-founder and Chairman Reed Hastings, who allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.
The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the United States.
Gregor Robertson, Minister of Housing and Infrastructure and Minister responsible for Pacific Economic Development Canada (PacifiCan), announced more than C$17.3 million in funding for eight British Columbia technology companies to accelerate the commercialisation and adoption of AI and quantum technologies.
Through PacifiCan, the federal government is supporting projects focused on robotics, semiconductor manufacturing, AI infrastructure, and quantum supply chains as part of a broader strategy to strengthen domestic innovation and sovereign technology capabilities.
A major share of the investment will support Human in Motion Robotics, which received C$3 million to commercialise its AI-powered XoMotion wearable robotic exoskeleton. The company plans to integrate AI into mobility systems, expand manufacturing, and move the technology beyond clinical environments into homes and community settings for people with spinal cord injuries and neurological conditions.
Another funded company, Dream Photonics, will receive more than C$1.1 million to establish pilot manufacturing for optical interconnect technologies used in AI and quantum chips. The project aims to strengthen Canada’s domestic semiconductor and quantum ecosystem while creating skilled technology jobs in British Columbia.
The announcement also highlighted the rapid expansion of British Columbia’s AI ecosystem, which now includes nearly 600 AI companies. Canadian officials linked the investments to broader efforts to secure domestic compute infrastructure, strengthen AI supply chains, and position Canada competitively in emerging technologies ahead of events such as Web Summit Vancouver.
The Canadian government and TELUS are advancing plans to develop large-scale sovereign AI infrastructure as part of Ottawa’s broader strategy to strengthen domestic compute capacity and support the country’s AI ecosystem.
The initiative was announced by Evan Solomon (Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario) and focuses on a proposed AI data centre project in British Columbia designed to support researchers, businesses, and academic institutions.
The project forms part of Canada’s ‘Enabling large-scale sovereign AI data centres’ initiative, introduced under Budget 2025. Ottawa stated that sovereign compute infrastructure is increasingly important for maintaining national competitiveness in AI while ensuring Canadian data, intellectual property, and economic value remain within the country.
The government also confirmed that no formal funding commitments have yet been made, with discussions currently progressing through non-binding memoranda of understanding with selected industry participants.
Local officials argued that large-scale compute infrastructure has become a strategic economic requirement as governments worldwide race to expand AI processing capabilities. Canada believes it holds competitive advantages due to its colder climate, sustainable energy resources, and network infrastructure, all of which could help attract future AI investment and hyperscale data centre development.
Why does it matter?
The race for sovereign AI infrastructure is rapidly becoming one of the most important geopolitical and economic competitions of the digital era. The Canada-TELUS partnership illustrates how countries are moving beyond AI model development alone and shifting focus towards the physical infrastructure required to sustain future AI ecosystems, including data centres, energy capacity, semiconductors, and domestic compute networks.
The US Senate Banking Committee has released a revised 309-page draft of the Digital Asset Market Clarity Act ahead of a markup vote, reopening debate on stablecoin rewards, DeFi protections and the regulation of digital asset markets.
The draft, proposed by Committee Chair Tim Scott, seeks to provide a federal framework for digital asset market structure, including provisions on securities innovation, illicit finance, decentralised finance, banking innovation, regulatory sandboxes, software developers and customer protection.
A key section addresses stablecoin rewards. The draft would prohibit digital asset service providers from paying interest or yield on payment stablecoin balances in a way that is economically or functionally equivalent to bank deposit interest. However, it would permit certain activity-based or transaction-based rewards and incentives, provided they are not equivalent to interest or yield on a bank deposit.
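The economic distinction the draft draws can be illustrated with simple arithmetic: a reward that scales with the balance held over time behaves like deposit interest, while a flat per-transaction rebate does not. The rates and figures below are invented for illustration and appear nowhere in the bill.

```python
# Illustrative figures only; not taken from the draft legislation.
balance = 10_000.00   # stablecoin balance in dollars
days_held = 365

# Interest-like reward: grows with balance and holding period, which is
# what the draft would treat as equivalent to bank deposit interest.
interest_like = balance * 0.04 * (days_held / 365)   # 4% annualised

# Activity-based reward: a fixed rebate per payment, independent of balance.
payments = 20
activity_based = payments * 0.25                     # $0.25 per transaction

print(f"interest-like reward:  ${interest_like:.2f}")   # scales with balance and time
print(f"activity-based reward: ${activity_based:.2f}")  # scales with usage only
```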
The text also includes provisions affecting decentralised finance. It covers rules on non-decentralised finance trading protocols, illicit finance obligations for distributed ledger messaging systems, temporary holds for certain digital asset transactions, voluntary cybersecurity programmes for DeFi trading protocols and studies on digital asset mixers, foreign intermediaries and financial stability risks.
Software developer protections are also included in the draft. The bill contains a dedicated title on protecting software developers and software innovation, including provisions on non-fungible tokens, self-custody and blockchain regulatory certainty.
The draft still faces further negotiation before any final vote. Lawmakers continue to debate the balance between consumer protection, illicit finance controls, innovation, stablecoin incentives and the treatment of decentralised finance. At the same time, the legislation needs to be aligned with other Senate work on digital asset market structure.
Why does it matter?
The revised Clarity Act is another step towards a federal framework for digital asset markets in the United States, with rules that could shape how crypto firms, stablecoin platforms and decentralised finance projects operate. Its provisions on stablecoin rewards, DeFi and software developers show lawmakers trying to balance innovation, consumer protection and oversight in one of the world’s most important financial markets.
Dubai residents will be able to pay government fees using virtual assets after Crypto.com’s UAE entity, Foris DAX Middle East FZE, received a Stored Value Facilities licence from the Central Bank of the UAE.
Crypto.com said the approval makes it the first Virtual Asset Service Provider in the UAE to receive the licence. It allows the company to activate its partnership with the Dubai Department of Finance, enabling virtual asset payments for government services.
Financial settlements will be conducted in UAE dirhams or Central Bank-approved dirham-backed stablecoins through the regulated Stored Value Facilities framework. Crypto.com said the arrangement supports the Dubai Cashless Strategy.
Users wishing to access the service will need to be onboarded through Crypto.com’s VARA-licensed platform. The company also said that, subject to further Central Bank approvals, the licence could support crypto payment integrations with Emirates and Dubai Duty Free.
Crypto.com executives described the approval as a step towards regulated digital asset adoption in the UAE, while linking it to the country’s wider push for compliant crypto infrastructure and digital payments innovation.
Why does it matter?
The development shows how Dubai is moving virtual asset payments closer to public-sector infrastructure, rather than treating them only as investment products or private-sector payment experiments. By routing payments through a regulated Stored Value Facilities framework and settling them in dirhams or approved dirham-backed stablecoins, the model links crypto access with conventional payment oversight, financial regulation and the emirate’s cashless economy strategy.