UK backs Isomorphic Labs to strengthen sovereign AI and drug discovery

The UK government has announced a new investment in London-based Isomorphic Labs through its Sovereign AI Fund, strengthening national efforts to support homegrown AI companies developing strategic technologies.

The company focuses on using frontier AI systems to redesign how medicines are discovered and developed. Isomorphic Labs builds on the scientific foundations of AlphaFold, the DeepMind system capable of predicting protein structures with high accuracy, while expanding into broader AI-driven drug design models across multiple therapeutic areas.

The investment forms part of a wider fundraising round as the company scales efforts to accelerate medicine development and reduce the time traditionally required for pharmaceutical research. British officials described the initiative as part of a broader strategy to strengthen sovereign AI capabilities, support domestic innovation, and ensure future AI breakthroughs remain anchored in the UK economy.

The Sovereign AI programme, launched in 2026, combines venture capital investment with government-backed support for promising UK AI firms. Officials say supported companies must maintain a meaningful British presence while contributing to domestic economic growth, technological leadership, and high-skilled employment.

Why does it matter?

AI is increasingly moving beyond consumer applications and into strategic sectors such as biotechnology, pharmaceuticals, and healthcare infrastructure. The UK’s backing of Isomorphic Labs reflects growing international competition to secure sovereign AI capabilities tied to scientific research, intellectual property, and future economic advantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK backs stronger cooperation on AI and frontier technologies at OSCE

The UK has highlighted both the opportunities and risks linked to frontier technologies during a high-level conference organised by the Organization for Security and Co-operation in Europe in Geneva.

Speaking at the event, UK Tech Envoy Sarah Spencer said AI could support early warning and early action in humanitarian crises, but could also amplify misinformation and instability if misused or deployed without adequate safeguards.

Spencer said responsible governance of frontier technologies requires partnerships between states, institutions, industry and civil society, arguing that such cooperation matters more than individual products in building inclusive, responsible and sustainable digital ecosystems.

She also highlighted the OSCE’s role in fostering dialogue on frontier technologies, reducing misunderstandings and supporting anticipatory approaches to governance. The UK said it was ready to support efforts to ensure technological progress contributes to a safer, more secure and more humane future.

The conference, titled ‘Anticipating technologies – for a safe and humane future’, brought together participants to discuss how emerging technologies are affecting security, stability and international cooperation.

Why does it matter?

The statement places AI and other frontier technologies within a security and diplomacy context, rather than treating them only as innovation issues. It highlights growing concern that emerging technologies can support humanitarian and development goals, but also create risks for misinformation, conflict escalation and strategic stability if governance and cooperation lag behind deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector revenue reaches £14.7 billion in UK

The UK cybersecurity sector generated £14.7 billion in annual revenue and £9.1 billion in gross value added, according to the government’s Cyber Security Sectoral Analysis 2026.

The report, commissioned by the Department for Science, Innovation and Technology and produced by Ipsos and Perspective Economics, identifies 2,603 firms active in the UK cybersecurity market. That marks a 20% increase from the previous report, which identified 2,165 firms.

Employment in the sector reached about 69,600 full-time equivalent roles, an increase of around 2,300 jobs, or 3%, over the past year. The report says this is the lowest recorded employment growth rate since the series began in 2018, suggesting a softening in workforce growth.

Revenue rose by around 11% from last year’s estimate of £13.2 billion, while gross value added increased by 17%. The report also estimates GVA per employee at £131,200, up from £116,200, suggesting higher productivity within the cybersecurity ecosystem.
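The growth figures quoted above can be sanity-checked directly from the report's published totals. A quick calculation (all values taken from the figures as quoted in this article) confirms the revenue and firm-count growth rates, and shows the per-employee GVA coming out close to, though not exactly at, the reported £131,200, which likely reflects rounding in the published totals:

```python
# Sanity-check the growth figures quoted from the Cyber Security
# Sectoral Analysis 2026 (monetary totals in GBP billions, except
# GVA per employee, in GBP).

revenue_prev, revenue_now = 13.2, 14.7
revenue_growth = (revenue_now / revenue_prev - 1) * 100
print(f"Revenue growth: {revenue_growth:.0f}%")        # ~11%

gva_total = 9.1e9            # £9.1 billion gross value added
employees = 69_600           # full-time equivalent roles
gva_per_employee = gva_total / employees
print(f"GVA per employee: £{gva_per_employee:,.0f}")   # ~£130,700

firms_prev, firms_now = 2_165, 2_603
firm_growth = (firms_now / firms_prev - 1) * 100
print(f"Firm growth: {firm_growth:.0f}%")              # ~20%
```

The small gap between the computed £130,747 and the reported £131,200 is consistent with the report rounding its headline totals to one decimal place.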

The analysis also points to growth in AI security and software security. It estimates that 111 UK-registered firms now explicitly offer cybersecurity for AI systems as a product or service, up 68% from the previous baseline. Of those, 32 are specialist providers focused mainly or exclusively on AI security, while 79 offer AI security as part of a broader portfolio.

Software security is also expanding across the market. The report estimates that 1,141 firms provide software security services, an increase of 181 firms, or 19%, from the previous baseline. Nearly half of all UK cybersecurity providers appear to be involved in software security provision, with application security, cloud and container security, secure development, supply chain security, and DevSecOps highlighted as key areas.

Investment remains more subdued. Dedicated cybersecurity firms raised £184 million across 47 deals in 2025, down 11% from £206 million across 59 deals in 2024. The report says investors highlighted AI security and post-quantum cryptography as key themes, while also noting procurement barriers and limited UK growth-stage capital as ongoing concerns.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the UK’s Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s ICO issues guidance on AI-generated FOI requests

The UK Information Commissioner’s Office (ICO) has published new guidance to help public authorities handle Freedom of Information (FOI) requests generated using AI, as public authorities report growing pressure from higher volumes and more complex requests.

According to the ICO, some AI-generated requests misquote or misinterpret FOI legislation, while others require significant clarification before they can be processed. The regulator says the guidance is intended to give FOI teams practical support so they can continue meeting their legal duties without adding new burdens.

The guidance addresses issues that practitioners say are increasingly common, including requests generated with AI that misstate the law, a rising number of submissions that need refinement, and the need to ensure requests are handled fairly and consistently regardless of how they were created.

It also includes example wording that public authorities can use to encourage more responsible use of AI by requesters and to support clearer and more effective FOI submissions. The ICO says the aim is to reduce delays, errors, and complaints linked to poorly framed or confusing requests.

Deborah Clark, the ICO’s Upstream Regulation Manager, clarified: ‘This guidance is about giving teams practical, sensible support, not adding new burdens. It does not change the law or create new requirements; instead, it helps teams apply existing FOI principles consistently, regardless of how a request is created. Used responsibly, AI also has the potential to help public authorities improve how they handle FOI requests, and this guidance sits alongside our wider work to support innovation that delivers real benefits for organisations and the public.’

The ICO says the guidance applies to all public authorities covered by the Freedom of Information Act and draws on existing casework, stakeholder engagement, practitioner feedback, and input from its AI specialists.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online Safety Act brings progress, but UK children still face harm online

A new report from Internet Matters suggests the UK’s Online Safety Act has introduced more visible safety measures for children, but has not yet delivered the step change needed to make their online lives meaningfully safer. Drawing on surveys and focus groups with children and parents, the report presents an early view of how the law is affecting families in practice.

The findings point to some clear signs of progress. Parents and children report seeing more safety features, including improved reporting tools, content filters, restrictions on certain functions, and stronger parental controls. Many children also say the content they encounter online is becoming more age-appropriate.

At the same time, the report argues that important weaknesses remain. Children continue to encounter harmful content at high rates, while age verification is widely seen as easy to bypass. Internet Matters also says that some of the issues families care most about, including excessive screen time and the risks linked to AI-generated content, are still not adequately addressed under the current framework.

The report concludes that parents are still carrying too much of the burden of keeping children safe online. It calls for stronger enforcement, more effective age assurance, tighter limits on harmful features, and a broader safety-by-design approach to digital services used by children in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK AI sector survey to map growth trends and policy direction

The UK government is stepping up efforts to better understand the structure and growth of its AI sector through an updated national survey led by the Department for Science, Innovation and Technology.

The research, conducted by Ipsos and supported by Perspective Economics, aims to gather direct insights from businesses operating in the UK AI ecosystem. The findings are expected to inform future government policy on AI and sector development.

Participation is voluntary and confidential. Respondents are drawn from senior leadership roles, including chief executives, chief technology officers, company directors, and senior members of AI or data science teams. The survey focuses on business activity, products and services, and longer-term growth plans across the sector.

Fieldwork is taking place between late April and the end of May 2026 using online questionnaires and telephone interviews. Each session is expected to last around 15 to 20 minutes, allowing businesses to contribute structured input without significant disruption to normal operations.

The initiative reflects a wider UK policy priority: ensuring that government strategy keeps pace with developments in AI innovation and commercial growth. By drawing on direct industry evidence rather than relying only on secondary analysis, policymakers are trying to build a more accurate picture of the country’s evolving AI landscape.

Why does it matter?

AI policy is much easier to design in theory than in a market that is changing quickly and unevenly. If the government lacks current information on how AI firms are growing, what products they are developing, and where the main constraints lie, it risks shaping policy based on outdated assumptions. Direct input from businesses gives policymakers a stronger basis for decisions on support, regulation, skills, and investment, especially at a time when the UK is trying to turn AI ambition into measurable economic capacity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s National Cyber Security Centre launches device to protect display connections from cyber threats

The National Cyber Security Centre (NCSC) has developed SilentGlass, a device designed to protect display connections from malicious or unexpected activity. It is the first commercially available product licensed to use NCSC branding and was launched at CYBERUK.

SilentGlass blocks unauthorised activity passing over HDMI and DisplayPort connections between devices and screens. The NCSC stated that threat actors can target monitors because they may process sensitive or personal data.

The intellectual property has been licensed to Goldilock Labs, which is manufacturing the device in partnership with Sony UK Technology Centre. The product has already been deployed in government environments and approved for use in high-threat settings.

The NCSC noted that increasing numbers of connected devices raise exposure to risks linked to physical interfaces. SilentGlass has been developed to address this risk by preventing malicious connections at the hardware level.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK embraces 6 frontier technologies to drive digital growth

The UK government has identified six frontier technologies as central to strengthening digital capability, economic growth, and long-term competitiveness.

Outlined in the 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, the approach prioritises AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors as pillars of national resilience and technological sovereignty.

Advanced connectivity and AI remain core drivers of digital transformation. Investment in next-generation telecoms, including 5G and future 6G development, is supported through public funding and infrastructure initiatives, while AI continues to expand rapidly through commitments to compute capacity, national supercomputing infrastructure, and workforce development. The strategy positions the UK as aiming to strengthen its role as a leading European AI hub.

Cybersecurity, engineering biology, and quantum technologies reflect a broader strategy linking innovation with security, resilience, and sustainability. Government-backed programmes are intended to support commercialisation, strengthen secure-by-design systems, and accelerate growth in emerging areas such as bio-based manufacturing. Quantum technologies are also being positioned for longer-term use across sectors, including healthcare, defence, and finance.

Semiconductors complete the strategy as a foundational technology underpinning modern digital systems. Rather than focusing on large-scale manufacturing, the UK is prioritising areas such as design, photonics, compound semiconductors, and specialised materials, backed by targeted funding and institutional support.

Across all six areas, the strategy reflects a wider effort to align innovation policy with economic security, global competitiveness, and more resilient supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK National Cyber Security Centre recommends passkeys over passwords

The National Cyber Security Centre (NCSC) recommends the use of passkeys as a more secure alternative to passwords for accessing online services. The guidance supports wider adoption of passwordless authentication across digital platforms.

Passkeys are created and managed on user devices and do not need to be remembered. The NCSC noted that they are resistant to phishing, as they cannot be intercepted, reused or stolen in the same way as passwords.

The NCSC also stated that passkeys can be faster and more convenient to use. Authentication relies on existing device security methods, such as fingerprint, facial recognition or PIN, rather than separate login credentials.

Passkeys are stored and managed through credential managers, which can synchronise access across trusted devices and provide backups. The NCSC advised that where passkeys are not available, users should continue using strong passwords and enable two-step verification.
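The phishing resistance described above comes from challenge-response authentication bound to a specific website, rather than a reusable shared secret that can be typed into the wrong page. The toy sketch below illustrates that flow. It is a simplified model: real passkeys (WebAuthn/FIDO2) use per-site asymmetric key pairs, so the server holds only a public key; here an HMAC over a device-held secret stands in for the signature so the example runs with the Python standard library alone, and all names are illustrative:

```python
import hashlib
import hmac
import secrets

# Toy model of passkey-style challenge-response login.
# Simplification: real passkeys sign with a private key and the server
# verifies with the public key; HMAC stands in for that signature here.

class Device:
    def __init__(self):
        self._keys = {}  # one credential per origin; never leaves the device

    def register(self, origin: str) -> bytes:
        key = secrets.token_bytes(32)
        self._keys[origin] = key
        return key  # in WebAuthn, only the *public* key would be shared

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # Credentials are looked up by origin, so a phishing site with a
        # different origin cannot obtain a valid response (raises KeyError).
        key = self._keys[origin]
        return hmac.new(key, origin.encode() + challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, origin: str):
        self.origin = origin
        self.registered = {}

    def enroll(self, user: str, key: bytes):
        self.registered[user] = key

    def login(self, user: str, device: Device) -> bool:
        challenge = secrets.token_bytes(16)  # fresh per attempt: no replay
        response = device.sign(self.origin, challenge)
        expected = hmac.new(self.registered[user],
                            self.origin.encode() + challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

device = Device()
server = Server("https://example.gov.uk")
server.enroll("alice", device.register(server.origin))
print(server.login("alice", device))  # True: origin and challenge both match
```

Because each response covers a one-time challenge and the site's origin, an intercepted response is useless elsewhere, which is the property the NCSC guidance highlights.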

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!