Kazakhstan prioritises cyber resilience

The Government of the Republic of Kazakhstan has reviewed cybersecurity measures for state bodies during an interagency meeting chaired by the Deputy Prime Minister and Minister of AI and Digital Development.

According to the Government, reports highlighted progress in cybersecurity policies alongside ongoing vulnerabilities. Audits of local executive bodies identified systemic weaknesses requiring stronger safeguards.

The meeting also introduced new measures, including mandatory biometric identification for operators managing large databases. Officials stressed the importance of integrating systems into a unified monitoring framework.

The Government stated that cybersecurity is essential for digital transformation and instructed agencies to improve oversight, public awareness and data protection efforts in Kazakhstan.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada’s cyber resilience plan targets AI-driven threats to critical infrastructure

The Canadian Centre for Cyber Security has launched a new initiative to strengthen national resilience against escalating cyber threats targeting critical infrastructure.

The programme, titled CIREN (Critical Infrastructure Resilience and Escalated Threat Navigation), aims to prepare organisations for severe disruptions by improving readiness, response capacity, and long-term recovery planning.

The initiative reflects growing concern within Communications Security Establishment Canada over increasingly sophisticated cyber risks, including those amplified by AI.

Authorities highlight that both state-sponsored and criminal actors are exploiting automation and AI to accelerate attacks, raising the stakes for sectors such as energy, telecommunications, transport, and water systems.

CIREN outlines a structured approach centred on operational continuity during extreme scenarios.

Organisations are encouraged to prepare for prolonged isolation of critical systems, develop independent operating capabilities, and establish recovery frameworks capable of rebuilding infrastructure after major incidents. The focus remains on maintaining essential services under worst-case conditions.

The programme forms part of a broader national strategy in Canada to enhance cyber readiness through collaboration, threat intelligence, and practical guidance.

Officials stress that proactive planning and simplified defensive measures can significantly reduce real-world impact, particularly as cyber incidents grow in frequency, scale, and complexity.

Europol shuts down illegal booter services across 21 countries

A major international crackdown led by Europol has targeted more than 75,000 users involved in distributed denial-of-service (DDoS)-for-hire activity. The coordinated Operation PowerOFF brought together 21 countries in a global effort to dismantle cyberattack infrastructure.

Authorities issued tens of thousands of warning messages, carried out arrests, executed search warrants, and seized dozens of domains linked to illegal booter platforms.

The operation also disrupted technical systems used to facilitate attacks, including servers and databases that enabled users to target online services and websites.

Analysis of seized data provided access to millions of user accounts, strengthening ongoing investigations across participating jurisdictions. Europol supported the operation through intelligence analysis, forensic work, and coordination between national agencies, helping identify and track those involved.

Alongside enforcement, the initiative has shifted towards prevention, including awareness campaigns, search engine interventions, and blockchain-based warnings.

Officials stress that DDoS-for-hire services remain widely accessible but are illegal, with users ranging from inexperienced actors to more organised cybercriminals driven by financial or ideological motives.

By targeting both infrastructure and users, authorities reduce the accessibility of tools that enable low-skill attackers to cause significant disruption to online services. Such actions strengthen cyber resilience and reflect a shift towards more proactive, internationally coordinated responses to digital threats.

UK Defence Innovation opens Biosecurity Frontiers competition with up to £2 million

UK Defence Innovation has opened the Biosecurity Frontiers themed competition, run by the Cabinet Office on behalf of the UK government, and is seeking innovative proposals to help deliver the ambitions of the 2023 UK Biological Security Strategy and the 2025 National Security Strategy.

The competition document states that proposals may be used by multiple government departments, sectors, and frontline users, including the police, the military, and NHS/public health bodies.

Up to £2 million excluding VAT is available, with the government expecting to fund five to seven proposals across three challenge areas: biodetection and biosurveillance; AI and diagnostics, therapeutics, and vaccines; and non-pharmaceutical protective systems.

Individual awards are expected to be in the region of £100,000 to £500,000, though the document states proposals at higher or lower values may also be funded.

The submission deadline is 12:00 midday BST on 10 June 2026. Projects are expected to start in September 2026 and run for no longer than 12 months. Proposals must progress through at least one Technology Readiness Level (TRL). For Challenges 1 and 3, projects must reach TRL 4-6, while Challenge 2 projects may reach TRL 7.

For biodetection and biosurveillance, the competition seeks capabilities to detect and monitor traditional and novel biological threats, including portable surveillance technologies, computational tools for analysing complex datasets, and permanently installed air surveillance systems in high-footfall locations.

For AI and diagnostics, therapeutics, and vaccines, the document refers to AI-based support for identifying and developing new diagnostic, therapeutic, and vaccine candidates, including structure-based discovery and development tools.

For non-pharmaceutical protective systems, the competition covers lower-cost personal protective equipment, respiratory protective equipment with improved fit, decontamination and disinfection approaches, biodegradable PPE materials, and solutions that remove humans from operations in contaminated areas.

The competition document says it is funded by the Integrated Security Fund, which supports priority national security themes in the UK 2025 National Security Strategy.

Illegal cryptocurrency circulation to carry prison sentences in Russia

Russia’s government commission on legislative activity has approved new measures introducing criminal liability for large-scale cryptocurrency operations conducted without the central bank’s authorisation.

The proposal establishes penalties for the illegal organisation of digital currency circulation where significant damage or substantial financial gain is involved.

Under the approved amendments, individuals found to be organising crypto transactions in violation of Russian law could face prison sentences of four to seven years. The rules apply to cases involving harm to individuals, organisations, or the state, or large-scale illicit income.

The draft introduces a new Article 171.7 into the Russian Criminal Code, formally defining ‘illegal organisation of digital currency circulation’ as a punishable offence. The measures are expected to come into force on 1 July 2027, marking a significant tightening of enforcement in the country’s digital asset sector.

By introducing custodial penalties, Russia is raising the legal and financial risks for unlicensed digital asset activity, which could deter informal market participation and push activity towards regulated channels.

In the broader context, it reflects a global trend in which governments are moving to formalise oversight of crypto markets in response to concerns about financial crime, capital flows, and systemic risk.

Australia’s OAIC updates the Children’s Online Privacy Code page during public consultation

The Office of the Australian Information Commissioner (OAIC) updated its Children’s Online Privacy Code page, as the regulator continues consultation on a draft code that will set privacy rules for online services likely to be accessed by children.

The page says the Code is being developed under the Privacy and Other Legislation Amendment Act 2024 and will operate as an APP Code under the Privacy Act 1988.

According to the OAIC, the Code will apply to online services that fall within the categories of social media services, relevant electronic services, and designated internet services under the Online Safety Act 2021, where those services are likely to be accessed by children or primarily concern children’s activities. The regulator says the Code is intended to put children at the centre of privacy protections in Australia while also lifting privacy practices more broadly.

The updated page highlights the current public consultation on the exposure draft of the Children’s Online Privacy Code. It also refers users to separate consultation pathways for children, young people, parents and carers, and for industry, civil society, academia, and other interested parties.

The OAIC also says it has created a dedicated Privacy for Kids hub to support participation in the consultation. According to the page, the hub includes workbooks and child-friendly guides to help explain the draft Code to children, young people, and parents and carers.

In addition, the updated page invites stakeholders to register for an OAIC webinar on the Children’s Online Privacy Code public consultation. The OAIC says the Code must be finalised and registered by 10 December 2026.

European firms launch Disaster Recovery Pack for tech independence

A group of European technology companies, Cubbit, SUSE, Elemento, and StorPool Storage, has launched a joint ‘Disaster Recovery Pack’ to support the continuity of organisations’ data and operations in the event of disruptions linked to external dependencies.

The solution was presented on 15 April 2026 at the European Data Summit organised by the Konrad-Adenauer-Foundation in Berlin. It is described as a system intended to maintain critical workloads even in scenarios involving disruptions associated with foreign technology providers.

The Disaster Recovery Pack integrates multiple components of the cloud software stack into a single deployable system. These components include storage, compute, orchestration, networking, identity, observability, and management. By combining these elements, the solution aims to reduce fragmentation and facilitate the deployment of a unified technology stack.

According to the providers, the system is designed to allow organisations to transfer critical workloads to a European-based infrastructure without major disruption. It can be used to identify essential services, establish and test recovery setups, and extend these configurations to additional workloads over time.

The solution is positioned to address operational requirements for disaster recovery while also supporting a broader transition to infrastructure based on European providers. It has already been deployed by an IT service provider in Italy and is expected to be adopted by additional partners.

Why does it matter?

The initiative is linked to efforts to reduce reliance on non-European cloud infrastructure and to strengthen the resilience of digital operations. In a statement, Sebastiano Toffaletti, Secretary General of the European DIGITAL SME Alliance, said that European companies are capable of developing and integrating such solutions, and highlighted the need for policy measures that support their adoption, including considerations related to public procurement and definitions of sovereign cloud within future policy frameworks.

OpenAI expands cyber defence programme with trusted access and industry partnerships

The US AI research and deployment company, OpenAI, has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation, and accountability.

The initiative includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

UK’s NCSC chief warns frontier AI will speed up cybersecurity threats

Dr Richard Horne, chief executive of the United Kingdom’s National Cyber Security Centre (NCSC), said advances in frontier AI models will make it easier, faster, and cheaper to find and exploit software vulnerabilities, increasing pressure on organisations to strengthen their security baseline.

In a piece published on the NCSC website, Horne said the longer-term effect of AI-assisted vulnerability discovery could be positive if technology suppliers use such tools to identify and fix weaknesses across the lifecycle of products and services. He also warned that the path to that outcome brings immediate risks and requires urgent action.

Horne said organisations that have not taken appropriate steps to safeguard their systems will increasingly be exposed as AI lowers the time, skill, and resources needed to identify exploitable weaknesses. He added that pressure to apply security patches quickly will become more acute as these capabilities develop.

He added that organisations should follow established NCSC guidance, including reducing unnecessary exposure to attack, applying security updates rapidly, and monitoring for and responding quickly to malicious activity.

Horne also said these measures must be championed by leaders and boards, describing cyber risk as business risk. He added that government-backed schemes such as Cyber Essentials can help organisations and their customers gain confidence that core security practices are being followed.

Russia advances draft AI regulation framework

Russia has moved forward with a draft law outlining the fundamentals of state regulation of AI technologies, after public consultation closed on 15 April 2026. The proposal outlines a structured compliance framework to tighten oversight of AI system development and deployment nationwide.

Under the draft, AI system operators would be required to test their systems to identify potential uses that could violate Russian legislation.

The framework also introduces a classification of trusted AI models, which would be subject to formal security verification by authorised federal bodies responsible for technical intelligence countermeasures and information security.

The proposed rules also establish a certification process for quality compliance, to be carried out in accordance with procedures defined by the Russian government. These measures aim to create a multi-layered oversight system for AI security and performance in regulated environments.

The proposed framework signals a shift towards tighter state control over how AI is tested, classified, and deployed, particularly in sensitive or high-risk environments. By introducing mandatory testing, security certification and government-defined quality standards, it increases regulatory scrutiny across the AI lifecycle. 

The broader implication is a move towards more centralised governance of AI systems, where compliance and risk management become embedded requirements rather than optional best practices.
