OpenAI expands cyber defence programme with trusted access and industry partnerships

The US AI research and deployment company, OpenAI, has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation and accountability.

The initiative includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC chief warns frontier AI will speed up cybersecurity threats

Dr Richard Horne, chief executive of the United Kingdom’s National Cyber Security Centre (NCSC), said advances in frontier AI models will make it easier, faster, and cheaper to find and exploit software vulnerabilities, increasing pressure on organisations to strengthen their security baseline.

In a piece published on the NCSC website, Horne said the longer-term effect of AI-assisted vulnerability discovery could be positive if technology suppliers use such tools to identify and fix weaknesses across the lifecycle of products and services. He also warned that the path to that outcome brings immediate risks and requires urgent action.

Horne said organisations that have not taken appropriate steps to safeguard their systems will increasingly be exposed as AI lowers the time, skill, and resources needed to identify exploitable weaknesses. He added that pressure to apply security patches quickly will become more acute as these capabilities develop.

Horne added that organisations should follow established NCSC guidance, including reducing unnecessary exposure to attack, applying security updates rapidly, and monitoring for and responding quickly to malicious activity.

Horne also said these measures must be championed by leaders and boards, describing cyber risk as business risk. He added that government-backed schemes such as Cyber Essentials can help organisations and their customers gain confidence that core security practices are being followed.


Russia advances draft AI regulation framework

Russia has moved forward with a draft law outlining the fundamentals of state regulation of AI technologies; the public consultation closed on 15 April 2026. The proposal sets out a structured compliance framework to tighten oversight of AI system development and deployment nationwide.

Under the draft, AI system operators would be required to test their systems to identify potential uses that could violate Russian legislation.

The framework also introduces a classification of trusted AI models, which would be subject to formal security verification by authorised federal bodies responsible for technical intelligence countermeasures and information security.

The proposed rules also establish a certification process for quality compliance, to be carried out in accordance with procedures defined by the Russian government. These measures aim to create a multi-layered oversight system for AI security and performance in regulated environments.

The proposed framework signals a shift towards tighter state control over how AI is tested, classified, and deployed, particularly in sensitive or high-risk environments. By introducing mandatory testing, security certification and government-defined quality standards, it increases regulatory scrutiny across the AI lifecycle. 

The broader implication is a move towards more centralised governance of AI systems, where compliance and risk management become embedded requirements rather than optional best practices.


AI reshapes cybersecurity access as defenders gain new tools

OpenAI has expanded its Trusted Access for Cyber programme and introduced a more permissive AI model designed specifically for cybersecurity work. The initiative reflects a broader shift in digital security, in which advanced AI tools are increasingly integrated into both defensive and offensive cyber operations.

The development highlights a structural change in cybersecurity, where defenders are no longer relying solely on traditional tools but are instead incorporating AI systems capable of analysing code, identifying vulnerabilities and accelerating incident response.

At the same time, the same technological capabilities are becoming accessible to malicious actors, intensifying the need for controlled and verified access.

New automated vulnerability tools are being deployed to detect and fix security flaws at scale, moving towards continuous AI-assisted protection. Rather than periodic security reviews, development environments are gradually shifting towards real-time monitoring and automated remediation.

The broader implication is a tightening link between AI capability growth and cyber risk management. Access frameworks based on identity verification and trust signals aim to balance the wider availability of defensive tools with safeguards against misuse.

The expansion of AI-driven cybersecurity tools reflects a structural shift in how digital infrastructure is protected at scale. As software systems become more complex and interconnected, traditional periodic security checks are increasingly insufficient to manage fast-evolving threats. 

Cybersecurity is moving towards an always-on, automated model where the balance between openness and restriction will directly shape global digital resilience. The outcome of this approach will influence how resilient digital infrastructure becomes as AI-driven threats and defences evolve in parallel.


Sussex police deploy AI cameras to detect traffic offences

Sussex Police has introduced AI cameras to detect drivers using mobile phones or not wearing seatbelts. The technology is being deployed to support enforcement and reduce road safety risks.

The rollout follows a 2024 trial by National Highways in Sussex, during which 458 offences were detected in 7 days. Most cases involved seatbelt violations, while others included mobile phone use or both offences combined.

Chief Constable Jo Shiner said the cameras are intended to support policing rather than replace it. She added that AI cameras help monitor driver behaviour and enable action where necessary.

Police and Crime Commissioner Katy Bourne said the technology would strengthen enforcement and allow resources to be used more effectively. She noted that collisions linked to phone use and lack of seatbelts continue to cause injuries.

The cameras, supplied by Acusensus, will operate for several weeks before evaluation. Officials said the system will contribute to wider road safety efforts and ongoing monitoring initiatives.


Canada launches cyber security certification to protect defence supply chains

The Government of Canada has introduced Level 1 of the Canadian Program for Cyber Security Certification, establishing a baseline set of cyber security requirements for suppliers involved in defence contracts.

The measure will begin phased implementation from summer 2026, with certification required at the contract award stage.

The programme forms part of Canada’s broader effort to strengthen resilience across defence supply chains, responding to increasing cyber threats targeting contractors and sensitive information.

It introduces standardised criteria to help organisations identify, assess and manage cyber risks more effectively.

The phased approach gives industry time to adapt, particularly small and medium-sized enterprises, while aligning national requirements with those of international partners. It also includes interoperability with US standards, supporting cross-border defence cooperation and market access.

By setting a minimum security baseline, the initiative reinforces trust in procurement systems and contributes to operational readiness within the defence sector.

Further certification levels are expected to expand requirements in the coming years.


Australian authorities warn of data exploitation through social media platforms

Social media and messaging services pose growing security and privacy risks, with personal data used to build profiles for fraud, espionage, or social engineering. Even routine posts may contribute to broader data collection and unintended exposure.

Platforms typically collect extensive user and device data under evolving privacy policies, sometimes storing it across jurisdictions with varying legal protections. Such conditions increase the risks of identity theft, reputational harm, and misuse of aggregated personal information.

The Australian Government advises organisations to restrict access to official accounts, train staff, and enforce clear policies on what can be shared. It also highlights the importance of breach response procedures to maintain operational security.

For individuals, the Government guidance recommends limiting exposure of personal data, using privacy settings, avoiding unknown contacts, and applying strong authentication.

Regular updates, careful app permissions, and device security measures are also encouraged to reduce cyber risks.

Strengthening awareness and applying consistent security practices reduces vulnerability and supports more resilient organisational systems in an increasingly interconnected digital environment.


Geneva Cyber Week to bring diplomacy, cyber policy, and AI security debates together

The United Nations Institute for Disarmament Research and the Swiss Federal Department of Foreign Affairs will co-host Geneva Cyber Week from 4 to 8 May 2026, bringing policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.

Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change, with organisers framing the gathering as a space for more practical cooperation across diplomatic, technical, operational, and policy communities.

“Cybersecurity is no longer a niche technical issue; it is a strategic policy challenge with implications for international peace, economic stability and public trust. At a moment of growing fragmentation and accelerating technological change, Geneva Cyber Week brings together the communities that need to be in the room — diplomatic, technical, operational and policy — to move from shared concern to practical cooperation,” said Dr Giacomo Persi Paoli, Head of Security and Technology Programme at UNIDIR.

The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance. Scheduled sessions include UNIDIR’s Cyber Stability Conference, Peak Incident Response organised by the Swiss CSIRT Forum, Digital International Geneva, the World Economic Forum Annual Meeting on Cybersecurity, and a Council of Europe session titled ‘Artificial Intelligence, Cybercrime and Electronic Evidence: Risks, Opportunities, and Global Cooperation’.

The week will also include partner-led panels, workshops, simulations, exhibitions, and networking events to connect specialist communities that do not always work in the same room. That broader structure reflects an effort to treat cyber issues not only as a technical or security matter but also as a governance, trust-building, and international-coordination challenge.

“At a time when digital threats know no borders, fostering inclusive discussions is essential to building trust, advancing common norms, and promoting a secure and open cyberspace for all. International Geneva provides an unparalleled multilateral environment to address these cybersecurity challenges collectively. Geneva Cyber Week’s diverse programme embodies this collaborative spirit,” said Marina Wyss Ross, Deputy Head of International Security Division and Chief of Section for Arms Control, Disarmament and Cybersecurity at the Swiss FDFA.

Across the city, Geneva will also mark the week visually, including flags on the Mont Blanc Bridge and special illumination of the Jet d’Eau on Monday evening. But beyond the symbolism, the event’s significance lies in how it seeks to bring cyber diplomacy, incident response, governance debates, and emerging technology risks into the same international conversation.


French data protection authority sets out 2026 GDPR and AI guidance agenda

The French data protection authority, the Commission nationale de l’informatique et des libertés (CNIL), has outlined the main guidance, consultations, and resources it plans to publish in 2026 to support compliance with the General Data Protection Regulation and certain provisions of the AI Act.

According to the CNIL, the programme is intended to help public and private sector actors prepare for upcoming consultations and anticipate regulatory developments. It says the programme is indicative and may evolve in response to current events.

The CNIL says it will begin work on ‘multi-property’ consent, covering the conditions for obtaining a single consent across several sites or media, particularly where they belong to the same group. It also says it will finalise work on the use of AI in the workplace and in health, including bias risks and safeguards to protect the rights of employees and patients.

The authority also plans to work on transcription and automated analysis tools used in call centres and videoconferencing software, operational content for data protection officers, and clarification of how the GDPR applies to non-anonymous AI models.

In the health sector, it says it will update research reference methodologies, publish its position on how people should be informed when data are reused for research, and issue a consolidated document on the electronic patient record.

On security, the CNIL says it will continue publishing recommendations to improve personal data security, publish the final updated version of its recommendation on remote electronic voting systems, and open public consultations on recommendations covering the security of personal data exchanges, remote identity verification, and endpoint detection and response services. It also says it will publish a recommendation on web filtering gateways.


UK government reviews regulatory options for enterprise connected devices

The UK government has said it will update and streamline its proposed code of practice for enterprise connected device security and assess further policy options, including regulation, certification, and other assurance mechanisms, following its call for views on enterprise connected device security.

The response, published by the Department for Science, Innovation and Technology, says enterprise-connected devices are often critical to business operations but can lack adequate security measures. It also states that the UK government’s call for views showed strong support for intervention to improve the cybersecurity of such devices, with 95% of respondents agreeing that the government should do more.

According to the response, 76% of respondents agreed or strongly agreed that the risks posed by enterprise-connected devices are sufficiently distinct from those of other connected devices to warrant an independent code of practice.

The UK government also reports that 78% agreed or strongly agreed with creating new legislation imposing obligations on manufacturers, while 71% agreed or strongly agreed with creating a new global standard based on the code of practice.

The UK government says it will ask manufacturers to use the National Cyber Security Centre’s existing device security principles while this work continues. It also says it will finalise the security principles, make them modular within the broader set of secure-by-design codes of practice, and explore the feasibility of a certification scheme for manufacturers.

The response also states that the UK government will assess options for regulatory measures, following feedback that it needs to go beyond voluntary adoption and include some form of assurance or enforcement mechanism. It adds that the government will review whether the scope of this work should be expanded beyond enterprise-connected devices as part of its broader analysis of technology security.

The document says the UK government will seek to align this work, where possible and necessary, with international developments, including European Union standards processes under the Cyber Resilience Act. It also notes repeated calls from respondents for implementation guides and clearer alignment with existing legislation and standards.
