Australia’s OAIC updates the Children’s Online Privacy Code page during public consultation

The Office of the Australian Information Commissioner (OAIC) updated its Children’s Online Privacy Code page, as the regulator continues consultation on a draft code that will set privacy rules for online services likely to be accessed by children.

The page says the Code is being developed under the Privacy and Other Legislation Amendment Act 2024 and will operate as an APP Code under the Privacy Act 1988.

According to the OAIC, the Code will apply to online services that fall within the categories of social media services, relevant electronic services, and designated internet services under the Online Safety Act 2021, where those services are likely to be accessed by children or primarily concern children’s activities. The regulator says the Code is intended to put children at the centre of privacy protections in Australia while also lifting privacy practices more broadly.

The updated page highlights the current public consultation on the exposure draft of the Children’s Online Privacy Code. It also refers users to separate consultation pathways for children, young people, parents and carers, and for industry, civil society, academia, and other interested parties.

The OAIC also says it has created a dedicated Privacy for Kids hub to support participation in the consultation. According to the page, the hub includes workbooks and child-friendly guides to help explain the draft Code to children, young people, and parents and carers.

In addition, the updated page invites stakeholders to register for an OAIC webinar on the Children’s Online Privacy Code public consultation. The OAIC says the Code must be finalised and registered by 10 December 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

European firms launch Disaster Recovery Pack for tech independence

A group of European technology companies, Cubbit, SUSE, Elemento, and StorPool Storage, has launched a joint ‘Disaster Recovery Pack’ to help organisations keep their data and operations running when external dependencies are disrupted.

The solution was presented on 15 April 2026 at the European Data Summit, organised by the Konrad-Adenauer-Stiftung in Berlin. It is described as a system intended to keep critical workloads running even in scenarios involving disruptions associated with foreign technology providers.

The Disaster Recovery Pack integrates multiple components of the cloud software stack into a single deployable system. These components include storage, compute, orchestration, networking, identity, observability, and management. By combining these elements, the solution aims to reduce fragmentation and facilitate the deployment of a unified technology stack.

According to the providers, the system is designed to allow organisations to transfer critical workloads to a European-based infrastructure without major disruption. It can be used to identify essential services, establish and test recovery setups, and extend these configurations to additional workloads over time.
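
To make that workflow concrete, here is a minimal, purely illustrative Python sketch of the three steps the providers describe: identifying essential services, testing a recovery setup, and extending it to more workloads over time. The Disaster Recovery Pack’s actual interfaces are not described in the announcement, so every name, region, and function below is a hypothetical stand-in.

```python
# Purely illustrative: the Disaster Recovery Pack's real interfaces are not
# described in the announcement, so every name here is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    critical: bool        # identified as essential to operations
    primary_region: str   # where the workload normally runs
    recovery_region: str  # European-based fallback infrastructure

def plan_recovery(workloads: list[Workload]) -> list[Workload]:
    """Step 1: identify the essential services to protect first."""
    return [w for w in workloads if w.critical]

def test_failover(workload: Workload) -> bool:
    """Step 2: establish and test a recovery setup (simulated here)."""
    print(f"Failing over {workload.name}: "
          f"{workload.primary_region} -> {workload.recovery_region}")
    return True  # a real test would verify data integrity and service health

if __name__ == "__main__":
    inventory = [
        Workload("billing-db", critical=True,
                 primary_region="us-east", recovery_region="eu-south"),
        Workload("marketing-site", critical=False,
                 primary_region="us-east", recovery_region="eu-south"),
    ]
    # Step 3: extend the same configuration to more workloads over time.
    for w in plan_recovery(inventory):
        assert test_failover(w)
```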

The solution is positioned to address operational requirements for disaster recovery while also supporting a broader transition to infrastructure based on European providers. It has already been deployed by an IT service provider in Italy and is expected to be adopted by additional partners.

Why does it matter?

The initiative is linked to efforts to reduce reliance on non-European cloud infrastructure and to strengthen the resilience of digital operations. In a statement, Sebastiano Toffaletti, Secretary General of the European DIGITAL SME Alliance, said that European companies are capable of developing and integrating such solutions. He also highlighted the need for policy measures that support their adoption, including public procurement considerations and definitions of sovereign cloud in future policy frameworks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands cyber defence programme with trusted access and industry partnerships

OpenAI, the US AI research and deployment company, has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation, and accountability.

The initiative also includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected in partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC chief warns frontier AI will speed up cybersecurity threats

Dr Richard Horne, chief executive of the United Kingdom’s National Cyber Security Centre (NCSC), said advances in frontier AI models will make it easier, faster, and cheaper to find and exploit software vulnerabilities, increasing pressure on organisations to strengthen their security baseline.

In a piece published on the NCSC website, Horne said the longer-term effect of AI-assisted vulnerability discovery could be positive if technology suppliers use such tools to identify and fix weaknesses across the lifecycle of products and services. He also warned that the path to that outcome brings immediate risks and requires urgent action.

Horne said organisations that have not taken appropriate steps to safeguard their systems will increasingly be exposed as AI lowers the time, skill, and resources needed to identify exploitable weaknesses. He added that pressure to apply security patches quickly will become more acute as these capabilities develop.

Horne added that organisations should follow established NCSC guidance, including reducing unnecessary exposure to attack, applying security updates rapidly, and monitoring for and responding quickly to malicious activity.
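
As a small illustration of turning the rapid-patching part of that guidance into routine practice, the sketch below counts pending package upgrades on a Debian or Ubuntu host using the standard apt tool. It is a minimal example under those assumptions, not NCSC-endorsed tooling, and the command would need adapting for other platforms.

```python
# A minimal sketch, assuming a Debian or Ubuntu host: counts pending package
# upgrades with apt. Not NCSC tooling; adapt the command for other platforms.
import subprocess

def pending_upgrades() -> list[str]:
    """List packages for which upgrades are available via apt."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # apt prints a "Listing..." banner on the first line, so skip it.
    return [line for line in result.stdout.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    pending = pending_upgrades()
    if pending:
        print(f"{len(pending)} packages have pending updates, e.g.:")
        for line in pending[:5]:
            print(" ", line)
    else:
        print("No pending updates found.")
```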

Horne also said these measures must be championed by leaders and boards, describing cyber risk as business risk. He added that government-backed schemes such as Cyber Essentials can help organisations and their customers gain confidence that core security practices are being followed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Russia advances draft AI regulation framework

Russia has moved forward with a draft law outlining the fundamentals of state regulation of AI technologies; the public consultation closed on 15 April 2026. The proposal sets out a structured compliance framework to tighten oversight of AI system development and deployment nationwide.

Under the draft, AI system operators would be required to test their systems to identify potential uses that could violate Russian legislation.

The framework also introduces a classification of trusted AI models, which would be subject to formal security verification by authorised federal bodies responsible for technical intelligence countermeasures and information security.

The proposed rules also establish a certification process for quality compliance, to be carried out in accordance with procedures defined by the Russian government. These measures aim to create a multi-layered oversight system for AI security and performance in regulated environments.

The proposed framework signals a shift towards tighter state control over how AI is tested, classified, and deployed, particularly in sensitive or high-risk environments. By introducing mandatory testing, security certification and government-defined quality standards, it increases regulatory scrutiny across the AI lifecycle. 

The broader implication is a move towards more centralised governance of AI systems, where compliance and risk management become embedded requirements rather than optional best practices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

AI reshapes cybersecurity access as defenders gain new tools

OpenAI has expanded its Trusted Access for Cyber programme and introduced a more permissive AI model designed specifically for cybersecurity work. The initiative reflects a broader shift in digital security, in which advanced AI tools are increasingly integrated into both defensive and offensive cyber operations.

The development highlights a structural change in cybersecurity, where defenders are no longer relying solely on traditional tools but are instead incorporating AI systems capable of analysing code, identifying vulnerabilities and accelerating incident response.

At the same time, the same technological capabilities are becoming accessible to malicious actors, intensifying the need for controlled and verified access.

New automated vulnerability tools are being deployed to detect and fix security flaws at scale, moving towards continuous AI-assisted protection. Rather than periodic security reviews, development environments are gradually shifting towards real-time monitoring and automated remediation.
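
As a rough illustration of what continuous dependency checking can look like, the sketch below wraps pip-audit, an existing open-source scanner for Python dependencies, in a simple re-scan loop. The loop and the hourly interval are illustrative choices; the article does not name any specific tool, and real deployments would typically run such scans from CI pipelines instead.

```python
# Illustrative sketch of continuous dependency scanning with pip-audit
# (pip install pip-audit). The hourly loop is an arbitrary choice for the
# example; real setups usually run such scans from CI pipelines instead.
import json
import subprocess
import time

def scan_dependencies() -> list[dict]:
    """Run pip-audit and return dependencies with known vulnerabilities."""
    # pip-audit exits non-zero when it finds vulnerabilities, so parse stdout
    # rather than using check=True. Assumes the pip-audit 2.x JSON schema.
    result = subprocess.run(
        ["pip-audit", "--format", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout or '{"dependencies": []}')
    return [dep for dep in report.get("dependencies", []) if dep.get("vulns")]

if __name__ == "__main__":
    while True:  # continuous monitoring rather than a one-off review
        for finding in scan_dependencies():
            print(f"{finding['name']} {finding['version']}: "
                  f"{len(finding['vulns'])} known vulnerabilities")
        time.sleep(3600)  # re-scan hourly
```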

The broader implication is a tightening link between AI capability growth and cyber risk management. Access frameworks based on identity verification and trust signals aim to balance the wider availability of defensive tools with safeguards against misuse.

The expansion of AI-driven cybersecurity tools reflects a structural shift in how digital infrastructure is protected at scale. As software systems become more complex and interconnected, traditional periodic security checks are increasingly insufficient to manage fast-evolving threats. 

Cybersecurity is moving towards an always-on, automated model in which the balance between openness and restriction will directly shape global digital resilience as AI-driven threats and defences evolve in parallel.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Sussex police deploy AI cameras to detect traffic offences

Sussex Police has introduced AI cameras to detect drivers using mobile phones or not wearing seatbelts. The technology is being deployed to support enforcement and reduce road safety risks.

The rollout follows a 2024 trial by National Highways in Sussex, during which 458 offences were detected in 7 days. Most cases involved seatbelt violations, while others included mobile phone use or both offences combined.

Chief Constable Jo Shiner said the cameras are intended to support policing rather than replace it. She added that AI cameras help monitor driver behaviour and enable action where necessary.

Police and Crime Commissioner Katy Bourne said the technology would strengthen enforcement and allow resources to be used more effectively. She noted that collisions linked to phone use and lack of seatbelts continue to cause injuries.

The cameras, supplied by Acusensus, will operate for several weeks before evaluation. Officials said the system will contribute to wider road safety efforts and ongoing monitoring initiatives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Canada launches cyber security certification to protect defence supply chains

The Government of Canada has introduced Level 1 of the Canadian Program for Cyber Security Certification, establishing a baseline set of cyber security requirements for suppliers involved in defence contracts.

The measure will begin phased implementation from summer 2026, with certification required at the contract award stage.

The programme forms part of Canada’s broader effort to strengthen resilience across defence supply chains, responding to increasing cyber threats targeting contractors and sensitive information.

It introduces standardised criteria to help organisations identify, assess and manage cyber risks more effectively.

The phased approach gives industry, particularly small and medium-sized enterprises, time to adapt, while aligning national requirements with those of international partners. It also includes interoperability with US standards, supporting cross-border defence cooperation and market access.

By setting a minimum security baseline, the initiative in Canada reinforces trust in procurement systems and contributes to operational readiness within the defence sector.

Further certification levels are expected to expand requirements in the coming years.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australian authorities warn of data exploitation through social media platforms

Social media and messaging services pose growing security and privacy risks, with personal data used to build profiles for fraud, espionage, or social engineering. Even routine posts may contribute to broader data collection and unintended exposure.

Platforms typically collect extensive user and device data under evolving privacy policies, sometimes storing it across jurisdictions with varying legal protections. Such conditions increase the risks of identity theft, reputational harm, and the misuse of aggregated personal information.

The Australian Government advises organisations to restrict access to official accounts, train staff, and enforce clear policies on what can be shared. It also highlights the importance of breach response procedures to maintain operational security.

For individuals, the Government’s guidance recommends limiting exposure of personal data, using privacy settings, avoiding unknown contacts, and applying strong authentication.

Regular updates, careful app permissions, and device security measures are also encouraged to reduce cyber risks.
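
To give one concrete example of the strong-authentication recommendation, the sketch below generates and verifies time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using the open-source pyotp library. It is an illustrative fragment only; the Australian guidance does not prescribe this particular mechanism or library, and the account names shown are made up.

```python
# Illustrative only: demonstrates TOTP two-factor codes with the pyotp library
# (pip install pyotp). The Australian guidance recommends strong authentication
# in general; it does not mandate this particular mechanism or library.
import pyotp

# Each user gets a random base32 secret, normally stored server-side and
# shared with their authenticator app via a QR code at enrolment.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for an authenticator app:")
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleOrg"))

# At login, the user types the 6-digit code currently shown by their app.
code = totp.now()  # stand-in for user input
print("Code accepted:", totp.verify(code))  # True within the validity window
```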

Strengthening awareness and applying consistent security practices reduces vulnerability and supports more resilient organisational systems in an increasingly interconnected digital environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

China sets trial ethics rules for AI science and technology activities

China’s Ministry of Industry and Information Technology and nine other departments have issued the ‘Measures for AI science and technology ethics review and services (Trial)’, setting out rules on scope, support measures, implementing bodies, working procedures, supervision, and legal responsibility.

The text says the measures are intended to regulate ethics governance for AI science and technology activities and to support fair, just, safe, and responsible innovation.

The measures apply to AI scientific research, technology development, and other science and technology activities carried out in China that may raise ethics risks relating to human dignity, public order, life and health, the ecological environment, or sustainable development.

The text states that ethics requirements should run through the whole process of AI activities and lists principles including promoting human well-being, respecting life and rights, fairness and justice, reasonable risk control, openness and transparency, privacy and security protection, and controllability and trustworthiness.

On support measures, the document calls for improving the AI ethics standards system, including international, national, industry, and group standards. It also calls for stronger risk monitoring, testing, assessment, certification, and consulting services, more support for small and micro enterprises, work on ethics review research and technical innovation, the orderly opening of high-quality datasets, development of risk assessment and audit tools, public education, and ethics-related talent training.

The measures state that universities, research institutions, medical and health institutions, enterprises, and other entities engaged in AI science and technology activities are responsible for ethics review management within their own organisations and should establish AI science and technology ethics committees.

Local authorities and relevant departments may also establish specialised ethics review and service centres that provide review, re-examination, training, and consulting services on commission, but may not both review and re-examine the same AI activity.

The text sets out application and review procedures, including general, simplified, expert re-examination, and emergency procedures. It says review should focus on human well-being, fairness and justice, controllability and trustworthiness, transparency and explainability, traceability of responsibility, and privacy protection. Review decisions are to be made within 30 days after acceptance, subject to extension in complex cases. An emergency review is generally completed within 72 hours.

The measures also provide for expert re-examination of listed activities. The attached list covers human-machine integrated systems with a strong influence on human behaviour, psychological emotions, or health; algorithmic models, applications, and systems with the capacity for social mobilisation or guidance of social consciousness; and highly autonomous automated decision systems used in scenarios involving safety or health risks. The text says the list will be adjusted dynamically as needed.

The document further states that violations may be investigated and handled under laws, including the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the Science and Technology Progress Law. According to the text, the measures take effect upon issuance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!