OpenAI outlines safeguards as AI cyber capabilities advance

Cyber capabilities in advanced AI models are improving rapidly, delivering clear benefits for cyberdefence while introducing new dual-use risks that require careful management, according to OpenAI’s latest assessment.

The company points to sharp gains in capture-the-flag performance, with success rates rising from 27 percent in August to 76 percent by November 2025. OpenAI says future models could reach high cyber capability, including assistance with sophisticated intrusion techniques.

To address this, OpenAI says it is prioritising defensive use cases, investing in tools that help security teams audit code, patch vulnerabilities, and respond more effectively to threats. The goal is to give defenders an advantage in an often under-resourced environment.

OpenAI argues that cybersecurity cannot be governed through a single safeguard, as defensive and offensive techniques overlap. Instead, it applies a defence-in-depth approach that combines access controls, monitoring, detection systems, and extensive red teaming to limit misuse.

Alongside these measures, the company plans new initiatives, including trusted access programmes for defenders, agent-based security tools in private testing, and the creation of a Frontier Risk Council. OpenAI says these efforts reflect a long-term commitment to cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNODC and INTERPOL announce Global Fraud Summit in 2026

The United Nations Office on Drugs and Crime (UNODC), in cooperation with the International Criminal Police Organization (INTERPOL), will convene the Global Fraud Summit 2026 at the Vienna International Centre, Austria, from 16 to 17 March 2026.

UNODC and INTERPOL invite applications for participation from private sector entities, civil society organisations, and academic institutions. Applications must be submitted by 12 December 2025.

The Summit will provide a platform for discussion on current trends, risks, and responses related to fraud, including its digital and cross-border dimensions. Discussions will address challenges associated with detection, investigation, prevention, and international cooperation in fraud-related cases.

The objectives of the Summit include:

  • Facilitating coordination among national and international stakeholders
  • Supporting information exchange across sectors and jurisdictions
  • Sharing policy, operational, and technical approaches to fraud prevention and response
  • Identifying areas for further cooperation and capacity-building

The ministerial-level meeting will bring together senior representatives from governments, international and regional organisations, law enforcement authorities, the private sector, academia, and civil society. Participating institutions are encouraged to nominate delegates at an appropriate senior level.

The Summit is supported by a financial contribution from the Government of the United Kingdom of Great Britain and Northern Ireland.

Applications must be submitted through the application at the official website.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

International Criminal Court (ICC) issues policy on cyber-enabled crimes

The Office of the Prosecutor (OTP) of the International Criminal Court (ICC) has issued a Policy on Cyber-Enabled Crimes under the Rome Statute. The Policy sets out how the OTP interprets and applies the existing ICC legal framework to conduct that is committed or facilitated through digital and cyber means.

The Policy clarifies that the ICC’s jurisdiction remains limited to crimes defined in the Rome Statute: genocide, crimes against humanity, war crimes, the crime of aggression, and offences against the administration of justice. It does not extend to ordinary cybercrimes under domestic law, such as hacking, fraud, or identity theft, unless such conduct forms part of or facilitates one of the crimes within the Court’s jurisdiction.

According to the Policy, the Rome Statute is technology-neutral. This means that the legal assessment of conduct depends on whether the elements of a crime are met, rather than on the specific tools or technologies used.

As a result, cyber means may be relevant both to the commission of Rome Statute crimes and to the collection and assessment of evidence related to them.

The Policy outlines how cyber-enabled conduct may relate to each category of crimes under the Rome Statute. Examples include cyber operations affecting essential civilian services, the use of digital platforms to incite or coordinate violence, cyber activities causing indiscriminate effects in armed conflict, cyber operations linked to inter-State uses of force, and digital interference with evidence, witnesses, or judicial proceedings before the ICC.

The Policy was developed through consultations with internal and external legal and technical experts, including the OTP’s Special Adviser on Cyber-Enabled Crimes, Professor Marko Milanović. It does not modify or expand the ICC’s jurisdiction, which remains governed exclusively by the Rome Statute.

Currently, there are no publicly known ICC cases focused specifically on cyber-enabled crimes. However, the issuance of the Policy reflects the OTP’s assessment that digital conduct may increasingly be relevant to the commission, facilitation, and proof of crimes within the Court’s mandate.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Online data exposure heightens threats to healthcare workers

Healthcare workers are facing escalating levels of workplace violence, with more than three-quarters reporting verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprint of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW common-access card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New spyware threat alerts issued by Apple and Google

Apple and Google have issued a fresh round of cyber threat notifications, warning users worldwide they may have been targeted by sophisticated surveillance operations linked to state-backed actors.

Apple said it sent alerts on 2 December, confirming it has now notified users in more than 150 countries, though it declined to disclose how many people were affected or who was responsible.

Google followed on 3 December, announcing warnings for several hundred accounts targeted by Intellexa spyware across multiple countries in Africa, Central Asia, and the Middle East.

The Alphabet-owned company said Intellexa continues to evade restrictions despite US sanctions, highlighting persistent challenges in limiting the spread of commercial surveillance tools.

Researchers say such alerts raise costs for cyber spies by exposing victims, often triggering investigations that can lead to public scrutiny and accountability over spyware misuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Growing app restrictions hit ByteDance’s AI smartphone rollout

ByteDance is facing mounting pushback after major Chinese apps restricted how its agentic AI smartphone can operate across their platforms. Developers moved to block or limit Doubao, the device’s voice-driven assistant, following concerns about automation, security and transactional risks.

Growing reports from early adopters describe locked accounts, interrupted payments and app instability when Doubao performs actions autonomously. ByteDance has responded by disabling the assistant’s access to financial services, rewards features and competitive games while collaborating with app providers to establish clearer guidelines.

The Nubia M153, marketed as an experimental device, continues to attract interest for its hands-free interface, even as privacy worries persist over its device-wide memory system. Its long-term success hinges on whether China’s platforms and regulators can align with ByteDance’s ambitions for seamless, AI-powered smartphone interaction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU gains stronger ad oversight after TikTok agreement

Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.

An agreement that follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier in the year.

TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.

Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.

Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.

These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.

TikTok must implement its commitments to the EU within deadlines ranging from two to twelve months, depending on each measure.

The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed to boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 digital decade goals.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less reporting duplication and more explicit rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot