South Korea reviews AI cyber threat response

The Office of National Security of South Korea held a cybersecurity meeting to review how government agencies are responding to AI-driven cyber threats. The session focused on the growing risks posed by the misuse of advanced AI technologies.

Officials from multiple ministries and agencies, including science, defence and intelligence bodies, attended to coordinate responses. The government warned that AI-enabled hacking is becoming an increasingly realistic threat as global technology companies release more advanced models.

Authorities have instructed relevant agencies to strengthen cooperation with businesses and institutions and distributed guidance on responding to AI-based security risks. Discussions also covered practical measures to support rapid responses to cybersecurity vulnerabilities across public and private sectors.

The government plans to establish a joint technical response team to improve information sharing and enable immediate action. Officials emphasised that while AI increases cyber risks, it also offers opportunities to strengthen security capabilities in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Council of the EU extends cyber sanctions until 2027

The Council of the European Union has extended, until 18 May 2027, its restrictive measures against individuals and entities involved in cyber-attacks threatening the EU and its member states. The legal framework behind the sanctions regime had already been extended until 18 May 2028.

The framework allows the EU to impose targeted sanctions on persons or entities involved in significant cyber-attacks that constitute an external threat to the Union or its member states. Measures can also be imposed in response to cyber-attacks against third countries or international organisations, where they support Common Foreign and Security Policy objectives.

Current listings under the regime apply to 19 individuals and seven entities. Sanctioned actors face asset freezes, while EU citizens and companies are prohibited from making funds or economic resources available to them. Listed individuals are also subject to travel bans preventing them from entering or transiting through EU territory.

The Council said the individual listings will continue to be reviewed every 12 months. It also said the measures are intended to deter malicious cyber activity and uphold the international rules-based order by ensuring accountability for those responsible.

The sanctions mechanism forms part of the EU’s broader cyber diplomacy toolbox, established in 2017 to strengthen coordinated diplomatic responses to malicious cyber activity. The Council said the EU and its member states would continue working with international partners to promote an open, free, stable and secure cyberspace.

Why does it matter?

The decision shows how cybersecurity has become part of the EU’s foreign policy and sanctions toolkit, not only a matter of technical defence. By extending cyber sanctions listings, the EU is reinforcing its use of diplomatic and economic measures to deter malicious cyber activity, attribute responsibility and signal that significant cyber-attacks can carry geopolitical consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google warns adversaries are industrialising AI-enabled cyberattacks

The Google Threat Intelligence Group (GTIG) says cyber adversaries are moving from early AI experimentation towards industrial-scale use of generative models across malicious workflows.

In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.

The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.

Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.

Google also highlights PROMPTSPY, an Android backdoor that uses Gemini API capabilities to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.

AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.

Why does it matter?

Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s ASIC urges cyber resilience as frontier AI raises risk

The Australian Securities and Investments Commission (ASIC) has urged regulated entities to strengthen cyber resilience, warning that frontier AI could intensify cyber risks by exposing vulnerabilities at greater speed, scale and sophistication.

In an open letter to industry, ASIC said licensees and market participants should act now to improve their cybersecurity fundamentals rather than wait as advanced AI tools reshape the threat environment. The regulator said cyber resilience should be treated as a core licensing obligation, not solely as an IT issue.

ASIC Commissioner Simone Constant said frontier AI creates opportunities but also materially increases cyber risk, including by exposing weaknesses faster than many organisations realise. She warned that vulnerabilities once seen as isolated could have system-wide effects and enable forms of exploitation previously out of reach for many malicious actors.

The letter follows ASIC’s recent court outcome against FIIG Securities Limited, which the regulator said reinforced the need for cyber risk management controls to be demonstrably effective and proportionate to a business’s size, nature and complexity.

ASIC is urging entities to reassess cyber plans, identify and protect critical systems, reduce exposure to untrusted networks, review user access, patch systems promptly, strengthen incident response planning and manage third-party risks. It also says organisations should use AI defensively where appropriate, including to identify vulnerabilities and secure software before release.

Constant said entities need robust incident response plans and that the underlying principles of cyber risk management remain the same: govern, protect, detect and respond. She also said boards and executives must ensure systems are tested, weaknesses are addressed early, and action is taken before threats can be exploited.

ASIC says entities must table the letter at their ultimate board and risk governance committees. It also encourages regulated entities to use guidance from trusted sources, including the Australian Signals Directorate and the Australian Government’s Cyber Health Check.

Why does it matter?

ASIC’s warning shows that financial regulators are beginning to treat frontier AI as a force multiplier of cyber risk, not just a technology issue. By framing cyber resilience as a licensing and board-level governance obligation, the regulator is signalling that firms may be judged not only on whether they suffer cyber incidents, but on whether their controls, escalation processes and resilience planning are proportionate to an AI-accelerated threat environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says AI is reshaping cybersecurity defence

Advanced AI models are reshaping cybersecurity by accelerating both offensive and defensive capabilities, forcing organisations to rethink how they detect, assess and respond to cyber threats.

A new World Economic Forum report argues that AI is becoming a defining force in cybersecurity, with organisations increasingly moving from pilot projects to operational deployment. According to the WEF, AI is already being used to improve vulnerability identification, threat detection, response speed and resilience.

The report highlights how AI can help security teams process large volumes of data, detect threats faster and support more efficient responses. At the same time, it warns that threat actors are also using AI to automate deception, generate malware and scale attacks at machine speed.

WEF’s analysis says the growing speed and scale of AI-enabled cyber operations are putting pressure on traditional cybersecurity models. Instead of relying mainly on prevention and scheduled patching cycles, organisations are being pushed towards continuous detection, automated response, stronger access controls and more resilient infrastructure.

The report also stresses that AI’s value in cybersecurity depends on strategy, governance and human oversight. Rather than treating AI as a standalone tool, organisations are encouraged to test use cases carefully, build appropriate safeguards and invest in the skills and processes needed to defend at machine speed.

Why does it matter?

AI is changing cybersecurity on both sides of the equation. It can lower the barriers for faster and more scalable attacks, but it can also help defenders improve detection, response and resilience. The wider significance is that cybersecurity strategies built around periodic assessment and manual response may become less effective as AI-driven threats and defences operate at greater speed and scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Norway joins Pax Silica initiative to secure AI and semiconductor supply chains

The Pax Silica initiative, which focuses on secure AI, semiconductor, and critical raw materials supply chains, has expanded with the addition of Norway. The partnership aims to strengthen technological innovation while protecting sensitive technologies.

Norway joins a group of 14 participating countries, including the USA, Japan, the UK and India. Norwegian officials said participation could improve market access for domestic companies operating in advanced technological sectors and strengthen economic security cooperation with strategic partners.

Minister of Trade and Industry, Cecilie Myrseth, said the initiative aligns with Norway’s goal of expanding cooperation with leading countries in AI and emerging technologies. Norwegian ambassador to the USA, Anniken Huitfeldt, is expected to formally sign the agreement on behalf of the country.

The move also complements broader Norwegian and European efforts to secure access to critical technologies and supply chains. The government highlighted initiatives linked to the European Chips Act and the EU Critical Raw Materials Act as part of a wider strategy to strengthen technology resilience and industrial competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report highlights growing role of AI in cybersecurity operations

A World Economic Forum white paper (Empowering Defenders: AI for Cybersecurity), developed with KPMG, states that AI is becoming a core capability for modern cybersecurity. The report notes that attackers are using AI to increase speed, scale and sophistication, while defenders are also adopting AI to improve detection, response and resilience.

The report describes how AI is being used across the cybersecurity lifecycle, from cyber governance and risk identification to threat detection, incident response and recovery. Case studies from major organisations highlight applications in phishing detection, vulnerability management, malware analysis, threat intelligence and automated security reviews.

The WEF report also states that effective adoption depends on more than technology investment. Organisations need executive support, reliable data, skilled teams, mature infrastructure and clear governance before deploying AI in critical security operations.

The report also highlights the rise of agentic AI, where autonomous systems can detect, coordinate and respond to threats with limited human intervention. It adds that while these systems could help defenders act faster, they may also introduce risks related to accountability, unintended behaviour and over-reliance on automation.

Why does it matter?

The central message of the report is that AI can strengthen cyber defence only when paired with human judgement, structured pilots, continuous monitoring and clear safeguards. Without these foundations, organisations risk creating fragile systems instead of resilient ones.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC warns AI could expose software vulnerabilities at scale

The UK's National Cyber Security Centre (NCSC) says that AI is reshaping cybersecurity by exposing vulnerabilities across software ecosystems. It warns that organisations must prepare for a large-scale patch wave, as AI enables faster identification and exploitation of weaknesses than traditional defences can handle.

Technical debt, built through years of prioritising short-term efficiency instead of long-term resilience, is now being exposed at scale.

The NCSC notes that AI capabilities enable attackers to identify weaknesses faster and more comprehensively, creating pressure on organisations to respond with rapid and coordinated patching strategies across entire technology environments.

The NCSC's recommended approach prioritises internet-facing systems and external attack surfaces, followed by internal infrastructure and critical security assets.

Automated updates and hot patching are encouraged where available, while organisations lacking such capabilities must adopt scalable and risk-based update processes. Legacy systems without support present a particular risk, requiring replacement instead of reliance on patching alone.

The NCSC adds that, beyond software updates, the challenge reflects a deeper structural issue within digital ecosystems. Stronger cyber resilience depends on reducing systemic vulnerabilities through secure design practices, improved monitoring and supply chain readiness.

The agency also said that organisations that fail to prepare for continuous, large-scale patching cycles risk increased exposure as AI continues to reshape the cybersecurity landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI risks outlined in joint cyber agency guidance

Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments.

The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset.

The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

It defines agentic AI as systems composed of one or more agents that rely on AI models, such as large language models, to interpret context, make decisions, and take actions, often without continuous human intervention. The guidance says these systems often combine an LLM-based agent with tools, external data, memory, and planning functions, which expands both capability and attack surface.
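
To make that definition concrete, here is a minimal, hypothetical Python sketch of the loop such a system runs: a model interprets context, chooses a tool, acts, and carries memory between steps with limited human intervention. The call_llm stub and the tool names are illustrative assumptions, not part of the guidance or any vendor's API.

```python
# Hypothetical sketch of an agentic AI loop as described in the guidance:
# an LLM-backed agent combining a model, tools, memory and planning.
# call_llm and the tool registry are illustrative stand-ins, not a real API.
from typing import Any, Callable


def call_llm(prompt: str) -> dict[str, Any]:
    # Stand-in for a call to a large language model that returns a structured
    # decision; a real agent would parse the model's output here.
    return {"action": "finish", "args": {}}


# Every tool registered here widens what the agent can do on its own --
# and, as the agencies note, widens the attack surface at the same time.
TOOLS: dict[str, Callable[..., str]] = {
    "search_logs": lambda query="": f"results for {query}",
    "summarise_ticket": lambda ticket_id="": f"summary of ticket {ticket_id}",
}


def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    memory: list[str] = []  # context persisted across steps
    for _ in range(max_steps):
        decision = call_llm(f"Goal: {goal}\nMemory: {memory}\nChoose a tool or finish.")
        if decision["action"] == "finish":
            break
        # The agent invokes tools without a human approving each step.
        result = TOOLS[decision["action"]](**decision.get("args", {}))
        memory.append(f"{decision['action']} -> {result}")
    return memory
```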

The agencies say agentic AI inherits many of the vulnerabilities already associated with large language models while introducing greater complexity and new systemic risks. The document identifies five broad categories of concern: privilege risks, design and configuration risks, behaviour risks, structural risks, and accountability risks.

It warns that over-privileged agents, insecure third-party tools, goal misalignment, emergent or deceptive behaviour, and opaque decision-making chains can all increase the likelihood and impact of compromise. To reduce those risks, the guidance recommends secure design, strong identity management, defence-in-depth, comprehensive testing, threat modelling, progressive deployment, isolation, continuous monitoring, and strict privilege controls.

The agencies also stress that human approval should remain in place for high-impact actions and that agentic AI security should be treated as part of broader cybersecurity governance rather than as a separate discipline. The document concludes by calling for stronger research, collaboration, and agent-specific evaluations as the technology matures.
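
As a rough illustration of two of those recommendations, strict privilege controls and human approval for high-impact actions, the sketch below denies tools by default, allow-lists low-risk ones and routes high-impact ones to a human approver. The tool and function names are assumptions made for the example, not taken from the guidance.

```python
# Hypothetical illustration of least-privilege tool access plus a human
# approval gate for high-impact actions; names are assumptions for the sketch.
from typing import Callable

ALLOWED_TOOLS = {"search_logs", "summarise_ticket"}         # low-risk, pre-approved
HIGH_IMPACT = {"delete_records", "change_firewall_rule"}    # require a human decision


def authorise(action: str, approver: Callable[[str], bool]) -> bool:
    if action in HIGH_IMPACT:
        return approver(action)      # keep a human in the loop for high-impact actions
    return action in ALLOWED_TOOLS   # everything else is denied unless allow-listed


if __name__ == "__main__":
    # Example approver: prompt an operator before any high-impact action runs.
    ask_operator = lambda action: input(f"Approve '{action}'? [y/N] ").lower() == "y"
    print(authorise("search_logs", ask_operator))            # allowed without approval
    print(authorise("change_firewall_rule", ask_operator))   # asks the operator first
```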

Why does it matter?

The guidance matters because it draws a clear line between ordinary AI adoption and agentic systems that can act with far more autonomy inside real operational environments. Once AI tools move from assisting users to making decisions, calling tools, and interacting with sensitive systems, the security challenge shifts from model safety alone to full organisational risk management. That is why the document treats agentic AI not as a niche technical issue, but as a governance and cyber resilience problem that organisations need to control before deploying at scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU and Republic of Korea launch aviation partnership on technical cooperation and cyber resilience

European and South Korean aviation authorities are conducting a three-week series of technical exchanges in Seoul, covering safety oversight, airspace management, and cybersecurity.

The European Union Aviation Safety Agency (EASA) and South Korea’s Ministry of Land, Infrastructure and Transport are participating under the EU–Republic of Korea Aviation Partnership Project, an EU-funded initiative announced by the European External Action Service (EEAS).

The programme began with a three-day session on the International Civil Aviation Organisation’s Universal Safety Oversight Audit Programme (USOAP), which assesses national aviation safety oversight systems. EASA presented findings from its most recent ICAO audit, with discussions covering oversight frameworks, organisational structures, and lessons identified.

A workshop on performance-based navigation and airspace management followed, addressing procedures to improve the predictability and efficiency of aircraft arrivals, including at airports with parallel runways.

A third workshop on aviation cybersecurity is scheduled for the coming week. It will cover security considerations across aviation systems, including aircraft certification processes and air traffic management infrastructure.

The activities are designed to facilitate technical exchange between Korean and European stakeholders across the aviation sector, according to EASA.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!