South Korea reviews AI cyber threat response

The Office of National Security of South Korea held a cybersecurity meeting to review how government agencies are responding to AI-driven cyber threats. The session focused on the growing risks posed by the misuse of advanced AI technologies.

Officials from multiple ministries and agencies, including science, defence and intelligence bodies, attended to coordinate responses. The government warned that AI-enabled hacking is becoming an increasingly realistic threat as global technology companies release ever more advanced models.

Authorities have instructed relevant agencies to strengthen cooperation with businesses and institutions and distributed guidance on responding to AI-based security risks. Discussions also covered practical measures to support rapid responses to cybersecurity vulnerabilities across public and private sectors.

The government plans to establish a joint technical response team to improve information sharing and enable immediate action. Officials emphasised that while AI increases cyber risks, it also offers opportunities to strengthen security capabilities in South Korea.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Data Protection Act regulations bring AI code requirement into force

The UK has brought into force regulations requiring the Information Commissioner to prepare a code of practice on the processing of personal data in relation to AI and automated decision-making.

The Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 were made on 16 April, laid before Parliament on 21 April, and came into force on 12 May. The regulations apply across England and Wales, Scotland and Northern Ireland.

Under the regulations, the Information Commissioner must prepare a code giving guidance on good practice in the processing of personal data under the UK GDPR and the Data Protection Act 2018 when developing and using AI and automated decision-making systems.

The code must also include guidance on good practice in the processing of children’s personal data. Automated decision-making is defined by reference to provisions in the UK GDPR and the Data Protection Act 2018 inserted through the Data (Use and Access) Act 2025.

The instrument also modifies the panel requirements for preparing or amending the code. Any panel established to consider the code must not consider or report on aspects relating to national security.

The explanatory note states that no full impact assessment was prepared for the instrument because the regulations themselves are not expected to have a significant impact on the private, voluntary or public sectors. The Information Commissioner must produce an impact assessment when preparing the code.

Why does it matter?

The regulations move UK guidance on AI, automated decision-making and personal data onto a statutory track. The eventual code could become an important reference point for organisations using AI systems that process personal data, particularly where automated decisions or children’s data are involved. For now, the main development is procedural: the Information Commissioner is required to prepare the code, while the practical compliance details will follow through that process.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI and the future of cybersecurity

With the rapid expansion of AI technologies, agentic AI is moving from experimentation to deployment on a scale larger than ever before. As a result, these systems are being given far greater autonomy to perform tasks with limited human input, much to the delight of enterprise leaders.

Companies such as Microsoft, Google, Anthropic, and OpenAI are increasingly developing agentic AI systems capable of automating vulnerability detection, incident response, code analysis, and other security tasks traditionally handled by human teams.

The appeal of using agentic AI as a first line of defence is clear, as cybersecurity teams face mounting pressure from the growing volume of attacks. According to the Microsoft Digital Defense Report 2025, the company now detects more than 600 million cyberattacks daily, ranging from ransomware and phishing campaigns to identity attacks. The International Monetary Fund has also warned that cyber incidents have more than doubled since the COVID-19 pandemic, with the potential to trigger institutional failures and cause enormous financial losses.

Compounding the problem, ransomware groups such as Conti and LockBit, alongside state-linked actors such as Salt Typhoon, have shown increased activity from 2024 through early 2026, targeting critical infrastructure and global communications and exploiting the window before stronger defences arrive to inflict as much damage as possible.

In such circumstances, fully embracing agentic AI may seem like an ideal answer to the cybersecurity challenges looming on the horizon. Systems capable of autonomously detecting threats, analysing vulnerabilities, and accelerating response times could significantly strengthen cyber resilience.

Yet the same autonomy that makes these systems attractive to defenders could also be exploited by malicious actors. If agentic AI becomes a defining feature of cyber defence, policymakers and companies may soon face a more difficult question: how can they maximise its benefits without creating an entirely new layer of cyber risk?

Why cybersecurity is turning to agentic AI

The growing interest in agentic AI is not simply driven by the rise in cyber threats. It is also a response to the operational limitations of modern security teams, which are often overwhelmed by repetitive tasks that consume time and resources.

Security analysts routinely handle phishing alerts, identity verification requests, vulnerability assessments, patch management, and incident prioritisation — processes that can become difficult to manage at scale. Many of these tasks require speed rather than strategic decision-making, creating a natural opening for AI systems to operate with greater autonomy.

Microsoft has aggressively moved into this space. In March 2025, the company introduced Security Copilot agents designed to autonomously handle phishing triage, data security investigations, and identity management. Rather than replacing human analysts, Microsoft positioned the tools to reduce repetitive workloads and enable security teams to focus on more complex threats.

Google has approached the issue through vulnerability research. With Project Naptime, the company demonstrated how AI systems can replicate parts of the workflow traditionally handled by human security researchers: identifying vulnerabilities, testing hypotheses, and reproducing findings.

Anthropic introduced another layer of complexity through Claude Mythos, a model built for high-risk cybersecurity tasks. While the company presented the model as a controlled release for defensive purposes, the announcement also highlighted how advanced cyber capabilities are becoming increasingly embedded in frontier AI systems.

Meanwhile, OpenAI has expanded partnerships with cybersecurity organisations and broadened access to specialised tools for defenders, signalling that major AI firms increasingly view cybersecurity as one of the most commercially viable applications for autonomous systems.

Together, these developments show that agentic AI is gradually becoming embedded in the cybersecurity infrastructure. For many companies, the question is no longer whether autonomous systems can support cyber defence, but how much responsibility they should be given.

When agentic AI tools become offensive weapons

The same capabilities that make agentic AI valuable to defenders also make it attractive to malicious actors. Systems designed to identify vulnerabilities, analyse code, automate workflows, and accelerate decision-making can be repurposed for offensive cyber operations.

Anthropic offered one of the clearest examples of that risk when it disclosed that malicious actors had used Claude in cyber campaigns. The company said attackers were not simply using the model for basic assistance, but were integrating it into broader operational workflows. The incident showed how agentic AI can move cyber misuse beyond advice and into execution.

The risk extends beyond large-scale cyber operations. Agentic AI systems could make phishing campaigns more scalable, automate reconnaissance, accelerate vulnerability discovery, and reduce the technical expertise needed to launch certain attacks. Tasks that once required specialist teams could become easier to coordinate through autonomous systems.

Security researchers have repeatedly warned that generative AI is already making social engineering more convincing through realistic phishing emails, cloned voices, and synthetic identities. More autonomous systems could further push those risks by combining content generation with independent action.

The concern is not that agentic AI will replace human hackers, but that cybercrime could become faster, cheaper, and more scalable, mirroring the same efficiencies that organisations hope to achieve through AI-powered defence.

The agentic AI governance gap

The governance challenge surrounding agentic AI is no longer theoretical. As autonomous systems gain access to internal networks, cloud infrastructure, code repositories, and sensitive datasets, companies and regulators are being forced to confront risks that existing cybersecurity frameworks were not designed to manage.

Policymakers are starting to respond. In February 2026, the US National Institute of Standards and Technology (NIST) launched its AI Agent Standards Initiative, focused on identity verification and authentication frameworks for AI agents operating across digital environments. The aim is simple but important: organisations need to know which agents can be trusted, what they are allowed to do, and how their actions can be traced.
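
To make that idea concrete, the sketch below shows, in Python, one way an organisation might gate an agent’s request: verify a signed agent identity, check the requested action against a per-agent allowlist, and record the decision in an audit trail. The registry contents, token scheme and function names are illustrative assumptions, not part of the NIST initiative or any specific product.

```python
# Illustrative sketch only: a hypothetical agent identity and permission gate.
# The registry, token scheme and names are assumptions made for clarity,
# not an implementation of the NIST AI Agent Standards Initiative.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: shared HMAC key

# Hypothetical registry: which agents exist and what they may do.
AGENT_REGISTRY = {
    "phishing-triage-agent": {"allowed_actions": {"classify_email", "flag_for_review"}},
    "patch-scanner-agent": {"allowed_actions": {"scan_hosts", "open_ticket"}},
}

def verify_agent_token(agent_id: str, token: str) -> bool:
    """Check a simple HMAC-based identity token for the agent (illustrative)."""
    expected = hmac.new(SECRET_KEY, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

def authorise_action(agent_id: str, token: str, action: str) -> bool:
    """Allow an action only for a known, authenticated agent with that permission."""
    entry = AGENT_REGISTRY.get(agent_id)
    permitted = (
        entry is not None
        and verify_agent_token(agent_id, token)
        and action in entry["allowed_actions"]
    )
    # Trace every decision so the agent's actions can be audited later.
    audit_record = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "permitted": permitted,
    }
    print(json.dumps(audit_record))  # in practice: append to a tamper-evident log
    return permitted

if __name__ == "__main__":
    token = hmac.new(SECRET_KEY, b"phishing-triage-agent", hashlib.sha256).hexdigest()
    print(authorise_action("phishing-triage-agent", token, "classify_email"))  # True
    print(authorise_action("phishing-triage-agent", token, "scan_hosts"))      # False
```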

Governments are also becoming more cautious about deployment risks. In May 2026, the Cybersecurity and Infrastructure Security Agency (CISA) joined cybersecurity agencies from Australia, Canada, New Zealand, and the United Kingdom in issuing guidance on the secure adoption of agentic AI services. The warning was clear: autonomous systems become more dangerous when they are connected to sensitive infrastructure, external tools, and internal permissions.

The private sector is adjusting as well. Companies are increasingly discussing safeguards such as restricted permissions, audit logs, human approval checkpoints, and sandboxed environments to limit the degree of autonomy granted to AI agents.

The questions facing businesses are becoming practical. Should an AI agent be allowed to patch vulnerabilities without approval? Can it disable accounts, quarantine systems, or modify infrastructure independently? Who is held accountable when an autonomous system makes the wrong decision?
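
One common pattern behind those questions is the human approval checkpoint: an agent may propose any action, but actions above a certain impact threshold are held until a person signs off. The sketch below illustrates the idea in Python; the action categories, names and handlers are assumptions made for illustration, not any vendor’s actual interface.

```python
# Illustrative sketch only: a human-approval checkpoint for agent actions.
# Risk categories and names are assumptions, not a vendor API.
from dataclasses import dataclass

# Assumption: actions an organisation might treat as low- vs high-impact.
LOW_IMPACT = {"enrich_alert", "collect_logs", "flag_for_review"}
HIGH_IMPACT = {"patch_host", "disable_account", "quarantine_system"}

@dataclass
class ProposedAction:
    agent_id: str
    action: str
    target: str
    rationale: str

def requires_human_approval(proposal: ProposedAction) -> bool:
    """High-impact actions always wait for a person; unknown actions are rejected."""
    if proposal.action in LOW_IMPACT:
        return False
    if proposal.action in HIGH_IMPACT:
        return True
    raise ValueError(f"Unknown action type: {proposal.action}")

def execute(proposal: ProposedAction, approver: str | None = None) -> str:
    if requires_human_approval(proposal) and approver is None:
        # Hold the action in a queue for an analyst instead of acting autonomously.
        return f"PENDING: {proposal.action} on {proposal.target} awaits human approval"
    who = approver or "auto"
    return f"EXECUTED: {proposal.action} on {proposal.target} (approved by {who})"

if __name__ == "__main__":
    triage = ProposedAction("soc-agent", "flag_for_review", "mail-123", "suspected phishing")
    patch = ProposedAction("soc-agent", "patch_host", "srv-42", "critical CVE detected")
    print(execute(triage))                       # runs autonomously
    print(execute(patch))                        # held for approval
    print(execute(patch, approver="analyst-7"))  # runs after sign-off
```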

Agentic AI may become one of cybersecurity’s most effective defensive tools. Its success, however, will depend on whether governance frameworks evolve quickly enough to keep pace with the technology itself.

How companies are building guardrails around agentic AI

As concerns around autonomous cyber systems grow, companies are increasingly experimenting with safeguards designed to prevent agentic AI from becoming an uncontrolled risk. Rather than granting unrestricted access, many organisations are limiting what AI agents can see, what systems they can interact with, and what actions they can execute without human approval.

Anthropic has restricted access to Claude Mythos over concerns about offensive misuse, while OpenAI has recently expanded its Trusted Access for Cyber programme to provide vetted defenders with broader access to advanced cyber tools. Both approaches reflect a growing consensus that powerful cyber capabilities may require tiered access rather than unrestricted deployment.

The broader industry is moving in a similar direction. CrowdStrike has increasingly integrated AI-driven automation into threat intelligence and incident response workflows while maintaining human oversight for critical decisions. Palo Alto Networks has also expanded its AI-powered security automation tools designed to reduce response times without fully removing human analysts from the decision-making process.

Cloud providers are also becoming more cautious about autonomous access. Amazon Web Services, Google Cloud, and Microsoft Azure have increasingly emphasised zero-trust security models, role-based permissions, and segmented access controls as enterprises deploy more automated tools across sensitive infrastructure.

Meanwhile, sectors such as finance, healthcare, and critical infrastructure remain particularly cautious about fully autonomous deployment due to the potential consequences of false positives, accidental shutdowns, or disruptions to essential services.

As a result, security teams are increasingly discussing safeguards such as audit logs, sandboxed environments, role-based permissions, staged deployments, and human approval checkpoints to balance speed with accountability. For now, many companies seem ready to embrace agentic AI, but not without keeping one hand on the emergency brake.

The future of cybersecurity may be agentic

Agentic AI is unlikely to remain a niche experiment for long. The scale of modern cyber threats, combined with the mounting pressure on security teams, means organisations will continue to look for faster and more scalable defensive tools.

That shift could significantly improve cybersecurity resilience. Autonomous systems may help organisations detect threats earlier, reduce response times, address workforce shortages, and manage the growing volume of attacks that human teams increasingly struggle to handle alone.

At the same time, the technology’s long-term success will depend as much on restraint as on innovation. Without clear governance frameworks, operational safeguards, and human oversight, the same tools designed to strengthen cyber defence could introduce entirely new vulnerabilities.

The future of cybersecurity may increasingly belong to agentic AI. Whether that future becomes safer or more volatile may depend on how responsibly governments, companies, and security teams manage the transition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!


Texas lawsuit targets Netflix data practices

The Attorney General of Texas has filed a lawsuit against Netflix, alleging the company unlawfully collected user data without consent. The case claims the platform tracked extensive behavioural information from both adults and children while presenting itself as privacy-conscious.

According to the lawsuit, Netflix allegedly logged viewing habits, device usage and other interactions, turning user activity into monetised data. The lawsuit further claims that this data was shared with brokers and advertising technology firms to build detailed consumer profiles.

The Attorney General also argues that Netflix designed features to increase engagement, including autoplay, which allegedly encouraged prolonged viewing, particularly among younger users. These practices allegedly contradict the platform’s public messaging about being ad-free and family-friendly.

The Texas complaint quotes a statement from Netflix co-founder and chairman Reed Hastings, who allegedly said the company did not collect user data, seeking to distinguish Netflix’s approach to data collection from that of other major technology platforms.

The Attorney General also claims that Netflix’s alleged surveillance violates the Texas Deceptive Trade Practices Act. The legal action seeks to halt the alleged data practices, introduce stricter controls, such as disabling autoplay for children, and impose penalties under consumer protection law, including civil fines of $10,000 per violation. The case highlights ongoing scrutiny of data practices by major technology platforms in the USA.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Child safety concerns dominate Europe’s digital agenda

A growing majority of Europeans believe stronger online protections for children and young people should remain a top policy priority, according to new findings from the Special Eurobarometer on the Digital Decade.

The European Commission said 92% of Europeans consider further action to protect children and young people online a top priority, reflecting sustained concern over the impact of digital platforms on younger users.

Mental health risks linked to social media ranked among the biggest concerns, with 93% of respondents calling for stronger protections. Cyberbullying, online harassment, and better age-restriction mechanisms for inappropriate content were also highlighted by 92% of respondents.

Concerns over AI and online manipulation also remain high. The survey found that 39% of respondents cited privacy or data protection as a barrier to using AI, followed by accuracy or incorrect information at 36% and ethical issues or misuse of generative AI tools at 32%.

Around 87% of Europeans agreed that online manipulation, including disinformation, foreign interference, AI-generated content and deepfakes, poses a threat to democratic processes. Another 80% said AI development should be carefully regulated to ensure safety, even if oversight places constraints on developers.

The findings also show continuing concern over online platforms. Europeans reported being personally affected by fake news and disinformation, misuse of personal data and insufficient protections for minors, with concerns over fake news and child protection showing the sharpest increases since 2024.

Why does it matter?

The findings show that public concern over digital technologies in Europe is increasingly centred on safety, rights and accountability, particularly for children and young people. They also suggest that trust in platforms and AI systems will depend not only on innovation and access, but also on visible safeguards against manipulation, harmful content, privacy risks, and weak protections for minors.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Google warns adversaries are industrialising AI-enabled cyberattacks

Google Threat Intelligence Group says cyber adversaries are moving from early AI experimentation towards the industrial-scale use of generative models across malicious workflows.

In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.

The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.

Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.

Google also highlights PROMPTSPY, an Android backdoor that uses Gemini API capabilities to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.

AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.

Why does it matter?

Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Ombudsman criticises Commission over X risk report access

The European Ombudswoman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudswoman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudswoman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Health New Zealand issues guidance on use of generative AI and large language models

Health New Zealand has published new guidance on generative AI and large language models for healthcare settings.

The guidance states that the National Artificial Intelligence and Algorithm Expert Advisory Group evaluates the use of generative AI tools and LLMs and recommends caution in their application across Health New Zealand environments. It notes that further data is needed to assess risks and benefits in the New Zealand health context.

Employees and contractors are prohibited from entering personal, confidential or sensitive patient or organisational information into unapproved LLMs or generative AI tools. The guidance also says such tools must not be used for clinical decisions or personalised patient advice.

Staff using generative AI tools in other contexts must take full responsibility for checking the information generated and acknowledge when generative AI has been used to create content. Anyone planning to use generative AI or LLMs is also asked to seek advice from the advisory group.

The guidance highlights potential risks including privacy breaches, inaccurate or misleading outputs, bias in training data, lack of transparency in model outputs, data sovereignty concerns and intellectual property risks. It also notes that generative AI systems may not adequately support te reo Māori and other minority languages spoken in Aotearoa New Zealand.

Why does it matter?

The guidance shows how health systems are beginning to set practical boundaries for generative AI before its use becomes routine in clinical and administrative settings. By prohibiting unapproved tools for patient data, clinical decisions and personalised advice, Health New Zealand is drawing a clear line between limited productivity uses and high-risk healthcare applications. At the same time, its references to Māori data sovereignty and language support widen the governance frame to include equity, cultural rights and data protection concerns that standard technology policies may not fully address.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Young users’ reliance on ChatGPT raises questions over AI advice and autonomy

Sam Altman has described a generational divide in how people use ChatGPT, saying younger users are integrating the tool more deeply into learning, planning and everyday decision-making.

Speaking at Sequoia Capital’s AI Ascent 2025, the OpenAI CEO said older users tend to treat ChatGPT more like a search tool, while people in their 20s and 30s often use it as a personal advisor. College students, he said, are going further by treating ChatGPT almost like an operating system, connecting it to files, tasks and complex workflows.

The remarks point to a shift in how AI tools are being embedded into daily routines, particularly among students and younger adults. Business Insider reported that a February 2025 OpenAI report found US college students were among the platform’s most frequent users, while a Pew Research Center survey found that 26% of US teens aged 13 to 17 used ChatGPT for schoolwork in 2024, double the share recorded in 2023.

Altman’s comments also raise questions about dependence, accuracy and boundaries as AI systems move closer to advisory roles. While users may benefit from private spaces to test ideas, organise tasks and prepare decisions, concerns remain over over-reliance, data privacy and the shifting role of human relationships in decision-making.

Why does it matter?

The trend suggests that AI is becoming more than an information tool for younger users. As ChatGPT and similar systems become part of studying, planning and personal decision-making, they influence not only how information is consumed, but also how habits, confidence and judgement develop.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!  

Australia launches national AI platform ‘AI.gov.au’

The Department of Industry, Science and Resources has announced the launch of AI.gov.au through the National Artificial Intelligence Centre. The platform is designed to help organisations adopt AI safely and responsibly in line with the National AI Plan.

AI.gov.au provides a central source of guidance, tools and resources to support businesses and not-for-profits. It aims to help users identify AI opportunities, plan implementation, manage risks and build internal capability.

The platform’s development was informed by research and engagement with industry and government, which highlighted the need for clear starting points, practical advice and support for organisational change related to AI adoption. It also supports the AI Safety Institute’s work by improving access to safety guidance.

Initial features focus on small and medium-sized enterprises and include training, case studies and adoption tools, with further updates planned. The initiative reflects efforts to strengthen AI uptake and governance in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot