Australia and Japan expand cooperation on AI, supply chains and resilience

Australia and Japan have issued a joint declaration on economic security cooperation, stating that economic and technological resilience are central to national security and setting out a broad agenda for closer bilateral coordination across supply chains, critical technologies, and Indo-Pacific connectivity.

The declaration states that economic resilience is foundational to both countries’ security and that the framework is intended to strengthen strategic autonomy, indispensability, and regional resilience.

Furthermore, the declaration commits the two governments to closer policy alignment through existing bilateral mechanisms and to consultation on economic security contingencies linked to geopolitical tensions, economic coercion, and major market disruptions.

A major focus is on supply chain security in strategically significant sectors. Australia and Japan reaffirmed their partnership on minerals, energy, food, and industrial goods, while expressing concern over economic coercion, harmful overcapacity, and export restrictions, particularly in critical minerals.

The declaration also highlights cooperation on critical minerals projects, domestic smelting and metals processing, and coordination among government-backed finance institutions to support investment and supply chain resilience.

The text also emphasises critical and emerging technologies. Australia and Japan say they will deepen cooperation on research security and integrity, while promoting trusted collaboration between governments, national laboratories, industry, and academia in areas including AI, data centres, quantum, biotechnology, space, undersea cables, and telecommunications. The declaration also links advanced technologies to defence industry cooperation and supply chain collaboration.

In the Indo-Pacific, the two countries say they will work together to foster a safe, secure, and trustworthy AI and digital ecosystem, including through the Hiroshima AI Process and cooperation on digital infrastructure such as telecommunications, undersea cables, data centres, and all-photonics networks. The declaration also commits them to stronger coordination on secure undersea cables, describing them as vital regional infrastructure.

More broadly, Australia and Japan reaffirm support for a rules-based international economic order centred on the World Trade Organization, while also backing further work through the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, the Asia-Pacific Economic Cooperation, the Quad, the Asia Zero Emission Community, and other regional initiatives.

The declaration presents economic security cooperation not only as a bilateral priority but as part of a wider effort to strengthen resilience, secure connectivity, and trusted technology governance across the Indo-Pacific.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report calls for shift towards cyber resilience amid global threats

A World Economic Forum report states that the growing complexity of global cyber threats requires a shift from traditional cybersecurity approaches towards a broader model of cyber resilience.

The report notes that with nearly 70% of the global population online, digital infrastructure underpins critical sectors including healthcare, finance and public services. While interconnected systems deliver significant benefits, they also create cascading risks that can spread rapidly across borders and industries.

Recent cyber incidents have demonstrated how local breaches can escalate into global disruptions, exposing vulnerabilities in highly interconnected systems, the report notes. At the same time, the rise of state-linked cyber activity and large-scale cybercrime adds further complexity to the threat landscape.

The report by the WEF highlights fragmentation as a major barrier to effective response. Differences in political priorities, regulatory frameworks and technical capabilities create gaps that attackers can exploit, while limiting the ability of governments and organisations to coordinate effectively.

Emerging technologies such as AI and quantum computing are expanding both capabilities and risks, the report states.

The WEF report calls for a more coordinated global approach, including implementation of international norms, stronger capacity-building efforts and enhanced cooperation between governments, industry and civil society.

Why does it matter?

The WEF report is important because it reframes cyber threats as systemic, cross-border risks instead of isolated incidents, showing that fragmented regulation, uneven capabilities and weak cooperation can allow a single breach to cascade across critical infrastructure, economies and public services. Emerging technologies like AI are accelerating both the scale and sophistication of attacks, making coordinated international resilience a necessary condition for maintaining stability.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Peacebuilding and AI in focus at UNSSC webinar series

The United Nations System Staff College has highlighted growing interest across the UN and the wider peacebuilding community in how artificial intelligence is shaping conflict prevention, arguing that the technology can support peace efforts but cannot replace human judgement, diplomacy, and oversight.

The reflection draws on a three-part webinar series launched by UNSSC to examine AI governance, field use, and ethical risks in peacebuilding. According to the text, one message ran across all three discussions: AI may offer real value for conflict prevention, but its role should remain supportive rather than substitutive.

The piece argues that AI is already being used across the UN peace and security pillar and should be introduced only where it improves effectiveness, such as by handling repetitive tasks and allowing staff to focus on analysis, leadership, and political judgement. It also stresses that principles long associated with peacebuilding, including trust and ‘do no harm’, should apply across the full AI stack, from data and infrastructure to model design and deployment.

Examples cited from the webinar series include the use of augmented intelligence in early warning systems, where machine learning is combined with human contextual knowledge, and an AI-enabled WhatsApp chatbot used in Yemen to broaden participation in mediation, particularly among women and young people. The text presents these cases as evidence that AI can extend the reach of peacebuilding tools without replacing practitioners.

The final part of the reflection focuses on governance and ethics. It argues that while ethical AI principles are widely discussed, they need to be translated into practical, context-specific safeguards, especially in conflict settings. It also notes that risks differ across use cases such as early warning, social media monitoring, and mediation support, and says meaningful governance requires input from diplomats, researchers, mediators, and the private sector.

UNSSC says the webinar series drew between 300 and 500 registrants per session, which it presents as evidence of strong demand for more targeted learning on AI and peacebuilding. The college argues that its role should extend beyond convening discussion to turning those debates into practical knowledge for UN practitioners working at the intersection of AI and conflict prevention.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft reports large-scale phishing campaign targeting organisations across sectors

Microsoft has disclosed a phishing campaign aimed at stealing credentials from more than 35,000 users across 26 countries. The attack, detected in April 2026, targeted over 13,000 organisations, with a heavy concentration in healthcare, financial services, professional services, and technology sectors.

Microsoft said the campaign used email templates designed to mimic internal corporate communications, often framed as code of conduct or compliance-related notices.

Attackers created a sense of urgency through time-sensitive prompts and attached PDFs that redirected victims to credential-harvesting pages hosted on attacker-controlled infrastructure, Microsoft added.

The attack chain included multiple verification steps, such as CAPTCHA screens and intermediate landing pages intended to bypass automated defences and increase legitimacy.

Ultimately, victims were directed to fake sign-in portals using adversary-in-the-middle techniques, enabling real-time capture of credentials and authentication tokens, including multi-factor authentication bypass.

The disclosure comes amid a wider surge in phishing activity, with Microsoft reporting billions of attempts and a rapid rise in QR code-based attacks and CAPTCHA-gated phishing flows.

Why does it matter? 

The campaign shows phishing evolving into highly convincing, enterprise-style attacks that are harder to detect and increasingly scalable. By bypassing both human judgment and security controls like multi-factor authentication, it significantly raises the risk of large-scale account compromise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report highlights growing role of AI in cybersecurity operations

A World Economic Forum white paper (Empowering Defenders: AI for Cybersecurity), developed with KPMG, states that AI is becoming a core capability for modern cybersecurity. The report notes that attackers are using AI to increase speed, scale and sophistication, while defenders are also adopting AI to improve detection, response and resilience.

The report describes how AI is being used across the cybersecurity lifecycle, from cyber governance and risk identification to threat detection, incident response and recovery. Case studies from major organisations highlight applications in phishing detection, vulnerability management, malware analysis, threat intelligence and automated security reviews.

The WEF report also states that effective adoption depends on more than technology investment. Organisations need executive support, reliable data, skilled teams, mature infrastructure and clear governance before deploying AI in critical security operations.

The report also highlights the rise of agentic AI, where autonomous systems can detect, coordinate and respond to threats with limited human intervention. It adds that while these systems could help defenders act faster, they may also introduce risks related to accountability, unintended behaviour and over-reliance on automation.

Why does it matter?

The central message of the report is that AI can strengthen cyber defence only when paired with human judgement, structured pilots, continuous monitoring and clear safeguards. Without these foundations, organisations risk creating fragile systems instead of resilient ones.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime communities face skills gap despite rise of AI tools

A major study by researchers from the universities of Cambridge, Edinburgh, and Strathclyde, published by the Centre for Emerging Technology and Security at the Alan Turing Institute, suggests cybercriminals are still struggling to use AI effectively in their operations despite widespread attention around tools such as ChatGPT.

Researchers analysed more than 100 million posts from underground and dark web forums to assess how AI is being adopted within cybercrime communities.

The research, based on the CrimeBB database of underground forum posts, found that most offenders lack the technical skills and resources needed to integrate AI into criminal activity. Rather than lowering barriers to entry, AI tools benefit already skilled actors far more than inexperienced ones.

The analysis shows AI is used most successfully in already highly automated areas, such as social media bots linked to harassment and fraud, as well as in efforts to mask patterns that cybersecurity systems might otherwise detect. While experimentation is increasing, the researchers found little sign that AI is delivering a broad or transformative boost to overall cybercriminal capability. Mainstream chatbot guardrails were also found to be limiting harmful use in practice.

The researchers argue that the more immediate concern for industry is not dramatic AI-enabled innovation among cybercriminals, but insecure adoption of AI within legitimate organisations. They point to risks from poorly secured agentic AI systems and from AI-generated ‘vibecoded’ software being deployed without adequate safeguards.

Why does it matter?

The findings challenge a common assumption that generative AI is already giving cybercriminals a major operational advantage. Instead, the more immediate and scalable risk may come from companies deploying insecure AI systems faster than they can secure them. That shifts attention away from worst-case speculation about criminal innovation and towards a more practical cyber policy question: whether organisations are introducing new AI-enabled vulnerabilities into mainstream digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Canada and partners welcome EU as strategic partner in telecom coalition

The Government of Canada and its international partners have announced that the European Union has joined the Global Coalition on Telecommunications as its first strategic partner, reinforcing cooperation on secure, resilient, and trusted next-generation telecom networks.

The coalition, established in 2023, brings together governments, including Canada, the United States, the United Kingdom, Japan, and Australia, to promote secure supply chains, interoperable standards, and telecommunications innovation. More recent expansion has also brought in Finland and Sweden, widening the coalition’s international reach and its work on future telecom technologies, including 6G.

The EU’s inclusion reflects a shared interest in closer policy coordination, technical standards development, and telecom innovation. As a strategic partner, the EU is expected to contribute to discussions, support coalition workstreams, and collaborate on initiatives aligned with the group’s broader objectives. Strategic partnerships are designed to allow flexible cooperation while leaving governance control with the coalition’s core members.

Canadian officials described the step as a significant milestone in efforts to strengthen secure and trusted telecommunications networks through joint policy, research, and innovation. In practical terms, the move points to a broader effort among like-minded partners to shape the future of telecom infrastructure through coordinated international action rather than fragmented national approaches.

Why does it matter?

The significance of the move lies in the way telecom policy is increasingly being treated as a strategic coordination issue rather than just a domestic infrastructure question. By bringing the EU into the coalition as its first strategic partner, the group is widening its capacity to shape standards, supply chain resilience, and future network technologies across a broader transatlantic and Indo-Pacific policy space. That matters because the contest over telecom systems is no longer only about connectivity, but also about security, industrial policy, and influence over the technologies that will underpin future digital economies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC warns AI could expose software vulnerabilities at scale

The UK's National Cyber Security Centre (NCSC) says that AI is reshaping cybersecurity by exposing vulnerabilities across software ecosystems.
The agency warns that organisations must prepare for a large-scale wave of patching, as AI enables faster identification and exploitation of weaknesses than traditional defences can handle.

Technical debt, built through years of prioritising short-term efficiency instead of long-term resilience, is now being exposed at scale.

The NCSC notes that AI capabilities enable attackers to identify weaknesses faster and more comprehensively, creating pressure on organisations to respond with rapid and coordinated patching strategies across entire technology environments.

The NCSC's recommended approach prioritises internet-facing systems and external attack surfaces, followed by internal infrastructure and critical security assets.

Automated updates and hot patching are encouraged where available, while organisations lacking such capabilities must adopt scalable and risk-based update processes. Legacy systems without support present a particular risk, requiring replacement instead of reliance on patching alone.

The NCSC adds that, beyond software updates, the challenge reflects a deeper structural issue within digital ecosystems. Stronger cyber resilience depends on reducing systemic vulnerabilities through secure design practices, improved monitoring and supply chain readiness.

The agency also warns that organisations that fail to prepare for continuous, large-scale patching cycles risk increased exposure as AI continues to reshape the cybersecurity landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI risks outlined in joint cyber agency guidance

Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments.

The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset.

The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

It defines agentic AI as systems composed of one or more agents that rely on AI models, such as large language models, to interpret context, make decisions, and take actions, often without continuous human intervention. The guidance says these systems often combine an LLM-based agent with tools, external data, memory, and planning functions, which expands both capability and attack surface.

The agencies say agentic AI inherits many of the vulnerabilities already associated with large language models while introducing greater complexity and new systemic risks. The document identifies five broad categories of concern: privilege risks, design and configuration risks, behaviour risks, structural risks, and accountability risks.

It warns that over-privileged agents, insecure third-party tools, goal misalignment, emergent or deceptive behaviour, and opaque decision-making chains can all increase the likelihood and impact of compromise. To reduce those risks, the guidance recommends secure design, strong identity management, defence-in-depth, comprehensive testing, threat modelling, progressive deployment, isolation, continuous monitoring, and strict privilege controls.

The agencies also stress that human approval should remain in place for high-impact actions and that agentic AI security should be treated as part of broader cybersecurity governance rather than as a separate discipline. The document concludes by calling for stronger research, collaboration, and agent-specific evaluations as the technology matures.

Why does it matter?

The guidance matters because it draws a clear line between ordinary AI adoption and agentic systems that can act with far more autonomy inside real operational environments. Once AI tools move from assisting users to making decisions, calling tools, and interacting with sensitive systems, the security challenge shifts from model safety alone to full organisational risk management. That is why the document treats agentic AI not as a niche technical issue, but as a governance and cyber resilience problem that organisations need to control before deploying at scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US military expands AI deployment across classified networks

The US Department of Defence has announced agreements with leading technology firms to deploy advanced AI capabilities across classified military networks. The initiative forms part of a broader effort to position the United States as a more AI-enabled military power.

Companies including OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, and SpaceX are reported to be involved in supporting deployment within high-security Impact Level 6 and 7 environments. The integration is intended to improve data synthesis, situational awareness, and operational decision-making across defence systems.

The department’s internal platform, GenAI.mil, is also being presented as a central part of this push, with senior officials describing it as a way to put advanced AI tools into the hands of personnel across the department and across different classification levels.

Officials have emphasised that maintaining access to a range of AI providers is important to avoid vendor lock-in and preserve long-term flexibility. According to the reported framing of the agreements, the move reflects a wider attempt to strengthen national security through advanced technology while keeping the military AI stack diversified rather than dependent on a single company or model family.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!