Microsoft reports large-scale phishing campaign targeting organisations across sectors

Microsoft has disclosed a phishing campaign aimed at stealing credentials from more than 35,000 users across 26 countries. The attack, detected in April 2026, targeted over 13,000 organisations, with a heavy concentration in healthcare, financial services, professional services, and technology sectors.

Microsoft said the campaign used email templates designed to mimic internal corporate communications, often framed as code of conduct or compliance-related notices.

Attackers created a sense of urgency through time-sensitive prompts and attached PDFs that redirected victims to credential-harvesting pages hosted on attacker-controlled infrastructure, Microsoft added.

The attack chain included multiple verification steps, such as CAPTCHA screens and intermediate landing pages intended to bypass automated defences and increase legitimacy.

Ultimately, victims were directed to fake sign-in portals using adversary-in-the-middle techniques, enabling real-time capture of credentials and authentication tokens, including multi-factor authentication bypass.
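One widely used mitigation against such adversary-in-the-middle flows is to verify that a sign-in page is actually served from the organisation's legitimate identity provider before credentials are entered. The sketch below illustrates the idea with an exact-host allowlist check; the host names are illustrative assumptions, not drawn from Microsoft's report.

```python
from urllib.parse import urlparse

# Illustrative allowlist of legitimate identity-provider hosts.
# 'login.example-corp.com' is hypothetical; a real deployment would
# populate this from the organisation's actual SSO configuration.
LEGITIMATE_IDP_HOSTS = {
    "login.microsoftonline.com",
    "login.example-corp.com",
}

def is_trusted_signin_url(url: str) -> bool:
    """Return True only if the URL uses HTTPS and its host exactly
    matches an allowlisted identity provider.

    Adversary-in-the-middle kits proxy the real sign-in page from a
    lookalike host, so an exact host match (not a substring check) is
    essential: 'login.microsoftonline.com.evil.example' must fail.
    """
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in LEGITIMATE_IDP_HOSTS

assert is_trusted_signin_url("https://login.microsoftonline.com/common/oauth2")
assert not is_trusted_signin_url("https://login.microsoftonline.com.evil.example/common")
```

Phishing-resistant authenticators such as FIDO2 security keys enforce an equivalent origin check cryptographically, which is why they resist the kind of real-time token capture described above.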

The disclosure comes amid a wider surge in phishing activity, with Microsoft reporting billions of attempts and a rapid rise in QR code-based attacks and CAPTCHA-gated phishing flows.

Why does it matter? 

The campaign shows phishing evolving into highly convincing, enterprise-style attacks that are harder to detect and increasingly scalable. By bypassing both human judgment and security controls like multi-factor authentication, it significantly raises the risk of large-scale account compromise.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report highlights growing role of AI in cybersecurity operations

A World Economic Forum white paper (Empowering Defenders: AI for Cybersecurity), developed with KPMG, states that AI is becoming a core capability for modern cybersecurity. The report notes that attackers are using AI to increase speed, scale and sophistication, while defenders are also adopting AI to improve detection, response and resilience.

The report describes how AI is being used across the cybersecurity lifecycle, from cyber governance and risk identification to threat detection, incident response and recovery. Case studies from major organisations highlight applications in phishing detection, vulnerability management, malware analysis, threat intelligence and automated security reviews.

The WEF report also states that effective adoption depends on more than technology investment. Organisations need executive support, reliable data, skilled teams, mature infrastructure, and clear governance before deploying AI in critical security operations.

The report also highlights the rise of agentic AI, where autonomous systems can detect, coordinate and respond to threats with limited human intervention. It adds that while these systems could help defenders act faster, they may also introduce risks related to accountability, unintended behaviour and over-reliance on automation.

Why does it matter?

The central message of the report is that AI can strengthen cyber defence only when paired with human judgement, structured pilots, continuous monitoring and clear safeguards. Without these foundations, organisations risk creating fragile systems instead of resilient ones.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime communities face skills gap despite rise of AI tools

A major study by researchers from the universities of Cambridge, Edinburgh, and Strathclyde, published by the Centre for Emerging Technology and Security at the Alan Turing Institute, suggests cybercriminals are still struggling to use AI effectively in their operations despite widespread attention around tools such as ChatGPT.

Researchers analysed more than 100 million posts from underground and dark web forums to assess how AI is being adopted within cybercrime communities.

Drawing on the CrimeBB database, the research found that most offenders lack the technical skills and resources needed to integrate AI into criminal activity. Rather than lowering barriers to entry, AI tools benefit already skilled actors far more than inexperienced ones.

The analysis shows AI is used most successfully in already highly automated areas, such as social media bots linked to harassment and fraud, as well as in efforts to mask patterns that cybersecurity systems might otherwise detect. While experimentation is increasing, the researchers found little sign that AI is delivering a broad or transformative boost to overall cybercriminal capability. Mainstream chatbot guardrails were also found to be limiting harmful use in practice.

The researchers argue that the more immediate concern for industry is not dramatic AI-enabled innovation among cybercriminals, but insecure adoption of AI within legitimate organisations. They point to risks from poorly secured agentic AI systems and from AI-generated ‘vibecoded’ software being deployed without adequate safeguards.

Why does it matter?

The findings challenge a common assumption that generative AI is already giving cybercriminals a major operational advantage. Instead, the more immediate and scalable risk may come from companies deploying insecure AI systems faster than they can secure them. That shifts attention away from worst-case speculation about criminal innovation and towards a more practical cyber policy question: whether organisations are introducing new AI-enabled vulnerabilities into mainstream digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Canada and partners welcome EU as strategic partner in telecom coalition

The Government of Canada and its international partners have announced that the European Union has joined the Global Coalition on Telecommunications as its first strategic partner, reinforcing cooperation on secure, resilient, and trusted next-generation telecom networks.

The coalition, established in 2023, brings together governments, including Canada, the United States, the United Kingdom, Japan, and Australia, to promote secure supply chains, interoperable standards, and telecommunications innovation. More recent expansion has also brought in Finland and Sweden, widening the coalition’s international reach and its work on future telecom technologies, including 6G.

The EU’s inclusion reflects a shared interest in closer policy coordination, technical standards development, and telecom innovation. As a strategic partner, the EU is expected to contribute to discussions, support coalition workstreams, and collaborate on initiatives aligned with the group’s broader objectives. Strategic partnerships are designed to allow flexible cooperation while leaving governance control with the coalition’s core members.

Canadian officials described the step as a significant milestone in efforts to strengthen secure and trusted telecommunications networks through joint policy, research, and innovation. In practical terms, and in line with the coalition's stated purpose, the move points to a broader effort among like-minded partners to shape the future of telecom infrastructure through coordinated international action rather than fragmented national approaches.

Why does it matter?

The significance of the move lies in the way telecom policy is increasingly being treated as a strategic coordination issue rather than just a domestic infrastructure question. By bringing the EU into the coalition as its first strategic partner, the group is widening its capacity to shape standards, supply chain resilience, and future network technologies across a broader transatlantic and Indo-Pacific policy space. That matters because the contest over telecom systems is no longer only about connectivity, but also about security, industrial policy, and influence over the technologies that will underpin future digital economies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK’s NCSC warns AI could expose software vulnerabilities at scale

The UK's National Cyber Security Centre (NCSC) says that AI is reshaping cybersecurity by exposing vulnerabilities across software ecosystems. The agency warns that organisations must prepare for a large-scale patch wave, as AI enables faster identification and exploitation of weaknesses than traditional defences can handle.

Technical debt, built through years of prioritising short-term efficiency instead of long-term resilience, is now being exposed at scale.

The NCSC notes that AI capabilities enable attackers to identify weaknesses faster and more comprehensively, creating pressure on organisations to respond with rapid and coordinated patching strategies across entire technology environments.

The NCSC's recommended approach prioritises internet-facing systems and external attack surfaces, followed by internal infrastructure and critical security assets.

Automated updates and hot patching are encouraged where available, while organisations lacking such capabilities must adopt scalable and risk-based update processes. Legacy systems without support present a particular risk, requiring replacement instead of reliance on patching alone.
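As a rough illustration of that risk-based ordering, the sketch below scores assets in the sequence the guidance describes. It is a simplified model of the prioritisation logic, not NCSC tooling, and the asset fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_facing: bool      # exposed to the external attack surface
    security_critical: bool    # e.g. identity providers, firewalls, EDR
    vendor_supported: bool     # unsupported legacy systems cannot be patched

def patch_priority(asset: Asset) -> int:
    """Lower number = act sooner.

    The ordering loosely mirrors the guidance described above:
    internet-facing systems first, then critical security assets,
    then remaining internal infrastructure. Unsupported legacy
    systems are flagged for replacement rather than patching.
    """
    if not asset.vendor_supported:
        return 0  # replacement candidate: patching alone is not enough
    if asset.internet_facing:
        return 1
    if asset.security_critical:
        return 2
    return 3

inventory = [
    Asset("legacy-scada-gw", False, True, False),
    Asset("public-web-01", True, False, True),
    Asset("hr-fileshare", False, False, True),
    Asset("idp-core", False, True, True),
]
for a in sorted(inventory, key=patch_priority):
    print(patch_priority(a), a.name)
```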

The NCSC adds that, beyond software updates, the challenge reflects a deeper structural issue within digital ecosystems. Stronger cyber resilience depends on reducing systemic vulnerabilities through secure design practices, improved monitoring, and supply chain readiness.

The agency also warns that organisations that fail to prepare for continuous, large-scale patching cycles risk increased exposure as AI continues to reshape the cybersecurity landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI risks outlined in joint cyber agency guidance

Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments.

The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset.

The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

It defines agentic AI as systems composed of one or more agents that rely on AI models, such as large language models, to interpret context, make decisions, and take actions, often without continuous human intervention. The guidance says these systems often combine an LLM-based agent with tools, external data, memory, and planning functions, which expands both capability and attack surface.

The agencies say agentic AI inherits many of the vulnerabilities already associated with large language models while introducing greater complexity and new systemic risks. The document identifies five broad categories of concern: privilege risks, design and configuration risks, behaviour risks, structural risks, and accountability risks.

It warns that over-privileged agents, insecure third-party tools, goal misalignment, emergent or deceptive behaviour, and opaque decision-making chains can all increase the likelihood and impact of compromise. To reduce those risks, the guidance recommends secure design, strong identity management, defence-in-depth, comprehensive testing, threat modelling, progressive deployment, isolation, continuous monitoring, and strict privilege controls.

The agencies also stress that human approval should remain in place for high-impact actions and that agentic AI security should be treated as part of broader cybersecurity governance rather than as a separate discipline. The document concludes by calling for stronger research, collaboration, and agent-specific evaluations as the technology matures.
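A minimal way to picture the privilege controls and human-approval recommendations is a default-deny gate in front of an agent's tool calls. The sketch below uses hypothetical tool names and illustrates the principle, not an implementation from the guidance.

```python
# Hypothetical tool names for illustration; a real deployment would map
# these to the agent's actual tool registry.
LOW_RISK_TOOLS = {"search_docs", "summarise_text"}
HIGH_IMPACT_TOOLS = {"send_payment", "delete_records"}

def authorise_tool_call(tool: str, human_approved: bool = False) -> bool:
    """Allow low-risk tools freely; gate high-impact tools behind
    explicit human approval, mirroring the human-in-the-loop
    recommendation for high-impact actions.
    """
    if tool in LOW_RISK_TOOLS:
        return True
    if tool in HIGH_IMPACT_TOOLS:
        return human_approved
    return False  # default-deny: unknown tools are refused

assert authorise_tool_call("search_docs")
assert not authorise_tool_call("send_payment")           # blocked without approval
assert authorise_tool_call("send_payment", human_approved=True)
assert not authorise_tool_call("exfiltrate_data")        # unknown tool: denied
```

Default-deny matters here because agent behaviour can be emergent: a tool the designers never anticipated the agent requesting should be refused rather than silently allowed.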

Why does it matter?

The guidance matters because it draws a clear line between ordinary AI adoption and agentic systems that can act with far more autonomy inside real operational environments. Once AI tools move from assisting users to making decisions, calling tools, and interacting with sensitive systems, the security challenge shifts from model safety alone to full organisational risk management. That is why the document treats agentic AI not as a niche technical issue, but as a governance and cyber resilience problem that organisations need to control before deploying at scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US military expands AI deployment across classified networks

The US Department of Defence has announced agreements with leading technology firms to deploy advanced AI capabilities across classified military networks. The initiative forms part of a broader effort to position the United States as a more AI-enabled military power.

Companies including OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, and SpaceX are reported to be involved in supporting deployment within high-security Impact Level 6 and 7 environments. The integration is intended to improve data synthesis, situational awareness, and operational decision-making across defence systems.

The department’s internal platform, GenAI.mil, is also being presented as a central part of this push, with senior officials describing it as a way to put advanced AI tools into the hands of personnel across the department and across different classification levels.

Officials have emphasised that maintaining access to a range of AI providers is important to avoid vendor lock-in and preserve long-term flexibility. In the Pentagon's framing, the move reflects a wider attempt to strengthen national security through advanced technology while keeping the military AI stack diversified rather than dependent on a single company or model family.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Swisscom says AI and geopolitics are reshaping the cyber threat landscape

Swisscom has published its 2026 Cybersecurity Threat Radar, warning that cyber threats have grown more complex over the past year as geopolitical tensions and disruptive technologies put added pressure on digital systems. The report presents AI, supply chain exposure, digital sovereignty, and operational technology security as four strategic risk areas for organisations.

The report highlights state-linked cyber activity, hybrid influence operations such as disinformation, and supply chain attacks as key drivers of the current threat environment. It argues that digital transformation has increased dependence on cloud services, third-party software, AI systems, and networked industrial infrastructure, making organisations more exposed to cascading failures and external dependencies.

On AI, Swisscom describes insecure AI use as a risk multiplier. While AI can improve productivity, the report warns that poor governance, weak visibility into models, and uncontrolled use of AI tools in operational environments can expand attack surfaces, affect data quality, and create new compliance challenges.

Software supply chains are also identified as a persistent vulnerability. Swisscom says a single compromised component or manipulated update process can have far-reaching consequences across interconnected systems, making software integrity, origin verification, and traceability increasingly important as mitigation measures.
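In practice, the most basic form of that integrity verification is checking a downloaded artefact against a digest the vendor publishes over a separate channel. A minimal sketch, assuming a SHA-256 digest is available:

```python
import hashlib

def sha256_matches(path: str, expected_hex: str) -> bool:
    """Verify a downloaded artefact against a digest published by its
    vendor. Reading in chunks keeps memory use flat for large installers
    or update packages.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_hex.lower()

# Hypothetical usage: the expected digest would come from the vendor's
# release notes or a signed manifest, fetched over a separate channel.
# if not sha256_matches("update-1.2.3.pkg", "ab12..."):
#     raise RuntimeError("artefact failed integrity check; do not install")
```

Cryptographic signatures on release manifests go a step further than bare digests, since they also verify origin rather than only content.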

The convergence of information technology and operational technology is presented as another growing area of concern. In sectors such as energy, healthcare, manufacturing, and building automation, incidents can have consequences that go well beyond financial loss, affecting critical infrastructure, production, and even human safety.

The report also places greater emphasis on digital sovereignty, arguing that organisations need clearer visibility over where data is processed, which legal regimes apply, and how dependent they are on cloud and technology providers. In that sense, Swisscom frames cybersecurity less as a narrow IT function and more as a strategic governance issue tied to resilience, control, and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware accounts for 90% of cyber losses in manufacturing, claims data shows

Ransomware is responsible for 90% of total cyber-related financial losses in the manufacturing sector, despite accounting for only 12% of claim volume by number, according to an analysis of insurance claims data published by Resilience.

The findings indicate that while ransomware incidents are not the most frequently filed claim type, they produce disproportionately large financial losses when they occur. The manufacturing sector’s low tolerance for operational downtime is identified as a contributing factor to loss severity.

Additional findings from the claims dataset include:

  • 30% of manufacturing claims are linked to phishing and transfer fraud
  • 26% of total losses are associated with multi-factor authentication (MFA) misconfiguration
  • 12% of claims involve wrongful data collection

The report identifies MFA misconfiguration as a notable area of exposure, alongside procedural gaps in financial transfer controls. Recommended mitigation measures include auditing MFA deployment, implementing transfer verification procedures, and investing in ransomware containment capabilities.
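An MFA deployment audit can start as something very simple, such as flagging accounts with no enrolled second factor. The sketch below assumes a hypothetical CSV export with 'username' and 'mfa_methods' columns; real identity providers expose equivalent data through their own admin reports or APIs.

```python
import csv

def users_without_mfa(export_path: str) -> list[str]:
    """Flag accounts with no enrolled MFA method in a hypothetical
    identity-provider export, where 'mfa_methods' is empty for
    unenrolled accounts.
    """
    flagged = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if not row.get("mfa_methods", "").strip():
                flagged.append(row["username"])
    return flagged

# Hypothetical usage:
# print(users_without_mfa("idp_user_export.csv"))
```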

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU and Republic of Korea launch aviation partnership on technical cooperation and cyber resilience

European and South Korean aviation authorities are conducting a three-week series of technical exchanges in Seoul, covering safety oversight, airspace management, and cybersecurity.

The European Union Aviation Safety Agency (EASA) and South Korea’s Ministry of Land, Infrastructure and Transport are participating under the EU–Republic of Korea Aviation Partnership Project, an EU-funded initiative announced by the European External Action Service (EEAS).

The programme began with a three-day session on the International Civil Aviation Organisation’s Universal Safety Oversight Audit Programme (USOAP), which assesses national aviation safety oversight systems. EASA presented findings from its most recent ICAO audit, with discussions covering oversight frameworks, organisational structures, and lessons identified.

A workshop on performance-based navigation and airspace management followed, addressing procedures to improve the predictability and efficiency of aircraft arrivals, including at airports with parallel runways.

A third workshop on aviation cybersecurity is scheduled for the coming week. It will cover security considerations across aviation systems, including aircraft certification processes and air traffic management infrastructure.

The activities are designed to facilitate technical exchange between Korean and European stakeholders across the aviation sector, according to EASA.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!