Cybercrime Atlas launches open-source map of criminal networks

Cybercrime Atlas has launched Cosmos, an open-source platform designed to map global cybercrime networks and strengthen cooperation among defenders, investigators, prosecutors and policymakers.

Hosted by the World Economic Forum’s Centre for Cybersecurity, Cybercrime Atlas aims to build a shared understanding of cybercriminal ecosystems at a time when ransomware, fraud and illicit digital services are becoming increasingly organised and industrialised.

The initiative responds to a long-standing problem in cybercrime disruption: fragmented terminology, isolated investigations and inconsistent reporting structures. Cosmos aims to standardise definitions, organise threat intelligence into a shared structure and help different actors coordinate more effectively across borders.

The first version of the platform contains nine core categories, 229 identified cybercrime-related elements and 849 mapped connections showing how criminal networks, tools and services interact. The dataset is designed to expand as the wider community contributes new intelligence.

Why does it matter?

Cybercrime increasingly functions as an interconnected ecosystem, with specialised groups, tools, infrastructure providers and illicit services supporting one another across borders. A shared map of those relationships could help shift cyber defence from isolated incident response towards more coordinated disruption of criminal networks, while giving investigators and policymakers a clearer view of how digital crime is organised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Joint cybersecurity agencies publish guidance on secure adoption of agentic AI

Cybersecurity agencies from Australia, Canada, New Zealand, the United Kingdom and the United States have published joint guidance on the secure adoption of agentic AI services in organisational IT environments.

The guidance is intended to help organisations design, develop, deploy and operate agentic AI systems, and to make informed decisions about risks and mitigations. It primarily focuses on large-language-model-based agentic AI systems.

The publication examines threats to and vulnerabilities within agentic AI systems, including risks introduced through system components, integrations and downstream use. It also considers broader risks arising from agentic AI behaviour in IT environments.

The guidance covers wider agentic AI security considerations, specific security risks, best practices for securing agentic AI systems and steps organisations can take to prepare for emerging and future threats.

It was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the UK National Cyber Security Centre.

Why does it matter?

Agentic AI systems can act with greater autonomy than conventional software tools, including by interacting with other systems, using integrations and taking steps towards defined goals. That creates new cybersecurity risks when such tools are embedded in organisational IT environments. The joint guidance shows that major cyber agencies are treating agentic AI as an emerging operational security issue, not only as a question of AI policy or experimentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s ASIC urges cyber resilience as frontier AI raises risk

The Australian Securities and Investments Commission has urged regulated entities to strengthen cyber resilience, warning that frontier AI could intensify cyber risks by exposing vulnerabilities at greater speed, scale and sophistication.

In an open letter to industry, ASIC said licensees and market participants should act now to improve their cybersecurity fundamentals rather than wait as advanced AI tools reshape the threat environment. The regulator said cyber resilience should be treated as a core licensing obligation, not solely as an IT issue.

ASIC Commissioner Simone Constant said frontier AI creates opportunities but also materially increases cyber risk, including by exposing weaknesses faster than many organisations realise. She warned that vulnerabilities once seen as isolated could have system-wide effects and enable forms of exploitation that were previously out of reach for many malicious actors.

The letter follows ASIC’s recent court outcome against FIIG Securities Limited, which the regulator said reinforced the need for cyber risk management controls to be demonstrably effective and proportionate to a business’s size, nature and complexity.

ASIC is urging entities to reassess cyber plans, identify and protect critical systems, reduce exposure to untrusted networks, review user access, patch systems promptly, strengthen incident response planning and manage third-party risks. It also says organisations should use AI defensively where appropriate, including to identify vulnerabilities and secure software before release.

Constant said entities need robust incident response plans and that the underlying principles of cyber risk management remain the same: govern, protect, detect and respond. She also said boards and executives must ensure systems are tested, weaknesses are addressed early, and action is taken before threats can be exploited.

ASIC says entities must table the letter at their boards and risk governance committees. It also encourages regulated entities to use guidance from trusted sources, including the Australian Signals Directorate and the Australian Government’s Cyber Health Check.

Why does it matter?

ASIC’s warning shows that financial regulators are beginning to treat frontier AI as a force multiplier of cyber risk, not just a technology issue. By framing cyber resilience as a licensing and board-level governance obligation, the regulator is signalling that firms may be judged not only on whether they suffer cyber incidents, but on whether their controls, escalation processes and resilience planning are proportionate to an AI-accelerated threat environment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WEF report says AI is reshaping cybersecurity defence

Advanced AI models are reshaping cybersecurity by accelerating both offensive and defensive capabilities, forcing organisations to rethink how they detect, assess and respond to cyber threats.

A new World Economic Forum report argues that AI is becoming a defining force in cybersecurity, with organisations increasingly moving from pilot projects to operational deployment. According to the WEF, AI is already being used to improve vulnerability identification, threat detection, response speed and resilience.

The report highlights how AI can help security teams process large volumes of data, detect threats faster and support more efficient responses. At the same time, it warns that threat actors are also using AI to automate deception, generate malware and scale attacks at machine speed.

WEF’s analysis says the growing speed and scale of AI-enabled cyber operations are putting pressure on traditional cybersecurity models. Instead of relying mainly on prevention and scheduled patching cycles, organisations are being pushed towards continuous detection, automated response, stronger access controls and more resilient infrastructure.

The report also stresses that AI’s value in cybersecurity depends on strategy, governance and human oversight. Rather than treating AI as a standalone tool, organisations are encouraged to test use cases carefully, build appropriate safeguards and invest in the skills and processes needed to defend at machine speed.

Why does it matter?

AI is changing cybersecurity on both sides of the equation. It can lower the barriers for faster and more scalable attacks, but it can also help defenders improve detection, response and resilience. The wider significance is that cybersecurity strategies built around periodic assessment and manual response may become less effective as AI-driven threats and defences operate at greater speed and scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The rise of grey websites fuels global scam and data theft risks

Cybersecurity researchers at Kaspersky have identified a growing network of so-called ‘grey’ websites that exploit user trust to generate financial gain and harvest personal data. Unlike traditional phishing attacks, these platforms rely on manipulation, misleading design and hidden conditions rather than direct credential theft.

The report shows that grey websites often imitate legitimate services, including financial tools, e-commerce platforms, AI services and subscription-based content. Common categories include fake browser extensions, fraudulent investment schemes, subscription traps and counterfeit online shops, many of which are designed to encourage voluntary payment or data sharing.

Kaspersky notes that these threats are spreading globally but vary by region. Europe is seeing a rise in fake privacy tools and browser hijackers, Africa is heavily affected by fraudulent trading platforms, while Latin America faces betting scams and pyramid schemes. Asia-Pacific shows a broader mix, including crypto fraud, AI-themed scams and malicious download services.

Across all regions, attackers are increasingly aligning scams with current digital trends to appear more credible. Kaspersky warns that even well-designed platforms can hide risks, making user awareness, verification and security tools key to reducing financial and data harm.

Why does it matter? 

The rise of ‘grey’ websites signals a shift in online fraud away from obvious phishing towards more subtle, trust-based manipulation. Instead of breaking systems, attackers increasingly exploit user behaviour, interfaces, and familiarity with digital services.

That lowers the ‘visibility’ of fraud. Users are not being forced into breaches; they are being guided into consent: signing up, subscribing, investing, or installing tools that appear legitimate. That makes scams harder to detect, harder to regulate, and easier to scale globally.

It also shows how cybercrime is adapting to current technological trends, especially AI services, crypto tools, and digital platforms that people already expect to be trustworthy. As a result, the boundary between legitimate innovation and fraud becomes less clear, increasing systemic risk for both consumers and digital economies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Rising data centre demand increases energy and cyber risks

Data centres are increasingly central to digital economies, but their rapid expansion is reshaping both electricity demand and cybersecurity risks. According to the International Energy Agency, data centres used about 1.5% of global electricity in 2024, with demand rising as AI and cloud services expand.

These facilities operate as both energy consumers and producers, relying on grid power while also maintaining on-site generation and battery systems. Their ability to switch power sources instantly supports service continuity but can also cause sudden load shifts that challenge grid stability during outages or cyber incidents.

Cybersecurity is now closely tied to energy resilience. Data centres depend on interconnected systems such as backup power, cooling, and digital control networks, all of which require continuous monitoring and protection. Weaknesses in any part of this ‘system of systems’ can affect both service availability and wider electricity infrastructure.

Why does it matter? 

Data centres are becoming critical infrastructure that directly affects both digital services and electricity systems. Their rising energy demand and reliance on complex on-site and grid power arrangements mean that disruptions or cyber incidents can have wider knock-on effects.

Shared planning for power disruptions, cyber events, and load management is therefore increasingly seen as necessary, making resilience and cross-sector coordination essential for stability across both digital services and national energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Swisscom says AI and geopolitics are reshaping the cyber threat landscape

Swisscom has published its 2026 Cybersecurity Threat Radar, warning that cyber threats have grown more complex over the past year as geopolitical tensions and disruptive technologies put added pressure on digital systems. The report presents AI, supply chain exposure, digital sovereignty, and operational technology security as four strategic risk areas for organisations.

The report highlights state-linked cyber activity, hybrid influence operations such as disinformation, and supply chain attacks as key drivers of the current threat environment. It argues that digital transformation has increased dependence on cloud services, third-party software, AI systems, and networked industrial infrastructure, making organisations more exposed to cascading failures and external dependencies.

On AI, Swisscom describes insecure AI use as a risk multiplier. While AI can improve productivity, the report warns that poor governance, weak visibility into models, and uncontrolled use of AI tools in operational environments can expand attack surfaces, affect data quality, and create new compliance challenges.

Software supply chains are also identified as a persistent vulnerability. Swisscom says a single compromised component or manipulated update process can have far-reaching consequences across interconnected systems, making software integrity, origin verification, and traceability increasingly important as mitigation measures.

The convergence of information technology and operational technology is presented as another growing area of concern. In sectors such as energy, healthcare, manufacturing, and building automation, incidents can have consequences that go well beyond financial loss, affecting critical infrastructure, production, and even human safety.

The report also places greater emphasis on digital sovereignty, arguing that organisations need clearer visibility over where data is processed, which legal regimes apply, and how dependent they are on cloud and technology providers. In that sense, Swisscom frames cybersecurity less as a narrow IT function and more as a strategic governance issue tied to resilience, control, and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware accounts for 90% of cyber losses in manufacturing, claims data shows

Ransomware is responsible for 90% of total cyber-related financial losses in the manufacturing sector, despite accounting for only 12% of claims by volume, according to an analysis of insurance claims data published by Resilience.

The findings indicate that while ransomware incidents are not the most frequently filed claim type, they produce disproportionately large financial losses when they occur. The manufacturing sector’s low tolerance for operational downtime is identified as a contributing factor to loss severity.

Additional findings from the claims dataset include:

  • 30% of manufacturing claims are linked to phishing and transfer fraud
  • 26% of total losses are associated with multi-factor authentication (MFA) misconfiguration
  • 12% of claims involved wrongful data collection

The report identifies MFA misconfiguration as a notable area of exposure, alongside procedural gaps in financial transfer controls. Recommended mitigation measures include auditing MFA deployment, implementing transfer verification procedures, and investing in ransomware containment capabilities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU and Republic of Korea launch aviation partnership on technical cooperation and cyber resilience

European and South Korean aviation authorities are conducting a three-week series of technical exchanges in Seoul, covering safety oversight, airspace management, and cybersecurity.

The European Union Aviation Safety Agency (EASA) and South Korea’s Ministry of Land, Infrastructure and Transport are participating under the EU–Republic of Korea Aviation Partnership Project, an EU-funded initiative announced by the European External Action Service (EEAS).

The programme began with a three-day session on the International Civil Aviation Organisation’s Universal Safety Oversight Audit Programme (USOAP), which assesses national aviation safety oversight systems. EASA presented findings from its most recent ICAO audit, with discussions covering oversight frameworks, organisational structures, and lessons identified.

A workshop on performance-based navigation and airspace management followed, addressing procedures to improve the predictability and efficiency of aircraft arrivals, including at airports with parallel runways.

A third workshop on aviation cybersecurity is scheduled for the coming week. It will cover security considerations across aviation systems, including aircraft certification processes and air traffic management infrastructure.

The activities are designed to facilitate technical exchange between Korean and European stakeholders across the aviation sector, according to EASA.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Victorian officials outline approach to managing AI risks in public sector

Ian Pham at the Victorian Managed Insurance Authority (VMIA) outlined approaches to managing AI adoption during the PSN Victorian Government Cyber Security Showcase. Organisations face the challenge of adopting AI while maintaining effective risk management as these systems become more embedded in government operations.

Cybersecurity teams have traditionally operated with a risk-averse approach focused on minimising threats. Such an approach can slow innovation when applied to AI systems used in public sector environments.

Pham presented a shift towards managing risk in line with organisational objectives as necessary. This includes prioritising relevant risks and moving from reactive responses towards supporting decision-making processes.

AI adoption involves secure environments for experimentation with defined guardrails, including synthetic or non-sensitive data, monitoring mechanisms, usage conditions, and identity and access controls. Exposure can then be increased gradually, supported by governance and continuous reassessment.

Risks linked to AI systems include data leakage, privacy concerns, unauthorised use, and data quality issues. These risks are described as requiring visibility and management, alongside organisational awareness and engagement to support confidence in AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!