Cybercrime Atlas launches open-source map of criminal networks

Cybercrime Atlas has launched Cosmos, an open-source platform designed to map global cybercrime networks and strengthen cooperation among defenders, investigators, prosecutors and policymakers.

Hosted by the World Economic Forum’s Centre for Cybersecurity, Cybercrime Atlas aims to build a shared understanding of cybercriminal ecosystems at a time when ransomware, fraud and illicit digital services are becoming increasingly organised and industrialised.

The initiative responds to a long-standing problem in cybercrime disruption: fragmented terminology, isolated investigations and inconsistent reporting structures. Cosmos aims to standardise definitions, organise threat intelligence into a shared structure and help different actors coordinate more effectively across borders.

The first version of the platform contains nine core categories, 229 identified cybercrime-related elements and 849 mapped connections showing how criminal networks, tools and services interact. The dataset is designed to expand as the wider community contributes new intelligence.

Why does it matter?

Cybercrime increasingly functions as an interconnected ecosystem, with specialised groups, tools, infrastructure providers and illicit services supporting one another across borders. A shared map of those relationships could help shift cyber defence from isolated incident response towards more coordinated disruption of criminal networks, while giving investigators and policymakers a clearer view of how digital crime is organised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

Google warns adversaries are industrialising AI-enabled cyberattacks

Google Threat Intelligence Group says cyber adversaries are moving from early AI experimentation towards the industrial-scale use of generative models across malicious workflows.

In a new report, GTIG says it has identified, for the first time, a threat actor using a zero-day exploit that it believes was developed with AI. The criminal actor had planned to use the exploit in a mass exploitation campaign involving a two-factor authentication bypass, but Google said its proactive discovery may have prevented the campaign from going ahead.

The findings describe several uses of AI in cyber operations. Threat actors linked to the People’s Republic of China and the Democratic People’s Republic of Korea have used AI for vulnerability research, including persona-based prompting, specialised vulnerability datasets and automated analysis of vulnerabilities and proof-of-concept exploits.

Other actors have used AI-assisted coding to support defence evasion, including the development of obfuscation tools, relay infrastructure and malware containing AI-generated decoy logic. Google said these uses show how generative models can accelerate development cycles and make malicious tools harder to detect.

Google also highlights PROMPTSPY, an Android backdoor that uses Gemini API capabilities to interpret device interfaces, generate structured commands, simulate gestures and support more autonomous malware behaviour. The company said it had disabled assets linked to the activity and that no apps containing PROMPTSPY were found on Google Play at the time of detection.

AI systems are also becoming direct targets. Google says attackers are compromising AI software dependencies, open-source agent skills, API connectors and AI gateway tools such as LiteLLM. The report warns that such supply-chain attacks could expose API secrets, enable ransomware activity or allow intruders to use internal AI systems for reconnaissance, data theft and deeper network access.

Why does it matter?

Google’s findings suggest that AI-enabled cyber activity is moving beyond basic phishing support or faster research. Generative models are now being used in vulnerability discovery, exploit development, malware obfuscation, autonomous device interaction, information operations and attacks on AI infrastructure itself. That could make some attacks faster, more adaptive and harder to detect, while also turning AI platforms, integrations and supply chains into part of the cyberattack surface.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada advances sovereign AI data centre strategy with TELUS

The Canadian government and TELUS are advancing plans to develop large-scale sovereign AI infrastructure as part of Ottawa’s broader strategy to strengthen domestic compute capacity and support the country’s AI ecosystem.

The initiative was announced by Evan Solomon (Minister of Artificial Intelligence and Digital Innovation and Minister responsible for the Federal Economic Development Agency for Southern Ontario) and focuses on a proposed AI data centre project in British Columbia designed to support researchers, businesses, and academic institutions.

The project forms part of Canada’s ‘Enabling large-scale sovereign AI data centres’ initiative, introduced under Budget 2025. Ottawa stated that sovereign compute infrastructure is increasingly important for maintaining national competitiveness in AI while ensuring Canadian data, intellectual property, and economic value remain within the country.

The government also confirmed that no formal funding commitments have yet been made, with discussions currently progressing through non-binding memoranda of understanding with selected industry participants.

Local officials argued that large-scale compute infrastructure has become a strategic economic requirement as governments worldwide race to expand AI processing capabilities. Canada believes it holds competitive advantages due to its colder climate, sustainable energy resources, and network infrastructure, all of which could help attract future AI investment and hyperscale data centre development.

Why does it matter?

The race for sovereign AI infrastructure is rapidly becoming one of the most important geopolitical and economic competitions of the digital era. The Canada-TELUS partnership illustrates how countries are moving beyond AI model development alone and shifting focus towards the physical infrastructure required to sustain future AI ecosystems, including data centres, energy capacity, semiconductors, and domestic compute networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Joint cybersecurity agencies publish guidance on secure adoption of agentic AI

Cybersecurity agencies from Australia, Canada, New Zealand, the United Kingdom and the United States have published joint guidance on the careful adoption of agentic AI services in organisational IT environments.

The guidance is intended to help organisations design, develop, deploy and operate agentic AI systems, and to make informed risk assessments and mitigations. It primarily focuses on large-language-model-based agentic AI systems.

The publication examines threats to and vulnerabilities within agentic AI systems, including risks introduced through system components, integrations and downstream use. It also considers broader risks arising from agentic AI behaviour in IT environments.

The guidance covers wider agentic AI security considerations, specific security risks, best practices for securing agentic AI systems and steps organisations can take to prepare for emerging and future threats.

It was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, the US National Security Agency, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre and the UK National Cyber Security Centre.

Why does it matter?

Agentic AI systems can act with greater autonomy than conventional software tools, including by interacting with other systems, using integrations and taking steps towards defined goals. That creates new cybersecurity risks when such tools are embedded in organisational IT environments. The joint guidance shows that major cyber agencies are treating agentic AI as an emerging operational security issue, not only as a question of AI policy or experimentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom prioritises child protection and AI moderation under Online Safety Act

The UK’s Ofcom has outlined its main online safety priorities for 2026–27, signalling tougher oversight of digital platforms under the UK’s Online Safety Act. The regulator said it will continue focusing heavily on child protection while expanding enforcement efforts against illegal hate speech, terrorism-related material, intimate image abuse, and AI-generated harms.

The regulator confirmed that more than 100,000 online services now fall within the scope of the legislation, creating major compliance and enforcement challenges. Ofcom said it will continue investigating platforms that fail to prevent harmful or illegal content, while also preparing new rules linked to additional UK legislation covering cyberflashing, non-consensual intimate imagery, and generative AI services.

Ofcom stated that major online platforms have already introduced broader age verification measures under regulatory pressure. Services including gaming, dating, social media, and pornography platforms have implemented stronger age checks and child safety protections.

Furthermore, the regulator said it will expand supervision of large technology companies and publish updated safety codes later this year, including guidance on AI-powered moderation systems.

According to Ofcom, future compliance work will increasingly focus on the effectiveness of platform moderation systems rather than relying solely on reactive content removal. The regulator also plans to strengthen protections for women and girls online through new technical standards designed to block the spread of non-consensual intimate images and sexual deepfakes at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Türkiye delegation to explore US cyber and AI technologies

The US Trade and Development Agency will host a delegation of cybersecurity and AI decision-makers from Türkiye as the country works to modernise cyber protection for critical infrastructure.

The 15-member delegation will visit Washington, DC, and Silicon Valley from 9 to 20 May to meet US companies, view demonstrations of cybersecurity technologies and discuss how advanced tools could help protect critical infrastructure from cyber threats.

The visit will also include meetings with US government officials on policy and regulatory approaches to AI and cybersecurity. Delegates are expected to visit the US National Institute of Standards and Technology to learn about its work on cybersecurity frameworks, AI risk management, standards development and applied research.

USTDA will also host a public business briefing in San Francisco on 19 May, where US companies can hear from the delegation about commercial opportunities and present cybersecurity solutions.

The agency said Türkiye is rapidly developing its digital ecosystem and has made cybersecurity for critical infrastructure a national priority. It said Türkiye is looking to AI and other advanced technologies to respond to increasingly sophisticated cyber threats, while describing the US private sector as a potential partner in cybersecurity, AI and data protection.

Why does it matter?

The visit shows how cybersecurity for critical infrastructure is increasingly being linked with AI, standards and cross-border technology partnerships. For Türkiye, the focus is on modernising protection against more sophisticated cyber threats. For the United States, the programme also reflects USTDA’s role in connecting US technology providers with infrastructure and digital security priorities in partner countries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brazil tests quantum-secure communication over Recife fibre network

Researchers in Brazil have developed the Recife Quantum Network, a quantum key distribution system that uses inactive optical fibre already installed in the city’s urban infrastructure to test secure communications outside a laboratory setting.

The project, led by Professor Daniel Felinto at the Federal University of Pernambuco, connects university departments through dark fibre and uses quantum key distribution to protect information exchange.

Quantum key distribution relies on quantum properties that make interception detectable: any attempt to observe or copy the security key disrupts the quantum state, alerts the system and prevents secure key exchange.
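The interception-detection principle described above can be illustrated with a toy simulation. The sketch below is not the Recife implementation; it is a minimal BB84-style model (a standard QKD protocol, assumed here for illustration) showing that an intercept-and-resend eavesdropper unavoidably raises the error rate on the sifted key, which is how the parties detect the attack.

```python
import random

def bb84_qber(n_rounds, eavesdrop, seed=1):
    """Estimate the quantum bit error rate (QBER) Alice and Bob observe
    on rounds where their measurement bases match (the 'sifted' key).
    An intercept-and-resend eavesdropper causes roughly 25% errors."""
    rng = random.Random(seed)
    errors = kept = 0
    for _ in range(n_rounds):
        a_bit = rng.randint(0, 1)      # Alice's key bit
        a_basis = rng.randint(0, 1)    # Alice's encoding basis
        photon_bit, photon_basis = a_bit, a_basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # Measuring in the wrong basis randomises the outcome.
            e_bit = photon_bit if e_basis == photon_basis else rng.randint(0, 1)
            # Eve resends a photon prepared in *her* basis, disturbing the state.
            photon_bit, photon_basis = e_bit, e_basis
        b_basis = rng.randint(0, 1)    # Bob's measurement basis
        b_bit = photon_bit if b_basis == photon_basis else rng.randint(0, 1)
        if b_basis == a_basis:         # sifting: keep matching-basis rounds
            kept += 1
            errors += (b_bit != a_bit)
    return errors / kept

# Without an eavesdropper the sifted key is error-free; with one,
# the error rate jumps to roughly 25%, revealing the interception.
clean = bb84_qber(20000, eavesdrop=False)
tapped = bb84_qber(20000, eavesdrop=True)
```

In a real deployment such as the Recife network, an elevated error rate would cause the parties to discard the key and abort the exchange rather than use compromised material.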

The work has grown into a broader institutional effort through the Institute of Quantum Technologies, known as Quanta, based at the university’s ParqueTec. Researchers from the Federal Rural University of Pernambuco are also involved. The initiative received recognition through the 2025 Finep Innovation Award in the Northeast Region, in the research and development infrastructure category.

Initial tests over 7 kilometres have been completed, and the team now aims to expand the Recife quantum network to 40 kilometres with support from development institutions linked to Brazil’s Ministry of Science, Technology and Innovation. The project has also received support from the ministry through the National Education and Research Network and its Point of Presence in Pernambuco.

The initiative is presented as a step towards applying quantum key distribution-based secure communications to strategic cybersecurity needs, including defence and financial systems. Its use of existing telecommunications infrastructure is significant because it suggests that quantum-secure communication systems can be tested in urban environments without requiring entirely new fibre deployment.

Why does it matter?

Quantum key distribution is being explored as a way to protect sensitive communications against future threats, including advances in computing that could weaken current encryption methods. The Recife project is significant because it moves testing beyond laboratory conditions and into existing urban fibre infrastructure, which is a practical requirement for any wider deployment of quantum-secure networks.

For Brazil, the project also links cybersecurity with national research capacity, regional innovation and digital infrastructure development, showing how quantum technologies are beginning to move from academic experimentation towards applied communications security.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

SHEIN faces Irish inquiry over EU data transfers to China

Ireland’s Data Protection Commission has opened an inquiry into Infinite Styles Services Co. Ltd. (known as SHEIN Ireland) over transfers of the personal data of EU and EEA users to China.

The inquiry will examine whether SHEIN Ireland has complied with its obligations under the General Data Protection Regulation in relation to those transfers. The DPC said it will assess compliance with GDPR principles on personal data processing, transparency obligations under Article 13, and Chapter V requirements governing transfers of personal data to third countries.

The regulator said its decision to begin the inquiry was issued to SHEIN Ireland at the end of April. The case comes as data transfers to China face growing regulatory scrutiny in Europe, including through recent DPC enforcement action and complaints filed with other European supervisory authorities.

Deputy Commissioner Graham Doyle said: ‘When an individual’s personal data is transferred to a country outside the EU, the GDPR requires that this personal data is afforded essentially the same protections as it would within the EU.’

He added: ‘Recent regulatory action by the DPC, together with complaints to other European supervisory authorities, has brought data transfers to China, in particular, into focus. The inquiry is an important strategic priority for the DPC and we intend to cooperate closely with our peer European Supervisory Authorities as part of the investigation.’

Under the GDPR, transfers of personal data outside the EU and EEA must meet specific safeguards so that the level of protection provided under EU law is not undermined. Where no European Commission adequacy decision exists for a third country, organisations must rely on alternative mechanisms, such as standard contractual clauses, and demonstrate that equivalent protections are in place.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI cyber capabilities raise risk of correlated financial system failures, IMF warns

AI is rapidly reshaping the global financial system’s cyber risk landscape, according to analysis from the International Monetary Fund. While AI improves defence, it also helps attackers find and exploit vulnerabilities more quickly, increasing the risk of systemic disruption.

Financial infrastructure is highly interconnected, relying on shared software, cloud services, and payment networks. IMF analysis suggests that AI-enabled cyberattacks could trigger correlated institutional failures, leading to funding stress, solvency risks, and disruptions to payments and market operations.

Recent developments in advanced AI models demonstrate how quickly offensive capabilities are evolving, with systems now able to identify weaknesses across widely used platforms.

At the same time, defensive AI tools are being deployed to detect threats and strengthen resilience, but their effectiveness depends on governance, oversight, and integration within financial institutions.

Authorities are now being urged to treat cyber risk as a core financial stability issue rather than a purely technical challenge. Stronger supervision, resilience standards, and international coordination are viewed as essential, particularly as cyber threats increasingly cross borders and exploit shared global infrastructure.

Why does it matter? 

Cyber risks related to AI are a macroeconomic threat that can affect liquidity, confidence, and core financial intermediation. At the same time, the same technology is essential for defence, meaning resilience now depends on how quickly supervision, governance, and international coordination can keep pace with rapidly scaling offensive capabilities.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!

Instagram pulls the plug on encrypted chats

Instagram will no longer support end-to-end encrypted chats from 8 May 2026, ending an optional privacy feature for some direct messages on the platform.

Users affected by the change are being prompted to download any messages or media from encrypted chats that they wish to keep before the feature is removed. Instagram’s help page says users may need to update the app to access or download their end-to-end encrypted chats.

End-to-end encryption allows only the people in a conversation to read messages or hear calls, with messages protected by encryption keys linked to authorised devices. On Instagram, however, encrypted chats were an optional feature rather than the default for all direct messages.

After 8 May 2026, users will no longer be able to send or receive end-to-end encrypted messages or calls on Instagram. The help page also notes that users can still report messages from encrypted chats and that shared content may still be forwarded outside an encrypted conversation.

The change marks a rollback of a privacy feature on one of Meta’s major social platforms, even as end-to-end encryption remains central to debates over secure communications, platform safety and user confidentiality.

Why does it matter?

End-to-end encryption is widely seen as a core privacy protection because it limits access to message content, including by the platform itself. Its removal from Instagram encrypted chats raises questions about how major platforms prioritise privacy features, user safety, product complexity and interoperability across their messaging services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our chatbot!