Europol’s IOCTA 2026 shows growing cyber threats across Europe’s digital landscape

The 2026 Internet Organised Crime Threat Assessment has been released by Europol, outlining the growing complexity of cybercrime across Europe. The report identifies encryption, proxies, and AI as key drivers behind the increasing scale and sophistication of digital threats.

According to Europol, criminal networks are adapting rapidly, using fragmented online environments and encrypted communication channels to evade detection. The report highlights cybercrime enablers, online fraud schemes, cyber-attacks, and online child sexual exploitation as central areas of concern in the EU threat landscape.

AI is playing a growing role in cyber-enabled crime by making fraud, deception, and other forms of online abuse more scalable and more convincing. Europol presents this as part of a wider shift in which digital threats are becoming more adaptive, more accessible, and harder to disrupt through traditional law enforcement methods alone.

The report also points to continued risks in cyber-attacks and online child sexual exploitation, underlining how technological change is affecting both financially motivated crime and harms involving vulnerable users. In that sense, IOCTA 2026 presents Europe’s cyber challenge not as a series of isolated incidents, but as a broader digital threat environment shaped by enabling technologies and rapidly evolving criminal tactics.

These developments reinforce the need for stronger operational cooperation, more advanced investigative capabilities, and continued adaptation across Europe’s law enforcement and regulatory systems. Europol’s overall message is that cybercrime is becoming more sophisticated, more industrialised, and more deeply embedded in the wider digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Immaterialism expand efforts to combat child abuse content online

Immaterialism, a domain name registrar, has joined the Internet Watch Foundation (IWF) to strengthen efforts against the spread of child sexual abuse material online.

The partnership introduces IWF tools designed to accelerate the identification of harmful domains and enable faster intervention when abusive activity is detected. By adopting Registrar Alerts and related datasets, the registrar aims to improve its ability to respond to criminal content across the domains under its management.

The collaboration reflects a broader shift towards more proactive action at the domain infrastructure layer. By integrating intelligence tools into operational processes, the initiative aims to disrupt both the deliberate distribution of abusive material and the continued availability of domains linked to it.

The IWF says the volume of detected child sexual abuse material continues to rise, reinforcing the need for coordinated responses between safety organisations and private-sector actors. In that sense, the partnership points to closer alignment between domain service providers and specialist online safety groups working to strengthen protections for children online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity reform in the EU advances through Spain consultation

Spain has launched a public consultation on the proposed EU Cybersecurity Act 2, inviting input from operators, citizens, and other interested parties on the need for, objectives of, and possible alternatives to the planned reform.

The consultation covers the European Commission’s proposal COM(2026) 11 final, which would repeal and replace Regulation (EU) 2019/881. The proposal is presented as a response to changes in the cyber threat landscape and to new strategic and regulatory challenges that have emerged since the current framework entered into force in 2019.

According to the consultation text, the reform is intended to address four main structural problems: a mismatch between the EU cybersecurity framework and current operational needs, limited practical use of the European Cybersecurity Certification Framework, fragmentation across the wider EU cybersecurity acquis, and growing cybersecurity risks in ICT supply chains.

Regarding ENISA, the proposal argues that the agency’s current functions and resources are insufficient to meet the needs of member states, the EU institutions, and market actors, particularly in policy implementation, operational cooperation, and crisis response. It also says the certification framework created under the current regulation has proved too slow and too complex in practice, with limited market uptake and governance mechanisms that have not delivered at the required speed.

The text also links the proposal to the growing complexity of compliance created by instruments such as NIS2, the Cyber Resilience Act, DORA, and the CER Directive. It says the new regulation would seek greater coherence and interoperability across those frameworks while reducing administrative burdens for companies and competent authorities.

A further objective is to create, for the first time, a horizontal EU-level framework for managing ICT supply-chain cybersecurity risks, including the identification of critical ICT assets, the possible designation of high-risk suppliers, and the adoption of proportionate measures to reduce strategic dependencies.

The proposal would also strengthen ENISA’s mandate and resources, reform and expand the certification framework, and support a more centralised incident-notification model linked to the wider Digital Omnibus simplification agenda.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore urges organisations to strengthen AI governance frameworks

GovTech Singapore has argued that stronger AI governance in workplaces is essential for trust, compliance, risk management, and responsible innovation as AI adoption expands across business operations.

The agency leading Singapore’s Smart Nation and digital government efforts defines AI governance as a framework of policies, processes, and responsibilities guiding the ethical, transparent, and accountable development and deployment of AI systems within an organisation. The framework is linked to oversight across the AI lifecycle, from design through to ongoing monitoring.

Key elements identified by GovTech Singapore include transparency and explainability, fairness and bias mitigation, accountability and human oversight, and data privacy and security. Responsible AI is also linked to Singapore’s wider Smart Nation agenda, which the agency describes as a national priority.

The guidance recommends that organisations establish clear internal policies on AI use, build AI literacy across teams, carry out regular audits and assessments, and prioritise secure development practices. It also points to Singapore’s Model AI Governance Framework for Generative AI, developed by the AI Verify Foundation and the Infocomm Media Development Authority, as a reference point for businesses adapting governance frameworks to their own needs.

As part of its effort to support responsible AI use in the public sector, GovTech Singapore also highlights its AI Guardian suite. The suite includes Litmus, a testing platform using adversarial prompts to identify risks and vulnerabilities, and Sentinel, a guardrails service designed to detect and mitigate unsafe or irrelevant content before it affects AI models or users.

Overall, GovTech Singapore presents AI governance not only as a compliance issue, but as part of building a trusted digital environment in which AI can be deployed safely and effectively.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The digital asset framework in Australia enters a critical rollout period

Australia’s crypto sector is entering a critical transition period as digital asset reforms move from policy design into implementation. Two overlapping timelines now define the landscape: immediate AUSTRAC AML/CTF and virtual asset service obligations, and a broader ASIC Digital Assets Framework set to commence in 2027.

Key compliance measures are already active or imminent, including stronger AML/CTF obligations and the Travel Rule from July 2026. Existing financial services law also continues to apply, meaning firms must operate within current licensing requirements while preparing for the next regulatory phase.

Policy development is also converging around stablecoins and scam prevention. While stablecoins are being addressed through payments reform and related financial regulation, scam prevention falls within a broader national framework that spans multiple sectors. In that environment, crypto exchanges occupy a particularly important point of control, where funds move on-chain and where detection and intervention efforts can be most effective.

Authorities and market participants increasingly recognise that the next 18 months will be decisive in showing how these systems work in practice. Stronger alignment with international standards, including FATF expectations, is likely to shape Australia’s shift from regulatory planning to active supervision and enforcement.

Why does it matter?

Australia’s approach reflects a broader global shift from fragmented crypto oversight towards more integrated financial-system regulation. As digital assets become more closely tied to payments, investment flows, and cross-border transfers, governments are increasingly treating crypto infrastructure as part of core financial plumbing rather than a separate experimental market.

From a wider perspective, the real significance lies in systemic coordination. Combining AML enforcement, stablecoin oversight, and scam prevention will help determine whether illicit activity can be disrupted at the point of conversion rather than only after funds are lost. How effectively Australia connects these layers will shape not only domestic market integrity, but also its credibility within evolving international standards for digital finance governance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Atos launches digital sovereignty offering for AI and regulated environments

Atos Group has launched an integrated digital sovereignty offering, designed to help organisations retain control and accountability over their data, infrastructure and digital operations.

The proposition combines capabilities across cloud, cybersecurity, AI and digital workplace services. It draws on Atos and Eviden expertise, including fully European data encryption products from Eviden.

Sovereignty is embedded by design across existing portfolios, with graduated levels tailored to each customer’s workloads. Open standards and interoperability sit at the core, aiming to reduce vendor lock-in.

The offering targets regulated sectors including the public sector, defence, financial services and healthcare. Atos Group digital sovereignty leader Michael Kollar said the initiative helps organisations ‘turn sovereignty into an operational capability.’

The launch complements the recent introduction of Atos Sovereign Agentic Studios, which focused on moving AI deployments into production under sovereign control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack on Itron exposes risks to global energy infrastructure systems

Itron has confirmed a cyber intrusion affecting parts of its internal systems, drawing attention to growing vulnerabilities across digital infrastructure linked to essential utility services. In a regulatory filing, the company said an unauthorised third party gained access to certain systems before the activity was contained and removed.

The US energy technology company said it has not identified any compromise of customer-hosted systems, suggesting that the incident may be limited to internal operations for now. At the same time, the lack of detail on the attack method, including whether ransomware was involved, underscores the uncertainty that still surrounds the breach.

As a provider of connected technologies for utilities serving more than 110 million homes and businesses, Itron sits within infrastructure that supports electricity, water, and gas services at scale. That makes the incident significant beyond the company itself, even if operational disruption appears limited so far.

Itron said it activated its cybersecurity response plan, notified law enforcement, and implemented contingency measures, including reliance on backups, to maintain continuity. The company also said operations have continued in all material respects while the investigation remains ongoing.

While services appear largely unaffected at this stage, the filing suggests the full scope of the breach has not yet been determined. The case reflects the growing pressure on infrastructure technology providers to strengthen cyber resilience as threats increasingly target the digital systems underpinning essential services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The Philippines and South Korea launch a major cybersecurity centre project

The Department of Information and Communications Technology in the Philippines has formalised a major cybersecurity partnership with South Korea, securing funding and technical support to establish a National Cyber Security Centre to strengthen the country’s digital defences.

The agreement, supported by the Korea International Cooperation Agency, has been described by Philippine officials as the largest cybersecurity cooperation project of its kind in the country.

The initiative is intended to create a central hub for cyber threat monitoring, incident response, and coordinated defence, while also improving information security management across government systems. The programme is backed by a US$25.6 million grant over five years, reflecting the growing urgency of responding to increasingly sophisticated cyber threats affecting infrastructure and public services.

Beyond infrastructure, the project also aims to strengthen national capacity through training and workforce development, helping build a larger pool of cybersecurity professionals. Philippine authorities have stressed that cybersecurity now extends beyond technical systems and increasingly affects public trust, economic stability, and everyday digital activity.

The agreement with South Korea points to a broader effort to strengthen the Philippines’ resilience as a digital economy, with stronger institutional safeguards against evolving cyber risks and a longer-term commitment to secure digital transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNIDIR highlights the security implications of the shift from classical to quantum technologies

The United Nations Institute for Disarmament Research (UNIDIR) has outlined the evolution of digital technologies from early internet systems to emerging quantum capabilities, highlighting their growing impact on global systems and security.

In its analysis, UNIDIR traces the progression from dial-up connectivity and classical computing to advanced technologies such as AI and quantum computing, noting that innovation cycles are accelerating and becoming increasingly interconnected. The organisation states that the transition to quantum technologies represents a significant shift in how data is processed, stored and secured.

Unlike classical systems, quantum computing introduces new capabilities that could transform fields ranging from scientific research to communications.

However, UNIDIR warns that these advances also present risks, particularly in cybersecurity. Quantum technologies could challenge existing encryption methods and expose vulnerabilities in digital infrastructure, with implications for governments, businesses and critical systems.

The analysis also links emerging technologies to broader geopolitical dynamics, noting that competition over technological leadership is becoming a key factor in international security. As digital and physical systems converge, technological developments are increasingly shaping strategic stability.

Why does it matter?

UNIDIR emphasises the need for forward-looking governance, international cooperation and policy coordination to manage these challenges. It calls for stronger dialogue among states and stakeholders to ensure that technological progress supports global security rather than undermines it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!