Europol’s IOCTA 2026 shows growing cyber threats across Europe’s digital landscape

Europol has released the 2026 Internet Organised Crime Threat Assessment (IOCTA), outlining the growing complexity of cybercrime across Europe. The report identifies encryption, proxies, and AI as key drivers behind the increasing scale and sophistication of digital threats.

According to Europol, criminal networks are adapting rapidly, using fragmented online environments and encrypted communication channels to evade detection. The report highlights cybercrime enablers, online fraud schemes, cyber-attacks, and online child sexual exploitation as central areas of concern in the EU threat landscape.

AI is playing a growing role in cyber-enabled crime by making fraud, deception, and other forms of online abuse more scalable and more convincing. Europol presents this as part of a wider shift in which digital threats are becoming more adaptive, more accessible, and harder to disrupt through traditional law enforcement methods alone.

The report also points to continued risks in cyber-attacks and online child sexual exploitation, underlining how technological change is affecting both financially motivated crime and harms involving vulnerable users. In that sense, IOCTA 2026 presents Europe’s cyber challenge not as a series of isolated incidents, but as a broader digital threat environment shaped by enabling technologies and rapidly evolving criminal tactics.

These developments reinforce the need for stronger operational cooperation, more advanced investigative capabilities, and continued adaptation across Europe’s law enforcement and regulatory systems. Europol’s overall message is that cybercrime is becoming more sophisticated, more industrialised, and more deeply embedded in the wider digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Intellectual property cooperation launched under EU-Japan IP Action

The European Union Intellectual Property Office has launched the EU-Japan IP Action in Tokyo, marking the first dedicated intellectual property cooperation project between the European Union and Japan.

The initiative is intended to strengthen the protection and promotion of intellectual property rights through technical cooperation, policy dialogue, and industry engagement. The launch also highlighted how AI is reshaping innovation, competition, and IP enforcement in the digital environment.

EUIPO Executive Director João Negrão said: ‘Today’s event marks a milestone: the official launch of the EU-Japan IP Action. As the first dedicated cooperation project on intellectual property between our two regions, organised by the EUIPO and co-funded by the European Union, it carries real promise – for trade, for innovation, and for growth on both sides.’

The launch brought together officials from the EU and Japan, including representatives of the Japan Patent Office and Japan’s Intellectual Property Strategy Headquarters. Speakers described the initiative as a new phase of cooperation focused on streamlining IP processes and ensuring that legal frameworks keep pace with industrial and technological change.

A panel discussion examined the impact of AI and large language models on intellectual property, including questions of authorship, ownership of AI-generated inventions, and copyright enforcement. Industry representatives also discussed practical challenges related to AI governance and anti-piracy.

The event continued with a conference on generative AI, where participants from business, government, and academia examined how IP frameworks should respond to AI-driven change. Discussions included compensation for creators whose works are used in AI training, alongside legal, contractual, and technical mechanisms that could support that goal. Creative sectors, including manga, animation, music, and video games, were also part of the discussion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Atos launches digital sovereignty offering for AI and regulated environments

Atos Group has launched an integrated digital sovereignty offering, designed to help organisations retain control and accountability over their data, infrastructure and digital operations.

The proposition combines capabilities across cloud, cybersecurity, AI and digital workplace services. It draws on Atos and Eviden expertise, including fully European data encryption products from Eviden.

Sovereignty is embedded by design across existing portfolios, with graduated levels tailored to each customer’s workloads. Open standards and interoperability sit at the core, aiming to reduce vendor lock-in.

The offering targets regulated sectors including the public sector, defence, financial services and healthcare. Atos Group digital sovereignty leader Michael Kollar said the initiative helps organisations ‘turn sovereignty into an operational capability.’

The launch complements the recent introduction of Atos Sovereign Agentic Studios, which focused on moving AI deployments into production under sovereign control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and Oxford University launch global AI course for courts

UNESCO, in partnership with the University of Oxford, has launched a free online course aimed at preparing judicial systems for the growing role of AI in legal decision-making.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet, many legal professionals lack structured guidance to evaluate such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would, from 2027, be required to block access for users under 15, using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.
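The article does not describe Kids Wallet’s actual protocol, but the ‘share only age eligibility’ idea can be sketched as a signed boolean claim: a trusted issuer checks the birth date privately and attests only whether the user meets the age threshold, so the platform never sees the date itself. The sketch below uses an HMAC shared key purely for illustration; a real deployment would use public-key signatures and a proper credential format.

```python
import hmac, hashlib, json, time

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical key; real systems use public-key signatures

def issue_age_token(birth_year: int, now_year: int = 2026) -> dict:
    """Issuer checks the birth date privately and emits only a boolean claim."""
    claim = {"over_15": now_year - birth_year >= 15, "exp": int(time.time()) + 3600}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verify(token: dict) -> bool:
    """Platform learns only age eligibility, never the birth date itself."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # tampered or forged claim
    if token["claim"]["exp"] < time.time():
        return False  # expired token
    return token["claim"]["over_15"]

token = issue_age_token(birth_year=2014)  # a 12-year-old in 2026
print(platform_verify(token))  # False: access blocked, birth date never shared
```

The design point is data minimisation: the only attribute crossing the trust boundary is a yes/no eligibility flag with an expiry, which is what distinguishes this approach from self-declared age data.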

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.
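The Code’s forecasting methods are not specified in the article, but one minimal structured approach is scaling-trend extrapolation: fit benchmark scores of a provider’s past models against log training compute, then project where the fitted line crosses the capability level associated with the highest risk tier. All figures below are invented for illustration.

```python
import math

# Hypothetical benchmark scores of successive model generations vs training compute (FLOP).
history = [(1e23, 41.0), (3e23, 48.0), (1e24, 55.0), (3e24, 61.5)]
TIER_THRESHOLD = 75.0  # illustrative score at which the highest systemic-risk tier was set

def fit_log_linear(points):
    """Ordinary least squares of score against log10(compute)."""
    xs = [math.log10(c) for c, _ in points]
    ys = [s for _, s in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def compute_at_threshold(points, threshold):
    """Project the training compute at which the trend crosses the tier threshold."""
    slope, intercept = fit_log_linear(points)
    return 10 ** ((threshold - intercept) / slope)

print(f"Projected tier crossing near {compute_at_threshold(history, TIER_THRESHOLD):.1e} FLOP")
```

Aggregating such per-provider projections, as the Commission is considering, would then give the sector-wide view of when capability frontiers are likely to move.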

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi initiative attempts to link AI with sustainability goals

A new AI-enabled sustainability platform developed with support from the World Economic Forum aims to strengthen partnerships across sectors. The initiative is led by Saudi Arabia’s Ministry of Economy and Planning as part of its wider development agenda.

The platform, known as SUSTAIN, uses AI to match organisations with potential partners and opportunities. It is designed to connect government, businesses, academia, and civil society more efficiently and to help move sustainability projects from planning to implementation.
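SUSTAIN’s matching method is not disclosed, but the core idea of AI-assisted partner matching can be illustrated with a simple capability-overlap ranking: score each organisation by how well its declared capabilities cover a project’s needs. The organisation names and tags below are hypothetical; a production system would more likely use text embeddings than literal tag overlap.

```python
# Hypothetical registry: organisations and their declared capability tags.
organisations = {
    "GreenGrid Co": {"solar", "storage", "financing"},
    "DesertWater Lab": {"desalination", "sensors", "research"},
    "AgriTech Uni": {"sensors", "irrigation", "research", "training"},
}

def jaccard(a: set, b: set) -> float:
    """Set-overlap similarity: |intersection| / |union|."""
    return len(a & b) / len(a | b)

def rank_partners(needs: set, orgs: dict) -> list:
    """Return organisation names ordered by how well they cover the needs."""
    return sorted(orgs, key=lambda name: jaccard(needs, orgs[name]), reverse=True)

project_needs = {"sensors", "irrigation", "training"}
print(rank_partners(project_needs, organisations))
# AgriTech Uni ranks first: it covers all three needs
```

Even this toy version captures the platform’s stated goal: surfacing the most relevant cross-sector partner automatically rather than through manual outreach.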

Developers say the system could accelerate collaboration and support the delivery of higher-impact sustainability projects. Official estimates suggest it could help unlock partnerships worth up to $20 billion in Saudi Arabia and significantly more across the wider region.

The initiative forms part of broader efforts to advance long-term sustainability goals through more coordinated action and practical uses of AI. The project is being developed in Saudi Arabia and presented as a tool to strengthen cross-sector cooperation rather than a stand-alone sustainability programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia expands national AI strategy through Microsoft partnership

Malaysia is strengthening its national AI strategy through an expanded partnership with Microsoft, launching the Microsoft Elevate initiative to accelerate AI readiness across society.

The programme aligns with the country’s AI Nation 2030 ambitions and extends digital skills development beyond traditional sectors.

The initiative targets educators, public sector institutions, small businesses and wider communities, aiming to embed practical AI capabilities into everyday economic and social activity.

Early deployment has already reached tens of thousands of learners, reflecting a shift from pilot programmes to large-scale national implementation.

Government and industry leaders in Malaysia emphasise that long-term competitiveness depends not only on technological investment but on widespread adoption and understanding of AI tools.

The programme therefore prioritises workforce activation, institutional capacity and sustainable integration across sectors.

Malaysia’s approach reflects a broader global trend where public–private partnerships are increasingly central to AI development, focusing on inclusive access, responsible use and real-world application rather than purely technological advancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNIDIR highlights the security implications of the shift from classical to quantum technologies

The United Nations Institute for Disarmament Research (UNIDIR) has outlined the evolution of digital technologies from early internet systems to emerging quantum capabilities, highlighting their growing impact on global systems and security.

In its analysis, UNIDIR traces the progression from dial-up connectivity and classical computing to advanced technologies such as AI and quantum computing, noting that innovation cycles are accelerating and becoming increasingly interconnected. The organisation states that the transition to quantum technologies represents a significant shift in how data is processed, stored and secured.

Unlike classical systems, quantum computing introduces new capabilities that could transform fields ranging from scientific research to communications.

However, UNIDIR warns that these advances also present risks, particularly in cybersecurity. Quantum technologies could challenge existing encryption methods and expose vulnerabilities in digital infrastructure, with implications for governments, businesses and critical systems.
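The encryption risk can be made concrete with a toy example: an RSA private key follows directly from the factors of the public modulus, so any efficient factoring method, which Shor’s algorithm would provide on a large enough quantum computer, breaks the scheme. The sketch below factors a deliberately tiny modulus classically just to show the mechanics; the key sizes are illustrative, not realistic.

```python
# Why quantum factoring threatens RSA: the private exponent follows directly
# from the factors of the public modulus. A toy modulus is factored classically
# here; Shor's algorithm would do the equivalent efficiently at real key sizes.
def trial_factor(n: int) -> int:
    """Return the smallest odd prime factor of an odd composite n."""
    f = 3
    while n % f:
        f += 2
    return f

e = 65537
n = 10007 * 10009          # toy public modulus; real RSA moduli are 2048+ bits
p = trial_factor(n)        # an attacker recovers one factor...
q = n // p                 # ...and the other follows immediately
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent rebuilt from the factors

message = 42
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))  # prints 42: plaintext recovered without the key holder
```

Because only the factoring step is hard classically, cryptographers are migrating to post-quantum schemes whose security does not rest on that problem.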

The analysis also links emerging technologies to broader geopolitical dynamics, noting that competition over technological leadership is becoming a key factor in international security. As digital and physical systems converge, technological developments are increasingly shaping strategic stability.

Why does it matter?

UNIDIR emphasises the need for forward-looking governance, international cooperation and policy coordination to manage these challenges. It calls for stronger dialogue among states and stakeholders to ensure that technological progress supports global security rather than undermines it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!