Europol’s IOCTA 2026 shows growing cyber threats across Europe’s digital landscape

Europol has released the 2026 Internet Organised Crime Threat Assessment (IOCTA), outlining the growing complexity of cybercrime across Europe. The report identifies encryption, proxies, and AI as key drivers behind the increasing scale and sophistication of digital threats.

According to Europol, criminal networks are adapting rapidly, using fragmented online environments and encrypted communication channels to evade detection. The report highlights cybercrime enablers, online fraud schemes, cyber-attacks, and online child sexual exploitation as central areas of concern in the EU threat landscape.

AI is playing a growing role in cyber-enabled crime by making fraud, deception, and other forms of online abuse more scalable and more convincing. Europol presents this as part of a wider shift in which digital threats are becoming more adaptive, more accessible, and harder to disrupt through traditional law enforcement methods alone.

The report also points to continued risks in cyber-attacks and online child sexual exploitation, underlining how technological change is affecting both financially motivated crime and harms involving vulnerable users. In that sense, IOCTA 2026 presents Europe’s cyber challenge not as a series of isolated incidents, but as a broader digital threat environment shaped by enabling technologies and rapidly evolving criminal tactics.

These developments reinforce the need for stronger operational cooperation, more advanced investigative capabilities, and continued adaptation across Europe’s law enforcement and regulatory systems. Europol’s overall message is that cybercrime is becoming more sophisticated, more industrialised, and more deeply embedded in the wider digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China advances AI-driven scientific research platform

The Chinese Academy of Sciences has introduced ScienceOne 100, an advanced AI model system designed to support scientific research across disciplines, including mathematics, physics, and biology.

The platform reflects a broader shift from isolated experimentation towards integrated, collaborative research environments powered by AI. Built on the earlier ScienceOne foundation model, the system combines multiple domain-specific large models and tools to streamline the full research cycle.

Three core components drive its functionality: a literature compass for automated analysis and review writing, an innovation evaluation engine to detect emerging research directions, and an agent factory offering more than 2,000 tools for scientific workflows.

Performance gains place the latest version at a high level in scientific reasoning and data interpretation, especially in image analysis and long-horizon problem solving. Training has relied on specialised scientific datasets, allowing the system to operate with precision across complex research contexts.

Deployment is already underway across more than 50 institutes, supporting over 100 research scenarios. Early use cases span materials discovery, aerospace modelling, environmental research, and biomedical design, underscoring its potential to accelerate output and reshape research infrastructure.

Why does it matter? 

ScienceOne 100 signals a decisive shift towards AI-led research infrastructure, where discovery becomes faster, more scalable, and less dependent on linear human workflows.

Automated literature analysis, hypothesis testing, and simulation can significantly shorten the path from idea to result, increasing overall scientific productivity and enabling more complex, cross-disciplinary breakthroughs.

Strategic implications extend beyond efficiency gains. Large-scale AI platforms strengthen national innovation capacity, particularly in critical sectors such as biotechnology, materials science, and aerospace.

Wider adoption could reshape global research competition, influence how scientific knowledge is validated, and drive demand for hybrid expertise combining domain knowledge with advanced computational skills.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Intellectual property cooperation launched under EU-Japan IP Action

The European Union Intellectual Property Office has launched the EU-Japan IP Action in Tokyo, marking the first dedicated intellectual property cooperation project between the European Union and Japan.

The initiative is intended to strengthen the protection and promotion of intellectual property rights through technical cooperation, policy dialogue, and industry engagement. The launch also highlighted how AI is reshaping innovation, competition, and IP enforcement in the digital environment.

EUIPO Executive Director João Negrão said: ‘Today’s event marks a milestone: the official launch of the EU-Japan IP Action. As the first dedicated cooperation project on intellectual property between our two regions, organised by the EUIPO and co-funded by the European Union, it carries real promise – for trade, for innovation, and for growth on both sides.’

The launch brought together officials from the EU and Japan, including representatives of the Japan Patent Office and Japan’s Intellectual Property Strategy Headquarters. Speakers described the initiative as a new phase of cooperation focused on streamlining IP processes and ensuring that legal frameworks keep pace with industrial and technological change.

A panel discussion examined the impact of AI and large language models on intellectual property, including questions of authorship, ownership of AI-generated inventions, and copyright enforcement. Industry representatives also discussed practical challenges related to AI governance and anti-piracy.

The event continued with a conference on generative AI, where participants from business, government, and academia examined how IP frameworks should respond to AI-driven change. Discussions included compensation for creators whose works are used in AI training, alongside legal, contractual, and technical mechanisms that could support that goal. Creative sectors, including manga, animation, music, and video games, were also part of the discussion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK moves to strengthen sovereignty over critical AI infrastructure

Britain is moving to strengthen its position in the global AI race, with Technology Secretary Liz Kendall calling for greater national control over key parts of the AI stack. In a recent speech, she described artificial intelligence as an increasingly important source of economic strength, security, and geopolitical influence.

Concerns centre on the concentration of power in a small number of companies that control much of the world’s advanced AI computing capacity. The government’s strategy is intended to reduce reliance on external providers while building domestic capabilities across areas such as research, infrastructure, compute, and talent.

Plans include the development of a national AI hardware strategy to improve access to chips and other critical technologies. At the same time, Britain says it will focus on sectors where it believes it holds a competitive edge, while continuing to work with allies on standards, governance, and the international rules shaping AI development.

Officials have stressed that AI sovereignty does not mean technological isolation, but stronger strategic resilience and greater influence over how future systems are built and governed. In that context, support for domestic firms and institutions is being framed as essential if Britain is to remain a serious player in the emerging global AI order.

Why does it matter?

Control over AI infrastructure is quickly becoming a core element of national power, comparable to energy or defence capabilities.

Concentration of computing and advanced chips in a few global players creates strategic vulnerabilities, exposing countries to external decisions that can affect economic stability, security and technological development.

Britain’s push for AI sovereignty reflects a broader global trend towards technological self-determination. Efforts to build domestic capacity and shape international standards could influence global AI governance, access to critical technologies, and reshape alliances in a more fragmented digital order.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Singapore urges organisations to strengthen AI governance frameworks

GovTech Singapore has argued that stronger AI governance in workplaces is essential for trust, compliance, risk management, and responsible innovation as AI adoption expands across business operations.

The agency leading Singapore’s Smart Nation and digital government efforts defines AI governance as a framework of policies, processes, and responsibilities guiding the ethical, transparent, and accountable development and deployment of AI systems within an organisation. The framework is linked to oversight across the AI lifecycle, from design through to ongoing monitoring.

Key elements identified by GovTech Singapore include transparency and explainability, fairness and bias mitigation, accountability and human oversight, and data privacy and security. Responsible AI is also linked to Singapore’s wider Smart Nation agenda, which the agency describes as a national priority.

The guidance recommends that organisations establish clear internal policies on AI use, build AI literacy across teams, carry out regular audits and assessments, and prioritise secure development practices. It also points to Singapore’s Model AI Governance Framework for Generative AI, developed by the AI Verify Foundation and the Infocomm Media Development Authority, as a reference point for businesses adapting governance frameworks to their own needs.

As part of its effort to support responsible AI use in the public sector, GovTech Singapore also highlights its AI Guardian suite. The suite includes Litmus, a testing platform using adversarial prompts to identify risks and vulnerabilities, and Sentinel, a guardrails service designed to detect and mitigate unsafe or irrelevant content before it affects AI models or users.

Overall, GovTech Singapore presents AI governance not only as a compliance issue, but as part of building a trusted digital environment in which AI can be deployed safely and effectively.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Atos launches digital sovereignty offering for AI and regulated environments

Atos Group has launched an integrated digital sovereignty offering, designed to help organisations retain control and accountability over their data, infrastructure and digital operations.

The proposition combines capabilities across cloud, cybersecurity, AI and digital workplace services, drawing on Atos and Eviden expertise, including Eviden’s fully European data encryption products.

Sovereignty is embedded by design across existing portfolios, with graduated levels tailored to each customer’s workloads. Open standards and interoperability sit at the core, aiming to reduce vendor lock-in.

The offering targets regulated sectors including the public sector, defence, financial services and healthcare. Atos Group digital sovereignty leader Michael Kollar said the initiative helps organisations ‘turn sovereignty into an operational capability.’

The launch complements the recent introduction of Atos Sovereign Agentic Studios, which focused on moving AI deployments into production under sovereign control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT researchers develop tool to estimate energy use of AI workloads

Researchers from the Massachusetts Institute of Technology and the MIT-IBM Watson AI Lab have developed a rapid estimation system that calculates the energy consumption of AI workloads in seconds, offering a major improvement over traditional methods that take hours or days.

The tool, known as EnergAIzer, is designed to support data centre operators as AI demand accelerates and electricity consumption rises. With AI infrastructure expected to account for a significant share of US power usage in the coming years, more efficient resource planning has become increasingly critical.

EnergAIzer analyses repeatable workload patterns and GPU behaviour to generate fast predictions of energy use across different hardware setups. After incorporating real GPU measurements, the system achieves high accuracy while remaining lightweight and adaptable to current and future chip designs.
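EnergAIzer’s internals are not described in detail, but the kind of quick calculation such a tool automates can be sketched as follows. This is an illustrative, hypothetical estimator only — the function name, the overhead factor, and the example figures are assumptions, not part of the MIT system:

```python
# Hypothetical sketch: scale measured per-GPU power draw by predicted
# runtime to get a rough workload energy figure, the kind of estimate
# a tool like EnergAIzer produces in seconds.

def estimate_energy_kwh(avg_gpu_power_w: float,
                        num_gpus: int,
                        runtime_hours: float,
                        overhead_factor: float = 1.3) -> float:
    """Rough energy estimate for an AI workload.

    avg_gpu_power_w -- measured average power draw per GPU (watts)
    num_gpus        -- number of GPUs the workload uses
    runtime_hours   -- predicted runtime from workload patterns
    overhead_factor -- assumed cooling/CPU/network overhead multiplier
    """
    gpu_energy_wh = avg_gpu_power_w * num_gpus * runtime_hours
    return gpu_energy_wh * overhead_factor / 1000.0  # Wh -> kWh

# Example: 8 GPUs drawing 400 W for 10 hours is 32 kWh of GPU energy,
# about 41.6 kWh once the assumed overhead is included.
print(estimate_energy_kwh(400, 8, 10))  # 41.6
```

The point of fast estimation is that a figure like this becomes available before deployment, so different hardware setups can be compared up front rather than measured after the fact.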

By providing immediate feedback on energy consumption, the tool allows developers and operators to optimise workloads, reduce waste, and test different configurations before deployment. The approach is positioned as a practical step towards improving sustainability across large-scale AI systems.

Why does it matter? 

Energy use is becoming one of the defining constraints of AI growth, as large-scale models push data centres towards unprecedented electricity demand. A tool like EnergAIzer directly addresses this bottleneck by making power consumption visible and measurable before deployment.

Faster and more accurate estimation changes how AI systems are designed and scaled. Rather than reacting to energy costs after deployment, developers and operators can optimise workloads in advance, cutting waste and improving efficiency.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Meta partners with Overview and Noon Energy to power AI data centres

Meta has announced two energy partnerships to support its AI infrastructure, teaming up with Overview Energy for space solar power and Noon Energy for ultra-long-duration storage, with up to 1 GW reserved under each agreement.

Overview Energy operates satellites in geosynchronous orbit, roughly 22,000 miles above Earth, where sunlight is constant. The satellites collect solar energy and beam it to existing ground-based solar farms as low-intensity, near-infrared light, enabling around-the-clock electricity generation without requiring additional land or grid infrastructure.

Noon Energy’s technology relies on modular, reversible solid-oxide fuel cells and carbon-based storage, offering over 100 hours of energy storage. Meta has reserved up to 1 GW/100 GWh, with an initial 25 MW/2.5 GWh pilot demonstration expected by 2028. The company describes this as among the largest commitments to ultra-long-duration storage in the industry.
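The reserved figures are internally consistent with the 100-hour storage claim: duration is simply reserved energy divided by reserved power, as a quick check shows (using only the figures quoted above):

```python
# Storage duration (hours) = reserved energy / reserved power.
# Figures are those quoted in the announcement, converted to common units.

def storage_duration_hours(energy_gwh: float, power_gw: float) -> float:
    return energy_gwh / power_gw

print(storage_duration_hours(100, 1))      # full reservation: 100.0 hours
print(storage_duration_hours(2.5, 0.025))  # 25 MW / 2.5 GWh pilot: 100.0 hours
```

Both the full reservation and the pilot scale to the same 100-hour duration, which is what distinguishes ultra-long-duration storage from typical 4-hour grid batteries.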

Both partnerships build on Meta’s existing energy portfolio, which includes more than 30 GW of contracted clean and renewable energy. The company is also one of the largest corporate purchasers of nuclear energy in the US, with 7.7 GW secured across agreements with Vistra, TerraPower, Oklo and Constellation Energy.

Overview Energy’s orbital demonstration is planned for 2028, with commercial delivery to the US grid potentially starting as early as 2030. Noon Energy’s demonstration project targets the same year, with its modular design allowing capacity to scale alongside Meta’s growing data centre footprint.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI policy updated by Australian Research Council

The Australian Research Council has updated its policy on the use of generative AI in its grants programmes, setting out how the rules apply to applicants, administering organisations, and assessors in the National Competitive Grants Program.

The revised policy has officially taken effect and applies to applications and assessments for Discovery Indigenous 2027 and to all scheme rounds opening thereafter.

The policy says applicants may use generative AI tools to support tasks such as testing ideas, improving language, and summarising text, but remain responsible for the content they submit and are considered the authors of that content.

Administering organisations are also responsible for ensuring that applications are complete, accurate, and free from false or misleading information, while delegated research leaders must certify that participants are responsible for the authorship and intellectual content of applications and that they have not infringed the intellectual property rights of others.

A notable change in the revised policy is that assessors are now permitted to use generative AI tools in limited ways. The ARC says assessors may use AI only to correct or improve grammar, spelling, formatting, and the readability of drafted assessments.

At the same time, the policy states that assessors must not use AI to help form an opinion on the quality of an application and must preserve the confidentiality of all application materials. Inputting any application material into public generative AI tools such as ChatGPT, Gemini, Claude, or Perplexity is described by the ARC as a serious breach of confidentiality and is not permitted.

The ARC also says assessors will be asked about their use of AI and must be transparent when requested. Where assessors’ inappropriate use of generative AI is suspected, the ARC may remove that assessment from the process. If a breach is established following investigation, the ARC may impose consequential actions in addition to any imposed by the assessor’s employing institution.

The revised policy explains that its approach is shaped by concerns including intellectual integrity and authorship, safeguarding intellectual property, culturally appropriate use of data, content reliability and bias, human oversight and expert judgement, and energy and environmental impacts. It also states that the ARC will continue to monitor developments in generative AI and update the policy as required.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN experts warn of growing risks from digital surveillance and AI misuse

UN human rights experts have raised concerns about the global expansion of digital surveillance technologies and their impact on fundamental freedoms, warning that current practices risk undermining democratic participation and civic space.

In a joint statement, the experts said that surveillance tools are increasingly used in ways that may be incompatible with international human rights standards. They noted that such technologies are often deployed against civil society, journalists, political opposition, and minority groups, contributing to what they described as a ‘chilling effect’ on freedom of expression and dissent.

The experts highlighted the growing use of advanced technologies, including AI, in areas such as law enforcement, counter-terrorism, and border management. They said that, without adequate legal safeguards, these tools can enable large-scale monitoring, predictive profiling, and the amplification of bias, potentially leading to disproportionate targeting of individuals and groups.

According to the statement, digital surveillance systems are part of broader ecosystems that involve collaboration among governments, private companies, and data intermediaries. These interconnected systems can expand state surveillance capabilities and increase the complexity of assessing their impact on human rights.

The experts also pointed to the role of legal frameworks, noting that broadly defined laws on national security, extremism, and cybercrime may contribute to the misuse of surveillance technologies. Such measures, they said, can affect the work of civil society organisations and other actors operating in the public sphere.

To address these challenges, the experts called for stronger safeguards, including clearer limits on surveillance practices, risk-based regulation of AI systems, and improved oversight mechanisms. They emphasised the importance of human rights impact assessments throughout the lifecycle of digital technologies, as well as the need for accountability and access to remedies in cases of harm.

Why does it matter?

The statement also highlighted the importance of data protection, system testing, and validation to reduce risks associated with digital surveillance tools. It called on governments to align national legislation with international human rights standards and ensure independent oversight of surveillance activities.

The experts further suggested that international cooperation may be needed to address cross-border implications, including the potential development of a binding international framework governing digital surveillance technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!