Europol-backed operation shuts down thousands of dark web fraud sites

A global law enforcement operation supported by Europol has led to the shutdown of more than 373,000 dark web sites linked to fraudulent activity and the advertisement of child sexual abuse material.

The operation, known as ‘Operation Alice’, was launched on 9 March 2026 under the leadership of German authorities, with participation from 23 countries. The investigation, which began in 2021, initially targeted a dark web platform referred to as ‘Alice with Violence CP’.

According to Europol, investigators identified a single operator responsible for managing a network of hundreds of thousands of onion domains. These websites advertised child sexual abuse material and cybercrime-as-a-service offerings, including access to stolen financial data and systems.

Authorities state that the services were fraudulent, designed to extract payments without delivering the advertised material.

The operation has so far resulted in the identification of 440 customers worldwide, with further investigations ongoing against more than 100 individuals. Law enforcement agencies also seized 105 servers and multiple electronic devices during the coordinated action.

Europol provided analytical support, facilitated information exchange, and assisted in tracing cryptocurrency transactions linked to the network.

Authorities also reported that measures were taken throughout the investigation to identify and protect children at risk. An international arrest warrant has been issued for the suspected operator, who is reported to have generated significant profits through the scheme.


Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.
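
What such a provenance check involves can be illustrated with a short sketch. The snippet below is not OpenAI's implementation: it is a minimal heuristic that only detects whether a JPEG carries an APP11 marker segment, the container in which C2PA JUMBF manifests are embedded. Actually parsing the manifest and verifying its signatures requires a full C2PA SDK.

```python
# Minimal sketch, not OpenAI's implementation: C2PA manifests in JPEG files
# are carried in JUMBF boxes inside APP11 (0xFFEB) marker segments, so the
# presence of such a segment is a cheap hint that provenance data exists.
# Manifest parsing and signature checks need a real C2PA SDK.
import struct
import sys

def has_app11_segment(path: str) -> bool:
    """Walk the JPEG marker segments and report whether any APP11 exists."""
    with open(path, "rb") as f:
        if f.read(2) != b"\xff\xd8":      # SOI marker: not a JPEG at all
            return False
        while True:
            marker = f.read(2)
            if len(marker) < 2 or marker[0] != 0xFF:
                return False              # malformed stream or EOF
            if marker[1] == 0xEB:         # APP11: JUMBF/C2PA container
                return True
            if marker[1] == 0xDA:         # SOS: compressed data starts here
                return False
            size = f.read(2)
            if len(size) < 2:
                return False
            (length,) = struct.unpack(">H", size)
            f.seek(length - 2, 1)         # skip the segment payload

if __name__ == "__main__":
    print(has_app11_segment(sys.argv[1]))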

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.
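
As a rough illustration of what layered filtering means in practice, the sketch below screens a request twice: once on the prompt before generation, and once on the produced output before release. All names are hypothetical, and the keyword rules stand in for the trained moderation classifiers a production system would use.

```python
# Hypothetical sketch of layered generation-time filtering; the keyword rules
# below are placeholders for the trained moderation classifiers a real
# pipeline would call, and generation itself is stubbed out.
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    category: str | None = None   # which policy category was triggered

def classify(text: str) -> Verdict:
    """Placeholder classifier mapping flagged keywords to policy categories."""
    rules = {
        "explicit": "sexual_content",
        "self-harm": "self_harm_promotion",
        "counterfeit": "illegal_activity",
    }
    lowered = text.lower()
    for keyword, category in rules.items():
        if keyword in lowered:
            return Verdict(False, category)
    return Verdict(True)

def generate_video(prompt: str) -> str:
    # Layer 1: screen the prompt before any compute is spent on generation.
    verdict = classify(prompt)
    if not verdict.allowed:
        raise ValueError(f"prompt blocked: {verdict.category}")
    output = f"<rendered video for: {prompt}>"   # stand-in for the model call
    # Layer 2: screen the finished output before it can be shared.
    verdict = classify(output)
    if not verdict.allowed:
        raise ValueError(f"output blocked: {verdict.category}")
    return output
```

In a real pipeline each `classify` call would be a separate model-backed service, and blocked attempts would also feed the human-review loop described above.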

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.


Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further finds that several of the platforms did not refer users in Australia to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.


NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that this approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
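
NVIDIA has not published OpenShell's policy syntax in this announcement, so the following Python sketch is purely illustrative of the pattern described: every agent action is mediated by a policy object the agent cannot modify, rather than by checks inside the agent's own code.

```python
# Illustrative only: OpenShell's real policy format and APIs are not shown in
# the announcement, so every name here is hypothetical. The point is the
# pattern: permissions live in a frozen policy enforced outside the agent.
from dataclasses import dataclass, field

@dataclass(frozen=True)   # frozen: the agent cannot rewrite its own policy
class AgentPolicy:
    readable_paths: frozenset = field(default_factory=frozenset)
    allowed_tools: frozenset = field(default_factory=frozenset)

class Sandbox:
    """Mediates every agent action against a policy the agent does not own."""

    def __init__(self, policy: AgentPolicy):
        self._policy = policy

    def read_file(self, path: str) -> str:
        if not any(path.startswith(p) for p in self._policy.readable_paths):
            raise PermissionError(f"policy denies read access to {path}")
        with open(path) as f:
            return f.read()

    def call_tool(self, name: str, *args) -> None:
        if name not in self._policy.allowed_tools:
            raise PermissionError(f"policy denies tool '{name}'")
        # ...dispatch to the approved tool implementation...

# The operator defines the policy outside the agent's code path.
sandbox = Sandbox(AgentPolicy(
    readable_paths=frozenset({"/srv/agent/data/"}),
    allowed_tools=frozenset({"search"}),
))
```

Because the policy object is created and enforced outside the agent, a misbehaving agent can only fail a check, never rewrite one, which is the separation the announcement attributes to OpenShell.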

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.


UK pushes platforms to tackle AI abuse and online violence against women

The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade.

In a letter published on 23 March 2026, Secretary of State Liz Kendall outlined expectations for platforms operating under the Online Safety Act.

The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse.

It confirms that sharing or threatening to share sexually explicit deepfakes without consent constitutes a criminal offence, while the non-consensual creation of such content has also been criminalised and is being designated as a priority offence under the Act.

Further measures include amendments to the Crime and Policing Bill to ban so-called ‘nudification’ tools and extend illegal content duties to AI chatbots.

The government is also introducing a requirement for platforms to remove non-consensual intimate images within 48 hours, with a focus on reducing repeated reporting burdens for victims.

The Secretary of State urged companies to implement recommendations from Ofcom’s guidance on online safety for women and girls, including risk assessments, stronger privacy settings, and limits on the visibility of harmful content.

Platforms are expected to comply by the end of the year, with progress to be monitored.


AI added to St Helens council strategic risk register

In the UK, St Helens Council has added AI and digital disruption to its strategic risk register as it seeks to strengthen governance and oversight. The change reflects growing concern about how emerging technologies could affect operations and services.

The updated register, now featuring 12 strategic risks, was presented ahead of a meeting of the council's audit and governance committee. Officials said effective risk management is vital to meeting the council's objectives and mitigating potential challenges.

AI and digital disruption were cited for the first time alongside risks linked to extreme weather and community cohesion. The council noted that ethical, data privacy and workforce confidence issues are among the challenges associated with integrating AI into public services.

Leaders said other risks, including cybersecurity threats and budget pressures, remain under review. The move comes as local authorities across the UK weigh the impacts of new technologies on service delivery and strategic planning.


US releases national AI policy framework

The Trump Administration unveiled a national AI framework to boost competitiveness, security, and benefits for Americans. The plan seeks to ensure that AI innovation supports all citizens while maintaining public trust in the technology.

Six key objectives form the foundation of the policy. These include protecting children online, empowering parents with tools to manage digital safety, strengthening communities and small businesses, respecting intellectual property, defending free speech, and fostering innovation.

The framework also prioritises workforce development to prepare Americans for AI-driven job opportunities.

Federal uniformity is considered critical to the plan’s success. The Administration warns that a patchwork of state regulations could stifle innovation and reduce the United States’ ability to lead globally.

Congress is encouraged to collaborate closely to implement the framework nationwide.

The Administration emphasises that the United States must lead the AI race, ensuring the benefits of AI reach all Americans while addressing challenges such as privacy, security, and equitable access to opportunities.


Social media linked to declining well-being among young people

The World Happiness Report 2026 has identified a deepening decline in well-being among young people, with increased social media use emerging as a key contributing factor. These findings suggest that digital habits are increasingly shaping life satisfaction, particularly across Western societies.

The report notes that younger age groups now report significantly lower happiness levels than in previous decades.

In regions such as North America and Western Europe, the decline coincides with a sharp rise in time spent on social media platforms. Researchers highlight that heavy usage is associated with measurable reductions in well-being, especially among younger users.

Alongside these trends, the report continues to rank Finland as the happiest country globally, reflecting broader stability in Nordic nations. However, such stability contrasts with emerging concerns about mental health and social outcomes in more industrialised regions, where digital environments are playing an increasingly influential role.

While the report identifies risks including cyberbullying, depression and online exploitation, it does not advocate for complete restrictions. Instead, it emphasises the need for carefully designed regulatory approaches that balance protection with the potential benefits of digital connectivity.


Deepfake abuse crisis escalates worldwide

AI-generated deepfake abuse is emerging as a serious global threat, with women and girls disproportionately affected by non-consensual and harmful digital content. Advances in AI make it easy to create manipulated content that can spread across platforms within minutes and reach millions.

Data highlights the scale of the issue. The vast majority of deepfake content online consists of explicit material, overwhelmingly targeting women.

Accessible and often free tools have lowered the barrier to entry, enabling widespread misuse. At the same time, the ability to endlessly replicate and share such content makes removal nearly impossible once it is published.

Legal responses remain fragmented, with many pre-existing laws leaving gaps in addressing AI-generated deepfake abuse. Enforcement issues, such as cross-border challenges and limited digital forensics capabilities, make it unlikely that perpetrators will face consequences.

Pressure is mounting on governments and technology platforms to act. Calls for reform include clearer legislation, faster obligations to remove content, improved law enforcement capabilities, and stronger support systems for victims.

Without coordinated global action, deepfake abuse is set to expand alongside the technologies enabling it.


Human data demand fuels new global digital economy

A growing number of individuals worldwide are participating in a new digital economy built around supplying data for AI systems.

Through platforms such as Kled AI and Silencio, users upload videos, audio recordings and personal interactions in exchange for payment, contributing to the development of increasingly sophisticated AI models.

The trend reflects a broader shift in the AI industry, where demand for high-quality human-generated data is rising as traditional web-based sources become more limited.

Researchers suggest that human data remains essential for improving system performance and modelling behaviour beyond existing datasets. As a result, data marketplaces have emerged as an alternative supply mechanism.

Economic considerations often shape participation. In regions facing limited employment opportunities or currency instability, earning income in global currencies can provide a meaningful financial incentive.

At the same time, similar practices are expanding in higher-income countries, where individuals seek supplementary income streams amid rising living costs.

However, the model introduces complex trade-offs.

Contributors may grant extensive usage rights over their data, sometimes on a long-term or irreversible basis. Experts note that such arrangements can reduce control over how personal information is reused, including in contexts not initially anticipated.

Concerns also extend to issues such as data security, transparency and the potential for misuse in areas including synthetic media and identity replication.
