ITU to host AI for Good Global Summit in Geneva

The International Telecommunication Union (ITU) will organise the AI for Good Global Summit from 7 to 10 July 2026 at Palexpo in Geneva, Switzerland, according to an official announcement by the Swiss authorities.

On 6 and 7 July, the United Nations Global Dialogue on AI Governance will take place ahead of the summit. The dialogue is convened within the framework of a UN General Assembly resolution and will bring together policymakers, experts, and representatives of civil society to discuss approaches to AI governance.

The events will be held in parallel with the World Summit on the Information Society (WSIS) Forum (from 6 to 10 July), which focuses on issues related to digital cooperation and the development of the information society.

According to the official announcement, the co-location of these events is intended to facilitate exchanges between technical and policy communities working on AI and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU Market Integration Package prompts feedback from Circle

Circle has submitted feedback to the European Commission on its proposed Market Integration Package, which aims to strengthen capital markets integration and supervision across the EU.

The response praises digital finance reforms while recommending refinements to support institutional adoption and liquidity growth. Key recommendations include reforming the DLT Pilot Regime with adaptive thresholds, a clear path to permanent legislation, and accelerated updates.

Circle also calls for broader use of MiCA-compliant e-money tokens (EMTs) in securities settlement, ensuring alignment with the CSD Regulation and considering non-EU-issued stablecoins for cross-border interoperability.

The company urges careful calibration of centralised supervision under the European Securities and Markets Authority, focusing on systemic crypto firms and reducing administrative complexity for smaller providers.

Circle also highlights the need for legal certainty regarding the use of EMTs as collateral, which it argues would help EU markets remain globally competitive.

Circle emphasises the potential of clear and proportionate regulation to bridge traditional finance with on-chain infrastructure. The company positions regulated stablecoins like USDC and EURC as key tools for modernising Europe’s capital markets and unlocking new efficiency and liquidity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Europol-backed operation shuts down thousands of dark web fraud sites

A global law enforcement operation supported by Europol has led to the shutdown of more than 373,000 dark web websites linked to fraudulent activity and the advertisement of child sexual abuse material.

The operation, known as ‘Operation Alice’, was launched on 9 March 2026 under the leadership of German authorities, with participation from 23 countries. The investigation, which began in 2021, initially targeted a dark web platform referred to as ‘Alice with Violence CP’.

According to Europol, investigators identified a single operator responsible for managing a network of hundreds of thousands of onion domains. These websites advertised child sexual abuse material and cybercrime-as-a-service offerings, including access to stolen financial data and systems.

Authorities state that the services were fraudulent, designed to extract payments without delivering the advertised material.

The operation has so far resulted in the identification of 440 customers worldwide, with further investigations ongoing against more than 100 individuals. Law enforcement agencies also seized 105 servers and multiple electronic devices during the coordinated action.

Europol provided analytical support, facilitated information exchange, and assisted in tracing cryptocurrency transactions linked to the network.

Authorities also reported that measures were taken throughout the investigation to identify and protect children at risk. An international arrest warrant has been issued for the suspected operator, who is reported to have generated significant profits through the scheme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora strengthens AI video safety through consent and traceability controls

OpenAI has outlined a safety framework for Sora that embeds protections into how AI-generated video content is created, shared, and managed.

The system introduces visible and invisible provenance signals, including C2PA metadata and watermarks, designed to ensure that generated media can be identified and traced.
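To illustrate what such a provenance check can look like in practice, the sketch below probes a media file for a C2PA manifest using the open-source c2patool CLI from the Content Authenticity Initiative. It shows the general verification mechanism only, not OpenAI's own pipeline; the assumptions that c2patool is installed and that its JSON report carries an `active_manifest` field reflect the public C2PA tooling, not anything Sora-specific.

```python
# Illustrative sketch: checking a downloaded clip for C2PA provenance metadata.
# Assumes the open-source `c2patool` CLI is installed on PATH; Sora's own
# verification tooling may differ.
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the C2PA manifest report for a file, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],          # default output is a JSON manifest report
        capture_output=True, text=True,
    )
    if result.returncode != 0:       # no manifest store found, or tool error
        return None
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("clip.mp4")
if manifest is None:
    print("No C2PA provenance data found.")
else:
    # The report names the active manifest, which records the claim
    # generator (e.g. the tool that produced the media).
    print("Provenance manifest present:", manifest.get("active_manifest"))
```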

The framework emphasises consent and control. Users can generate video content from images of real individuals only after confirming they have permission, while the ‘characters’ feature enables controlled use of personal likeness, with the ability to revoke access at any time.

Additional safeguards apply to content involving minors or young-looking individuals, with stricter moderation rules and enforced watermarking.

Safety mechanisms operate across the entire lifecycle of content. Generation is subject to layered filtering that assesses prompts and outputs for harmful material, including sexual content, self-harm promotion, and illegal activity.

These automated systems are complemented by human review and continuous testing to address emerging risks linked to increasingly realistic video and audio outputs.
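As a rough sketch of what layered filtering means, the snippet below screens both the prompt before generation and a transcript of the output afterwards, using the publicly documented OpenAI moderation endpoint as a stand-in classifier. Sora's internal filters are not public, so this illustrates the pattern rather than the actual implementation.

```python
# Minimal sketch of layered filtering: screen the prompt before generation
# and the output transcript after. Illustrative only; Sora's internal
# classifiers are not public. Uses the standard OpenAI moderation endpoint.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Run text through a moderation model and report whether it is flagged."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return result.results[0].flagged

prompt = "A timelapse of a city skyline at dusk"
if is_flagged(prompt):
    raise ValueError("Prompt rejected by pre-generation filter")

# ... video generation would happen here ...

output_transcript = "Generated narration text"
if is_flagged(output_transcript):
    raise ValueError("Output rejected by post-generation filter")
```

In a production pipeline, these verdicts would gate generation and publication rather than raise exceptions, and would feed the human-review queue the framework describes.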

The system also introduces protections specific to audio and user interaction. Generated speech is analysed for policy violations, and attempts to replicate the style of living artists or existing works are restricted.

Users of Sora retain control over their content through reporting tools, sharing settings, and the ability to remove material, reflecting a broader approach that aligns AI-generated media with safety, transparency, and accountability standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further identifies that several platforms in Australia did not refer users to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI investment reshapes euro area markets and financial systems

Philip R. Lane, Member of the Executive Board of the ECB, highlighted in his speech at the ECB-SAFE-RCEA International Conference on the Climate-Macro-Finance Interface (3CMFI) that euro area firms with high AI intensity have experienced stronger revenue growth, operating margins, and earnings per share.

The advantage narrows when financial institutions are excluded, and internal funding remains essential, as well-capitalised firms are more likely to adopt AI while smaller firms face investment barriers.

European venture capital and private credit are growing but remain far below US levels, limiting start-up scaling and prompting some to relocate abroad.

Banks are embracing AI extensively, particularly for fraud detection, marketing, chatbots, and credit scoring. Proprietary tools are mostly developed in-house, while specialised external providers support cybersecurity and regulatory reporting.

AI boosts operational efficiency, risk assessment, and credit pricing, yet concentration in a few frontier firms and rising reliance on market-based finance introduce potential financial risks.

Lane noted that monetary policy implications are uncertain, as AI may enhance productivity and incomes differently depending on whether it is labour- or capital-augmenting.
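The distinction can be made concrete with a textbook production function (our illustration, not part of Lane's speech):

```latex
Y = F(K,\, A_L L)   % labour-augmenting: AI scales effective labour input
Y = F(A_K K,\, L)   % capital-augmenting: AI scales effective capital
```

Labour-augmenting AI raises the effective labour supply and tends to lift wages, while capital-augmenting AI raises the effective capital stock and shifts returns toward capital, so the two cases carry different implications for incomes and for the natural rate of interest.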

High capital expenditure and increased energy demand during AI adoption could add inflationary pressure, while global concentration of AI activity in the US and China may limit domestic investment, influencing the euro area's natural rate of interest.

The European Central Bank is systematically integrating AI into its analytical and operational environment. Machine-learning tools support forecasting, scenario analysis, and extraction of signals from alternative data, while workflow automation and agentic AI enhance efficiency and reduce manual workload.

The ECB’s digitalisation programme aims to scale AI across business processes, ensuring technology complements expert judgement while maintaining reliability, traceability, and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA introduces infrastructure-level security model for autonomous AI agents

OpenShell, an open-source runtime introduced by NVIDIA, is designed to support the secure deployment of autonomous AI agents within enterprise environments.

According to NVIDIA, OpenShell applies security controls at the infrastructure level rather than within the model or application layer. The runtime ensures that each agent operates inside an isolated sandbox, where system-level policies define and enforce permissions, resource access, and operational constraints.

The company states that such an approach separates agent behaviour from policy enforcement, preventing agents from overriding security controls or accessing restricted data.

OpenShell enables organisations to define and monitor a unified policy layer governing how autonomous systems interact with files, tools, and enterprise workflows.
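NVIDIA's announcement describes this pattern but not a public API, so the sketch below is purely hypothetical: every class and field name is invented, illustrating only how a policy held outside the agent's reach can mediate its file and tool access.

```python
# Hypothetical sketch of infrastructure-level policy enforcement for an AI
# agent sandbox. Not OpenShell's actual API; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class SandboxPolicy:
    """System-level permissions defined by the operator, not the agent."""
    allowed_paths: set[str] = field(default_factory=set)
    allowed_tools: set[str] = field(default_factory=set)
    max_cpu_seconds: int = 60

class AgentSandbox:
    """Mediates every agent action; the agent never touches the policy."""

    def __init__(self, policy: SandboxPolicy):
        self._policy = policy  # held by the runtime, outside the agent's reach

    def open_file(self, path: str):
        if path not in self._policy.allowed_paths:
            raise PermissionError(f"Policy denies file access: {path}")
        return open(path)

    def call_tool(self, name: str, *args):
        if name not in self._policy.allowed_tools:
            raise PermissionError(f"Policy denies tool: {name}")
        ...  # dispatch to the approved tool

policy = SandboxPolicy(allowed_paths={"/data/reports.csv"},
                       allowed_tools={"search"})
sandbox = AgentSandbox(policy)
```

The point of the pattern is that the policy object never passes through the agent, so a misbehaving agent cannot rewrite its own permissions.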

Additionally, OpenShell forms part of the NVIDIA Agent Toolkit and is complemented by NemoClaw, a reference stack designed to support the deployment of continuously operating AI assistants.

NVIDIA indicates that the system can run across cloud, on-premises, and local computing environments, while maintaining consistent policy enforcement.

The company also reports collaboration with industry partners, including Cisco, CrowdStrike, Google Cloud, and Microsoft Security, to align security practices for AI agent deployment. Both OpenShell and NemoClaw are currently in early preview.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Licence revocations hit unregistered crypto firms in Canada

Canada has increased crypto oversight, revoking registrations for nearly three dozen firms due to compliance failures. The move follows investigative reporting that uncovered widespread irregularities in the sector.

The Financial Transactions and Reports Analysis Centre of Canada removed 23 companies in one week, adding to previous actions against about a dozen other crypto firms.

Officials described the shift as part of a broader effort to address risks tied to virtual currencies, including fraud and money laundering.

Findings from the International Consortium of Investigative Journalists’ investigation highlighted clusters of crypto businesses operating without proper registration, particularly in Toronto.

Many of these services reportedly focused on converting digital assets into cash, raising concerns about gaps in oversight and compliance with anti-money laundering rules.

Authorities also flagged suspicious transaction patterns, including activity linked to wallets allegedly associated with Iran-backed groups. While regulators have promised further action, analysts warn that delayed enforcement and structural weaknesses may continue to expose the system to illicit financial flows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sydney set to become hub for AI innovation with Oracle centre

Oracle has launched the AI Customer Excellence Centre (AI CEC) in Sydney to help organisations adopt and scale AI technologies across Australia and Oceania. The centre will act as a hub for collaboration and skills development, enabling businesses to test AI solutions in real-world settings.

The AI CEC provides access to Oracle and partner technologies, with flexible deployment options through Oracle Cloud Infrastructure (OCI). Organisations can receive training, test early-stage AI innovations, and pilot proof-of-concept projects in secure cloud environments.

The centre supports industries such as healthcare, public sector, financial services, and telecommunications, helping companies accelerate AI adoption while improving efficiency and decision-making.

Experts highlight the centre’s potential to bridge the gap between AI experimentation and measurable business impact. Rising compute demand shows AI moving from pilots to production, while hands-on testing helps organisations reduce risk and validate initiatives.

Oracle plans to continue collaborating with governments, partners, and industry to ensure responsible, secure, and trustworthy AI adoption, reinforcing Australia’s position as a leader in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK pushes platforms to tackle AI abuse and online violence against women

The Department for Science, Innovation and Technology has called on online service providers to strengthen measures against digital harms targeting women and girls, as part of a commitment to halve such violence within a decade.

In a letter published on 23 March 2026, Science, Innovation and Technology Secretary Liz Kendall outlined expectations for platforms operating under the Online Safety Act.

The letter states that the government has strengthened criminal law and regulatory frameworks, including new offences related to harmful pornographic practices and intimate image abuse.

It confirms that sharing or threatening to share sexually explicit deepfakes without consent constitutes a criminal offence, while the non-consensual creation of such content has also been criminalised and is being designated as a priority offence under the Act.

Further measures include amendments to the Crime and Policing Bill to ban so-called ‘nudification’ tools and extend illegal content duties to AI chatbots.

The government is also introducing a requirement for platforms to remove non-consensual intimate images within 48 hours, with a focus on reducing repeated reporting burdens for victims.

The Secretary of State urged companies to implement recommendations from Ofcom’s guidance on online safety for women and girls, including risk assessments, stronger privacy settings, and limits on the visibility of harmful content.

Platforms are expected to comply by the end of the year, with progress to be monitored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!