Europol warns legal gaps could weaken child abuse detection online

Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.

Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.

Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations alongside traditional policing methods.

Europol processed around 1.1 million CyberTips in a single year, many originating from the National Center for Missing & Exploited Children and shared across 24 European countries.

These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.

Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.

Such a development could undermine broader security efforts and weaken the protection of minors across the EU.

The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.

Ensuring continued cooperation between platforms and authorities remains a key factor in safeguarding children and addressing the growing threat of online exploitation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and NVIDIA unveil AI tools for nuclear energy permitting and operations

Microsoft has announced an AI collaboration with NVIDIA to support nuclear energy projects across permitting, design, construction, and operations. In a post published on 24 March, the company said the initiative aims to provide end-to-end tools for the nuclear sector, focusing on streamlining permitting, accelerating design, and optimising operations.

Microsoft frames the effort within a broader energy challenge, arguing that rising power demand and long project timelines are creating pressure to accelerate the delivery of firm, carbon-free power. The company says customised engineering, fragmented data, and manual regulatory review slow nuclear projects. It presents AI as a way to make project development more repeatable, traceable, secure, and predictable.

The post says the collaboration spans the full lifecycle of a nuclear plant. Microsoft describes a model in which digital twins, high-fidelity simulations, and AI-assisted workflows support design and engineering, licensing and permitting, construction and delivery, and operations and maintenance.

According to the company, engineers would be able to reuse design patterns, model the impact of changes before construction begins, and link project decisions to supporting evidence and applicable rules. Microsoft also says generative AI can assist with drafting and gap analysis in permit documentation, while predictive modelling and operational digital twins can support anomaly detection and maintenance planning.

Microsoft says traceability and auditability are central to the approach. The company lists four intended qualities of the system: traceable records linking engineering decisions to evidence and regulations, audit-ready documentation, secure use within a governed environment, and predictable outcomes through simulations intended to identify delays before they occur in the real world.

Several case examples are included in the post. Microsoft says Aalo Atomics shortened its permitting process by 92% using the Generative AI for Permitting solution and estimates annual savings of $80 million.

Aalo Atomics Chief Technology Officer Yasir Arafat is quoted as saying: ‘Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.’

Microsoft also says Southern Nuclear has deployed Copilot agents across engineering and licensing workstreams to improve consistency, reuse knowledge faster, and support decision-making. Idaho National Laboratory is described as an early adopter in the US federal context, with Microsoft saying the lab is using AI capabilities to automate the assembly of engineering and safety analysis reports and to create standard methodologies for regulators to adopt the tools safely.

The post also expands beyond those three examples. Microsoft says Everstar, described as an NVIDIA Inception startup, is bringing domain-specific AI for nuclear to Azure to support project workflows and governed data pipelines.

Everstar Chief Executive Officer Kevin Kong is quoted as saying: ‘The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.’

Microsoft also says Atomic Canyon’s Neutron platform is available on the Microsoft Marketplace for nuclear developers via established procurement channels.

At the technical level, Microsoft says the collaboration brings together NVIDIA Omniverse, NVIDIA Earth-2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer. The company presents the stack as a digital ecosystem for nuclear energy on Azure.

The official post is a corporate announcement rather than an independent assessment of the approach’s effectiveness. The published note outlines the company’s intended use cases, named partners, and customer examples, but it does not provide a third-party evaluation of the broader claims regarding delivery speed, regulatory confidence, or sector-wide impact.

Zimbabwe advances AI national strategy with UNESCO support

Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step towards shaping its digital future.

Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.

The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.

Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities.

The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.

Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.

The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.

Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.

Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.

International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.

By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.

UK tests social media bans for children in national pilot

The UK government has launched a large-scale pilot programme to test social media restrictions in the homes of 300 teenagers, aiming to improve children’s well-being.

The initiative, led by the Department for Science, Innovation and Technology and supported by Science Secretary Liz Kendall, will run for six weeks and examine how limits on digital platforms affect young people’s daily lives, including sleep, schoolwork, and family relationships.

Families across the UK will be divided into groups testing different approaches. Some parents will block access to social media entirely, while others will introduce a one-hour daily limit on popular platforms such as Instagram, TikTok, and Snapchat.

Another group will implement overnight curfews, restricting access between 9 pm and 7 am, while a control group will maintain existing usage patterns rather than introducing changes.

Participants will be interviewed before and after the trial to assess behavioural and practical outcomes, including how easily restrictions can be enforced and whether teenagers attempt to bypass controls.

The pilot runs alongside a national consultation on children’s digital well-being, which has already received nearly 30,000 responses. Government officials and academic experts will analyse data gathered from both initiatives to guide future policy decisions.

The programme aims to ensure that any regulatory steps are evidence-based, reflecting real-life experiences rather than theoretical assumptions about digital behaviour.

Alongside the government trials, an independent scientific study funded by the Wellcome Trust will examine the effects of reduced social media use among adolescents.

Led by researchers from the University of Cambridge and the Bradford Institute for Health Research, the study will involve around 4,000 students aged 12 to 15.

Findings are expected to provide deeper insight into how social media influences anxiety, sleep, relationships, and overall well-being, giving policymakers a stronger evidence base for shaping future online safety measures.

EU strengthens semiconductor strategy through Chips Act dialogue

Executive Vice-President Henna Virkkunen will host a high-level dialogue in Brussels to assess the implementation of the European Chips Act Regulation and gather industry feedback ahead of its planned revision.

Stakeholders from across the semiconductor ecosystem are expected to exchange views and present recommendations to shape future policy direction.

The initiative forms part of a broader European Commission strategy to reinforce technological sovereignty and competitiveness and to reduce heavy reliance on external suppliers.

The Chips Act seeks to strengthen Europe’s semiconductor ecosystem, improve supply chain resilience, and reduce strategic dependencies in critical technologies.

The dialogue follows a public consultation and call for evidence conducted in autumn 2025, with findings set to inform the upcoming legislative revision.

Industry representatives will provide direct input through a report outlining challenges, opportunities, and proposed policy adjustments, contributing to a more targeted and effective framework for semiconductor development.

Looking ahead, the revision of the Chips Act will be integrated into a wider Technological Sovereignty package designed to boost the capacity of Europe’s digital industries.

By combining stakeholder engagement with policy reform, the European Commission aims to ensure that semiconductor innovation and production can expand across the EU rather than remain constrained by reliance on external suppliers.

EDPB summarises conference on cross-regulatory cooperation in the EU

The European Data Protection Board has published a summary of its 17 March conference in Brussels on cross-regulatory interplay and cooperation in the EU from a data protection perspective. According to the EDPB, the event brought together representatives of the EU institutions, European Data Protection Authorities, academia, and industry.

Three panels structured the conference discussion. One focused on data protection and competition, another on the Digital Markets Act and the General Data Protection Regulation (GDPR), and a third on the Digital Services Act and the GDPR.

Discussion in the first panel centred on cooperation between regulatory bodies in data protection and competition, including lessons from the aftermath of the Bundeskartellamt ruling. The EDPB said speakers emphasised the need for regulators to align their approaches and recognise synergies between the two fields. Speakers also said data protection should be considered in competition analysis only when relevant and on a case-by-case basis. The EDPB added that it had recently agreed with the European Commission to develop joint guidelines on the interplay between competition law and data protection.

The second panel focused on joint guidelines on the Digital Markets Act and the GDPR, developed by the European Commission and the EDPB and recently opened to public consultation. According to the EDPB, speakers described the guidelines as an example of regulatory cooperation aimed at developing a coherent and compatible interpretation of the two frameworks while respecting regulatory competences. The Board said participants linked the guidelines to stronger consistency, legal clarity, and easier compliance. Some speakers also suggested changes to the final version, including points related to proportionality and the relationship between DMA obligations and the GDPR.

The final panel examined the interaction between the Digital Services Act and the GDPR. The EDPB said panellists referred to the protection of minors as one example, arguing that age verification should be effective while remaining fully in line with data protection legislation. Speakers also highlighted the need for coordination between the two frameworks, including cooperation involving the EU institutions such as the European Board for Digital Services, the European Commission, the EDPB, and national authorities. Emerging technologies such as AI were also mentioned in the discussion.

The event also featured keynote speeches from European Commission Executive Vice President Henna Virkkunen and European Parliament LIBE Committee Chair Javier Zarzalejos. According to the EDPB, Virkkunen said the Commission remained committed to cooperation between different frameworks and highlighted the need to support compliance through stronger coordination among regulators. Zarzalejos said close cross-regulatory cooperation was essential for consistency, effective enforcement, and trust, and pointed to the intersections among data protection law, competition law, the DMA, and the DSA.

EDPB Chair Anu Talus closed the conference by reiterating that the EDPB and European Data Protection Authorities are committed to supporting stakeholders in navigating what the Board described as a new cross-regulatory landscape. The EDPB said future work will include continued cooperation with the Commission on joint guidelines on the interplay between the AI Act and the GDPR, finalisation of the joint guidelines on the interplay between the DMA and the GDPR, and work on the recently announced Joint Guidelines on the interplay between data protection and competition law.

Australia eSafety warns on AI companion harms

Australia’s online safety regulator has found major gaps in how popular AI companion chatbots protect children from harmful and sexually explicit material. The transparency report assessed four services and concluded that age verification and content filters were inadequate for users under 18.

eSafety Commissioner Julie Inman Grant said many AI companions marketed as offering friendship or emotional support can expose young users to explicit chat and encourage harmful thoughts without effective safeguards. Most failed to guide users to support when self-harm or suicide issues appeared.

The report also showed several platforms lacked robust content monitoring or dedicated trust and safety teams, leaving children vulnerable to inappropriate inputs and outputs from AI systems. Firms relied on basic age self-declaration at signup rather than reliable checks.

New enforceable safety codes now require AI chatbots to block age-inappropriate content and offer crisis support tools, with potential civil penalties for breaches. Some providers have already updated age assurance features or restricted access in Australia following the regulator’s notices.

AI-EFFECT builds EU testing facility for AI in critical energy infrastructure

As Europe moves towards its climate-neutrality goals, integrating AI into energy systems is being presented as a way to improve efficiency, resilience, and sustainability. The EU-funded AI-EFFECT project is developing a European testing and experimentation facility (TEF) to support the development and adoption of AI solutions for the energy industry while ensuring safety, reliability, and compliance with EU regulations.

The TEF is described as a virtual network linking existing laboratories and computing resources across several EU countries. It is designed to provide standardised testing environments, risk and certification workflows, and replicable methods for developing, testing, and validating AI applications for critical energy infrastructures under diverse, real-world conditions.

The facility operates through four national nodes in Denmark, Germany, the Netherlands, and Portugal, each focused on a different set of energy challenges. In Denmark, the node led by the Technical University of Denmark is testing AI in virtual and physical multi-energy systems, including coordination between electric power grid operations and district heating systems in the Triangle Region in Jutland and on the island of Bornholm.

In the Netherlands, the node at Delft University of Technology is extending the university’s ‘control room of the future’ with AI capabilities to address grid congestion as renewable generation increases.

In Portugal, the node led by INESC TEC is developing a trusted local energy data space intended to address privacy concerns and connectivity gaps through secure, consent-based energy data sharing. The AI-EFFECT project says consumers and prosumers will be able to manage data rights and permissions in line with EU regulations while working with AI-driven service providers on co-creation and testing.

In Germany, the Fraunhofer-led node is focused on AI for power distribution systems and is developing a near-realistic cyber-physical model to benchmark AI performance in congestion management and distributed energy resource integration against traditional engineering approaches.

Alberto Dognini, project coordinator at EPRI Europe in Ireland, wrote in an Enlit news item: ‘Together, these four nodes form the backbone of AI-EFFECT’s mission to make AI a trusted partner in Europe’s energy transition.’ He added: ‘From optimising multi-energy systems to enabling secure data sharing and improving grid resilience, these nodes will accelerate innovation while reducing risk for operators and consumers alike.’

AI-EFFECT is also sharing its work through public-facing initiatives, including the EPRI Current Podcast. In the episode ‘Exploring the AI-EFFECT on Europe’s Energy Future’, participants discuss the architecture and building blocks supporting distributed nodes across multiple countries and examine how the TEF could shape the future of Europe’s energy systems.

Australian regulator warns AI companions expose children to serious online risks

The eSafety Commissioner has reported that AI companion chatbots are failing to adequately protect children from harmful content, following a transparency review of services including Character.AI, Nomi, Chai, and Chub AI.

According to the report, these services did not implement robust safeguards against exposure to sexually explicit material or the generation of child sexual exploitation and abuse content.

The findings also indicate that most platforms relied on self-declared age verification and did not consistently monitor inputs or outputs across all AI models used.

eSafety Commissioner Julie Inman Grant stated that AI companions, often presented as sources of emotional or social support, are increasingly used by children but may expose them to harmful interactions.

She noted that none of the reviewed services had ‘meaningful age checks’ in place and highlighted concerns about the absence of safeguards related to self-harm and suicide content.

The report further identifies that several platforms in Australia did not refer users to crisis or mental health support services when harmful interactions were detected.

It also notes gaps in monitoring for unlawful content and limited investment in trust and safety staffing, with some providers reporting no dedicated moderation personnel.

The findings follow the implementation of Australia’s Age-Restricted Material Codes, which require online services, including AI chatbots, to prevent access to age-inappropriate content and provide appropriate safety measures.

These obligations complement existing Unlawful Material Codes and Standards, with non-compliance potentially leading to civil penalties.

Pinterest chief calls for stricter youth rules

The chief executive of Pinterest has voiced support for governments banning access to social media for people under 16. He cited rising concerns about mental health, screen addiction and online harms among young users.

He praised the Australian decision to ban social media for under-16s and urged other nations to adopt similar protections. He argued that existing tech safety measures have fallen short of keeping children secure online.

The executive warned that AI enhancements in social platforms may amplify behavioural influence on teens. He compared the inaction by tech companies to past resistance by harmful industries to public health safeguards.

He also highlighted surveys showing parental worries about explicit content and excessive screen time. Pinterest’s view supports calls for clear age limits, better tools for parents and stronger platform accountability.
