Online Safety Act brings progress, but UK children still face harm online

A new report from Internet Matters suggests the UK’s Online Safety Act has introduced more visible safety measures for children, but has not yet delivered the step change needed to make their online lives meaningfully safer. Drawing on surveys and focus groups with children and parents, the report presents an early view of how the law is affecting families in practice.

The findings point to some clear signs of progress. Parents and children report seeing more safety features, including improved reporting tools, content filters, restrictions on certain functions, and stronger parental controls. Many children also say the content they encounter online is becoming more age-appropriate.

At the same time, the report argues that important weaknesses remain. Children continue to encounter harmful content at high rates, while age verification is widely seen as easy to bypass. Internet Matters also says that some of the issues families care most about, including excessive screen time and the risks linked to AI-generated content, are still not adequately addressed under the current framework.

The report concludes that parents are still carrying too much of the burden of keeping children safe online. It calls for stronger enforcement, more effective age assurance, tighter limits on harmful features, and a broader safety-by-design approach to digital services used by children in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI turns disaster news into structured global risk maps

A new AI-powered dataset developed by the European Commission’s Joint Research Centre, in cooperation with the Italian technology company Engineering Ingegneria Informatica and the Institute of Health and Society at the University of Louvain, is turning fragmented disaster reporting into structured knowledge to help researchers and policymakers better understand how crises unfold and interact.

The dataset covers more than 3,000 disaster events across 175 countries and 26 hazard types, drawing on global reporting to reduce geographical and thematic gaps in existing databases.

Climate and geological disasters, including floods, hurricanes, earthquakes, and wildfires, are processed into structured ‘storylines’ that trace causes, impacts, and responses.

A key feature of the system is its ability to identify cascading effects, in which one event triggers a chain of secondary impacts, such as infrastructure disruption, agricultural losses, or disease outbreaks.

Unlike traditional datasets that record impacts in isolation, the AI-generated knowledge graphs reveal interconnected risk dynamics that are often hidden in standard reporting.

The pipeline uses large language models and retrieval-augmented techniques to extract relevant articles and turn them into structured summaries and visual networks.

Why does it matter?

The development shifts disaster analysis away from fragmented reporting and towards more structured, interconnected intelligence. By showing how hazards cascade into broader social, economic, and environmental impacts, it can help policymakers and emergency services anticipate secondary risks more effectively, rather than reacting to isolated events.

China closes consultation on digital virtual human services

The Cyberspace Administration of China has closed its public consultation on the draft Administrative Measures for Digital Virtual Human Information Services, which set out proposed rules for digital virtual human services provided to the public in China.

The notice states that the consultation opened in April 2026 and that comments were accepted until 6 May 2026. According to the draft, the measures would apply to internet information services delivered to the public within China through digital virtual humans.

The draft says providers and users must process data for lawful purposes and within a lawful scope, use data from legal sources, and fulfil their data security responsibilities. It also requires technical and other necessary measures to protect data storage and transmission and to prevent leaks or improper use.

The text further requires digital virtual human service providers and users to establish security risk monitoring, warning, emergency response, anti-addiction mechanisms, and stronger content-direction management, while also retaining logs. Providers whose services have public opinion attributes or social mobilisation capacity would also be required to complete algorithm filing procedures and security assessments in line with existing national rules.

Beyond cybersecurity and data protection, the draft includes provisions on personal information, personality rights, intellectual property, content controls, labelling requirements, and protections for minors. It defines digital virtual humans as virtual figures in the non-physical world that simulate human appearance and may have voice, behaviour, interaction abilities, or personality traits, using graphics, digital image processing, or AI technologies.

Major publishers sue Meta over Llama AI training

Meta and Mark Zuckerberg are facing a new copyright lawsuit from five major publishers (Hachette, Macmillan, McGraw-Hill, Elsevier, and Cengage) and author Scott Turow. The plaintiffs accuse the company of using millions of copyrighted books, journal articles, textbooks, and scholarly works to train its Llama AI models without permission. Filed in the US District Court for the Southern District of New York (Manhattan federal court), the complaint seeks monetary compensation, an injunction, and the destruction of allegedly infringing copies held by Meta.

The complaint argues that Meta’s AI strategy relied on protected works from trade, education, and academic publishing, including content allegedly taken from pirate libraries such as LibGen and Anna’s Archive, as well as broad web scrapes containing subscription-only material. The publishers also claim Zuckerberg personally directed or authorised the conduct, a charge Meta is expected to contest vigorously.

At the centre of the lawsuit is a policy question now shaping AI governance worldwide: whether large-scale copying for model training can be justified as fair use, or whether it requires permission, transparency, and compensation. Meta and other AI developers argue that training enables transformative innovation, while rights holders say commercial models are being built from creative and scholarly labour without licensing. A previous Meta win in an authors' case showed that courts may accept fair-use arguments, but only where plaintiffs fail to prove clear market harm.

Either way, the publishers are trying to make that market-harm argument harder to dismiss. Their filing describes Llama as an ‘infinite substitution machine’, capable of generating long-form books, educational materials, and scholarly-style outputs that may compete with human-authored works. The case also points to the alleged erosion of licensing markets, arguing that harm occurs not only when AI outputs imitate books, but also when copyrighted works are copied into commercial training pipelines without consent.

The US Copyright Office’s 2025 report said that fair use in generative AI training requires case-by-case analysis, with market effects and the source of the training material playing central roles. In the EU, the AI Act has shifted the debate toward transparency by requiring general-purpose AI providers to publish summaries of their training data and to comply with the EU copyright rules, including rights reservations for text and data mining.

Why does it matter?

The Meta case reflects a global shift in digital governance: AI copyright disputes are no longer isolated lawsuits, but part of a broader effort to define lawful data supply chains. Anthropic's $1.5 billion settlement over pirated books, the EU's training-data transparency rules, and continuing legal disputes in the US all point in the same direction: courts and regulators are asking whether AI innovation can remain competitive while respecting the rights, labour, and markets that make high-quality knowledge possible.

European Commission publishes first Digital Markets Act review

The European Commission has published its first formal review of the Digital Markets Act, assessing how the regulation is affecting the behaviour of large online platforms in the EU digital economy. According to the review, the law has produced visible changes in some areas, while also exposing continuing problems in implementation and enforcement.

The review points to changes in user choice since the DMA entered into force in March 2024. These include support for third-party app stores and prompts on devices to select browsers or search engines, alongside reported increases in usage and downloads of alternative services.

Enforcement action is also a central part of the assessment. In April 2025, Apple was fined €500 million for blocking developers from directing users to cheaper purchasing options, while Meta was fined €200 million over its ‘consent or pay’ model. Both companies are appealing the decisions.

At the same time, the review identifies clear implementation challenges. It says investigations are taking around twice as long as the 12-month target, while legal procedures are being used to slow compliance. It also raises broader questions about whether fast-growing areas such as AI tools and cloud platforms should eventually be brought within the scope of the regulation.

The Digital Markets Act is therefore presented less as a completed intervention than as an ongoing regulatory process. The review suggests that its long-term impact will depend not only on the rules already in force, but also on how consistently they are enforced and how the EU responds to changes in digital markets.

Why does it matter?

The review matters because it shows that the real test of the Digital Markets Act is no longer whether the EU can write rules for large platforms, but whether it can enforce them quickly and adapt them to new market realities. Early changes in user choice suggest the law is starting to affect platform behaviour. However, delays in investigations and questions around AI and cloud services show that the regulatory contest is still evolving.

New Meta age assurance system aims to prevent underage access

Meta has expanded its use of AI to strengthen age assurance and improve enforcement of underage account policies across its platforms. The systems are designed to detect users under 13 for removal and to place suspected teens into protected Teen Account settings on Instagram and Facebook in regions including the EU, Brazil, and the US.

The technology analyses a range of signals, including profile information, user activity, and other contextual indicators, to estimate age more accurately. Automated systems are also being used to support faster and more consistent review of reports related to underage use.

Visual analysis has also become part of Meta’s broader detection approach, with the company saying its systems look for general age-related indicators rather than attempting to identify specific individuals. Reporting tools have been simplified, and AI-assisted moderation is being used to improve the speed and reliability of enforcement decisions.

Alongside these enforcement measures, Meta is increasing parental engagement through notifications and guidance to encourage more accurate age reporting and safer online behaviour. The wider effort reflects growing pressure on platforms to move beyond self-declared age checks and to build stronger systems to protect younger users.

Why does it matter?

The move signals that age assurance is becoming a core platform governance issue rather than a secondary moderation tool. Meta is trying to show that large social platforms can use AI not only to recommend or personalise content, but also to enforce minimum age rules at scale. That matters because regulators are increasingly questioning whether self-declared age data is enough to protect minors online. It also points to a broader shift in which platforms are expected to combine safety obligations, automated detection, and parental tools into a more active system of child protection.

Australia expands collaboration efforts in key science and technology areas

The Australian Government Department of Industry, Science and Resources has announced $6.2 million in funding for nine international projects under round two of the Global Science and Technology Diplomacy Fund (GSTDF).

The programme supports collaboration, innovation and commercialisation in priority technology areas. The selected projects focus on AI, advanced manufacturing, quantum technologies and hydrogen, with several initiatives applying AI to areas such as robotics, satellite networks and ocean forecasting.

According to the department, Australian researchers will work with international partners across the Asia-Pacific region, with projects spanning fields from healthcare to environmental monitoring and space technologies.

The funding reflects a broader effort to deepen international cooperation and advance strategic technologies, with collaborations involving countries including Singapore, Vietnam, Japan, Malaysia, New Zealand, and South Korea, all supporting innovation linked to Australia.

ILO warns lifelong learning is critical for the future AI economy

The International Labour Organization has warned that governments must place lifelong learning at the centre of economic and social policy as AI, digitalisation and demographic shifts continue transforming labour markets worldwide. The organisation said stronger and more inclusive learning systems are necessary to prevent widening inequality between workers, industries and countries.

According to the ILO’s new report, titled ‘Lifelong learning and skills for the future’, only 16% of people aged between 15 and 64 participated in structured training during the previous year. Access remains significantly higher among full-time employees in formal companies, where employer-supported training reaches 51%.

The ILO report warns that workers in informal jobs and smaller enterprises continue relying mainly on learning through experience instead of structured education programmes. Furthermore, the study found that employers increasingly seek combinations of digital, socio-emotional, communication and problem-solving skills rather than narrow technical expertise alone.

While demand for AI-related capabilities is expected to increase, the report noted that most workers currently use ready-made AI tools that require broader digital literacy, critical thinking and collaborative abilities instead of specialist engineering knowledge.

The ILO also highlighted the growing importance of green and care economy skills. It estimates that 32% of workers globally already perform environmentally relevant tasks, while demand for long-term care workers could almost double by 2050.

The organisation called for greater public investment, stronger institutional coordination and inclusive lifelong learning strategies capable of supporting workers throughout rapidly changing technological and economic transitions.

Cybersecurity and AI safety in focus at European Parliament discussion

Members of the European Parliament’s Committee on the Internal Market and Consumer Protection are set to discuss the safety of AI systems that could pose serious security risks.

According to the event description, the discussion will examine how existing EU legislation applies in practice, particularly the AI Act and the Cybersecurity Act. It will focus on how advanced AI systems are developed and managed when they may present security risks, on how companies are implementing EU rules, and on the challenges they face.

Experts from ENISA, the European Union Agency for Cybersecurity, and the European Commission are expected to take part. They will explain how the relevant legal and regulatory frameworks operate in practice across the EU, including the rules governing AI systems.

The discussion also comes as the European Commission has proposed changes to the Cybersecurity Act. In the European Parliament, the Committee on Industry, Research and Energy is leading work on the file, while IMCO is contributing an opinion focused on internal market and consumer protection aspects.

White paper sets priorities for Europe’s digital sovereignty and tech competitiveness

A new white paper by GITEX AI Europe, in partnership with research firm LUE, outlines key priorities for strengthening Europe's digital sovereignty and long-term technological competitiveness.

The study identifies scaling AI computing power, expanding cloud infrastructure, adopting open-source standards, and increasing startup investment as central pillars. These measures aim to align innovation capacity with broader economic and industrial growth.

It highlights rising demand for AI infrastructure, with data centre expansion and energy integration seen as essential. The report also stresses the need for sovereign cloud systems to ensure greater control over data, alongside the role of open-source technologies in enabling flexibility and transparency.

The white paper concludes that stronger investment and coordinated policy are required to support deep-tech growth and prevent talent loss, with initiatives and partnerships shaping Europe's digital future.
