EDPB adopts scientific research data guidelines and Europrivacy opinions

The European Data Protection Board (EDPB) has adopted guidelines on the processing of personal data for scientific research purposes during its latest plenary, and opened them for public consultation until 25 June. The Board also created a dedicated ‘sprint team’ to complete its upcoming guidelines on anonymisation by the summer.

According to the EDPB, the new guidelines are intended to provide researchers with greater clarity on how the General Data Protection Regulation (GDPR) applies to scientific research while protecting individuals’ fundamental rights. The Board says the text clarifies the meaning of ‘scientific research’ under the GDPR and sets out six indicative factors to help determine whether processing is carried out for scientific research purposes.

The guidelines also explain that further processing for scientific research purposes is presumed to be compatible with the initial purpose for collecting personal data, meaning controllers do not need to carry out the GDPR purpose compatibility test. The EDPB says controllers must still ensure that the legal basis for the initial processing is also suitable for the further processing of personal data for scientific research purposes.

EDPB Chair Anu Talus said: ‘Scientific research can drive societal progress and improve our daily lives. Our guidelines facilitate innovative research by helping researchers to navigate the GDPR. The EDPB is committed to supporting the scientific community and unlocking the full potential of scientific research in the EU while upholding data protection rights.’

On consent, the Board says controllers may rely on ‘broad consent’ when research purposes are not fully known at the time of data collection, provided appropriate safeguards are in place. It also says controllers may seek consent separately for individual research projects once their purposes become known, and that a combination of broad and dynamic consent is possible.

The guidelines also address the rights of individuals, including the rights to erasure and to object, and explain when limitations may apply in the context of scientific research. The EDPB says the text also clarifies how responsibilities should be allocated when several entities are involved in processing, and outlines safeguards such as anonymisation or pseudonymisation, secure processing environments, privacy-enhancing technologies, confidentiality arrangements, and conditions for further use.

In addition, the Board adopted two opinions on two sets of Europrivacy certification criteria for approval as European Data Protection Seals. One opinion approves an updated set of criteria whose scope now includes controllers and processors established outside Europe that are subject to Article 3(2) GDPR.

The second, adopted for the first time, recognises Europrivacy certification criteria as a European Data Protection Seal that can be used as a tool for transfers under Articles 42 and 46 GDPR. According to the EDPB, this will allow data importers outside Europe that are not subject to the GDPR to apply to the Europrivacy certification scheme for transferred data they receive.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU updates technology licensing competition rules to reflect data and digital markets

The European Commission has adopted revised rules governing technology transfer agreements (Technology Transfer Block Exemption Regulation and Guidelines on the application of Article 101 of the Treaty to technology transfer agreements), updating a framework originally introduced in 2014.

These changes aim to reflect developments in the digital economy, particularly the growing role of data and standardised technologies in enabling interoperability across markets.

Technology transfer agreements allow firms to license intellectual property such as patents, software and design rights, supporting the dissemination of innovation. While such agreements are often considered pro-competitive, they may also create risks if they restrict market access or distort competition.

The revised framework clarifies how these agreements are assessed under Article 101 of the Treaty on the Functioning of the European Union.

The updated rules introduce specific guidance on data licensing and licensing negotiation groups, addressing new market practices.

They also refine conditions under which agreements benefit from exemptions, including simplified criteria for early-stage technologies and clearer safeguards for technology pools linked to industry standards.

Overall, the revision by the EU seeks to improve legal certainty for businesses while ensuring that licensing practices support innovation, competition and the broader functioning of the single market. The new framework will apply from May 2026.


EU investigates Meta over WhatsApp AI access in major antitrust enforcement case

The European Commission has issued a supplementary charge sheet, formally a Supplementary Statement of Objections, to Meta, outlining concerns over potential restrictions on third-party AI assistants’ access to WhatsApp.

The move forms part of an ongoing investigation into a possible abuse of a dominant market position under EU competition rules.

The Commission’s preliminary assessment suggests that recent policy changes, including the introduction of access fees, may have effects equivalent to an earlier exclusion of competing AI services.

This raises concerns about barriers to entry and reduced competition in the emerging market for AI assistants.

As part of interim measures under Article 102 of the Treaty on the Functioning of the European Union, regulators are considering requiring Meta to restore access to its services under previous conditions.

Such measures aim to prevent serious and potentially irreversible harm to competition while the investigation continues.

The case has been expanded to cover the entire European Economic Area, reflecting coordination with national authorities.

These proceedings highlight increasing regulatory scrutiny of platform control over AI ecosystems and access to digital markets.


UK tests AI transcripts to improve access to justice and reduce court costs

The UK Ministry of Justice, alongside HM Courts & Tribunals Service, has launched a study examining how AI can be used to generate court transcripts more efficiently.

The initiative aims to reduce the cost and time required for accessing official court records.

Currently, transcript fees can be prohibitively expensive, limiting access for victims seeking clarity on court proceedings. The proposed use of AI-based systems, including in-house tools such as Justice Transcribe, could lower these barriers while maintaining required accuracy standards.

The policy forms part of broader efforts in the UK to modernise the justice system and enhance transparency. It aligns with legislative developments, including the Victims and Courts Bill, and plans to provide free access to sentencing remarks in Crown Court cases from 2027.

By improving access to legal records, the initiative seeks to strengthen accountability and support victims’ understanding of judicial processes, contributing to a more accessible and responsive justice system.


AI maps hidden structure in legal systems to support better regulation

A study from Sultan Qaboos University shows how AI can be used to map hidden structural relationships within legal systems, offering new ways to understand how laws interact and evolve.

Published in The Journal of Engineering Research, the research applies natural language processing and network analysis to Oman’s 2023 Labour Law.

The analysis reveals that legal provisions operate as an interconnected system rather than isolated rules. Certain articles emerge as highly influential ‘hubs’, with Article 147 identified as a central node whose modification could generate cascading effects across multiple parts of the legislation.

These interdependencies are visualised through network mapping techniques that highlight structural relationships not easily detected through traditional review.

To construct this model, researchers developed a four-stage methodology combining Arabic-language NLP tools with industrial engineering approaches. Legal texts were mapped using terminology and cross-referencing patterns, with outputs validated by Omani legislative experts to ensure accuracy and relevance.
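The hub-detection step described above can be sketched as a small network model. This is an illustrative sketch only, not the study's code: the cross-references below are invented (only Article 147's hub role comes from the article), and a simple degree count stands in for the richer centrality metrics and Arabic-language NLP pipeline the researchers actually used.

```python
# Treat each article as a node and each explicit cross-reference between
# articles as a directed edge, then rank articles by how many edges touch
# them to surface candidate "hub" provisions.
from collections import Counter

# Hypothetical cross-references: (citing_article, cited_article).
cross_refs = [
    (10, 147), (25, 147), (63, 147), (98, 147),
    (147, 150), (12, 25), (63, 98), (5, 12),
]

# Total degree (in + out) per article: a simple centrality proxy.
degree = Counter()
for citing, cited in cross_refs:
    degree[citing] += 1
    degree[cited] += 1

# Articles touched by the most cross-references are structural hubs:
# amending them could cascade through many dependent provisions.
hubs = [article for article, _ in degree.most_common(3)]
print("Top hub candidates:", hubs)  # Article 147 ranks first here
```

In a real pipeline the edge list would be extracted from the statute text itself (via citation patterns and shared terminology) and validated by domain experts, as the study describes.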

The study highlights links between labour law and broader regulatory domains, including commercial regulation, social protection, occupational health, and immigration policy.

The findings underline AI’s potential in the regulatory sector to improve coherence, reveal interdependencies, and support scalable, more consistent legal frameworks across jurisdictions.


US Supreme Court narrows ISP copyright liability, sharpening focus on intent with potential implications for generative AI

A unanimous US Supreme Court ruling this week has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement by focusing on a deceptively simple question: intent. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement; merely providing a service to the public while knowing some users will infringe is not enough.

Applying that standard, the Court found Cox Communications did neither, shielding it from a potential $1bn exposure following a long-running dispute that included a jury verdict later vacated.

The decision is now being read for its possible implications beyond ISPs, particularly in the escalating copyright battle between publishers/authors and generative AI firms. The key distinction raised is that broadband networks function as neutral conduits, whereas large language models are built specifically to produce fluent, human-like writing, including prose, poetry and dialogue, that can resemble the work of human authors.

In the article’s framing, that resemblance is not incidental but central to the product’s purpose: if a subscriber uses broadband to pirate a novel, the ISP did not build its network to enable that outcome, but an AI model prompted to write in a specific author’s style is designed to fulfil that request.

That contrast could open a new line of argument in AI litigation. While major US cases, such as suits brought by the Authors Guild and individual authors against OpenAI, Meta and others, have largely centred on whether training on copyrighted books is itself infringing, the Cox ruling highlights a second front: whether the systems’ purpose and optimisation for author-like output could be characterised as being ‘tailored for’ infringement or as purposeful inducement under an intent-based standard.

Publishers, who are simultaneously watching the lawsuits and negotiating licensing deals with AI companies, have so far been more cautious than the music industry was in its costly fight against Cox, an effort that ultimately produced a Supreme Court ruling that narrowed, rather than expanded, leverage.

Why does it matter?

The broader takeaway is that copyright enforcement may increasingly turn not only on what was copied, but also on what the copying was for, an approach that could prove consequential for AI companies whose commercial proposition is generating human-quality creative work.


France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.


Metaverse’s decline and the harsh limits of a virtual future

In 2019, Facebook CEO Mark Zuckerberg announced Facebook Horizon, a VR social experience that allows users to interact, create custom avatars, and design virtual spaces. Zuckerberg saw the platform, later renamed Horizon Worlds, as the beginning of a new era of VR social networks, with users trading face-to-face interactions for digital ones.

To show his confidence in VR, Zuckerberg rebranded Facebook Inc. as Meta Platforms Inc. in October 2021, illustrating the company’s shift toward the metaverse as a broad virtual environment intended to integrate social interaction, work, commerce, and entertainment. Building on this new vision, Meta’s ambitions expanded beyond social interaction and entertainment, with the development roadmap including virtual real estate purchases and collaboration in virtual co-working spaces.

Fast forward to 17 March 2026, and the scale of Meta’s retreat from the metaverse vision has become unmistakable. In an official update, the company said it was ‘separating’ VR from Horizon so that each platform could grow with greater focus, while also making Horizon Worlds a mobile-only experience. Under the plan, Horizon Worlds and Events would disappear from the Quest Store by 31 March 2026, several flagship worlds would no longer be available in VR, and the Horizon Worlds app itself would be removed from Quest on 15 June 2026, ending VR access to Worlds altogether.

Yet Meta soon reversed part of the decision. In an Instagram Stories Q&A, CTO Andrew Bosworth said Horizon Worlds would remain available in VR after user backlash. Even so, the greater shift remained unchanged: Horizon Worlds was no longer a flagship VR project, but a much narrower product that reflected a clear contraction of Meta’s original metaverse ambition.

As it stands, Meta’s USD 80 billion investment seems less like a gateway to a new socio-technological era and more like one of the most expensive strategic miscalculations of the 21st century. The sunsetting of Horizon Worlds was certainly not a decision made on a whim, which raises the question: Why did the metaverse fail in the first place? Does it have a future in the AI landscape, and what does its retreat say about the politics of designing the future through corporate platforms?

Metaverse’s mainstream collapse

The most obvious reason for the metaverse’s failure was that it never became a mainstream social space. Meta’s strategy rested on the belief that large numbers of people would start using immersive virtual worlds as a normal setting for interaction, entertainment, and creative activity. The shift never happened at the scale needed to sustain the company’s ambitions.

One reason was friction. VR headsets were less practical than phones, more isolating than social media, and harder to integrate into everyday routines than the platforms people already used to communicate. Entering the virtual world required extra time, extra hardware, and openness to adapt to a different social environment. Most digital habits, however, are built around speed, familiarity, and ease of access.

Meta’s own March 2026 decision makes that failure difficult to deny. A company still convinced that immersive social VR was on its way to becoming mainstream would not have moved Horizon Worlds away from Quest and towards mobile. The shift suggested that the metaverse had failed to move from technological promise to everyday social practice.

The metaverse’s failure was not just one of convenience. It also struggled because it was never presented simply as a new digital space. It was framed as a future built largely on Meta’s own terms, with access tied to the company’s hardware, platforms, rules, and wider ecosystem. Such decisions made the metaverse feel less like an open evolution of the internet and more like a tightly managed corporate environment.

The distinction mattered because Meta was not merely launching another product. It was promoting a vision of how people might one day work, socialise, shop, and create online. Yet the more expansive that vision became, the more obvious it was that the system behind it remained closed and centralised. A future digital environment is harder to embrace when a single company controls the devices, spaces, distribution, and boundaries of participation.

Meta’s handling of Horizon Worlds clearly exposed that tension. The company could remove features, reshape access, alter incentives, and redirect the platform from the top down. Such a level of control may be standard for a private platform, but it sits uneasily with claims about building the next phase of digital life. In that sense, the metaverse failed not only because people were unconvinced by VR, but because its version of the future felt too corporate, too enclosed, and too disconnected from the openness people still associate with the internet.

Metaverse’s economic contradiction

The metaverse did not fail only as a social project. It also became increasingly difficult to justify on economic grounds. Meta spent heavily on Reality Labs while generating only limited returns from those investments. In its 2025 annual filing, the company said Reality Labs had reduced overall operating profit by around USD 19.19 billion for the year, while warning that similar losses would continue into 2026.

Losses on that scale might still have been acceptable if the metaverse had shown clear signs of momentum. However, there was little evidence of mass adoption, strong retention, or a durable path to monetisation. Virtual land, digital goods, branded experiences, and immersive workspaces never developed into the economic base of a new internet layer.

Instead, the metaverse began to look less like a future growth engine and more like a costly experiment with uncertain returns. The gap between spending and payoff became harder to ignore, especially as Meta continued to frame the metaverse as a long-term strategic priority. What used to be sold as the company’s next major frontier was increasingly difficult to justify in commercial terms.

The broader strategic context also changed. Meta’s own forward-looking statements pointed to increased hiring and spending in 2026, especially in AI. In practice, this meant the company was no longer choosing between the metaverse and inactivity, but between two competing visions of the future. AI was already delivering tangible gains in product development, infrastructure, and investor confidence.

In that competition for attention and capital, the metaverse lost. Meta’s pullback was also not an isolated case. Microsoft moved away from metaverse-first ambitions as well, retiring the Immersive space (3D) view in Teams meetings, Microsoft Mesh on the web, and Mesh apps for PC and Quest in December 2025. The services were replaced by immersive events in Teams, a narrower offering built around specific workplace functions rather than a broad metaverse vision.

The wider retreat matters because it suggests the problem was not limited to Meta’s execution. Another major tech company also stepped back from standalone immersive environments and turned to more limited, use-specific tools instead. A larger pattern appeared from that shift: grand metaverse narratives gave way to practical features, embedded tools, and industry-specific uses. In that sense, the metaverse has not entirely disappeared, but it did lose its status as the next internet.

Metaverse’s afterlife in the age of AI

The metaverse’s decline does not necessarily imply a complete disappearance. What seems more likely is that parts of it will survive in altered form, detached from the sweeping vision that once surrounded it. Rather than continuing as a standalone digital world meant to transform social life, the metaverse may persist as a set of tools, features, and immersive functions folded into other technologies.

AI is likely to play a role in that transition. It can lower the cost of building virtual environments, speed up avatar creation, automate elements of interaction design, and make digital spaces more responsive. In this sense, AI may succeed where the original metaverse struggled, not by reviving the same vision, but by making parts of it more practical and easier to use.

Such a distinction is important because it shifts the focus from ideology to utility. The metaverse was once marketed as the next stage of the internet, yet its more durable applications now appear to lie in narrower settings where immersion serves a clear purpose. Training, design, simulation, and industrial planning are all contexts in which virtual environments can offer measurable value without becoming a universal social destination.

What might survive, then, is not the metaverse as it was originally imagined, but a smaller set of immersive capabilities embedded in gaming, education, industry, and workplace systems. Avatars, digital agents, simulations, and adaptive virtual spaces may all remain relevant, but as components rather than the foundation of a new social order.

The shift also helps explain the political lesson of the metaverse’s collapse. Large-scale investment, aggressive branding, and executive certainty were not enough to secure public legitimacy. Meta tried to present the metaverse as an inevitable horizon, yet users did not embrace it, markets did not reward it in proportion to the spending, and the company itself eventually narrowed the project it had once elevated into a corporate identity.

In that sense, the metaverse matters even in failure. Its retreat does not simply mark the end of an overhyped product cycle. It also reveals the limits of top-down corporate future-making, especially when private platforms try to define the direction of collective digital life before society has decided whether such a future is either desirable or necessary.

Conclusion

The metaverse failed because it asked too much of users, promised too much to investors, and concentrated too much power in a platform model that never convincingly earned public trust. Meta’s retreat from Horizon Worlds makes that failure difficult to ignore, while Microsoft’s parallel narrowing of immersive ambitions suggests the problem extended beyond one company’s misjudgement.

Immersive VR technologies are unlikely to vanish, and AI may even extend some of their useful applications. Yet the metaverse as a universal social future has largely collapsed under the combined weight of weak adoption, unsustainable economics, and an overly corporate vision of digital life. What remains is not the next internet, but a reminder that the future cannot simply be declared into existence by the companies most eager to own it.


California challenges federal approach with new AI rules

The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.

An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.

The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.

It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.

Federal guidance has discouraged state-level intervention, framing such efforts as obstacles to technological leadership.

The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.

California’s initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.


EU boosts fact-checking with €5 million disinformation resilience plan

The European Commission has committed €5 million to strengthen independent fact-checking networks, reinforcing efforts to counter disinformation across Europe. The initiative seeks to expand verification capacity in all EU languages while improving coordination among key stakeholders.

The programme introduces a comprehensive support system for fact-checkers, covering legal assistance, cybersecurity protection and psychological support.

It also establishes a centralised European repository of verified information, designed to enhance transparency and improve access to reliable content across the EU.

Led by the European Fact-Checking Standards Network, the project builds on existing frameworks such as the European Digital Media Observatory. The initiative forms part of the EU’s broader strategy to strengthen information integrity and safeguard democratic processes.

By reinforcing independent verification ecosystems, the programme reflects a policy-driven effort to address disinformation threats while supporting a more resilient and trustworthy digital environment across Europe.
