The Office of the UN High Commissioner for Human Rights has issued a call for inputs to support a report on how new and emerging technologies are affecting human rights defenders, including women human rights defenders, in the digital age.
Issued under Human Rights Council resolution 58/23, the call seeks submissions by 31 March 2026 and forms part of a wider effort to examine how digital technologies are reshaping the conditions under which defenders work, communicate, and stay safe.
According to the OHCHR, the report will look at how digital and emerging technologies affect the work, privacy, communications, and security of human rights defenders. The call notes that digital tools have transformed both how defenders operate and the threats they face, with consequences for their safety online and offline.
The questions set out in the call are organised into four broad areas: legislative and regulatory measures, digital communications, privacy restrictions, and corporate responses. The OHCHR specifically asks for information on online safety and cybercrime laws, internet shutdowns, platform attacks, content moderation, surveillance tools, biometric surveillance, encryption, AI-related risks, and how companies assess and respond to harms affecting human rights defenders on their services.
The OHCHR invited member states, civil society, industry, and other stakeholders to submit written inputs in English, French, or Spanish. Those submissions will inform online consultations in April and the preparation of a report to the Human Rights Council under resolution 58/23.
Why does it matter?
Because the call treats the digital environment facing human rights defenders as a governance issue in its own right, rather than only as a technical or security concern. It brings together surveillance, platform accountability, encryption, AI, online harassment, and internet shutdowns under a single human rights framework, while signalling that the OHCHR wants evidence not only on state conduct, but also on how private companies shape civic space in the digital age.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Government of Canada has launched a formal review of the Privacy Act, opening a broader effort to modernise how the federal public sector governs personal data in an increasingly digital administrative environment.
Led by the Treasury Board of Canada Secretariat and announced by Shafqat Ali, President of the Treasury Board, the process will reassess how more than 250 government institutions collect, use, share, and protect personal information.
The review places particular emphasis on improving how data is managed across government programmes, with reform proposals focused on more secure information-sharing, less duplication, and greater accuracy in public administration. Canadian authorities say the aim is to introduce designated official data sources while ensuring that any reuse of personal information serves individuals directly or delivers a clear public benefit.
The process also points to more structural changes, including recognising privacy as a fundamental right and aligning legal definitions more closely with international standards. It is further intended to harmonise procedures for accessing personal information and to update the federal privacy framework to support a more connected digital state.
Consultations will continue through mid-2026, with feedback expected to feed into a final report in winter 2026–27. Taken together, the review suggests that Canada is rethinking how privacy protection, public-sector data sharing, and institutional accountability should operate in a modern digital governance system.
Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights. At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.
Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.
Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.
Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.
Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.
The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.
For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.
A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework.
The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms.
Some of the sharpest weaknesses lie in areas central to public-interest AI governance. Only 11% of companies said they assess environmental impact, while just 7% evaluate the human rights impact of the AI they use. Human oversight also remains limited, with only 12% reporting a policy that ensures human supervision of AI systems.
The report also points to weak accountability and data governance structures. Only a small minority of companies could identify who is responsible for ethical risks across the AI lifecycle, while three-quarters showed no evidence of policies to verify the quality of AI training data.
Fewer than one in five reported conducting privacy or data protection impact assessments specific to AI, and only one in five had policies governing data sharing with third-party AI vendors.
Workforce preparedness appears similarly underdeveloped. While 30% of companies said they offer AI training programmes, only 12% provide structured training with comprehensive coverage. The report argues that many businesses now acknowledge the importance of skills development and workforce transition, but rarely explain how workers are supported in practice or how concerns can be raised and addressed.
Taken together, the findings suggest that the main test for responsible AI is shifting from principle to proof. The issue is no longer whether companies say the right things about ethical AI, but whether they can demonstrate that accountability, oversight, and remedies actually work when AI systems are deployed.
The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.
Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.
Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.
Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.
The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.
AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.
However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.
A new study argues that cookie consent banners should be scrapped, claiming they fail to protect user privacy and instead create frustration. The research highlights how repeated pop-ups have become a defining feature of the modern internet.
The paper suggests that cookie banners, originally introduced under data protection laws, have led to ‘performative compliance’ rather than meaningful consent. Users often click through notices without understanding them, weakening the purpose of privacy regulation.
Researchers say the system may even normalise data tracking by encouraging habitual acceptance. Instead of improving transparency, the approach risks obscuring how personal data is collected and used across digital platforms.
The study calls for regulators to move beyond banner-based consent towards more effective privacy protections. It argues that current rules may hinder the development of better solutions by giving the impression that the problem has already been addressed.
Public concern over big tech companies is growing in Switzerland, according to a new survey by gfs.bern conducted on behalf of the Mercator Foundation Switzerland. A large majority of respondents view major technology firms as primarily profit-driven, while also expressing unease about their broader influence on society and politics.
Survey findings show that 90% of respondents believe big tech companies are mainly motivated by profit, while 94% support stronger protections for children and young people on social media platforms. Concerns extend beyond commercial behaviour, with 84% worried about political influence from the countries where these companies are based and 82% fearing increasing dependence on firms from the United States and China.
Overall perceptions in Switzerland remain mixed: 21% of respondents express a positive view of big tech companies, 40% hold a neutral stance, and 38% report negative impressions. Similar attitudes have been observed across Europe, where surveys in countries such as France and Germany indicate that many citizens consider existing regulatory frameworks insufficient.
Despite concerns about corporate influence, attitudes towards digitalisation itself remain broadly positive. Around 58% of respondents see digitalisation as beneficial overall, and 53% believe it offers personal advantages. However, only 48% think it benefits society as a whole, while 46% perceive its impact on democratic processes as negative.
A strong majority expects public institutions to take on greater responsibility for managing digital transformation. Around 88% support government efforts to ensure transparency in AI decision-making, while 86% want human oversight in critical situations. High levels of trust in Swiss authorities suggest public backing for a more active state role in shaping digital policy and safeguarding democratic values.
Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.
The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.
These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.
Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.
The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.
The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.
In 2019, Facebook CEO Mark Zuckerberg announced Facebook Horizon, a VR social experience that allows users to interact, create custom avatars, and design virtual spaces. Zuckerberg saw the platform, later renamed Horizon Worlds, as the beginning of a new era of VR social networks, with users trading face-to-face interactions for digital ones.
To show his confidence in VR, Zuckerberg rebranded Facebook Inc. as Meta Platforms Inc. in October 2021, illustrating the company’s shift toward the metaverse as a broad virtual environment intended to integrate social interaction, work, commerce, and entertainment. Building on this new vision, Meta’s ambitions expanded beyond social interaction and entertainment, with the development roadmap including virtual real estate purchases and collaboration in virtual co-working spaces.
Fast forward to 17 March 2026, and the scale of Meta’s retreat from the metaverse vision has become unmistakable. In an official update, the company said it was ‘separating’ VR from Horizon so that each platform could grow with greater focus, while also making Horizon Worlds a mobile-only experience. Under the plan, Horizon Worlds and Events would disappear from the Quest Store by 31 March 2026, several flagship worlds would no longer be available in VR, and the Horizon Worlds app itself would be removed from Quest on 15 June 2026, ending VR access to Worlds altogether.
Yet Meta soon reversed part of the decision. In an Instagram Stories Q&A, CTO Andrew Bosworth said Horizon Worlds would remain available in VR after user backlash. Even so, the broader shift remained unchanged: Horizon Worlds was no longer a flagship VR project, but a much narrower product that reflected a clear contraction of Meta’s original metaverse ambition.
As it stands, Meta’s USD 80 billion investment seems less like a gateway to a new socio-technological era and more like one of the most expensive strategic miscalculations of the 21st century. The sunsetting of Horizon Worlds was certainly not a decision made on a whim, which raises the question: Why did the metaverse fail in the first place? Does it have a future in the AI landscape, and what does its retreat say about the politics of designing the future through corporate platforms?
Metaverse’s mainstream collapse
The most obvious reason for the metaverse’s failure was that it never became a mainstream social space. Meta’s strategy rested on the belief that large numbers of people would start using immersive virtual worlds as a normal setting for interaction, entertainment, and creative activity. That shift never happened at the scale needed to sustain the company’s ambitions.
One reason was friction. VR headsets were less practical than phones, more isolating than social media, and harder to integrate into everyday routines than the platforms people already used to communicate. Entering the virtual world required extra time, extra hardware, and a willingness to adapt to a different social environment. Most digital habits, however, are built around speed, familiarity, and ease of access.
Meta’s own March 2026 decision makes that failure difficult to deny. A company still convinced that immersive social VR was on its way to becoming mainstream would not have moved Horizon Worlds away from Quest and towards mobile. The shift suggested that the metaverse had failed to move from technological promise to everyday social practice.
The metaverse’s failure was not just one of convenience. It also struggled because it was never presented simply as a new digital space. It was framed as a future built largely on Meta’s own terms, with access tied to the company’s hardware, platforms, rules, and wider ecosystem. Such decisions made the metaverse feel less like an open evolution of the internet and more like a tightly managed corporate environment.
The distinction mattered because Meta was not merely launching another product. It was promoting a vision of how people might one day work, socialise, shop, and create online. Yet the more expansive that vision became, the more obvious it was that the system behind it remained closed and centralised. A future digital environment is harder to embrace when a single company controls the devices, spaces, distribution, and boundaries of participation.
Meta’s handling of Horizon Worlds clearly exposed that tension. The company could remove features, reshape access, alter incentives, and redirect the platform from the top down. Such a level of control may be standard for a private platform, but it sits uneasily with claims about building the next phase of digital life. In that sense, the metaverse failed not only because people were unconvinced by VR, but because its version of the future felt too corporate, too enclosed, and too disconnected from the openness people still associate with the internet.
Metaverse’s economic contradiction
The metaverse did not fail only as a social project. It also became increasingly difficult to justify on economic grounds. Meta spent heavily on Reality Labs while generating only limited returns from those investments. In its 2025 annual filing, the company said Reality Labs had reduced overall operating profit by around USD 19.19 billion for the year, while warning that similar losses would continue into 2026.
Losses on that scale might still have been acceptable if the metaverse had shown clear signs of momentum. However, there was little evidence of mass adoption, strong retention, or a durable path to monetisation. Virtual land, digital goods, branded experiences, and immersive workspaces never developed into the economic base of a new internet layer.
Instead, the metaverse began to look less like a future growth engine and more like a costly experiment with uncertain returns. The gap between spending and payoff became harder to ignore, especially as Meta continued to frame the metaverse as a long-term strategic priority. What used to be sold as the company’s next major frontier was increasingly difficult to justify in commercial terms.
The broader strategic context also changed. Meta’s own forward-looking statements pointed to increased hiring and spending in 2026, especially in AI. In practice, this meant the company was no longer choosing between the metaverse and inactivity, but between two competing visions of the future. AI was already delivering tangible gains in product development, infrastructure, and investor confidence.
In that competition for attention and capital, the metaverse lost. Meta’s pullback was also not an isolated case. Microsoft moved away from metaverse-first ambitions as well, retiring the Immersive space (3D) view in Teams meetings, Microsoft Mesh on the web, and Mesh apps for PC and Quest in December 2025. The services were replaced by immersive events in Teams, a narrower offering built around specific workplace functions rather than a broad metaverse vision.
The wider retreat matters because it suggests the problem was not limited to Meta’s execution. Another major tech company also stepped back from standalone immersive environments and turned to more limited, use-specific tools instead. A larger pattern emerged from that shift: grand metaverse narratives gave way to practical features, embedded tools, and industry-specific uses. In that sense, the metaverse has not entirely disappeared, but it did lose its status as the next internet.
Metaverse’s afterlife in the age of AI
The metaverse’s decline does not necessarily imply a complete disappearance. What seems more likely is that parts of it will survive in altered form, detached from the sweeping vision that once surrounded it. Rather than continuing as a standalone digital world meant to transform social life, the metaverse may persist as a set of tools, features, and immersive functions folded into other technologies.
AI is likely to play a role in that transition. It can lower the cost of building virtual environments, speed up avatar creation, automate elements of interaction design, and make digital spaces more responsive. In this sense, AI may succeed where the original metaverse struggled, not by reviving the same vision, but by making parts of it more practical and easier to use.
Such a distinction is important because it shifts the focus from ideology to utility. The metaverse was once marketed as the next stage of the internet, yet its more durable applications now appear to lie in narrower settings where immersion serves a clear purpose. Training, design, simulation, and industrial planning are all contexts in which virtual environments can offer measurable value without becoming a universal social destination.
What might survive, then, is not the metaverse as it was originally imagined, but a smaller set of immersive capabilities embedded in gaming, education, industry, and workplace systems. Avatars, digital agents, simulations, and adaptive virtual spaces may all remain relevant, but as components rather than the foundation of a new social order.
The shift also helps explain the political lesson of the metaverse’s collapse. Large-scale investment, aggressive branding, and executive certainty were not enough to secure public legitimacy. Meta tried to present the metaverse as an inevitable horizon, yet users did not embrace it, markets did not reward it in proportion to the spending, and the company itself eventually narrowed the project it had once elevated into a corporate identity.
In that sense, the metaverse matters even in failure. Its retreat does not simply mark the end of an overhyped product cycle. It also reveals the limits of top-down corporate future-making, especially when private platforms try to define the direction of collective digital life before society has decided whether such a future is desirable or necessary.
Conclusion
The metaverse failed because it asked too much of users, promised too much to investors, and concentrated too much power in a platform model that never convincingly earned public trust. Meta’s retreat from Horizon Worlds makes that failure difficult to ignore, while Microsoft’s parallel narrowing of immersive ambitions suggests the problem extended beyond one company’s misjudgement.
Immersive VR technologies are unlikely to vanish, and AI may even extend some of their useful applications. Yet the metaverse as a universal social future has largely collapsed under the combined weight of weak adoption, unsustainable economics, and an overly corporate vision of digital life. What remains is not the next internet, but a reminder that the future cannot simply be declared into existence by the companies most eager to own it.
A judge in Amsterdam has ordered AI chatbot Grok and platform X to stop generating and distributing explicit deepfake images. The ruling targets so-called ‘undressing’ content and illegal material involving minors.
The case was brought by Offlimits, which argued that safeguards were failing. The Dutch court found sufficient evidence that harmful images could still be created despite existing restrictions.
The court imposed a penalty of €100,000 per day for violations, with a maximum of €10 million. Access to Grok on X must also be suspended if the system does not comply with the order.
The decision highlights growing legal pressure on AI platforms to control the misuse of generative tools. Regulators and courts are increasingly demanding stronger protections against online abuse and illegal content.