The United Nations System Staff College has highlighted growing interest across the UN and the wider peacebuilding community in how artificial intelligence is shaping conflict prevention, arguing that the technology can support peace efforts but cannot replace human judgement, diplomacy, and oversight.
The reflection draws on a three-part webinar series launched by UNSSC to examine AI governance, field use, and ethical risks in peacebuilding. According to the text, one message ran through all three discussions: AI may offer real value for conflict prevention, but its role should remain supportive rather than substitutive.
The piece argues that AI is already being used across the UN peace and security pillar and should be introduced only where it improves effectiveness, such as by handling repetitive tasks and allowing staff to focus on analysis, leadership, and political judgement. It also stresses that principles long associated with peacebuilding, including trust and ‘do no harm’, should apply across the full AI stack, from data and infrastructure to model design and deployment.
Examples cited from the webinar series include the use of augmented intelligence in early warning systems, where machine learning is combined with human contextual knowledge, and an AI-enabled WhatsApp chatbot used in Yemen to broaden participation in mediation, particularly among women and young people. The text presents these cases as evidence that AI can extend the reach of peacebuilding tools without replacing practitioners.
The final part of the reflection focuses on governance and ethics. It argues that while ethical AI principles are widely discussed, they need to be translated into practical, context-specific safeguards, especially in conflict settings. It also notes that risks differ across use cases such as early warning, social media monitoring, and mediation support, and says meaningful governance requires input from diplomats, researchers, mediators, and the private sector.
UNSSC says the webinar series drew between 300 and 500 registrants per session, which it presents as evidence of strong demand for more targeted learning on AI and peacebuilding. The college argues that its role should extend beyond convening discussion to turning those debates into practical knowledge for UN practitioners working at the intersection of AI and conflict prevention.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UK Department for Environment, Food & Rural Affairs (Defra) has linked the rollout of digital waste tracking to its wider effort to tackle waste crime in England, presenting stronger traceability as part of its Waste Crime Action Plan.
Defra says waste crime costs the economy an estimated £1 billion a year and continues to damage communities, the environment, and legitimate businesses. Its Waste Crime Action Plan for England combines tighter regulation, stronger enforcement, and faster clean-up of the most harmful illegal waste sites.
A central part of that approach is digital waste tracking. Defra says the system will create a near real-time record of where waste goes at each stage of its journey, making it harder for criminal operators to exploit gaps in the existing system. Better-quality data across the waste chain is also intended to support a more intelligence-led approach to regulation and enforcement.
The department has presented the launch of the public beta for the ‘Report Receipt of Waste’ service as a major step in that process. The service allows waste receivers to submit data on the waste they handle. It is intended to support a more accountable system in which waste movements can be tracked, verified, and audited.
Defra describes digital waste tracking as a shift away from a largely paper-based and bureaucratic system. For legitimate businesses, the department says the new approach should reduce administrative burdens while improving clarity and confidence across the sector.
The rollout will take place in phases. Defra says the first phase begins with the public beta and will become mandatory from October 2026 for licensed or permitted operators of waste receiving sites, including recycling centres, landfills, and treatment facilities. Around 12,000 permitted waste receiving sites will be covered in the first phase, with more than 100,000 operators expected to come into scope as the service expands.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UK House of Commons has backed government amendments to the Children’s Wellbeing and Schools Bill, insisting on its disagreement with the Lords’ amendments and proposing its own amendments in lieu. In the debate, ministers said the Children’s Wellbeing and Schools Bill will place a statutory duty on the Secretary of State to act following consultation, changing the wording from ‘may’ to ‘must’.
Education minister Olivia Bailey told MPs that the government is consulting on the mechanism, but that ‘under any outcome’ it will impose ‘some form of age or functionality restrictions for children under 16’. She added that curfews would be considered in addition to, not instead of, those restrictions.
Bailey said the Children’s Wellbeing and Schools Bill now requires a statutory progress report three months after Royal Assent, with regulations to be laid within 12 months after that. She said the government intends to move faster and aims to lay the regulations by the end of the year, while describing any further six-month extension as a backstop for ‘exceptional and unforeseen circumstances’ only.
Conservative and Liberal Democrat MPs argued that the timetable remained too slow. Conservative frontbencher Laura Trott said the revised proposal was ‘a huge step forward’ but warned that ‘every month of delay just leaves children more exposed to the harms of social media online’.
Liberal Democrat spokesperson Munira Wilson said the overall timeline could still amount to 21 months before action. The House later voted by 272 to 64 to insist on its disagreement with the Lords’ amendments and to approve the government’s amendments in lieu. Lords amendment 105C was also agreed to, allowing the Children’s Wellbeing and Schools Bill to move forward with the revised online safety provisions.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Ofcom has published a non-confidential version of its confirmation decision against 4chan, giving a fuller public account of one of the UK regulator’s early enforcement actions under the Online Safety Act.
The decision concerns 4chan.org and sets out Ofcom’s findings that the platform failed to comply with several duties under the Act. According to the regulator, those failures included failing to carry out a suitable and sufficient illegal content risk assessment, failing to clearly set out in its terms of service how users are to be protected from illegal content, and failing to use highly effective age assurance to prevent children from encountering pornographic content.
Ofcom said 4chan must now take a series of corrective steps, including completing an illegal content risk assessment, updating its terms of service, and implementing robust age assurance measures. The regulator also imposed separate financial penalties linked to each breach, including a substantially larger penalty connected to the child protection requirement.
The case is significant because it shows the Online Safety Act moving from general compliance expectations into concrete enforcement. Rather than only warning platforms about their duties, Ofcom is now publicly setting out what it considers to be specific operational failures and attaching financial consequences to them.
The decision also underlines the regulator’s broader approach to compliance. Ofcom has indicated that further daily penalties can apply after the relevant deadlines if required actions are not taken, showing that enforcement is not limited to one-off fines but can escalate where platforms continue to fall short.
At the same time, the publication of the decision gives platforms a clearer signal of what enforcement under the Act is likely to look like. The 4chan case suggests that Ofcom is focusing not only on the presence of harmful or illegal content itself, but also on whether platforms have the systems, rules, and protective measures in place that the law requires.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Public concern over big tech companies is growing in Switzerland, according to a new survey by gfs.bern conducted on behalf of the Mercator Foundation Switzerland. A large majority of respondents view major technology firms as primarily profit-driven, while also expressing unease about their broader influence on society and politics.
Survey findings show that 90% of respondents believe big tech companies are mainly motivated by profit, while 94% support stronger protections for children and young people on social media platforms. Concerns extend beyond commercial behaviour, with 84% worried about political influence from the countries where these companies are based and 82% fearing increasing dependence on firms from the United States and China.
Overall perceptions in Switzerland remain mixed: 21% of respondents express a positive view of big tech companies, 40% hold a neutral stance, and 38% report negative impressions. Similar attitudes have been observed across Europe, where surveys in countries such as France and Germany indicate that many citizens consider existing regulatory frameworks insufficient.
Despite concerns about corporate influence, attitudes towards digitalisation itself remain broadly positive. Around 58% of respondents see digitalisation as beneficial overall, and 53% believe it offers personal advantages. However, only 48% think it benefits society as a whole, while 46% perceive its impact on democratic processes as negative.
A strong majority expects public institutions to take on greater responsibility for managing digital transformation. Around 88% support government efforts to ensure transparency in AI decision-making, while 86% want human oversight in critical situations. High levels of trust in Swiss authorities suggest public backing for a more active state role in shaping digital policy and safeguarding democratic values.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In 2019, Facebook CEO Mark Zuckerberg announced Facebook Horizon, a VR social experience that allows users to interact, create custom avatars, and design virtual spaces. Zuckerberg saw the platform, later renamed Horizon Worlds, as the beginning of a new era of VR social networks, with users trading face-to-face interactions for digital ones.
To underline his confidence in VR, Zuckerberg rebranded Facebook Inc. as Meta Platforms Inc. in October 2021, signalling the company’s shift toward the metaverse as a broad virtual environment intended to integrate social interaction, work, commerce, and entertainment. Building on this new vision, Meta’s ambitions expanded beyond social interaction and entertainment, with the development roadmap including virtual real estate purchases and collaboration in virtual co-working spaces.
Fast forward to 17 March 2026, and the scale of Meta’s retreat from the metaverse vision has become unmistakable. In an official update, the company said it was ‘separating’ VR from Horizon so that each platform could grow with greater focus, while also making Horizon Worlds a mobile-only experience. Under the plan, Horizon Worlds and Events would disappear from the Quest Store by 31 March 2026, several flagship worlds would no longer be available in VR, and the Horizon Worlds app itself would be removed from Quest on 15 June 2026, ending VR access to Worlds altogether.
Yet Meta soon reversed part of the decision. In an Instagram Stories Q&A, CTO Andrew Bosworth said Horizon Worlds would remain available in VR after user backlash. Even so, the broader shift remained unchanged: Horizon Worlds was no longer a flagship VR project but a much narrower product, reflecting a clear contraction of Meta’s original metaverse ambition.
As it stands, Meta’s USD 80 billion investment seems less like a gateway to a new socio-technological era and more like one of the most expensive strategic miscalculations of the 21st century. The sunsetting of Horizon Worlds was certainly not a decision made on a whim, which raises the question: Why did the metaverse fail in the first place? Does it have a future in the AI landscape, and what does its retreat say about the politics of designing the future through corporate platforms?
Metaverse’s mainstream collapse
The most obvious reason for the metaverse’s failure was that it never became a mainstream social space. Meta’s strategy rested on the belief that large numbers of people would start using immersive virtual worlds as a normal setting for interaction, entertainment, and creative activity. The shift never happened at the scale needed to sustain the company’s ambitions.
One reason was friction. VR headsets were less practical than phones, more isolating than social media, and harder to integrate into everyday routines than the platforms people already used to communicate. Entering the virtual world required extra time, extra hardware, and openness to adapt to a different social environment. Most digital habits, however, are built around speed, familiarity, and ease of access.
Meta’s own March 2026 decision makes that failure difficult to deny. A company still convinced that immersive social VR was on its way to becoming mainstream would not have moved Horizon Worlds away from Quest and towards mobile. The shift suggested that the metaverse had failed to move from technological promise to everyday social practice.
The metaverse’s failure was not just one of convenience. The project also struggled because it was never presented simply as a new digital space. It was framed as a future built largely on Meta’s own terms, with access tied to the company’s hardware, platforms, rules, and wider ecosystem. Such decisions made the metaverse feel less like an open evolution of the internet and more like a tightly managed corporate environment.
The distinction mattered because Meta was not merely launching another product. It was promoting a vision of how people might one day work, socialise, shop, and create online. Yet the more expansive that vision became, the more obvious it was that the system behind it remained closed and centralised. A future digital environment is harder to embrace when a single company controls the devices, spaces, distribution, and boundaries of participation.
Meta’s handling of Horizon Worlds clearly exposed that tension. The company could remove features, reshape access, alter incentives, and redirect the platform from the top down. Such a level of control may be standard for a private platform, but it sits uneasily with claims about building the next phase of digital life. In that sense, the metaverse failed not only because people were unconvinced by VR, but because its version of the future felt too corporate, too enclosed, and too disconnected from the openness people still associate with the internet.
Metaverse’s economic contradiction
The metaverse did not fail only as a social project. It also became increasingly difficult to justify on economic grounds. Meta spent heavily on Reality Labs while generating only limited returns from those investments. In its 2025 annual filing, the company said Reality Labs had reduced overall operating profit by around USD 19.19 billion for the year, while warning that similar losses would continue into 2026.
Losses on that scale might still have been acceptable if the metaverse had shown clear signs of momentum. However, there was little evidence of mass adoption, strong retention, or a durable path to monetisation. Virtual land, digital goods, branded experiences, and immersive workspaces never developed into the economic base of a new internet layer.
Instead, the metaverse began to look less like a future growth engine and more like a costly experiment with uncertain returns. The gap between spending and payoff became harder to ignore, especially as Meta continued to frame the metaverse as a long-term strategic priority. What used to be sold as the company’s next major frontier became increasingly hard to defend in commercial terms.
The broader strategic context also changed. Meta’s own forward-looking statements pointed to increased hiring and spending in 2026, especially in AI. In practice, this meant the company was no longer choosing between the metaverse and inactivity, but between two competing visions of the future. AI was already delivering tangible gains in product development, infrastructure, and investor confidence.
In that competition for attention and capital, the metaverse lost. Meta’s pullback was also not an isolated case. Microsoft moved away from metaverse-first ambitions as well, retiring the Immersive space (3D) view in Teams meetings, Microsoft Mesh on the web, and Mesh apps for PC and Quest in December 2025. The services were replaced by immersive events in Teams, a narrower offering built around specific workplace functions rather than a broad metaverse vision.
The wider retreat matters because it suggests the problem was not limited to Meta’s execution. Another major tech company also stepped back from standalone immersive environments and turned to more limited, use-specific tools instead. A larger pattern emerged from that shift: grand metaverse narratives gave way to practical features, embedded tools, and industry-specific uses. In that sense, the metaverse has not entirely disappeared, but it has lost its status as the next internet.
Metaverse’s afterlife in the age of AI
The metaverse’s decline does not necessarily imply a complete disappearance. What seems more likely is that parts of it will survive in altered form, detached from the sweeping vision that once surrounded it. Rather than continuing as a standalone digital world meant to transform social life, the metaverse may persist as a set of tools, features, and immersive functions folded into other technologies.
AI is likely to play a role in that transition. It can lower the cost of building virtual environments, speed up avatar creation, automate elements of interaction design, and make digital spaces more responsive. In this sense, AI may succeed where the original metaverse struggled, not by reviving the same vision, but by making parts of it more practical and easier to use.
Such a distinction is important because it shifts the focus from ideology to utility. The metaverse was once marketed as the next stage of the internet, yet its more durable applications now appear to lie in narrower settings where immersion serves a clear purpose. Training, design, simulation, and industrial planning are all contexts in which virtual environments can offer measurable value without becoming a universal social destination.
What might survive, then, is not the metaverse as it was originally imagined, but a smaller set of immersive capabilities embedded in gaming, education, industry, and workplace systems. Avatars, digital agents, simulations, and adaptive virtual spaces may all remain relevant, but as components rather than the foundation of a new social order.
The shift also helps explain the political lesson of the metaverse’s collapse. Large-scale investment, aggressive branding, and executive certainty were not enough to secure public legitimacy. Meta tried to present the metaverse as an inevitable horizon, yet users did not embrace it, markets did not reward it in proportion to the spending, and the company itself eventually narrowed the project it had once elevated into a corporate identity.
In that sense, the metaverse matters even in failure. Its retreat does not simply mark the end of an overhyped product cycle. It also reveals the limits of top-down corporate future-making, especially when private platforms try to define the direction of collective digital life before society has decided whether such a future is either desirable or necessary.
Conclusion
The metaverse failed because it asked too much of users, promised too much to investors, and concentrated too much power in a platform model that never convincingly earned public trust. Meta’s retreat from Horizon Worlds makes that failure difficult to ignore, while Microsoft’s parallel narrowing of immersive ambitions suggests the problem extended beyond one company’s misjudgement.
Immersive VR technologies are unlikely to vanish, and AI may even extend some of their useful applications. Yet the metaverse as a universal social future has largely collapsed under the combined weight of weak adoption, unsustainable economics, and an overly corporate vision of digital life. What remains is not the next internet, but a reminder that the future cannot simply be declared into existence by the companies most eager to own it.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Meta has unveiled its first prescription-optimised AI glasses, expanding its wearable line with Ray-Ban Meta Gen 2 models for everyday vision correction. The launch targets users who already rely on prescription eyewear, offering a more integrated and comfortable experience.
The range includes Blayzer Optics and Scriber Optics with adjustable hinges, nose pads, and temple tips for a better fit. Pre-orders begin at USD 499 in the United States via Meta and Ray-Ban platforms, with wider availability in optical retailers and select global markets from 14 April.
Alongside the hardware launch, Meta is introducing new frame and lens colour combinations across its Ray-Ban Meta and Oakley Meta collections.
Additional AI-driven features are also rolling out, including hands-free nutrition tracking, WhatsApp message summaries, and improved on-device recall capabilities designed to enhance everyday communication.
Further software updates extend functionality with discreet handwriting input, in-lens navigation across US cities, and expanded media recording tools. The company positions its AI glasses as a multifunctional platform combining vision correction, connectivity, and real-time assistance.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The US Federal Trade Commission has taken action against OkCupid and Match Group Americas over allegations that the dating app shared users’ personal information, including photos and location data, with an unrelated third party, despite privacy promises that such sharing would not occur without notice or an opportunity to opt out.
According to the FTC’s complaint, OkCupid gave the third party access to personal data from millions of users even though the recipient was not a service provider, business partner, or affiliate within the company’s corporate family. The agency says consumers were not informed and were not given a chance to opt out.
The complaint says the third party sought large OkCupid datasets because OkCupid’s founders were financial investors in that company, even though it had no business relationship with the app. The FTC alleges that OkCupid provided access to nearly 3 million user photos, along with location and other information, without formal or contractual limits on how the data could be used.
Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said: ‘The FTC enforces the privacy promises that companies make. We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.’
The FTC also alleges that, since September 2014, Match and OkCupid have taken extensive steps to conceal and deny that the apps shared users’ personal information with the data recipient, including conduct the agency says obstructed its investigation. One example cited in the complaint is that, after a news report revealed the third party had obtained large OkCupid datasets, the company told the media and users that it was not involved with that third party.
Under the proposed settlement, OkCupid and Match would be permanently prohibited from misrepresenting how they collect, maintain, use, disclose, delete, or protect personal information, including photos, demographic data, and geolocation data. Restrictions would also cover how they describe the purposes of data collection and disclosure, as well as how they present privacy controls and consumer choices under state privacy laws.
The Commission vote authorising staff to file the complaint and stipulated final order was 2-0. The FTC filed both in the US District Court for the Northern District of Texas, Dallas Division. The agency notes that a complaint reflects its view that it has ‘reason to believe’ the law has been or is about to be violated, while stipulated final orders carry the force of law only if approved and signed by the district court judge.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Cloudflare has announced two changes to its client-side security offering, making Client-Side Security Advanced available to self-serve customers and offering domain-based threat intelligence at no extra cost to all users on the free Client-Side Security bundle. The update is focused on browser-based attacks that can steal data via malicious scripts without visibly disrupting a website’s normal operation.
Cloudflare says its client-side security system assesses 3.5 billion scripts per day and monitors an average of 2,200 scripts per enterprise zone. According to the company, the product relies on browser reporting, including Content Security Policy signals, rather than scanners or application instrumentation, and requires only that traffic be proxied through Cloudflare.
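Cloudflare does not detail those internals here, but the browser reporting it refers to builds on a web standard: a Content-Security-Policy-Report-Only response header instructs the browser to POST a JSON violation report to a collection endpoint without blocking the page. The sketch below shows a minimal receiver for such reports; it is a generic illustration rather than Cloudflare’s implementation, and the endpoint path, port, and field handling are assumptions.

```python
# Minimal receiver for browser CSP violation reports (illustrative only,
# not Cloudflare's implementation). A page served with a header such as
#   Content-Security-Policy-Report-Only: script-src 'self'; report-uri /csp-reports
# makes the browser POST a JSON report here whenever a script violates
# the policy, without blocking the page itself.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CSPReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/csp-reports":          # assumed endpoint path
            self.send_response(404)
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        report = body.get("csp-report", {})
        # Typical fields include the violated directive and the URI of
        # the script the browser refused (or would have refused) to run.
        print("violated:", report.get("violated-directive"),
              "blocked:", report.get("blocked-uri"))
        self.send_response(204)                  # acknowledge, no content
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CSPReportHandler).serve_forever()
```

Because the policy is report-only, pages keep working while the endpoint accumulates signals about which scripts loaded and what they contacted, which is the kind of telemetry a proxy can gather without instrumenting the application itself.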
A central part of the announcement is a new detection pipeline combining a Graph Neural Network (GNN) with a Large Language Model (LLM). Cloudflare says the GNN analyses the Abstract Syntax Tree of JavaScript code to identify malicious intent even when scripts are minified or obfuscated. Scripts flagged as suspicious are then passed to an open-source LLM running on Workers AI for a second-stage semantic assessment intended to reduce false positives.
Cloudflare says the GNN is tuned for high recall to catch novel and zero-day threats, but that false alarms remain a challenge at internet scale. Internal evaluation results cited by the company show that the secondary LLM layer cut false positives in the JS Integrity threat category roughly threefold across total analysed traffic, lowering the rate from about 0.3% to about 0.1%. On unique scripts, Cloudflare says the false-positive rate fell from about 1.39% to 0.007%.
The company also describes a recent case involving a heavily obfuscated malicious script named core.js. According to Cloudflare, the payload targeted Xiaomi OpenWrt-based home routers, altered DNS settings, and attempted to change admin passwords. Cloudflare says the script was injected through compromised browser extensions rather than by directly compromising a website, and adds that its GNN detected the malicious structure while the LLM confirmed the intent.
Cloudflare argues that the two-stage design provides structural detection via the GNN and broader semantic filtering via the LLM, enabling the company to lower the GNN decision threshold without sharply increasing alert volume. Every script flagged by the GNN is also logged to Cloudflare R2 for later auditing, which the company says helps it review cases where the LLM overrode the initial verdict.
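Taken together, the public description reduces to a cheap, recall-oriented first stage gating a more expensive semantic second stage, with every first-stage flag logged for audit. The Python sketch below illustrates that control flow under stated assumptions: the scoring heuristic, threshold, and function names are placeholders standing in for Cloudflare’s GNN, Workers AI call, and R2 logging, not reproductions of its actual models or APIs.

```python
# Sketch of a two-stage script-classification pipeline, loosely modelled
# on Cloudflare's description (structural model first, LLM second).
# All names, heuristics, and thresholds are illustrative assumptions.
from dataclasses import dataclass

GNN_THRESHOLD = 0.5  # deliberately low: the first stage is tuned for recall

@dataclass
class Verdict:
    script_id: str
    gnn_score: float      # structural score from the first-stage model
    llm_malicious: bool   # semantic confirmation from the second stage
    final_malicious: bool

def gnn_score(source: str) -> float:
    """Stand-in for a graph model scoring the script's AST structure."""
    # A real system would parse the JavaScript into an AST and run a GNN
    # over it; this toy heuristic just counts suspicious constructs.
    tokens = ("eval(", "atob(", "fromCharCode")
    return min(1.0, 0.3 * sum(tok in source for tok in tokens))

def llm_confirms(source: str) -> bool:
    """Stand-in for the second-stage semantic check by an LLM."""
    return "eval(" in source and "atob(" in source

def log_for_audit(script_id: str, score: float, confirmed: bool) -> None:
    # Per Cloudflare's description, every GNN-flagged script is logged
    # (to object storage) so that LLM overrides can be reviewed later.
    print(f"audit: {script_id} gnn={score:.2f} llm_confirmed={confirmed}")

def classify(script_id: str, source: str) -> Verdict:
    score = gnn_score(source)
    if score < GNN_THRESHOLD:
        return Verdict(script_id, score, False, False)
    # Only scripts the recall-oriented first stage flags reach the
    # costlier LLM, which filters false positives before alerting.
    confirmed = llm_confirms(source)
    log_for_audit(script_id, score, confirmed)
    return Verdict(script_id, score, confirmed, confirmed)

if __name__ == "__main__":
    for sid, src in [("benign", "console.log('hi');"),
                     ("shady", "eval(atob('ZG9jdW1lbnQ='));")]:
        print(classify(sid, src))
```

Logging every first-stage flag, including those the LLM overrides, is what makes the low threshold workable: auditing the overridden verdicts shows whether the second stage is suppressing noise or genuine detections.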
Domain-based threat intelligence is now being made available to all Client-Side Security customers, including those not using the Advanced tier. Cloudflare says the move is partly a response to attacks seen in 2025 against smaller online shops, especially on Magento, where client-side compromises continued for days or weeks after public disclosure. By extending domain-based signals more broadly, the company says site owners can more quickly identify malicious JavaScript or suspicious connections and investigate possible compromises.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new study by CGI.br and NIC.br examines how digital services in Brazil implement age assurance measures. Presented in Brasília during an event on the Digital Child and Adolescent Statute (ECA Digital), the study reviewed 25 popular online services used by children and adolescents.
The study found that most of the services analysed do not apply age checks at the point of registration, including some platforms aimed at adults. According to the release, age assurance usually appears later, when users try to access specific features such as livestreaming or monetisation.
Titled ‘Age assurance practices in 25 digital services used by children in Brazil’, the study analysed governance documents published before the ECA Digital entered into force. From 18 March, the law requires information-society services aimed at children and adolescents in Brazil, or likely to be accessed by them, to adopt effective age-assurance measures and parental-supervision tools.
The study found that 11 of the 25 platforms relied on third-party age-assurance services, particularly social media and generative AI platforms. Official identity document submission was the most common verification method, while selfie-based checks were the most common age-estimation tool. Differences were also found between the minimum ages stated by services and those listed in app stores, and some adult-oriented platforms could still be accessed by younger users with parental consent.
Parental supervision tools were available in 15 of the 25 services, but activation was usually optional and depended on parents or guardians. Transparency also emerged as a weakness: only six services published Brazil-specific reports, and only one explained how its minimum-age policy was applied. Policies were often spread across multiple pages, averaging 22 pages per service, and around 40% of the services provided related information in other languages.
Fábio Senne, General Research Coordinator at Cetic.br | NIC.br, said: ‘One of the study’s central aims was to verify the integrity of the information made available by digital services in Brazil. It is essential that data on age protection be communicated clearly and accessibly, allowing more informed and effective parental supervision.’
Juliana Cunha, manager of the Digital Public Policy Advisory Office at CGI.br | NIC.br, said: ‘This survey was developed to support the debate on implementation of the ECA Digital and to offer a clear understanding of the current landscape. This initiative forms part of a broader set of actions by CGI.br and NIC.br aimed at providing technical evidence to support effective enforcement of the law. Our commitment is to foster a safer and more responsible digital ecosystem for children and adolescents in Brazil.’
The release says the study used as a methodological reference the OECD technical paper ‘Age assurance practices of 50 online services used by children’, published in 2025. Information was collected between 10 and 30 January 2026 from public documents made available by the services in Brazil, totalling 550 pages analysed. The event also marked the launch of TIC Kids Online Brazil 2025, a publication on internet use by children and adolescents aged 9 to 17 in Brazil.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!