The Fund for Digital Initiatives of the Eurasian Development Bank has signed a Memorandum of Cooperation with Kazakhstan’s Ministry of AI and Digitalization. The agreement was signed during the Digital Qazaqstan forum held on 27 March in Shymkent.
The memorandum outlines a strategic partnership to introduce AI technologies and support digital projects. Areas of cooperation include identifying and implementing joint AI projects, exchanging expertise, and strengthening both sides’ capacities as centres of AI competence.
The announcement says the agreement is intended to deepen the partnership and support Kazakhstan’s strategic objectives for AI development. It also links the memorandum to wider efforts to expand cooperation between the bank’s digital initiatives fund and the ministry.
During the forum, Vice Chairman of the Management Board, Tigran Sargsyan, held a working meeting with Deputy Prime Minister and Minister of AI and Digitalization, Zhaslan Madiyev. The discussion covered prospects for broader cooperation, priority projects, and tools to support AI adoption in key sectors of Kazakhstan’s economy.
Sargsyan also described 2025 as a record year for the bank in Kazakhstan, with the most projects implemented in digital public administration, platform solutions, and AI deployment. Madiyev, in turn, proposed creating a registry of Kazakhstan’s open-source e-government component solutions for possible replication across EDB member states.
The announcement presents the memorandum as part of the Eurasian Development Bank’s broader support for digital transformation and AI development across its member states.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.
The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.
These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.
Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.
The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.
The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.
In 2019, Facebook CEO Mark Zuckerberg announced Facebook Horizon, a VR social experience that allows users to interact, create custom avatars, and design virtual spaces. Zuckerberg saw the platform, later renamed Horizon Worlds, as the beginning of a new era of VR social networks, with users trading face-to-face interactions for digital ones.
To show his confidence in VR, Zuckerberg rebranded Facebook Inc. as Meta Platforms Inc. in October 2021, illustrating the company’s shift toward the metaverse as a broad virtual environment intended to integrate social interaction, work, commerce, and entertainment. Building on this new vision, Meta’s ambitions expanded beyond social interaction and entertainment, with the development roadmap including virtual real estate purchases and collaboration in virtual co-working spaces.
Fast forward to 17 March 2026, and the scale of Meta’s retreat from the metaverse vision has become unmistakable. In an official update, the company said it was ‘separating’ VR from Horizon so that each platform could grow with greater focus, while also making Horizon Worlds a mobile-only experience. Under the plan, Horizon Worlds and Events would disappear from the Quest Store by 31 March 2026, several flagship worlds would no longer be available in VR, and the Horizon Worlds app itself would be removed from Quest on 15 June 2026, ending VR access to Worlds altogether.
Yet Meta soon reversed part of the decision. In an Instagram Stories Q&A, CTO Andrew Bosworth said Horizon Worlds would remain available in VR after user backlash. Even so, the greater shift remained unchanged: Horizon Worlds was no longer a flagship VR project, but a much narrower product that reflected a clear contraction of Meta’s original metaverse ambition.
As it stands, Meta’s USD 80 billion investment seems less like a gateway to a new socio-technological era and more like one of the most expensive strategic miscalculations of the 21st century. The sunsetting of Horizon Worlds was certainly not a decision made on a whim, which raises the question: Why did the metaverse fail in the first place? Does it have a future in the AI landscape, and what does its retreat say about the politics of designing the future through corporate platforms?
Metaverse’s mainstream collapse
The most obvious reason for the metaverse’s failure was that it never became a mainstream social space. Meta’s strategy rested on the belief that large numbers of people would start using immersive virtual worlds as a normal setting for interaction, entertainment, and creative activity. The shift never happened at the scale needed to sustain the company’s ambitions.
One reason was friction. VR headsets were less practical than phones, more isolating than social media, and harder to integrate into everyday routines than the platforms people already used to communicate. Entering the virtual world required extra time, extra hardware, and openness to adapt to a different social environment. Most digital habits, however, are built around speed, familiarity, and ease of access.
Meta’s own March 2026 decision makes that failure difficult to deny. A company still convinced that immersive social VR was on its way to becoming mainstream would not have moved Horizon Worlds away from Quest and towards mobile. The shift suggested that the metaverse had failed to move from technological promise to everyday social practice.
The metaverse’s failure was not just one of convenience. It also struggled because it was never presented simply as a new digital space. It was framed as a future built largely on Meta’s own terms, with access tied to the company’s hardware, platforms, rules, and wider ecosystem. Such decisions made the metaverse feel less like an open evolution of the internet and more like a tightly managed corporate environment.
The distinction mattered because Meta was not merely launching another product. It was promoting a vision of how people might one day work, socialise, shop, and create online. Yet the more expansive that vision became, the more obvious it was that the system behind it remained closed and centralised. A future digital environment is harder to embrace when a single company controls the devices, spaces, distribution, and boundaries of participation.
Meta’s handling of Horizon Worlds clearly exposed that tension. The company could remove features, reshape access, alter incentives, and redirect the platform from the top down. Such a level of control may be standard for a private platform, but it sits uneasily with claims about building the next phase of digital life. In that sense, the metaverse failed not only because people were unconvinced by VR, but because its version of the future felt too corporate, too enclosed, and too disconnected from the openness people still associate with the internet.
Metaverse’s economic contradiction
The metaverse did not fail only as a social project. It also became increasingly difficult to justify on economic grounds. Meta spent heavily on Reality Labs while generating only limited returns from those investments. In its 2025 annual filing, the company said Reality Labs had reduced overall operating profit by around USD 19.19 billion for the year, while warning that similar losses would continue into 2026.
Losses on that scale might still have been acceptable if the metaverse had shown clear signs of momentum. However, there was little evidence of mass adoption, strong retention, or a durable path to monetisation. Virtual land, digital goods, branded experiences, and immersive workspaces never developed into the economic base of a new internet layer.
Instead, the metaverse began to look less like a future growth engine and more like a costly experiment with uncertain returns. The gap between spending and payoff became harder to ignore, especially as Meta continued to frame the metaverse as a long-term strategic priority. What used to be sold as the company’s next major frontier was increasingly difficult to justify in commercial terms.
The broader strategic context also changed. Meta’s own forward-looking statements pointed to increased hiring and spending in 2026, especially in AI. In practice, this meant the company was no longer choosing between the metaverse and inactivity, but between two competing visions of the future. AI was already delivering tangible gains in product development, infrastructure, and investor confidence.
In that competition for attention and capital, the metaverse lost. Meta’s pullback was also not an isolated case. Microsoft moved away from metaverse-first ambitions as well, retiring the Immersive space (3D) view in Teams meetings, Microsoft Mesh on the web, and Mesh apps for PC and Quest in December 2025. The services were replaced by immersive events in Teams, a narrower offering built around specific workplace functions rather than a broad metaverse vision.
The wider retreat matters because it suggests the problem was not limited to Meta’s execution. Another major tech company also stepped back from standalone immersive environments and turned to more limited, use-specific tools instead. A larger pattern appeared from that shift: grand metaverse narratives gave way to practical features, embedded tools, and industry-specific uses. In that sense, the metaverse has not entirely disappeared, but it did lose its status as the next internet.
Metaverse’s afterlife in the age of AI
The metaverse’s decline does not necessarily imply a complete disappearance. What seems more likely is that parts of it will survive in altered form, detached from the sweeping vision that once surrounded it. Rather than continuing as a standalone digital world meant to transform social life, the metaverse may persist as a set of tools, features, and immersive functions folded into other technologies.
AI is likely to play a role in that transition. It can lower the cost of building virtual environments, speed up avatar creation, automate elements of interaction design, and make digital spaces more responsive. In this sense, AI may succeed where the original metaverse struggled, not by reviving the same vision, but by making parts of it more practical and easier to use.
Such a distinction is important because it shifts the focus from ideology to utility. The metaverse was once marketed as the next stage of the internet, yet its more durable applications now appear to lie in narrower settings where immersion serves a clear purpose. Training, design, simulation, and industrial planning are all contexts in which virtual environments can offer measurable value without becoming a universal social destination.
What might survive, then, is not the metaverse as it was originally imagined, but a smaller set of immersive capabilities embedded in gaming, education, industry, and workplace systems. Avatars, digital agents, simulations, and adaptive virtual spaces may all remain relevant, but as components rather than the foundation of a new social order.
The shift also helps explain the political lesson of the metaverse’s collapse. Large-scale investment, aggressive branding, and executive certainty were not enough to secure public legitimacy. Meta tried to present the metaverse as an inevitable horizon, yet users did not embrace it, markets did not reward it in proportion to the spending, and the company itself eventually narrowed the project it had once elevated into a corporate identity.
In that sense, the metaverse matters even in failure. Its retreat does not simply mark the end of an overhyped product cycle. It also reveals the limits of top-down corporate future-making, especially when private platforms try to define the direction of collective digital life before society has decided whether such a future is either desirable or necessary.
Conclusion
The metaverse failed because it asked too much of users, promised too much to investors, and concentrated too much power in a platform model that never convincingly earned public trust. Meta’s retreat from Horizon Worlds makes that failure difficult to ignore, while Microsoft’s parallel narrowing of immersive ambitions suggests the problem extended beyond one company’s misjudgement.
Immersive VR technologies are unlikely to vanish, and AI may even extend some of their useful applications. Yet the metaverse as a universal social future has largely collapsed under the combined weight of weak adoption, unsustainable economics, and an overly corporate vision of digital life. What remains is not the next internet, but a reminder that the future cannot simply be declared into existence by the companies most eager to own it.
The government of California is advancing a more interventionist approach to AI governance, signalling a divergence from federal deregulatory preferences.
An executive order signed by Governor Gavin Newsom mandates the development of comprehensive AI policies within four months, prioritising public safety and protecting fundamental rights.
The proposed framework requires companies seeking state contracts to demonstrate safeguards against harmful outputs, including the prevention of child exploitation material and violent content.
It also calls for measures addressing algorithmic bias and unlawful discrimination, alongside increased transparency through mechanisms such as watermarking AI-generated media.
The evolving policy landscape reflects growing concern over the societal impact of AI systems, including risks to employment, content integrity and civil liberties.
The initiative may therefore serve as a testing ground for future regulatory models, shaping broader debates on balancing innovation with accountability in digital governance.
China has revised its regulation on the national agricultural census ahead of the country’s fourth such survey, with the updated rules due to take effect on 1 May 2026. Premier Li Qiang signed a State Council decree publishing the revised regulation.
The changes expand the scope of the agricultural census to include rural industrial development and village construction, alongside more traditional measures of agricultural activity. New data-collection methods, including remote sensing, have also been added to the framework.
Stronger data-quality controls form another part of the revision. The updated regulation introduces a post-census spot-check system and sets out confidentiality obligations for census personnel involved in the process.
Penalties for data falsification have also been tightened. The revised rules say people found to have fabricated or manipulated statistics may face heavier sanctions, including higher fines and possible criminal prosecution.
The fourth national agricultural census aims to provide an updated picture of agricultural development, rural construction, farmers’ living standards, and the outcomes of rural reform in China. Areas listed for coverage include agricultural production conditions, grain output, new quality productive forces in agriculture, rural development, and the living conditions of rural residents.
The US Federal Trade Commission has taken action against OkCupid and Match Group Americas over allegations that the dating app shared users’ personal information, including photos and location data, with an unrelated third party. According to the agency, this occurred despite privacy promises that such sharing would not take place without notice or an opportunity to opt out.
According to the FTC’s complaint, OkCupid gave the third party access to personal data from millions of users even though the recipient was not a service provider, business partner, or affiliate within the company’s corporate family. The agency says consumers were not informed and were not given a chance to opt out.
The complaint says the third party sought large OkCupid datasets because OkCupid’s founders were financial investors in that company, despite there being no business relationship with the app. The FTC alleges that OkCupid provided access to nearly 3 million user photos, along with location and other information, without formal or contractual limits on how the data could be used.
Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, said: ‘The FTC enforces the privacy promises that companies make. We will investigate, and where appropriate, take action against companies that promise to safeguard your data but fail to follow through—even if that means we have to enforce our Civil Investigative Demands in court.’
The FTC also alleges that, since September 2014, Match and OkCupid have taken extensive steps to conceal and deny that the apps shared users’ personal information with the data recipient, including conduct the agency says obstructed its investigation. One example cited in the complaint is that, after a news report revealed the third party had obtained large OkCupid datasets, the company told the media and users that it was not involved with that third party.
Under the proposed settlement, OkCupid and Match would be permanently prohibited from misrepresenting how they collect, maintain, use, disclose, delete, or protect personal information, including photos, demographic data, and geolocation data. Restrictions would also cover how they describe the purposes of data collection and disclosure, as well as how they present privacy controls and consumer choices under state privacy laws.
The Commission vote authorising staff to file the complaint and stipulated final order was 2-0. The FTC filed both in the US District Court for the Northern District of Texas, Dallas Division. The agency notes that a complaint reflects its view that it has ‘reason to believe’ the law has been or is about to be violated, while stipulated final orders carry the force of law only if approved and signed by the district court judge.
The European Patent Office (EPO) has reinforced cooperation with industry stakeholders through discussions with the German Association of Industry IP Experts, focusing on strengthening the European patent system and supporting innovation.
The meeting brought together representatives from major industrial actors to align priorities and explore future collaboration.
Discussions between the EPO and the stakeholders centred on enhancing technology transfer, empowering startups and fostering economic growth across Europe.
Participants emphasised the importance of inclusive engagement among patent system users instead of fragmented approaches, ensuring that innovation strategies reflect both industrial and societal needs.
The Unitary Patent system was highlighted as gaining traction, particularly among smaller entities such as SMEs, individual inventors and research organisations. Such a trend reflects broader efforts to improve accessibility and scalability within the European innovation ecosystem.
AI also featured prominently, with both sides recognising its growing role in improving efficiency and quality in patent processes.
A human-centric approach remains essential, ensuring that AI deployment supports responsible innovation while maintaining high standards in patent examination and services.
Efforts to combat online child sexual exploitation could be severely weakened, Europol has warned, if legal frameworks supporting detection and reporting are disrupted.
Executive Director Catherine De Bolle highlighted growing concerns over the increasing volume of harmful content online and stressed that protecting children remains a top priority for European law enforcement.
Authorities rely heavily on reports submitted by online service providers, which play a central role in identifying victims and supporting investigations, rather than relying solely on traditional policing methods.
Europol processed around 1.1 million CyberTips in a single year, many originating from the National Centre for Missing & Exploited Children and shared across 24 European countries.
These CyberTips include critical evidence such as images, videos, and other digital data used to track criminal activity.
Europol cautioned that removing the legal basis allowing voluntary detection by platforms could significantly reduce the number of reports submitted to authorities. A decline in CyberTips would limit investigative leads, making it harder to identify victims and disrupt online criminal networks.
Such a development could undermine broader security efforts and weaken the protection of minors across the EU instead of strengthening safeguards.
The agency emphasised that maintaining online service providers’ ability to detect and report suspected abuse is essential to effective law enforcement.
Microsoft has announced an AI collaboration with NVIDIA to support nuclear energy projects across permitting, design, construction, and operations. In a post published on 24 March, the tech conglomerate said the initiative aims to provide end-to-end tools for the nuclear sector, focusing on streamlining permitting, accelerating design, and optimising operations.
Microsoft frames the effort within a broader energy challenge, arguing that rising power demand and long project timelines are putting pressure on the sector to accelerate the delivery of firm, carbon-free power. The company says customised engineering, fragmented data, and manual regulatory review slow nuclear projects. It presents AI as a way to make project development more repeatable, traceable, secure, and predictable.
The post says the collaboration spans the full lifecycle of a nuclear plant. Microsoft describes a model in which digital twins, high-fidelity simulations, and AI-assisted workflows support design and engineering, licensing and permitting, construction and delivery, and operations and maintenance.
According to the company, engineers would be able to reuse design patterns, model the impact of changes before construction begins, and link project decisions to supporting evidence and applicable rules. Microsoft also says generative AI can assist with drafting and gap analysis in permit documentation, while predictive modelling and operational digital twins can support anomaly detection and maintenance planning.
Microsoft says traceability and auditability are central to the approach. The company lists four intended qualities of the system: traceable records linking engineering decisions to evidence and regulations, audit-ready documentation, secure use within a governed environment, and predictable outcomes through simulations intended to identify delays before they occur in the real world.
Several case examples are included in the post. Microsoft says Aalo Atomics reduced the permitting process by 92% using its Generative AI for Permitting solution and estimates annual savings of USD 80 million.
Aalo Atomics Chief Technology Officer Yasir Arafat is quoted as saying: ‘Two things matter most: enterprise-scale complexity and mission-critical reliability. We’re deploying something complex at a scale only a company like Microsoft really understands. There’s no room for anything less than proven reliability.’
Microsoft also says Southern Nuclear has deployed Copilot agents across engineering and licensing workstreams to improve consistency, reuse knowledge faster, and support decision-making. Idaho National Laboratory is described as an early adopter in the US federal context, with Microsoft saying the lab is using AI capabilities to automate the assembly of engineering and safety analysis reports and to create standard methodologies for regulators to adopt the tools safely.
The post also expands beyond those three examples. Microsoft says Everstar, described as an NVIDIA Inception startup, is bringing domain-specific AI for nuclear to Azure to support project workflows and governed data pipelines.
Everstar Chief Executive Officer Kevin Kong is quoted as saying: ‘The nuclear industry has been bottlenecked by documentation burden and regulatory complexity for decades. This partnership means our customers get the secure, scalable cloud deployments they demand. It’s a significant step toward making nuclear power fast, safe, and unstoppable.’
Microsoft also says Atomic Canyon’s Neutron platform is available on the Microsoft Marketplace for nuclear developers via established procurement channels.
At the technical level, Microsoft says the collaboration brings together NVIDIA Omniverse, NVIDIA Earth-2, NVIDIA CUDA-X, NVIDIA AI Enterprise, PhysicsNeMo, Isaac Sim, and Metropolis with Microsoft Generative AI for Permitting Solution Accelerator and Microsoft Planetary Computer. The company presents the stack as a digital ecosystem for nuclear energy on Azure.
The official post is a corporate announcement rather than an independent assessment of the approach’s effectiveness. The published note outlines the company’s intended use cases, named partners, and customer examples, but it does not provide a third-party evaluation of the broader claims regarding delivery speed, regulatory confidence, or sector-wide impact.
Zimbabwe has launched a National Artificial Intelligence Strategy for 2026 to 2030, marking a significant step towards shaping its digital future instead of relying solely on traditional development pathways.
Announced by President Emmerson Mnangagwa in Harare, the strategy sets out a national framework for the responsible use of AI to support innovation, improve public services, and expand economic opportunities across sectors such as agriculture, healthcare, education, finance, and public administration.
The strategy places strong emphasis on building digital infrastructure, developing AI skills, and strengthening research and innovation ecosystems.
Officials highlighted the importance of governance frameworks to ensure that AI systems remain transparent, ethical, and aligned with national priorities instead of advancing without oversight.
The initiative reflects a broader effort to position Zimbabwe within the evolving technological landscape of the fourth industrial revolution while promoting sustainable economic growth.
Development of the strategy was supported by UNESCO, working alongside national institutions and stakeholders from academia, industry, and civil society.
The process was informed by the Artificial Intelligence Readiness Assessment Methodology and aligned with the UNESCO Recommendation on the Ethics of Artificial Intelligence, promoting a human-centred approach that prioritises human rights, fairness, and transparency.
Regional initiatives across Southern Africa have also contributed to strengthening AI adoption readiness through similar assessment frameworks.
Looking ahead, Zimbabwe aims to translate the strategy into concrete investments in infrastructure, talent development, and innovation ecosystems.
International partners, including the UN, have expressed support for implementation efforts, emphasising the importance of inclusive growth and equitable access to digital opportunities.
By combining national leadership with international collaboration, Zimbabwe seeks to ensure that AI benefits communities across urban and rural areas rather than widening existing socioeconomic divides.