Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU child safety rules lapse amid ongoing debate over privacy and enforcement

The European Union has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April.

Discussions between the European Parliament and the Council of the European Union concluded without reaching a consensus on how to proceed with such measures.

The existing rules permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.

The European Commission had proposed a temporary extension while negotiations continue on a permanent framework under the Child Sexual Abuse Regulation, but differing views on scope and safeguards prevented agreement.

Stakeholders across sectors have highlighted the importance of maintaining effective tools to address online harms, while also emphasising the need to respect fundamental rights.

Previous periods of legal uncertainty have shown that detection capabilities may be affected when such frameworks are absent, although assessments of effectiveness remain subject to ongoing debate.

At the same time, concerns have been raised regarding the broader implications of monitoring digital communications. Some perspectives stress that any approach should carefully consider privacy protections, particularly in relation to secure and encrypted services.

Attention now turns to ongoing negotiations on a long-term regulatory solution.

The outcome will shape how the EU approaches the challenge of addressing harmful online content while safeguarding rights and ensuring proportional and transparent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE advances AI-native vision for future 6G networks

UAE operator e& (formerly Etisalat) has partnered with Khalifa University to outline a new vision for AI-native 6G networks. Their joint whitepaper presents a framework in which intelligence is embedded at the core of the network architecture rather than added as a feature.

The proposal introduces a dedicated AI plane alongside existing network layers to enable continuous learning and automation. This approach supports sensing, reasoning and autonomous decision-making across radio, core and edge systems.

The framework includes distributed AI agents, digital twin integration and closed-loop automation models. It is designed to support multi-vendor environments while enabling scalable and coordinated intelligence across networks.

Five core pillars underpin the model, including AI frameworks, cloud-edge computing and sustainability-focused design. Together, these elements position 6G as a cognitive infrastructure capable of predictive optimisation and advanced service delivery.

The whitepaper also defines measurable performance indicators such as latency, learning accuracy and energy efficiency. The initiative aims to contribute to global standards while strengthening the UAE’s role in shaping future telecom systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Growing investment and energy plans reshape Armenia’s AI future

Armenia’s recent technology announcements are helping to form a clearer national AI strategy with stronger coordination. A memorandum with the US on semiconductors and AI now appears to be moving beyond symbolic commitment into action.

Momentum has accelerated with plans to expand a large-scale AI factory backed by significant investment. The project is estimated at around $4 billion and includes tens of thousands of advanced GPUs to support large-scale development.

The initiative is already entering construction, marking a shift from concept to execution in a short timeframe. Officials have described a broader vision of building a network of AI factories across the country.

Energy planning is becoming central, with discussions around deploying a small modular nuclear reactor to meet demand. Stable and scalable power is considered essential for sustaining long-term AI infrastructure growth.

Efforts are also targeting the wider ecosystem through a Virtual AI Institute and planned GPU access for startups. These steps aim to strengthen research capacity and ensure local participation in the country’s AI expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK opens supercomputing access to boost AI startups

Britain is opening access to its national AI Research Resource (AIRR) to support domestic AI development. Startups and spinouts can now use supercomputers previously reserved for frontier research.

The AIRR combines infrastructure from government, universities and leading technology firms. It provides the computing power needed to train models and run complex simulations.

Access will be worth around £20 million per year for participating companies. Officials say reducing compute barriers will help startups move faster from prototype to product.

The government’s Sovereign AI Unit, backed by up to £500 million, will also support long-term growth. The programme targets areas including advanced models, scientific discovery and trustworthy AI systems.

Concerns remain over regulatory alignment with the EU’s stricter AI rules. Tensions could shape whether the UK maintains a more flexible environment for innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fuels rise in cyber scams

Cybercrime incidents in Estonia have surged as AI tools enable more convincing scams, driving sharply rising losses. Authorities reported thousands of phishing and fraud cases affecting individuals and businesses.

Criminals are using AI to generate fluent messages in Estonian, removing a key warning sign that once helped people detect scams. Experts say language accuracy has made fraudulent calls and messages harder to identify.

Growing awareness of scams is also fuelling public anxiety, with some users considering abandoning digital services. Officials warn that loss of trust could undermine confidence in digital systems.

Authorities are urging stronger safeguards and public education to counter the threats. Banks, telecom firms and digital identity providers are introducing new protections, while campaigns aim to improve digital awareness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

White-collar jobs hold steady as automation concerns grow

Mass layoffs across major tech firms, including Amazon’s 16,000 job cuts, have intensified concerns that AI will replace white-collar workers. Headlines suggest a rapid shift, yet broader labour data tells a more measured story.

US employment has grown by 1.1% since the launch of ChatGPT in November 2022, reaching over 157 million workers. Service industries expanded significantly, adding more than two million jobs, while goods-producing sectors declined modestly.

Overall trends indicate no major disruption to the labour market so far.

Sector-level data reveals uneven shifts. The information industry recorded the steepest losses, particularly in media, telecoms, and content production, where automation and long-term structural changes continue to reduce headcounts.

Meanwhile, highly automatable roles such as telemarketing and call centres saw the sharpest declines.

Professional services present a more complex picture. Legal, engineering, and consulting roles have grown or remained stable, defying expectations of widespread displacement.

Hiring continues to exceed layoffs in several sectors, though younger workers appear increasingly vulnerable as competition intensifies in AI-exposed roles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tether unveils mobile-friendly AI training platform

Tether has launched an AI framework that runs large language models on smartphones and non-NVIDIA GPUs. The system is part of its QVAC platform and uses Microsoft’s BitNet architecture, along with LoRA techniques to reduce memory and computational requirements.

The framework enables cross-platform training on AMD, Intel, Apple Silicon, and mobile GPUs, allowing models with up to 1 billion parameters to be fine-tuned on phones in under 2 hours.

Larger models with up to 13 billion parameters are also supported on mobile devices. BitNet’s 1-bit architecture reduces VRAM requirements by nearly 78%, enabling larger models to run on limited hardware.

The gains also extend to inference: mobile GPUs outperform CPUs, enabling on-device training and federated learning. By reducing reliance on cloud infrastructure, the system offers more flexible AI development for distributed environments.
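The memory arithmetic behind these claims can be sketched in a few lines. The figures below are illustrative assumptions, not Tether's published numbers: the sketch compares 1.58-bit BitNet-style weights against an fp16 baseline (the article's 78% figure does not state its baseline), and the LoRA layer/dimension/rank values are invented for the example.

```python
# Back-of-the-envelope sketch: why 1-bit (BitNet-style) quantisation plus
# LoRA makes on-device fine-tuning plausible. All figures are illustrative
# assumptions, not Tether's published numbers.

def weight_memory_gb(params: int, bits_per_weight: float) -> float:
    """Memory needed to hold the model weights, in gigabytes."""
    return params * bits_per_weight / 8 / 1e9

def lora_trainable_params(layers: int, d_model: int, rank: int) -> int:
    """Extra trainable parameters added by LoRA adapters: two low-rank
    matrices (d_model x rank each) per adapted weight matrix."""
    return layers * 2 * d_model * rank

params = 1_000_000_000                      # a 1B-parameter model, as in the article
fp16 = weight_memory_gb(params, 16)         # conventional half precision
one_bit = weight_memory_gb(params, 1.58)    # BitNet b1.58 uses ~1.58 bits/weight

saving = 1 - one_bit / fp16
print(f"fp16: {fp16:.2f} GB, 1.58-bit: {one_bit:.2f} GB, saving: {saving:.0%}")

# LoRA keeps the quantised base frozen and trains only small adapters,
# so the trainable fraction is tiny (hypothetical shape values below).
adapters = lora_trainable_params(layers=24, d_model=2048, rank=8)
print(f"LoRA trainable params: {adapters:,} "
      f"({adapters / params:.3%} of the base model)")
```

Under these assumptions, the 1B-parameter model's weights shrink from about 2 GB to under 0.2 GB, and only a fraction of a percent of parameters need gradients, which is what brings phone-scale fine-tuning within reach.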

Tether’s expansion into AI mirrors a broader trend in the crypto sector, where companies are investing in AI infrastructure, autonomous agents, and high-performance computing.

Industry activity includes record revenue growth for AI and HPC operations, blockchain-integrated AI agents, and new tools for secure on-chain transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AgentKit enables ID verification for AI-powered online commerce

Tools for Humanity has introduced a new verification system to strengthen trust in online transactions, as demand for reliable ID verification tools grows in AI-driven environments. The update builds on its World project, which aims to prove that real humans, rather than automated systems, are behind digital activity.

The company’s latest release, AgentKit, is designed to support agentic commerce by allowing websites to verify that AI agents are acting on behalf of authenticated users. As AI programs increasingly browse websites and make purchases autonomously, ID verification tools are becoming essential to prevent fraud, spam, and misuse.

AgentKit relies on World ID, a system that generates a secure digital identity through biometric verification. Users obtain a verified ID by scanning their iris using a dedicated device, which converts the scan into an encrypted digital code. That verified identity is then used to confirm that transactions initiated by AI agents are linked to a real and unique individual.

The system integrates with the x402 protocol, a blockchain-based standard developed by Coinbase and Cloudflare, enabling automated transactions between systems. By combining this protocol with ID verification tools, websites can validate whether a human user authorises an AI agent before completing a purchase.

‘AgentKit is built as a complementary extension to the x402 v2 protocol, in coordination with Coinbase,’ the company said. ‘The integration is designed so that any website already using x402 can enable proof of unique human verification alongside (or instead of) micropayments.’

According to the company, the approach functions similarly to delegating authority to an AI agent, allowing platforms to decide whether to trust automated actions. These ID verification tools provide a layer of accountability, helping ensure that AI-driven transactions remain secure and traceable.
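The delegation logic described above can be sketched as a simple authorisation check. Every name here (Delegation, verify_world_id_proof, the scope string) is invented for illustration under stated assumptions; the real AgentKit and x402 APIs will differ.

```python
# Hypothetical sketch of the human-delegation check described above.
# A site accepts an agent's action only if (1) a valid proof of a real,
# unique human backs it, (2) the delegation has not been revoked, and
# (3) the action falls within what the human authorised.
from dataclasses import dataclass

@dataclass
class Delegation:
    human_proof: str   # stand-in for a World ID proof of unique humanness
    agent_id: str      # identifier of the AI agent acting for the user
    scope: str         # what the human authorised, e.g. "purchase:<=50USD"

KNOWN_PROOFS = {"proof-abc"}        # proofs the verifier accepts (mocked)
REVOKED_AGENTS = {"agent-banned"}   # delegations the human has withdrawn

def verify_world_id_proof(proof: str) -> bool:
    """Mock of 'is this a valid proof of a real, unique human?'"""
    return proof in KNOWN_PROOFS

def authorise_agent_action(d: Delegation, action: str) -> bool:
    """Accept an agent's action only when a verified human delegated it."""
    if not verify_world_id_proof(d.human_proof):
        return False                # no proof of a unique human behind it
    if d.agent_id in REVOKED_AGENTS:
        return False                # the delegation was withdrawn
    return action == d.scope        # crude scope check for the sketch

d = Delegation("proof-abc", "agent-1", "purchase:<=50USD")
print(authorise_agent_action(d, "purchase:<=50USD"))   # True
print(authorise_agent_action(
    Delegation("bogus", "agent-1", "purchase:<=50USD"),
    "purchase:<=50USD"))                               # False
```

The design point the sketch captures is that the human proof and the agent's authority are checked separately, so a platform can trust an automated action without the agent itself ever being treated as a person.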

AgentKit is currently available in beta, with developers encouraged to test and refine the system. However, access depends on users obtaining a verified World ID, reinforcing the central role of biometric-based ID verification tools in the company’s ecosystem.

As agentic commerce expands across platforms such as Amazon and Mastercard, the need for trusted identity systems is becoming more urgent. By positioning its ID verification tools at the centre of this emerging market, the company aims to establish itself as a key provider of trust infrastructure for AI-powered digital transactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!