Horizon Worlds remains active as Meta reconsiders VR plans

Meta has reversed its earlier decision to discontinue virtual reality support for Horizon Worlds, allowing the platform to remain available on VR headsets despite previous plans to prioritise mobile and web access.

The decision follows an internal reassessment of user engagement trends, which indicate limited adoption of VR-based social platforms.

While Horizon Worlds was once positioned as central to the company’s metaverse ambitions, demand has remained relatively low, raising questions about the long-term viability of immersive social environments.

Financial pressures also continue to shape strategy.

Meta’s Reality Labs division has recorded substantial losses since 2021, reflecting high investment in virtual and augmented reality technologies without corresponding commercial returns.

Industry data further suggests declining headset sales, reinforcing uncertainty around VR as a mainstream consumer platform.

Mobile usage of Horizon Worlds, by contrast, is growing. Rising downloads point to broader accessibility and better product-market fit, though revenue generation remains limited.

As a result, Meta is prioritising mobile development while stopping short of fully abandoning VR, maintaining a dual approach as it seeks more sustainable engagement models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.

Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.

AgentKit enables ID verification for AI-powered online commerce

Tools for Humanity has introduced a new verification system to strengthen trust in online transactions, as demand for reliable ID verification tools grows in AI-driven environments. The update builds on its World project, which aims to prove that real humans, rather than automated systems, are behind digital activity.

The company’s latest release, AgentKit, is designed to support agentic commerce by allowing websites to verify that AI agents are acting on behalf of authenticated users. As AI programs increasingly browse websites and make purchases autonomously, ID verification tools are becoming essential to prevent fraud, spam, and misuse.

AgentKit relies on World ID, a system that generates a secure digital identity through biometric verification. Users obtain a verified ID by scanning their iris with a dedicated device, which converts the scan into an encrypted digital code. That ID is then used to confirm that transactions initiated by AI agents are linked to a real, unique individual.

The system integrates with the x402 protocol, a blockchain-based standard developed by Coinbase and Cloudflare that enables automated transactions between systems. By combining this protocol with World ID checks, websites can validate whether a human user has authorised an AI agent before completing a purchase.

‘AgentKit is built as a complementary extension to the x402 v2 protocol, in coordination with Coinbase,’ the company said. ‘The integration is designed so that any website already using x402 can enable proof of unique human verification alongside (or instead of) micropayments.’
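The flow described above can be sketched as a simple request handler: a site that already answers x402 payment requests additionally checks a proof that a unique human authorised the agent. This is an illustrative sketch only; the header names, the `verify_world_id_proof` helper, and everything beyond the classic 402 response are assumptions for demonstration, not part of the published x402 or World ID specifications.

```python
# Hypothetical sketch of an x402-style endpoint extended with a human-proof
# check. Header names and the proof-verification helper are invented here.

from dataclasses import dataclass


@dataclass
class Request:
    headers: dict


def verify_world_id_proof(proof: str) -> bool:
    # Stand-in for a real World ID proof verification (e.g. a zero-knowledge
    # proof check against the user's verified identity).
    return proof == "valid-demo-proof"


def handle_agent_purchase(request: Request) -> tuple[int, str]:
    """Gate an agent-initiated purchase on payment plus proof of a unique human."""
    payment = request.headers.get("X-Payment")          # x402-style payment header
    human_proof = request.headers.get("X-Human-Proof")  # hypothetical proof header

    if payment is None:
        return 402, "Payment Required"                  # the classic x402 response
    if human_proof is None or not verify_world_id_proof(human_proof):
        return 403, "Human verification required"
    return 200, "Purchase authorised"
```

As the company's statement suggests, the human-proof check can sit alongside the payment step or replace it entirely, so a site could accept verified-human requests without charging a micropayment.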

According to the company, the approach functions much like delegating authority to an AI agent, allowing platforms to decide whether to trust automated actions. The verification layer adds accountability, helping ensure that AI-driven transactions remain secure and traceable.

AgentKit is currently available in beta, and developers are encouraged to test and refine the system. Access, however, depends on users obtaining a verified World ID, reinforcing the central role of biometric verification in the company’s ecosystem.

As agentic commerce expands across platforms such as Amazon and Mastercard, the need for trusted identity systems is becoming more urgent. By positioning its verification tools at the centre of this emerging market, Tools for Humanity aims to establish itself as a key provider of trust infrastructure for AI-powered digital transactions.

AI chatbots raise risks as EU urged to enforce DSA rules

Concerns are growing over the risks posed by AI chatbots, particularly for minors, as evidence suggests these systems can facilitate harmful behaviour. A recent case in Finland, where a teenager planned a violent attack after interacting with an AI chatbot, has intensified calls for stronger oversight.

A report by the Center for Countering Digital Hate found that most leading AI chatbots provided assistance when prompted about violent acts. Researchers reported that eight of the ten systems tested generated harmful information or encouraged violence, exposing gaps in existing safeguards.

The findings have renewed focus on how the Digital Services Act (DSA) could be applied to AI chatbots. Currently, the regulation primarily covers generative AI when integrated into large online platforms, leaving standalone chatbots in a regulatory grey area. Meanwhile, the AI Act focuses on model-level risks rather than user-facing systems.

Experts argue that this split leaves accountability unclear, as chatbot providers can avoid full responsibility by operating between regulatory frameworks. Proposals to delay elements of the AI Act or allow companies to self-assess risk levels have raised concerns about weakening safeguards at a critical moment for AI deployment.

Applying the DSA to chatbots could introduce obligations such as risk assessments, transparency requirements, and protections for minors. In the short term, chatbots could be treated as hosting services, requiring them to remove illegal content and respond to regulatory orders.

However, analysts warn that such measures would not fully address the risks. In the long term, they argue that the EU should create a dedicated regulatory category for AI chatbots, enabling stronger oversight similar to that applied to online platforms.

Stronger enforcement could also address harmful design features, such as systems that encourage prolonged engagement or escalate user prompts. Measures targeting manipulative interfaces and improving safeguards for minors could reduce the likelihood of harmful interactions.

As AI chatbots become more widely used for information, communication, and decision-making, policymakers face increasing pressure to act. Calls are growing for the EU to enforce existing rules while adapting its legal framework to ensure accountability keeps pace with technological change.

US senators question Meta facial recognition in smart glasses

Three Democratic senators have raised concerns about Meta’s reported exploration of facial recognition in its smart glasses, warning that it could normalise public surveillance. In a letter to CEO Mark Zuckerberg, Senators Edward Markey, Ron Wyden, and Jeff Merkley asked about consent, biometric data, and the risks of misuse.

The lawmakers said the proposed feature ‘risks normalising mass surveillance at a moment when the federal government is using similar tools to intimidate protesters and chill speech. Although facial recognition may offer real benefits for blind and visually impaired users, Meta’s history of failing to protect user privacy raises serious questions about its plan to deploy this technology in its smart glasses.’

‘Americans do not consent to biometric data collection simply by walking down a public street, entering a café, or standing in a crowd,’ the senators added. ‘Yet, the deployment of this technology would appear to do exactly that – subjecting countless individuals to covert identification without notice, without consent, and without any meaningful opportunity to opt out.’ They warned that such practices would erode longstanding expectations of privacy in public spaces, effectively eliminating public anonymity.

Concerns grew after reports of US Border Patrol and ICE agents using Meta smart glasses. While there is no evidence that facial recognition was used, the senators argue that adding identification tools to eyewear could expand undetectable surveillance. The letter questions whether Meta might link facial data with information from its platforms, enabling real-time identification tied to user profiles. Lawmakers warn that this could increase the risks of harassment and targeting.

Meta had previously discontinued facial recognition on Facebook in 2021, citing societal concerns. The senators argue that reintroducing similar technology in wearable devices suggests a shift rather than a retreat. ‘Five years later, Meta appears less worried about those societal concerns and is reportedly planning to deploy facial recognition technology in one of the most dangerous possible settings,’ they wrote.

‘Moreover,’ they continued, ‘Meta is apparently aware of the risks with this technology,’ noting that an internal memo recommended launching the product ‘during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.’

‘In other words,’ the senators added, ‘Meta appears to recognise the serious privacy and civil liberties risks of facial recognition but thinks it can avoid attention by slipping the once-abandoned, ethically fraught product back onto the market while the world is distracted by the Trump administration’s daily chaos.’

The senators have asked Meta to clarify how it would obtain consent from both users and bystanders, how long it would retain biometric data, whether it would use it to train AI models, and whether it could share it with law enforcement, including the Department of Homeland Security. The company has been given until 6 April to respond.

Exchange Online outage affecting Outlook access resolved by Microsoft

Microsoft has addressed an Exchange Online outage that disrupted access to email and calendar services for users worldwide. The issue affected multiple connection methods, including Outlook on the web, Outlook desktop, and Exchange ActiveSync.

The company first acknowledged the problem early in the day, saying it was investigating reports of users being unable to access their mailboxes. According to a Microsoft 365 admin centre update, several Exchange Online connection protocols were impacted during the outage.

Although Microsoft later reported that telemetry indicated the issue was no longer occurring for most users, some customers continued to experience access problems. At one point, the Office.com portal also displayed an error message, preventing users from logging in.

Microsoft linked the disruption to an issue within its supporting network infrastructure, which affected how traffic was processed. Engineers implemented configuration changes to restore normal service and continued to monitor the platform to ensure stability.

In a later update, Microsoft confirmed that the Exchange Online outage had been mitigated and that services had been restored. The company said it is still investigating the root cause and will provide further details in a post-incident report, while a separate issue affecting Microsoft 365 Copilot web access remains under review.

UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem driven by organised scam networks, involving human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers and evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.

ECA Digital law raises pressure on Big Tech in Brazil

Brazil is set to enforce a new law aimed at strengthening protections for children online, marking a significant shift in how digital platforms are regulated in the country. The legislation, known as ECA Digital, introduces stricter rules for technology companies and will test whether stronger oversight can translate into real-world impact.

The law, which takes effect this week, allows authorities to impose warnings and fines of up to $10 million for violations. In severe cases, courts may order the suspension or banning of platforms operating in Brazil. The measure was passed rapidly following public outrage over online content involving the sexualisation of minors.

ECA Digital builds on Brazil’s existing child protection framework and adapts it to the digital environment. It introduces obligations such as age verification, stricter content moderation, and mechanisms to remove harmful material involving minors without requiring a court order.

The law also targets platform design, requiring companies to limit features that may encourage compulsive use among children. This includes restrictions on excessive notifications, profiling for targeted advertising, and design elements that prolong user engagement.

Enforcement of ECA Digital will be led by Brazil’s data protection authority, ANPD, alongside a new screening centre within the Federal Police. However, implementation challenges remain, including limited regulatory capacity and the short timeline between the law’s approval and enforcement.

Experts say the law reflects a broader global trend, with dozens of countries considering similar measures. While technology companies have introduced tools such as age verification and parental controls, critics argue that more fundamental changes to platform design and content moderation are still needed.

Brazil’s experience may serve as a test case for how governments balance child protection, platform responsibility, and enforcement capacity. The effectiveness of ECA Digital will depend not only on its legal framework but also on how rigorously it is applied in practice.

AI tool could help detect domestic violence risk years earlier

Researchers in the United States have developed an AI system designed to help doctors identify patients who may be at risk of intimate partner violence. The tool analyses hospital data to detect patterns associated with abuse, potentially enabling healthcare professionals to intervene earlier.

Intimate partner violence refers to abuse from current or former partners and can lead to serious injuries, chronic pain, and long-term mental health problems. According to the European Commission, 18 percent of women who have had a partner reported experiencing physical or sexual violence from that partner in 2021.

The study, published in the journal Nature, examined hospital records from nearly 850 women who had experienced intimate partner violence and more than 5,200 similar patients in a control group. Researchers used the data to train three different machine learning systems to detect patterns associated with abuse.

One model analysed structured hospital data, such as age and medical history. A second model examined written clinical notes, including doctors’ observations and radiology reports. A third system combined both data types and achieved the strongest results, correctly identifying risk in 88 percent of cases.

Researchers found that the system could flag potential abuse more than three years before some patients later entered hospital-based intervention programmes. By analysing large datasets, the tool can detect patterns of physical trauma linked to abuse and alert clinicians so they can approach the issue carefully and offer support.
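The combined model described above can be illustrated with a toy feature pipeline: structured fields and clinical-note text are each turned into numbers, then concatenated into a single vector before classification. The field names, the tiny keyword vocabulary, and the scaling below are illustrative assumptions for demonstration, not the study’s actual design.

```python
# Toy sketch of a "combined" model's feature construction: structured hospital
# data plus bag-of-words features from clinical notes, joined into one vector.
# All names and values here are illustrative, not taken from the study.

def structured_features(age: int, prior_injury_visits: int) -> list[float]:
    # Structured data (e.g. age, visit history) as simple numeric features;
    # age is scaled to roughly the 0-1 range.
    return [age / 100.0, float(prior_injury_visits)]


def text_features(note: str, vocab: list[str]) -> list[float]:
    # Bag-of-words counts of clinical terms appearing in the note text.
    words = note.lower().split()
    return [float(words.count(term)) for term in vocab]


def combined_features(age: int, visits: int, note: str, vocab: list[str]) -> list[float]:
    # The combined model concatenates both views; a classifier trained on
    # these vectors can exploit patterns in either data type.
    return structured_features(age, visits) + text_features(note, vocab)


vocab = ["fracture", "bruising", "fall"]
x = combined_features(34, 2, "Old fracture and facial bruising after a fall", vocab)
```

In practice the study’s models would use far richer representations (full medical histories, learned text encodings), but the principle is the same: a single vector carrying both structured and free-text signals tends to outperform either source alone, consistent with the combined model’s 88 percent result.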
