Meta to end Instagram private message encryption after May 8

US tech giant Meta has announced that it will end support for end-to-end encryption of private messages on Instagram after 8 May.

The technology ensured that only the intended recipients could read messages, preventing even Meta from accessing their contents.

The decision follows concerns from law enforcement and child protection organisations, which argued that encrypted messages can make it harder to identify harmful content involving children.

Meta has stated that the update allows the platform to monitor messages while maintaining standard privacy safeguards.

End-to-end encryption has been the default on several Meta messaging services, including WhatsApp and Messenger.

The company first signalled its intent to expand encryption across Instagram and Messenger in 2019, implementing it in 2023. The plan was met with objections from organisations such as the Internet Watch Foundation and the Virtual Global Taskforce.

These groups highlighted potential risks in preventing the timely detection of harmful content, particularly child sexual abuse material.

Meta’s shift reflects a compromise between privacy, platform security, and online child safety. The company has not provided further details on changes to encryption policies beyond Instagram’s private messaging service.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU child safety rules lapse amid ongoing debate over privacy and enforcement

The European Union has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April.

Discussions between the European Parliament and the Council of the European Union concluded without reaching a consensus on how to proceed with such measures.

The existing rules permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.

The European Commission had proposed a temporary extension while negotiations continue on a permanent framework under the Child Sexual Abuse Regulation, but differing views on scope and safeguards prevented agreement.

Stakeholders across sectors have highlighted the importance of maintaining effective tools to address online harms, while also emphasising the need to respect fundamental rights.

Previous periods of legal uncertainty have shown that detection capabilities may be affected when such frameworks are absent, although assessments of effectiveness remain subject to ongoing debate.

At the same time, concerns have been raised regarding the broader implications of monitoring digital communications. Some perspectives stress that any approach should carefully consider privacy protections, particularly in relation to secure and encrypted services.

Attention now turns to ongoing negotiations on a long-term regulatory solution.

The outcome will shape how the EU approaches the challenge of addressing harmful online content while safeguarding rights and ensuring proportional and transparent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO launches research on harmful online content governance in South Africa

A new research initiative led by UNESCO is examining the governance of harmful online content in South Africa, bringing together actors from government, academia, civil society and technology platforms to strengthen digital governance frameworks.

Conducted under the Social Media 4 Peace programme and supported by the EU, the study investigates the spread and impact of hate speech and disinformation while assessing existing regulatory approaches and platform governance systems.

Emphasis is placed on identifying structural gaps and developing practical responses suited to the country’s socio-political context.

Stakeholder engagement has shaped the research design to reflect local realities, with the aim of producing actionable and rights-based recommendations. As noted by a researcher involved in the project,

“At Research ICT Africa, we don’t want this study to end with generic recommendations. We are aiming for grounded insights into how social media is shaping information integrity in our context, alongside practical guidance that regulators, platforms, and civil society can apply.”

Kola Ijasan, a researcher at Research ICT Africa

Regulatory perspectives also highlight the importance of understanding emerging risks. As one regulator stated,

“We are particularly interested in identifying regulatory gaps – areas where current laws and frameworks fall short in addressing emerging digital risks.”

Nomzamo Zondi, a regulator in South Africa

Findings are expected to contribute to evidence-based policymaking, strengthen platform accountability and safeguard freedom of expression and access to information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea seeks support for global AI hub

South Korea is seeking international support for a proposed global AI hub to advance cooperation on technology and governance. The initiative was discussed during talks with Switzerland’s leadership.

Officials in Switzerland met with South Korea’s prime minister to strengthen bilateral ties and support the project. The programme is intended to promote collaboration on AI rules, education and innovation.

The government of South Korea has also engaged several UN agencies to support the initiative. Agreements outline cooperation to help establish the hub and expand global dialogue on AI development.

Leaders in South Korea say the country aims to contribute its strong information technology capabilities to the project. The initiative reflects broader efforts to position the nation as a key player in global AI policy and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CEOs worry about AI progress

Business leaders in Cyprus are increasingly concerned about whether their organisations are adapting quickly enough to AI-driven change. A recent PwC survey shows many executives feel the pace of transformation is too slow.

Despite growing interest, most companies have yet to see significant financial returns from AI. Only a minority reported increased revenue or reduced costs, while many said the impact remains limited. Such modest returns are not unique to Cyprus; similar patterns are reported worldwide.

Companies in Cyprus are still building the foundations for wider AI adoption. The challenges include limited investment, difficulty attracting skilled talent and uncertainty about organisational readiness.

Executives expect AI to affect junior roles more than senior positions over the coming years. Leaders emphasise the need for clear strategy, workforce development and stronger alignment between technology and business goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI fuels rise in cyber scams

Cybercrime incidents have surged as AI tools enable more convincing scams, leading to sharply rising losses in Estonia. Authorities reported thousands of phishing and fraud cases affecting individuals and businesses.

Criminals are using AI to generate fluent messages in Estonian, removing a key warning sign that once helped people detect scams. Experts say language accuracy has made fraudulent calls and messages harder to identify.

Growing awareness of scams is also fuelling public anxiety, with some users considering abandoning digital services. Officials warn that loss of trust could undermine confidence in digital systems.

Authorities are urging stronger safeguards and public education to counter the cybersecurity threats. Banks, telecom firms and digital identity providers are introducing new protections while campaigns aim to improve digital awareness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tether unveils mobile-friendly AI training platform

Tether has launched an AI framework that runs large language models on smartphones and non-NVIDIA GPUs. The system is part of its QVAC platform and uses Microsoft’s BitNet architecture, along with LoRA techniques to reduce memory and computational requirements.

The framework enables cross-platform training on AMD, Intel, Apple Silicon, and mobile GPUs, allowing models with up to 1 billion parameters to be fine-tuned on phones in under 2 hours.

Larger models with up to 13 billion parameters are also supported on mobile devices. BitNet’s 1-bit architecture reduces VRAM requirements by nearly 78%, enabling larger models to run on limited hardware.
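The arithmetic behind these memory savings can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation, not Tether's actual code: the layer width, LoRA rank, and layer count below are hypothetical round numbers, chosen only to show why freezing base weights at low precision and training small LoRA adapters makes on-phone fine-tuning plausible.

```python
# Illustrative sketch (not Tether's QVAC code): why LoRA adapters plus
# low-bit frozen weights shrink the memory needed to fine-tune on a phone.
# All model dimensions below are hypothetical round numbers.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for one LoRA adapter pair (A: d_in x r, B: r x d_out)."""
    return d_in * rank + rank * d_out

def weight_bytes(n_params: int, bits_per_weight: float) -> float:
    """Approximate storage for frozen weights at a given precision."""
    return n_params * bits_per_weight / 8

# A hypothetical 1-billion-parameter model with 4096-wide projection layers.
total_params = 1_000_000_000
d_model, rank, n_adapted_layers = 4096, 8, 48

# Full fine-tuning updates every weight; LoRA trains only small adapters.
full_trainable = total_params
lora_trainable = n_adapted_layers * lora_params(d_model, d_model, rank)

# Frozen base weights: 16-bit floats vs a 1-bit (BitNet-style) encoding.
fp16_gb = weight_bytes(total_params, 16) / 1e9
one_bit_gb = weight_bytes(total_params, 1) / 1e9

print(f"trainable params: full={full_trainable:,} lora={lora_trainable:,}")
print(f"frozen weights: fp16={fp16_gb:.2f} GB, 1-bit={one_bit_gb:.3f} GB")
```

Under these assumed dimensions, LoRA cuts the trainable parameters from a billion to a few million, and a 1-bit weight encoding reduces the frozen-weight footprint by over an order of magnitude, which is consistent in spirit with the VRAM reductions Tether reports.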

The performance gains also benefit inference, with mobile GPUs outperforming CPUs, and enable on-device training and federated learning. By reducing reliance on cloud infrastructure, the system offers more flexible AI development for distributed environments.

Tether’s expansion into AI mirrors a broader trend in the crypto sector, where companies are investing in AI infrastructure, autonomous agents, and high-performance computing.

Industry activity includes record revenue growth for AI and HPC operations, blockchain-integrated AI agents, and new tools for secure on-chain transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini and Search gain deeper personalisation tools

Google has expanded Personal Intelligence across AI Mode in Search, the Gemini app, and Gemini in Chrome for US users. Rollout follows early adoption, where users responded positively to more tailored and context-aware assistance.

Personal Intelligence connects data across services such as Gmail and Google Photos to deliver highly personalised responses. Queries no longer need full context, as the system uses past purchases, travel history, and preferences to deliver relevant suggestions.

Use cases range from customised shopping recommendations and technical troubleshooting to travel planning and itinerary creation. Suggestions adapt to user habits, including preferred brands, past bookings, and time constraints, delivering more precise results.

Privacy remains central to the rollout, with users retaining control over which apps are connected. Data from personal services is not directly used to train AI models, while limited interaction data helps improve performance over time.

Access is currently limited to personal Google accounts, excluding enterprise and education users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!