Worldcoin jumped 40% after reports that OpenAI is developing a biometric social platform to verify users and eliminate bots. The proposed network would reportedly integrate AI tools while relying on biometric identification to ensure proof of personhood.
Sources cited by Forbes claim the project aims to create a humans-only platform, differentiating itself from existing social networks, including X. Development is said to be led by a small internal team, with work reportedly underway since early 2025.
Biometric verification could involve Apple’s Face ID or the World Orb scanner, a device linked to the World project co-founded by OpenAI chief executive Sam Altman.
The report sparked a sharp rally in Worldcoin, though part of the gains later reversed amid wider market weakness. Despite the brief surge, the token remains sharply lower over the past year on weak sentiment and ongoing privacy concerns.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Pornhub will begin blocking access for new UK users from 2 February 2026, the company said, allowing entry only to people who had already created an account and completed age checks before that date. It framed the move as a protest against how the UK’s Online Safety Act is being enforced.
The UK regime, overseen by Ofcom, requires porn services accessible in Britain to deploy ‘highly effective’ age assurance measures, not simple click-through age gates. Ofcom says traffic to pornography sites has fallen by about a third since the age-check deadline of 25 July 2025, and it has pursued investigations into dozens of services as enforcement ramps up.
Privacy and security concerns become sharper when adult platforms are turned into identity checkpoints. In December 2025, reporting linked a large leak of Pornhub premium-user analytics data, including emails and viewing and search histories, to a breach at a third-party analytics provider, underscoring how sensitive such datasets become once they are collected and retained.
Government and regulator messaging emphasises child protection and the Online Safety Act’s enforcement teeth, including significant penalties and, in extreme cases, access restrictions. Companies such as Aylo counter that inconsistent enforcement simply pushes demand to riskier corners of the internet and fuels workarounds such as VPNs.
SoundCloud disclosed a major data breach in December 2025, confirming that around 29.8 million global user accounts were affected. The incident represents one of the largest security failures involving a global music streaming platform.
The privacy breach exposed email addresses alongside public profile information, including usernames, display names and follower data. SoundCloud said passwords and payment details were not accessed, but the combined data increases the risk of phishing.
SoundCloud detected unauthorised activity in December 2025 and launched an internal investigation. Attackers reportedly exploited a flaw that linked public profile data with private email addresses at scale.
After SoundCloud refused an extortion demand, the stolen dataset was released publicly. SoundCloud has urged users worldwide to monitor accounts closely and enable stronger security protections.
Meta plans to nearly double its AI investment in 2026, according to its latest earnings report. Spending is expected to reach between $115bn and $135bn as the company expands large-scale infrastructure.
Mark Zuckerberg said the investment will focus on data centres needed to train advanced AI models. The strategy is designed to support long-term AI development across Meta’s platforms.
Zuckerberg described 2026 as a pivotal year for AI, with Meta working on multiple products rather than a single launch. Testing is reportedly underway on new models intended to succeed the Llama family.
Meta said building proprietary AI models allows greater control over future products. The company positioned AI as a tool for personal empowerment, setting its approach apart from more centralised automation strategies.
WhatsApp has rejected as false the claims in a class-action lawsuit accusing Meta of accessing encrypted messages. The company reaffirmed that chats remain protected by end-to-end encryption based on the Signal protocol, performed on users’ devices.
Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.
WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.
The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.
The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.
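The core claim in the company’s defence, that keys are derived on users’ devices and public values alone pass through the server, can be illustrated with a toy key-agreement sketch. This is a classroom Diffie-Hellman demo, not the Signal protocol, and the XOR ‘cipher’ is insecure by design; it only shows why a relaying server cannot derive the message key:

```python
# Toy illustration (NOT the Signal protocol): Diffie-Hellman key agreement,
# showing why a server that only relays public values never learns the key.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; fine for a demo, far too small for real use
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    # Each device computes the same key locally; only public values transit the server.
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_stream(key, data):
    # Toy XOR "cipher" keyed by the shared secret -- illustrative, not secure.
    stream = hashlib.sha256(key + b"stream").digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()
# A relay sees alice_pub and bob_pub but cannot compute the shared key from them.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
assert k_alice == k_bob
ciphertext = xor_stream(k_alice, b"hello")
assert xor_stream(k_bob, ciphertext) == b"hello"
```

Real deployments layer authenticated key exchange, ratcheting, and vetted ciphers on top of this basic idea; the sketch captures only the ‘keys never leave the device’ property at issue in the lawsuit.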
Swiss technology and privacy expert Anna Zeiter is leading the development of W Social, a new European-built social media network designed as an alternative to X. The project aims to reduce reliance on US tech and strengthen European digital sovereignty.
W Social will require users to verify their identity and provide a photo to ensure genuine human accounts, tackling the fake profiles and bot-driven disinformation that critics link to existing platforms. Zeiter said the W stands for ‘We’, as well as for values and verification.
The platform’s infrastructure will be hosted in Europe under strict EU data protection laws, with decentralised storage and offices planned in Berlin and Paris. Early support comes from European political and tech figures, signalling interest beyond Silicon Valley.
W Social could launch a beta version as early as February, with broader public access planned by year-end. Backers hope the network will foster more positive dialogue and provide a European alternative to US-based social media influence.
StackAdapt has been certified under the EU-US Data Privacy Framework, allowing the advertising technology firm to manage personal data transfers from the EU without relying on additional transfer mechanisms.
The framework, adopted in 2023, provides a legal basis for EU-to-US data flows while strengthening oversight and accountability. Certification requires organisations to meet strict standards on data minimisation, security, transparency, and individual rights.
By joining the framework, StackAdapt enhances its ability to support advertisers, publishers, and partners through seamless international data processing.
The move also reduces regulatory complexity for European customers while reinforcing the company’s broader commitment to privacy-by-design and responsible data use.
EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.
The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.
According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.
Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.
The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.
The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge but as a security risk with implications for elections, governance and institutional stability.
Scientists are divided over when quantum computers will become powerful enough to break today’s digital encryption, a moment widely referred to as ‘Q-Day’.
While predictions range from just two years to several decades, experts agree that governments and companies must begin preparing urgently for a future where conventional security systems may fail.
Quantum computers exploit quantum-mechanical effects to solve certain problems far faster than classical machines, including the factoring and discrete-logarithm problems that underpin much of today’s encryption.
Financial systems, healthcare data, government communications, and military networks could all become vulnerable as advanced quantum machines emerge.
Major technology firms have already made breakthroughs, accelerating concerns that encryption safeguards could be overwhelmed sooner than expected.
Several cybersecurity specialists warn that sensitive data is already being harvested and stored for future decryption, a strategy known as ‘harvest now, decrypt later’.
Regulators in the UK and the US have set timelines for shifting to post-quantum cryptography, aiming for full migration by 2030-2035. However, engineering challenges and unresolved technical barriers continue to cast uncertainty over the pace of progress.
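One widely discussed migration pattern is ‘hybrid’ cryptography: deriving session keys from both a classical and a post-quantum shared secret, so that a break of either scheme alone is not fatal. A minimal, hedged sketch of such a combiner follows; the byte strings stand in for real key-exchange outputs (say, an X25519 secret and an ML-KEM secret), and production protocols use standardised KDFs that bind in the full handshake transcript:

```python
# Hedged sketch of a hybrid key combiner for post-quantum migration.
# The session key depends on BOTH inputs, so it stays safe as long as
# at least one of the two underlying schemes remains unbroken.
import hashlib
import hmac

def hybrid_key(classical_secret: bytes, pq_secret: bytes, context: bytes) -> bytes:
    # HKDF-extract-style combination (simplified): keyed hash over the
    # concatenated secrets, with a context label as the HMAC key.
    return hmac.new(context, classical_secret + pq_secret, hashlib.sha256).digest()

# Placeholder secrets standing in for real key-exchange outputs.
k = hybrid_key(b"\x01" * 32, b"\x02" * 32, b"demo-session")
assert len(k) == 32
```

Because the combiner is a one-way function of both secrets, an attacker who later breaks only the classical component (the ‘harvest now, decrypt later’ scenario) still cannot recover the session key.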
Despite scepticism over timelines, experts agree that early preparation remains the safest approach, stressing that education, infrastructure upgrades and global cooperation are vital to prevent disruption as quantum technology advances.
The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.
The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.
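A partial precedent for such opt-outs already exists: Google documents a `Google-Extended` robots.txt token that lets publishers exclude their content from use in training Google’s AI models without affecting ordinary Search crawling. A sketch of how a publisher might apply it (paths and policy are illustrative):

```text
# robots.txt -- opt content out of Google's AI training
# while leaving ordinary Search crawling untouched.
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

The CMA’s proposal would go further than this voluntary mechanism, making comparable controls a regulatory requirement.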
Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.
The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.
Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.
Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.
The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.