Fake AI assistant steals OpenAI credentials from thousands of Chrome users

A Chrome browser extension posing as an AI assistant has stolen OpenAI credentials from more than 10,000 users. Cybersecurity platform Obsidian identified the malicious software, known as H-Chat Assistant, which secretly harvested API keys and transmitted user data to hacker-controlled servers.

The extension, initially called ChatGPT Extension, appeared to function normally after users provided their OpenAI API keys. Analysts discovered that the theft was triggered when users deleted chats or logged out, at which point the stored API keys were transmitted through a hardcoded Telegram bot.
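Defenders can flag this class of exfiltration before installation: Telegram bot tokens follow a documented format (a numeric bot ID, a colon, then a 35-character secret), so hardcoded ones are easy to grep for in an extension's bundled JavaScript. A minimal sketch in Python, assuming the standard token format (the pattern and the sample string are illustrative, not taken from H-Chat Assistant itself):

```python
import re

# Telegram bot tokens have a well-known shape: numeric bot ID, a colon,
# then a 35-character secret. (Assumption: pattern based on the documented
# token format, not on the H-Chat sample.)
TELEGRAM_TOKEN_RE = re.compile(r"\d{8,10}:[A-Za-z0-9_-]{35}")

def find_telegram_tokens(source: str) -> list[str]:
    """Return any substrings that look like hardcoded Telegram bot tokens."""
    return TELEGRAM_TOKEN_RE.findall(source)

# Hypothetical snippet of bundled extension JavaScript.
sample = 'fetch("https://api.telegram.org/bot1234567890:AAHf0123456789abcdefghijklmnopqrstu/sendMessage")'
print(find_telegram_tokens(sample))
# → ['1234567890:AAHf0123456789abcdefghijklmnopqrstu']
```

Run over an unpacked extension's source tree, a scan like this surfaces exactly the kind of hardcoded exfiltration channel the researchers describe.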

At least 459 unique API keys were exfiltrated to a Telegram channel months before they were discovered in January 2025.

Researchers believe the malicious activity began in July 2024 and continued undetected for months. Following disclosure to OpenAI on 13 January, the company revoked compromised API keys, though the extension reportedly remained available in the Chrome Web Store.

Security analysts identified 16 related extensions sharing identical developer fingerprints, suggesting a coordinated campaign by a single threat actor.

LayerX Security consultant Natalie Zargarov warned that whilst current download numbers remain relatively low, AI-focused browser extensions could rapidly surge in popularity.

The malicious extensions exploit vulnerabilities in web-based authentication processes, creating, as researchers describe, a ‘materially expanded browser attack surface’ through deep integration with authenticated web applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants weigh massive investment in OpenAI

NVIDIA, Microsoft, and Amazon are in talks to invest up to $60 billion in OpenAI, valuing the company at around $730 billion. The talks highlight intensifying competition among technology giants to secure strategic positions in the rapidly expanding AI sector.

NVIDIA is said to be considering the largest commitment, potentially investing as much as $30 billion, while Microsoft may add less than $10 billion despite its long-standing partnership with OpenAI.

Amazon could contribute more than $10 billion, strengthening its cloud and infrastructure ties with the company as demand for large-scale AI computing continues to rise.

OpenAI and NVIDIA are advancing plans to deploy large-scale data centre capacity, with a multi-year rollout starting in late 2026. The project aims to deliver high-performance computing at scale, supporting OpenAI’s push towards artificial general intelligence and global expansion.


OpenAI biometric social platform plans spark Worldcoin surge

Worldcoin jumped 40% after reports that OpenAI is developing a biometric social platform to verify users and eliminate bots. The proposed network would reportedly integrate AI tools while relying on biometric identification to ensure proof of personhood.

Sources cited by Forbes claim the project aims to create a humans-only platform, differentiating itself from existing social networks, including X. Development is said to be led by a small internal team, with work reportedly underway since early 2025.

Biometric verification could involve Apple’s Face ID or the World Orb scanner, a device linked to the World project co-founded by OpenAI chief executive Sam Altman.

The report sparked a sharp rally in Worldcoin, though part of the gains later reversed amid wider market weakness. Despite the brief surge, Worldcoin has remained sharply lower over the past year amid weak market sentiment and ongoing privacy concerns.


UK expands free AI training to reach 10 million workers by 2030

The UK government has expanded a joint industry programme offering free AI training to every adult, with the ambition of upskilling 10 million workers by 2030.

Newly benchmarked courses are available through the AI Skills Hub, giving people practical workplace skills while supporting Britain’s aim to become the fastest AI adopter in the G7.

The programme includes short online courses that teach workers in the UK how to use basic AI tools for everyday tasks such as drafting text, managing content and reducing administrative workloads.

Participants who complete approved training receive a government-backed virtual AI foundations badge, setting a national standard for AI capability across sectors.

Public sector staff, including NHS and local government employees, are among the groups targeted as the initiative expands.

Ministers also announced £27 million in funding to support local tech jobs, graduate traineeships and professional practice courses, alongside the launch of a new cross-government unit to monitor AI’s impact on jobs and labour markets.

Officials argue that widening access to AI skills will boost productivity, support economic growth and help workers adapt to technological change. The programme builds on existing digital skills initiatives and brings together government, industry and trade unions to shape a fair and resilient future of work.


SoundCloud breach exposes nearly 30 million users

SoundCloud disclosed a major data breach in December 2025, confirming that around 29.8 million global user accounts were affected. The incident represents one of the largest security failures involving a global music streaming platform.

The breach exposed email addresses alongside public profile information, including usernames, display names and follower data. SoundCloud said passwords and payment details were not accessed, but the combined data increases the risk of targeted phishing.

SoundCloud detected unauthorised activity in December 2025 and launched an internal investigation. Attackers reportedly exploited a flaw that linked public profile data with private email addresses at scale.

After SoundCloud refused an extortion demand, the stolen dataset was released publicly. SoundCloud has urged users worldwide to monitor accounts closely and enable stronger security protections.


Class-action claims challenge WhatsApp end-to-end encryption practices

WhatsApp has rejected claims in a class-action lawsuit accusing Meta of accessing encrypted messages, calling them false. The company reaffirmed that chats remain protected by device-based Signal protocol encryption.

Filed in a US federal court in California, the complaint alleges Meta misleads more than two billion users by promoting unbreakable encryption while internally storing and analysing message content. Plaintiffs from several countries claim employees can access chats through internal requests.

WhatsApp said no technical evidence accompanies the accusations and stressed that encryption occurs on users’ devices before messages are sent. According to the company, only recipients hold the keys required to decrypt content, which are never accessible to Meta.
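The model WhatsApp describes, with keys generated and held only on endpoints while the relay sees ciphertext alone, can be illustrated with a toy Diffie-Hellman exchange. This is a deliberately simplified, stdlib-only sketch, not the Signal protocol (which adds authenticated, forward-secret ratcheting); the prime, generator and XOR cipher are toy choices made for readability, never for real security:

```python
import hashlib
import secrets

# Toy public parameters: a Mersenne prime and a small generator.
# Real systems use vetted groups or elliptic curves.
P = 2**127 - 1
G = 3

def dh_keypair():
    """Generate a (private, public) pair; the private value never leaves the device."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv: int, their_pub: int) -> bytes:
    """Each endpoint derives the same 32-byte key locally."""
    s = pow(their_pub, my_priv, P)
    return hashlib.sha256(s.to_bytes(16, "big")).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256-based keystream (same call encrypts and decrypts)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()

# Only the public values cross the wire; a server relaying them
# cannot derive the shared key.
key_a = shared_key(alice_priv, bob_pub)
key_b = shared_key(bob_priv, alice_pub)
assert key_a == key_b

ciphertext = xor_stream(key_a, b"meet at noon")   # all the relay ever sees
assert xor_stream(key_b, ciphertext) == b"meet at noon"
```

The point of the sketch is structural: at no step does the intermediary hold anything that decrypts the message, which is the property the company says its audited implementation provides.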

The firm described the lawsuit as frivolous and said it will seek sanctions against the legal teams involved. Meta spokespersons reiterated that WhatsApp has relied on independently audited encryption standards for over a decade.

The case highlights ongoing debates about encryption and security, but so far, no evidence has shown that message content has been exposed.


Council presidency launches talks on AI deepfakes and cyberattacks

EU member states are preparing to open formal discussions on the risks posed by AI-powered deepfakes and their use in cyberattacks, following an initiative by the current Council presidency.

The talks are intended to assess how synthetic media may undermine democratic processes and public trust across the bloc.

According to sources, capitals will also begin coordinated exchanges on the proposed Democracy Shield, a framework aimed at strengthening resilience against foreign interference and digitally enabled manipulation.

Deepfakes are increasingly viewed as a cross-cutting threat, combining disinformation, cyber operations and influence campaigns.

The timeline set out by the presidency foresees structured discussions among national experts before escalating the issue to the ministerial level. The approach reflects growing concern that existing cyber and media rules are insufficient to address rapidly advancing AI-generated content.

The initiative signals a broader shift within the Council towards treating deepfakes not only as a content moderation challenge, but as a security risk with implications for elections, governance and institutional stability.


Experts debate when quantum computers could break modern encryption

Scientists are divided over when quantum computers will become powerful enough to break today’s digital encryption, a moment widely referred to as ‘Q-Day’.

While predictions range from just two years to several decades, experts agree that governments and companies must begin preparing urgently for a future where conventional security systems may fail.

Quantum computing uses subatomic behaviour to process data far faster than classical machines, enabling rapid decryption of information once considered secure.
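The asymmetry can be made concrete with a back-of-envelope comparison: factoring a 2048-bit RSA modulus with the best classical method, the General Number Field Sieve, scales sub-exponentially, while Shor’s quantum algorithm scales polynomially in the key length. The sketch below uses the standard heuristic GNFS cost formula and the commonly quoted cubic scaling for Shor; both figures are order-of-magnitude assumptions that ignore constant factors and quantum error-correction overhead:

```python
import math

def gnfs_ops(bits: int) -> float:
    """Heuristic operation count for the General Number Field Sieve:
    exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def shor_ops(bits: int) -> float:
    """Rough gate-count scaling for Shor's algorithm, O(bits**3)."""
    return bits ** 3

classical = gnfs_ops(2048)   # roughly 10^35 operations
quantum = shor_ops(2048)     # roughly 10^10 logical gate operations
print(f"GNFS ~10^{math.log10(classical):.0f} operations")
print(f"Shor ~10^{math.log10(quantum):.0f} logical gate operations")
```

The gap of some 25 orders of magnitude is what turns a practically unbreakable key into a tractable target once sufficiently large, error-corrected quantum machines exist.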

Financial systems, healthcare data, government communications, and military networks could all become vulnerable as advanced quantum machines emerge.

Major technology firms have already made breakthroughs, accelerating concerns that encryption safeguards could be overwhelmed sooner than expected.

Several cybersecurity specialists warn that sensitive data is already being harvested and stored for future decryption, a strategy known as ‘harvest now, decrypt later’.

Regulators in the UK and the US have set timelines for shifting to post-quantum cryptography, aiming for full migration by 2030-2035. However, engineering challenges and unresolved technical barriers continue to cast uncertainty over the pace of progress.

Despite scepticism over timelines, experts agree that early preparation remains the safest approach, stressing that education, infrastructure upgrades and global cooperation are vital to prevent disruption as quantum technology advances.


AI-driven semiconductor expansion continues despite market doubts

The pace of the AI infrastructure boom continues to accelerate, with semiconductor supply chains signalling sustained long-term demand.

NVIDIA remains the most visible beneficiary as data centre investment drives record GPU purchases, yet supplier activity further upstream suggests confidence extends well beyond a single company.

ASML, the Dutch firm that is the sole supplier of extreme ultraviolet lithography equipment, has emerged as a critical indicator of future chip production.

Its machines are essential for advanced semiconductor manufacturing, meaning strong performance reflects expectations of high chip volumes across the industry rather than short-term speculation. Quarterly earnings underline that momentum.

ASML reported €32.7 billion in net sales, while new bookings reached a record €13 billion, more than double the previous quarter.

New orders reflect how much capacity manufacturers expect to need, pointing to sustained expansion driven by anticipated AI workloads.

Company leadership attributed the surge directly to AI-related demand, with customers expressing growing confidence in the durability of data centre investment.

While order fulfilment will take years and some plans may change, industry signals suggest a slowdown in AI infrastructure spending is not imminent.


Google faces new UK rules over AI summaries and publisher rights

The UK competition watchdog has proposed new rules that would force Google to give publishers greater control over how their content is used in search and AI tools.

The Competition and Markets Authority (CMA) plans to require opt-outs for AI-generated summaries and model training, marking the first major intervention under Britain’s new digital markets regime.
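For context, publishers can already express one such preference today: Google publishes a robots.txt token, Google-Extended, that controls whether a site’s content is used to train its AI models, though it does not govern Search indexing or inclusion in search-grounded features, which is part of the gap the CMA proposal targets. An illustrative robots.txt (paths and policy are hypothetical):

```txt
# Opt out of use for Google AI model training while
# remaining indexed in Search. Google-Extended is Google's
# published control token; it does not affect search ranking.
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
```

An opt-out mandated by the CMA would go further, covering AI-generated summaries shown in search results themselves.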

Publishers argue that generative AI threatens traffic and revenue by answering queries directly instead of sending users to the original sources.

The CMA proposal would also require clearer attribution of publisher content in AI results and stronger transparency around search rankings, including AI Overviews and conversational search features.

Additional measures under consultation include search engine choice screens on Android and Chrome, alongside stricter data portability obligations. The regulator says tailored obligations would give businesses and users more choice while supporting innovation in digital markets.

Google has warned that overly rigid controls could damage the user experience, describing the relationship between AI and search as complex.

The consultation runs until late February, with the outcome expected to shape how AI-powered search operates in the UK.
