Cybersecurity alarm after 184 million credentials exposed

A vast unprotected database containing over 184 million credentials from major platforms and sectors has highlighted severe weaknesses in data security worldwide.

The leaked credentials, harvested by infostealer malware and stored in plain text, pose significant risks to consumers and businesses, underscoring an urgent need for stronger cybersecurity and better data governance.

Cybersecurity researcher Jeremiah Fowler discovered the 47 GB database exposing emails, passwords, and authorisation URLs from tech giants like Google, Microsoft, Apple, Facebook, and Snapchat, as well as banking, healthcare, and government accounts.

The data was left accessible without any encryption or authentication, making it vulnerable to anyone with the link.
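Plain-text storage is the core failure here: standard practice is to store only a salted, slow hash, so a leaked record cannot simply be replayed as a password. A minimal sketch using Python's standard library (illustrative only, and unrelated to any system named above):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # PBKDF2 work factor; higher makes brute force slower

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))  # False
```

A database holding only salt-and-digest pairs would have been far less valuable to whoever found it; the leak described here stored the passwords themselves.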

The credentials were reportedly collected by infostealer malware such as Lumma Stealer, which silently steals sensitive information from infected devices. The stolen data fuels a thriving underground economy involving identity theft, fraud, and ransomware.

The breach’s scope extends beyond tech, affecting critical infrastructure like healthcare and government services, raising concerns over personal privacy and national security. With recurring data breaches becoming the norm, industries must urgently reinforce security measures.

Chief Data Officers and IT risk leaders face mounting pressure as regulatory scrutiny intensifies. The leak highlights the need for proactive data stewardship through encryption, access controls, and real-time threat detection.

Many organisations struggle with legacy systems, decentralised data, and cloud adoption, complicating governance efforts.

Enterprise leaders must treat data as a strategic asset and liability, embedding cybersecurity into business processes and supply chains. Beyond technology, cultivating a culture of accountability and vigilance is essential to prevent costly breaches and protect brand trust.

The massive leak signals a new era in data governance where transparency and relentless improvement are critical. The message is clear: there is no room for complacency in safeguarding the digital world’s most valuable assets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK judges issue warning on unchecked AI use by lawyers

A senior UK judge has warned that lawyers may face prosecution if they continue citing fake legal cases generated by AI without verifying their accuracy.

High Court justice Victoria Sharp called the misuse of AI a threat to justice and public trust, after lawyers in two recent cases relied on false material created by generative tools.

In one £90 million lawsuit involving Qatar National Bank, a lawyer submitted 18 cases that did not exist. The client later admitted to supplying the false information, but Justice Sharp criticised the lawyer for depending on the client’s research instead of conducting proper legal checks.

In another case, five fabricated cases were used in a housing claim against the London Borough of Haringey. The barrister denied using AI but failed to provide a clear explanation.

Both incidents have been referred to professional regulators. Sharp warned that submitting false information could amount to contempt of court or, in severe cases, perverting the course of justice — an offence that can lead to life imprisonment.

While recognising AI as a useful legal tool, Sharp stressed the need for oversight and regulation. She said AI’s risks must be managed with professional discipline if public confidence in the legal system is to be preserved.

UK teams with tech giants on AI training

The UK government is launching a nationwide AI skills initiative aimed at both workers and schoolchildren, with Prime Minister Keir Starmer announcing partnerships with major tech companies including Google, Microsoft and Amazon.

The £187 million TechFirst programme will provide AI education to one million secondary students and train 7.5 million workers over the next five years.

Rather than keeping such tools limited to specialists, the government plans to make AI training accessible across classrooms and businesses. Companies involved will make learning materials freely available to boost digital skills and productivity, particularly in using chatbots and large language models.

Starmer said the scheme is designed to empower the next generation to shape AI’s future instead of being shaped by it. He called it the start of a new era of opportunity and growth, as the UK aims to strengthen its global leadership in AI.

The initiative arrives as the country’s AI sector, currently worth £72 billion, is projected to grow to more than £800 billion by 2035.

The government also signed two agreements with NVIDIA to support a nationwide AI talent pipeline, reinforcing efforts to expand both the workforce and innovation in the sector.

AI in higher education: A mixed blessing for students and institutions

AI is rapidly reshaping university life, offering students new tools to boost creativity, structure assignments, and develop ideas more efficiently. At institutions like Oxford Brookes University, students such as 22-year-old Sunjaya Phillips have found that AI enhances confidence and productivity when used responsibly, with faculty guidance.

She describes AI as a ‘study buddy’ that transformed her academic experience, especially during creative blocks, where AI-generated prompts saved valuable time. However, the rise of AI in academia also raises important concerns.

A global student survey revealed that while many embrace AI in their studies, a majority fear its long-term implications for employment. Some admit to misusing the technology for dishonest purposes, highlighting the ethical challenges it presents.

Experts like Dr Charlie Simpson from Oxford Brookes caution that relying too heavily on AI to ‘do the thinking’ undermines educational goals and may devalue the learning process.

Despite these concerns, many educators and institutions remain optimistic about AI’s potential—if used wisely. Professor Keiichi Nakata from Henley Business School stresses that AI is not a replacement but a powerful aid, likening its expected workplace relevance to today’s basic IT skills.

He and others argue that responsible AI use could elevate the capabilities of future graduates and reshape degree expectations accordingly. While some students worry about job displacement, others, like Phillips, view AI as a support system rather than a threat.

The consensus among academics is clear: success in the age of AI will depend not on avoiding the technology, but on mastering it with discernment, ethics, and adaptability.

ChatGPT adds meeting recording and cloud access

OpenAI has launched new features for ChatGPT that allow it to record meetings, transcribe conversations, and pull information directly from cloud platforms like Google Drive and SharePoint.

Instead of relying on typed input alone, users can now speak to ChatGPT, which records audio, creates editable summaries, and helps generate follow-up content such as emails or project outlines.

‘Record’ is currently available to Team users via the macOS app and will soon expand to Enterprise and Edu accounts.

The recording tool automatically deletes the audio after transcription and applies existing workspace data rules, ensuring recordings are not used for training.

Instead of leaving notes scattered across different platforms, users gain a structured and searchable history of conversations, voice notes, or brainstorming sessions, which ChatGPT can recall and apply during future interactions.

At the same time, OpenAI has introduced new connectors for business users that let ChatGPT access files from cloud services like Dropbox, OneDrive, Box, and others.

These connectors allow ChatGPT to search and summarise information from internal documents, rather than depending only on web search or user uploads. The update also includes beta support for Deep Research agents that can work with tools like GitHub and HubSpot.

OpenAI has embraced the Model Context Protocol, an open standard allowing organisations to build their own custom connectors for proprietary tools.
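MCP is an open, JSON-RPC-based standard; the fragment below is only a schematic of the connector idea, with an invented tool name and request shape, and a real connector would use an official MCP SDK rather than this hand-rolled dispatcher:

```python
import json

# Illustrative only: a registry mapping tool names to handlers over a
# proprietary backend (here a stub that fakes a document search).
TOOLS = {
    "search_docs": lambda query: [f"doc matching {query!r}"],
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style tool call to its registered handler."""
    request = json.loads(raw)
    handler = TOOLS[request["tool"]]
    result = handler(**request.get("arguments", {}))
    return json.dumps({"id": request.get("id"), "result": result})

print(handle_request(
    '{"id": 1, "tool": "search_docs", "arguments": {"query": "leave policy"}}'
))
```

The point of the standard is that the assistant only ever sees the tool interface, while the organisation keeps control of what the handler exposes.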

Rather than serving purely as a general-purpose chatbot, ChatGPT is evolving into a workplace assistant capable of tapping into and understanding a company’s complete knowledge base.

M&S CEO targeted by hackers in abusive ransom email

Marks & Spencer has been directly targeted by a ransomware group calling itself DragonForce, which sent a vulgar and abusive ransom email to CEO Stuart Machin using a compromised employee email address.

The message, laced with offensive language and racist terms, demanded that Machin engage via a darknet portal to negotiate payment. It also claimed that the hackers had encrypted the company’s servers and stolen customer data, a claim M&S eventually acknowledged weeks later.

The email, dated 23 April, appears to have been sent from the account of an Indian IT worker employed by Tata Consultancy Services (TCS), a long-standing M&S tech partner.

TCS has denied involvement and stated that its systems were not the source of the breach. M&S has remained silent publicly, neither confirming the full scope of the attack nor disclosing whether a ransom was paid.

The cyber attack has caused major disruption, costing M&S an estimated £300 million and halting online orders for over six weeks.

DragonForce has also claimed responsibility for a simultaneous attack on the Co-op, which left some shelves empty for days. While nothing has yet appeared on DragonForce’s leak site, the group claims it will publish stolen information soon.

Investigators believe DragonForce operates as a ransomware-as-a-service collective, offering tools and platforms to cybercriminals in exchange for a 20% share of any ransom.

Some experts suspect the real perpetrators may be young hackers from the West, linked to a loosely organised online community called Scattered Spider. The UK’s National Crime Agency has confirmed it is focusing on the group as part of its inquiry into the recent retail hacks.

Epic adds AI NPC tools to Fortnite as Vader voice sparks union clash

Epic Games is launching new tools for Fortnite creators that enable them to build AI-powered non-player characters (NPCs), following the debut of an AI-generated Darth Vader that players can talk to in-game.

The feature, which reproduces the iconic voice of James Earl Jones using AI, marks a significant step in interactive gaming—but also comes with its share of challenges and controversy.

According to The Verge, Epic encountered several difficulties in fine-tuning Vader’s voice and responses to feel authentic and fit smoothly into gameplay. ‘The culmination of a very intense effort for a character everybody understands,’ said Saxs Persson, executive vice president of the Fortnite ecosystem.

Persson noted that the team worked carefully to ensure that when Vader joins a player’s team, he behaves as a fearsome and aggressive ally—true to his cinematic persona.

However, the rollout wasn’t entirely smooth. In a live-streamed session, popular Fortnite creator Loserfruit prompted Vader to swear, exposing the system’s content filtering flaws. Epic responded quickly with patches and has since implemented multiple layers of safety checks.

‘We do our best job on day one,’ said Persson, ‘but more importantly, we’re ready to surround the problem and have fixes in place as fast as possible.’

Now, Fortnite creators will have access to the same suite of AI tools and safety systems used to develop Vader. They can control voice tone, dialogue, and NPC behaviour while relying on Epic’s safeguards to avoid inappropriate interactions.

The feature launch comes at a sensitive moment, as actor union SAG-AFTRA has filed a complaint against Epic Games over using AI to recreate Vader’s voice.

The union claims that Llama Productions, an Epic subsidiary, employed the technology without consulting or bargaining with the union, replacing the work of human voice actors.

‘We must protect our right to bargain terms and conditions around uses of voice that replace the work of our members,’ SAG-AFTRA said, emphasising its support for actors and estates in managing the use of digital replicas.

As Epic expands its AI capabilities in gaming, it faces both the technical challenges of responsible deployment and the growing debate around AI’s impact on creative professions.

Reddit accuses Anthropic of misusing user content

Reddit has taken legal action against AI startup Anthropic, alleging that the company scraped its platform without permission and used the data to train and commercialise its Claude AI models.

The lawsuit, filed in San Francisco’s Superior Court, accuses Anthropic of breaching contract terms, unjust enrichment, and interfering with Reddit’s operations.

According to Reddit, Anthropic accessed the platform more than 100,000 times despite publicly claiming to have stopped doing so.

The complaint claims Anthropic ignored Reddit’s technical safeguards, such as robots.txt files, and bypassed the platform’s user agreement to extract large volumes of user-generated content.
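For context, robots.txt is a voluntary convention: a compliant crawler consults it before fetching, along these lines (a sketch with Python's standard library; the paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

# Parse a policy directly; against a live site you would construct
# RobotFileParser("https://example.com/robots.txt") and call .read().
rules = RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A compliant crawler calls can_fetch() before every request;
# nothing technically stops a crawler that simply skips this check.
print(rules.can_fetch("ExampleBot", "https://example.com/public/page"))   # True
print(rules.can_fetch("ExampleBot", "https://example.com/private/data"))  # False
```

Because the file is advisory rather than enforced, Reddit's complaint treats ignoring it as evidence of intent rather than as a technical break-in.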

Reddit argues that Anthropic’s actions undermine its licensing deals with companies like OpenAI and Google, which have agreed to strict content usage and deletion protocols.

The filing asserts that Anthropic intentionally used personal data from Reddit without ever seeking user consent, calling the company’s conduct deceptive. Despite public statements suggesting respect for privacy and web-scraping limitations, Anthropic is portrayed as having disregarded both.

The lawsuit even cites Anthropic’s own 2021 research that acknowledged Reddit content as useful in training AI models.

Reddit is now seeking damages, repayment of profits, and a court order to stop Anthropic from using its data further. The market responded positively, with Reddit’s shares closing nearly 6.7% higher at $118.21, indicating investor support for the company’s aggressive stance on data protection.

WhatsApp to add usernames for better privacy

WhatsApp is preparing to introduce usernames, allowing users to hide their phone numbers and opt for a unique ID instead. Meta’s push reflects growing demand for more secure and anonymous communication online.

Currently in development and not yet available for testing, the new feature will let users create usernames with letters, numbers, periods, and underscores, while blocking misleading formats like web addresses.
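Those format rules translate naturally into a validation check. A sketch follows; the length limits and the URL-like blocklist are assumptions for illustration, since WhatsApp's exact rules are not public:

```python
import re

# Allowed characters per the reported rules: letters, digits, periods,
# underscores. The 3-30 length bound is an assumed, illustrative limit.
USERNAME_RE = re.compile(r"^[A-Za-z0-9._]{3,30}$")

def looks_like_web_address(name: str) -> bool:
    """Reject misleading, URL-like usernames such as 'example.com'."""
    tlds = (".com", ".net", ".org", ".info")  # illustrative blocklist
    lowered = name.lower()
    return lowered.startswith("www.") or lowered.endswith(tlds)

def is_valid_username(name: str) -> bool:
    return bool(USERNAME_RE.fullmatch(name)) and not looks_like_web_address(name)

print(is_valid_username("ada.lovelace_42"))  # True
print(is_valid_username("mysite.com"))       # False (reads like a URL)
```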

The move aims to improve privacy by letting users connect without revealing personal contact details. A system message will alert contacts whenever a username is updated, adding transparency to the process.

Although still in development, the feature is expected to roll out soon, bringing WhatsApp in line with other major messaging platforms that already support username-based identities.

Google email will reply by using your voice

Google is building a next-generation email system that uses generative AI to reply to mundane messages in your own tone, according to DeepMind CEO Demis Hassabis.

Speaking at SXSW London, Hassabis said the system would handle everyday emails instead of requiring users to write repetitive responses themselves.

Hassabis called email ‘the thing I really want to get rid of,’ and joked he’d pay thousands each month for that luxury. He emphasised that while AI could help cure diseases or combat climate change, it should also solve smaller daily annoyances first—like managing inbox overload.

The upcoming feature aims to identify routine emails and draft replies that reflect the user’s writing style, potentially making decisions on simpler matters.

While details are still limited, the project remains under development and could debut as part of Google’s premium AI subscription model before reaching free-tier users.

Gmail already includes generative tools that adjust message tone, but the new system goes further—automating replies instead of just suggesting edits.

Hassabis also envisioned a universal AI assistant that protects users’ attention and supports digital well-being, offering personalised recommendations and taking care of routine digital tasks.
