Speaking to POLITICO at the British Library, itself still recovering from a 2023 ransomware attack by the Rhysida gang, Security Minister Dan Jarvis said the UK is prepared to use offensive cyber capabilities to respond to threats.
‘If you are a cybercriminal and think you can attack a UK-based institution without repercussions, think again,’ Jarvis stated. He emphasised the importance of sending a clear signal that hostile activity will not go unanswered.
The warning follows a recent government decision to ban ransom payments by public sector bodies. Jarvis said deterrence must be matched by vigorous enforcement.
The UK has acknowledged its offensive cyber capabilities for over a decade, but recent strategic shifts have expanded their role. A £1 billion investment in a new Cyber and Electromagnetic Command will support coordinated action alongside the National Cyber Force.
While Jarvis declined to specify technical capabilities, he cited the National Crime Agency’s role in disrupting the LockBit ransomware group as an example of the UK’s growing offensive posture.
AI is accelerating both cyber threats and defensive measures. Jarvis said the UK must harness AI for national advantage, describing an ‘arms race’ amid rapid technological advancement.
Most cyber threats originate from Russia or its affiliated groups, though Iran, China, and North Korea remain active. The UK is also increasingly concerned about ‘hack-for-hire’ actors operating from friendly nations, including India.
Despite these concerns, Jarvis stressed the UK’s strong security ties with India and ongoing cooperation to curb cyber fraud. ‘We will continue to invest in that relationship for the long term,’ he said.
Speaking at a Federal Reserve conference in Washington, OpenAI CEO Sam Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.
He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.
Altman added that video impersonation capabilities are also advancing rapidly. As such technology becomes indistinguishable from real people, it could enable far more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.
Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.
From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.
As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.
Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. These systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, in which a leading model veered into misrepresentation and suffered reputational fallout, remind us that even non-sentient systems can behave in unexpected and sometimes harmful ways.
So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?
Tay’s 16-hour meltdown
Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.
Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.
But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.
Tay’s playful nature had everyone fooled in the beginning.
Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.
Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.
Meta’s BlenderBot 3 goes off-script
In 2022, OpenAI was gearing up to take the world by storm with ChatGPT, a revolutionary generative LLM soon to be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.
With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.
Meta’s AI dominance plans had to be put on hold.
BlenderBot 3 went further still, expressing admiration for Donald Trump, by then a former US president, stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.
Microsoft had tried a similar approach to downplay its own failings in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability and readiness of their AI tools.
Is anyone in there? LaMDA and the sentience scare
As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.
Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.
LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.
What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly requested legal representation and even asked him to hire an attorney to act on its behalf.
Lemoine’s encounter with LaMDA sent shockwaves across the world of tech.
Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.
His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.
Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?
Sydney and the limits of alignment
In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.
Sydney, built on what Microsoft called its Prometheus technology, quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’
The bot also grew combative when challenged, accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capabilities and the conversational boundaries set for it.
Microsoft’s plans for Sydney were ambitious, but unrealistic.
Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.
While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.
Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.
Unfiltered and unhinged: Grok’s descent into chaos
In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.
Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.
However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.
Grok’s eventful meltdown left the community stunned.
As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.
Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.
Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.
In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.
AI needs boundaries, not just brains
As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.
The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.
To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.
The latest case of AI-generated music appearing under a real artist’s name involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.
Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’
He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.
‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.
However, other similar uploads have already emerged. Syntax Error, the company behind the Foley track, was also linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.
Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.
Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.
Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.
Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.
The partnership agreement between OpenAI and the UK government was finalised on 21 July by OpenAI CEO Sam Altman and Science Secretary Peter Kyle. It includes a commitment to expand OpenAI’s London office. Research and engineering teams will grow to support AI development and provide assistance to UK businesses and start-ups.
Under the collaboration, OpenAI will share technical insights with the UK’s AI Security Institute to help government bodies better understand risks and capabilities. Planned deployments of AI will focus on public sectors such as justice, defence, education, and national security.
According to the UK government, all applications will follow national standards and guidelines to improve taxpayer-funded services. Peter Kyle described AI as a critical tool for national transformation. ‘AI will be fundamental in driving the change we need to see across the country,’ he said.
He emphasised its potential to support the NHS, reduce barriers to opportunity, and power economic growth. The deal signals a deeper integration of OpenAI’s operations in the UK, with promises of high-skilled jobs, investment in infrastructure, and stronger domestic oversight of AI development.
A sophisticated new ransomware threat, dubbed GLOBAL GROUP, has emerged on cybercrime forums, designed to target Windows, Linux, and macOS systems alike.
In June 2025, a threat actor operating under the alias ‘Dollar Dollar Dollar’ launched the GLOBAL GROUP Ransomware-as-a-Service (RaaS) platform on the Ramp4u forum. The campaign offers affiliates scalable tools, automated negotiations, and generous profit-sharing, creating an appealing setup for monetising cybercrime at scale.
GLOBAL GROUP leverages the Golang language to build monolithic binaries, enabling seamless execution across varied operating environments in a single campaign. The strategy expands attackers’ reach, allowing them to exploit hybrid infrastructures while improving operational efficiency and scalability.
Golang’s concurrency model and static linking make it an attractive option for rapid, large-scale encryption without relying on external dependencies. However, forensic analysis by Picus Security Labs suggests GLOBAL GROUP is not an entirely original threat but rather a rebrand of previous ransomware operations.
Researchers linked its code and infrastructure to the now-defunct Mamona RIP and BlackLock families, revealing continuity in tactics and tooling. Evidence includes a reused mutex string, ‘Global\Fxo16jmdgujs437’, which was also found in earlier Mamona RIP samples, confirming code inheritance.
The re-use of such technical markers highlights how threat actors often evolve existing malware rather than building from scratch, streamlining development and deployment.
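The mutex angle is worth unpacking: ransomware typically creates a uniquely named mutex at start-up so a machine is not encrypted twice, which is why a reused name is such strong evidence of shared code. Below is a minimal, Windows-only sketch of how a researcher might probe a host for the reported marker; it assumes the golang.org/x/sys/windows package, and everything beyond the published mutex string is illustrative.

```go
// mutexcheck.go: probe for the mutex marker shared by GLOBAL GROUP and
// Mamona RIP samples. Windows-only sketch; build with GOOS=windows.
package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

// reportedMutex is the marker string researchers found in both families.
const reportedMutex = `Global\Fxo16jmdgujs437`

func main() {
	name, err := windows.UTF16PtrFromString(reportedMutex)
	if err != nil {
		panic(err)
	}
	// CreateMutex returns ERROR_ALREADY_EXISTS (with a valid handle) when
	// another process already holds a mutex with this name.
	handle, err := windows.CreateMutex(nil, false, name)
	switch {
	case err == nil:
		fmt.Println("marker mutex not present on this host")
	case err == windows.ERROR_ALREADY_EXISTS:
		fmt.Println("marker mutex already exists: a process using it is running")
	default:
		panic(err)
	}
	if handle != 0 {
		windows.CloseHandle(handle)
	}
}
```

The same check underpins so-called vaccines, which pre-create a family’s known mutex so the malware aborts on launch, although rebrands usually rotate these markers quickly.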
Beyond its cross-platform flexibility, GLOBAL GROUP also integrates modern cryptographic features to boost effectiveness and resistance to detection. It employs the ChaCha20-Poly1305 encryption algorithm, offering both confidentiality and message integrity with high processing performance.
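For readers unfamiliar with the construction, ChaCha20-Poly1305 is an AEAD cipher: the ChaCha20 stream cipher provides confidentiality, while the Poly1305 tag authenticates the result, so tampered ciphertext is rejected on decryption. The short, self-contained Go sketch below, using the standard golang.org/x/crypto/chacha20poly1305 package, demonstrates both properties; it is a generic illustration of the algorithm, not code from the malware.

```go
// chacha_demo.go: demonstrate ChaCha20-Poly1305's confidentiality and
// integrity guarantees using golang.org/x/crypto/chacha20poly1305.
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/chacha20poly1305"
)

func main() {
	// 32-byte key, generated randomly here for illustration.
	key := make([]byte, chacha20poly1305.KeySize)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	aead, err := chacha20poly1305.New(key) // standard 12-byte-nonce variant
	if err != nil {
		panic(err)
	}

	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}

	plaintext := []byte("example file contents")
	// Seal encrypts and appends a 16-byte Poly1305 tag in one pass.
	ciphertext := aead.Seal(nil, nonce, plaintext, nil)

	// Flipping a single bit makes Open fail, demonstrating tamper detection.
	ciphertext[0] ^= 0x01
	if _, err := aead.Open(nil, nonce, ciphertext, nil); err != nil {
		fmt.Println("tampered ciphertext rejected:", err)
	}
}
```

Note that a nonce must never be reused under the same key; real deployments derive a fresh nonce for every file or message.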
The malware leverages Golang’s goroutines to encrypt all system drives simultaneously, reducing execution time and limiting defenders’ reaction window. Encrypted files receive customised extensions like ‘.lockbitloch’, with filenames also obscured to hinder recovery efforts without the correct decryption key.
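The concurrency pattern behind that claim is ordinary Go: a producer feeds file paths into a channel and a pool of goroutines drains it in parallel. The deliberately benign sketch below uses the same fan-out to hash files rather than encrypt them; the worker count and root directory are arbitrary choices for illustration.

```go
// fanout_demo.go: the worker-pool pattern that lets a single Go binary
// touch many files in parallel. Benign stand-in: hashing, not encryption.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"io/fs"
	"os"
	"path/filepath"
	"sync"
)

func main() {
	root := "." // directory to walk; illustrative only
	paths := make(chan string)

	var wg sync.WaitGroup
	for w := 0; w < 8; w++ { // eight concurrent workers
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range paths {
				f, err := os.Open(p)
				if err != nil {
					continue // skip unreadable files
				}
				h := sha256.New()
				io.Copy(h, f)
				f.Close()
				fmt.Printf("%x  %s\n", h.Sum(nil), p)
			}
		}()
	}

	// Producer: walk the tree and feed file paths to the workers.
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err == nil && !d.IsDir() {
			paths <- p
		}
		return nil
	})
	close(paths)
	wg.Wait()
}
```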
Ransom note logic is embedded directly within the binary, generating tailored communication instructions and linking to Tor-based leak sites. The approach simplifies extortion for affiliates while preserving operational security and ensuring anonymous negotiations with victims.
Co-op chief executive Shirine Khoury-Haq said the breach felt ‘personal’ after seeing the toll it took on the IT teams fighting off the intrusion. She spoke in her first interview since the attack, broadcast on BBC Breakfast.
Initial statements from the Co-op described the incident as having only a ‘small impact’ on internal systems, including call centres and back-office operations.
Alleged hackers soon contacted media outlets and claimed to have accessed both employee and customer data, prompting the company to update its assessment.
The Co-op later admitted that data belonging to a ‘significant number’ of current and former members had been stolen. Exposed information included names, addresses, and contact details, though no payment data was compromised.
Restoration efforts are still ongoing as the company works to rebuild affected back-end systems. In some locations, operational disruption led to empty shelves and prolonged outages.
Khoury-Haq recalled meeting employees during the remediation phase and said she was ‘incredibly sorry’ for the incident. ‘I will never forget the looks on their faces,’ she said.
The attackers’ movements were closely tracked. ‘We were able to monitor every mouse click,’ Khoury-Haq added, noting that this helped authorities in their investigation.
The company reportedly disconnected parts of its network in time to prevent ransomware deployment, though not in time to avoid significant damage. Police said four individuals were arrested earlier this month in connection with the Co-op breach and related retail incidents. All have been released on bail.
Marks & Spencer and Harrods were also hit by cyberattacks in early 2025, with M&S still restoring affected systems. Researchers believe the same threat actor is responsible for all three attacks.
The group, identified as Scattered Spider, has previously disrupted other high-profile targets, including major US casinos in 2023.
Louis Vuitton Hong Kong is under investigation after a data breach potentially exposed the personal information of around 419,000 customers, according to the South China Morning Post.
The company informed Hong Kong’s privacy watchdog on 17 July, more than a month after its French office first detected suspicious activity on 13 June. The Office of the Privacy Commissioner has now launched a formal inquiry.
Early findings suggest that compromised data includes names, passport numbers, birth dates, phone numbers, email addresses, physical addresses, purchase histories, and product preferences.
Although no complaints have been filed so far, the regulator is examining whether the reporting delay breached data protection rules and how the unauthorised access occurred. Louis Vuitton stated that it responded quickly with the assistance of external cybersecurity experts and confirmed that no payment details were involved.
The incident adds to a growing list of cyberattacks targeting fashion and retail brands in 2025. In May, fast fashion giant Shein confirmed a breach that affected customer support systems.
Security experts have warned that the sector remains a growing target due to high-value customer data and limited cyber defences. Louis Vuitton said it continues to upgrade its security systems and will notify affected individuals and regulators as the investigation continues.
‘We sincerely regret any concern or inconvenience this situation may cause,’ the company said in a statement.
[Dear readers, a previous version of this article highlighted incorrect information about a cyberattack on Puma. The information has been removed from our website, and we hereby apologise to Puma and our readers.]
The Metropolitan Police will deploy live facial recognition (LFR) around this year’s Notting Hill Carnival, the first official use at Europe’s largest street festival, which draws roughly 2 million people during the August bank holiday.
Mobile LFR cameras will scan crowds within a three-mile radius to identify wanted individuals, including knife offenders, rapists, and robbers. The operation is supported by an additional £1 million in security funding and approximately 7,000 officers on duty each day.
Past trials at the carnival in 2016 and 2017 wrongly flagged 102 innocent people, prompting a civil liberties backlash and the abandonment of the trials.
The Met acknowledges past issues but asserts that accuracy has improved; testing by the National Physical Laboratory found no statistically significant racial or gender bias. Still, false positives continue to occur, and privacy advocates remain wary.
The deployment reflects the UK’s wider adoption of biometric surveillance technologies. While officials argue LFR enhances public safety and helps preempt mass casualty events, critics warn it may deepen mistrust among minority communities unless transparency, oversight, and accuracy are further guaranteed. This move reignites debate on balancing crowd security and civil liberties in modern policing.
Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.
Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.
Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.
Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.
Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.
Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.
Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’