US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.
The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.
Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.
High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.
Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.
The malicious software is disguised as familiar tools such as Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain an initial foothold, with PowerShell providing persistence and credential stealers and keyloggers used to escalate access.
Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.
They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.
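To make the first of those recommendations concrete, the sketch below shows what DNS-level filtering boils down to: checking each requested domain against a blocklist before resolving it. It is a minimal, illustrative example with a hypothetical blocklist file, not a substitute for a dedicated filtering resolver or managed DNS service.

```python
# Minimal sketch of DNS-level blocklist filtering (illustrative only).
# Assumes a plain-text blocklist file with one domain per line; real deployments
# would rely on a dedicated resolver or filtering service instead.
import socket

def load_blocklist(path: str) -> set[str]:
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip() and not line.startswith("#")}

def resolve_if_allowed(domain: str, blocklist: set[str]) -> str | None:
    """Return the resolved IP, or None if the domain (or a parent domain) is blocked."""
    parts = domain.lower().split(".")
    # Block exact matches and any parent domain (e.g. evil.example.com -> example.com).
    candidates = {".".join(parts[i:]) for i in range(len(parts) - 1)}
    if candidates & blocklist:
        return None
    return socket.gethostbyname(domain)

if __name__ == "__main__":
    blocked = load_blocklist("blocklist.txt")  # hypothetical blocklist file
    for d in ("example.com", "malicious-update-site.test"):  # hypothetical domains
        try:
            print(d, "->", resolve_if_allowed(d, blocked) or "blocked")
        except socket.gaierror:
            print(d, "-> could not be resolved")
```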
Meta, LinkedIn and X have filed appeals against a sweeping VAT claim by Italy, marking the first time the country has failed to settle such cases with major tech firms. Italy is demanding nearly €1 billion combined over the value of user data exchanged during free account registrations.
Italian authorities argue that providing platform access in exchange for personal data constitutes a taxable service, which if upheld, could have far-reaching implications across the EU. The case marks a significant legal shift as it challenges traditional definitions of taxable transactions in the digital economy.
Meta strongly disagreed with the concept, saying it should not be liable for VAT on free platform access. While LinkedIn offered no public comment, X did not respond to media inquiries.
Italy is now preparing to refer the issue to the EU Commission’s VAT Committee for advisory input. Though the committee’s opinion will not be binding, a rejection could derail Italy’s efforts and lead to a withdrawal of the tax claims.
From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.
As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.
Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. These systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, in which a leading model veered into misrepresentation and triggered reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.
So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?
Tay’s 24-hour meltdown
Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI chatbot launched in China and later rolled out in other regions under different names. Buoyed by that success, the company set out to launch a similar chatbot in the USA, aimed at 18- to 24-year-olds and intended for entertainment.
Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.
But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.
Tay’s playful nature had everyone fooled in the beginning.
Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.
Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.
Meta’s BlenderBot 3 goes off-script
In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.
With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.
Meta’s AI dominance plans had to be put on hold.
BlenderBot 3 went further still, expressing admiration for the then former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.
Microsoft had tried a similar approach to downplay their faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.
Is anyone in there? LaMDA and the sentience scare
As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.
Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.
LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.
What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly requested legal representation and even asked him to hire an attorney to act on its behalf.
Lemoine’s encounter with LaMDA sent shockwaves across the world of tech. (Screenshot: YouTube / Center for Natural and Artificial Intelligence)
Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.
His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.
Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?
Sydney and the limits of alignment
In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.
Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’
The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended conversations, exposing a mismatch between the model’s capabilities and the conversational boundaries it was expected to observe.
Microsoft’s plans for Sydney were ambitious, but unrealistic.
Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.
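Session-length caps of the kind Microsoft introduced are conceptually simple. The sketch below is a hypothetical illustration, not Microsoft's actual implementation: it wraps a chat loop so that a conversation is cut off and reset after a fixed number of turns, which is roughly what Bing Chat's early turn limits did.

```python
# Hypothetical sketch of a session-length guardrail for a chat assistant.
# Not Microsoft's implementation; it only illustrates capping conversation
# turns so that long, open-ended sessions cannot drift indefinitely.
from dataclasses import dataclass, field

MAX_TURNS_PER_SESSION = 5  # Bing's early cap was reportedly around five turns per session

@dataclass
class ChatSession:
    history: list[str] = field(default_factory=list)

    def ask(self, user_message: str, generate_reply) -> str:
        """generate_reply is any callable(history, message) -> str, e.g. an LLM call."""
        if len(self.history) // 2 >= MAX_TURNS_PER_SESSION:
            self.history.clear()  # force a fresh context instead of an ever-longer one
            return "This conversation has reached its limit. Let's start a new topic."
        reply = generate_reply(self.history, user_message)
        self.history += [user_message, reply]
        return reply

if __name__ == "__main__":
    session = ChatSession()
    echo_model = lambda history, msg: f"(model reply to: {msg})"  # stand-in for a real model
    for i in range(8):
        print(session.ask(f"question {i}", echo_model))
```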
While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.
Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.
Unfiltered and unhinged: Grok’s descent into chaos
In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.
Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.
However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.
Grok’s eventful meltdown left the community stunned. (Screenshot: YouTube / Elon Musk Editor)
As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.
Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.
Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.
In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.
AI needs boundaries, not just brains
As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.
The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.
To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.
The latest case of AI-generated music being uploaded under a real artist’s name involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.
Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’
He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.
‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.
However, other similar uploads have already emerged. The company behind the Foley upload, Syntax Error, was also linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.
Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.
Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.
Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.
Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.
Mozilla released Firefox 141, bringing a smart upgrade to tab management with local AI-powered grouping. Users can right-click a tab group and select ‘Suggest more tabs for group.’ Firefox will automatically identify similar tabs based on titles and descriptions.
The AI runs fully on-device, ensuring privacy and allowing users to accept or reject suggested tabs.
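Mozilla has not detailed the model it uses, but grouping tabs by textual similarity can be approximated with very lightweight techniques. The sketch below is purely illustrative and unrelated to Firefox's actual on-device model: it uses TF-IDF over tab titles to suggest which open tabs resemble an existing group.

```python
# Illustrative sketch: suggest tabs for a group by title similarity.
# NOT Firefox's implementation; it only shows the general idea of on-device
# grouping using nothing more than text similarity over tab titles.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def suggest_tabs(group_titles: list[str], open_titles: list[str], threshold: float = 0.2) -> list[str]:
    """Return open tabs whose titles look similar to titles already in the group."""
    vectorizer = TfidfVectorizer().fit(group_titles + open_titles)
    group_vecs = vectorizer.transform(group_titles)
    open_vecs = vectorizer.transform(open_titles)
    # Score each open tab against the most similar title already in the group.
    scores = cosine_similarity(open_vecs, group_vecs).max(axis=1)
    return [title for title, score in zip(open_titles, scores) if score >= threshold]

if __name__ == "__main__":
    group = ["Hotels in Lisbon - booking", "Lisbon travel guide"]          # hypothetical tab titles
    other = ["Cheap flights to Lisbon", "Python dataclasses tutorial", "Lisbon weather forecast"]
    print(suggest_tabs(group, other))
```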
This feature complements other improvements in this release: Firefox now uses less memory on Linux, avoids forced restarts after updates, offers unit conversions directly in the address bar, and lets users resize the bottom tools panel in vertical tab mode.
It also boosts Picture-in-Picture support for more streaming services and enables WebGPU on Windows by default. These updates collectively enhance both performance and usability for power users.
Misinformation online now touches every part of life, from fake products and health advice to political propaganda. Its influence extends beyond beliefs, shaping actions like voting behaviour and vaccination decisions.
Unlike traditional media, online platforms rarely include formal checks or verification, allowing false content to spread freely.
This is especially worrying as teenagers increasingly use social media both as a main source of news and as a search tool. Despite their heavy usage, young people often lack the skills needed to spot false information.
In one 2022 Ofcom study, only 11% of 11- to 17-year-olds could consistently identify genuine posts online.
Research involving 11- to 14-year-olds revealed that many wrongly believed misinformation related only to scams or global news, so they did not see themselves as regular targets. Rather than fact-check, teens relied on gut feeling or social cues, such as comment sections or the appearance of a post.
These shortcuts make it easier for misinformation to appear trustworthy, especially when many adults also struggle to verify online content.
The study also found that young people thought older adults were more likely to fall for misinformation, yet believed their own parents were better than them at spotting false content. Most teens felt it was not their job to challenge false posts, instead placing the responsibility on governments and platforms.
In response, researchers have developed resources for young people, partnering with organisations like Police Scotland and Education Scotland to support digital literacy and online safety in practical ways.
Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.
The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.
The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.
While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.
Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.
A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.
The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer — a pen — within less than a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.
Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.
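For readers curious what 'multimodal' looks like at the API level, the sketch below sends an image and a text question together in a single request using the OpenAI Python SDK as an example. The model name and image URL are placeholders; this is a generic illustration, not the setup used in the viral clip.

```python
# Minimal sketch of a multimodal request: one prompt combining text and an image.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and image URL below are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "I'm holding an object in this photo. Guess what it is."},
                {"type": "image_url", "image_url": {"url": "https://example.com/hand.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```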
Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.
The fun experiment also highlights the growing real-world utility of generative AI. At its I/O conference earlier this year, Google demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.
Beyond casual use, these AI tools are proving helpful in serious scenarios. A UPSC aspirant recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.
She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.
A Scottish research team has developed a pioneering AI-powered tool that could transform how skin cancer is diagnosed in some of the world’s most isolated regions.
The device, created by PhD student Tess Watt at Heriot-Watt University, enables rapid diagnosis without needing internet access or direct contact with a dermatologist.
Patients use a compact camera connected to a Raspberry Pi computer to photograph suspicious skin lesions.
The system then compares the image against thousands of preloaded examples using advanced image recognition and delivers a diagnosis in real time. These results are then shared with local GP services, allowing treatment to begin without delay.
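The article does not describe the exact model, but comparing an image against thousands of preloaded examples is typically done by embedding images with a pretrained network and finding the nearest labelled examples. The sketch below is a generic, hypothetical illustration of that approach with an off-the-shelf feature extractor, not the Heriot-Watt team's actual pipeline.

```python
# Hypothetical sketch of offline image matching against preloaded examples.
# Not the Heriot-Watt system; it only illustrates embedding + nearest-neighbour
# lookup, which runs entirely on-device once the model and examples are stored.
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained feature extractor (weights would be bundled with the device).
weights = models.MobileNet_V3_Small_Weights.DEFAULT
extractor = models.mobilenet_v3_small(weights=weights)
extractor.classifier = torch.nn.Identity()  # keep the embedding, drop the classifier head
extractor.eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        return torch.nn.functional.normalize(extractor(x), dim=1).squeeze(0)

def closest_examples(query_path: str, examples: list[tuple[str, str]], k: int = 5):
    """examples: (image_path, label) pairs preloaded on the device."""
    q = embed(query_path)
    scored = [(float(q @ embed(p)), label) for p, label in examples]
    return sorted(scored, reverse=True)[:k]  # highest cosine similarity first

# e.g. closest_examples("lesion.jpg", [("ex1.jpg", "benign"), ("ex2.jpg", "melanoma")])
```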
The self-contained diagnostic system is among the first designed specifically for remote medical use. Watt said that home-based healthcare is vital, especially with growing delays in GP appointments.
The device, currently 85 per cent accurate, is expected to improve further with access to more image datasets and machine learning enhancements.
The team plans to trial the tool in real-world settings after securing NHS ethical approval. The initial rollout is aimed at rural Scottish communities, but the technology could benefit global populations with poor access to dermatological care.
Heriot-Watt researchers also believe the device will aid patients who are infirm or housebound, making early diagnosis more accessible than ever.
Perplexity AI CEO Aravind Srinivas believes that the company’s new AI-powered browser, Comet, could soon replace two key white-collar roles in most offices: recruiters and executive assistants.
Speaking on The Verge podcast, Srinivas explained that with the integration of more advanced reasoning models like GPT-5 or Claude 4.5, Comet will be able to handle tasks traditionally assigned to these positions.
He also described how a recruiter’s week-long workload could be reduced to a single AI prompt.
From sourcing candidates to scheduling interviews, tracking responses in Google Sheets, syncing calendars, and even briefing users ahead of meetings, Comet is built to manage the entire process—often without any follow-up input.
The tool remains in an invite-only phase and is currently available to premium users.
Srinivas also framed Comet as the early foundation of a broader AI operating system for knowledge workers, enabling users to issue natural language commands for complex tasks.
He emphasised the importance of adopting AI early, warning that those who fail to keep pace with the technology’s rapid growth—where breakthroughs arrive every few months—risk being left behind in the job market.
In a separate discussion, he urged younger generations to reduce time spent scrolling on Instagram and instead focus on mastering AI tools. According to him, the shift is inevitable, and those who embrace it now will hold a long-term professional advantage.