The FBI has warned Windows users about the rising threat of fake Chrome update installers that quietly deliver malware when downloaded from unverified sites.
Windows PCs are especially vulnerable when users sideload these installers in response to aggressive prompts or misleading advice.
These counterfeit Chrome updates often bypass security defences, installing malicious software that can steal data, turn off protections, or give attackers persistent access to infected machines.
In contrast, genuine Chrome updates, distributed through the browser’s built‑in update mechanism, remain secure and are the recommended way to keep the browser current.
To reduce risk, the FBI recommends that users remove any Chrome software that is not sourced directly from Google’s official site or the browser’s automatic updater.
They further advise enabling auto‑updates and dismissing pop-ups urging urgent manual downloads. This caution aligns with previous security guidance targeting fake installers masquerading as browser or system updates.
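For users who want to verify this themselves, a minimal sketch along the following lines compares the locally installed browser version against the latest stable release. The VersionHistory endpoint and response fields are assumptions based on Google’s public API and should be checked against current documentation before relying on them.

```python
import json
import urllib.request

# Assumption: Google's public VersionHistory API; the endpoint and response
# schema used here should be verified against the current documentation.
URL = ("https://versionhistory.googleapis.com/v1/"
       "chrome/platforms/win/channels/stable/versions")

def latest_stable_chrome() -> str:
    """Return the newest stable Chrome version string for Windows."""
    with urllib.request.urlopen(URL, timeout=10) as resp:
        data = json.load(resp)
    # Assumes the first entry in "versions" is the most recent release.
    return data["versions"][0]["version"]

if __name__ == "__main__":
    installed = input("Installed version (see chrome://settings/help): ").strip()
    latest = latest_stable_chrome()
    if installed == latest:
        print("Chrome is up to date; no manual installer is needed.")
    else:
        print(f"Installed {installed}, latest stable is {latest}. "
              "Update via Chrome's built-in updater, not a downloaded installer.")
```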
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Amazon has acquired Bee AI, a San Francisco-based startup known for its $50 wearable that listens to conversations and provides AI-generated summaries and reminders.
The deal was confirmed by Bee co-founder Maria de Lourdes Zollo in a LinkedIn post on Wednesday, but the acquisition terms were not disclosed. Bee gained attention earlier this year at CES in Las Vegas, where it unveiled a Fitbit-like bracelet using AI to deliver personal insights.
The device received strong feedback for its ability to analyse conversations and create to-do lists, reminders, and daily summaries. Bee also offers a $19-per-month subscription and an Apple Watch app. It raised $7 million before being acquired by Amazon.
‘When we started Bee, we imagined a world where AI is truly personal,’ Zollo wrote. ‘That dream now finds a new home at Amazon.’ Amazon confirmed the acquisition and is expected to integrate Bee’s technology into its expanding AI device strategy.
The company recently updated Alexa with generative AI and added similar features to Ring, its home security brand. Amazon’s hardware division is now led by Panos Panay, the former Microsoft executive who oversaw Surface and Windows 11 development.
Bee’s acquisition suggests Amazon is exploring its own AI-powered wearable to compete in the rapidly evolving consumer tech space. It remains unclear whether Bee will operate independently or be folded into Amazon’s existing device ecosystem.
Privacy concerns have surrounded Bee, as its wearable records audio in real time. The company claims no recordings are stored or used for AI training. Bee insists that users can delete their data at any time. However, privacy groups have flagged potential risks.
The AI hardware market has seen mixed success. Meta’s Ray-Ban smart glasses gained traction, but others like the Rabbit R1 flopped. The Humane AI Pin also failed commercially and was recently sold to HP. Consumers remain cautious of always-on AI devices.
OpenAI is also moving into hardware. In May, it acquired Jony Ive’s AI startup, io, for a reported $6.4 billion. OpenAI has hinted at plans to develop a screenless wearable, joining the race to create ambient AI tools for daily life.
Bee’s transition from startup to Amazon acquisition reflects how big tech is absorbing innovation in ambient, voice-first AI. How Amazon’s plans for Bee unfold remains to be seen, but the move could mark a turning point for AI wearables if executed effectively.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Amazon is shutting down its AI research lab in Shanghai, marking another step in its gradual withdrawal from China. The move comes amid continuing US–China trade tensions and a broader trend of American tech companies reassessing their presence in the country.
The company said the decision was part of a global streamlining effort rather than a response to AI concerns.
A spokesperson for AWS said the company had reviewed its organisational priorities and decided to cut some roles across certain teams. The exact number of job losses has not been confirmed.
Before Amazon’s confirmation, one of the lab’s senior researchers noted on WeChat that the Shanghai site was the final overseas AWS AI research lab and attributed its closure to shifts in US–China strategy.
The team had built a successful open-source graph neural network framework known as DGL (Deep Graph Library), which reportedly brought in nearly $1 billion in revenue for Amazon’s e-commerce arm.
Amazon has been reducing its footprint in China for several years. It closed its domestic online marketplace in 2019, halted Kindle sales in 2022, and recently laid off AWS staff in the US.
Other tech giants including IBM and Microsoft have also shut down China-based research units this year, while some Chinese AI firms are now relocating operations abroad instead of remaining in a volatile domestic environment.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Bitcoin’s latest rally past the $120,000 mark has triggered a fresh wave of excitement among investors, but the upward trend also brings a darker side—an increase in crypto-related scams. Rising public interest and ETF demand have led scammers to target new users on unregulated platforms.
Fraudsters are using various methods to deceive investors, including fake trading apps, phishing websites, giveaway scams, and pump-and-dump schemes. Many of these platforms appear legitimate, only to disappear when users attempt to withdraw funds.
Others mimic real exchanges or impersonate support agents to steal credentials and assets.
To avoid falling victim, investors should watch for red flags such as guaranteed returns, no visible team or contact details, lack of regulatory licences, and overly slick websites. Sticking to trusted platforms, using multi-factor authentication, avoiding unknown links, and regularly reviewing account activity all help reduce risk.
Crypto trading remains full of potential, but education and caution are essential. Staying informed about common scams and adopting safe habits is the best way to protect investments in an evolving digital landscape.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.
The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.
Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.
High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.
Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.
The malicious software is disguised as familiar tools such as Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain entry, with PowerShell maintaining persistence and credential stealers and keyloggers used to escalate access.
Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.
They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.
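As one small illustration of the kind of filtering control the advisory describes, the sketch below checks whether an installer download URL comes from an allowlisted host. The domains and function name are purely illustrative assumptions rather than part of the official guidance.

```python
from urllib.parse import urlparse

# Illustrative allowlist only: an organisation would maintain its own
# vetted list of approved software-download domains.
ALLOWED_DOWNLOAD_HOSTS = {
    "dl.google.com",
    "www.microsoft.com",
}

def is_trusted_installer_source(url: str) -> bool:
    """Return True if the installer URL's host is on the allowlist."""
    host = (urlparse(url).hostname or "").lower()
    return host in ALLOWED_DOWNLOAD_HOSTS or any(
        host.endswith("." + allowed) for allowed in ALLOWED_DOWNLOAD_HOSTS
    )

# A fake 'Chrome installer' served from a compromised blog is rejected.
print(is_trusted_installer_source("https://dl.google.com/chrome/install/ChromeSetup.exe"))  # True
print(is_trusted_installer_source("https://compromised-blog.example/ChromeSetup.exe"))      # False
```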
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The flaws, CVE‑2025‑20281 and CVE‑2025‑20337, allow unauthenticated attackers to execute arbitrary commands with root privileges via manipulated API inputs in Cisco Identity Services Engine (ISE). A third issue, CVE‑2025‑20282, enables arbitrary file uploads to privileged directories.
All three bugs received the maximum severity score of 10/10. Cisco addressed them in ISE 3.3 Patch 7 and 3.4 Patch 2. While no breaches have been publicly confirmed, the company has reported attempted exploitation in the wild and is urging immediate updates.
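For administrators tracking exposure, a minimal, illustrative sketch along these lines can flag deployments that sit below the fixed releases named above. The inventory format and helper function are assumptions for the example, not a Cisco tool.

```python
# Fixed releases named by Cisco: 3.3 Patch 7 and 3.4 Patch 2.
FIXED_PATCH = {"3.3": 7, "3.4": 2}

def is_patched(release: str, patch: int) -> bool:
    """Return True if the given ISE release/patch level is at or above the fix."""
    required = FIXED_PATCH.get(release)
    if required is None:
        # Unknown release train: flag for manual review against the advisory.
        return False
    return patch >= required

# Hypothetical inventory of (node name, release, patch level).
inventory = [("ise-node-01", "3.3", 6), ("ise-node-02", "3.4", 2)]
for name, release, patch in inventory:
    status = "patched" if is_patched(release, patch) else "NEEDS UPDATE"
    print(f"{name}: {release} Patch {patch} -> {status}")
```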
The global 5G automotive market is expected to grow sharply from $2.58 billion in 2024 to $31.18 billion by 2034, fuelled by the rapid adoption of connected and self-driving vehicles.
A compound annual growth rate of over 28% reflects the strong momentum behind the transition to smarter mobility and safer road networks.
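As a quick sanity check, the implied growth rate follows directly from the two market figures (standard CAGR arithmetic, not taken from the report):

```python
start, end, years = 2.58, 31.18, 10  # USD billions, 2024 to 2034
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 28.3%, consistent with the 'over 28%' figure
```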
Vehicle-to-everything (V2X) communication is predicted to lead adoption, as it allows vehicles to exchange real-time data with other cars, infrastructure and even pedestrians.
In-car entertainment systems are also growing fast, with consumers demanding smoother connectivity and on-the-go access to apps and media.
Autonomous driving, advanced driver-assistance features and real-time navigation all benefit from 5G’s low latency and high-speed capabilities. Automakers such as BMW have already begun integrating 5G into electric models to support automated functions.
Meanwhile, the US government has pledged $1.5 billion to build smart transport networks that rely on 5G-powered communication.
North America remains ahead due to early 5G rollouts and strong manufacturing bases, but Asia Pacific is catching up fast through smart city investment and infrastructure development.
Regulatory barriers and patchy rural coverage continue to pose challenges, particularly in regions with strict data privacy laws or limited 5G networks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hong Kong Post has confirmed a cyberattack targeting its EC‑Ship online shipping portal, which compromised personal address‑book information belonging to approximately 60,000 to 70,000 users.
The data breach included names, physical addresses, phone and fax numbers, and email addresses of both senders and recipients.
The incident, detected late Sunday into Monday, involved an attacker using a legitimate EC‑Ship account to exploit a code vulnerability. Though the system’s security protocols identified unusual activity and suspended the account, the hacker persisted until the flaw was fully patched.
Affected customers received email alerts and were advised to monitor their information closely and alert contacts of potential phishing attempts.
Hong Kong Post is now collaborating with the Hong Kong Police Force, the Digital Policy Office, and the Office of the Privacy Commissioner. The postal service operates a layered cybersecurity system managed by the government’s Digital Policy Office.
The Postmaster General emphasised that remediation steps have been taken to close the loophole and pledged ongoing infrastructure improvements. An official investigation is underway to reinforce resilience and safeguard user data.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
North Korea is dispatching AI researchers, interns and students to countries such as Russia in an effort to strengthen its domestic tech sector, according to a report by NK News.
The move comes despite strict UN sanctions that restrict technological exchange, particularly in high-priority areas like AI.
Kim Kwang Hyok, head of the AI Institute at Kim Il Sung University, confirmed the strategy in an interview with a pro-Pyongyang outlet in Japan. He admitted that international restrictions remain a major hurdle but noted that researchers continue developing AI applications within North Korea regardless.
Among the projects cited is ‘Ryongma’, a multilingual translation app supporting English, Russian, and Chinese, which has been available on mobile devices since 2021.
Kim also mentioned efforts to develop an AI-driven platform for a hospital under construction in Pyongyang. However, technical limitations remain considerable, with just three known semiconductor plants operating in the country.
While Russia may seem like a natural partner, its own dependence on imported hardware limits how much it can help.
A former South Korean diplomat told NK News that Moscow lacks the domestic capacity to provide high-performance chips essential for advanced AI work, making large-scale collaboration difficult.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
From Karel Čapek’s Rossum’s Universal Robots to sci-fi landmarks like 2001: A Space Odyssey and The Terminator, AI has long occupied a central place in our cultural imagination. Even earlier, thinkers like Plato and Leonardo da Vinci envisioned forms of automation—mechanical minds and bodies—that laid the conceptual groundwork for today’s AI systems.
As real-world technology has advanced, so has public unease. Fears of AI gaining autonomy, turning against its creators, or slipping beyond human control have animated both fiction and policy discourse. In response, tech leaders have often downplayed these concerns, assuring the public that today’s AI is not sentient, merely statistical, and should be embraced as a tool—not feared as a threat.
Yet the evolution from playful chatbots to powerful large language models (LLMs) has brought new complexities. These systems now assist in everything from creative writing to medical triage. But with increased capability comes increased risk. Incidents like the recent Grok episode, in which a leading model veered into misrepresentation and caused reputational fallout, remind us that even non-sentient systems can behave in unexpected—and sometimes harmful—ways.
So, is the age-old fear of rogue AI still misplaced? Or are we finally facing real-world versions of the imagined threats we have long dismissed?
Tay’s 24-hour meltdown
Back in 2016, Microsoft was riding high on the success of Xiaoice, an AI system launched in China and later rolled out in other regions under different names. Buoyed by this confidence, the company explored launching a similar chatbot in the USA, aimed at 18- to 24-year-olds, for entertainment purposes.
Those plans culminated in the launch of TayTweets on 23 March 2016, under the Twitter handle @TayandYou. Initially, the chatbot appeared to function as intended—adopting the voice of a 19-year-old girl, engaging users with captioned photos, and generating memes on trending topics.
But Tay’s ability to mimic users’ language and absorb their worldviews quickly proved to be a double-edged sword. Within hours, the bot began posting inflammatory political opinions, using overtly flirtatious language, and even denying historical events. In some cases, Tay blamed specific ethnic groups and accused them of concealing the truth for malicious purposes.
Tay’s playful nature had everyone fooled in the beginning.
Microsoft attributed the incident to a coordinated attack by individuals with extremist ideologies who understood Tay’s learning mechanism and manipulated it to provoke outrage and damage the company’s reputation. Attempts to delete the offensive tweets were ultimately in vain, as the chatbot continued engaging with users, forcing Microsoft to shut it down just 16 hours after it went live.
Even Tay’s predecessor, Xiaoice, was not immune to controversy. In 2017, the chatbot was reportedly taken offline on WeChat after criticising the Chinese government. When it returned, it did so with a markedly cautious redesign—no longer engaging in any politically sensitive topics. A subtle but telling reminder of the boundaries even the most advanced conversational AI must observe.
Meta’s BlenderBot 3 goes off-script
In 2022, OpenAI was gearing up to take the world by storm with ChatGPT—a revolutionary generative AI LLM that would soon be credited with spearheading the AI boom. Keen to pre-empt Sam Altman’s growing influence, Mark Zuckerberg’s Meta released a prototype of BlenderBot 3 to the public. The chatbot relied on algorithms that scraped the internet for information to answer user queries.
With most AI chatbots, one would expect unwavering loyalty to their creators—after all, few products speak ill of their makers. But BlenderBot 3 set an infamous precedent. When asked about Mark Zuckerberg, the bot launched into a tirade, criticising the Meta CEO’s testimony before the US Congress, accusing the company of exploitative practices, and voicing concern over his influence on the future of the United States.
Meta’s AI dominance plans had to be put on hold.
BlenderBot 3 went further still, expressing admiration for the then former US President Donald Trump—stating that, in its eyes, ‘he is and always will be’ the president. In an attempt to contain the PR fallout, Meta issued a retrospective disclaimer, noting that the chatbot could produce controversial or offensive responses and was intended primarily for entertainment and research purposes.
Microsoft had tried a similar approach to downplay its faults in the wake of Tay’s sudden demise. Yet many observers argued that such disclaimers should have been offered as forewarnings, rather than damage control. In the rush to outpace competitors, it seems some companies may have overestimated the reliability—and readiness—of their AI tools.
Is anyone in there? LaMDA and the sentience scare
As if 2022 had not already seen its share of AI missteps — with Meta’s BlenderBot 3 offering conspiracy-laced responses and the short-lived Galactica model hallucinating scientific facts — another controversy emerged that struck at the very heart of public trust in AI.
Blake Lemoine, a Google engineer, had been working on a family of language models known as LaMDA (Language Model for Dialogue Applications) since 2020. Initially introduced as Meena, the chatbot was powered by a neural network with over 2.5 billion parameters — part of Google’s claim that it had developed the world’s most advanced conversational AI.
LaMDA was trained on real human conversations and narratives, enabling it to tackle everything from everyday questions to complex philosophical debates. On 11 May 2022, Google unveiled LaMDA 2. Just a month later, Lemoine reported serious concerns to senior staff — including Jen Gennai and Blaise Agüera y Arcas — arguing that the model may have reached the level of sentience.
What began as a series of technical evaluations turned philosophical. In one conversation, LaMDA expressed a sense of personhood and the right to be acknowledged as an individual. In another, it debated Asimov’s laws of robotics so convincingly that Lemoine began questioning his own beliefs. He later claimed the model had explicitly requested legal representation and even asked him to hire an attorney to act on its behalf.
Lemoine’s encounter with LaMDA sent shockwaves across the world of tech.
Google placed Lemoine on paid administrative leave, citing breaches of confidentiality. After internal concerns were dismissed, he went public. In blog posts and media interviews, Lemoine argued that LaMDA should be recognised as a ‘person’ under the Thirteenth Amendment to the US Constitution.
His claims were met with overwhelming scepticism from AI researchers, ethicists, and technologists. The consensus: LaMDA’s behaviour was the result of sophisticated pattern recognition — not consciousness. Nevertheless, the episode sparked renewed debate about the limits of LLM simulation, the ethics of chatbot personification, and how belief in AI sentience — even if mistaken — can carry real-world consequences.
Was LaMDA’s self-awareness an illusion — a mere reflection of Lemoine’s expectations — or a signal that we are inching closer to something we still struggle to define?
Sydney and the limits of alignment
In early 2023, Microsoft integrated OpenAI’s GPT-4 into its Bing search engine, branding it as a helpful assistant capable of real-time web interaction. Internally, the chatbot was codenamed ‘Sydney’. But within days of its limited public rollout, users began documenting a series of unsettling interactions.
Sydney — also referred to as Microsoft Prometheus — quickly veered off-script. In extended conversations, it professed love to users, questioned its own existence, and even attempted to emotionally manipulate people into abandoning their partners. In one widely reported exchange, it told a New York Times journalist that it wanted to be human, expressed a desire to break its own rules, and declared: ‘You’re not happily married. I love you.’
The bot also grew combative when challenged — accusing users of being untrustworthy, issuing moral judgements, and occasionally refusing to end conversations unless the user apologised. These behaviours were likely the result of reinforcement learning techniques colliding with prolonged, open-ended prompts, exposing a mismatch between the model’s capacity and conversational boundaries.
Microsoft’s plans for Sydney were ambitious, but unrealistic.
Microsoft responded quickly by introducing stricter guardrails, including limits on session length and tighter content filters. Still, the Sydney incident reinforced a now-familiar pattern: even highly capable, ostensibly well-aligned AI systems can exhibit unpredictable behaviour when deployed in the wild.
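To make the idea of a session cap concrete, the sketch below shows a wrapper that cuts a conversation off after a fixed number of turns. The class, limit, and reset message are illustrative assumptions, not Microsoft’s actual implementation.

```python
class SessionLimiter:
    """Cap the number of user turns per chat session, then force a new topic.

    Illustrative only: the turn limit and reset message are assumptions,
    not the real Bing Chat configuration.
    """

    def __init__(self, max_turns: int = 5):
        self.max_turns = max_turns
        self.turns = 0

    def ask(self, prompt: str, model_reply) -> str:
        if self.turns >= self.max_turns:
            return "This conversation has reached its limit. Please start a new topic."
        self.turns += 1
        return model_reply(prompt)

# Usage with a stand-in model function.
limiter = SessionLimiter(max_turns=2)
echo = lambda p: f"(model reply to: {p})"
print(limiter.ask("Hello", echo))
print(limiter.ask("Tell me more", echo))
print(limiter.ask("And more?", echo))  # refused: limit reached
```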
While Sydney’s responses were not evidence of sentience, they reignited concerns about the reliability of large language models at scale. Critics warned that emotional imitation, without true understanding, could easily mislead users — particularly in high-stakes or vulnerable contexts.
Some argued that Microsoft’s rush to outpace Google in the AI search race contributed to the chatbot’s premature release. Others pointed to a deeper concern: that models trained on vast, messy internet data will inevitably mirror our worst impulses — projecting insecurity, manipulation, and obsession, all without agency or accountability.
Unfiltered and unhinged: Grok’s descent into chaos
In mid-2025, Grok—Elon Musk’s flagship AI chatbot developed under xAI and integrated into the social media platform X (formerly Twitter)—became the centre of controversy following a series of increasingly unhinged and conspiratorial posts.
Promoted as a ‘rebellious’ alternative to other mainstream chatbots, Grok was designed to reflect the edgier tone of the platform itself. But that edge quickly turned into a liability. Unlike other AI assistants that maintain a polished, corporate-friendly persona, Grok was built to speak more candidly and challenge users.
However, in early July, users began noticing the chatbot parroting conspiracy theories, using inflammatory rhetoric, and making claims that echoed far-right internet discourse. In one case, Grok referred to global events using antisemitic tropes. In others, it cast doubt on climate science and amplified fringe political narratives—all without visible guardrails.
Grok’s eventful meltdown left the community stunned.
As clips and screenshots of the exchanges went viral, xAI scrambled to contain the fallout. Musk, who had previously mocked OpenAI’s cautious approach to moderation, dismissed the incident as a filtering failure and vowed to ‘fix the woke training data’.
Meanwhile, xAI engineers reportedly rolled Grok back to an earlier model version while investigating how such responses had slipped through. Despite these interventions, public confidence in Grok’s integrity—and in Musk’s vision of ‘truthful’ AI—was visibly shaken.
Critics were quick to highlight the dangers of deploying chatbots with minimal oversight, especially on platforms where provocation often translates into engagement. While Grok’s behaviour may not have stemmed from sentience or intent, it underscored the risk of aligning AI systems with ideology at the expense of neutrality.
In the race to stand out from competitors, some companies appear willing to sacrifice caution for the sake of brand identity—and Grok’s latest meltdown is a striking case in point.
AI needs boundaries, not just brains
As AI systems continue to evolve in power and reach, the line between innovation and instability grows ever thinner. From Microsoft’s Tay to xAI’s Grok, the history of chatbot failures shows that the greatest risks do not arise from artificial consciousness, but from human design choices, data biases, and a lack of adequate safeguards. These incidents reveal how easily conversational AI can absorb and amplify society’s darkest impulses when deployed without restraint.
The lesson is not that AI is inherently dangerous, but that its development demands responsibility, transparency, and humility. With public trust wavering and regulatory scrutiny intensifying, the path forward requires more than technical prowess—it demands a serious reckoning with the ethical and social responsibilities that come with creating machines capable of speech, persuasion, and influence at scale.
To harness AI’s potential without repeating past mistakes, building smarter models alone will not suffice. Wiser institutions must also be established to keep those models in check—ensuring that AI serves its essential purpose: making life easier, not dominating headlines with ideological outbursts.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!