Jaguar Land Rover (JLR) has confirmed that data was affected in a cyberattack that has kept its UK factories idle for more than a week. The company stated that it is contacting anyone whose data was involved, although it did not clarify whether the breach affected customers, suppliers, or internal systems.
JLR reported the incident to the Information Commissioner’s Office and immediately shut down IT systems to limit damage. Production at its Midlands and Merseyside sites has been halted until at least Thursday, with staff instructed not to return before next week.
The disruption has also hit suppliers and retailers, with garages struggling to order spare parts and dealers facing delays registering vehicles. JLR said it is working around the clock to restore operations in a safe and controlled way, though the process is complex.
Responsibility for the hack has been claimed by Scattered Lapsus$ Hunters, a group linked to previous attacks on Marks & Spencer and the Co-op in the UK, and on Las Vegas casinos in the US. The hackers posted alleged screenshots from JLR’s internal systems on Telegram last week.
Cybersecurity experts say the group’s claim that ransomware was deployed raises questions, since the group appears to have severed ties with Russian ransomware gangs. Analysts suggest the hackers may only have stolen data, or may be building their own ransomware infrastructure.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
UK-based chip designer Arm has introduced Lumex, a next-generation chip design built to power AI on smartphones, smartwatches, and next-generation PCs.
Arm, whose processor architecture underpins devices from Apple and Nvidia, described Lumex as its most advanced platform yet for real-time AI assistance, communication and on-device personalisation.
Arm’s senior vice-president Chris Bergey said consumers now expect instant, private, seamless AI features instead of gradual improvements.
The Lumex platform combines performance, privacy, and efficiency, allowing partners to use the design as delivered or configure it to their own requirements.
Lumex is part of a broader naming structure that also includes Neoverse for infrastructure, Niva for PCs, Zena for automotive, and Orbis for the internet of things.
Meanwhile, Arm is reportedly preparing to manufacture its own chips, having recruited Amazon’s Rami Sinno, who helped build the Trainium and Inferentia chips, to strengthen its in-house ambitions.
These moves mark a significant moment for Arm, as the company seeks to expand its influence in the AI hardware space and reduce reliance on licensing alone.
With the rise of generative AI, the push for high-performance chips designed around on-device intelligence is becoming central to the future of mobile technology.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Abu Dhabi hosted a Weather Summit exploring how AI could transform forecasting and support operations such as cloud seeding. Experts emphasised that AI enhances analysis but must complement, rather than replace, human judgement.
Discussions focused on Earth-system forecasting using satellite datasets, IoT devices, and geospatial systems. Quality, interoperability, and equitable access to weather services were highlighted as pressing priorities.
Speakers raised questions about the reliability and transparency of AI, and about how public and private sector incentives shape its use. Collaboration across sectors was seen as crucial to strengthening trust and global cooperation in meteorology.
WMO President Dr Abdulla Al Mandous said forecasting has evolved from traditional observation to supercomputing and AI. He argued that integrating models with AI could deliver more precise local forecasts for agriculture, aviation, and disaster management.
The summit brought together leaders from UN bodies, research institutions, and tech firms, including Google, Microsoft, and NVIDIA. Attendees highlighted the need to bridge data gaps, particularly in developing regions, to confront rising climate challenges.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Sam Altman, X enthusiast and Reddit shareholder, has expressed doubts over whether social media content can still be distinguished from bot activity. His remarks followed an influx of praise for OpenAI Codex on Reddit, where users questioned whether such posts were genuine.
Altman noted that humans are increasingly adopting quirks of AI-generated language, blurring the line between authentic and synthetic speech. He also pointed to factors such as social media optimisation for engagement and astroturfing campaigns, which amplify suspicions of fakery.
The comments follow the backlash against OpenAI over the rollout of GPT-5, which saw Reddit communities shift from celebratory to critical. Altman acknowledged flaws during a Reddit AMA, but the fallout left lasting scepticism and dampened enthusiasm among AI users.
Underlying this debate is the wider reality that bots dominate much of the online environment. Imperva estimates that more than half of 2024’s internet traffic was non-human, while X’s own Grok chatbot admitted to hundreds of millions of bots on the platform.
Some observers suggest Altman’s comments may foreshadow an OpenAI-backed social media venture. Whether such a project could avoid the same bot-related challenges remains uncertain, with research suggesting that even bot-only networks eventually create echo chambers of their own.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A malvertising campaign is targeting IT workers in the EU with fake GitHub Desktop installers, according to Arctic Wolf. The goal is to steal credentials, deploy ransomware, and infiltrate sensitive systems. The operation has reportedly been active for over six months.
Attackers used malicious Google Ads that redirected users to doctored GitHub repositories. Modified README files mimicked genuine download pages but linked to a lookalike domain. macOS users received the AMOS Stealer, while Windows victims downloaded bloated installers hiding malware.
The Windows malware evaded detection using GPU-based checks, refusing to run in sandboxes that lacked real graphics drivers. On genuine machines, it copied itself to %APPDATA%, sought elevated privileges, and altered Defender settings. Analysts dubbed the technique GPUGate.
The payload persisted by creating privileged tasks and sideloading malicious DLLs into legitimate executables. Its modular system could download extra malware tailored to each victim. The campaign was geo-fenced to EU targets and relied on redundant command servers.
Researchers warn that IT staff are prime targets due to their access to codebases and credentials. With the campaign still active, Arctic Wolf has published indicators of compromise, Yara rules, and security advice to mitigate the GPUGate threat.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Ransomware groups have evolved into billion-dollar operations targeting critical infrastructure across multiple countries, employing increasingly sophisticated extortion schemes. Between 2020 and 2022, more than 865 documented attacks were recorded across Australia, Canada, New Zealand, and the UK.
Criminals have escalated from simple encryption to double and triple extortion, threatening to leak stolen data as added leverage. Attack vectors include phishing, botnets, and unpatched flaws. Once inside, attackers use stealthy tools to persist and spread.
BlackSuit, formerly known as Conti, led with 141 attacks, followed by LockBit’s 129, according to data from the Australian Institute of Criminology. Ransomware-as-a-Service groups achieved higher volumes by separating developers from the affiliates who handle breaches and negotiations.
Industrial targets bore the brunt, with 239 attacks on manufacturing and building-products firms. The consumer goods, real estate, financial services, and technology sectors also featured prominently. Analysts note that industrial firms are often pressured into quick ransom payments to restore production.
Experts warn that today’s ransomware combines military-grade encryption with advanced reconnaissance and backup targeting, raising the stakes for defenders. The scale of activity underscores how resilient these groups remain, adapting rapidly to law enforcement crackdowns and shifting market opportunities.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Universal Internet connectivity by 2030 could cost up to $2.8 trillion, according to the International Telecommunication Union (ITU) and Saudi Arabia’s Communications, Space, and Technology (CST) Commission. The blueprint urges global cooperation to connect the one-third of humanity still offline.
The largest share, up to $1.7 trillion, would be allocated to expanding broadband through fibre, wireless, and satellite networks. Nearly $1 trillion is needed for affordability measures, alongside $152 billion for digital skills programmes.
ITU Secretary-General Doreen Bogdan-Martin emphasised that connectivity is essential for access to education, employment, and vital services. She noted the stark divide between high-income countries, where 93% of people are online, and low-income states, where only 27% use the Internet.
The study shows costs have risen fivefold since ITU’s 2020 Connecting Humanity report, reflecting both higher demand and widening divides. Haytham Al-Ohali from Saudi Arabia said the figures underscore the urgency of investment and knowledge sharing to achieve meaningful connectivity.
The report recommends new business models and stronger cooperation between governments, industry, and civil society. Proposed measures include using schools as Internet gateways, boosting Africa’s energy infrastructure, and improving localised data collection to accelerate digital inclusion.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore different uses of AI, but that explanation did little to ease unease over the incident. Soon after, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech conglomerates such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to recover and incurring costs of nearly USD 7.5 million per incident. Not only does patients’ private information end up in the wrong hands, but recovery also takes considerable time.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately and without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going as far as dismissing their human partners, professing their love to the chatbot, and even proposing. The bond between human and machine places the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in many fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) health care will look like. As things stand, in such a delicate field of healthcare, AI lacks a key component that makes a therapist effective: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Google is bringing its Gemini AI to Google Home devices. The move follows months of user complaints about Google Home’s performance, including connectivity issues and the assistant’s failure to recognise basic commands.
With Gemini’s superior ability to understand natural language, the upgrade is expected to significantly improve how users interact with their smart devices. Home devices should be better able to execute complex commands involving multiple actions, such as dimming some lights while leaving others on.
The update will also introduce ‘Gemini Live’ to compatible devices, a feature allowing natural, back-and-forth conversations with the AI chatbot.
The Gemini for Google Home upgrade will initially be rolled out on an early access basis. It will be available in free and paid tiers, suggesting that some more advanced features may be locked behind a subscription.
The update is anticipated to make Google Home and Nest devices more reliable and better able to handle complex requests.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
WhatsApp has disclosed a hacking attempt that combined flaws in its app with a vulnerability in Apple’s operating system. The company has since fixed the issues.
The exploit, tracked as CVE-2025-55177 in WhatsApp and CVE-2025-43300 in iOS, allowed attackers to hijack devices via malicious links. Fewer than 200 users worldwide are believed to have been affected.
Amnesty International reported that some victims appeared to be members of civic organisations. Its Security Lab is collecting forensic data and warned that iPhone and Android users were impacted.
WhatsApp credited its security team for identifying the loopholes, describing the operation as highly advanced but narrowly targeted. The company also suggested that other apps could have been hit in the same campaign.
The disclosure highlights ongoing risks to secure messaging platforms, even those with end-to-end encryption. Experts stress that keeping apps and operating systems up to date remains essential to reducing exposure to sophisticated exploits.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!