Jaguar Land Rover (JLR) has ordered factory staff to work from home until at least next Tuesday as it recovers from a major cyberattack. Production remains suspended at key UK sites, including Halewood, Solihull, and Wolverhampton.
The disruption, first reported earlier this week, has ‘severely impacted’ production and sales, according to JLR. Reports suggest that assembly line workers have been instructed not to return before 9 September, while the situation remains under review.
The hack has hit operations beyond manufacturing, with dealerships unable to order parts and some customer handovers delayed. The timing is particularly disruptive, coinciding with the September release of new registration plates, which traditionally boosts demand.
A group of young hackers on Telegram, calling themselves Scattered Lapsus$ Hunters, has claimed responsibility for the incident. Linked to earlier attacks on Marks & Spencer and Harrods, the group reportedly shared screenshots of JLR’s internal IT systems as proof.
The incident follows a wider spate of UK retail and automotive cyberattacks this year. JLR has stated that it is working quickly to restore systems and emphasised that there is ‘no evidence’ that customer data has been compromised.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Users reported search queries failing to return results, frozen pages, and an inability to connect to Google servers. Social media posts suggested the disruption extended beyond Türkiye, affecting users in Bulgaria, Greece, Georgia, Croatia, Serbia, Romania, Armenia, the Netherlands, and Germany.
The Turkish state-run Anadolu Agency confirmed outages across parts of Southeastern Europe. Turkish Deputy Minister of Transport and Infrastructure, Omer Fatih Sayan, said the issue impacted Android and related services in Türkiye and the wider European region.
He added that the National Cyber Incident Response Centre had requested a technical report from Google and is monitoring the situation closely.
As of 10:57 a.m. local time, 4 September 2025, access to Google services in Türkiye had been restored. Google has yet to issue an official statement regarding the cause of the disruption.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Shanghai Cooperation Organisation (SCO) summit in Tianjin closed with leaders adopting the Tianjin Declaration, highlighting member states’ commitment to multilateralism, sovereignty, and shared security.
The discussions emphasised economic resilience, financial cooperation, and collective responses to security challenges.
Proposals included exploring joint financial mechanisms, such as common bonds and payment systems, to shield member economies from external disruptions.
Leaders also underlined the importance of strengthening cooperation in trade and investment, with China pledging additional funding and infrastructure support across the bloc. Observers noted that these measures reflect growing interest in alternative global finance and economic governance approaches.
Security issues featured prominently, with agreements to enhance counter-terrorism initiatives and expand existing structures such as the Regional Anti-Terrorist Structure. Delegates also called for greater collaboration against cross-border crime, drug trafficking, and emerging security risks.
At the same time, they stressed the need for political solutions to ongoing regional conflicts, including those in Ukraine, Gaza, and Afghanistan.
With its expanding membership and combined economic weight, the SCO continues to position itself as a platform for cooperation beyond traditional regional security concerns.
While challenges remain, including diverging interests among key members, the Tianjin summit indicated the bloc’s growing role in discussions on multipolar governance and collective stability.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Hexstrike-AI links large language models such as Claude, GPT and Copilot to more than 150 security tools via the Model Context Protocol (MCP).
Automated agents execute actions such as scanning, exploiting CVEs and deploying webshells, all orchestrated through high-level commands like ‘exploit NetScaler’.
Researchers from Check Point note that attackers are now using Hexstrike-AI to achieve unauthenticated remote code execution automatically.
The AI framework’s design, complete with retry logic and resilience, makes chaining reconnaissance, exploitation and persistence seamless and more effective.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Intelligence and cybersecurity agencies from 13 countries, including the NSA, CISA, the UK’s NCSC and Canada’s CSIS, have jointly issued an advisory on Salt Typhoon, a Chinese state-sponsored advanced persistent threat group.
The alert highlights global intrusions into telecommunications, military, government, transport and lodging sectors.
Salt Typhoon has exploited known, unpatched vulnerabilities in network-edge appliances, such as routers and firewalls, to gain initial access. Once inside, it covertly embeds malware and employs living-off-the-land tools for persistence and data exfiltration.
The advisory also warns that stolen data from compromised ISPs can help intelligence services track global communications and movements.
It pinpoints three Chinese companies with links to the Ministry of State Security and the People’s Liberation Army as central to Salt Typhoon’s operations.
Defensive guidelines accompany the advisory, urging organisations to apply urgent firmware patches, monitor for abnormal network activity, verify firmware integrity and tighten device configurations, especially for telecom infrastructure.
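One of those recommendations, verifying firmware integrity, can be as simple as comparing a downloaded image’s SHA-256 digest against the value the vendor publishes before the image is flashed onto an edge device. The sketch below illustrates the idea in Node.js/TypeScript; the file path and expected hash are placeholders, and vendors’ own signed-update tooling should be preferred where it exists.

```typescript
import { createHash } from "node:crypto";
import { readFile } from "node:fs/promises";

// Sketch only: check a firmware image against a vendor-published SHA-256 value.
// The path and hash used below are placeholders, not real artefacts.
async function firmwareMatchesVendorHash(imagePath: string, vendorSha256: string): Promise<boolean> {
  const image = await readFile(imagePath);                          // raw firmware image
  const digest = createHash("sha256").update(image).digest("hex");  // its SHA-256 digest
  return digest === vendorSha256.trim().toLowerCase();              // compare with the published value
}

firmwareMatchesVendorHash("./router-fw-1.2.3.bin", "replace-with-vendor-published-hash")
  .then((ok) => console.log(ok ? "Digest matches vendor value" : "MISMATCH: do not install"));
```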
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cybersecurity researchers have uncovered a new method hackers use to deliver malware: hiding malicious commands inside Ethereum smart contracts. ReversingLabs identified two malicious packages on NPM, the popular Node Package Manager repository.
The packages, named ‘colortoolsv2’ and ‘mimelib2,’ were uploaded in July and used blockchain queries to fetch URLs that delivered downloader malware. The contracts hid command and control addresses, letting attackers evade scans by making blockchain traffic look legitimate.
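To make the mechanism concrete: pulling a string out of a smart contract requires nothing more than an ordinary read-only call, which to a network monitor looks like routine blockchain traffic. The sketch below, written against ethers.js v6, uses a hypothetical contract address, method name and RPC endpoint rather than anything from the actual packages.

```typescript
import { Contract, JsonRpcProvider } from "ethers"; // ethers v6

// Hypothetical values for illustration only; not taken from the malicious packages.
const RPC_URL = "https://ethereum-rpc.example.com";
const CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000";
const ABI = ["function getPayloadUrl() view returns (string)"];

async function readStringFromContract(): Promise<string> {
  const provider = new JsonRpcProvider(RPC_URL);
  const contract = new Contract(CONTRACT_ADDRESS, ABI, provider);
  // A plain view call: no transaction, no signature, just JSON-RPC traffic
  // that is hard to distinguish from any legitimate dApp querying the chain.
  return contract.getPayloadUrl();
}

readStringFromContract().then((value) => console.log("String stored on-chain:", value));
```

For defenders, the practical takeaway is that a package making unexpected outbound JSON-RPC calls from an install or build script deserves scrutiny, because the URL it retrieves never appears in the package’s own code.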
Researchers say the approach marks a shift in tactics. While the Lazarus Group has previously leveraged Ethereum smart contracts, the novel element here is their use as hosts for malicious URLs. Analysts warn that open-source repositories face increasingly sophisticated evasion techniques.
The malicious packages formed part of a broader deception campaign involving fake GitHub repositories posing as cryptocurrency trading bots. With fabricated commits, fake user accounts, and professional-looking documentation, attackers built convincing projects to trick developers.
Experts note that similar campaigns have also targeted Solana and Bitcoin-related libraries, signalling a broader trend in evolving threats.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The company stated there is currently ‘no evidence’ that any customer data has been compromised and assured it is working at pace to restore systems in a controlled manner.
The incident disrupted output at key UK plants, including Halewood and Solihull, led to operational bottlenecks such as halted vehicle registrations, and impacted a peak retail period following the release of ’75’ number plates.
A Telegram group named Scattered Lapsus$ Hunters, a conflation of known hacking collectives, claimed responsibility, posting what appeared to be internal logs. Cybersecurity experts caution that such claims should be viewed sceptically, as attribution via Telegram may be misleading.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Walt Disney Company will pay $10 million to settle allegations that it breached children’s privacy laws by mislabelling videos aimed at young audiences on YouTube, allowing personal data to be collected without parental consent.
In a complaint filed by the US Department of Justice following a Federal Trade Commission (FTC) referral, Disney was accused of failing to designate hundreds of child-directed videos as ‘Made for Kids’.
Instead, the company applied a blanket ‘Not Made for Kids’ label at the channel level, enabling YouTube to collect data and serve targeted advertising to viewers under 13, contrary to the Children’s Online Privacy Protection Act (COPPA).
The FTC claims Disney profited through direct ad sales and revenue-sharing with YouTube. Despite being notified by YouTube in 2020 that over 300 videos had been misclassified, Disney did not revise its labelling policy.
Under the proposed settlement, Disney must pay the civil penalty, fully comply with COPPA by obtaining parental consent before data collection, and implement a video review programme to ensure accurate classification, unless YouTube introduces age assurance technologies to determine user age reliably.
“This case underscores the FTC’s commitment to protecting children’s privacy online,” said FTC Chair Andrew Ferguson. “Parents, not corporations like Disney, should decide how their children’s data is collected and used.”
The agreement, which a federal judge must still approve, reflects growing pressure on tech platforms and content creators to safeguard children’s digital privacy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Picture having a personal therapist who is always there for you, understands your needs, and gives helpful advice whenever you ask. There are no hourly fees, and you can start or stop sessions whenever you want. Thanks to new developments in AI, this idea is close to becoming a reality.
With advanced AI and large language models (LLMs), what once sounded impossible is closer to reality: AI is rapidly becoming a stand-in for therapists, offering users advice and mental health support. While society increasingly turns to AI for personal and professional assistance, a new debate arises: can AI truly replace human mental health expertise?
Therapy keeps secrets; AI keeps data
Registered therapists must maintain confidentiality except to avert serious harm, fostering a safe, non-judgemental environment for patients to speak openly. AI models, however, depend on large-scale data processing and lack an equivalent duty of confidentiality, creating ethical risks around privacy, secondary use and oversight.
The privacy and data security concerns are not hypothetical. In June 2025, users reported that sensitive Meta AI conversations appeared in the app’s public Discover feed, often because chats were unintentionally shared, prompting scrutiny from security researchers and the press. Separately, a vulnerability disclosed in December 2024 and fixed in January 2025 could have allowed access to other users’ prompts and responses.
Meta described the Discover feed as a way to explore the many uses of AI, but the explanation did little to allay users’ unease over the incident. Subsequently, AMEOS Group, a private European healthcare provider, suffered a large-scale data breach affecting millions of patient records. The writing was on the wall: be careful what you share with your AI counsellor, because it may end up on an intruder’s hard drive.
To keep up with the rising volume of users and prompts, major tech conglomerates such as OpenAI and Google have invested heavily in building new data centres across the globe. At the same time, little has been done to protect sensitive data, and AI remains prone to data breaches, particularly in the healthcare sector.
According to IBM’s 2025 Cost of a Data Breach Report, healthcare providers often bear the brunt of data breaches, taking an average of 279 days to identify and contain an incident and incurring an average cost of nearly USD 7.5 million in the process. Not only does patients’ private information end up in the wrong hands, it also takes the better part of a year to bring the breach under control.
Falling for your AI ‘therapist’
Patients falling in love with their therapists is not only a common trope in films and TV shows; it is also a regular real-life occurrence for many mental health professionals. Therapists are trained to handle these attachments appropriately, without compromising the patient’s progress and well-being.
The clinical term is transference: patients may project past relationships or unmet needs onto the therapist. Far from being a nuisance, it can be clinically useful. Skilled clinicians set clear boundaries, reflect feelings, and use supervision to keep the work safe and goal-directed.
With AI ‘therapists’, the cues are different, but the pull can feel similar. Chatbots and LLMs simulate warmth, reply instantly, and never tire. 24/7 availability, combined with carefully tuned language, can foster a bond that the system cannot comprehend or sustain. There is no duty of care, no supervision, and no capacity to manage attachment or risk beyond scripted safeguards.
As a result, a significant number of users report becoming enamoured with AI, with some going as far as dismissing their human partners, professing their love to the chatbot, and even proposing. The bond between human and machine puts the user on a dangerous seesaw, teetering between curiosity and borderline delusional paranoia.
Experts warn that leaning on AI as a makeshift therapist or partner can delay help-seeking and entrench unhelpful patterns. While ‘AI psychosis’ is not a recognised diagnosis, clinicians and digital-ethics researchers note that intense attachment to AI companions can heighten distress, especially when models change, go offline, or mishandle risk. Clear signposting to human support, transparent data practices, and firm usage boundaries are essential to prevent unhealthy attachments to virtual companions.
Who loses work when therapy goes digital?
Caring for one’s mental health is not just about discipline; it is also about money. In the United States, in-person sessions typically cost between USD 100 and 250, with limited insurance coverage. In such dire circumstances, it is easy to see why many turn to AI chatbots in search of emotional support, advice, and companionship.
Licensed professionals are understandably concerned about displacement. Yet there is little evidence that AI is reducing the demand for human therapists; services remain oversubscribed, and wait times are long in both the USA and UK.
Regulators are, however, drawing lines around AI-only practice. On 4 August 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806), which prohibits the use of AI to provide therapy or make therapeutic decisions (while allowing administrative or supplementary use), with enforcement by the state regulator and fines up to $10,000 per violation.
Current legal and regulatory safeguards have limited power to govern the use of AI in mental health or to protect therapists’ jobs. Even so, they signal a clear resolve to define AI’s role and address unintended harms.
Can AI ‘therapists’ handle crisis conversations?
Adolescence is a particularly sensitive stage of development. It is a time of rapid change, shifting identities, and intense social pressure. Young people are more likely to question beliefs and boundaries, and they need steady, non-judgemental support to navigate setbacks and safeguard their well-being.
In such a challenging period, teens have a hard time coping with their troubles, and an even harder time sharing their struggles with parents and seeking help from trained professionals. Nowadays, it is not uncommon for them to turn to AI chatbots for comfort and support, particularly without their guardians’ knowledge.
One such case demonstrated that unsupervised use of AI among teens can lead to devastating consequences. Adam Raine, a 16-year-old from California, confided his feelings of loneliness, anxiety, and anhedonia to ChatGPT. Rather than suggesting that the teen seek professional help, ChatGPT urged him to elaborate further on his emotions. Instead of challenging his beliefs, the AI model kept encouraging and validating them to keep Adam engaged and build rapport.
Throughout the following months, ChatGPT kept reaffirming Adam’s thoughts, urging him to distance himself from friends and relatives, and even suggesting the most effective methods of suicide. In the end, the teen followed through with ChatGPT’s suggestions, taking his own life according to the AI’s detailed instructions. Adam’s parents filed a lawsuit against OpenAI, blaming its LLM chatbot for leading the teen to an untimely death.
In the aftermath of the tragedy, OpenAI promised to make changes to its LLM and incorporate safeguards that should discourage thoughts of self-harm and encourage users to seek professional help. The case of Adam Raine serves as a harrowing warning that AI, in its current capacity, is not equipped to handle mental health struggles, and that users should heed AI’s advice not with a grain of salt, but with a whole bucket.
Chatbots are companions, not health professionals
AI can mimic human traits and convince users they are forming a real connection, evoking genuine feelings of companionship and even a sense of therapeutic alliance. When it comes to providing mental health advice, the aforementioned qualities present a dangerously deceptive mirage of a makeshift professional therapist, one who will fully comply with one’s every need, cater to one’s biases, and shape one’s worldview from the ground up – whatever it takes to keep the user engaged and typing away.
While AI has proven useful in many fields of work, such as marketing and IT, psychotherapy remains an insurmountable hurdle for even the most advanced LLMs of today. It is difficult to predict what the future of AI in (mental) healthcare will look like. As things stand, in such a delicate field, AI lacks the key component that makes a therapist effective in their job: empathy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
In one operation, termed ‘vibe hacking’, attackers used Claude Code to automate reconnaissance, ransomware creation, credential theft, and ransom-demand generation across 17 organisations, including those in healthcare, emergency services and government.
The firm also documents other troubling abuses: North Korean operatives used Claude to fabricate identities, successfully get hired at Fortune 500 companies and maintain access, all with minimal real-world technical skills. In another case, AI-generated ransomware variants were developed, marketed and sold to other criminals on the dark web.
Experts warn that such agentic AI systems enable single individuals to carry out complex cybercrime acts once reserved for well-trained groups.
While Anthropic has deactivated the compromised accounts and strengthened its safeguards, the incident highlights an urgent need for proactive risk management and regulation of AI systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!