Cyberattack forces Jaguar Land Rover to halt production

Production at Jaguar Land Rover (JLR) is to remain halted until at least next week after a cyberattack crippled the carmaker’s operations. Disruption is expected to last through September and possibly into October.

The UK’s largest car manufacturer, owned by Tata, has suspended activity at its plants in Halewood, Solihull, and Wolverhampton. Thousands of staff have been told to stay home on full pay, ‘banking’ hours that are to be recovered later.

Suppliers, including Evtec, WHS Plastics, SurTec, and OPmobility, which employ more than 6,000 people in the UK, have also paused their operations. The Sunday Times reported speculation that the outage could drag on for most of September.

While there is no evidence of a data breach, JLR has notified the Information Commissioner’s Office about potential risks. Dozens of internal systems, including spare parts databases, remain offline, forcing dealerships to revert to manual processes.

Hackers linked to the groups Scattered Spider, Lapsus$, and ShinyHunters have claimed responsibility for the incident. JLR stated that it was collaborating with cybersecurity experts and law enforcement to restore systems in a controlled and safe manner.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conti and LockBit dominate ransomware landscape with record attacks

Ransomware groups have evolved into billion-dollar operations targeting critical infrastructure across multiple countries, employing increasingly sophisticated extortion schemes. Between 2020 and 2022, more than 865 documented attacks were recorded across Australia, Canada, New Zealand, and the UK.

Criminals have escalated from simple encryption to double and triple extortion, threatening to leak stolen data as added leverage. Attack vectors include phishing, botnets, and unpatched flaws. Once inside, attackers use stealthy tools to persist and spread.

BlackSuit, formerly known as Conti, led with 141 attacks, followed by LockBit’s 129, according to data from the Australian Institute of Criminology. Ransomware-as-a-Service groups achieved higher attack volumes by separating the developers who build the malware from the affiliates who handle breaches and negotiations.

Industrial targets bore the brunt, with 239 attacks on manufacturing and building products. The consumer goods, real estate, financial services, and technology sectors also featured prominently. Analysts note that industrial firms are often pressured into quick ransom payments to restore production.

Experts warn that today’s ransomware combines military-grade encryption with advanced reconnaissance and backup targeting, raising the stakes for defenders. The scale of activity underscores how resilient these groups remain, adapting rapidly to law enforcement crackdowns and shifting market opportunities.

Apple sued over use of pirated books in AI training

Apple is facing a new copyright lawsuit after two authors alleged the company used pirated copies of their books to train its OpenELM AI models. Filed in Northern California, the case claims Apple used the authors’ works without permission, payment, or credit.

The lawsuit seeks class-action status, adding Apple to a growing list of technology firms accused of misusing copyrighted works for AI training.

The action comes amid a wider legal storm engulfing AI companies. Anthropic recently agreed to a $1.5 billion settlement with authors who alleged its Claude chatbot was trained on their works without authorisation, in what lawyers called the most significant copyright recovery in history.

Microsoft, Meta, and OpenAI also face similar lawsuits over alleged reliance on unlicensed material in their datasets.

Analysts warn Apple could face heightened scrutiny given its reputation as a privacy-focused company. Any finding that its AI models were trained on pirated material could cause serious reputational harm alongside potential financial penalties.

The case also underscores the broader unresolved debate over whether AI training constitutes fair use or unlawful exploitation of creative works.

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that such cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

Phishing scams surge with record losses in August

ScamSniffer has reported a sharp rise in phishing scams during August, with losses climbing to $12.17 million, a 72% increase from July. The figure marks the highest monthly losses this year and came alongside 15,230 victims, the highest monthly total of the year.

The spike was driven mainly by EIP-7702 batch signature scams, which accounted for nearly half of the stolen funds. One victim lost $3.08 million in a single incident, while two others lost $1.54 million and $1 million, respectively.

Smaller but still significant losses also occurred, including users losing $235,977 and $66,000 in scams disguised as Uniswap swaps.

EIP-7702, introduced with Ethereum’s Pectra upgrade, allows externally owned accounts to act temporarily like smart contracts. While intended to improve user experience, it has opened the door to new phishing exploits.
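The mechanism can be illustrated with a minimal sketch. An EIP-7702 delegation is authorised by signing a tuple that names a contract whose code the account will temporarily run; the field names and the safety check below are simplified illustrations, not the exact protocol encoding, and the addresses are made up:

```python
from dataclasses import dataclass

@dataclass
class Authorization:
    """Simplified EIP-7702-style authorization tuple (illustrative)."""
    chain_id: int   # chain the delegation applies to
    address: str    # contract whose code the EOA will temporarily execute
    nonce: int      # account nonce, prevents replay of old authorizations

def is_dangerous(auth: Authorization, trusted: set[str]) -> bool:
    """Flag a delegation to an unknown contract: signing it can hand
    control of the wallet's funds to that contract's code."""
    return auth.address.lower() not in {a.lower() for a in trusted}

# A phishing page asks the user to sign a delegation to an attacker contract:
auth = Authorization(chain_id=1, address="0xAttackerContract", nonce=7)
print(is_dangerous(auth, trusted={"0xKnownWalletModule"}))  # True
```

This is why a single phished signature can be so costly: unlike a token approval scoped to one asset, a delegation hands the signed-for contract control over the account itself.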

Security experts warn that attackers increasingly use automated sweeper attacks to drain compromised wallets.

Beyond EIP-7702, traditional phishing methods remain a problem. ScamSniffer noted a rise in address poisoning and malicious ads on platforms such as Google and Bing. One user lost $636,559 after copying a tainted address, while two more lost $500,000 and $19,000 in similar schemes.
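Address poisoning works because wallets often display only the first and last few characters of an address, so a look-alike that matches at both ends passes a casual glance. A heuristic check can be sketched as follows; the affix length and example addresses are illustrative assumptions, not any wallet's actual implementation:

```python
def looks_poisoned(candidate: str, known: str, affix: int = 4) -> bool:
    """Heuristic: flag an address that matches a known address at both
    ends but differs in the middle - the pattern address-poisoning
    attacks exploit, since UIs often display only the affixes."""
    if candidate == known:
        return False
    return (candidate[:2 + affix] == known[:2 + affix]   # "0x" + leading hex
            and candidate[-affix:] == known[-affix:])    # trailing hex

legit = "0x1a2b3c4d5e6f70819293a4b5c6d7e8f901234567"
spoof = "0x1a2bdeadbeefdeadbeefdeadbeefdeadbeef4567"  # same ends, fake middle
print(looks_poisoned(spoof, legit))  # True
```

In practice the safer habit is simpler still: paste the full address from a saved address book rather than copying it out of transaction history, where poisoned entries are planted.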

Google hit with $3.5 billion EU fine

The European Commission fined Google nearly $3.5 billion after ruling that the company had abused its dominance in digital advertising. Regulators found that Google unfairly favoured its own ad exchange, AdX, in its publisher ad server and ad-buying tools, in violation of EU antitrust rules.

Officials ordered Google to end these practices within 60 days and to address what they described as ‘inherent conflicts of interest’ across the adtech supply chain. Teresa Ribera, the Commission’s executive vice president, said the case showed the need to ensure that digital markets serve the public fairly, warning that stronger remedies would follow if Google failed to comply.

Google announced it would appeal, arguing that its advertising services remain competitive and that businesses have more alternatives than ever. The fine marks the EU’s second-largest competition penalty, following a record $5 billion action against Google in 2018.

The ruling drew criticism from US President Donald Trump, who accused Europe of unfairly targeting American tech firms and threatened retaliatory measures.

Trump hosted a dinner with industry executives, including Google CEO Sundar Pichai and co-founder Sergey Brin, where he won praise for his policies on AI.

Meanwhile, Google secured partial relief in a separate antitrust case in the United States when a judge declined to impose sweeping remedies such as forcing the sale of Chrome or Android.

New ChatGPT feature enables multi-threaded chats

The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.

The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.

The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.

Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.

OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.

Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.
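The Git comparison is apt: a branch copies the conversation history up to a chosen point and then diverges, leaving the original intact. A toy model of that structure (purely illustrative, not OpenAI's implementation) might look like this:

```python
class Thread:
    """Toy model of branchable chat threads (illustrative only)."""
    def __init__(self, messages=None):
        self.messages = list(messages or [])  # copy so branches are independent

    def add(self, role, text):
        self.messages.append((role, text))
        return self

    def branch(self, at):
        """Copy the conversation up to index `at` into a new thread;
        the original thread is left untouched."""
        return Thread(self.messages[:at])

main = Thread().add("user", "Draft an ad").add("assistant", "Formal draft...")
casual = main.branch(2).add("user", "Now make it humorous")
# main still holds 2 messages; casual shares that history and diverges
```

The key design point is that branching copies rather than moves history, which is what lets a marketing team keep the formal draft while exploring a humorous variant in parallel.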

The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.

Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.

Anthropic settles $1.5 billion copyright case with authors

AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book and add interest while agreeing to destroy datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that using pirated websites for AI training is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge said using the books to train Claude was ‘transformative’ and qualified as fair use.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Mistral secures €1.3B ASML investment amid $14B valuation

ASML has reportedly become the top shareholder in French AI company Mistral after investing €1.3 billion. The deal forms part of a wider €2 billion funding round that values Mistral at $14 billion, marking a significant milestone for the Paris-based startup.

The Dutch chip-making equipment giant will also gain a board seat at Mistral, with Bank of America advising on the investment. The move is seen as a step towards reinforcing European technological sovereignty by reducing reliance on American and Chinese AI systems.

The partnership could help Mistral expand its generative AI tools and open-source platforms while enhancing ASML’s ability to integrate data analytics into its operations.

Industry analysts suggest the collaboration will unite two European technology leaders at a critical moment in the global race for AI dominance.

Founded by Timothée Lacroix, Guillaume Lample, and Arthur Mensch, Mistral has quickly become one of Europe’s most valuable AI startups.

The company, backed by investors including Microsoft, Databricks, and General Catalyst, develops open-source generative AI models that directly compete with those produced by OpenAI.

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.