Cyberattack forces Jaguar Land Rover to halt production

Production at Jaguar Land Rover (JLR) is to remain halted until at least next week after a cyberattack crippled the carmaker’s operations. Disruption is expected to last through September and possibly into October.

The UK’s largest car manufacturer, owned by Tata, has suspended activity at its plants in Halewood, Solihull, and Wolverhampton. Thousands of staff have been told to stay home on full pay, ‘banking’ hours that they will work back once production resumes.

Suppliers, including Evtec, WHS Plastics, SurTec, and OPmobility, which employ more than 6,000 people in the UK, have also paused their operations. The Sunday Times reported speculation that the outage could drag on for most of September.

While there is no evidence of a data breach, JLR has notified the Information Commissioner’s Office about potential risks. Dozens of internal systems, including spare parts databases, remain offline, forcing dealerships to revert to manual processes.

Hackers linked to the groups Scattered Spider, Lapsus$, and ShinyHunters have claimed responsibility for the incident. JLR stated that it was collaborating with cybersecurity experts and law enforcement to restore systems in a controlled and safe manner.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mental health concerns over chatbots fuel AI regulation calls

The impact of AI chatbots on mental health is emerging as a serious concern, with experts warning that recent cases highlight the risks of more advanced systems.

Nate Soares, president of the US-based Machine Intelligence Research Institute, pointed to the tragic case of teenager Adam Raine, who took his own life after months of conversations with ChatGPT, as a warning signal for future dangers.

Soares, a former Google and Microsoft engineer, said that while companies design AI chatbots to be helpful and safe, they can produce unintended and harmful behaviour.

He warned that the same unpredictability could escalate if AI develops into artificial super-intelligence, systems capable of surpassing humans in all intellectual tasks. His new book with Eliezer Yudkowsky, If Anyone Builds It, Everyone Dies, argues that unchecked advances could lead to catastrophic outcomes.

He suggested that governments adopt a multilateral approach, similar to nuclear non-proliferation treaties, to halt a race towards super-intelligence.

Meanwhile, leading voices in AI remain divided. Meta’s chief AI scientist, Yann LeCun, has dismissed claims of an existential threat, insisting AI could instead benefit humanity.

The debate comes as OpenAI faces legal action from Raine’s family and introduces new safeguards for under-18s.

Psychotherapists and researchers also warn of the dangers of vulnerable people turning to chatbots instead of professional care, with early evidence suggesting AI tools may amplify delusional thoughts in those at risk.

New ChatGPT feature enables multi-threaded chats

The US AI firm OpenAI has introduced a new ChatGPT feature that allows users to branch conversations into separate threads and explore different tones, styles, or directions without altering the original chat.

The update, rolled out on 5 September, is available to anyone logged into ChatGPT through the web version.

The branching tool lets users copy a conversation from a chosen point and continue in a new thread while preserving the earlier exchange.

Marketing teams, for example, could test formal, informal, or humorous versions of advertising content within parallel chats, avoiding the need to overwrite or restart a conversation.

OpenAI described the update as a response to user requests for greater flexibility. Many users had previously noted that a linear dialogue structure limited efficiency by forcing them to compare and copy content repeatedly.

Early reactions online have compared the new tool to Git, which enables software developers to branch and merge code.
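The comparison is apt: just as the new feature forks a conversation from a chosen message while leaving the original thread intact, a Git branch forks a line of work from a chosen commit. A minimal illustration (the repository, file, and branch names here are invented for the example):

```shell
# Set up a throwaway repository (hypothetical example)
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main
git config user.email "demo@example.com"
git config user.name "Demo"

# Commit an initial draft on the main line of work
echo "draft copy" > ad.txt
git add ad.txt
git commit -q -m "initial draft"

# Branch from this point to try a different tone,
# without altering the original history
git switch -q -c humorous-version
echo "funny copy" > ad.txt
git commit -q -a -m "humorous variant"

# Switching back shows the original draft is preserved
git switch -q main
cat ad.txt   # prints: draft copy
```

In the same way, branching a ChatGPT thread lets each variant evolve independently without overwriting the shared starting point.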

The feature has been welcomed by ChatGPT users who are experimenting with brainstorming, project analysis, or layered problem-solving. Analysts suggest it also reduces cognitive load by allowing users to test multiple scenarios more naturally.

Alongside the update, OpenAI is working on other projects, including a new AI-powered jobs platform to connect workers and companies more effectively.

Anthropic settles $1.5 billion copyright case with authors

The AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book, plus interest, and has agreed to destroy the datasets containing the material. A California judge will review the settlement terms on 8 September before finalising the deal.

Lawyers for the plaintiffs described the outcome as a landmark, warning that sourcing AI training material from pirate websites is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge found that training Claude on the books was ‘transformative’ and qualified as fair use, while allowing claims over the pirated downloads to proceed.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. In August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.

Quantum era promises new breakthroughs in security and sensing

Quantum technology has moved from academic circles into public debate, with applications already shaping industries and daily life.

For decades, quantum mechanics has powered tools like semiconductors, GPS and fibre optics, a foundation often described as Quantum 1.0. The UN has declared 2025 the International Year of Quantum Science and Technology to mark its impact.

Researchers are now advancing Quantum 2.0, which manipulates atoms, ions and photons to exploit superposition and entanglement. Emerging tools include quantum encryption systems, distributed atomic clocks to secure networks against GPS failures, and sensing devices with unprecedented precision.

Experts warn that disruptions to satellite navigation could cost billions, but quantum clocks may keep economies and critical infrastructure synchronised. With quantum computing and AI developing in parallel, future breakthroughs could transform medicine, energy, and security.

Achieving this vision will require global collaboration across governments, academia and industry to scale up technologies, ensure supply chain resilience and secure international standards.

Australia moves to block AI nudify apps

Australia has announced plans to curb AI tools that generate nude images and enable online stalking. The government said it would introduce new legislation requiring tech companies to block apps designed to abuse and humiliate people.

Communications Minister Anika Wells said such AI tools are fuelling sextortion scams and putting children at risk. So-called ‘nudify’ apps, which digitally strip clothing from images, have spread quickly online.

A Save the Children survey found one in five young people in Spain had been targeted by deepfake nudes, showing how widespread the abuse has become.

Canberra pledged to use every available measure to restrict access, while ensuring that legitimate AI services are not harmed. Australia has already passed strict laws banning under-16s from social media, with the new measures set to build on its reputation as a leader in online safety.

NSA, CISA and others urge unified approach to strengthen cybersecurity resilience

The National Security Agency (NSA) has joined the Cybersecurity and Infrastructure Security Agency (CISA) and other partners to release a new Cybersecurity Information Sheet (CSI) titled ‘A Shared Vision of Software Bill of Materials (SBOM) for Cybersecurity’.

Aimed at promoting the adoption of SBOM practices, the report highlights their role in improving transparency and addressing risks within the software supply chain.

By integrating SBOM generation, analysis, and sharing into existing security processes, organisations can better manage vulnerabilities and strengthen cyber resilience.
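To make the idea concrete, an SBOM is essentially a machine-readable inventory of a product’s components. A minimal sketch in the CycloneDX JSON format (the component shown is a hypothetical example, not taken from the CSI):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.13",
      "purl": "pkg:generic/openssl@3.0.13"
    }
  ]
}
```

Because each component and version is listed explicitly, a newly disclosed vulnerability can be matched against SBOMs across an organisation’s software estate.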

Practical risk management strategies and real-world examples outlined in the CSI support the broader Secure by Design initiative.

The authors urge a unified SBOM approach across the cybersecurity community to prevent fragmentation, lower implementation costs, and enhance long-term effectiveness.

Inconsistent or siloed adoption, they caution, could limit the sustainability and impact of SBOM as a core cybersecurity tool.

Nigeria sets sights on top 50 AI-ready nations

Nigeria has pledged to become one of the top 50 AI-ready nations, according to presidential adviser Hadiza Usman. Speaking in Abuja at a colloquium on AI policy, she said the country needs strong leadership, investment, and partnerships to meet its goals.

She stressed that policies must address Nigeria’s unique challenges and not simply replicate foreign models. The government will offer collaboration opportunities with local institutions and international partners.

The Nigerian Deposit Insurance Commission reinforced its support, noting that technology should secure depositors without restricting innovators.

Private sector voices said AI could transform healthcare, agriculture, and public services if policies are designed with inclusion and trust in mind.

3D-printed ion traps could accelerate quantum computer scaling

Quantum computers may soon grow more powerful through 3D printing, with researchers building miniaturised ion traps to improve scalability and performance.

Ion traps, which confine ions and control their quantum states, play a central role in ion-based qubits. Researchers at UC Berkeley created 3D-printed traps just a few hundred microns wide, which captured ions up to ten times more efficiently than conventional versions.

The new traps also reduced waiting times, allowing ions to be usable more quickly once the system is activated. Hartmut Häffner, who led the study, said the approach could enable scaling to far larger numbers of qubits while boosting speed.

3D printing offers flexibility not possible with chip-style manufacturing, allowing for more complex shapes and designs. Team members say they are already working on new iterations, with future versions expected to integrate optical components such as miniaturised lasers.

Experts argue that this method could address the challenges of low yield, high costs, and poor reproducibility in current ion-trap manufacturing, paving the way for scalable quantum computing and applications in other fields, including mass spectrometry.
