Jaguar shutdown extended as ministers meet suppliers

Jaguar Land Rover (JLR) has confirmed its factories will remain closed until at least 1 October, extending a shutdown triggered by a cyber-attack in late August.

Business Secretary Peter Kyle and Industry Minister Chris McDonald are meeting JLR and its suppliers, as fears mount that small firms in the supply chain could collapse without support in the wake of the August cyber-attack.

The disruption, estimated to cost JLR £50m per week, affects UK plants in Solihull, Halewood and Wolverhampton. About 30,000 people work directly for JLR, with a further 100,000 in its supply chain.

Unions say some supplier staff have been laid off with little or no pay, forcing them to seek Universal Credit. Unite has called for a furlough-style scheme, while MPs have pressed the government to consider emergency loans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe prepares formal call for AI Gigafactory projects

The European Commission is working with EU capitals to narrow the list of proposals for large AI training hubs, known as AI Gigafactories. The €20 billion plan will be funded by the Commission (17%), EU member states (17%) and industry (66%) to boost computing capacity for European developers.
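As a back-of-the-envelope check, the funding split above can be worked out directly (a minimal sketch; the per-party euro amounts are derived from the stated percentages, not quoted from the Commission):

```python
# Implied contributions to the €20bn AI Gigafactory plan,
# split 17% Commission, 17% member states, 66% industry.
TOTAL_BN = 20.0
shares = {"European Commission": 0.17, "EU countries": 0.17, "industry": 0.66}

contributions = {who: round(TOTAL_BN * share, 2) for who, share in shares.items()}
print(contributions)  # {'European Commission': 3.4, 'EU countries': 3.4, 'industry': 13.2}

# Sanity check: the three shares cover the whole plan.
assert abs(sum(shares.values()) - 1.0) < 1e-9
```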

The first call drew 76 proposals from 16 countries, far exceeding the initially planned four or five facilities. Most submissions must be merged or dropped, with Poland already seeking a joint bid with the Baltic states as talks continue.

Some EU members will inevitably lose out, with Ursula von der Leyen, the President of the European Commission, hinting that priority could be given to countries already hosting AI Factories. That could benefit Finland, whose Lumi supercomputer is part of a Nokia-led bid to scale up into a Gigafactory.

The plan has raised concerns that Europe’s efforts come too late, as US tech giants invest heavily in larger AI hubs. Still, Brussels hopes its initiative will allow EU developers to compete globally while maintaining control over critical AI infrastructure.

A formal call for proposals is expected by the end of the year, once the legal framework is finalised. Selection criteria and funding conditions will be set to launch construction as early as 2026.

Misconfigurations drive major global data breaches

Misconfigurations in cloud systems and enterprise networks remain one of the most persistent and damaging causes of data breaches worldwide.

Recent incidents have highlighted the scale of the issue, including a cloud breach at the US Department of Homeland Security, where sensitive intelligence data was inadvertently exposed to thousands of unauthorised users.

Experts say such lapses are often more about people and processes than technology. Complex workflows, rapid deployment cycles and poor oversight allow errors to spread across entire systems. Misconfigured servers, storage buckets or access permissions then become easy entry points for attackers.
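A minimal illustration of the kind of automated check that catches such lapses before attackers do. The configuration fields ("public_read", "encryption", "allowed_principals") are invented for this sketch and do not correspond to any specific cloud provider's API:

```python
# Hypothetical audit of a storage bucket configuration, flagging the
# misconfigurations described above: public exposure, missing encryption,
# and over-broad access policies.

def audit_bucket(config: dict) -> list[str]:
    """Return a list of human-readable findings for a bucket config."""
    findings = []
    if config.get("public_read", False):
        findings.append("bucket is readable by anyone on the internet")
    if not config.get("encryption", False):
        findings.append("objects are stored unencrypted")
    if "*" in config.get("allowed_principals", []):
        findings.append("access policy grants permissions to all principals")
    return findings

# Example: a bucket left world-readable by a rushed deployment.
risky = {"public_read": True, "encryption": False, "allowed_principals": ["*"]}
safe = {"public_read": False, "encryption": True, "allowed_principals": ["app-role"]}

print(audit_bucket(risky))  # three findings
print(audit_bucket(safe))   # []
```

Real-world equivalents of this check run continuously across thousands of resources, which is why process discipline matters more than any single tool.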

Analysts argue that preventing these mistakes requires better governance, training and process discipline rather than new technology. Building strong safeguards and ensuring staff have the knowledge to configure systems securely are critical to closing one of the most exploited doors in cybersecurity.

Research shows AI complements, not replaces, human work

AI headlines often flip between hype and fear, but the truth is more nuanced. Much research is misrepresented, with task overlaps miscast as job losses. Leaders and workers need clear guidance on using AI effectively.

Microsoft Research mapped 200,000 Copilot conversations to work tasks, but headlines warned of job risks. The study showed overlap, not replacement. Context, judgment, and interpretation remain human strengths, meaning AI supports rather than replaces roles.

Other research is similarly skewed. METR found that AI slowed developers by 19%, but mostly due to the learning curves associated with first use. MIT’s ‘GenAI Divide’ measured adoption, not ability, showing workflow gaps rather than technology failure.

Better studies reveal the collaborative power of AI. Harvard’s ‘Cybernetic Teammate’ experiment demonstrated that individuals using AI performed as well as full teams without it. AI bridged technical and commercial silos, boosting engagement and improving the quality of solutions produced.

The future of AI at work will be shaped by thoughtful trials, not headlines. By treating AI as a teammate, organisations can refine workflows, strengthen collaboration, and turn AI’s potential into long-term competitive advantage.

MrBeast under scrutiny for child advertising practices

The Children’s Advertising Review Unit (CARU) has advised MrBeast, LLC and Feastables to strengthen their advertising and privacy practices following concerns over promotions aimed at children.

CARU found that some videos on the MrBeast YouTube channel included undisclosed advertising in descriptions and pinned comments, which could mislead young viewers.

It also raised concerns about a promotional taste test for Feastables chocolate bars, which was presented to children as a valid comparison despite lacking any scientific basis.

Investigators said Feastables sweepstakes failed to clearly disclose free entry options, minimum age requirements and the actual odds of winning. Promotions were also criticised for encouraging excessive purchases and applying sales pressure, such as countdown timers urging children to buy more chocolate.

Privacy issues were also identified, with Feastables collecting personal data from under-13s without parental consent. CARU noted the absence of an effective age gate and highlighted that information provided via popups was sent to third parties.
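The missing safeguard CARU points to is a neutral age gate: ask for a date of birth up front and block data collection for under-13 visitors unless verifiable parental consent is on record. A minimal sketch of that logic (the threshold follows US children's privacy rules; the function names are illustrative, not taken from CARU's guidance):

```python
from datetime import date

COPPA_AGE = 13  # US children's privacy rules apply below this age

def age_on(birth: date, today: date) -> int:
    """Full years elapsed between a birth date and today."""
    had_birthday = (today.month, today.day) >= (birth.month, birth.day)
    return today.year - birth.year - (0 if had_birthday else 1)

def may_collect_data(birth: date, parental_consent: bool, today: date) -> bool:
    """Allow collection for 13+ visitors, or younger ones with consent."""
    return age_on(birth, today) >= COPPA_AGE or parental_consent

today = date(2025, 9, 25)
print(may_collect_data(date(2015, 1, 1), False, today))  # False: under 13, no consent
print(may_collect_data(date(2010, 1, 1), False, today))  # True: 15 years old
```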

MrBeast and Feastables said many of the practices under review had already been revised or discontinued, but pledged to take CARU’s recommendations into account in future campaigns.

Behavioural AI could be the missing piece in the $2 trillion AI economy

Global AI spending is projected to reach $1.5 trillion in 2025 and exceed $2 trillion in 2026, yet a critical element is missing: human judgement. A growing number of organisations are turning to behavioural science to bridge this gap, coding it directly into AI systems to create what experts call behavioural AI.

Early adopters like Clarity AI utilise behavioural AI to flag ESG controversies before they impact earnings. Morgan Stanley uses machine learning and satellite data to monitor environmental risks, while Google Maps influences driver behaviour, preventing over one million tonnes of CO₂ emissions annually.

Behavioural AI is being used to predict how leaders and societies act under uncertainty. These insights guide corporate strategy, PR campaigns, and decision-making. Mind Friend combines a network of 500 mental health experts with AI to build a ‘behavioural infrastructure’ that enhances judgement.

The behaviour analytics market was valued at $1.1 billion in 2024 and is projected to grow to $10.8 billion by 2032. Major players, such as IBM and Adobe, are entering the field, while Davos and other global forums debate how behavioural frameworks should shape investment and policy decisions.

As AI scrutiny grows, ethical safeguards are critical. Companies that embed governance, fairness, and privacy protections into their behavioural AI are earning trust. In a $2 trillion market, winners will be those who pair algorithms with a deep understanding of human behaviour.

Fatalities linked to Optus Triple Zero disruption spark inquiry

Optus is facing intense scrutiny after a technical fault disrupted access to Triple Zero in parts of Australia, with at least three fatalities reported. The outage followed a firewall upgrade on 18 September that interfered with emergency call routing in several states and territories.

Around 600 households were affected. The deaths of an infant, a 68-year-old woman and another individual are under investigation to determine whether the outage prevented them from receiving critical help.

Chief executive Stephen Rue apologised publicly on 21 September, admitting that procedures were not followed and that customer reports of failures were not properly escalated. He acknowledged Optus lacked internal monitoring to detect Triple Zero disruptions and called the failure ‘unacceptable’.

The company has launched an independent review, introduced compulsory escalation of all future emergency call reports, and committed to real-time monitoring of Triple Zero traffic. Federal and state leaders condemned the incident, with South Australia’s premier calling it ‘unprecedented incompetence’.

Authorities are now weighing regulatory consequences, while wider debate grows over infrastructure resilience, accountability and redundancy in the telecoms sector in Australia.

Quantinuum’s 12-qubit system achieves unassailable quantum advantage

Researchers have reached a major milestone in quantum computing, demonstrating a task that surpasses the capabilities of classical machines. Using Quantinuum’s 12-qubit ion-trap system, they delivered the first permanent, provable example of quantum supremacy, settling a long-running debate.

The experiment addressed a communication-complexity problem in which one processor (Alice) prepared a state and another (Bob) measured it. After 10,000 trials, the team proved that no classical algorithm could match the quantum result using fewer than 62 bits of communication, and that fully equivalent performance would require 330 bits.

Unlike earlier claims of quantum supremacy, later challenged by improved classical algorithms, the researchers say no future breakthrough can close this gap. Experts hailed the result as a rare proof of permanent quantum advantage and a significant step forward in the field.

However, like past demonstrations, the result has no immediate commercial application. It remains a proof-of-principle demonstration showing that quantum hardware can outperform classical machines under certain conditions, but it has yet to solve real-world problems.

Future work could strengthen the result by running Alice and Bob on separate devices to rule out interaction effects. Experts say the next step is achieving useful quantum supremacy, where quantum machines beat classical ones on problems with real-world value.

GPT-5-powered ChatGPT Edu comes to Oxford staff and students

The University of Oxford will become the first UK university to offer free ChatGPT Edu access to all staff and students. The rollout follows a year-long pilot with 750 academics, researchers, and professional services staff across the University and Colleges.

ChatGPT Edu, powered by OpenAI’s GPT-5 model, is designed for education with enterprise-grade security and data privacy. Oxford says it will support research, teaching, and operations while encouraging safe, responsible use through robust governance, training, and guidance.

Staff and students will receive access to in-person and online training, webinars, and specialised guidance on the use of generative AI. A dedicated AI Competency Centre and network of AI Ambassadors will support users, alongside mandatory security training.

The prestigious UK university has also established a Digital Governance Unit and an AI Governance Group to oversee the adoption of emerging technologies. Pilots are underway to digitise the Bodleian Libraries and explore how AI can improve access to historical collections worldwide.

A jointly funded research programme with the Oxford Martin School and OpenAI will study the societal impact of AI adoption. The project is part of OpenAI’s NextGenAI consortium, which brings together 15 global research institutions to accelerate breakthroughs in AI.

TikTok nears US takeover deal as Washington secures control

The White House has revealed that US companies will take control of TikTok’s algorithm, with Americans occupying six of seven board seats overseeing the platform’s operations in the country. A final deal, which would reshape the app’s US presence, is expected soon, though Beijing has yet to respond publicly.

Washington has long pushed to separate TikTok’s American operations from its Chinese parent company, ByteDance, citing national security risks. The app faced repeated threats of a ban unless sold to US investors, with deadlines extended several times under President Donald Trump. The Supreme Court also upheld legislation requiring ByteDance to divest, though enforcement was delayed earlier this year.

According to the White House, data protection and privacy for American users will be managed by Oracle, chaired by Larry Ellison, a close Trump ally. Oracle will also oversee control of TikTok’s algorithm, the key technology that drives what users see on the app. Ellison’s influence in tech and media has grown, especially after his son acquired Paramount, which owns CBS News.

Trump claimed he had secured an understanding on the deal in a recent call with Chinese President Xi Jinping, describing the exchange as ‘productive.’ However, Beijing’s official response has been less explicit. The Commerce Ministry said discussions should proceed according to market rules and Chinese law, while state media suggested China welcomed continued negotiations.

Trump has avoided clarifying whether US investors need to develop a new system or continue using the existing one. His stance on TikTok has shifted since his first term, when he pushed for a ban, to now embracing the platform as a political tool to engage younger voters during his 2024 campaign.

Concerns over TikTok’s handling of user data remain at the heart of US objections. Officials at the Justice Department have warned that the app’s access to US data posed a security threat of ‘immense depth and scale,’ underscoring why Washington is pressing to lock down control of its operations.
