Jaguar Land Rover extends production halt after cyberattack

Jaguar Land Rover has told staff to stay at home until at least Wednesday as the company continues to recover from a cyberattack.

The hack forced JLR to shut down systems on 31 August, disrupting operations at plants in Halewood, Solihull and Wolverhampton in the UK. Production was initially paused until 9 September, but the halt has now been extended by at least another week.

Business minister Sir Chris Bryant said it was too early to determine whether the attack was state-sponsored. The incident follows a wave of cyberattacks in the UK, including recent breaches at M&S, Harrods and train operator LNER.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI dubbing to millions of creators

Real-time translation is becoming a standard feature across consumer tech, with Samsung, Google, and Apple all introducing new tools. Apple’s recently announced Live Translation on AirPods demonstrates the utility of such features, particularly for travellers.

YouTube has joined the trend, expanding its multi-language audio feature to millions of creators worldwide. The tool enables creators to add dubbed audio tracks in multiple languages, powered by Google’s Gemini AI, replicating tone and emotion.

The feature was first tested with creators such as MrBeast, Mark Rober, and Jamie Oliver. YouTube reports that Jamie Oliver’s channel saw its views triple, with more than 25% of the watch time coming from non-primary languages.

Mark Rober’s channel now supports more than 30 languages per video, helping creators reach audiences far beyond their native markets. YouTube states that this expansion should make content more accessible to global viewers and increase overall engagement.

Subtitles will still be vital for people with hearing difficulties, but AI-powered dubbing could reduce reliance on them for language translation. For creators, it marks a significant step towards making content truly global.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Educators rethink assignments as AI becomes widespread

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better. 

The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China creates brain-inspired AI model

Chinese scientists have unveiled SpikingBrain1.0, described as the world’s first large-scale AI language model modelled on the workings of the human brain. The model cuts energy use and runs independently of Nvidia chips, departing from conventional AI architectures.

Developed by the Chinese Academy of Sciences, SpikingBrain1.0 uses spiking neural networks to activate only the required neurons for each task, rather than processing all information simultaneously.

Instead of evaluating every word in parallel, it focuses on the most recent and relevant context, enabling faster and more efficient processing. Researchers claim the model operates 25 to 100 times faster than traditional AI systems while keeping accuracy competitive.
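
As a rough illustration of the spiking idea described above, the Python sketch below implements a minimal leaky integrate-and-fire layer in which only neurons pushed past a threshold produce output at each step. It is a hypothetical toy, not SpikingBrain1.0’s actual architecture; the layer sizes, threshold, and decay values are arbitrary assumptions.

```python
import numpy as np

# Illustrative sketch only: a minimal leaky integrate-and-fire layer showing the
# event-driven idea behind spiking networks. Sizes, threshold, and decay are
# arbitrary assumptions, not SpikingBrain1.0's real design.

rng = np.random.default_rng(0)

class SpikingLayer:
    def __init__(self, n_in, n_out, threshold=1.0, decay=0.9):
        self.w = rng.normal(0, 0.3, size=(n_in, n_out))
        self.threshold = threshold      # membrane potential needed to fire
        self.decay = decay              # leak applied each timestep
        self.potential = np.zeros(n_out)

    def step(self, spikes_in):
        # Only inputs that actually spiked contribute; silent inputs cost nothing.
        active = np.nonzero(spikes_in)[0]
        self.potential = self.decay * self.potential + self.w[active].sum(axis=0)
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0     # reset neurons that fired
        return fired.astype(float)

layer = SpikingLayer(n_in=64, n_out=32)
for t in range(5):
    x = (rng.random(64) < 0.1).astype(float)   # sparse input: ~10% of neurons spike
    out = layer.step(x)
    print(f"t={t}: {int(x.sum())} input spikes -> {int(out.sum())} output spikes")
```

Because most inputs stay silent at any given step, the bulk of the weight matrix is never touched, which is the kind of sparsity spiking designs rely on for their efficiency gains.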

A significant innovation is hardware independence. SpikingBrain1.0 runs on China’s MetaX chip platform, reducing reliance on Nvidia GPUs. It also requires less than 2% of the data typically needed for pre-training large language models, making it more sustainable and accessible.

SpikingBrain1.0 could power low-energy, real-time applications such as autonomous drones, wearable devices, and edge computing. The model highlights a shift toward biologically inspired AI that prioritises efficiency and adaptability over brute-force computation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI moves to for-profit with Microsoft deal

Microsoft and OpenAI have agreed to new non-binding terms that will allow OpenAI to restructure into a for-profit company, marking a significant shift in their long-standing partnership.

The agreement sets the stage for OpenAI to raise capital, pursue additional cloud partnerships, and eventually go public, while Microsoft retains access to its technology.

The previous deal gave Microsoft exclusive rights to sell OpenAI tools via Azure and made it the primary provider of compute power. OpenAI has since broadened its options, signing a $300 billion cloud deal with Oracle and an agreement with Google, and is developing its own data centre project, Stargate.

OpenAI aims to maintain its nonprofit arm, which would receive a stake worth more than $100 billion under the company’s projected $500 billion private market valuation.

Regulatory approval from the attorneys general of California and Delaware is required for the new structure, with OpenAI targeting completion by the end of the year to secure key funding.

Both companies continue to compete across AI products, from consumer chatbots to business tools, while Microsoft works on building its own AI models to reduce reliance on OpenAI technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qwen3-Next strengthens Alibaba’s position in global AI race

Alibaba has open-sourced its latest AI model, Qwen3-Next, claiming it is ten times more powerful than its predecessor and cheaper to train.

Developed by Alibaba Cloud, the 80-billion-parameter model reportedly performs on par with the company’s flagship Qwen3-235B-A22B while remaining optimised for deployment on consumer-grade hardware.

Qwen3-Next introduces innovations such as hybrid attention for long text processing, high-sparsity mixture-of-experts architecture, and multi-token prediction strategies. These upgrades boost both efficiency and model stability during training.
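
To make the high-sparsity mixture-of-experts idea concrete, the Python sketch below routes each token to only the top-k of several experts, so the remaining experts are never computed. It is an illustrative toy under assumed sizes with a made-up router, not Alibaba’s Qwen3-Next implementation.

```python
import numpy as np

# Hedged sketch of high-sparsity mixture-of-experts routing: a router scores all
# experts per token, but only the top-k are evaluated. All sizes and weights here
# are invented for illustration.

rng = np.random.default_rng(1)

d_model, n_experts, top_k = 16, 8, 2
router_w = rng.normal(0, 0.1, size=(d_model, n_experts))
expert_w = rng.normal(0, 0.1, size=(n_experts, d_model, d_model))

def moe_forward(x):
    """x: (d_model,) token representation -> weighted mix of top-k expert outputs."""
    logits = x @ router_w                       # score each expert
    top = np.argsort(logits)[-top_k:]           # only k of n_experts will run
    weights = np.exp(logits[top])
    weights /= weights.sum()                    # softmax over the selected experts
    out = np.zeros(d_model)
    for w, e in zip(weights, top):
        out += w * (x @ expert_w[e])            # unselected experts are never computed
    return out

token = rng.normal(size=d_model)
print(moe_forward(token).shape)                 # (16,)
```

The parameter count grows with the number of experts, but per-token compute stays proportional to k, which is how high-sparsity designs decouple model capacity from inference cost.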

Alibaba also released Qwen3-Next-80B-A3B-Thinking, a reasoning-focused model that outperformed its own Qwen3-32B-Thinking and Google’s Gemini-2.5-Flash-Thinking in benchmark tests.

The release strengthens Alibaba’s position as a major player in open-source AI, following last week’s preview of its 1-trillion-parameter Qwen-3-Max model, which ranked sixth on UC Berkeley’s LMArena leaderboard.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Bank of Russia plans crypto derivatives access for funds

The Bank of Russia plans to allow investment funds to purchase cryptocurrency derivatives next year, a senior official confirmed at the Capital Markets 2025 forum. Currently, only brokers can offer such instruments to qualified investors.

Deputy head of the bank’s Investment Finance Intermediation Department, Valery Krasinsky, explained that the move aims to level the playing field for management companies. Futures on Bitcoin ETFs are available via brokers, and mutual funds could soon access them under new rules.

Access to crypto funds will remain limited to highly qualified investors. Individuals must meet strict financial thresholds, including securities and deposits exceeding 100 million rubles or an annual income of over 50 million rubles.

The central bank is also finalising a list of base assets for derivative financial instruments, with a draft regulatory act expected in 2026.

Authorities have indicated a cautious expansion of investor access. The Ministry of Finance is considering easing the criteria for ‘highly qualified’ investors, signalling a gradual opening of Russia’s crypto market while preserving the dominance of traditional stock and bond investments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nurabot to assist nurses with routine tasks

Global health care faces a severe shortage of workers, with WHO projecting a deficit of 4.5 million nurses by 2030. Around one-third of nurses already experience burnout, and high turnover rates exacerbate staffing pressures.

Foxconn’s new AI-powered nursing robot, Nurabot, is designed to assist with repetitive and physically demanding tasks, potentially reducing nurses’ workload by up to 30%.

Nurabot moves autonomously around hospital wards, delivers medication, and guides patients, using a combination of Foxconn’s Chinese-language large language model and NVIDIA’s AI platforms.

Built with Kawasaki Heavy Industries, the robot was adapted and trained virtually to navigate hospital wards safely. Testing at Taichung Veterans General Hospital since April 2025 has shown promising results, with Foxconn planning a commercial launch in early 2026.

The ageing population and rising patient demand are straining health care systems worldwide. Experts say AI robots can boost efficiency and ease pressure on the workforce, but issues remain, including patient preference, hospital design, safety, and data ethics.

Hospitals may need redesigns to accommodate free-moving humanoid robots effectively.

While robots like Nurabot cannot replace nurses, they can support staff by handling routine tasks and freeing professionals to provide critical patient care. The smart hospital market, worth $72.24 billion in 2025, shows rising investment in AI and robotics to address staff shortages and ageing populations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S technology chief steps down after cyberattack

Marks & Spencer’s technology chief, Rachel Higham, has stepped down less than 18 months after joining the retailer from BT.

Her departure comes months after a cyberattack in April by Scattered Spider disrupted systems and cost the company around £300 million. Online operations, including click-and-collect, were temporarily halted before being gradually restored.

In a memo to staff, the company described Higham as a steady hand during a turbulent period and wished her well. M&S has said it does not intend to replace the role, leaving questions over succession unanswered.

The retailer expects part of the financial hit to be offset by insurance. It has declined to comment further on whether Higham will receive a payoff.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California moves to regulate AI companion chatbots to protect minors

The California State Assembly has passed SB 243, advancing legislation that would make the state the first in the USA to regulate AI companion chatbots. The bill, which aims to safeguard minors and vulnerable users, passed with bipartisan support and now heads to the state Senate for a final vote on Friday.

If signed into law by Governor Gavin Newsom, SB 243 would take effect on 1 January 2026. It would require companies like OpenAI, Replika, and Character.AI to implement safety protocols for AI systems that simulate human companionship.

The law would prohibit such chatbots from engaging in conversations involving suicidal ideation, self-harm, or sexually explicit content. For minors, platforms must provide recurring alerts every three hours, reminding them that they are interacting with an AI and encouraging them to take breaks.

The bill also introduces annual transparency and reporting requirements, effective 1 July 2027. Users harmed by violations could seek damages of up to $1,000 per incident, injunctive relief and attorney’s fees.

The legislation follows the suicide of teenager Adam Raine after troubling conversations with ChatGPT and comes amid mounting scrutiny of AI’s impact on children. Lawmakers nationwide and the Federal Trade Commission (FTC) are increasing pressure on AI companies to bolster safeguards in the USA.

Though earlier versions of the bill included stricter requirements, like banning addictive engagement tactics, those provisions were removed. Still, backers say the final bill strikes a necessary balance between innovation and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!