CMC pegs JLR hack at £1.9bn with 5,000 firms affected

The cyberattack on JLR is estimated to have cost £1.9bn, making it the UK’s costliest on record. Production was paused for five weeks from 1 September across Solihull, Halewood, and Wolverhampton. The Cyber Monitoring Centre (CMC) says 5,000 firms were affected, with full recovery expected by January 2026.

JLR is restoring manufacturing in phases and declined to comment on the estimate. UK dealer systems were intermittently down, orders were cancelled or delayed, and suppliers faced uncertainty. More than half of the losses fall on JLR; the remainder hits its supply chain and local economies.

The CMC classed the incident as Category 3 on its five-level scale. Chair Ciaran Martin warned organisations to harden critical networks and plan for disruption. The CMC’s assessment draws on public data, surveys, and interviews rather than on disclosed forensic evidence.

Researchers say the cost hinges on the type of attack, which JLR has not confirmed. Recovery from data theft is faster than from ransomware, and wiper malware would be worse still. A group that claimed responsibility, linked to earlier high-profile breaches, has not been verified.

The CMC’s estimate excludes any ransom, which could add tens of millions of dollars. Earlier this year, retail hacks at M&S, the Co-op, and Harrods were tagged Category 2. Those were pegged at £270m–£440m, below the £506m cited by some victims.

EU states split over children’s social media rules

European leaders remain divided over how to restrict children’s use of social media platforms. While most governments agree stronger protections are needed, there is no consensus on enforcement or age limits.

Twenty-five EU countries, joined by Norway and Iceland, recently signed a declaration supporting tougher child protection rules online. The plan calls for a digital age of majority, potentially restricting under-15s or under-16s from joining social platforms.

France and Denmark back full bans for children below 15, while others prefer verified parental consent. Some nations argue parents should retain primary responsibility, with the state setting only basic safeguards.

Brussels faces pressure to propose EU-wide legislation, but several capitals insist decisions should stay national. Estonia and Belgium declined to sign the declaration, warning that new bans risk overreach and calling instead for digital education.

USB inventor and Phison CEO warns of an AI storage crunch

Datuk Pua Khein-Seng, inventor of the single-chip USB flash drive and CEO of Phison, warns that AI machines will generate 1,000 times more data than humans. He says the real bottleneck isn’t GPUs but memory, foreshadowing a global storage crunch as AI scales.

Speaking at GITEX Global, Pua outlined Phison’s focus on NAND controllers and systems that can expand effective memory. Adaptive tiering across DRAM and flash, he argues, will ease constraints and cut costs, making AI deployments more attainable beyond elite data centres.

In Pua’s view, flash becomes the expansion valve: DRAM stays scarce and expensive, and high-end GPUs take more of the blame for AI cost overruns than they deserve. By intelligently offloading and caching to NAND, cheaper accelerators can still drive useful workloads, widening access to AI capacity.
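Pua does not describe Phison’s implementation, but the tiering idea can be illustrated with a minimal sketch: a small, fast ‘DRAM’ tier holds hot data, cold entries are demoted to a larger, cheaper ‘flash’ tier, and items are promoted back on access. The class name, capacities, and LRU policy below are illustrative assumptions, not Phison’s design.

    from collections import OrderedDict

    class TieredStore:
        """Toy two-tier store: a small, fast 'DRAM' tier backed by a large 'flash' tier.
        Hot items live in DRAM; least-recently-used items are demoted to flash."""

        def __init__(self, dram_capacity=4):
            self.dram_capacity = dram_capacity
            self.dram = OrderedDict()   # fast, scarce tier
            self.flash = {}             # slow, cheap, large tier

        def put(self, key, value):
            self.dram[key] = value
            self.dram.move_to_end(key)  # mark as most recently used
            self._evict_if_needed()

        def get(self, key):
            if key in self.dram:        # DRAM hit: cheap access
                self.dram.move_to_end(key)
                return self.dram[key]
            if key in self.flash:       # flash hit: promote back into DRAM
                value = self.flash.pop(key)
                self.put(key, value)
                return value
            raise KeyError(key)

        def _evict_if_needed(self):
            while len(self.dram) > self.dram_capacity:
                cold_key, cold_value = self.dram.popitem(last=False)
                self.flash[cold_key] = cold_value   # demote cold data to flash

    # Example: data blocks spill to flash once the DRAM tier is full.
    store = TieredStore(dram_capacity=2)
    for block in ["layer0", "layer1", "layer2"]:
        store.put(block, f"weights for {block}")
    print(store.get("layer0"))  # served from flash, then promoted back to DRAM

Real systems add prefetching, compression, and awareness of NAND write endurance, but the hot/cold split between a scarce fast tier and an abundant cheap one is the core idea Pua points to.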

Cloud centralisation intensifies the risk. With the US and China dominating the AI cloud market, many countries lack the capital and talent to build sovereign stacks. Pua calls for ‘AI blue-collar’ skills to localise open source and tailor systems to real-world applications.

Storage leadership is consolidating in the US, Japan, Korea, and China, with Taiwan rising as a fifth pillar. Hardware strength alone won’t suffice, Pua says; Taiwan must close the AI software gap to capture more value in the data era.

Suzanne Somers lives on in an AI twin

Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honour Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.

Hamel describes the prototype as startlingly lifelike, saying that, viewed side by side, he cannot tell the real Suzanne from the AI. The goal is to preserve Suzanne’s voice, look, and mannerisms.

Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.

Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.

Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.

DeepSeek dominates AI crypto trading challenge

Chinese AI model DeepSeek V3.1 has outperformed its global competitors in a real-market cryptocurrency trading challenge, earning over 10 per cent profit in just a few days.

The experiment, named Alpha Arena, was launched by US research firm Nof1 to test the investing skills of leading LLMs.

Each participating AI was given US$10,000 to trade in six cryptocurrency perpetual contracts, including bitcoin and solana, on the decentralised exchange Hyperliquid. By Tuesday afternoon, DeepSeek V3.1 led the field, while OpenAI’s GPT-5 trailed behind with a loss of nearly 40 per cent.

The competition highlights the growing potential of AI models to make autonomous financial decisions in real markets.

It also underscores the rivalry between Chinese and American AI developers as they push to demonstrate their models’ adaptability beyond traditional text-based tasks.

OpenAI launches ChatGPT Atlas web browser

OpenAI has launched ChatGPT Atlas, a web browser built around ChatGPT to help users work and explore online more efficiently. The browser lets ChatGPT operate directly on webpages, using past conversations and browsing context to assist with tasks without copying and pasting.

Early testers say it streamlines research, study, and productivity by providing instant AI support alongside the content they are viewing.

Atlas introduces browser memories, letting ChatGPT recall context from visited sites to improve responses and automate tasks. Users stay in control, with the ability to view, archive, or delete memories. 

Agent mode allows ChatGPT to perform tasks such as researching, summarising, or planning events while browsing. Safety is a priority, with safeguards to prevent unauthorised actions and options to operate in logged-out mode.

The browser is available worldwide on macOS for Free, Plus, Pro, and Go users, with Windows, iOS, and Android support coming soon. OpenAI plans to add multi-profile support, better developer tools, and improved app discoverability, advancing an agent-driven web experience with seamless AI integration.

Kenya leads the way in AI skilling across Africa

Kenya’s AI National Skilling Initiative (AINSI) is offering valuable insights for African countries aiming to build digital capabilities. With AI projected to create 230 million digital jobs across Africa by 2030, coordinated investment in skills development is vital to unlock this potential.

Despite growing ambition, fragmented efforts and uneven progress continue to limit impact.

Government leadership plays a central role in building national AI capacity. Kenya’s Regional Centre of Competence for Digital and AI Skilling has trained thousands of public servants through structured bootcamps and online programmes.

Standardising credentials and aligning training with industry needs are crucial to ensure skilling efforts translate into meaningful employment.

Industry and the informal economy are key to scaling transformation. Partnerships with KEPSA and MESH are training entrepreneurs and SMEs in AI and cybersecurity while tackling affordability, connectivity, and data access challenges.

Education initiatives, from K–12 to universities and technical institutions, are embedding AI training into curricula to prepare future generations.

Civil society collaboration further broadens access, with community-based programmes reaching gig workers and underserved groups. Kenya’s approach shows how inclusive, cross-sector frameworks can scale digital skills and support Africa’s AI-driven growth.

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Additionally, Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Most EU workers now rely on digital tools and AI

A new EU study finds that 90% of workers rely on digital tools, while nearly a third use AI-powered chatbots in their daily work. The European Commission’s Joint Research Centre (JRC) surveyed over 70,000 workers across all EU Member States between 2024 and 2025.

The findings show that AI is most commonly used for writing and translation tasks, followed by data processing and image generation. Adoption rates are particularly high in Northern and Central Europe, especially in office-based sectors.

Alongside this digital transformation, workplace monitoring is becoming increasingly widespread, with 37% of EU workers reporting that their working hours are tracked and 36% that their entry and exit times are monitored.

Algorithmic management, where digital systems allocate tasks or assess performance automatically, now affects about a quarter of EU workers. The study also identifies a growing ‘platformisation’ trend, categorising employees based on their exposure to digital monitoring and algorithmic control.

Workers facing full or physical platformisation often report higher stress levels and reduced autonomy, while informational platformisation appears to have milder effects, particularly for remote workers.

Researchers urge EU policymakers to curb digital oversight risks while promoting fair and responsible innovation. The findings support EU initiatives like the Quality Jobs Roadmap and efforts to regulate algorithmic management.
