Cyberattack keeps JLR factories shut, hackers claim responsibility

Jaguar Land Rover (JLR) has confirmed that data was affected in a cyberattack that has kept its UK factories idle for more than a week. The company stated that it is contacting anyone whose data was involved, although it did not clarify whether the breach affected customers, suppliers, or internal systems.

JLR reported the incident to the Information Commissioner’s Office and immediately shut down IT systems to limit damage. Production at Midlands and Merseyside sites has been halted until at least Thursday, with staff instructed not to return before next week.

The disruption has also hit suppliers and retailers, with garages struggling to order spare parts and dealers facing delays registering vehicles. JLR said it is working around the clock to restore operations in a safe and controlled way, though the process is complex.

Responsibility for the hack has been claimed by Scattered Lapsus$ Hunters, a group linked to previous attacks on Marks & Spencer and the Co-op in the UK and on Las Vegas casinos in the US. The hackers posted alleged screenshots from JLR’s internal systems on Telegram last week.

Cybersecurity experts say the group’s claim to have deployed ransomware raises questions, as it appears to have severed ties with Russian ransomware gangs. Analysts suggest the hackers may only have stolen data or may be building their own ransomware infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Claude AI gains powerful file editing tools for documents and spreadsheets

Anthropic’s Claude has expanded its role as a leading AI assistant by adding advanced tools for creating and editing files. Instead of manually working with different programs, users can now describe their needs in plain language and let the AI produce or update Word, Excel, PowerPoint, and PDF files.

The feature supports uploads of CSV and TSV data and can generate charts, graphs or images where needed, with a 30MB size limit applying to both uploads and downloads.

The real breakthrough lies in editing. Instead of opening a document or spreadsheet, users can simply type instructions such as replacing text, changing currencies, or updating job titles. Claude processes the prompt and makes all the changes in one pass, preserving the original formatting.
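To make the contrast concrete, here is a minimal sketch of what one such edit looks like when done by hand in Python with openpyxl. The file name, column and exchange rate are illustrative assumptions; the point is that Claude now performs the equivalent change from a single plain-language instruction instead of a script like this.

```python
# Illustrative only: the manual equivalent of an edit Claude can now make from a
# prompt such as "convert every price in column C from USD to EUR at a rate of 0.92".
# The file name, column and rate below are hypothetical examples.
from openpyxl import load_workbook

RATE = 0.92  # assumed USD -> EUR rate, for illustration

wb = load_workbook("prices.xlsx")  # loads the workbook with its existing styles
ws = wb.active
for row in ws.iter_rows(min_row=2, min_col=3, max_col=3):  # column C, skipping the header
    cell = row[0]
    if isinstance(cell.value, (int, float)):
        cell.value = round(cell.value * RATE, 2)
    cell.number_format = '#,##0.00 "EUR"'  # update the displayed currency
wb.save("prices.xlsx")
```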

The capability positions Claude as more efficient than rivals such as Google’s Gemini, which can export reports but cannot directly modify existing files.

The feature preview is available on web and desktop for subscribers on Max, Team, or Enterprise plans. Analysts suggest the update could reshape productivity tools, especially after reports that Microsoft has partnered with Anthropic to explore using Claude for Office 365 functions.

By removing repetitive tasks and making file handling conversational, Claude is pushing productivity software into a new phase of automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Small business revival could hinge on AI-driven tools

If AI is to matter in the economy, it must first matter to small businesses. These firms employ over 61 million people, nearly half the private workforce, yet most run on outdated technology. While smartphones update monthly, many small businesses still use systems built a decade ago.

Search fund entrepreneurs bridge this gap by upgrading established firms with modern tech. One deal turned a 50-person roadside assistance firm into Asurion, now a global tech-care provider. Others have scaled compliance firms into nationwide SaaS platforms.

Generative AI now accelerates these transformations, cutting work times by over 60% across supply chains, compliance, and document processing functions. Complex tasks can now be completed in hours, unlocking double-digit productivity gains and allowing small businesses to focus on growth.

Search funds are not the only path forward. AI consulting firms, tech studios, and AI-powered roll-up strategies bring enterprise-grade tools to family-run firms. For communities that have relied on traditional playbooks, decades of growth can be compressed into months.

The cost of AI has never been lower, and the opportunity is wide open. Once deployed at scale, AI could power a wave of productivity on Main Street, helping small businesses compete and strengthening the economy for the nearly half of the workforce they employ.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and AFM deliver real-time macrophage phenotyping

Macrophages drive immune responses, including inflammation, tissue repair, and tumour growth. Identifying their polarisation states is key for diagnosis and immunotherapy, but current methods, such as RNA sequencing and flow cytometry, are expensive, slow, and unsuitable for real-time use.

Atomic force microscopy (AFM) has emerged as a powerful tool for decoding mechanobiological signatures of cells. Combined with AI, AFM data can be rapidly analysed, but macrophage phenotyping has been relatively underexplored using this approach.

Researchers led by Prof Li Yang at the Shenzhen Institutes of Advanced Technology have now developed a label-free, non-invasive method combining AFM with deep learning. The system accurately profiles human macrophage mechanophenotypes and identifies polarisation states in real time.
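For readers curious how such a pipeline looks in outline, below is a minimal, generic sketch of the approach rather than the authors’ published model: a small neural network trained to map per-cell AFM-derived mechanical features to polarisation labels. The feature names, synthetic data and network size are illustrative assumptions, shown with scikit-learn.

```python
# Generic sketch only (not the published model): classify macrophage polarisation
# states (M0/M1/M2) from per-cell AFM-derived mechanical features.
# Features, data and network size below are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical feature matrix: [Young's modulus (kPa), adhesion (nN), stiffness slope]
# for 300 cells, with labels 0=M0, 1=M1, 2=M2 taken from flow cytometry.
X = rng.normal(loc=[[3.0, 1.0, 0.5]], scale=0.8, size=(300, 3))
y = rng.integers(0, 3, size=300)
X += y[:, None] * 0.6  # toy class separation so the demo has signal to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```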

The AI model was trained on well-characterised macrophage subtypes and validated using flow cytometry. Results showed that pseudovirus stimulation mainly produced M1 macrophages, with smaller populations of M2 and mixed phenotypes, closely matching the model’s predictions.

The study, published in Small Methods, offers a promising diagnostic tool that could be extended beyond macrophages to other cell types. It could support new approaches in cancer, fibrosis, and infectious disease diagnostics based on mechanophenotypes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and OpenAI drive record $300B investment in cloud for AI

OpenAI has finalised a record $300 billion deal with Oracle to secure vast computing infrastructure over five years, marking one of the most significant cloud contracts in history. The agreement is part of Project Stargate, OpenAI’s plan to build massive data centre capacity in the US and abroad.

The two companies will develop 4.5 gigawatts of computing capacity, roughly the continuous power drawn by millions of homes.
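As a rough sanity check on that comparison, a back-of-the-envelope calculation (assuming an average household draws about 1.2 kW continuously, an illustrative figure of roughly 10,500 kWh per year) puts 4.5 GW at just under four million homes:

```python
# Back-of-the-envelope check of the "millions of homes" comparison.
# The average household power draw is an assumption for illustration.
DATA_CENTRE_POWER_GW = 4.5
AVG_HOME_POWER_KW = 1.2

homes = DATA_CENTRE_POWER_GW * 1e6 / AVG_HOME_POWER_KW  # GW -> kW, then divide
print(f"~{homes / 1e6:.1f} million homes")  # prints ~3.8 million homes
```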

Backed by SoftBank and other partners, the Stargate initiative aims to surpass $500 billion in investment, with construction already underway in Texas. Additional plans include a large-scale data centre project in the United Arab Emirates, supported by Emirati firm G42.

The scale of the deal highlights the fierce race among tech giants to dominate AI infrastructure. Amazon, Microsoft, Google and Meta are also pledging hundreds of billions of dollars towards data centres, while OpenAI faces mounting financial pressure.

The company currently generates around $10 billion in revenue but is expected to spend far more than that annually to support its expansion.

Oracle is betting heavily on OpenAI as a future growth driver, although the risk is high given OpenAI’s lack of profitability and Oracle’s growing debt burden.

It is a gamble that rests on the assumption that ChatGPT and related AI technologies will continue to grow at an unprecedented pace, despite intense competition from Google, Anthropic and others.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot mixes up Dutch political party policies

Dutch voters have been warned not to rely on AI chatbots for political advice after Google’s NotebookLM mixed up VVD and PVV policies.

When asked about Ukrainian refugees, the tool attributed to the VVD programme a PVV proposal to send men back to Ukraine. Similar confusions reportedly occurred when others used the system.

Google acknowledged the mistake and said it would investigate whether the error was a hallucination, the term for incorrect AI-generated output.

Experts caution that language models predict patterns rather than facts, making errors unavoidable. Voting guide StemWijzer stressed that reliable political advice requires up-to-date and verified information.

Professor Claes de Vreese said chatbots might be helpful as supplementary tools but should never replace reading the actual party programmes. He also urged stricter regulation to avoid undue influence on election choices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware 3.0 raises alarm over AI-generated cyber threats

Researchers at NYU’s Tandon School of Engineering have demonstrated how large language models can be utilised to execute ransomware campaigns autonomously. Their prototype, dubbed Ransomware 3.0, simulated every stage of an attack, from intrusion to the generation of a ransom note.

The system briefly raised an alarm after cybersecurity firm ESET discovered its files on VirusTotal and mistakenly identified them as live malware. The proof-of-concept was designed only for controlled laboratory use and posed no risk outside testing environments.

Instead of pre-written code, the prototype embedded text instructions that triggered AI models to generate tailored attack scripts. Each execution created unique code, evading traditional detection methods and running across Windows, Linux, and Raspberry Pi systems.

The researchers found that the system identified up to 96% of sensitive files and could generate personalised extortion notes, raising psychological pressure on victims. With costs as low as $0.70 per attack using commercial AI services, such methods could lower barriers for criminals.

The team stressed that the work was conducted ethically and aims to help defenders prepare countermeasures. They recommend monitoring file access patterns, limiting outbound AI connections, and developing defences against AI-generated attack behaviours.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Pressure mounts as Apple prepares AI search push with Google ties

Apple’s struggles in the AI race have been hard to miss. Its Apple Intelligence launch was disappointing, and its reliance on ChatGPT appeared to be a concession to rivals.

Bloomberg’s Mark Gurman now reports that Apple plans to introduce its AI-powered web search tool in spring 2026. The move would position it against OpenAI and Perplexity, while renewing pressure on Google.

The speculation comes after news that Google may integrate its Gemini AI into Apple devices. During an antitrust trial in April, Google CEO Sundar Pichai confirmed plans to roll out updates later this year.

According to Gurman, Apple and Google finalised an agreement for Apple to test a Google-developed AI model to boost its voice assistant. The partnership reflects Apple’s mixed strategy of dependence and rivalry with Google.

With a strong record for accurate Apple forecasts, Gurman suggests the company hopes the move will narrow its competitive gap. Whether it can outpace Google, especially given Pixel’s strong AI features, remains an open question.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Altman questions if social media is dominated by bots

OpenAI CEO Sam Altman has sparked debate after admitting he increasingly struggles to distinguish between genuine online conversations and content generated by bots or AI models.

Altman described the ‘strangest experience’ while reading about OpenAI’s Codex model, saying the comments instinctively felt fake even though he knew the growth trend was real. He said the reward incentives of social media, ‘LLM-speak’ and astroturfing make communities feel less genuine.

His comments follow an earlier admission that he had never given much thought to the so-called dead internet theory until now, when accounts run by large language models seemed to be taking over X. The theory claims bots and artificial content dominate online activity, though evidence of coordinated control is lacking.

Reactions were divided, with some users agreeing that online communities have become increasingly bot-like. Others argued the change reflects shifting dynamics in niche groups rather than fake accounts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft brings Anthropic AI into Office 365 as OpenAI tensions rise

The US tech giant Microsoft is expanding its AI strategy by integrating Anthropic’s Claude models into Office 365, adding them to apps like Word, Excel and Outlook instead of relying solely on OpenAI.

Internal tests reportedly showed Anthropic’s systems outperforming OpenAI in specific reasoning and data-processing tasks, prompting Microsoft to adopt a hybrid approach while maintaining OpenAI as a frontier partner.

The shift reflects growing strain between Microsoft and OpenAI, with disputes over intellectual property and cloud infrastructure as well as OpenAI’s plans for greater independence.

By diversifying suppliers, Microsoft reduces risks, lowers costs and positions itself to stay competitive while OpenAI prepares for a potential public offering and develops its own data centres.

Anthropic, backed by Amazon and Google, has built its reputation on safety-focused AI, appealing to Microsoft’s enterprise customers wary of regulatory pressures.

Analysts believe the move could accelerate innovation, spark a ‘multi-model era’ of AI integration, and pressure OpenAI to enhance its technology faster.

The decision comes amid Microsoft’s push to broaden its AI ecosystem, including its in-house MAI-1 model and partnerships with firms like DeepSeek.

Regulators are closely monitoring these developments, given Microsoft’s dominant role in AI investment and the potential antitrust implications of its expanding influence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!