Live exploitation of CVE-2024-1086 across older Linux versions flagged by CISA

CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.

Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.

Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
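As a rough triage aid (not taken from CISA’s advisory), a short script can compare the running kernel release against the fixed version published for your distribution; the threshold below is a placeholder, and because distributions backport fixes, a lower version string does not by itself prove exposure.

```python
# Minimal sketch: compare the running kernel against an assumed "fixed in" version.
# FIXED_IN is a placeholder -- take the real threshold from your vendor advisory or
# the NIST NVD entry for CVE-2024-1086. Distros backport patches, so treat a "below
# threshold" result as a prompt to check the advisory, not as proof of exposure.
import platform
import re

FIXED_IN = (6, 1, 76)  # placeholder value, not an authoritative cutoff

def kernel_tuple(release: str) -> tuple:
    """Extract (major, minor, patch) from a release string such as '6.1.55-generic'."""
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not match:
        raise ValueError(f"Unrecognised kernel release: {release!r}")
    return tuple(int(part) for part in match.groups())

running = kernel_tuple(platform.release())
verdict = "at or above" if running >= FIXED_IN else "BELOW"
print(f"Kernel {'.'.join(map(str, running))} is {verdict} the assumed fixed version.")
```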

Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.

Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Brain-inspired networks boost AI performance and cut energy use

Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.
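The exact construction is in the paper, but the core idea, that each unit connects only to a small neighbourhood of units in the next layer rather than to all of them, can be sketched roughly as follows; the layer sizes and neighbourhood radius are illustrative assumptions, not figures from the study.

```python
# Rough sketch of topographically sparse connectivity: inputs and outputs are laid
# out on a shared 1-D map, and a connection exists only where the two positions are
# close. Sizes and radius are illustrative; the published method differs in detail.
import numpy as np

def topographic_mask(n_in: int, n_out: int, radius: float = 0.05) -> np.ndarray:
    """Boolean mask that is True only for 'nearby' input-output pairs."""
    pos_in = np.linspace(0.0, 1.0, n_in)[:, None]    # positions of input units
    pos_out = np.linspace(0.0, 1.0, n_out)[None, :]  # positions of output units
    return np.abs(pos_in - pos_out) <= radius

mask = topographic_mask(784, 256)
weights = np.random.randn(784, 256) * mask           # zero out non-local connections
print(f"Kept {mask.mean():.1%} of a dense layer's connections")
```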

According to findings published in Neurocomputing, the structure reduces redundant connections and improves performance without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume less energy while maintaining performance.


Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.

An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.
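The article does not spell out the pruning schedule, but a generic magnitude-based pruning step, which disables the weakest of the currently active connections as training proceeds, might look roughly like this; the fraction removed per step is an assumption, and the example reuses the weights and mask from the sketch above.

```python
import numpy as np

def prune_weakest(weights: np.ndarray, mask: np.ndarray, fraction: float = 0.1) -> np.ndarray:
    """Illustrative pruning step: switch off the weakest `fraction` of active connections."""
    active = np.flatnonzero(mask)                        # flat indices of live connections
    n_prune = int(fraction * active.size)
    if n_prune == 0:
        return mask
    magnitudes = np.abs(weights.ravel()[active])
    weakest = active[np.argsort(magnitudes)[:n_prune]]   # smallest-magnitude weights
    new_mask = mask.copy().ravel()
    new_mask[weakest] = False
    return new_mask.reshape(mask.shape)

# Called between training epochs, e.g. mask = prune_weakest(weights, mask)
```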

Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.

The Surrey team said that such a discovery may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Q3 funding in Europe rebounds with growth rounds leading

Europe raised €13.7bn across just over 1,300 rounds in Q3, the strongest quarter since Q2 2024. September alone brought €8.7bn. July and August reflected the familiar summer slowdown.

Growth equity provided €7bn, or 51.6% of the total, the second consecutive quarter to surpass 150 growth rounds. Data centres, AI agents, and GenAI led the activity, as more AI startups scaled with larger cheques.

Early-stage totals were the lowest in 12 months, yet they were ahead of Q3 last year. Lovable’s $200 million Series A at a $1.8 billion valuation stood out. Seven new unicorns included Nscale, Fuse Energy, Framer, IQM, Nothing, and Tide.

ASML led the quarter’s largest deal, investing €1.3bn in Mistral AI’s €1.7bn Series C. France tallied €2.7bn, heavily concentrated in Mistral, while the UK reached €4.49bn. Germany followed with just over €1.5bn, ahead of the Netherlands and Switzerland.

AI-native funding surpassed all verticals for the first time on record, reaching €3.9bn, with deeptech at €2.6bn. Agentic AI logged 129 rounds, sharply higher year-over-year, while data centres edged out agents for capital. Defence and dual-use technology attracted €2.1bn across 44 rounds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyber and energy leaders meet to harden EU power grid resilience

Europe’s 8th Cybersecurity Forum in Brussels brought together more than 200 officials and operators from energy, cybersecurity and technology to discuss how to protect the bloc’s increasingly digital, decentralised grids. ENISA said strengthening energy infrastructure security is urgent as geopolitics and digitalisation raise risk.

Discussions focused on turning new EU frameworks into real-world protection: the Cyber Resilience Act setting security-by-design requirements for connected products, the NIS2 Directive updating obligations and management accountability across critical sectors, and the Network Code on Cybersecurity setting common rules for cross-border electricity flows. Speakers pressed for faster implementation, better public-private cooperation and stronger supply-chain security.

Case studies highlighted live threats. Ukraine’s National Cybersecurity Coordination Center warned of the growing threat of hybrid warfare, citing repeated Russian cyberattacks on its power grid dating back to 2015. ENCS demonstrated how insecure consumer-energy devices like EV chargers, PV inverters, and home batteries can be easily exploited when security-by-design measures are absent.

Organisers closed with a call to standardise best practice, improve information sharing and coordinate operators, regulators and suppliers. As DG Energy’s Michaela Kollau noted, the resilience of Europe’s grids depends on a shared commitment to implementing current legislation and sector cybersecurity measures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trainium2 power surges as AWS’s Project Rainier enters service for Anthropic

Anthropic and AWS switched on Project Rainier, a vast Trainium2 cluster spanning multiple US sites to accelerate Claude’s evolution.

Project Rainier is now fully operational, less than a year after its announcement. AWS engineered an EC2 UltraCluster of Trainium2 UltraServers to deliver massive training capacity. Anthropic says it offers more than five times the compute used for prior Claude models.

UltraServers bind four Trainium2 servers with high-speed NeuronLinks so that 64 chips act as one. Tens of thousands of these servers are networked across buildings through Elastic Fabric Adapter. The design reduces latency within racks while preserving flexible scale across data centres.

Anthropic is already training and serving Claude on Rainier across the US and plans to exceed one million Trainium2 chips by year’s end. More computing should raise model accuracy, speed evaluations, and shorten iteration cycles for new frontier releases.

AWS controls the stack from chip to data centre for reliability and efficiency. Teams tune power delivery, cooling, and software orchestration. New sites add water-wise cooling, contributing to the company’s renewable energy and net-zero goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Humanoid robots set to power Foxconn’s new Nvidia server plant in Houston

Foxconn will add humanoid robots to a new Houston plant building Nvidia AI servers from early 2026. Announced at Nvidia’s developer conference, the move deepens their partnership and positions the site as a US showcase for AI-driven manufacturing.

Humanoid systems based on Nvidia’s Isaac GR00T N are built to perceive parts, adapt on the line, and work with people. Unlike fixed industrial arms, they handle delicate assembly and switch tasks via software updates. Goals include flexible throughput, faster retooling, and fewer stoppages.

AI models are trained in simulation using digital twins and reinforcement learning to improve accuracy and safety. On the line, robots self-tune as analytics predict maintenance and balance workloads, unlocking gains across logistics, assembly, testing, and quality control.

Texas offers proximity to a growing semiconductor and AI cluster, as well as policy support for domestic capacity. Foxconn also plans expansions in Wisconsin and California to meet global demand for AI servers. Scaling output should ease supply pressures around Nvidia-class compute in data centres.

Job roles will shift as routine tasks automate and oversight becomes data-driven. Human workers focus on design, line configuration, and AI supervision, with safety gates for collaboration. Analysts see a template for Industry 4.0 factories running near-continuously with rapid changeovers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and Nokia join forces to build the AI platform for 6G

Nokia and NVIDIA have announced a $1 billion partnership to develop an AI-powered platform that will drive the transition from 5G to 6G networks.

The collaboration will create next-generation AI-RAN systems, combining computing, sensing and connectivity to transform how US mobile networks process data and deliver services.

The partnership marks a strategic step in both companies’ ambition to regain global leadership in telecommunications.

By integrating NVIDIA’s new Aerial RAN Computer and Nokia’s AI-RAN software, operators can upgrade existing networks through software updates instead of complete infrastructure replacements.

T-Mobile US will begin field tests in 2026, supported by Dell’s PowerEdge servers.

NVIDIA’s investment and collaboration with Nokia aim to strengthen the foundation for AI-native networks that can handle the rising demand from agentic, generative and physical AI applications.

These networks are expected to support future 6G use cases, including drones, autonomous vehicles and advanced augmented reality systems.

Both companies see AI-RAN as the next evolution of wireless connectivity, uniting data processing and communication at the edge for greater performance, energy efficiency and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia and Deutsche Telekom plan €1 billion AI data centre in Germany

Plans are being rolled out for a €1 billion data centre in Germany to bolster Europe’s AI infrastructure, with Nvidia and Deutsche Telekom set to co-fund the project.

The facility is expected to serve enterprise customers, including SAP SE, Europe’s largest software company, and to deploy around 10,000 advanced chips known as graphics processing units (GPUs).

While significant for Europe, the build is modest compared with gigawatt-scale sites elsewhere, highlighting the region’s push to catch up with US and Chinese capacity.

An announcement is anticipated next month in Berlin alongside senior industry and government figures, with Munich identified as the planned location.

The move aligns with EU efforts to expand AI compute, including the €200 billion initiative announced in February to grow capacity over the next five to seven years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents public-private partnership at its best, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Two founders turn note-taking into an AI success

Two 20-year-old drop-outs, Rudy Arora and Sarthak Dhawan, are behind Turbo AI, an AI-powered notetaker that has grown to around 5 million users and reached a multi-million-dollar annual recurring revenue (ARR) in a short timeframe.

Their app addresses a clear pain point: meetings, lectures, and long videos produce information overload. Turbo AI uses generative AI to convert audio, typed notes or uploads into structured summaries, highlight key points and help users organise insights. The founders describe it as a ‘productivity assistant’ more than a general-purpose chat agent.

The business model appears lean: freemium acquisition is scaling quickly, and power users are converting to paid subscriptions. The insight is that a well-targeted niche tool can win strong uptake even in a crowded productivity-AI market.

Arora and Dhawan say they kept the feature set focused and user experience simple, enabling rapid word-of-mouth growth.

The growth raises interesting implications for enterprise and consumer AI alike. While large language models dominate headlines, tools like Turbo AI show the value of vertical-specific applications addressing tangible workflows (e.g., note-taking, summarisation). It also underscores how younger founders are building AI tools outside the major tech hubs and scaling globally.

At this stage, challenges remain: user retention, differentiation in a field where major players (Microsoft, Google, OpenAI) are adding similar capabilities, and privacy/data governance (especially with audio and meeting content). However, the early results suggest that targeted AI productivity tools can achieve a meaningful scale quickly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!