Quantinuum’s 12-qubit system achieves unassailable quantum advantage

Researchers have reached a major milestone in quantum computing, demonstrating a task that surpasses the capabilities of classical machines. Using Quantinuum’s 12-qubit ion-trap system, they delivered the first permanent, provable example of quantum supremacy, settling a long-running debate.

The experiment addressed a communication-complexity problem in which one processor (Alice) prepared a quantum state and another (Bob) measured it. After 10,000 trials, the team proved that any classical protocol would need at least 62 bits of communication to come close to the quantum result, and 330 bits to match its performance outright.
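To make the setup concrete, the sketch below is a purely illustrative toy: the task, input sizes and 8-bit message budget are invented and are not those of the experiment. It shows the shape of a one-way communication protocol and why its cost is counted as the length of Alice's message; in the quantum version, Alice sends a small number of qubits in place of the classical bits.

```python
# Toy one-way communication protocol (illustrative only, not Quantinuum's task).
# Alice must compress her input into a fixed-length classical message; Bob then
# answers using only that message and his own input. The protocol's cost is the
# message length in bits.
import random

MESSAGE_BITS = 8   # hypothetical classical budget for this toy task
INPUT_BITS = 16    # Alice's full input is larger than her message

def parity(n: int) -> int:
    return bin(n).count("1") & 1

def alice(x: int) -> int:
    # Keep only the low MESSAGE_BITS bits: a bounded message loses information.
    return x & ((1 << MESSAGE_BITS) - 1)

def bob(message: int, y: int) -> int:
    # Bob's best effort at computing parity(x & y) from the truncated message.
    return parity(message & y)

trials, correct = 10_000, 0
for _ in range(trials):
    x = random.getrandbits(INPUT_BITS)
    y = random.getrandbits(INPUT_BITS)
    correct += bob(alice(x), y) == parity(x & y)

# With too few bits, Bob ends up close to chance; lower bounds like the 62-bit
# figure quantify how many bits any classical protocol needs to do better.
print(f"success rate with {MESSAGE_BITS}-bit messages: {correct / trials:.3f}")
```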

Unlike earlier quantum-supremacy claims, which were later challenged by improved classical algorithms, this gap, the researchers say, cannot be closed by any future breakthrough. Experts hailed the result as a rare proof of permanent quantum advantage and a significant step forward for the field.

However, like past demonstrations, the result has no immediate commercial application. It remains a proof-of-principle demonstration showing that quantum hardware can outperform classical machines under certain conditions, but it has yet to solve real-world problems.

Future work could strengthen the result by running Alice and Bob on separate devices to rule out interaction effects. Experts say the next step is achieving useful quantum supremacy, where quantum machines beat classical ones on problems with real-world value.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Intel to design custom CPUs as part of NVIDIA AI partnership

The two US tech firms, NVIDIA and Intel, have announced a major partnership to develop multiple generations of AI infrastructure and personal computing products.

They say that the collaboration will merge NVIDIA’s leadership in accelerated computing with Intel’s expertise in CPUs and advanced manufacturing.

For data centres, Intel will design custom x86 CPUs for NVIDIA, which NVIDIA will integrate into its AI platforms to power hyperscale and enterprise workloads.

In personal computing, Intel will create x86 system-on-chips that incorporate NVIDIA RTX GPU chiplets, aimed at delivering high-performance PCs for a wide range of consumers.

As part of the deal, NVIDIA will invest $5 billion in Intel common stock at $23.28 per share, pending regulatory approvals.
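As a rough back-of-envelope check (illustrative only; the final share count will depend on the deal's terms and closing conditions), the figures above imply a stake of roughly 215 million shares:

```python
# Share count implied by the quoted figures (approximate, for illustration).
investment = 5_000_000_000   # USD
price_per_share = 23.28      # USD
shares = investment / price_per_share
print(f"~{shares / 1e6:.0f} million shares")   # roughly 215 million
```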

NVIDIA’s CEO Jensen Huang described the collaboration as a ‘fusion of two world-class platforms’ that will accelerate computing innovation, while Intel CEO Lip-Bu Tan said the partnership builds on decades of x86 innovation and will unlock breakthroughs across industries.

The move underscores how AI is reshaping both infrastructure and personal computing. By combining architectures and ecosystems instead of pursuing separate paths, Intel and NVIDIA are positioning themselves to shape the next era of computing at a global scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Huawei unveils roadmap for next-generation AI super pods

Huawei chairman Xu outlined the company’s roadmap for AI computing platforms, revealing plans to launch the Atlas 950 SuperPoD in Q4 2026. The system will use more than 8,000 Ascend chips across 128 racks covering 1,000 square metres, and will offer 6.7 times more computing power and 15 times more memory.

A year later, the Atlas 960 SuperPoD will debut with up to 15,488 Ascend 960 chips, delivering 30 exaflops of computing power and 4,460 TB of memory. Xu said the two systems will remain the world’s most powerful supernodes, with applications extending beyond AI into general-purpose computing in China.
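Taken at face value, the Atlas 960 figures above imply the following rough per-chip numbers; this is a back-of-envelope sketch only, and the numerical precision behind the quoted exaflops is not specified:

```python
# Per-chip figures implied by the quoted Atlas 960 totals (approximate).
chips = 15_488
total_flops = 30e18          # 30 exaflops
total_memory_tb = 4_460      # terabytes
print(f"~{total_flops / chips / 1e15:.2f} petaflops per chip")        # ~1.94
print(f"~{total_memory_tb / chips * 1000:.0f} GB of memory per chip")  # ~288
```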

Huawei faces Western sanctions limiting access to advanced semiconductor nodes. Xu said assembling less advanced chips into super pods lets Huawei compete with rivals like Nvidia at a system level despite lower individual chip performance.

Over the next three years, Huawei will launch three new Ascend chip series: the 950 line (comprising the 950PR and 950DT), the 960, and the 970. The 950PR, optimised for early-stage inference and recommendations, will ship in Q1 2026, while the 950DT, with 2 Tb/s of bandwidth, will launch in Q4 2026.

The 960 will double its predecessor’s computing power and memory capacity and arrive in Q4 2027.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

First quantum-AI data centre launched in New York City

Oxford Quantum Circuits (OQC) and Digital Realty have launched the first quantum-AI data centre in New York City at the JFK10 facility, powered by Nvidia GH200 Grace Hopper Superchips. The project combines superconducting quantum computers with AI supercomputing under one roof.

OQC’s GENESIS quantum computer is the first to be deployed in a New York data centre, designed to support hybrid workloads and enterprise adoption. Future GENESIS systems will ship with Nvidia accelerated computing and CUDA-Q integration as standard.
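As an indication of what CUDA-Q integration looks like in practice, the snippet below is a minimal CUDA-Q kernel of the kind hybrid quantum-classical workflows dispatch. It is a generic sketch rather than anything specific to OQC's GENESIS systems, and by default it runs on a local simulator:

```python
# Minimal CUDA-Q example: define a two-qubit Bell-state kernel and sample it.
# Generic illustration only; not OQC- or GENESIS-specific code.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)       # allocate two qubits
    h(qubits[0])                    # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])    # entangle via a controlled-NOT
    mz(qubits)                      # measure both qubits

counts = cudaq.sample(bell, shots_count=1000)
print(counts)                       # expect roughly even '00' and '11' counts
```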

OQC CEO Gerald Mullally said the centre will drive the AI revolution securely and at scale, strengthening the UK–US technology alliance. Digital Realty CEO Andy Power called it a milestone for making quantum-AI accessible to enterprises and governments.

UK Science Minister Patrick Vallance highlighted the £212 billion economic potential of quantum by 2045, citing applications from drug discovery to clean energy. He said the launch puts British innovation at the heart of next-generation computing.

The centre, embedded in Digital Realty’s PlatformDIGITAL, will support applications in finance, security, and AI, including quantum machine learning and accelerated model training. OQC Chair Jack Boyer said it demonstrates UK–US collaboration in leading frontier technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China’s market watchdog finds Nvidia violated antitrust law

China’s State Administration for Market Regulation (SAMR) has issued a preliminary finding that Nvidia violated antitrust law linked to its 2020 acquisition of Mellanox Technologies. The deal was approved with restrictions, including a ban on bundling and ‘unreasonable trading conditions’ in China.

SAMR now alleges that Nvidia breached those terms. A full investigation is underway. Nvidia shares fell 2.4% in pre-market trading after the announcement. According to the Financial Times, SAMR delayed releasing the findings to gain leverage in trade talks with the USA, currently taking place in Madrid.

At the same time, US export controls on advanced chips remain a challenge for Nvidia. Licensing for its China-specific H20 chips is still under review, affecting Nvidia’s access to the Chinese market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lumex chips bring advanced AI to mobile devices

Arm Holdings has unveiled Lumex, its next-generation chip designs built to bring advanced AI performance directly to mobile devices.

The new designs range from highly energy-efficient chips for wearables to high-performance versions capable of running large AI models on smartphones without cloud support.

Lumex forms part of Arm’s Compute Subsystems business, offering handset makers pre-integrated designs, while also strengthening Arm’s broader strategy to expand smartphone and data centre revenues.

The chips are tailored for 3-nanometre manufacturing processes provided by suppliers such as TSMC, whose technology is also used in Apple’s latest iPhone chips. Arm has indicated further investment in its own chip development to capitalise on demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Quantum AI selected for DARPA quantum benchmarking initiative

Google Quantum AI has been selected by the US Defense Advanced Research Projects Agency (DARPA) to participate in the Quantum Benchmarking Initiative (QBI). QBI is designed to assess quantum computing approaches and judge whether utility-scale, fault-tolerant quantum computers could be developed by 2033.

The selection means Google will work with DARPA’s technical experts, who will be independent validators for its quantum computing roadmap. The evaluation aims to provide rigorous third-party benchmarking, a critical capability for the broader quantum industry.

DARPA’s QBI is not only about validation. It aims to compare different quantum technologies under shared metrics: superconducting qubits, photonic systems, trapped ions and other modalities.

Google’s involvement underscores its ongoing mission to build quantum infrastructure capable of addressing problems such as new medicine design, energy innovation and machine-learning optimisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China creates brain-inspired AI model

Chinese scientists have unveiled SpikingBrain1.0, described as the world’s first large-scale AI language model modelled on the workings of the human brain. The model reduces energy use and runs independently of Nvidia chips, departing from conventional AI architectures.

Developed by the Chinese Academy of Sciences, SpikingBrain1.0 uses spiking neural networks to activate only the required neurons for each task, rather than processing all information simultaneously.

Instead of evaluating every word in parallel, it focuses on the most recent and relevant context, enabling faster and more efficient processing. Researchers claim the model operates 25 to 100 times faster than traditional AI systems while keeping accuracy competitive.
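To make the ‘fire only when needed’ idea concrete, here is a minimal leaky integrate-and-fire neuron, the generic building block of spiking networks. It is a sketch of the principle only, not SpikingBrain1.0’s architecture:

```python
# Leaky integrate-and-fire neuron (illustrative; not SpikingBrain1.0 itself).
# The neuron accumulates input over time and emits a spike only when its
# potential crosses a threshold, so most time steps produce no activity.
import numpy as np

def lif_run(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current   # integrate with leak
        if potential >= threshold:               # fire only when needed
            spikes.append(1)
            potential = 0.0                      # reset after the spike
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(0)
weak_drive = rng.uniform(0.0, 0.4, size=20)      # weak, noisy input
print(lif_run(weak_drive))                       # mostly zeros: event-driven activity
```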

A significant innovation is hardware independence. SpikingBrain1.0 runs on China’s MetaX chip platform, reducing reliance on Nvidia GPUs. It also requires less than 2% of the data typically needed for pre-training large language models, making it more sustainable and accessible.

SpikingBrain1.0 could power low-energy, real-time applications such as autonomous drones, wearable devices, and edge computing. The model highlights a shift toward biologically-inspired AI prioritising efficiency and adaptability over brute-force computation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Broadcom lands $10bn AI chip order

Broadcom has secured a $10 billion agreement to supply custom AI chips, with analysts pointing to OpenAI as the likely customer.

The US semiconductor firm announced the deal alongside better-than-expected third-quarter earnings, driven by growing demand for its ASICs. It forecast a strong fourth quarter as cloud providers seek alternatives to Nvidia, whose GPUs remain costly and supply-constrained.

Chief executive Hock Tan said Broadcom is collaborating with four potential new clients on chip development, adding to existing partnerships with major players such as Google and Meta.

The company recently introduced the Tomahawk Ultra and next-generation Jericho networking chips, further strengthening its position in the AI computing sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Photonic chips open the path to sustainable AI by training with light

A team of international researchers has shown how training neural networks directly with light on photonic chips could make AI faster and more sustainable.

A breakthrough study, published in Nature, involved collaboration between the Politecnico di Milano, EPFL Lausanne, Stanford University, the University of Cambridge, and the Max Planck Institute.

The research highlights how physical neural networks, which use analogue circuits that exploit the laws of physics, can process information in new ways.

Photonic chips developed at the Politecnico di Milano perform mathematical operations such as addition and multiplication through light interference on silicon microchips only a few millimetres in size.
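A toy numerical model illustrates the principle; this is a sketch of interference arithmetic in general, not the Politecnico di Milano chip itself. Each input rides on a complex optical field, a phase-and-attenuation element multiplies it by a weight, and combining the fields adds them, so a detector reads out a weighted sum:

```python
# Interference as arithmetic (toy model, illustrative only).
import numpy as np

inputs = np.array([0.8, 0.3], dtype=complex)        # input field amplitudes
weights = np.array([0.5 * np.exp(1j * 0.0),         # attenuation and phase
                    0.9 * np.exp(1j * np.pi)])      # a pi phase flips the sign

combined = np.sum(weights * inputs)                 # combining fields = addition
print("weighted sum carried by the field:", combined)
print("intensity at the detector:", abs(combined) ** 2)
```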

By eliminating the need to digitise information, these chips dramatically cut both processing time and energy use. Researchers have also pioneered an ‘in-situ’ training technique that enables photonic neural networks to learn tasks entirely through light signals, instead of relying on digital models.

The result is a training process that is faster, more efficient and more robust.

Such advances could lead to more powerful AI models capable of running directly on devices instead of being dependent on energy-hungry data centres.

The approach paves the way for technologies such as autonomous vehicles, portable intelligent sensors and real-time data processing systems that are both greener and quicker.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!