Qwen3-Max-Thinking hits perfect scores as Alibaba raises the bar on AI reasoning

Alibaba unveiled Qwen3-Max-Thinking, which scored 100 percent on AIME 2025 and HMMT, matching OpenAI’s top model on reasoning tests. It targets high-precision problem-solving across algebra, number theory, and probability. Researchers regard elite maths contests as strong proxies for reasoning.

Built on Qwen3-Max, a trillion-parameter flagship, the thinking variant emphasises step-by-step solutions. Alibaba says it matches or beats Claude Opus 4, DeepSeek V3.1, Grok 4, and GPT-5 Pro, and its positioning stresses accuracy, traceability, and controllable latency.

A live trading trial added momentum. In a two-week crypto experiment, Qwen3-Max returned 22.3 percent on a 10,000-US-dollar stake, while competing systems underperformed: DeepSeek managed 4.9 percent and several US models booked losses.

Access is available via the Qwen web chatbot and Alibaba Cloud APIs. Early adopters can test tool use and stepwise reasoning on technical tasks. Enterprises are exploring finance, research, and operations cases requiring reliability and auditability.
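For developers who want to try the API route, the sketch below shows one way to query a Qwen model through Alibaba Cloud's OpenAI-compatible endpoint. It is a minimal illustration only: the endpoint URL, model name, and environment variable are assumptions to be checked against Alibaba Cloud's current documentation, not details confirmed in this announcement.

```python
# Minimal sketch: querying a Qwen model via Alibaba Cloud's OpenAI-compatible API.
# The endpoint URL, model name, and env var below are assumptions; confirm them
# against the current Alibaba Cloud Model Studio / DashScope documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],  # assumed env var holding your key
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen3-max",  # hypothetical identifier for the thinking-capable flagship
    messages=[
        {"role": "user", "content": "Prove that the sum of two odd integers is even."},
    ],
)

print(response.choices[0].message.content)
```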

Alibaba researchers say further tuning will broaden task coverage without diluting peak maths performance. Plans include multilingual reasoning, safety alignment, and robustness under distribution shift. Community benchmarks and contests will track progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AWS becomes key partner in OpenAI’s $38 billion AI growth plan 

Amazon Web Services (AWS) and OpenAI have entered a $38 billion, multi-year partnership that will see OpenAI run and scale its AI workloads on AWS infrastructure. The seven-year deal grants OpenAI access to vast NVIDIA GPU clusters and the capacity to scale to millions of CPUs.

The collaboration aims to meet the growing global demand for computing power driven by rapid advances in generative AI.

OpenAI will immediately begin using AWS compute resources, with all capacity expected to be fully deployed by the end of 2026. The infrastructure will optimise AI performance by clustering NVIDIA GB200 and GB300 GPUs via Amazon EC2 UltraServers for low-latency, large-scale processing.

These clusters will support tasks such as training new models and serving inference for ChatGPT.

OpenAI CEO Sam Altman said the partnership would help scale frontier AI securely and reliably, describing it as a foundation for ‘bringing advanced AI to everyone.’ AWS CEO Matt Garman noted that AWS’s computing power and reliability make it uniquely positioned to support OpenAI’s growing workloads.

The move strengthens an already active collaboration between the two firms. Earlier this year, OpenAI’s models became available on Amazon Bedrock, enabling AWS clients such as Peloton, Thomson Reuters, and Comscore to adopt advanced AI tools.
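As an illustration of how AWS customers typically reach models hosted on Amazon Bedrock, the sketch below uses the Bedrock Runtime Converse API via boto3. The model identifier and region are placeholder assumptions; which OpenAI models are available, and where, should be confirmed in the Bedrock model catalogue.

```python
# Minimal sketch: invoking a Bedrock-hosted model with the Converse API via boto3.
# The modelId and region below are placeholder assumptions; confirm availability
# in the Amazon Bedrock model catalogue for your account and region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="openai.example-model-id",  # hypothetical identifier, not a confirmed ID
    messages=[
        {"role": "user", "content": [{"text": "Summarise our Q3 churn drivers in three bullets."}]},
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```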

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI gains traction at Mercedes-Benz with Celonis platform

Mercedes-Benz reported faster decisions and better on-time delivery at Celosphere 2025. Using Celonis within its MO360 digital production ecosystem, the carmaker unifies production and logistics data, extending visibility across every order, part, and process.

Order-to-delivery operations use AI copilots to forecast timelines, optimise sequencing, and cut delays. After-sales teams surface bottlenecks in service parts logistics and speed customer responses. Quality management utilises anomaly detection to identify deviations early, preventing them from impacting production output.
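The announcement does not describe how that anomaly detection is implemented. Purely to illustrate the general idea, the sketch below flags production cycle times that deviate sharply from a stable baseline using a simple z-score rule; it is a generic example, not Celonis's or Mercedes-Benz's method, and the data and threshold are invented.

```python
# Generic illustration of early deviation detection on cycle-time data.
# Not Celonis's or Mercedes-Benz's implementation; data and threshold are invented.
from statistics import mean, stdev

def is_anomalous(baseline_s, new_value_s, threshold=3.0):
    """Flag a new cycle time more than `threshold` std devs from the baseline."""
    mu = mean(baseline_s)
    sigma = stdev(baseline_s)
    return sigma > 0 and abs(new_value_s - mu) / sigma > threshold

# Hypothetical cycle times (seconds) from a stable reference shift at one station
baseline = [61.8, 62.1, 60.9, 61.5, 62.0, 61.7, 61.9, 62.2]
print(is_anomalous(baseline, 62.3))  # False: within the normal band
print(is_anomalous(baseline, 68.4))  # True: flagged for early investigation
```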

Executives say complete data transparency enables teams to act faster and with greater precision across production and supply chains. The approach helps anticipate change and react to market shifts. Hundreds of active users are expanding adoption as data-driven practices scale across the company.

Celonis positions process intelligence as the backbone that makes enterprise AI valuable. Integrated process data and business context create a live operational twin. The goal is moving from visibility to action, unlocking value through targeted fixes and intelligent automation.

Conference sessions highlighted broader momentum for process intelligence and AI in industry. Leaders discussed governance, standards, and measurable outcomes from digital platforms. Mercedes-Benz framed its results as proof that structured data and AI can lift performance at a global scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deutsche Telekom joins Theta Network as enterprise validator

Deutsche Telekom has joined the Theta Network as a strategic enterprise validator, alongside Google, Samsung and Sony. The company becomes the first major telecom provider to take part in securing the decentralised blockchain platform.

The partnership involves staking THETA tokens and operating validator nodes that support Theta’s layer-1 infrastructure for AI, cloud and media applications. Deutsche Telekom’s unit, T-Systems MMS, will manage the validator operations.

Theta Labs said the collaboration enhances network resilience and underlines growing enterprise interest in decentralised computing. The project’s EdgeCloud system is designed to distribute AI workloads across global nodes more efficiently.

Deutsche Telekom noted that Theta’s decentralised model aligns with its vision of providing reliable, scalable cloud and edge services for future digital ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Q3 funding in Europe rebounds with growth rounds leading

Europe raised €13.7bn across just over 1,300 rounds in Q3, the strongest quarter since Q2 2024. September alone brought €8.7bn. July and August reflected the familiar summer slowdown.

Growth equity provided €7bn, or 51.6% of the total, marking a second consecutive quarter with more than 150 growth rounds. Data centres, AI agents, and GenAI led the activity, as more AI startups scaled with larger cheques.

Early-stage totals were the lowest in 12 months, yet they were ahead of Q3 last year. Lovable’s $200 million Series A at a $1.8 billion valuation stood out. Seven new unicorns included Nscale, Fuse Energy, Framer, IQM, Nothing, and Tide.

ASML led the quarter’s largest deal, investing €1.3bn in Mistral AI’s €1.7bn Series C. France tallied €2.7bn, heavily concentrated in Mistral, while the UK reached €4.49bn. Germany followed with just over €1.5bn, ahead of the Netherlands and Switzerland.

AI-native funding surpassed all verticals for the first time on record, reaching €3.9bn, with deeptech at €2.6bn. Agentic AI logged 129 rounds, sharply higher year on year, while data centres edged out agents for capital. Defence and dual-use technology attracted €2.1bn across 44 rounds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

A new phase for Hyundai and NVIDIA in AI mobility and manufacturing

NVIDIA and Hyundai Motor Group will build a Blackwell-powered AI factory for autonomous vehicles, smart plants and robotics. The partners will co-develop core physical AI, shifting from tool adoption to capability building across mobility, manufacturing and on-device chips.

The programme targets integrated training, validation and deployment on 50,000 Blackwell GPUs. In parallel, both sides will back a physical AI cluster in South Korea with about $3 billion, creating an NVIDIA AI Technology Center, Hyundai’s Physical AI Application Center and regional data centres.

Hyundai will use NVIDIA DGX for model training, Omniverse and Cosmos on RTX PRO Servers for digital twins and simulation, and DRIVE AGX Thor in vehicles and robots for real-time intelligence. The stack underpins design, testing and deployment at an industrial scale.

Factory digital twins will unify data, enable virtual commissioning and improve predictive maintenance, supporting safer human-robot work. Isaac Sim will validate tasks and ergonomics before line deployment, speeding robot integration and lifting throughput, quality and uptime.

Vehicles will gain evolving features via Nemotron and NeMo, from autonomy to personalised assistants and adaptive comfort. DRIVE AGX Thor with safety-certified DriveOS will power driver assistance and next-generation safety, linking car and factory into one intelligent ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

An AI factory brings Nvidia compute into Samsung’s fabs

Nvidia and Samsung outlined a semiconductor AI factory that embeds accelerated computing into production. Over 50,000 GPUs will drive digital twins, predictive maintenance, and real-time optimisation. Partners present the project as a template for autonomous fabs.

The alliance spans design and manufacturing. Samsung uses CUDA-X libraries and accelerated EDA tools to speed simulation and verification, and reports roughly twentyfold gains in computational lithography at advanced nodes after integrating cuLitho into its optical proximity correction (OPC) workflows.

Factory planning and logistics run on Omniverse digital twins and RTX PRO servers. Unified analytics support anomaly detection, capacity planning, and flow balancing. Managers expect shorter ramps and smoother changeovers with higher equipment effectiveness.

Robotics and edge AI extend intelligence to the line. Isaac Sim, Cosmos models, and Jetson Thor target safe collaboration, faster task retargeting, and teleoperation. Samsung’s in-house models enable multilingual assistance and on-site decision support.

A decades-long Nvidia–Samsung relationship underpins the effort, from DRAM supplied for Nvidia’s early NV1 to HBM3E and HBM4 today. Work continues on memory, modules, and foundry services, alongside AI-RAN research with telecoms networks in South Korea and academia that links factory intelligence with next-generation connectivity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stargate Michigan expands OpenAI’s US buildout

OpenAI will build a new campus in Saline Township, Michigan, as part of a 4.5 GW partnership with Oracle. Planned US capacity now exceeds 8 gigawatts. Investment over the next three years is expected to surpass $450 billion.

Leaders frame Stargate as a path to reindustrialise the United States while expanding access to AI’s benefits. Projects generate jobs during buildout and strengthen supply chains, and communities are meant to share in the gains.

Related Digital will develop the Michigan site, with construction expected to begin in early 2026. More than 2,500 union construction roles are planned. A closed-loop cooling system will significantly reduce on-site water consumption.

DTE Energy will utilise existing excess transmission capacity to serve the campus. The project, not local ratepayers, will fund any required upgrades. Local energy supplies are expected to remain unaffected.

Expansion builds on previously announced sites in Texas, New Mexico, Wisconsin, and Ohio. Programmes aim to bolster modern energy and manufacturing systems. Michigan’s engineering heritage makes it a focal point for future AI infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Trainium2 power surges as AWS’s Project Rainier enters service for Anthropic

Anthropic and AWS switched on Project Rainier, a vast Trainium2 cluster spanning multiple US sites to accelerate Claude’s evolution.

Project Rainier is now fully operational, less than a year after its announcement. AWS engineered an EC2 UltraCluster of Trainium2 UltraServers to deliver massive training capacity. Anthropic says it offers more than five times the compute used for prior Claude models.

Each UltraServer binds four Trainium2 servers, each carrying 16 chips, with high-speed NeuronLinks so that 64 chips act as one. Tens of thousands of these servers are then networked through Elastic Fabric Adapter across buildings. The design reduces latency within racks while preserving flexible scale across data centres.

Anthropic is already training and serving Claude on Rainier across the US and plans to exceed one million Trainium2 chips by year’s end. More computing should raise model accuracy, speed evaluations, and shorten iteration cycles for new frontier releases.

AWS controls the stack from chip to data centre for reliability and efficiency. Teams tune power delivery, cooling, and software orchestration. New sites add water-wise cooling, contributing to the company’s renewable energy and net-zero goals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Automakers and freight partners join NVIDIA and Uber to accelerate level 4 deployments

NVIDIA and Uber are partnering on level 4-ready fleets built on DRIVE AGX Hyperion 10, aiming to scale a unified network of human and autonomous drivers from 2027 and to reach 100,000 vehicles over time. A joint AI data factory built on NVIDIA Cosmos will curate the training data.

DRIVE AGX Hyperion 10 is a reference compute and sensor stack for level 4 readiness across cars, vans, and trucks. Automakers can pair validated hardware with compatible autonomy software to speed safer, scalable, AI-defined mobility. Passenger and freight services gain faster paths from prototype to fleet.

Stellantis, Lucid, and Mercedes-Benz are preparing passenger platforms on Hyperion 10. Aurora, Volvo Autonomous Solutions, and Waabi are extending level 4 capability to long-haul trucking. Avride, May Mobility, Momenta, Nuro, Pony.ai, Wayve, and WeRide continue to build on NVIDIA DRIVE.

The production platform pairs dual DRIVE AGX Thor on Blackwell with DriveOS and a qualified multimodal sensor suite. Cameras, radar, lidar, and ultrasonics deliver 360-degree coverage. Modular design plus PCIe, Ethernet, confidential computing, and liquid cooling support upgrades and uptime.

NVIDIA is also launching Halos, a cloud-to-vehicle AI safety and certification system backed by an ANSI-accredited inspection laboratory and certification programme. A multimodal AV dataset and reasoning vision-language-action (VLA) models aim to improve urban driving, testing, and validation for deployments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!