Humanoid robots set to power Foxconn’s new Nvidia server plant in Houston

Foxconn will add humanoid robots to a new Houston plant building Nvidia AI servers from early 2026. Announced at Nvidia’s developer conference, the move deepens their partnership and positions the site as a US showcase for AI-driven manufacturing.

Humanoid systems based on Nvidia’s Isaac GR00T N are built to perceive parts, adapt on the line, and work with people. Unlike fixed industrial arms, they handle delicate assembly and switch tasks via software updates. Goals include flexible throughput, faster retooling, and fewer stoppages.

AI models are trained in simulation using digital twins and reinforcement learning to improve accuracy and safety. On the line, robots self-tune as analytics predict maintenance and balance workloads, unlocking gains across logistics, assembly, testing, and quality control.
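For readers curious what training in simulation can look like in miniature, the sketch below is a deliberately simplified illustration rather than Nvidia’s Isaac tooling: a toy placement policy is tuned against a simulated digital twin by basic policy search, with the task, reward, and search method chosen purely for illustration.

```python
# Minimal, illustrative sketch of policy search in a simulated "digital twin".
# This is NOT Nvidia's Isaac GR00T pipeline; the toy task, reward, and
# hill-climbing search stand in for the simulation-based reinforcement
# learning described above.

import random

def simulate_episode(gain, steps=50, dt=0.1):
    """Digital twin: a part starts away from its target slot; the policy
    nudges it proportionally to the remaining error. Returns total reward
    (negative squared error, so higher is better)."""
    position, target = 0.0, 1.0
    reward = 0.0
    for _ in range(steps):
        error = target - position
        action = gain * error                                # proportional policy
        position += action * dt + random.gauss(0.0, 0.005)   # noisy dynamics
        reward -= error ** 2                                  # penalise distance from target
    return reward

def train(iterations=200):
    """Random-search hill climbing: perturb the policy parameter in simulation
    and keep changes that improve average reward."""
    gain, best = 0.5, float("-inf")
    for _ in range(iterations):
        candidate = gain + random.gauss(0.0, 0.2)
        score = sum(simulate_episode(candidate) for _ in range(5)) / 5
        if score > best:
            gain, best = candidate, score
    return gain, best

if __name__ == "__main__":
    trained_gain, avg_reward = train()
    print(f"trained gain={trained_gain:.2f}, avg reward={avg_reward:.3f}")
```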

Texas, US, offers proximity to a growing semiconductor and AI cluster, as well as policy support for domestic capacity. Foxconn also plans expansions in Wisconsin and California to meet global demand for AI servers. Scaling output should ease supply pressures around Nvidia-class compute in data centres.

Job roles will shift as routine tasks automate and oversight becomes data-driven. Human workers will focus on design, line configuration, and AI supervision, with safety gates for collaboration. Analysts see a template for Industry 4.0 factories running near-continuously with rapid changeovers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia and Deutsche Telekom plan €1 billion AI data centre in Germany

Plans are being rolled out for a €1 billion data centre in Germany to bolster Europe’s AI infrastructure, with Nvidia and Deutsche Telekom set to co-fund the project.

The facility is expected to serve enterprise customers, including SAP SE, Europe’s largest software company, and to deploy around 10,000 advanced chips known as graphics processing units (GPUs).

While significant for Europe, the build is modest compared with gigawatt-scale sites elsewhere, highlighting the region’s push to catch up with US and Chinese capacity.

An announcement is anticipated next month in Berlin alongside senior industry and government figures, with Munich identified as the planned location.

The move aligns with EU efforts to expand AI compute, including the €200 billion initiative announced in February to grow capacity over the next five to seven years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI200 and AI250 mark a rack-scale inference push from Qualcomm

Qualcomm unveiled AI200 and AI250 data-centre accelerators aimed at high-throughput, low-TCO generative AI inference. AI200 targets rack-level deployment with high performance per dollar per watt and 768 GB of LPDDR per card for large models.

AI250 introduces a near-memory architecture that boosts effective memory bandwidth by over tenfold while lowering power draw. Qualcomm pitches the design for disaggregated serving, improving hardware utilisation across large fleets.

Both arrive as full racks with direct liquid cooling, PCIe for scale-up, Ethernet for scale-out, and confidential computing. Qualcomm quotes around 160 kW per rack for thermally efficient, dense inference.

A hyperscaler-grade software stack spans apps to system software with one-click onboarding of Hugging Face models. Support covers leading frameworks, inference engines, and optimisation techniques to simplify secure, scalable deployments.
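Qualcomm has not detailed the stack’s interfaces in this announcement, so the snippet below is only a rough, generic illustration of the kind of Hugging Face workflow such ‘one-click onboarding’ would wrap; it uses the widely adopted transformers library, the model name is a placeholder, and nothing here is Qualcomm-specific.

```python
# Generic Hugging Face inference via the transformers library. This is NOT
# Qualcomm's onboarding tooling (which the announcement does not detail);
# it only shows the standard Hub workflow such a stack would automate.
# The model name "gpt2" is a placeholder.

from transformers import pipeline

# Pull a model from the Hugging Face Hub and build a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Run one inference request.
result = generator("Data-centre inference accelerators are", max_new_tokens=30)
print(result[0]["generated_text"])
```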

Commercial timing splits the roadmap: AI200 in 2026 and AI250 in 2027. Qualcomm commits to an annual cadence for data-centre inference, aiming to lead in performance, energy efficiency, and total cost of ownership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AMD powers US AI factory supercomputers for national research

The US Department of Energy and AMD are joining forces to expand America’s AI and scientific computing power through two new supercomputers at Oak Ridge National Laboratory.

Named Lux and Discovery, the systems will drive the country’s sovereign AI strategy, combining public and private investment worth around $1 billion to strengthen research, innovation, and security infrastructure.

Lux, arriving in 2026, will become the nation’s first dedicated AI factory for science.

Built with AMD’s EPYC CPUs and Instinct GPUs alongside Oracle and HPE technologies, Lux will accelerate research across materials, medicine, and advanced manufacturing, supporting the US AI Action Plan and boosting the Department of Energy’s AI capacity.

Discovery, set for deployment in 2028, will deepen collaboration between the DOE, AMD, and HPE. Powered by AMD’s next-generation ‘Venice’ CPUs and MI430X GPUs, Discovery will train and deploy AI models on secure US-built systems, protecting national data and competitiveness.

It aims to deliver faster energy, biology, and national security breakthroughs while maintaining high efficiency and open standards.

AMD’s CEO, Dr Lisa Su, said the collaboration represents the best of public-private partnership, advancing the nation’s foundation for science and innovation.

US Energy Secretary Chris Wright described the initiative as proof that America leads when government and industry work together toward shared AI and scientific goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qualcomm and HUMAIN power Saudi Arabia’s AI transformation

HUMAIN and Qualcomm Technologies have launched a collaboration to deploy advanced AI infrastructure in Saudi Arabia, aiming to position the Kingdom as a global hub for AI.

Announced ahead of the Future Investment Initiative conference, the project will deliver the world’s first fully optimised edge-to-cloud AI system, expanding Saudi Arabia’s regional and global inferencing services capabilities.

In 2026, HUMAIN plans to deploy 200 megawatts of Qualcomm’s AI200 and AI250 rack solutions to power large-scale AI inference services.
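For scale, Qualcomm quotes roughly 160 kW per AI200/AI250 rack, so a 200-megawatt deployment would correspond to on the order of 1,250 racks.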

The partnership combines HUMAIN’s regional infrastructure and full AI stack with Qualcomm’s semiconductor expertise, creating a model for nations seeking to develop sovereign AI ecosystems.

The initiative will also integrate HUMAIN’s Saudi-developed ALLaM models with Qualcomm’s AI platforms, offering enterprise and government customers tailor-made solutions for industry-specific needs.

The collaboration supports Saudi Arabia’s strategy to drive economic growth through AI and semiconductor innovation, reinforcing its ambition to lead the next wave of global intelligent computing.

Qualcomm’s CEO Cristiano Amon said the partnership would help the Kingdom build a technology ecosystem to accelerate its AI ambitions.

HUMAIN CEO Tareq Amin added that combining local insight with Qualcomm’s product leadership will establish Saudi Arabia as a key player in global AI and semiconductor development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gigawatt-scale AI marks Anthropic’s next compute leap

Anthropic will massively expand its use of Google Cloud, planning to deploy up to 1 million TPUs and bring well over a gigawatt of capacity online in 2026. The multiyear investment, worth tens of billions of dollars, will accelerate research and product development.

Google Cloud CEO Thomas Kurian said Anthropic’s move reflects TPUs’ price-performance and efficiency, citing ongoing innovations and the seventh-generation ‘Ironwood’ TPU. Google will add capacity and drive further efficiency across its accelerator portfolio.

Anthropic now serves over 300,000 business customers, with large accounts up nearly sevenfold year over year. Added compute will meet demand while enabling deeper testing, alignment research, and responsible deployment at a global scale.

CFO Krishna Rao said the expansion keeps Claude at the frontier for Fortune 500s and AI-native startups alike. Increased capacity ensures reliability as usage and mission-critical workloads grow rapidly.

Anthropic’s diversified strategy spans Google TPUs, Amazon Trainium, and NVIDIA GPUs. It remains committed to Amazon as its primary training partner, including Project Rainier’s vast US clusters, and will continue investing to advance model capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea moves to lead the AI era with OpenAI’s economic blueprint

Poised to become a global AI powerhouse, South Korea has the right foundations in place: advanced semiconductor production, robust digital infrastructure, and a highly skilled workforce.

OpenAI’s new Economic Blueprint for Korea sets out how the nation can turn those strengths into broad, inclusive growth through scaled and trusted AI adoption.

The blueprint builds on South Korea’s growing momentum in frontier technology.

Following OpenAI’s first Asia–Pacific country partnership, initiatives such as Stargate with Samsung and SK aim to expand advanced memory supply and explore next-generation AI data centres alongside the Ministry of Science and ICT.

A new OpenAI office in Seoul, along with collaboration with Seoul National University, further signals the country’s commitment to becoming an AI hub.

The strategy rests on two complementary paths: building sovereign AI capabilities in infrastructure, data governance, and GPU supply, while deepening cooperation with frontier developers like OpenAI.

The aim is to enhance operational maturity and cost efficiency across key industries, including semiconductors, shipbuilding, healthcare, and education.

By combining domestic expertise with global partnerships, South Korea could boost productivity, improve welfare services, and foster regional growth beyond Seoul. With decisive action, the nation stands ready to transform from a fast adopter into a global standard-setter for safe, scalable AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines Japan’s AI Blueprint for inclusive economic growth

A new Japan Economic Blueprint released by OpenAI sets out how AI can power innovation, competitiveness, and long-term prosperity across the country. The plan estimates that AI could add more than ¥100 trillion to Japan’s economy and raise GDP by up to 16%.
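For scale, Japan’s nominal GDP is roughly ¥600 trillion, so an additional ¥100 trillion is broadly consistent with the report’s upper estimate of a 16% uplift.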

Centred on inclusive access, infrastructure, and education, the Blueprint calls for equal AI opportunities for citizens and small businesses, national investment in semiconductors and renewable energy, and expanded lifelong learning to build an adaptive workforce.

AI is already reshaping Japanese industries from manufacturing and healthcare to education and public administration. Factories reduce inspection costs, schools use ChatGPT Edu for personalised teaching, and cities from Saitama to Fukuoka employ AI to enhance local services.

OpenAI suggests that Japan’s focus on ethical and human-centred innovation could make it a model for responsible AI governance. By aligning digital and green priorities, the report envisions technology driving creativity, equality, and shared prosperity across generations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto hiring snaps back as AI cools

Tech firms led crypto’s hiring rebound, adding over 12,000 roles since late 2022, according to A16z’s State of Crypto 2025. Finance and consulting contributed 6,000, offsetting talent pulled into AI after ChatGPT’s debut. Net, crypto gained 1,000 positions as workers rotated in from tech, fintech, and education.

The recovery tracks a market turn: crypto capitalisation topping US$4T and new Bitcoin highs. A friendlier US policy stance on stablecoins and digital-asset oversight buoyed sentiment. Institutions from JPMorgan to BlackRock and Fidelity widened offerings beyond pilots.

Hiring is diversifying beyond developers toward compliance, infrastructure, and product. Firms are moving from proofs of concept to production systems with clearer revenue paths. Result: broader role mix and steadier talent pipelines.

A16z contrasts AI centralisation with crypto’s open ethos. OpenAI/Anthropic dominate AI-native revenue; big clouds hold most of the infrastructure share; NVIDIA leads GPUs. Crypto advocates pitch blockchains as a counterweight via verifiable compute and open rails.

Utility signals mature, too. Stablecoins settled around US$9T in 12 months, up 87% year over year. That’s over half of Visa’s annual volume and five times PayPal’s.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europa chip by Axelera targets NVIDIA’s grip on AI accelerators

Axelera AI has introduced Europa, a new processor built to run modern AI apps on everything from small edge devices to full servers. It focuses on practical speed and low power use. The aim is to offer NVIDIA-rivalling performance without data centre-level budgets.

Inside are eight AI cores that do the heavy lifting, positioned to challenge NVIDIA’s lead in real-world inference. Helper processors handle setup and cleanup so the main system isn’t slowed down. A built-in video decoder offloads common media jobs.

Europa pairs fast on-chip memory with high-bandwidth external memory to cut common AI slowdowns. Axelera says this beats NVIDIA on speed per watt and per dollar in everyday inference. The payoff is cooler, smaller, more affordable deployments.

It ships as a tiny 35×35 mm module or as PCIe accelerator cards that scale up. That’s the same slot where NVIDIA cards often sit today. A built-in secure enclave protects sensitive data.

Research and industry partners are lining up pilots, casting Europa as a real NVIDIA rival. Early names include SURF, Cineca, Ultralytics, Advantech, SECO, Multiverse Computing, and E4. Axelera targets the first half of 2026 for chips and cards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!