Crypto hiring snaps back as AI cools

Tech firms led crypto’s hiring rebound, adding over 12,000 roles since late 2022, according to A16z’s State of Crypto 2025. Finance and consulting contributed 6,000, offsetting talent pulled into AI after ChatGPT’s debut. On net, crypto gained 1,000 positions as workers rotated in from tech, fintech, and education.

The recovery tracks a market turn: crypto capitalisation topping US$4T and new Bitcoin highs. A friendlier US policy stance on stablecoins and digital-asset oversight buoyed sentiment. Institutions from JPMorgan to BlackRock and Fidelity widened offerings beyond pilots.

Hiring is diversifying beyond developers toward compliance, infrastructure, and product. Firms are moving from proofs of concept to production systems with clearer revenue paths. Result: broader role mix and steadier talent pipelines.

A16z contrasts AI centralisation with crypto’s open ethos. OpenAI and Anthropic dominate AI-native revenue; the big cloud providers hold most of the infrastructure share; NVIDIA leads in GPUs. Crypto advocates pitch blockchains as a counterweight via verifiable compute and open rails.

Utility signals are maturing, too. Stablecoins settled around US$9T in 12 months, up 87% year over year. That’s over half of Visa’s annual volume and five times PayPal’s.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Europa chip by Axelera targets NVIDIA’s grip on AI accelerators

Axelera AI has introduced Europa, a new processor built to run modern AI apps on everything from small edge devices to full servers. It focuses on practical speed and low power use. The aim is to offer NVIDIA-rivalling performance without data-centre-level budgets.

Inside are eight AI cores that do the heavy lifting, positioned to challenge NVIDIA’s lead in real-world inference. Helper processors handle setup and cleanup so the main system isn’t slowed down. A built-in video decoder offloads common media jobs.

Europa pairs fast on-chip memory with high-bandwidth external memory to cut common AI slowdowns. Axelera says this beats NVIDIA on speed per watt and per dollar in everyday inference. The payoff is cooler, smaller, more affordable deployments.
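
One way to see why that memory pairing matters is a back-of-the-envelope roofline estimate: when a workload performs only a few operations per byte fetched, as batch-1 LLM decoding typically does, memory bandwidth rather than peak compute caps throughput. The sketch below uses made-up placeholder figures, not Axelera’s or NVIDIA’s specifications.

    # Illustrative roofline-style estimate of why inference is often memory-bound.
    # All figures are placeholders, not Axelera or NVIDIA specifications.
    peak_compute_tops = 200.0        # hypothetical peak INT8 compute, in TOPS
    memory_bandwidth_gbs = 200.0     # hypothetical external memory bandwidth, in GB/s
    arithmetic_intensity = 2.0       # ops per byte fetched; low for batch-1 decoding

    compute_roof = peak_compute_tops * 1e12                          # ops/s ceiling from compute
    memory_roof = memory_bandwidth_gbs * 1e9 * arithmetic_intensity  # ops/s ceiling from memory

    achievable = min(compute_roof, memory_roof)
    print(f"Achievable: {achievable / 1e12:.1f} TOPS "
          f"(compute roof {compute_roof / 1e12:.0f}, memory roof {memory_roof / 1e12:.1f})")

In this toy example the memory roof, not the compute roof, is binding; keeping hot tensors in fast on-chip memory and feeding the rest from high-bandwidth external memory raises that roof, which is the bottleneck Axelera says Europa is designed to ease.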

It ships as a tiny 35×35 mm module or as PCIe accelerator cards that scale up. That’s the same slot where NVIDIA cards often sit today. A built-in secure enclave protects sensitive data.

Research and industry partners are lining up pilots, casting Europa as a real NVIDIA rival. Early names include SURF, Cineca, Ultralytics, Advantech, SECO, Multiverse Computing, and E4. Axelera targets the first half of 2026 for chips and cards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

USB flash drive inventor and Phison CEO warns of an AI storage crunch

Datuk Pua Khein-Seng, inventor of the single-chip USB flash drive and CEO of Phison, warns that AI machines will generate 1,000 times more data than humans. He says the real bottleneck isn’t GPUs but memory, foreshadowing a global storage crunch as AI scales.

Speaking at GITEX Global, Pua outlined Phison’s focus on NAND controllers and systems that can expand effective memory. Adaptive tiering across DRAM and flash, he argues, will ease constraints and cut costs, making AI deployments more attainable beyond elite data centres.

Flash becomes the expansion valve: DRAM stays scarce and expensive, while high-end GPUs attract an outsized share of the blame for AI cost overruns. By intelligently offloading and caching to NAND, cheaper accelerators can still drive useful workloads, widening access to AI capacity.
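
To make the tiering idea concrete, here is a minimal sketch of a two-tier store: a small ‘hot’ tier standing in for DRAM that evicts least-recently-used entries into a larger ‘cold’ tier standing in for NAND flash. It is an illustrative caching pattern only, not Phison’s controller logic, and it ignores real-world concerns such as write endurance and latency.

    from collections import OrderedDict

    class TieredStore:
        """Toy two-tier key-value store: a small fast tier (DRAM stand-in)
        spills least-recently-used items into a large slow tier (flash stand-in)."""

        def __init__(self, fast_capacity: int):
            self.fast_capacity = fast_capacity
            self.fast = OrderedDict()   # small, fast tier
            self.slow = {}              # large, slow tier

        def put(self, key, value):
            self.fast[key] = value
            self.fast.move_to_end(key)                            # mark as most recently used
            while len(self.fast) > self.fast_capacity:
                old_key, old_val = self.fast.popitem(last=False)  # evict least recently used
                self.slow[old_key] = old_val                      # spill to the flash tier

        def get(self, key):
            if key in self.fast:
                self.fast.move_to_end(key)
                return self.fast[key]
            value = self.slow.pop(key)    # promote cold data back into the fast tier on access
            self.put(key, value)
            return value

    store = TieredStore(fast_capacity=2)
    for i in range(4):
        store.put(f"tensor-{i}", f"weights-{i}")
    print(len(store.fast), len(store.slow))   # 2 hot entries kept, 2 spilled to the slow tier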

Cloud centralisation intensifies the risk. With the US and China dominating the AI cloud market, many countries lack the capital and talent to build sovereign stacks. Pua calls for ‘AI blue-collar’ skills to localise open source and tailor systems to real-world applications.

Storage leadership is consolidating in the US, Japan, Korea, and China, with Taiwan rising as a fifth pillar. Hardware strength alone won’t suffice, Pua says; Taiwan must close the AI software gap to capture more value in the data era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands and China in talks to resolve Nexperia dispute

The Dutch Economy Minister has spoken with his Chinese counterpart to ease tensions following the Netherlands’ recent seizure of Nexperia, a major Dutch semiconductor firm.

China, where most of Nexperia’s chips are produced and sold, reacted by blocking exports, creating concern among European carmakers reliant on Nexperia’s components.

Vincent Karremans said he had discussed ‘further steps towards reaching a solution’ with Chinese Minister of Commerce Wang Wentao.

Both sides emphasised the importance of finding an outcome that benefits Nexperia, as well as the Chinese and European economies.

Meanwhile, Nexperia’s China division has begun asserting its independence, telling employees they may reject ‘external instructions’.

The firm remains a subsidiary of Shanghai-listed Wingtech, which has faced growing scrutiny from European regulators over national security and strategic technology supply chains.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud and NVIDIA join forces to accelerate enterprise AI and industrial digitalisation

NVIDIA and Google Cloud are expanding their collaboration to bring advanced AI computing to a wider range of enterprise workloads.

The new Google Cloud G4 virtual machines, powered by NVIDIA RTX PRO 6000 Blackwell GPUs, are now generally available, combining high-performance computing with scalability for AI, design, and industrial applications.

The announcement also makes NVIDIA Omniverse and Isaac Sim available on the Google Cloud Marketplace, offering enterprises new tools for digital twin development, robotics simulation, and AI-driven industrial operations.

These integrations enable customers to build realistic virtual environments, train intelligent systems, and streamline design processes.

Powered by the Blackwell architecture, the RTX PRO 6000 GPUs support next-generation AI inference and advanced graphics capabilities. Enterprises can use them to accelerate complex workloads ranging from generative and agentic AI to high-fidelity simulations.

The partnership strengthens Google Cloud’s AI infrastructure and cements NVIDIA’s role as the leading provider of end-to-end computing for enterprise transformation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA’s CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple launches M5 with bigger AI gains

Apple unveiled the M5 chip, targeting a major jump in on-device AI. Apple says peak GPU compute for AI is over four times that of the M4, thanks to a Neural Accelerator in each of the chip’s 10 GPU cores.

The CPU pairs up to four performance cores with six efficiency cores for up to 15 percent faster multithreaded work versus the M4. A faster 16-core Neural Engine and higher unified memory bandwidth of 153 GB/s aim to speed Apple Intelligence features.

Graphics upgrades include third-generation ray tracing and reworked caching for up to 45 percent higher performance than the M4 in supported apps. With the help of AI, Apple says, gameplay gets smoother and 3D renders faster, while Vision Pro’s refresh rate can now reach 120 Hz.

The M5 chip reaches the 14-inch MacBook Pro, iPad Pro, and Apple Vision Pro, with pre-orders open. Apple highlights tighter tie-ins with Core ML, Metal 4 and Tensor APIs, and support for larger local models via unified memory up to 32 GB.
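
For a rough sense of what ‘larger local models’ could mean within 32 GB of unified memory, the arithmetic below estimates weight footprints at different precisions. The parameter counts are illustrative assumptions, not Apple’s figures, and weights are only part of the picture: the KV cache, activations, and the OS share the same memory.

    # Rough weight-memory estimate: gigabytes = parameters x bits per parameter / 8.
    # Parameter counts below are illustrative assumptions, not Apple's figures.
    def weight_gb(params_billion: float, bits_per_param: int) -> float:
        return params_billion * 1e9 * bits_per_param / 8 / 1e9

    for params in (3, 8, 14, 30):
        print(f"{params}B params: ~{weight_gb(params, 16):.1f} GB at FP16, "
              f"~{weight_gb(params, 4):.1f} GB at 4-bit")

On that arithmetic, mid-sized models fit comfortably in 4-bit quantised form, while FP16 weights for the largest sizes would exceed the 32 GB ceiling.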

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

SenseTime and Cambricon strengthen cooperation for China’s AI future

SenseTime and Cambricon Technologies have entered a strategic cooperation agreement to jointly develop an open and mutually beneficial AI ecosystem in China. The partnership will focus on software-hardware integration, vertical industry innovation, and the globalisation of AI technologies.

By combining SenseTime’s strengths in large model R&D, AI infrastructure, and industrial applications with Cambricon’s expertise in intelligent computing chips and high-performance hardware, the collaboration supports China’s national ‘AI+’ strategy.

Both companies aim to foster a new AI development model defined by synergy between software and hardware, enhancing domestic innovation and global competitiveness in the AI sector.

The agreement also includes co-development of adaptive chip solutions and integrated AI systems for enterprise and industrial use. By focusing on compatibility between the latest AI models and hardware architectures, the two firms plan to offer scalable, high-efficiency computing solutions.

The partnership seeks to drive intelligent transformation across industries and promote the growth of emerging AI enterprises through joint innovation and ecosystem building.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Tokens-at-scale with Intel’s Crescent Island and Xe architecture

Intel has unveiled the ‘Crescent Island’ data-centre GPU at the Open Compute Project (OCP) Global Summit, targeting real-time inference everywhere with high memory capacity and energy-efficient performance for agentic AI.

Intel’s Sachin Katti said scaling complex inference requires heterogeneous systems and an open, developer-first stack; Intel positions its Xe architecture GPUs to deliver efficient headroom as token volumes surge.

Intel’s approach spans AI PC to data centre and edge, pairing Xeon 6 and GPUs with workload-centric orchestration to simplify deployment, scaling, and developer continuity.

Crescent Island is designed for air-cooled enterprise servers, optimised for power and cost, and tuned for inference with large memory capacity and bandwidth.

Key features include the Xe3P microarchitecture for performance-per-watt gains, 160 GB of LPDDR5X memory, broad data-type support for ‘tokens-as-a-service’, and a unified software stack proven on the Arc Pro B-Series; customer sampling is slated for H2 2026.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Dell joins Microsoft and Nscale on hyperscale AI capacity

Nscale has signed an expanded deal with Microsoft to deliver about 200,000 NVIDIA GB300 GPUs across Europe and the US, in collaboration with Dell. The company calls it one of the largest AI infrastructure contracts to date. The build-out targets surging enterprise demand for GPU capacity.

A roughly 240 MW hyperscale AI campus in Texas, leased from Ionic Digital, will host roughly 104,000 GB300s from Q3 2026. Nscale plans to scale the site to 1.2 GW, with Microsoft holding an option on a second 700 MW phase from late 2027. The campus is optimised for air-cooled, power-efficient deployments.
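
Dividing the quoted figures gives a crude sense of scale: roughly 240 MW across about 104,000 GPUs works out to a little over 2 kW of facility power per installed GPU. That naive ratio folds in host servers, networking, storage, and cooling, not just the accelerator itself.

    # Naive scale check using only the figures quoted above.
    campus_power_mw = 240
    gpu_count = 104_000

    kw_per_gpu = campus_power_mw * 1_000 / gpu_count
    print(f"~{kw_per_gpu:.1f} kW of facility power per installed GPU")
    # Total campus power divided by GPU count, so it includes host servers,
    # networking, storage, and cooling overhead, not the GB300 alone.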

In Europe, Nscale will deploy about 12,600 GB300s from Q1 2026 at Start Campus in Sines, Portugal, supporting sovereign AI needs within the EU. A separate UK facility at Loughton will house around 23,000 GB300s from Q1 2027. The 50 MW site is scalable to 90 MW to support Azure services.

A Norway programme also advances the Aker-Nscale joint venture’s plans for about 52,000 GB300s at Narvik, alongside Nscale’s GW+ greenfield sites and orchestration aimed at training, fine-tuning, and inference at scale. Microsoft emphasises sustainability and global availability.

Both firms cast the pact as deepening transatlantic tech ties and accelerating the rollout of next-gen AI services. Nscale says few providers can deploy GPU fleets at this pace. The roadmap points to sovereign-grade, multi-region capacity with lower-latency platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!