Nvidia brings AI supercomputer production to the US

Nvidia is shifting its AI supercomputer manufacturing operations to the United States for the first time, instead of relying on a globally dispersed supply chain.

In partnership with industry giants such as TSMC, Foxconn, and Wistron, the company is establishing large-scale facilities to produce its advanced Blackwell chips in Arizona and complete supercomputers in Texas. Production is expected to reach full scale within 12 to 15 months.

Over a million square feet of manufacturing space has been commissioned, with key roles also played by packaging and testing firms Amkor and SPIL.

The move reflects Nvidia’s ambition to create up to half a trillion dollars in AI infrastructure within the next four years, while boosting supply chain resilience and growing its US-based operations instead of expanding solely abroad.

These AI supercomputers are designed to power new, highly specialised data centres known as ‘AI factories,’ capable of handling vast AI workloads.

Nvidia’s investment is expected to support the construction of dozens of such facilities, generating hundreds of thousands of jobs and securing long-term economic value.

To enhance efficiency, Nvidia will apply its own AI, robotics, and simulation tools across these projects, using Omniverse to model factory operations virtually and Isaac GR00T to develop robots that automate production.

According to CEO Jensen Huang, bringing manufacturing home strengthens supply chains and better positions the company to meet the surging global demand for AI computing power.

TheStage AI makes neural network optimisation easy

In a move set to ease one of the most stubborn hurdles in AI development, Delaware-based startup TheStage AI has secured $4.5 million to launch its Automatic NNs Analyzer (ANNA).

Instead of requiring months of manual fine-tuning, ANNA lets developers optimise AI models in hours, cutting deployment costs by as much as fivefold. The technology is designed to simplify a process that has remained out of reach for all but the largest tech firms, which can afford the expensive GPU infrastructure it typically demands.

TheStage AI’s system automatically compresses and refines models using techniques like quantisation and pruning, adapting them to various hardware environments without locking users into proprietary platforms.
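
TheStage AI has not published ANNA's internals, but the techniques it names are standard. For illustration only, here is a minimal sketch of post-training pruning and dynamic quantisation using off-the-shelf PyTorch utilities; the toy model and the 30% sparsity level are assumptions, not the startup's settings.

```python
# Minimal sketch of the kind of compression ANNA automates (illustrative only).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(          # stand-in for a trained network
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Pruning: zero out the 30% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the sparsity permanent

# Dynamic quantisation: convert Linear layers to int8 for cheaper CPU inference.
quantised = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

print(quantised(torch.randn(1, 512)).shape)  # torch.Size([1, 10])
```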

Rather than being tied to cloud deployment, the resulting ‘Elastic models’ can run anywhere from smartphones to on-premise GPUs. That gives startups and enterprises a cost-effective way to trade quality against speed through a simple interface, akin to choosing video resolution on a streaming platform.

Backed by notable investors including Mehreen Malik and Atlantic Labs, and already used by companies like Recraft.ai, the startup addresses a growing need as demand shifts from AI training to real-time inference.

Unlike competitors acquired by larger corporations and tied to specific ecosystems, TheStage AI takes a dual-market approach, helping both app developers and AI researchers. Their strategy supports scale without complexity, effectively making AI optimisation available to teams of any size.

Founded by a group of PhD holders with experience at Huawei, the team combines deep academic roots with practical industry application.

By offering a tool that streamlines deployment instead of complicating it, TheStage AI hopes to enable broader use of generative AI technologies in sectors where performance and cost have long been limiting factors.

Meta under fire for scrapping diversity and moderation policies

The NAACP Legal Defense Fund (LDF) has withdrawn from Meta’s civil rights advisory group, citing deep concerns over the company’s rollback of diversity, equity and inclusion (DEI) policies and changes to content moderation.

The decision follows Meta’s January announcement that it would end DEI programmes, eliminate fact-checking teams, and revise moderation rules across its platforms.

Civil rights organisations, including LDF, expressed alarm at the time, warning that the changes could silence marginalised voices and increase the risk of online harm.

In a letter to Meta CEO Mark Zuckerberg, they criticised the company for failing to consult the advisory group or consider the impact on protected communities. LDF’s Todd A Cox later said the policy shift posed a ‘grave risk’ to Black communities and public discourse.

LDF also noted that the company had seen progress under previous DEI policies, including a significant increase in Black and Hispanic employees.

Its reversal, the group argues, may breach federal civil rights laws and expose Meta to legal consequences.

LDF urged Meta to assess the effects of its policy changes and increase transparency about how harmful content is reported and removed. Meta has not commented publicly on the matter.

AI could be Geneva’s lifeline in times of crisis

International Geneva is at a crossroads. With mounting budget cuts, declining trust in multilateralism, and growing geopolitical tensions, the city’s role as a hub for global cooperation is under threat.

In his thought-provoking blog, ‘Don’t waste the crisis: How AI can help reinvent International Geneva’, Jovan Kurbalija, Executive Director of Diplo, argues that AI could offer a way forward—not as a mere technological upgrade but as a strategic tool for transforming the city’s institutions and reviving its humanitarian spirit. Kurbalija envisions AI as a means to re-skill Geneva’s workforce, modernise its organisations, and preserve its vast yet fragmented knowledge base.

With professions such as translators, lawyers, and social scientists potentially playing pivotal roles in shaping AI tools, the city can harness its multilingual, highly educated population for a new kind of innovation. A bottom-up approach is key: practical steps like AI apprenticeships, micro-learning platforms, and ‘AI sandboxes’ would help institutions adapt at their own pace while avoiding the pitfalls of top-down tech imposition.

Organisations must also rethink how they operate. AI offers the chance to cut red tape, lighten the administrative burden on NGOs, and flatten outdated hierarchies in favour of more agile, data-driven decision-making.

At the same time, Geneva can lead by example in ethical AI governance—by ensuring accountability, protecting human rights and knowledge, and defending what Kurbalija calls our ‘right to imperfection’ in an increasingly optimised world. Ultimately, Geneva’s challenge is not technological—it’s organisational.

As AI tools become cheaper and more accessible, the real work lies in how institutions and communities embrace change. Kurbalija proposes a dedicated Geneva AI Fund to support apprenticeships, ethical projects, and local initiatives. He argues that this crisis could be Geneva’s opportunity to reinvent itself for survival and to inspire a global model of human-centred AI governance.

Google unveils new AI agent toolkit

This week at Google Cloud Next in Las Vegas, Google revealed its latest push into ‘agentic AI’: software designed to act independently, perform tasks, and communicate with other digital systems.

Central to this effort is the Agent Development Kit (ADK), an open-source toolkit said to let developers build AI agents in under 100 lines of code.

Instead of requiring complex systems, the ADK includes pre-built connectors and a so-called ‘agent garden’ to streamline integration with data platforms like BigQuery and AlloyDB.
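
For developers wondering what ‘under 100 lines’ looks like in practice, here is a minimal sketch following the pattern in the ADK’s published quickstart; the tool function, agent name and model string below are illustrative assumptions rather than Google’s exact example.

```python
# Minimal sketch of defining an agent with the open-source ADK (pip install google-adk).
# The tool, agent name and model string are illustrative assumptions.
from google.adk.agents import Agent

def lookup_order(order_id: str) -> dict:
    """Toy tool: return the status of an order by its ID."""
    return {"order_id": order_id, "status": "shipped"}

root_agent = Agent(
    name="support_agent",
    model="gemini-2.0-flash",  # any Gemini model the project has access to
    description="Answers customer questions about orders.",
    instruction="Use the lookup_order tool whenever the user asks about an order.",
    tools=[lookup_order],
)
```

Per the toolkit’s quickstart, running `adk web` from the project directory then serves this agent in a local chat interface for testing.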

Google also introduced a new Agent2Agent (A2A) protocol, aimed at enabling cooperation between agents from different vendors. With over 50 partners, including Accenture, SAP and Salesforce, already involved, the company hopes to establish a shared standard for AI interaction.

Powering these tools is Google’s latest AI chip, Ironwood, a seventh-generation TPU promising tenfold performance gains over earlier models. These chips, designed for use with advanced models like Gemini 2.5, reflect Google’s ambition to dominate AI infrastructure.

Despite the buzz, analysts caution that the hype around AI agents may outpace their actual utility. While vendors like Microsoft, Salesforce and Workday are pushing agentic AI to boost revenue, in some cases pitching it as a replacement for staff, experts argue that current models still fall short of genuine human-like intelligence.

Instead of widespread adoption, businesses are expected to focus more on managing costs and complexity, especially as economic uncertainty grows. Without strong oversight, these tools risk becoming costly, unpredictable, and difficult to scale.

Virtual AI agents tested in social good experiment

Nonprofit organisation Sage Future has launched an unusual initiative that puts AI agents to work for philanthropy.

In a recent experiment backed by Open Philanthropy, four AI models, including OpenAI’s GPT-4o and two of Anthropic’s Claude Sonnet models, were tasked with raising money for a charity of their choice. Within a week, they collected $257 for Helen Keller International, which supports global health efforts.

The AI agents were given a virtual workspace where they could browse the internet, send emails, and create documents. They collaborated through group chats and even launched a social media account to promote their campaign.

Though most donations came from human spectators observing the experiment, the exercise revealed the surprising resourcefulness of these AI tools. One Claude model even generated profile pictures using ChatGPT and let viewers vote on their favourite.

Despite occasional missteps, including agents pausing for no reason or becoming distracted by online games, the experiment offered insights into the emerging capabilities of autonomous systems.

Sage’s director, Adam Binksmith, sees this as just the beginning, with future plans to introduce conflicting agent goals, saboteurs, and larger oversight systems to stress-test AI coordination and ethics.

Brinc Drones raises $75M to boost emergency drone tech

Brinc Drones, a Seattle-based startup founded by 25-year-old Blake Resnick, has secured $75 million in fresh funding led by Index Ventures.

Known for its police and public safety drones, Brinc is scaling its presence across emergency services, with the new funds bringing total investment to over $157 million. The round also includes participation from Motorola Solutions, a major player in US security infrastructure.

The company, founded in 2017, is part of a growing wave of American drone startups benefiting from tightened restrictions on Chinese drone manufacturers.

Brinc’s drones are designed for rapid response in hard-to-reach areas and boast unique features, such as the ability to break windows or deliver emergency supplies.

The new partnership with Motorola will enable tighter integration into 911 call centres, allowing AI systems to dispatch drones directly to emergency scenes.

Despite growing competition from other US startups like Flock Safety and Skydio, Brinc remains confident in the market’s potential.

With its enhanced funding and Motorola collaboration, the company is aiming to position itself as a leader in AI-integrated public safety technology while helping shift drone manufacturing back to the US.

Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after an AI scare threatened his career, screenwriter Ed Bennett-Coles has teamed up with songwriter Jamie Hartman to develop ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.
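
ARK’s exact design, including its biometric checks and choice of blockchain, has not been made public, but the general idea of stage-by-stage registration can be sketched as fingerprinting each draft and appending a timestamped record to an append-only log. Everything in the sketch below is an illustrative assumption, not ARK’s implementation.

```python
# Generic sketch of stage-by-stage provenance registration (illustrative only).
import hashlib
import json
import time

ledger: list[dict] = []  # stand-in for an append-only blockchain ledger

def register_stage(content: bytes, artist_id: str, stage: str) -> dict:
    """Fingerprint one draft of a work and chain it to the previous record."""
    record = {
        "artist": artist_id,
        "stage": stage,  # e.g. "initial concept", "demo", "final mix"
        "sha256": hashlib.sha256(content).hexdigest(),
        "timestamp": int(time.time()),
        "prev": ledger[-1]["record_hash"] if ledger else None,
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(record)
    return record

register_stage(b"first verse, rough draft", "artist-001", "initial concept")
register_stage(b"full demo mix", "artist-001", "demo")
print(json.dumps(ledger, indent=2))
```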

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI-generated content to ordering meat online, efficient but soulless, whereas human artistry is more like a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.

DeepMind blocks staff from joining AI rivals

Google DeepMind is enforcing strict non-compete agreements in the United Kingdom, preventing employees from joining rival AI companies for up to a year. The length of the restriction depends on an employee’s seniority and involvement in key projects.

Some DeepMind staff, including those working on Google’s Gemini AI, are reportedly being paid not to work while their non-competes run. The policy comes as competition for AI talent intensifies worldwide.

Employees have voiced concern that these agreements could stall their careers in a rapidly evolving industry. Some are seeking ways around the restrictions, such as moving to countries with less rigid employment laws.

While DeepMind claims the contracts are standard for sensitive work, critics say they may stifle innovation and mobility. The practice remains legal in the UK, even though US regulators have moved to ban similar agreements.

IBM unveils AI-powered mainframe z17

IBM has announced the launch of its most advanced mainframe yet, the z17, powered by the new Telum II processor. Designed to handle more AI operations, the system delivers up to 50% more daily inference tasks than its predecessor.

The z17 features a second-generation on-chip AI accelerator and introduces new tools for managing and securing enterprise data. A Spyre Accelerator add-on, expected later this year, will enable generative AI features such as large language models.

More than 100 clients contributed to the development of the z17, which also supports a forthcoming operating system, z/OS 3.2. The OS update is set to enable hybrid cloud data processing and enhanced NoSQL support.

IBM says the z17 brings AI to the core of enterprise infrastructure, enabling organisations to tap into large data sets securely and efficiently, with strong performance across both traditional and AI workloads.
