Europe struggles to explain quantum to its citizens

Most Europeans remain unclear about quantum technology, despite increasing attention from EU leaders. A new survey, released on World Quantum Day, reveals that while 78 per cent of adults in France and Germany are aware of quantum technology, only a third truly understand what it is.

Nearly half admitted they had heard of the term but didn’t know what it means.

Quantum science studies the smallest building blocks of the universe: particles such as electrons and atoms that behave in ways classical physics cannot explain. Though invisible even to standard microscopes, they already power technologies such as GPS, MRI scanners and semiconductors.

Quantum tools could lead to breakthroughs in healthcare, cybersecurity and the fight against climate change by enabling ultra-precise imaging, improved encryption, and advanced environmental monitoring.

The survey showed that 47 per cent of respondents expect quantum to positively impact their country within five years, with many hopeful about its role in areas like energy, medicine and fraud prevention.

For example, quantum computers might help simulate complex molecules for drug development, while quantum encryption could secure communications better than current systems.
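The encryption claim can be illustrated with a toy simulation of key sifting in the BB84 quantum key distribution protocol. The code below is a deliberately simplified sketch (no eavesdropper, no channel noise, classical randomness standing in for quantum measurement), with all names and parameters chosen purely for illustration.

```python
import random

def bb84_sift(n_bits: int, seed: int = 7) -> tuple[list[int], list[int]]:
    """Toy BB84 key sifting: no eavesdropper, no channel noise."""
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases = [rng.choice("+x") for _ in range(n_bits)]

    # Bob's measurement: he recovers Alice's bit when his basis matches
    # hers, and gets a random bit otherwise.
    bob_bits = [
        bit if a_basis == b_basis else rng.randint(0, 1)
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]

    # Publicly compare bases and keep only positions where they agree.
    keep = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    return [alice_bits[i] for i in keep], [bob_bits[i] for i in keep]

alice_key, bob_key = bb84_sift(64)
```

In the real protocol, any eavesdropper measuring in the wrong basis would introduce detectable errors into the sifted keys, which is what makes the scheme attractive for securing communications.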

The EU has committed to developing a European quantum chip and is exploring a potential Quantum Act, backed by €65 million in funding under the EU Chips Act. The UK has pledged £121 million for quantum initiatives.

However, Europe still trails behind China and the US, mainly due to limited private investment and slower deployment. Former ECB president Mario Draghi warned that Europe must build a globally competitive quantum ecosystem instead of falling behind further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU plans major staff boost for digital rules

The European Commission is ramping up enforcement of its Digital Services Act (DSA) by hiring 60 more staff to support ongoing investigations into major tech platforms. Although probes into companies such as X, Meta, TikTok, AliExpress and Temu have been under way since December 2023, none has concluded.

The Commission currently has 127 employees working on the DSA and aims to reach 200 by year’s end. Applications for the new roles, including legal experts, policy officers, and data scientists, remain open until 10 May.

The DSA, which came into full effect in February last year, applies to all online platforms in the EU. However, the 25 largest platforms, those with over 45 million monthly users, such as Google, Amazon and Shein, fall under the direct supervision of the Commission rather than national regulators.

The most advanced case is against X, with early findings pointing to a lack of transparency and accountability.

The law has drawn criticism from the current Republican-led US government, which views it as discriminatory. Brendan Carr of the US Federal Communications Commission called the DSA ‘an attack on free speech,’ accusing the EU of unfairly targeting American companies.

In response, EU Tech Commissioner Henna Virkkunen insisted the rules are fair, applying equally to platforms from Europe, the US, and China.

AI chip production begins at TSMC’s Arizona facility

Nvidia has announced a major initiative to produce AI supercomputers in the US in collaboration with Taiwan Semiconductor Manufacturing Co. (TSMC) and several other partners.

The effort aims to create up to US$500 billion worth of AI infrastructure products domestically over the next four years, marking a significant shift in Nvidia’s manufacturing strategy.

Alongside TSMC, other key contributors include Taiwanese firms Hon Hai Precision Industry Co. and Wistron Corp., both known for producing AI servers. US-based Amkor Technology and Taiwan’s Siliconware Precision Industries will also provide advanced packaging and testing services.

Nvidia’s Blackwell AI chips have already begun production at TSMC’s Arizona facility, with large-scale operations planned in Texas through partnerships with Hon Hai in Houston and Wistron in Dallas.

The move could impact Taiwan’s economy, as many Nvidia components are currently produced there. Taiwan’s Economic Affairs Minister declined to comment specifically on the project but assured that the government will monitor overseas investments by Taiwanese firms.

Nvidia said the initiative would help meet surging AI demand while strengthening semiconductor supply chains and increasing resilience amid shifting global trade policies, including new US tariffs on Taiwanese exports.

Siri AI overhaul delayed until 2026

Apple has revealed plans to use real user data, in a privacy-preserving way, to improve its AI models. The company has acknowledged that synthetic data alone is not producing reliable results, particularly in training large language models that power tools like Writing Tools and notification summaries.

To address this, Apple will compare AI-generated content with real emails from users who have opted in to share Device Analytics. The sampled emails remain on the user’s device, with only a signal sent to Apple about which AI-generated message most closely matches real-world usage.
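Apple has not published the exact mechanism, but the described flow can be sketched as follows: score each AI-generated candidate against an on-device sample of real emails and transmit only the index of the closest match, never the emails themselves. The similarity measure and helper names below are illustrative assumptions, not Apple's implementation.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_matching_candidate(local_emails: list[str],
                            candidates: list[str]) -> int:
    """Score each synthetic candidate against the on-device sample and
    return only the index of the closest match; the raw emails never
    leave this function."""
    scores = [max(jaccard(c, e) for e in local_emails) for c in candidates]
    return scores.index(max(scores))

local = ["quarterly report attached for review", "lunch on friday?"]
synthetic = ["see the attached quarterly report", "buy cheap watches now"]
signal = best_matching_candidate(local, synthetic)  # only this int is shared
```

The design point is that the signal sent off-device is a single aggregate choice, which reveals far less than the underlying text.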

The move reflects broader efforts to boost the performance of Apple Intelligence, a suite of features that includes message recaps and content summaries.

Apple has faced internal criticism over slow progress, particularly with Siri, which is now seen as falling behind competitors like Google Gemini and Samsung’s Galaxy AI. The tech giant recently confirmed that meaningful AI updates for Siri won’t arrive until 2026, despite earlier promises of a rollout later this year.

In a rare leadership shakeup, Apple CEO Tim Cook removed AI chief John Giannandrea from overseeing Siri after delays were labelled ‘ugly and embarrassing’ by senior executives.

The responsibility for Siri’s future has been handed to Mike Rockwell, the creator of Vision Pro, who now reports directly to software chief Craig Federighi. Giannandrea will continue to lead Apple’s other AI initiatives.

For more information on these topics, visit diplomacy.edu.

Nvidia brings AI supercomputer production to the US

Nvidia is shifting its AI supercomputer manufacturing operations to the United States for the first time, instead of relying on a globally dispersed supply chain.

In partnership with industry giants such as TSMC, Foxconn, and Wistron, the company is establishing large-scale facilities to produce its advanced Blackwell chips in Arizona and complete supercomputers in Texas. Production is expected to reach full scale within 12 to 15 months.

Over a million square feet of manufacturing space has been commissioned, with key roles also played by packaging and testing firms Amkor and SPIL.

The move reflects Nvidia’s ambition to create up to half a trillion dollars in AI infrastructure within the next four years, while boosting supply chain resilience and growing its US-based operations instead of expanding solely abroad.

These AI supercomputers are designed to power new, highly specialised data centres known as ‘AI factories,’ capable of handling vast AI workloads.

Nvidia’s investment is expected to support the construction of dozens of such facilities, generating hundreds of thousands of jobs and securing long-term economic value.

To enhance efficiency, Nvidia will apply its own AI, robotics, and simulation tools across these projects, using Omniverse to model factory operations virtually and Isaac GR00T to develop robots that automate production.

According to CEO Jensen Huang, bringing manufacturing home strengthens supply chains and better positions the company to meet the surging global demand for AI computing power.

TheStage AI makes neural network optimisation easy

In a move set to ease one of the most stubborn hurdles in AI development, Delaware-based startup TheStage AI has secured $4.5 million to launch its Automatic NNs Analyzer (ANNA).

Instead of requiring months of manual fine-tuning, ANNA allows developers to optimise AI models in hours, cutting deployment costs by up to five times. The technology is designed to simplify a process that has remained inaccessible to all but the largest tech firms, often limited by expensive GPU infrastructure.

TheStage AI’s system automatically compresses and refines models using techniques like quantisation and pruning, adapting them to various hardware environments without locking users into proprietary platforms.
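As a rough illustration of the two techniques named above, the sketch below applies magnitude pruning followed by symmetric 8-bit quantisation to a random weight matrix using NumPy. This is a generic textbook recipe under assumed parameters, not TheStage AI's method.

```python
import numpy as np

def prune_and_quantise(w: np.ndarray, sparsity: float = 0.5):
    """Magnitude pruning followed by symmetric 8-bit quantisation."""
    # Pruning: zero out the smallest-magnitude weights.
    threshold = np.quantile(np.abs(w), sparsity)
    pruned = np.where(np.abs(w) < threshold, 0.0, w)

    # Quantisation: map floats to int8 with a single scale factor.
    scale = float(np.abs(pruned).max()) / 127.0
    q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = prune_and_quantise(w)
w_hat = q.astype(np.float32) * scale  # dequantised approximation
```

The compressed model stores int8 weights plus one scale factor, cutting memory roughly fourfold before the pruning-induced sparsity is even exploited.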

Rather than being tied to cloud deployment, its models, called ‘Elastic models’, can run anywhere from smartphones to on-premise GPUs. This gives startups and enterprises a cost-effective way to adjust quality and speed with a simple interface, akin to choosing video resolution on streaming platforms.

Backed by notable investors including Mehreen Malik and Atlantic Labs, and already used by companies like Recraft.ai, the startup addresses a growing need as demand shifts from AI training to real-time inference.

Unlike competitors acquired by larger corporations and tied to specific ecosystems, TheStage AI takes a dual-market approach, helping both app developers and AI researchers. Their strategy supports scale without complexity, effectively making AI optimisation available to teams of any size.

Founded by a group of PhD holders with experience at Huawei, the team combines deep academic roots with practical industry application.

By offering a tool that streamlines deployment instead of complicating it, TheStage AI hopes to enable broader use of generative AI technologies in sectors where performance and cost have long been limiting factors.

Meta under fire for scrapping diversity and moderation policies

The NAACP Legal Defense Fund (LDF) has withdrawn from Meta’s civil rights advisory group, citing deep concerns over the company’s rollback of diversity, equity and inclusion (DEI) policies and changes to content moderation.

The decision follows Meta’s January announcement that it would end DEI programmes, eliminate fact-checking teams, and revise moderation rules across its platforms.

Civil rights organisations, including LDF, expressed alarm at the time, warning that the changes could silence marginalised voices and increase the risk of online harm.

In a letter to Meta CEO Mark Zuckerberg, they criticised the company for failing to consult the advisory group or consider the impact on protected communities. LDF’s Todd A Cox later said the policy shift posed a ‘grave risk’ to Black communities and public discourse.

LDF also noted that the company had seen progress under previous DEI policies, including a significant increase in Black and Hispanic employees.

Its reversal, the group argues, may breach federal civil rights laws and expose Meta to legal consequences.

LDF urged Meta to assess the effects of its policy changes and increase transparency about how harmful content is reported and removed. Meta has not commented publicly on the matter.

AI could be Geneva’s lifeline in times of crisis

International Geneva is at a crossroads. With mounting budget cuts, declining trust in multilateralism, and growing geopolitical tensions, the city’s role as a hub for global cooperation is under threat.

In his thought-provoking blog, ‘Don’t waste the crisis: How AI can help reinvent International Geneva’, Jovan Kurbalija, Executive Director of Diplo, argues that AI could offer a way forward—not as a mere technological upgrade but as a strategic tool for transforming the city’s institutions and reviving its humanitarian spirit. Kurbalija envisions AI as a means to re-skill Geneva’s workforce, modernise its organisations, and preserve its vast yet fragmented knowledge base.

With professions such as translators, lawyers, and social scientists potentially playing pivotal roles in shaping AI tools, the city can harness its multilingual, highly educated population for a new kind of innovation. A bottom-up approach is key: practical steps like AI apprenticeships, micro-learning platforms, and ‘AI sandboxes’ would help institutions adapt at their own pace while avoiding the pitfalls of top-down tech imposition.

Organisations must also rethink how they operate. AI offers the chance to cut red tape, lighten the administrative burden on NGOs, and flatten outdated hierarchies in favour of more agile, data-driven decision-making.

At the same time, Geneva can lead by example in ethical AI governance—by ensuring accountability, protecting human rights and knowledge, and defending what Kurbalija calls our ‘right to imperfection’ in an increasingly optimised world. Ultimately, Geneva’s challenge is not technological—it’s organisational.

As AI tools become cheaper and more accessible, the real work lies in how institutions and communities embrace change. Kurbalija proposes a dedicated Geneva AI Fund to support apprenticeships, ethical projects, and local initiatives. He argues that this crisis could be Geneva’s opportunity to reinvent itself for survival and to inspire a global model of human-centred AI governance.

Google unveils new AI agent toolkit

This week at Google Cloud Next in Las Vegas, Google revealed its latest push into ‘agentic AI’: software designed to act independently, perform tasks, and communicate with other digital systems.

Central to this effort is the Agent Development Kit (ADK), an open-source toolkit said to let developers build AI agents in under 100 lines of code.

Instead of requiring complex systems, the ADK includes pre-built connectors and a so-called ‘agent garden’ to streamline integration with data platforms like BigQuery and AlloyDB.
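The ‘under 100 lines’ claim is plausible because a minimal agent is essentially a planner plus a tool-dispatch loop. The sketch below shows that generic pattern in plain Python; the tool names and toy planner are invented for illustration and are not the ADK’s actual API.

```python
from typing import Callable

# Registered tools an agent can call; in a real toolkit these would be
# connectors to databases, APIs, or other services.
TOOLS: dict[str, Callable[[str], str]] = {
    "echo": lambda arg: arg,
    "upper": lambda arg: arg.upper(),
}

def toy_planner(task: str) -> list[tuple[str, str]]:
    """Stand-in for a model call: map a task to (tool, argument) steps."""
    return [("upper", task), ("echo", "done")]

def run_agent(task: str) -> list[str]:
    """Execute each planned step by dispatching to the registered tool."""
    return [TOOLS[tool](arg) for tool, arg in toy_planner(task)]

results = run_agent("summarise q3 sales")
```

A production toolkit adds the hard parts this sketch omits: model-driven planning, error handling, memory, and secure connectors.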

Google also introduced a new Agent2Agent (A2A) protocol, aimed at enabling cooperation between agents from different vendors. With over 50 partners, including Accenture, SAP and Salesforce, already involved, the company hopes to establish a shared standard for AI interaction.
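Conceptually, such a protocol standardises the envelope one agent sends another so that agents from different vendors can interoperate. The snippet below shows a hypothetical JSON task envelope; the field names are assumptions made for illustration, not the published A2A schema.

```python
import json

# Hypothetical message envelope for agent-to-agent cooperation; the field
# names here are illustrative, not the actual A2A specification.
request = {
    "protocol": "a2a-sketch/0.1",
    "from_agent": "booking-agent",
    "to_agent": "calendar-agent",
    "task": {"action": "find_slot", "duration_minutes": 30},
}

wire = json.dumps(request)                 # what would travel over HTTP
reply_to = json.loads(wire)["from_agent"]  # receiver routes the reply back
```

The value of a shared standard is precisely that both ends agree on this structure, so a Salesforce agent can parse a task produced by an SAP agent without bespoke glue code.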

Powering these tools is Google’s latest AI chip, Ironwood, a seventh-generation TPU promising tenfold performance gains over earlier models. These chips, designed for use with advanced models like Gemini 2.5, reflect Google’s ambition to dominate AI infrastructure.

Despite the buzz, analysts caution that the hype around AI agents may outpace their actual utility. While vendors like Microsoft, Salesforce and Workday push agentic AI to boost revenue, in some cases even replacing staff, experts argue that current models still fall short of real human-like intelligence.

Instead of widespread adoption, businesses are expected to focus more on managing costs and complexity, especially as economic uncertainty grows. Without strong oversight, these tools risk becoming costly, unpredictable, and difficult to scale.

Virtual AI agents tested in social good experiment

Nonprofit organisation Sage Future has launched an unusual initiative that puts AI agents to work for philanthropy.

In a recent experiment backed by Open Philanthropy, four AI models, including OpenAI’s GPT-4o and two of Anthropic’s Claude Sonnet models, were tasked with raising money for a charity of their choice. Within a week, they collected $257 for Helen Keller International, which supports global health efforts.

The AI agents were given a virtual workspace where they could browse the internet, send emails, and create documents. They collaborated through group chats and even launched a social media account to promote their campaign.

Though most donations came from human spectators observing the experiment, the exercise revealed the surprising resourcefulness of these AI tools. One Claude model even generated profile pictures using ChatGPT and let viewers vote on their favourite.

Despite occasional missteps, including agents pausing for no reason or becoming distracted by online games, the experiment offered insights into the emerging capabilities of autonomous systems.

Sage’s director, Adam Binksmith, sees this as just the beginning, with future plans to introduce conflicting agent goals, saboteurs, and larger oversight systems to stress-test AI coordination and ethics.
