Europe struggles to explain quantum to its citizens

Most Europeans remain unclear about quantum technology, despite increasing attention from EU leaders. A new survey, released on World Quantum Day, reveals that while 78 per cent of adults in France and Germany are aware of quantum, only a third truly understand what it is.

Nearly half admitted they had heard of the term but did not know what it meant.

Quantum science studies the smallest building blocks of the universe, such as electrons and atoms, which behave in ways classical physics cannot explain. Though invisible even to standard microscopes, these particles already power technologies such as GPS, MRI scanners and semiconductors.

Quantum tools could lead to breakthroughs in healthcare, cybersecurity, and climate change, by enabling ultra-precise imaging, improved encryption, and advanced environmental monitoring.

The survey showed that 47 per cent of respondents expect quantum to positively impact their country within five years, with many hopeful about its role in areas like energy, medicine and fraud prevention.

For example, quantum computers might help simulate complex molecules for drug development, while quantum encryption could secure communications better than current systems.

The EU has committed to developing a European quantum chip and is exploring a potential Quantum Act, backed by €65 million in funding under the EU Chips Act. The UK has pledged £121 million for quantum initiatives.

However, Europe still trails behind China and the US, mainly due to limited private investment and slower deployment. Former ECB president Mario Draghi warned that Europe must build a globally competitive quantum ecosystem instead of falling behind further.

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.
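
To make the idea of blocking accounts before an advert ever runs more concrete, here is a minimal, purely illustrative Python sketch of how a handful of such detection signals could be combined into a pre-serving risk check. The signal names, weights and threshold below are invented for illustration and do not describe Google's actual systems.

```python
# Purely illustrative: a toy pre-screening check that scores a new advertiser
# account against simple risk signals before any ad is served. Signal names,
# weights and the threshold are invented; this is not Google's real pipeline.

from dataclasses import dataclass


@dataclass
class AdvertiserAccount:
    business_name_verified: bool   # e.g. name matches a registered business
    payment_flagged: bool          # payment instrument linked to prior fraud
    identity_verified: bool        # advertiser completed identity verification


def risk_score(account: AdvertiserAccount) -> float:
    """Combine a few risk signals into a single score in [0, 1]."""
    score = 0.0
    if not account.business_name_verified:
        score += 0.4
    if account.payment_flagged:
        score += 0.5
    if not account.identity_verified:
        score += 0.3
    return min(score, 1.0)


def should_block_before_serving(account: AdvertiserAccount,
                                threshold: float = 0.7) -> bool:
    """Block the account up front if combined risk exceeds the threshold."""
    return risk_score(account) >= threshold


if __name__ == "__main__":
    suspicious = AdvertiserAccount(business_name_verified=False,
                                   payment_flagged=True,
                                   identity_verified=False)
    print(should_block_before_serving(suspicious))  # True: blocked pre-serving
```

A real system would draw on far richer signals and model outputs, but the gating pattern, score the account and decide before any ad is served, is the point of the sketch.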

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions, with 39.2 million accounts, while India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion. Nearly half a billion of those removed were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged previous confusion over enforcement clarity and is now updating its messaging to ensure advertisers understand the reasons behind account actions more clearly.

EU plans major staff boost for digital rules

The European Commission is ramping up enforcement of its Digital Services Act (DSA) by hiring 60 more staff to support ongoing investigations into major tech platforms. It has opened probes into companies such as X, Meta, TikTok, AliExpress and Temu since December 2023, but none has yet concluded.

The Commission currently has 127 employees working on the DSA and aims to reach 200 by year’s end. Applications for the new roles, including legal experts, policy officers, and data scientists, remain open until 10 May.

The DSA, which came into full effect in February last year, applies to all online platforms in the EU. However, the 25 largest platforms, those with more than 45 million monthly users, such as Google, Amazon, and Shein, fall under the direct supervision of the Commission rather than national regulators.

The most advanced case is against X, with early findings pointing to a lack of transparency and accountability.

The law has drawn criticism from the current Republican-led US government, which views it as discriminatory. Brendan Carr of the US Federal Communications Commission called the DSA ‘an attack on free speech,’ accusing the EU of unfairly targeting American companies.

In response, EU Tech Commissioner Henna Virkkunen insisted the rules are fair, applying equally to platforms from Europe, the US, and China.

South Korea’s $23B chip industry boost in response to global trade war

South Korea has announced a $23 billion support package for its semiconductor industry, up from last year's $19 billion, to protect giants like Samsung and SK Hynix from US tariff uncertainty and growing competition from China.

The plan allocates 20 trillion won in financial aid, up from 17 trillion, to drive innovation and production, addressing a 31.8% drop in chip exports to China due to US trade restrictions.

The package responds to US policies under President Trump, including export curbs on high-bandwidth chips to China, which have disrupted global demand. 

At the same time, Finance Minister Choi Sang-mok will negotiate with the US to mitigate potential national security probes on chip trade. 

South Korea’s strategy aims to safeguard a critical economic sector that powers everything from smartphones to AI, especially as its auto industry faces US tariff challenges. 

Analysts view this as a preemptive effort to shield the chip industry from escalating global trade tensions.

Why does it matter?

For South Koreans, the semiconductor sector is a national lifeline, tied to jobs and economic stability, with the government betting big to preserve its global tech dominance. As China’s tech ambitions grow and US policies remain unpredictable, Seoul’s $23 billion investment underscores the cost of staying competitive in a tech-driven world.

Nvidia hit by new US export rules

Nvidia is facing fresh US export restrictions on its H20 AI chips, dealing a blow to the company’s operations in China.

In a filing on Tuesday, Nvidia revealed that it will need a licence to export these chips for the indefinite future, after the US government cited concerns they could be used in a Chinese supercomputer.

The company expects a $5.5 billion charge linked to the controls in its first fiscal quarter of 2026, which ends on 27 April. Shares dropped around 6% in after-hours trading.

The H20 is currently the most advanced AI chip Nvidia can sell to China under existing regulations.

Last week, reports suggested CEO Jensen Huang might have temporarily eased tensions during a dinner at Donald Trump’s Mar-a-Lago resort by promising investments in US-based AI data centres rather than opposing the rules directly.

Just a day before the filing, Nvidia announced plans to manufacture some chips in the US over the next four years, though the specifics were left vague.

Calls for tighter controls had been building, especially after it emerged that China’s DeepSeek used the H20 to train its R1 model, a system that surprised the US AI sector earlier this year.

Government officials had pushed for action, saying the chip’s capabilities posed a strategic risk. Nvidia declined to comment on the new restrictions.

OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI does not dismiss such flexibility outright, but insists that any changes would be made cautiously and with public transparency.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently supported a legal case against the company, warning that a planned corporate restructure might encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability, the former referring to systems that could amplify harm, the latter to those introducing entirely new risks.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.
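
As a rough illustration of how such a two-tier scheme could gate work on a model, the sketch below encodes the ‘high’ and ‘critical’ categories and the corresponding checkpoints at development and release. The class names, flags and policy logic are assumptions made for illustration only; they are not OpenAI’s actual Preparedness Framework tooling.

```python
# Illustrative only: a toy gating check inspired by the 'high' vs 'critical'
# capability split described above. Names and logic are assumptions for the
# sake of the example, not OpenAI's real framework.

from enum import Enum


class Capability(Enum):
    NONE = "none"
    HIGH = "high"          # could amplify existing pathways to harm
    CRITICAL = "critical"  # introduces entirely new risks


def may_continue_development(level: Capability,
                             dev_safeguards_in_place: bool) -> bool:
    """Critical-capability models need safeguards during development itself."""
    if level is Capability.CRITICAL:
        return dev_safeguards_in_place
    return True


def may_deploy(level: Capability,
               deployment_safeguards_in_place: bool) -> bool:
    """High- and critical-capability models need safeguards before release."""
    if level in (Capability.HIGH, Capability.CRITICAL):
        return deployment_safeguards_in_place
    return True


if __name__ == "__main__":
    print(may_continue_development(Capability.CRITICAL,
                                   dev_safeguards_in_place=False))  # False
    print(may_deploy(Capability.HIGH,
                     deployment_safeguards_in_place=True))          # True
```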

xAI adds collaborative workspace to Grok

Elon Musk’s AI firm xAI has introduced a new feature called Grok Studio, offering users a dedicated space to create and edit documents, code, and simple apps.

Available on Grok.com for both free and paying users, Grok Studio opens content in a separate window, allowing for real-time collaboration between the user and the chatbot instead of relying solely on back-and-forth prompts.

Grok Studio functions much like canvas-style tools from other AI developers. It allows code previews and execution in languages such as Python, C++, and JavaScript. The setup mirrors similar features introduced earlier by OpenAI and Anthropic, instead of offering a radically different experience.

All content appears beside Grok’s chat window, creating a workspace that blends conversation with practical development tools.

Alongside this launch, xAI has also announced integration with Google Drive.

The integration will allow users to attach files directly to Grok prompts, letting the chatbot work with documents, spreadsheets, and slides from Drive without manual uploads, making the platform more convenient for everyday tasks and productivity.

Opera brings AI assistant to Opera Mini on Android

Opera, the Norway-based browser maker, has announced the rollout of its AI assistant, Aria, to Opera Mini users on Android. The move represents a strategic effort to bring advanced AI capabilities to users with low-end devices and limited data access, rather than confining such tools to high-spec platforms.

Aria allows users to access up-to-date information, generate images, and learn about a range of topics using a blend of models from OpenAI and Google.

Since its 2005 launch, Opera Mini has been known for saving data during browsing, and Opera claims that the inclusion of Aria won’t compromise that advantage nor increase the app’s size.

The move makes the AI assistant more accessible to users in regions where data efficiency is critical, without forcing them to choose between smart features and performance.

Opera has long partnered with telecom providers in Africa to offer free data to Opera Mini users. However, last year, it had to end its programme in Kenya due to regulatory restrictions around ads on browser bookmark tiles.

Despite such challenges, Opera Mini has surpassed a billion downloads on Android and now serves more than 100 million users globally.

Alongside this update, Opera continues testing new AI functions, including features that let users manage tabs using natural language and tools that assist with task completion.

The effort reflects the company’s ambition to embed AI more deeply into everyday browsing rather than limiting innovation to its main browser.

Samsung brings AI-powered service tool to India

Samsung, already the leading home appliance brand in India by volume, is now enhancing its after-sales service with an AI-powered support tool.

The tech company from South Korea has introduced the Home Appliances Remote Management (HRM) tool, designed to improve service speed, accuracy, and overall customer experience instead of sticking with traditional support methods.

The HRM tool allows customer care teams to remotely diagnose and resolve issues in Samsung smart appliances connected via SmartThings. If a problem can be fixed remotely, staff will ask for the user’s consent before taking control of the device.

If the issue can be solved by the customer, step-by-step instructions are provided instead of sending a technician straight away.

When neither of these options applies, the issue is forwarded directly to service technicians with full diagnostics already completed, cutting down the time spent on-site.
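
A minimal sketch of that triage flow, using hypothetical names rather than Samsung’s actual HRM or SmartThings interfaces, might look like this:

```python
# Hypothetical sketch of the support triage flow described above. Function
# and field names are invented; this is not Samsung's real HRM API.

from enum import Enum, auto


class Resolution(Enum):
    REMOTE_FIX = auto()        # agent fixes the appliance remotely (with consent)
    SELF_SERVICE = auto()      # customer follows step-by-step instructions
    TECHNICIAN_VISIT = auto()  # diagnostics forwarded to a field technician


def triage(diagnosis: dict, user_consents_to_remote: bool) -> Resolution:
    """Decide how a reported appliance issue is handled."""
    if diagnosis.get("remotely_fixable") and user_consents_to_remote:
        return Resolution.REMOTE_FIX
    if diagnosis.get("user_fixable"):
        return Resolution.SELF_SERVICE
    # Neither applies: hand over to a technician with diagnostics attached.
    return Resolution.TECHNICIAN_VISIT


if __name__ == "__main__":
    diagnosis = {"remotely_fixable": False, "user_fixable": True}
    print(triage(diagnosis, user_consents_to_remote=True))  # SELF_SERVICE
```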

The new system reduces the need for in-home visits, shortens waiting times, and increases the uptime of appliances instead of leaving users waiting unnecessarily.

SmartThings also plays a proactive role by automatically detecting issues and offering solutions before customers even need to call.

Samsung India’s Vice President for Customer Satisfaction, Sunil Cutinha, noted that the tool significantly streamlines service, boosts maintenance efficiency, and helps ensure timely product support for users across the country.

Nvidia brings AI supercomputer production to the US

Nvidia is shifting its AI supercomputer manufacturing operations to the United States for the first time, instead of relying on a globally dispersed supply chain.

In partnership with industry giants such as TSMC, Foxconn, and Wistron, the company is establishing large-scale facilities to produce its advanced Blackwell chips in Arizona and complete supercomputers in Texas. Production is expected to reach full scale within 12 to 15 months.

Over a million square feet of manufacturing space has been commissioned, with key roles also played by packaging and testing firms Amkor and SPIL.

The move reflects Nvidia’s ambition to create up to half a trillion dollars in AI infrastructure within the next four years, while boosting supply chain resilience and growing its US-based operations instead of expanding solely abroad.

These AI supercomputers are designed to power new, highly specialised data centres known as ‘AI factories,’ capable of handling vast AI workloads.

Nvidia’s investment is expected to support the construction of dozens of such facilities, generating hundreds of thousands of jobs and securing long-term economic value.

To enhance efficiency, Nvidia will apply its own AI, robotics, and simulation tools across these projects, using Omniverse to model factory operations virtually and Isaac GR00T to develop robots that automate production.

According to CEO Jensen Huang, bringing manufacturing home strengthens supply chains and better positions the company to meet the surging global demand for AI computing power.
