EU delays tech sovereignty package with AI and Chips Act 2

The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.

The package bundles several major initiatives aimed at strengthening EU tech sovereignty, including the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.

The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.

Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.

Meanwhile, the planned open-source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open-source projects, the EU hopes to strengthen its long-term digital autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem that also involves human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers, along with evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.


AI agents test limits of EU rules

AI agents are rapidly gaining traction, raising questions about whether existing EU rules can keep pace. Unlike chatbots, these systems can act autonomously and interact with digital tools on behalf of users.

Experts warn that AI agents require deeper access to personal data and online services to function effectively. Regulators in Europe are monitoring potential risks as the technology becomes more integrated into daily life.

Lawmakers are examining whether current legislation, such as the AI Act and GDPR, adequately covers agent-based systems. Legal experts highlight challenges around contracts, liability and accountability when AI acts independently.

Despite concerns, many governments remain reluctant to introduce new rules, citing regulatory fatigue. Policymakers may rely on existing frameworks unless major incidents force a reassessment of AI oversight.


Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.


NVIDIA expands physical AI ecosystem to accelerate real-world robotics

Partnerships across the robotics sector are positioning NVIDIA at the centre of what is increasingly described as ‘physical AI’, a shift towards intelligent machines capable of perceiving, reasoning and acting in real environments.

A new generation of tools, including NVIDIA Cosmos world models and updated NVIDIA Isaac simulation frameworks, aims to support developers in training and validating robots before deployment.

These systems enable companies to simulate complex environments, reducing the risks and costs of real-world testing.

Industrial robotics leaders such as ABB Robotics, KUKA, and FANUC are integrating NVIDIA technologies into digital twin environments, enabling more accurate modelling of production lines and automation systems.

Advances are also extending into humanoid robotics, where companies are using AI models to develop machines capable of more flexible and adaptive behaviour.

New foundation models, including GR00T systems, are designed to give robots general-purpose capabilities instead of limiting them to specific tasks.

Healthcare and logistics represent additional areas of deployment, with robotics platforms being tested in surgical systems, warehouse automation and manufacturing environments. These applications highlight how physical AI could reshape industries requiring precision, safety and scalability.

Growing collaboration across cloud providers, manufacturers and AI developers suggests that robotics is moving toward a more integrated ecosystem, where simulation, data generation and deployment are increasingly interconnected.


Human-made labels emerge as industries react to AI expansion

Organisations around the world are developing certification labels designed to show that products or creative work were made by humans rather than AI. New badges such as ‘Human made’, ‘AI free’ and ‘Proudly Human’ are appearing across books, films, marketing and websites as industries respond to the rapid spread of AI tools.

At least eight initiatives are now attempting to create a label that could achieve global recognition similar to the Fair Trade mark. Experts warn that competing definitions and inconsistent certification systems could confuse consumers unless a universal standard is agreed upon.

Some schemes allow creators to download AI-free badges with little or no verification, while others use paid auditing processes that rely on analysts and AI detection tools. Researchers note that defining ‘human-made’ is increasingly difficult because AI technologies are embedded in many everyday software tools.

Creative industries are at the centre of the debate as generative AI rapidly produces books, films and music at lower cost and higher speed. Advocates of certification argue that verified human-created content may gain greater value if consumers can clearly distinguish it from AI-generated work.


Seoul deepens ties with global AI developers

South Korea is pursuing a partnership with AI company Anthropic as part of a national strategy to strengthen technological capabilities. Officials are working toward a memorandum of understanding with the developer of the Claude AI system.

The initiative follows discussions between South Korea’s science minister and Anthropic’s chief executive, Dario Amodei, during an AI summit in New Delhi. Authorities are also preparing for the company’s planned office opening in the city in 2026.

Government leaders in South Korea have already expanded cooperation with OpenAI. Policymakers say the strategy aims to build ties with leading global AI developers while supporting domestic innovation.

Officials are also developing a homegrown AI foundation model with local companies. The programme forms part of a national plan to position the country among the world’s leading AI powers.


AI and robotics could offset impact of ageing populations in Asia

Declining fertility rates have long been considered a major risk to economic growth, but analysts suggest the outlook may not be entirely negative for several advanced Asian economies. Rising investment in AI and robotics is increasingly viewed as a way to offset labour shortages caused by ageing populations.

According to analysts at Bank of America Global Research, technological innovation driven by AI and robotics could support productivity growth even as workforces shrink. Strong ecosystems in semiconductors, technology hardware, and industrial machinery allow some countries in the region to deploy advanced technologies faster and at lower cost than many other parts of the world.

South Korea currently has the highest robot density in the world, with about 1,012 industrial robots per 10,000 manufacturing workers. China has 470 and Japan 419, both significantly above the global average of 162, according to 2024 figures from the International Federation of Robotics.

Analysts say governments across East Asia are accelerating the adoption of AI and robotics to address demographic pressures. In particular, China, South Korea, and Japan have expanded investments in robotics, AI systems, and advanced manufacturing technologies to maintain economic productivity.

Population projections highlight the scale of the challenge facing these economies. By 2050, about 37 percent of Japan’s population and nearly 40 percent of South Korea’s population are expected to be aged 65 or older, while China’s share could reach around 31 percent.

Despite concerns about slowing growth, economists argue that advances in AI and robotics could weaken the traditional link between economic output and workforce size. Automation technologies not only replace routine tasks but also enhance human productivity in many industries.

A study by the Bank of Korea estimated that demographic pressures could reduce the country’s gross domestic product by 16.5 percent between 2023 and 2050. However, wider adoption of AI and robotics could limit the decline to around 5.9 percent under favourable conditions.

Some analysts caution that the economic benefits of automation may not be evenly distributed. While AI and robotics can improve productivity, technological gains often benefit capital owners and highly skilled workers more than others.

Economists also warn that consumption may slow as the number of households declines, while governments may face greater fiscal pressure from higher pension and healthcare costs. Policymakers may need to invest in workforce retraining and education to help workers adapt to the growing role of AI and robotics in the economy.


Meta removes encrypted messaging from Instagram DMs

Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.

Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.

The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.

Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.


French court upholds €40 million GDPR fine for Criteo

France’s highest administrative court has upheld a €40 million GDPR fine against advertising technology company Criteo. Regulators concluded that the firm failed to obtain valid consent for tracking users across websites.

The investigation began in 2018 following complaints from privacy groups and examined Criteo’s behavioural advertising model. The authority found that the company did not properly respect users’ rights to access, erasure and transparency.

The ruling also confirmed that pseudonymous identifiers linked to browsing data can still qualify as personal data. Judges rejected arguments that such identifiers were effectively anonymous.

Privacy advocates say the decision strengthens GDPR enforcement across Europe, and experts argue that the case highlights growing scrutiny of online tracking practices used in digital advertising.
