EU delays tech sovereignty package with AI and Chips Act 2

The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.

The package bundles several major initiatives, such as the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.

The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.

Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.

Meanwhile, the planned open source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open source projects, the EU hopes to strengthen its long-term digital autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem driven by organised scam networks, involving human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers and evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.

Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.

Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.

MIT research highlights embedded and enacted risks in AI

Generative AI offers major productivity and growth opportunities, but also brings new risks as organisations move from experiments to full deployment. MIT research highlights key risk areas, including training data, foundation models, user prompts, and system prompts.

Researchers identify two types of risk.

Embedded risks come from the technology itself, shaped by model behaviour, data quality, and vendor updates, and are mostly outside an organisation’s control.

Enacted risks arise from choices in deploying AI, from prompt design to agent permissions, and require strong governance.

Advanced uses such as retrieval-augmented generation (RAG) and autonomous AI agents increase exposure. RAG uses internal data to improve outputs, but may reveal sensitive information or control gaps. AI agents acting across multiple tools can lead to ‘autonomy creep,’ performing tasks without proper oversight.
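To make the RAG exposure concrete, here is a deliberately minimal sketch (not code from the MIT research; the documents, keyword-overlap retrieval heuristic, and prompt format are all invented for illustration) showing how retrieved internal text flows straight into the prompt a model sees:

```python
# Minimal RAG sketch: whatever retrieval returns -- including sensitive
# internal records -- is spliced into the model prompt, and can therefore
# surface in outputs if access controls are missing.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Build the prompt sent to the model from retrieved context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12 percent across the EU region.",
    "CONFIDENTIAL: salary bands for engineering staff.",
    "Office opening hours are 9 to 5 on weekdays.",
]
prompt = build_prompt("What are the salary bands for engineering staff?", docs)
```

Because nothing in this pipeline checks who is asking, the confidential document is retrieved and exposed to the model, which is exactly the control gap the research warns about; real deployments need permission-aware retrieval on top of vector search.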

To manage AI risk, organisations should map tools, assign ownership, track outputs, and use separate strategies for embedded and enacted risks. Vendor engagement, governance frameworks, and technical controls are essential for safe AI use.
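The mapping steps above can be sketched as a simple tool inventory. This is an illustrative data structure, not a framework from the MIT research; the field names and example risks are invented:

```python
# Minimal AI tool inventory: map each tool, assign an owner, and tag
# every risk as "embedded" (from the technology itself) or "enacted"
# (from deployment choices), so the two get separate strategies.

from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    owner: str  # accountable person or team
    risks: dict[str, str] = field(default_factory=dict)  # risk -> kind

    def risks_of_type(self, kind: str) -> list[str]:
        return [r for r, k in self.risks.items() if k == kind]

inventory = [
    AITool("support-chatbot", "customer-ops", {
        "vendor model update changes behaviour": "embedded",
        "agent has write access to CRM": "enacted",
    }),
]

# Embedded risks point towards vendor engagement; enacted risks point
# towards internal governance and technical controls.
embedded = [r for t in inventory for r in t.risks_of_type("embedded")]
enacted = [r for t in inventory for r in t.risks_of_type("enacted")]
```

Splitting the register this way keeps the response proportionate: embedded items feed vendor reviews and monitoring, while enacted items feed permission reviews and prompt governance.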

xAI faces lawsuit over alleged misuse of AI image generation

Legal action has been filed against xAI in a US federal court, with plaintiffs alleging that its AI system Grok was used to generate harmful and explicitly manipulated images of minors.

The lawsuit claims that xAI failed to implement adequate safeguards to prevent the creation of such content, despite similar protections adopted by other AI developers.

According to the filing, the technology enabled the transformation of real images into explicit material without sufficient restrictions.

Plaintiffs seek to establish a class action, arguing that the company should be held accountable for both direct and third-party uses of its models. Legal arguments focus on whether responsibility extends to external applications built using the same underlying AI systems.

The case also highlights broader regulatory challenges surrounding AI-generated content, particularly the difficulty of preventing misuse when systems can modify real images. Questions around platform liability, safety standards, and enforcement are likely to shape future policy discussions.

Growing scrutiny of AI developers reflects increasing concern over how generative systems are deployed, especially in contexts involving sensitive or harmful content.

New licensing rules for crypto platforms in Australia

Australia is advancing plans to regulate digital asset platforms under its financial services framework. A Senate committee has recommended passing the Digital Assets Framework Bill 2025, bringing Australia closer to licensing crypto exchanges and tokenisation platforms.

Industry groups have raised concerns about definitions such as ‘digital token’ and ‘factual control.’ Broad wording could inadvertently cover infrastructure providers, including multi-party wallet systems, potentially classifying them as financial service operators.

Ripple Labs emphasised the need for precise language to avoid unintended regulation.

The committee supported the Treasury’s approach while planning to refine technical details through future regulations. Coinbase welcomed the progress but noted ongoing banking challenges for crypto firms.

The bill now proceeds to the Senate for debate and a final vote, which could reshape digital asset operations in Australia.

Microsoft Exchange Online outage affects users globally

A service disruption has hit Microsoft Exchange Online, with Microsoft confirming that it is investigating mailbox access issues affecting enterprise customers worldwide.

Reports indicate that users encountered difficulties connecting through multiple access points, including the Outlook desktop and mobile applications and browser-based email. The issue affects specific connection methods rather than the entire platform.

Organisations relying on cloud-based communication tools experienced interruptions in email workflows, calendar scheduling, and shared mailbox functionality. Such outages can significantly disrupt operational continuity, particularly for businesses that depend on real-time communication.

Updates through Microsoft’s service health channels suggest that engineering teams are working to identify the root cause, though no definitive explanation has yet been provided.

Such incidents highlight broader concerns around resilience in cloud infrastructure, as enterprises increasingly depend on centralised platforms for critical communication services.

Green light for massive UK AI tech park

North Lincolnshire Council has granted outline planning permission for the Elsham Tech Park, a proposed AI data centre campus that would rank among the largest of its kind in the UK.

At full build-out, the site would include up to 15 hyperscale data centre buildings across 176 hectares, delivering more than 1.5 million square metres of floorspace and up to 1GW of computing capacity.

The development is expected to cost between £5.5 billion and £7.5 billion to build and could attract up to £10 billion in private investment over its lifetime.

Developer Greystoke plans to begin construction in 2027, with the first phase due to open in 2029, and the full campus to be delivered in phases over approximately a decade.

The project is also required to source at least 30% of build costs from businesses within a 30-mile radius, injecting an estimated £1.65 billion to £2.25 billion into the local economy.

The scheme received over 380 letters of objection from residents and environmental groups. Critics raised concerns, including loss of privacy for neighbouring properties, around-the-clock noise and light, and the scale of carbon emissions, with one campaign group estimating the equivalent of twice the woodland of Wales would be needed to offset the development’s environmental impact.

Permission was nonetheless granted unanimously by councillors.

NVIDIA expands physical AI ecosystem to accelerate real-world robotics

Partnerships across the robotics sector are positioning NVIDIA at the centre of what is increasingly described as ‘physical AI’, a shift towards intelligent machines capable of perceiving, reasoning and acting in real environments.

A new generation of tools, including NVIDIA Cosmos world models and updated NVIDIA Isaac simulation frameworks, aims to support developers in training and validating robots before deployment.

These systems enable companies to simulate complex environments, reducing the risks and costs of real-world testing.

Industrial robotics leaders such as ABB Robotics, KUKA, and FANUC are integrating NVIDIA technologies into digital twin environments, enabling more accurate modelling of production lines and automation systems.

Advances are also extending into humanoid robotics, where companies are using AI models to develop machines capable of more flexible and adaptive behaviour.

New foundation models, including GR00T systems, are designed to give robots general-purpose capabilities instead of limiting them to specific tasks.

Healthcare and logistics represent additional areas of deployment, with robotics platforms being tested in surgical systems, warehouse automation and manufacturing environments. These applications highlight how physical AI could reshape industries requiring precision, safety and scalability.

Growing collaboration across cloud providers, manufacturers and AI developers suggests that robotics is moving toward a more integrated ecosystem, where simulation, data generation and deployment are increasingly interconnected.
