Google launches AI skills initiative to support Europe’s workforce transition

At the Future of Work Forum, Google introduced ‘AI Works for Europe’, a programme aimed at strengthening digital skills and supporting workforce adaptation to AI across the region.

Funding of $30 million will be directed through Google.org to expand training opportunities, alongside broader access to AI certification programmes designed to help individuals and businesses adopt new technologies in practical contexts.

A central focus involves preparing workers and students for labour market changes.

Partnerships with organisations such as INCO are supporting the development of targeted training programmes, particularly in sectors where demand for AI-related skills is increasing, including finance, logistics and marketing.

New educational pathways are also being introduced, including an expanded AI Professional Certificate available in multiple European languages. These initiatives aim to improve AI literacy and provide hands-on experience aligned with employer expectations.

Collaboration with local organisations and institutions remains a key element, reflecting a broader strategy to ensure access to training across different regions and communities.

Efforts to expand AI capabilities across Europe highlight the growing importance of skills development as AI becomes more integrated into economic activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

MIT research highlights embedded and enacted risks in AI

Generative AI offers major productivity and growth opportunities, but also brings new risks as organisations move from experiments to full deployment. MIT research highlights key risk areas, including training data, foundation models, user prompts, and system prompts.

Researchers identify two types of risk.

Embedded risks come from the technology itself, shaped by model behaviour, data quality, and vendor updates, and are mostly outside an organisation’s control.

Enacted risks arise from choices in deploying AI, from prompt design to agent permissions, and require strong governance.

Advanced uses such as retrieval-augmented generation (RAG) and autonomous AI agents increase exposure. RAG draws on internal data to improve outputs, but may expose sensitive information or reveal gaps in access controls. AI agents acting across multiple tools can lead to ‘autonomy creep’, performing tasks without proper oversight.

To manage AI risk, organisations should map tools, assign ownership, track outputs, and use separate strategies for embedded and enacted risks. Vendor engagement, governance frameworks, and technical controls are essential for safe AI use.
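
The mapping step above can be sketched as a simple risk register that records an owner for each tool and keeps embedded and enacted risks in separate lists, so each category gets its own mitigation strategy. This is an illustrative sketch only: the tool names, owners, and risk entries below are hypothetical and not part of the MIT framework.

```python
from dataclasses import dataclass, field

# Hypothetical risk register: each deployed AI tool gets an owner and
# separate lists for embedded risks (vendor/model-side, largely outside
# the organisation's control) and enacted risks (deployment choices).
@dataclass
class AIToolRecord:
    name: str
    owner: str
    embedded_risks: list = field(default_factory=list)
    enacted_risks: list = field(default_factory=list)

register = [
    AIToolRecord("support-chatbot", "cx-team",
                 embedded_risks=["unannounced vendor model updates"],
                 enacted_risks=["over-broad agent permissions"]),
    AIToolRecord("internal-rag-search", "it-team",
                 embedded_risks=["training-data quality"],
                 enacted_risks=["index includes unredacted HR files"]),
]

# Enacted risks are the ones governance can act on directly.
actionable = [(rec.name, r) for rec in register for r in rec.enacted_risks]
```

Keeping the two lists apart makes the reporting line clear: embedded risks feed vendor engagement, enacted risks feed internal governance reviews.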

AI-powered MRI previews aim to reduce errors and rescans

Philips is creating AI-driven predictive MRI previews to improve scan planning and reduce operator variability. Using NVIDIA accelerated computing and foundation models, the system creates a pre-scan image to validate protocols, optimise positioning, and spot potential issues.

The technology is based on a dedicated MR foundation model trained on diverse datasets covering anatomies, field strengths, protocols, and artefacts.

When combined with NVIDIA’s NV‑Generate, NV‑Segment, and NV‑Reason models, the platform integrates image generation, segmentation, and interpretation. It creates a single intelligent workflow that supports consistent and efficient MRI procedures.

Predictive previews reduce rescans, enhance image quality, and increase technologist confidence, especially in complex exams or areas with limited expertise. Early guidance helps confirm protocols, optimise positioning, and flag issues that could affect diagnostic outcomes.

Philips envisions autonomous MRI, with AI monitoring image quality, guiding positioning, and assisting radiologists with actionable insights. Predictive imaging boosts consistency, efficiency, and access, improving patient experience and expanding MRI availability.

NVIDIA expands physical AI ecosystem to accelerate real-world robotics

Partnerships across the robotics sector are positioning NVIDIA at the centre of what is increasingly described as ‘physical AI’, a shift towards intelligent machines capable of perceiving, reasoning and acting in real environments.

A new generation of tools, including NVIDIA Cosmos world models and updated NVIDIA Isaac simulation frameworks, aims to support developers in training and validating robots before deployment.

These systems enable companies to simulate complex environments, reducing the risks and costs of real-world testing.

Industrial robotics leaders such as ABB Robotics, KUKA, and FANUC are integrating NVIDIA technologies into digital twin environments, enabling more accurate modelling of production lines and automation systems.

Advances are also extending into humanoid robotics, where companies are using AI models to develop machines capable of more flexible and adaptive behaviour.

New foundation models, including GR00T systems, are designed to give robots general-purpose capabilities instead of limiting them to specific tasks.

Healthcare and logistics represent additional areas of deployment, with robotics platforms being tested in surgical systems, warehouse automation and manufacturing environments. These applications highlight how physical AI could reshape industries requiring precision, safety and scalability.

Growing collaboration across cloud providers, manufacturers and AI developers suggests that robotics is moving toward a more integrated ecosystem, where simulation, data generation and deployment are increasingly interconnected.

Britain targets quantum leadership with £1bn investment

UK Secretary of State for Science, Innovation and Technology Liz Kendall has announced a £1bn funding package to boost UK quantum computing and retain domestic talent.

The initiative reflects growing concern over the country’s ability to compete globally, particularly after the US established dominance in AI.

Officials emphasised the need to retain British startups, engineers, and researchers, who often relocate abroad in search of better funding and scaling opportunities. The UK produces top talent, but US companies such as Google and OpenAI now own many of its leading firms.

The investment will support the development of large-scale quantum computers for use across science, industry, and the public sector. A further £1bn will fund real-world applications in finance, pharmaceuticals, and energy.

The government aims to build a fully operational domestic quantum system by the early 2030s.

Quantum computing uses qubits that can exist in multiple states simultaneously, enabling far greater computational power than classical systems. Fully fault-tolerant machines are still in development, but the technology could drive advances in drug discovery, materials science, and complex modelling.
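
The qubit description can be made concrete with a toy state-vector calculation: applying a Hadamard gate to the |0⟩ state produces an equal superposition, so a measurement returns 0 or 1 with probability 0.5 each. A minimal pure-Python sketch, assuming no quantum library:

```python
import math

# A single qubit as two amplitudes (alpha, beta) for |0> and |1>.
def hadamard(state):
    alpha, beta = state
    s = 1 / math.sqrt(2)
    # H maps |0> -> (|0> + |1>)/sqrt(2) and |1> -> (|0> - |1>)/sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

zero = (1.0, 0.0)                            # the |0> basis state
plus = hadamard(zero)                        # equal superposition
probabilities = [abs(a) ** 2 for a in plus]  # Born rule: |amplitude|^2
```

A register of n qubits carries 2^n such amplitudes at once, which is the source of the computational power the article refers to; fault tolerance is about protecting those amplitudes from noise.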

AI tool could help detect domestic violence risk years earlier

Researchers in the United States have developed an AI system designed to help doctors identify patients who may be at risk of intimate partner violence. The tool analyses hospital data to detect patterns associated with abuse, potentially enabling healthcare professionals to intervene earlier.

Intimate partner violence refers to abuse from current or former partners and can lead to serious injuries, chronic pain, and long-term mental health problems. According to the European Commission, 18 percent of women who have had a partner reported experiencing physical or sexual partner violence in 2021.

The study, published in the journal Nature, examined hospital records from nearly 850 women who had experienced intimate partner violence and more than 5,200 similar patients in a control group. Researchers used the data to train three different machine learning systems to detect patterns associated with abuse.

One model analysed structured hospital data, such as age and medical history. A second model examined written clinical notes, including doctors’ observations and radiology reports. A third system combined both data types and achieved the strongest results, correctly identifying risk in 88 percent of cases.

Researchers found that the system could flag potential abuse more than three years before some patients later entered hospital-based intervention programmes. By analysing large datasets, the tool can detect patterns of physical trauma linked to abuse and alert clinicians so they can approach the issue carefully and offer support.
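
The design of the third, best-performing model, fusing structured fields with signals extracted from free-text notes, can be illustrated with a toy late-fusion score. Everything below (the feature names, keyword list, and weights) is invented for illustration and is not the study's actual model:

```python
# Toy late fusion: one score from structured data, one from clinical
# notes, combined into a single risk estimate in [0, 1].

def structured_risk(age: int, prior_injuries: int) -> float:
    # Hypothetical scoring on structured fields (age, injury history).
    return min(1.0, 0.02 * prior_injuries + (0.1 if age < 40 else 0.0))

def text_risk(note: str) -> float:
    # Hypothetical keyword signal standing in for NLP on clinical notes.
    keywords = {"fracture", "bruising", "assault"}
    hits = sum(1 for w in note.lower().split() if w.strip(".,") in keywords)
    return min(1.0, 0.3 * hits)

def combined_risk(age, prior_injuries, note, w=0.5):
    # Weighted average of the two modality scores.
    return w * structured_risk(age, prior_injuries) + (1 - w) * text_risk(note)
```

The intuition matches the reported result: each modality misses cases the other catches, so the fused score outperforms either alone.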

Seoul deepens ties with global AI developers

South Korea is pursuing a partnership with AI company Anthropic as part of a national strategy to strengthen technological capabilities. Officials are working toward a memorandum of understanding with the developer of the Claude AI system.

The initiative follows discussions between South Korea’s science minister and Anthropic’s chief executive, Dario Amodei, during an AI summit in New Delhi. Authorities are also preparing for the company’s planned office opening in Seoul in 2026.

Government leaders in South Korea have already expanded cooperation with OpenAI. Policymakers say the strategy aims to build ties with leading global AI developers while supporting domestic innovation.

Officials are also developing a homegrown AI foundation model with local companies. The programme forms part of a national plan to position the country among the world’s leading AI powers.

AI and robotics could offset impact of ageing populations in Asia

Declining fertility rates have long been considered a major risk to economic growth, but analysts suggest the outlook may not be entirely negative for several advanced Asian economies. Rising investment in AI and robotics is increasingly viewed as a way to offset labour shortages caused by ageing populations.

According to analysts at Bank of America Global Research, technological innovation driven by AI and robotics could support productivity growth even as workforces shrink. Strong ecosystems in semiconductors, technology hardware, and industrial machinery allow some countries in the region to deploy advanced technologies faster and at lower cost than many other parts of the world.

South Korea currently has the highest robot density in the world, with about 1,012 industrial robots per 10,000 manufacturing workers. China has 470 and Japan 419, both significantly above the global average of 162, according to 2024 figures from the International Federation of Robotics.

Analysts say governments across East Asia are accelerating the adoption of AI and robotics to address demographic pressures. In particular, China, South Korea, and Japan have expanded investments in robotics, AI systems, and advanced manufacturing technologies to maintain economic productivity.

Population projections highlight the scale of the challenge facing these economies. By 2050, about 37 percent of Japan’s population and nearly 40 percent of South Korea’s population are expected to be aged 65 or older, while China’s share could reach around 31 percent.

Despite concerns about slowing growth, economists argue that advances in AI and robotics could weaken the traditional link between economic output and workforce size. Automation technologies not only replace routine tasks but also enhance human productivity in many industries.

A study by the Bank of Korea estimated that demographic pressures could reduce the country’s gross domestic product by 16.5 percent between 2023 and 2050. However, wider adoption of AI and robotics could limit the decline to around 5.9 percent under favourable conditions.

Some analysts caution that the economic benefits of automation may not be evenly distributed. While AI and robotics can improve productivity, technological gains often benefit capital owners and highly skilled workers more than others.

Economists also warn that consumption may slow as the number of households declines, while governments may face greater fiscal pressure from higher pension and healthcare costs. Policymakers may need to invest in workforce retraining and education to help workers adapt to the growing role of AI and robotics in the economy.

Meta removes encrypted messaging from Instagram DMs

Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.

Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.

The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.

Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.

French court upholds €40 million GDPR fine for Criteo

France’s highest administrative court has upheld a €40 million GDPR fine against advertising technology company Criteo. Regulators in France concluded that the firm failed to obtain valid consent for tracking users across websites.

The investigation began in 2018 following complaints from privacy groups and examined Criteo’s behavioural advertising model. Authorities in France said the company did not properly respect rights to access, erasure and transparency.

The ruling in France also confirmed that pseudonymous identifiers linked to browsing data can still qualify as personal data. Judges rejected arguments that such identifiers were effectively anonymous.

Privacy advocates say the decision strengthens GDPR enforcement across Europe. Experts argue that the case highlights growing scrutiny of the online tracking practices used in digital advertising.
