US cloud dominance sparks debate about Europe’s digital sovereignty

European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.

At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.

He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.

Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.

He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Experts propose frameworks for trustworthy AI systems

A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.

The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.

The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.

Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.

The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.
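To make one of those strategies concrete, a robustness benchmark in its simplest form measures how stable a model's outputs are under small input perturbations. The sketch below is purely illustrative (the model, noise level and data are invented), not a method proposed by the researchers:

```python
import random

# Minimal sketch of a robustness benchmark: measure how often a model's
# prediction stays the same when its input is slightly perturbed.
# The "model" and perturbation below are invented placeholders for illustration.

def toy_model(x: float) -> int:
    """Stand-in classifier: thresholds a single feature."""
    return int(x > 0.5)

def perturb(x: float, eps: float = 0.05) -> float:
    """Add small random noise to simulate input variation."""
    return x + random.uniform(-eps, eps)

inputs = [random.random() for _ in range(1_000)]
agreement = sum(toy_model(x) == toy_model(perturb(x)) for x in inputs) / len(inputs)
print(f"Prediction agreement under small perturbations: {agreement:.1%}")
```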

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered into force in 2018, reaching 443 incidents per day, a 22% increase compared with the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.
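For context, those figures are easy to sanity-check: the reported daily average implies roughly 160,000 notifications over the year, and a 22% rise puts the previous year's average at around 363 per day. A back-of-the-envelope sketch using only the numbers above:

```python
# Back-of-the-envelope check of the reported GDPR breach-notification figures.
daily_reports_2025 = 443         # reported average incidents per day
yoy_increase = 0.22              # reported 22% rise on the previous year

annual_reports_2025 = daily_reports_2025 * 365
prior_year_daily = daily_reports_2025 / (1 + yoy_increase)

print(f"Implied notifications in 2025: ~{annual_reports_2025:,}")      # ~161,695
print(f"Implied prior-year daily average: ~{prior_year_daily:.0f}")    # ~363
```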

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta over data transfers to the US and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal that they have been targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI reduces late breast cancer diagnoses by 12% in landmark study

AI in breast cancer screening reduced late diagnoses by 12% and increased early detection rates in the largest trial of its kind. The Swedish study involved 100,000 women randomly assigned to AI-supported screening or standard radiologist readings between April 2021 and December 2022.

The AI system analysed mammograms and assigned low-risk cases to single readings and high-risk cases to double readings by radiologists.
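As a rough illustration of how such triage works (a hypothetical sketch; the threshold and risk scores here are invented, not the trial's actual parameters):

```python
# Hypothetical sketch of AI-based triage in mammography screening.
# The threshold and example scores are illustrative, not the trial's parameters.

def assign_reading(ai_risk_score: float, threshold: float = 0.8) -> str:
    """Route a mammogram to a single or double radiologist reading."""
    return "double reading" if ai_risk_score >= threshold else "single reading"

for score in (0.12, 0.95):
    print(f"AI risk score {score:.2f} -> {assign_reading(score)}")
```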

Results published in The Lancet showed 1.55 late diagnoses per 1,000 women in the AI group versus 1.76 in the control group, with 81% of cancers detected at the screening stage, compared with 74% in the control group.
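The headline 12% figure follows directly from those rates, as a quick calculation using only the numbers reported above shows:

```python
# Reproducing the headline figures from the rates reported in the article.
late_rate_ai = 1.55        # late diagnoses per 1,000 women, AI-supported group
late_rate_control = 1.76   # late diagnoses per 1,000 women, control group

relative_reduction = 1 - late_rate_ai / late_rate_control
print(f"Relative reduction in late diagnoses: {relative_reduction:.0%}")   # 12%

screen_detected_ai, screen_detected_control = 0.81, 0.74
gap = screen_detected_ai - screen_detected_control
print(f"Screening-stage detection gap: {gap * 100:.0f} percentage points")  # 7
```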

Dr Kristina Lång from Lund University said AI-supported mammography could reduce radiologist workload pressures and improve early detection, but cautioned that implementation must be done carefully with continuous monitoring.

Researchers stressed that screening still requires at least one human radiologist working alongside AI, rather than AI replacing them. Cancer Research UK’s Dr Sowmiya Moorthie called the findings promising but noted that more research is needed to confirm their life-saving potential.

Breast Cancer Now’s Simon Vincent highlighted the significant potential for AI to support radiologists, emphasising that earlier diagnosis improves treatment outcomes for a disease that affects over 2 million people globally each year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear as soon as old ones are shut down, and users exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the greatest risks due to low digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are all but impossible to remove, undermining their safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK minister signals interest in universal basic income amid rising AI job disruption

Jason Stockwood, the UK investment minister, has suggested that a universal basic income could help protect workers as AI reshapes the labour market.

He argued that rapid advances in automation will cause disruptive shifts across several sectors, meaning the country must explore safety mechanisms rather than allowing sudden job losses to deepen inequality. He added that workers will need long-term retraining pathways as roles disappear.

Concern about the economic impact of AI continues to intensify.

Research by Morgan Stanley indicates that the UK is losing more jobs than it is creating because of automation and is being affected more severely than other major economies.

Warnings from London’s mayor, Sadiq Khan, and senior global business figures, including JP Morgan’s chief executive Jamie Dimon, point to the risk of mass unemployment unless governments and companies step in with support.

Stockwood confirmed that a universal basic income is not part of formal government policy, although he said people inside government are discussing the idea.

He took up his post in September after a long career in the technology sector, including senior roles at Match.com, Lastminute.com and Travelocity, as well as leading Simply Business through a major sale.

Additionally, Stockwood said he no longer pushes for stronger wealth-tax measures, but he criticised wealthy individuals who seek to minimise their contributions to public finances. He suggested that those who prioritise tax avoidance lack commitment to their communities and the country’s long-term success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

French public office hit with €5 million CNIL fine after massive data leak

France’s data protection authority, CNIL, has imposed a €5 million penalty on France Travail after a massive data breach exposed sensitive personal information collected over two decades.

The leak included social security numbers, email addresses, phone numbers and home addresses of an estimated 36.8 million people who had used the public employment service. CNIL said adequate security measures would have made access far more difficult for the attackers.

The investigation found that cybercriminals exploited employees through social engineering instead of breaking in through technical vulnerabilities.

CNIL found that the failure to adequately secure such data breached requirements under the General Data Protection Regulation. The watchdog also noted that the size of the fine reflects the fact that France Travail operates with public funding.

France Travail has taken corrective steps since the breach, yet CNIL has ordered additional security improvements.

The authority set a deadline for these measures and warned that non-compliance would trigger a daily €5,000 penalty until France Travail meets its GDPR obligations. The case underlines growing pressure on public institutions to reinforce cybersecurity amid rising threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK expands free AI training to reach 10 million workers by 2030

The UK government has expanded its joint industry programme offering free AI training to every adult, with the ambition of upskilling 10 million workers by 2030.

Newly benchmarked courses are available through the AI Skills Hub, giving people practical workplace skills while supporting Britain’s aim to become the fastest AI adopter in the G7.

The programme includes short online courses that teach workers in the UK how to use basic AI tools for everyday tasks such as drafting text, managing content and reducing administrative workloads.

Participants who complete approved training receive a government-backed virtual AI foundations badge, setting a national standard for AI capability across sectors.

Public sector staff, including NHS and local government employees, are among the groups targeted as the initiative expands.

Ministers also announced £27 million in funding to support local tech jobs, graduate traineeships and professional practice courses, alongside the launch of a new cross-government unit to monitor AI’s impact on jobs and labour markets.

Officials argue that widening access to AI skills will boost productivity, support economic growth and help workers adapt to technological change. The programme builds on existing digital skills initiatives and brings together government, industry and trade unions to shape a fair and resilient future of work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI learning tools grow in India with Gemini’s JEE preparation rollout

Google is expanding AI learning tools in India by adding full-length Joint Entrance Exam practice tests to Gemini, targeting millions of engineering applicants.

Students can complete full mock JEE exams directly in Gemini. The questions are developed using vetted material from education platforms in India, including Physics Wallah and Careers360, with the question bank set to expand further.

Gemini provides instant feedback after each test. It explains correct answers and generates personalised study plans based on performance, supporting structured exam preparation.

In addition to these exam-focused features, preparation tools, including Canvas, will also roll out to AI Mode in Search, allowing students to build interactive quizzes and study guides from their own notes.

Alongside providing enhanced tools for students, Google is also partnering with universities, government agencies, and nonprofits to integrate AI across education systems, aiming to scale access to tens of millions of learners by 2027.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!