Europe’s digital sovereignty advances through SAP’s new AI collaborations

SAP has announced new partnerships with Bleu, Capgemini, and Mistral AI to advance Europe’s digital sovereignty. The collaboration combines SAP’s expertise in enterprise software with France’s AI ecosystem to develop secure, scalable, and sovereign cloud solutions for governments and regulated sectors.

Bleu and Delos Cloud have established a Franco-German alliance focused on crisis resilience, creating joint capabilities for early detection, analysis, and remediation of cyber incidents. Their cooperation supports rapid response in extreme scenarios and reinforces control over critical infrastructure.

SAP and Capgemini are expanding their partnership to advance sovereign agentic AI and strengthen cybersecurity across Europe. Their new Sovereign Technology Partnership will deliver data management, cloud services, and automation tools for public and regulated sectors.

SAP and Mistral AI are also deepening their collaboration to create Europe’s first full sovereign AI stack. SAP will offer Mistral’s frontier models through its sovereign AI foundation on SAP BTP, while both companies co-develop industry-specific AI applications designed for engineering and R&D workloads.

These partnerships form part of SAP’s broader sovereign cloud strategy, backed by more than €20bn in investment. SAP states that its aim is to provide a secure, compliant, and locally controlled infrastructure that enables innovation while safeguarding European data, assets, and long-term technological independence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WHO warns Europe faces widening risks as AI outpaces regulation

A new WHO Europe report warns that AI is advancing faster than health policies can keep up, risking wider inequalities without stronger safeguards. AI already helps doctors with diagnostics, reduces paperwork and improves patient communication, yet significant structural safeguards remain incomplete.

The assessment, covering 50 participating countries across the region, shows that governments acknowledge AI’s transformative potential in personalised medicine, disease surveillance and clinical efficiency. Only a small number, however, have established national strategies.

Estonia, Finland and Spain stand out for early adoption, whether through integrated digital records, AI training programmes or pilots in primary care, but most nations face mounting regulatory gaps.

Legal uncertainty remains the most common obstacle, with 86 percent of countries citing unclear rules as the primary barrier to adoption, followed by financial constraints. Fewer than 10 percent have liability standards defining responsibility when AI-driven decisions cause harm.

WHO urged governments to align AI policy with public health goals, strengthen legal and ethical frameworks, improve cross-border data governance and invest in an AI-literate workforce to ensure patients stay at the centre of the transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scepticism needed for AI, says Alphabet CEO

Alphabet CEO Sundar Pichai recently warned people against having total confidence in artificial intelligence tools. Speaking to the BBC, the head of Google’s parent company stressed that current state-of-the-art AI technology remains ‘prone to errors’ and must be used judiciously alongside other resources.

The executive also addressed wider concerns about a potential ‘AI bubble’ following increased tech valuations and spending across the sector. Pichai stated he believes no corporation, including Google, would be completely safe if such an investment surge were to collapse. He compared the current environment to the early internet boom, suggesting the profound impact of AI will nonetheless remain.

Simultaneously, the largest bank in the US, JPMorgan Chase, is sounding an alarm over market instability. Jamie Dimon, the bank’s chair and chief executive, expressed significant worry over a severe US stock market correction, predicting it could materialise within the next six months to two years. Concerns over the geopolitical climate, expansive fiscal spending, and worldwide remilitarisation are adding to this atmosphere of economic uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rising AI demand fuels new climate questions

A growing debate over AI dominated COP30 in Brazil, as delegates weighed its capacity to support climate solutions against its rapidly rising environmental costs.

Technology leaders argued that AI can strengthen energy management, refine climate research and enhance conservation programmes.

Participants highlighted an expanding number of AI-driven tools showcased at the summit, reflecting both enthusiasm and caution about their long-term influence.

Several countries noted that AI systems could help smaller delegations review complex negotiation documents and take part more effectively.

Environmental advocates warned that ballooning electricity use and water demand from data centres risk undermining climate targets.

Campaigners pressed for tighter rules, including mandatory public-interest testing for new facilities and reliance on on-site renewable energy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and SAP select AI system EDiSON for the Solomon Islands

SAP and UNESCO have agreed to deploy the AI-supported disaster management system EDiSON in the Solomon Islands.

The platform, created by SAP Japan and the start-up INSPIRATION PLUS, utilises the SAP Business Technology Platform with machine learning to merge real-time meteorological information with historical records, rather than relying on isolated datasets.

The system delivers predictive insights that help authorities act before severe weather strikes. It anticipates terrain damage, guides emergency services towards threatened areas and supports decisions on evacuation orders.
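
As a rough illustration of the data fusion described above, the sketch below combines historical records with a live observation to score severe-weather risk. It is not EDiSON's actual pipeline; the features, thresholds and model choice are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: merge historical weather records with a real-time
# observation and score the risk of a severe event. Illustrative only,
# not EDiSON's implementation; all column names and thresholds are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical records labelled with whether a severe event (flooding,
# terrain damage) followed the observed conditions.
history = pd.DataFrame({
    "rainfall_mm_24h": [10, 180, 220, 35, 300, 60],
    "wind_speed_kmh":  [20,  90, 110, 30, 140, 45],
    "river_level_m":   [1.2, 3.8, 4.5, 1.5, 5.1, 2.0],
    "severe_event":    [0,   1,   1,   0,   1,   0],
})

features = ["rainfall_mm_24h", "wind_speed_kmh", "river_level_m"]
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(history[features], history["severe_event"])

# A real-time meteorological observation mapped onto the same feature schema.
live = pd.DataFrame([{
    "rainfall_mm_24h": 250,
    "wind_speed_kmh": 120,
    "river_level_m": 4.2,
}])

risk = model.predict_proba(live[features])[0, 1]
print(f"Estimated probability of a severe event: {risk:.2f}")
if risk > 0.7:  # illustrative alert threshold
    print("Trigger early-warning workflow: alerts, evacuation guidance.")
```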

The initiative aims to serve as a model for other small island states facing similar climate-related pressures.

UNESCO officials say the project strengthens early warning capacity and encourages long-term resilience. EDiSON will become operational in 2026 and aims to offer a scalable approach for nations with limited technical resources.

Its performance in Japan has already demonstrated how integrated data management can overcome fragmented information flows and restricted analytical tools.

The design of EDiSON enables governments to adopt advanced disaster preparedness systems instead of relying on costly, bespoke infrastructure. The partnership seeks to improve national readiness in the Solomon Islands, where earthquakes, tsunamis, cyclones and floods regularly threaten communities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta wins antitrust case over monopoly claims

Meta has defeated a major antitrust challenge after a US federal judge ruled it does not currently hold monopoly power in social networking. The decision spares the company from being forced to separate Instagram and WhatsApp, which regulators had argued were acquired to suppress competition.

The judge found the Federal Trade Commission failed to prove Meta maintains present-day dominance, noting that the market has been reshaped by rivals such as TikTok. Meta argued it now faces intense competition across mobile platforms as user behaviour shifts rapidly.

FTC lawyers revisited internal emails linked to Meta’s past acquisitions, but the ruling emphasised that the case required proof of ongoing violations.

Analysts say the outcome contrasts sharply with recent decisions against Google in search and advertising, signalling mixed fortunes for large tech firms.

Industry observers note that Meta still faces substantial regulatory pressure, including upcoming US trials regarding children’s mental health and questions about its heavy investment in AI.

The company welcomed the ruling and stated that it intends to continue developing products within a competitive market framework.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps fight antibiotic-resistant superbugs

UK scientists are launching a three-year initiative to use AI in the fight against drug-resistant infections, a growing threat to public health.

The project, backed by £45 million from GSK and coordinated with the Fleming Initiative, aims to develop new tools against pathogens that currently evade treatment.

Researchers will focus on priority bacteria and fungi identified by the World Health Organisation, including E. coli, Klebsiella pneumoniae, MRSA and Aspergillus.

The AI models will be used to design new antibiotics and deepen understanding of immune responses, with data shared globally to accelerate drug development.

Experts warn that antimicrobial resistance could claim millions of lives by 2050 if new solutions are not found. The initiative reflects an urgent need to pool scientific expertise and technology to create next-generation treatments and vaccines for resistant infections.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Arizona astronomer creates ray-tracing method to make AI less overconfident

A University of Arizona astronomer, Peter Behroozi, has developed a novel technique to make AI systems more trustworthy by enabling them to quantify when they might be wrong.

Behroozi’s method adapts ray tracing, traditionally used in computer graphics, to explore the high-dimensional spaces in which AI models operate, thereby allowing the system to gauge uncertainty more effectively.

He uses a Bayesian-sampling approach: rather than relying on a single model, the system effectively consults a ‘whole range of experts’ by training many models in parallel and observing the diversity of their outputs.
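
To illustrate the ‘range of experts’ idea, the sketch below estimates uncertainty from the spread of an ensemble of independently trained models. It is a generic deep-ensemble illustration on assumed toy data, not Behroozi's ray-tracing technique, which the article describes as far faster than approaches of this kind.

```python
# Generic ensemble-of-experts uncertainty sketch: train several small models
# on the same data and use the spread of their predictions as a confidence
# signal. Illustration only; this is not Behroozi's ray-tracing method.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)  # noisy toy data

# Models differ only in their random initialisation, giving a crude
# approximation of sampling from the space of plausible trained models.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(10)
]

# One query inside the training range, one far outside it.
X_query = np.array([[0.5], [8.0]])
predictions = np.stack([m.predict(X_query) for m in ensemble])

mean = predictions.mean(axis=0)
spread = predictions.std(axis=0)  # large spread: the "experts" disagree

for x, mu, sigma in zip(X_query.ravel(), mean, spread):
    print(f"x = {x:+.1f}: prediction {mu:+.2f} +/- {sigma:.2f}")
# Inside the training data the models agree closely; far outside it, the
# spread grows, flagging an answer that should not be taken at face value.
```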

This advance addresses a critical problem in modern AI: ‘wrong-but-confident’ outputs, situations where a model gives a single, confident answer that may be incorrect. According to Behroozi, his technique is orders of magnitude faster than traditional uncertainty-quantification methods, making it practical even for large neural networks.

The implications are broad, extending from healthcare to finance to autonomous systems: AI that knows its own limits could reduce risk and increase reliability. Behroozi hopes his code, now publicly available, will be adopted by other researchers working under high-stakes conditions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI promises better communication tools for disabled users

Students with disabilities met technology executives at National Star College in Gloucestershire, UK, to explain what they need from communication devices. Battery life emerged as the top priority, with users saying they need devices that last 24 hours without charging so they can communicate all day long.

One student who controls his device by moving his eyes said losing power during the day feels like having his voice ripped away from him. Another student with cerebral palsy wants her device to help her run a bath independently and eventually design fairground rides that disabled people can enjoy.

Technology companies responded by promising artificial intelligence improvements that will make the devices work much faster. The new AI features will help users type more quickly, correct mistakes automatically and even create personalised voices that sound like the actual person speaking.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fight over state AI authority heats up in US Congress

US House Republicans are mounting a new effort to block individual states from regulating AI, reviving a proposal that the Senate overwhelmingly rejected just four months ago. Their push aligns with President Donald Trump’s call for a single federal AI standard, which he argues is necessary to avoid a ‘patchwork’ of state-level rules that he claims hinder economic growth and fuel what he described as ‘woke AI.’

House Majority Leader Steve Scalise is now attempting to insert the measure into the National Defense Authorization Act, a must-pass annual defence policy bill expected to be finalised in the coming weeks. If successful, the move would place a moratorium on state-level AI regulation, effectively ending the states’ current role as the primary rule-setters on issues ranging from child safety and algorithmic fairness to workforce impacts.

The proposal faces significant resistance, including from within the Republican Party. Lawmakers who blocked the earlier attempt in July warned that stripping states of their authority could weaken protections in areas such as copyright, child safety, and political speech.

Critics, such as Senator Marsha Blackburn and Florida Governor Ron DeSantis, argue that the measure would amount to a handout to Big Tech and leave states unable to guard against the use of predatory or intrusive AI.

Congressional leaders hope to reach a deal before the Thanksgiving recess, but the ultimate fate of the measure remains uncertain. Any version of the moratorium would still need bipartisan support in the Senate, where most legislation requires 60 votes to advance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!