OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.

Taiwan leads in AI defence of democracy

Taiwan has emerged as a global model for using AI to defend democracy, earning recognition for its success in combating digital disinformation.

The island joined a new international coalition led by the International Foundation for Electoral Systems to strengthen election integrity through AI collaboration.

Constantly targeted by foreign actors, Taiwan has developed proactive digital defence systems that serve as blueprints for other democracies.

Its rapid response strategies and tech-forward approach have made it a leader in countering AI-powered propaganda.

While many nations are only beginning to grasp the risks posed by AI to democratic systems, Taiwan has already faced these threats and adapted.

Its approach now shapes global policy discussions around safeguarding elections in the digital era.

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material to train AI could qualify as ‘fair use’ under US law when the use is transformative, it also held that acquiring the content illegally rather than licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time. Still, he stated that Anthropic obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

Statutory damages for wilful copyright infringement in the US can reach up to $150,000 per work, meaning total liability might run into the billions.
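As a rough, hypothetical illustration: at the statutory maximum of $150,000 per work, damages on just 10,000 pirated titles would already amount to $1.5 billion, and the ruling states that Anthropic downloaded millions of such books.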

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

New NHS plan adds AI to protect patient safety

The NHS is set to introduce a world-first AI system to detect patient safety risks early by analysing hospital data for warning signs of deaths, injuries, or abuse.

Instead of waiting for patterns to emerge through traditional oversight, the AI will use near real-time data to trigger alerts and launch rapid inspections.

Health Secretary Wes Streeting announced that a new maternity-focused AI tool will roll out across NHS trusts in November. It will monitor stillbirths, brain injuries and death rates, helping identify issues before they become scandals.

The initiative forms part of a new 10-year plan to modernise the health service and move it from analogue to digital care.

The technology will send alerts to the Care Quality Commission, whose teams will investigate flagged cases. Professor Meghana Pandit, NHS England’s medical director, said the UK would become the first country to trial this AI-enabled early warning system to improve patient care.

CQC chief Sir Julian Hartley added that it would strengthen quality monitoring across services.

However, nursing leaders voiced concerns that AI could distract from more urgent needs. Professor Nicola Ranger of the Royal College of Nursing warned that low staffing levels remain a critical issue.

She stressed that one nurse often handles too many patients, and technology should not replace the essential investment in frontline staff.

Cybercrime surge hits airlines across North America

According to the FBI and cybersecurity experts, a well-known cybercrime group has launched fresh attacks on the airline industry, successfully breaching the networks of several airlines in the US and Canada.

The hackers, identified as ‘Scattered Spider’, are known for aggressive extortion tactics and are now shifting their focus from their previous targets in insurance and retail to aviation.

Although no flights or operations have been disrupted, airline security teams remain on high alert. Hawaiian Airlines and Canada’s WestJet have acknowledged recent cyber incidents, while sources suggest more affected companies may come forward soon.

Both airlines reported no impact on day-to-day services, likely due to solid internal defences and continuity planning.

The attackers often exploit help desks by impersonating employees or customers to access corporate systems. Experts warn that airline call centres are especially vulnerable, given their importance to customer support.

Cybersecurity firms, including Mandiant, are now supporting the response and advising firms to reinforce these high-risk entry points.

Scattered Spider has previously breached major casinos, insurance, and retail companies. The FBI confirmed it is working with aviation partners to contain the threat and assist victims.

Industry leaders remain alert, noting that airlines, IT contractors, and vendors across the aviation sector are at risk from the escalating threat.

Hacktivist attacks surge in Iran–Israel tensions

The Iran–Israel conflict has now expanded into cyberspace, with rival hacker groups launching waves of politically driven attacks.

Following Israel’s military operation against Iran, pro-Israeli hackers known as ‘Predatory Sparrow’ struck Iran’s Sepah Bank, deleting data and causing significant service disruption.

A day later, the same group targeted Nobitex, Iran’s largest crypto exchange, stealing and destroying over $90 million in assets.

Cyberattacks intensified in the days before and after the Israeli strikes. According to NSFOCUS, attacks on Iran peaked three days before the military operation, suggesting pre-attack reconnaissance.

In retaliation, pro-Iranian hackers escalated attacks on Israel on 16 June, focusing on government systems, aerospace, and education.

While attacks on Iran have been fewer, Israeli systems have faced over 1,300 attacks in 2025 alone, with 37% of all global hacktivist activity aimed at Israel since the conflict began.

However, analysts note that these attacks have been high in volume but limited in impact. The attackers’ malware tactics involve evading antivirus software, deleting data, and disabling recovery systems.

NSFOCUS warns that geopolitical tensions are turning hacktivist groups into informal cyber proxies. Though not formally state-backed, these loosely organised actors align closely with national interests.

As traditional defences lag, cybersecurity experts argue that national infrastructure must adopt more strategic, coordinated defence measures instead of fragmented responses, especially during crises and conflicts.

Google Doppl, the new AI app, turns outfit photos into try-on videos

Google has unveiled Doppl, a new AI-powered app that lets users create short videos of themselves wearing any outfit they choose.

Instead of relying on imagination or guesswork, Doppl allows people to upload full-body photos and apply outfits seen on social media, in thrift shops, or on friends, creating animated try-ons that bring static images to life.

The app builds on Google’s earlier virtual try-on tools integrated with its Shopping Graph. Doppl pushes things further by transforming still photos into motion videos, showing how clothes flow and fit in movement.

Users can upload their own full-body image or choose an AI model to preview outfits. However, Google warns that fit and details might not always be accurate at this early stage.

Doppl is currently only available in the US for Android and iOS users aged 18 or older. While Google encourages sharing videos with friends and followers, the tool raises concerns about misuse, such as generating content using photos of others.

Google’s policy requires disclosure if someone impersonates another person, but the company admits that some abuse may occur. To address the issue, Doppl content will include invisible watermarks for tracking.

In its privacy notice, Google confirmed that user uploads and generated videos will be used to improve AI technologies and services. However, data will be anonymised and separated from user accounts before any human review is allowed.

Dutch government to build AI plant with €70 million pledge

The Dutch government has pledged €70 million to build a new AI facility in Groningen to establish a European hub for AI research and development.

A consortium of Dutch organisations will manage the plant and focus on healthcare, agriculture, defence and energy applications.

The government is also seeking an additional €70 million in EU co-financing and has welcomed a separate €60 million contribution from the Groningen regional administration.

The plant is expected to be commissioned in 2026 and become operational by early 2027, provided the funding is secured.

Minister of Economic Affairs Vincent Karremans emphasised the need to develop domestic AI capacity, warning that dependence on foreign technologies could threaten national competitiveness and digital independence.

‘Those who do not develop the technology themselves depend on others,’ Karremans said on the government’s website.

European countries have grown increasingly concerned over their reliance on AI technologies developed by US companies.

The Groningen initiative marks a broader effort by the EU to build its own AI infrastructure instead of leaving strategic control in foreign hands.

Gartner warns that more than 40 percent of agentic AI projects could be cancelled by 2027

More than 40% of agentic AI projects will likely be cancelled by the end of 2027 due to rising costs, limited business value, and poor risk control, according to research firm Gartner.

These cancellations are expected as many early-stage initiatives remain trapped in hype, often misapplied and far from ready for real-world deployment.

Gartner analyst Anushree Verma warned that most agentic AI efforts are still at the proof-of-concept stage. Instead of focusing on scalable production, many companies have been distracted by experimental use cases, underestimating the cost and complexity of full-scale implementation.

A recent poll by Gartner found that only 19% of organisations had made significant investments in agentic AI, while 31% were undecided or waiting.

Much of the current hype is fuelled by vendors engaging in ‘agent washing’ — marketing existing tools like chatbots or RPA under a new agentic label without offering true agentic capabilities.

Out of thousands of vendors, Gartner believes only around 130 offer legitimate agentic solutions. Verma noted that most agentic models today lack the intelligence to deliver strong returns or follow complex instructions independently.

Still, agentic AI holds long-term promise. Gartner expects 15% of daily workplace decisions to be handled autonomously by 2028, up from zero in 2024. Moreover, one-third of enterprise applications will include agentic capabilities by then.

However, to succeed, organisations must reimagine workflows from the ground up, focusing on enterprise-wide productivity instead of isolated task automation.

Efforts to address internet fragmentation take centre stage at IGF 2025 in Norway

On the final day of the Internet Governance Forum 2025 in Lillestrøm, Norway, stakeholders from governments, civil society, technical communities, and the private sector gathered to launch the new work cycle of the Policy Network on Internet Fragmentation (PNIF). Now entering its third year, the PNIF unveiled a structured framework to analyse internet fragmentation across three dimensions: user experience, internet governance coordination, and the technical infrastructure layer.

The session emphasised the urgent need for international cooperation to counter growing fragmentation threats, as enshrined in paragraph 29C of the Global Digital Compact. Speakers raised alarm over how political and economic forces are re-shaping the global internet.

With internet shutdowns and digital censorship increasingly normalised as tools of state control—highlighted by Iran’s recent 90-million-person shutdown—concerns about sovereignty overriding openness were prominent. Michel Lambert described this shift as a ‘political normalisation of network control.’

Marilia Maciel, Director of Digital Trade and Economic Security at Diplo, emphasised how trade and investment policies fuel economic fragmentation. Cuts to internet freedom funding were highlighted by both Lambert and Joyce Chen, who noted severe consequences for underserved regions like the Pacific.

Photo: Marilia Maciel, Director of Digital Trade and Economic Security at Diplo

From the technical community, Dhruv, representing the Internet Architecture Board, stressed the importance of safeguarding the internet’s interoperability by including technical experts in regulatory processes. Joyce Chen also pointed to successful coordination initiatives such as the Technical Community Coalition on Multi-Stakeholderism (TCCM).

Naim Gjokaj, State Secretary in Montenegro, offered a government perspective, advocating for stronger legal frameworks and regional coordination to avoid inadvertent fragmentation while supporting connectivity in rural areas.

The session concluded with a call to action: PNIF will focus its upcoming work on developing concrete, risk-based recommendations to implement the Global Digital Compact. Co-facilitators Sheetal Kumar and Bruna Santos encouraged broad community participation, aiming to deliver a final report by 1 November.

Despite the challenges, the atmosphere remained collaborative and forward-looking, reinforcing the importance of inclusive dialogue to ensure the internet remains a unified, accessible, and resilient resource for all.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.