More European cities move to replace Microsoft software as part of digital sovereignty efforts

Following similar moves by Denmark and the German state of Schleswig-Holstein, the city of Lyon—France’s third-largest city and a major economic centre—has initiated a migration from Microsoft Windows and Office to a suite of open-source alternatives, including Linux, OnlyOffice, NextCloud, and PostgreSQL.

This transition is part of Lyon’s broader strategy to strengthen digital sovereignty and reduce reliance on foreign technology providers. As with other European initiatives, the decision aligns with wider EU discussions about data governance and digital autonomy. Concerns over control of sensitive data and long-term sustainability have contributed to increased interest in open-source solutions.

Although Microsoft has publicly affirmed its commitment to supporting EU customers regardless of political context, some European public authorities continue to explore alternatives that allow for local control over software infrastructure and data hosting.

In line with the European Commission’s 2025 State of the Digital Decade report—which notes that Europe has yet to fully leverage the potential of open-source technologies—Lyon aims to enhance both transparency and control over its digital systems.

Lyon’s migration also supports regional economic development. Its collaboration platform, Territoire Numérique Ouvert (Open Digital Territory), is being co-developed with local digital organisations and will be hosted in regional data centres. The project provides secure, interoperable tools for communication, office productivity, and document collaboration.

The city has begun gradually replacing Windows with Linux and Microsoft Office with OnlyOffice across municipal workstations. OnlyOffice, developed by Latvia-based Ascensio System SIA, is an open-source productivity suite distributed under the GNU Affero General Public License. While it shares a similar open-source ethos with LibreOffice, which Denmark chose to replace Microsoft Office, the two projects are not directly related.

Lyon reportedly anticipates cost savings through extended hardware lifespans, along with reduced electronic waste and improved environmental sustainability. Over half of the public contracts for the project have been awarded to companies based in the Auvergne-Rhône-Alpes region, and all have gone to French firms—highlighting a preference for local procurement.

Training for approximately 10,000 civil servants began in June 2025. The initiative is being monitored as a potential model for other municipalities aiming to enhance digital resilience and reduce dependency on proprietary software ecosystems.

OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.

Denmark proposes landmark law to protect citizens from deepfake misuse

Denmark’s Ministry of Culture has introduced a draft law aimed at safeguarding citizens’ images and voices under national copyright legislation, Azernews reports. The move marks a significant step in addressing the misuse of deepfake technologies.

The proposed bill prohibits the use of an individual’s likeness or voice without prior consent and enables affected individuals to claim compensation. While satire and parody remain exempt, the legislation explicitly bans unauthorised deepfake imitations of artistic performances.

Under the proposed framework, online platforms that fail to remove deepfake content upon request could face fines. The legislation would apply only within Denmark and is expected to pass with around 90% parliamentary support.

The bill follows recent incidents involving manipulated videos of Denmark’s Prime Minister and legal challenges against the creators of pornographic deepfakes.

If adopted, Denmark would become the first country in the region to implement such legal measures. The proposal is expected to spark broader discussions across Europe on the ethical boundaries of AI-generated content.

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, but found that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

Statutory damages for wilful copyright infringement in the US can reach $150,000 per work, meaning total liability could run into the billions.

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

Its CEO, Josephine Johnston, urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

Sam Altman reverses his stance on AI hardware, saying current computers can’t meet AI’s demands

Sam Altman, CEO of OpenAI, has reversed his earlier position that artificial general intelligence (AGI) would not require new hardware.

Speaking on a podcast with his brother, Altman said current computers are no longer suited for the fast-evolving demands of AI. Instead of relying on standard hardware, he now believes new solutions are necessary.

OpenAI has already started developing dedicated AI hardware, including potential custom chips, marking a shift from using general-purpose GPUs and servers.

Altman also hinted at a new device — not a wearable, nor a phone — that could serve as an AI companion. Designed to be screen-free and aware of its surroundings, the product is being co-developed with former Apple design chief Jony Ive.

The collaboration, however, has run into legal trouble. A federal judge recently ordered OpenAI and Ive to pause promotion of the new venture after a trademark dispute with a startup named iyO, which had previously pitched similar ideas to Altman’s investment firm.

OpenAI’s recent $6.5 billion acquisition of io Products, co-founded by Ive, reflects the company’s deeper commitment to reshaping how people interact with AI.

Altman’s revised stance on hardware suggests the era of purpose-built AI devices is no longer a vision but a necessary reality.

Cyber Command and Coast Guard establish task force for port cyber defence

US Cyber Command has joined forces with the Coast Guard in a major military exercise designed to simulate cyberattacks on key port infrastructure.

Known as Cyber Guard, the training scenario marked a significant evolution in defensive readiness, integrating for the first time with Pacific Sentry—an Indo-Pacific Command exercise simulating conflict over Taiwan.

The joint effort included the formation of Task Force Port, a temporary unit tasked with coordinating defence of coastal infrastructure.

The drill reflected real-world concerns over the vulnerability of US ports in times of geopolitical tension, and brought together multiple combatant commands under a unified operational framework.

Rear Admiral Dennis Velez described the move as part of a broader shift from isolated training to integrated joint force operations.

Cyber Guard also marked the activation of the Department of Defense Cyber Defense Command (DCDC), previously known as Joint Force Headquarters–DOD Information Network.

The unit worked closely with the Coast Guard, signalling the increasing importance of cyber coordination across military branches when protecting critical infrastructure.

Port security has featured in past exercises but was previously handled as a separate scenario. Its inclusion within the core structure of Cyber Guard suggests a strategic realignment, ensuring cyber defence is embedded in wider contingency planning for future conflicts.

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs presented weak arguments. He suggested that stronger evidence and sharper legal framing might lead to a different outcome in future cases.

Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa at a session, co-organised by UNICRI and ALT Advisory, that marked the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus – the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

Anthropic AI training upheld as fair use; pirated book storage heads to trial

A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, marking a pivotal decision for the generative AI industry.

The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.

The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.

The proposed class action is among several lawsuits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.

Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.

‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.

However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.

The judge ordered a trial, scheduled for December, to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.
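
For a sense of scale, the arithmetic is straightforward (an illustration only, not a figure from the ruling): US statutory damages generally run from $750 to $30,000 per infringed work, rising to $150,000 per work where infringement is wilful, and the court put the disputed library at roughly 7 million books. A minimal sketch of that calculation, hypothetically assuming damages were assessed on every one of those works:

```python
# Back-of-envelope illustration only; the December trial, not this arithmetic,
# will determine any actual award. Statutory damages under US copyright law
# run from $750 to $30,000 per work, up to $150,000 per work if wilful.
WORKS = 7_000_000            # approximate number of books cited in the ruling
STATUTORY_MIN = 750          # USD per work (general statutory minimum)
WILFUL_MAX = 150_000         # USD per work (ceiling for wilful infringement)

print(f"Lower bound: ${WORKS * STATUTORY_MIN:,}")   # $5,250,000,000
print(f"Upper bound: ${WORKS * WILFUL_MAX:,}")      # $1,050,000,000,000
```

Even at the statutory minimum, the total would exceed $5 billion, which is why the potential exposure is routinely described as running into the billions.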

Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.

The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.

Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.

The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.

OpenAI and io face lawsuit over branding conflict

OpenAI and hardware startup io, founded by former Apple designer Jony Ive, are now embroiled in a trademark infringement lawsuit filed by iyO, a Google-backed company specialising in custom headphones.

The legal case prompted OpenAI to withdraw promotional material linked to its $6.005 billion acquisition of io, raising questions about the branding of its future AI device.

Court documents reveal that OpenAI and io had previously met with iyO representatives and tested their custom earbud product, although the tests were unsuccessful.

Despite initial contact and discussions about potential collaboration, OpenAI rejected iyO’s proposals to invest in, license, or acquire the company for $200 million. According to io’s co-founders, however, the device at the centre of the dispute is not an earbud or a wearable.

Executives at io clarified in court that their prototype does not resemble iyO’s product and remains unfinished; it is not a wearable and is not intended for sale within the next year.

OpenAI CEO Sam Altman described the joint project as an attempt to reimagine hardware interfaces, while Jony Ive expressed enthusiasm for the device’s early design, which he said had captured his imagination.

Court testimony and emails suggest io explored various technologies, including desktop, mobile, and portable designs. Internal communications also reference possible ergonomic research using 3D ear scan data.

Although the lawsuit has exposed some development details, the main product of the collaboration between OpenAI and io remains undisclosed.
