xAI gets Memphis approval to run 15 gas turbines

xAI, Elon Musk’s AI company, has secured permits to operate 15 natural gas turbines at its Memphis data centre, despite facing legal threats over alleged Clean Air Act violations.

The Shelby County Health Department approved the generators, which can produce up to 247 megawatts, provided specific emissions controls are in place.

Environmental lawyers say xAI had already been running as many as 35 generators without permits. The Southern Environmental Law Center (SELC), acting on behalf of the NAACP, has accused the company of serious pollution and is preparing to sue.

Even under the new permit, xAI is allowed to emit substantial pollutants annually, including nearly 10 tons of formaldehyde — a known carcinogen.

Community concerns about the health impact remain strong. A local group pledged $250,000 for an independent air quality study, and although the City of Memphis carried out its own tests, the SELC questioned their validity.

The tests reportedly omitted ozone measurements, were carried out in favourable wind conditions, and used monitoring equipment placed too close to buildings.

Officials previously argued that the turbines were exempt from regulation due to their ‘mobile’ status, a claim the SELC disputed as legally flawed. Meanwhile, xAI has recently raised $10 billion, split between debt and equity, highlighting its rapid expansion even as regulatory scrutiny grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Judge allows US antitrust case against Apple to proceed

A US federal judge has rejected Apple’s attempt to dismiss a major antitrust lawsuit, allowing the case to move forward. The ruling, issued Monday by District Judge Xavier Neals in New Jersey, marks a significant step in the Justice Department’s ongoing challenge to Apple’s business practices.

The lawsuit, filed 15 months ago, accuses Apple of building an illegal monopoly around the iPhone by erecting barriers that prevent competition and inflate profits. Neals’ 33-page opinion found the case strong enough to proceed to trial, which could begin as early as 2027.

Apple had argued the case was flawed, claiming the government misunderstood the smartphone market and distorted legal standards. But Judge Neals ruled there was sufficient evidence for the Justice Department’s claims to be tested in court.

At the heart of the lawsuit is Apple’s so-called ‘walled garden’ — a tightly controlled ecosystem of hardware and software. While Apple says this approach enhances user experience, the government claims it stifles innovation and raises prices.

The court agreed the case contained ‘several allegations of technological barricades that constitute anticompetitive conduct.’ Neals also warned of the ‘dangerous possibility’ that Apple’s control over the iPhone has crossed into illegal monopoly territory.

In response, Apple maintained its position, stating: ‘The DOJ’s case is wrong on the facts and the law.’ The company pledged to continue defending itself in court against the accusations.

The lawsuit is one of several legal threats confronting Apple, whose 2023 profits totalled $94 billion on $295 billion in revenue. In April, another judge barred Apple from charging fees on in-app purchases processed through alternative payment methods.

That ruling could cost the company billions in commission revenue, previously collected at rates of 15% to 30%. Additionally, a separate antitrust case may impact Apple’s agreement with Google, which is worth over $20 billion per year.

Under that deal, Google is the default search engine on Apple devices — a setup under scrutiny for its alleged anticompetitive effects. A Washington, DC judge is now considering whether to outlaw the arrangement as part of a broader case against Google.

On the same day as Neals’ ruling, Apple was also hit with a new lawsuit by app developer Proton. The case seeks class-action status and accuses Apple of monopolistic behaviour that harms smaller developers and app creators.

Proton’s suit demands punitive damages and a court order to dismantle the walled garden approach central to Apple’s ecosystem. Combined with the DOJ case, the new lawsuit deepens Apple’s mounting legal pressures over its dominance in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

More European cities move to replace Microsoft software as part of digital sovereignty efforts

Following similar moves by Denmark and the German state of Schleswig-Holstein, the city of Lyon—France’s third-largest city and a major economic centre—has initiated a migration from Microsoft Windows and Office to a suite of open-source alternatives, including Linux, OnlyOffice, NextCloud, and PostgreSQL.

This transition is part of Lyon’s broader strategy to strengthen digital sovereignty and reduce reliance on foreign technology providers. As with other European initiatives, the decision aligns with wider EU discussions about data governance and digital autonomy. Concerns over control of sensitive data and long-term sustainability have contributed to increased interest in open-source solutions.

Although Microsoft has publicly affirmed its commitment to supporting EU customers regardless of political context, some European public authorities continue to explore alternatives that allow for local control over software infrastructure and data hosting.

In line with the European Commission’s 2025 State of the Digital Decade report—which notes that Europe has yet to fully leverage the potential of open-source technologies—Lyon aims to enhance both transparency and control over its digital systems.

Lyon’s migration also supports regional economic development. Its collaboration platform, Territoire Numérique Ouvert (Open Digital Territory), is being co-developed with local digital organisations and will be hosted in regional data centres. The project provides secure, interoperable tools for communication, office productivity, and document collaboration.

The city has begun gradually replacing Windows with Linux and Microsoft Office with OnlyOffice across municipal workstations. OnlyOffice, developed by Latvia-based Ascensio System SIA, is an open-source productivity suite distributed under the GNU Affero General Public License. While it shares a similar open-source ethos with LibreOffice, which was chosen in Denmark to replace Microsoft Office, the two are not directly related.

Lyon reportedly anticipates cost savings through extended hardware lifespans, reduced electronic waste, and improved environmental sustainability. Over half of the public contracts for this project have been awarded to companies based in the Auvergne-Rhône-Alpes region, with all awarded to French firms—highlighting a preference for local procurement.

Training for approximately 10,000 civil servants began in June 2025. The initiative is being monitored as a potential model for other municipalities aiming to enhance digital resilience and reduce dependency on proprietary software ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark proposes landmark law to protect citizens from deepfake misuse

Denmark’s Ministry of Culture has introduced a draft law aimed at safeguarding citizens’ images and voices under national copyright legislation, Azernews reports. The move marks a significant step in addressing the misuse of deepfake technologies.

The proposed bill prohibits using an individual’s likeness or voice without prior consent, enabling affected individuals to claim compensation. While satire and parody remain exempt, the legislation explicitly bans the unauthorised use of deepfakes in artistic performances.

Under the proposed framework, online platforms that fail to remove deepfake content upon request could be subject to fines. The legislation will apply only within Denmark and is expected to pass with up to 90% parliamentary support.

The bill follows recent incidents involving manipulated videos of Denmark’s Prime Minister and legal challenges against the creators of pornographic deepfakes.

If adopted, Denmark would become the first country in the region to implement such legal measures. The proposal is expected to spark broader discussions across Europe on the ethical boundaries of AI-generated content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time. Still, he stated that Anthropic obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

The penalties for wilful copyright infringement in the US could reach up to $150,000 per work, meaning total compensation might run into the billions.
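
To put that scale in perspective, here is a purely illustrative calculation; the number of works and the per-work awards used below are hypothetical assumptions, not figures from the ruling. Statutory damages scale linearly with the number of infringed works:

% Purely illustrative: N (number of infringed works) and d (damages per work) are hypothetical values, not case figures.
\[
D_{\text{total}} = N \times d,
\qquad 5{,}000{,}000 \times \$1{,}000 = \$5\ \text{billion},
\qquad 5{,}000{,}000 \times \$150{,}000 = \$750\ \text{billion (statutory maximum per work)}.
\]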

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman reverses his stance on AI hardware as current computers can’t meet the demands

Sam Altman, CEO of OpenAI, has reversed his earlier position that artificial general intelligence (AGI) would not require new hardware.

Speaking on a podcast with his brother, Altman said current computers are no longer suited for the fast-evolving demands of AI. Instead of relying on standard hardware, he now believes new solutions are necessary.

OpenAI has already started developing dedicated AI hardware, including potential custom chips, marking a shift from using general-purpose GPUs and servers.

Altman also hinted at a new device — not a wearable, nor a phone — that could serve as an AI companion. Designed to be screen-free and aware of its surroundings, the product is being co-developed with former Apple design chief Jony Ive.

The collaboration, however, has run into legal trouble. A federal judge recently ordered OpenAI and Ive to pause the promotion of the new venture after a trademark dispute with a startup named IYO, which had previously pitched similar ideas to Altman’s investment firm.

OpenAI’s recent $6.5 billion acquisition of io Products, co-founded by Ive, reflects the company’s deeper commitment to reshaping how people interact with AI.

Altman’s revised stance on hardware suggests the era of purpose-built AI devices is no longer a vision but a necessary reality.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Command and Coast Guard establish task force for port cyber defence

US Cyber Command has joined forces with the Coast Guard in a major military exercise designed to simulate cyberattacks on key port infrastructure.

Known as Cyber Guard, the training scenario marked a significant evolution in defensive readiness, integrating for the first time with Pacific Sentry—an Indo-Pacific Command exercise simulating conflict over Taiwan.

The joint effort included the formation of Task Force Port, a temporary unit tasked with coordinating defence of coastal infrastructure.

The drill reflected real-world concerns over the vulnerability of US ports in times of geopolitical tension, and brought together multiple combatant commands under a unified operational framework.

Rear Admiral Dennis Velez described the move as part of a broader shift from isolated training to integrated joint force operations.

Cyber Guard also marked the activation of the Department of Defense Cyber Defense Command (DCDC), previously known as Joint Force Headquarters–DOD Information Network.

The unit worked closely with the Coast Guard, signalling the increasing importance of cyber coordination across military branches when protecting critical infrastructure.

Port security has featured in past exercises but was previously handled as a separate scenario. Its inclusion within the core structure of Cyber Guard suggests a strategic realignment, ensuring cyber defence is embedded in wider contingency planning for future conflicts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa. The session, co-organised by UNICRI and ALT Advisory, marked the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus – the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.