Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Diaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime in Africa: Turning research into justice and action

At the Internet Governance Forum 2025 in Lillestrøm, Norway, experts and policymakers gathered to confront the escalating issue of cybercrime across Africa. The session, co-organised by UNICRI and ALT Advisory, marked the launch of the research report ‘Access to Justice in the Digital Age: Empowering Victims of Cybercrime in Africa’.

Based on experiences in South Africa, Namibia, Sierra Leone, and Uganda, the study highlights a troubling rise in cybercrime, much of which remains invisible due to widespread underreporting, institutional weaknesses, and outdated or absent legal frameworks. The report’s author, Tina Power, underscored the need to recognise cybercrime not merely as a technical challenge, but as a profound justice issue.

One of the central concerns raised was the gendered nature of many cybercrimes. Victims—especially women and LGBTQI+ individuals—face severe societal stigma and are often met with disbelief or indifference when reporting crimes such as revenge porn, cyberstalking, or online harassment.

Sandra Aceng from the Women of Uganda Network detailed how cultural taboos, digital illiteracy, and unsympathetic police responses prevent victims from seeking justice. Without adequate legal tools or trained officers, victims are left exposed, compounding trauma and enabling perpetrators.

Law enforcement officials, such as Zambia’s Michael Ilishebo, described various operational challenges, including limited forensic capabilities, the complexity of crimes facilitated by AI and encryption, and the lack of cross-border legal cooperation. Only a few African nations are party to key international instruments like the Budapest Convention, complicating efforts to address cybercrime that often spans multiple jurisdictions.

Ilishebo also highlighted how social media platforms frequently ignore law enforcement requests, citing global guidelines that don’t reflect African legal realities. To counter these systemic challenges, speakers advocated for a robust, victim-centred response built on strong laws, sustained training for justice-sector actors, and improved collaboration between governments, civil society, and tech companies.

Nigerian Senator Shuaib Afolabi Salisu called for a unified African stance to pressure big tech into respecting the continent’s legal systems. The session ended with a consensus – the road to justice in Africa’s digital age must be paved with coordinated action, inclusive legislation, and empowered victims.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Anthropic AI training upheld as fair use; pirated book storage heads to trial

A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, marking a pivotal decision for the generative AI industry.

The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.

The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.

The proposed class action is among several suits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.

Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.

‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.

However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.

The judge ordered a trial, scheduled for December, to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.
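For scale, applying that statutory ceiling to the roughly 7 million books at issue would yield a theoretical maximum of about $1.05 trillion (7,000,000 × $150,000), although actual awards typically fall far below the maximum.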

Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.

The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.

Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.

The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and io face lawsuit over branding conflict

OpenAI and hardware startup io, founded by former Apple designer Jony Ive, are now embroiled in a trademark infringement lawsuit filed by iyO, a Google-backed company specialising in custom headphones.

The legal case prompted OpenAI to withdraw promotional material linked to its $6.005 billion acquisition of io, raising questions about the branding of its future AI device.

Court documents reveal that OpenAI and io had previously met with iyO representatives and tested their custom earbud product, although the tests were unsuccessful.

Despite initial contact and discussions about potential collaboration, OpenAI rejected iyO’s proposals to invest, license, or acquire the company for $200 million. According to io’s co-founders, however, the device at the centre of the dispute is not an earbud or wearable.

Executives at io clarified in court that their prototype does not resemble iyO’s product and remains unfinished. It is neither wearable nor intended for sale within the next year.

OpenAI CEO Sam Altman described the joint project as an attempt to reimagine hardware interfaces, while Jony Ive expressed enthusiasm for the device’s early design, which he said had captured his imagination.

Court testimony and emails suggest io explored various technologies, including desktop, mobile, and portable designs. Internal communications also reference possible ergonomic research using 3D ear scan data.

Although the lawsuit has exposed some development details, the main product of the collaboration between OpenAI and io remains undisclosed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Banks and tech firms create open-source AI standards

A group of leading banks and technology firms has joined forces to create standardised open-source controls for AI within the financial sector.

The initiative, led by the Fintech Open Source Foundation (FINOS), includes financial institutions such as Citi, BMO, RBC, and Morgan Stanley, working alongside major cloud providers like Microsoft, Google Cloud, and Amazon Web Services.

Known as the Common Controls for AI Services project, the effort seeks to build neutral, industry-wide standards for AI use in financial services.

The framework will be tailored to regulatory environments, offering peer-reviewed governance models and live validation tools to support real-time compliance. It extends FINOS’s earlier Common Cloud Controls framework, which originated with contributions from Citi.

Gabriele Columbro, Executive Director of FINOS, described the moment as critical for AI in finance. He emphasised the role of open source in encouraging early collaboration between financial firms and third-party providers on shared security and compliance goals.

Instead of isolated standards, the project promotes unified approaches that reduce fragmentation across regulated markets.

The project remains open for further contributions from financial organisations, AI vendors, regulators, and technology companies.

As part of the Linux Foundation, FINOS provides a neutral space for competitors to co-develop tools that make AI adoption in finance safer, more transparent, and more efficient.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple considers buying Perplexity AI

Apple is reportedly considering the acquisition of Perplexity AI as it attempts to catch up in the fast-moving race for dominance in generative technology.

According to Bloomberg, the discussions, which involve senior executives including services chief Eddy Cue and M&A head Adrian Perica, remain at an early stage.

Such a move would mark a significant shift for Apple, which typically avoids large-scale takeovers. However, with investor pressure mounting after an underwhelming developer conference, the tech giant may rethink its traditionally cautious acquisition strategy.

Perplexity has gained prominence for its fast, clear AI chatbot and recently secured funding at a $14 billion valuation.

Should Apple proceed, the acquisition would be its largest ever, potentially transforming its position in AI and reducing its long-standing dependence on Google’s search infrastructure.

Apple’s slow development of Siri and reliance on a $20 billion revenue-sharing deal with Google have left it trailing rivals. With that partnership now under regulatory scrutiny in the US, Apple may view Perplexity as a vital step towards building a more autonomous search and AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity vs freedom of expression: IGF 2025 panel calls for balanced, human-centred digital governance

At the 2025 Internet Governance Forum in Lillestrøm, Norway, experts from government, civil society, and the tech industry convened to discuss one of the thorniest challenges of the digital age: how to secure cyberspace without compromising freedom of expression and fundamental human rights. The session, moderated by terrorism survivor and activist Bjørn Ihler, revealed a shared urgency across sectors to move beyond binary thinking and craft nuanced, people-centred approaches to online safety.

Paul Ash, head of the Christchurch Call Foundation, warned against framing regulation and inaction as the only options, urging legislators to build human rights safeguards directly into cybersecurity laws. Echoing him, Mallory Knodel of the Global Encryption Coalition stressed the foundational role of end-to-end encryption, calling it a necessary boundary-setting tool in an era where digital surveillance and content manipulation pose systemic risks. She warned that weakening encryption compromises privacy and invites broader security threats.

Representing the tech industry, Meta’s Cagatay Pekyrour underscored the complexity of moderating content across jurisdictions with over 120 speech-restricting laws. He called for more precise legal definitions, robust procedural safeguards, and a shift toward ‘system-based’ regulatory frameworks that assess platforms’ processes rather than micromanage content.

Meanwhile, Romanian regulator and former MP Pavel Popescu detailed his country’s recent struggles with election-related disinformation and cybercrime, arguing that social media companies must shoulder more responsibility, particularly in responding swiftly to systemic threats like AI-driven scams and coordinated influence operations.

While perspectives diverged on enforcement and regulation, all participants agreed that lasting digital governance requires sustained multistakeholder collaboration grounded in transparency, technical expertise, and respect for human rights. As the digital landscape evolves rapidly under the influence of AI and new forms of online harm, this session underscored that no single entity or policy can succeed alone, and that the stakes for security and democracy have never been higher.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France appeals porn site ruling on EU legal grounds

The French government is challenging a recent decision by the Administrative Court of Paris that temporarily halted the enforcement of mandatory age verification on pornographic websites based in the EU. The court found France’s current approach potentially inconsistent with EU law, specifically the 2000 E-Commerce Directive, which upholds the ‘country-of-origin’ principle.

That rule limits an EU country’s authority to regulate online services hosted in another member state unless it follows a formal process involving both the host country and the European Commission. At the heart of the dispute is whether France correctly followed the required legal steps.

While French authorities say they notified the host countries of porn companies like Hammy Media (Xhamster) and Aylo (owner of Pornhub and others) and waited the mandated three months, legal experts argue that notifying the Commission is also essential. So far, there is no confirmation that this additional step was taken, which may weaken France’s legal standing.

Digital Minister Clara Chappaz reaffirmed the government’s commitment to enforcing age checks, calling it a ‘priority’ in a public statement. The ministry insists its rules align with the EU’s Audiovisual Media Services Directive.

However, the court’s ruling highlights broader tensions between France’s national digital regulations and overarching EU law. Similar legal challenges have already forced France to adjust parts of its digital, influencer, and cloud regulation frameworks in the past two years.

The appeal could have significant implications for age restrictions on adult content and for how France asserts digital sovereignty within the EU. If the court upholds the suspension, other nationally driven digital regulations may also face legal scrutiny under EU principles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased, risk-based framework to regulate AI in the EU. Some key provisions, including prohibitions on certain practices and AI literacy obligations, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI in line with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company says it will keep customers updated as enforcement advances and new guidance is issued.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!