Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Díaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that more substantial evidence and legal framing might lead to a different outcome in future cases.

Anthropic AI training upheld as fair use; pirated book storage heads to trial

A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, marking a pivotal decision for the generative AI industry.

The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.

The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.

The proposed class action is among several lawsuits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.

Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.

‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.

However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.

The judge ordered a trial, scheduled for December, to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.

Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.

The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.

Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.

The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

Björn Ulvaeus says AI is ‘an extension of your mind’

ABBA legend Björn Ulvaeus is working on a new musical with the help of AI, describing the technology as ‘an extension of your mind.’ Despite previously criticising AI companies’ unlicensed use of artists’ work, the 80-year-old Swedish songwriter believes AI can be a valuable creative partner.

At London’s inaugural SXSW, Ulvaeus explained how he uses AI tools to explore lyrical ideas and overcome writer’s block. ‘It is like having another songwriter in the room with a huge reference frame,’ he said.

‘You can prompt a lyric and ask where to go from there. It usually comes out with garbage, but sometimes something in it gives you another idea.’

Ulvaeus was among over 10,000 creatives who signed an open letter warning of the risks AI poses to artists’ rights. Still, he maintains that when used with consent and care, AI can support — not replace — human creativity. ‘It must not exclude the human,’ he warned.

Epic adds AI NPC tools to Fortnite as Vader voice sparks union clash

Epic Games is launching new tools for Fortnite creators that enable them to build AI-powered non-player characters (NPCs), following the debut of an AI-generated Darth Vader that players can talk to in-game.

The feature, which reproduces the iconic voice of James Earl Jones using AI, marks a significant step in interactive gaming—but also comes with its share of challenges and controversy.

According to The Verge, Epic encountered several difficulties in fine-tuning Vader’s voice and responses to feel authentic and fit smoothly into gameplay. Saxs Persson, executive vice president of the Fortnite ecosystem, described the feature as ‘the culmination of a very intense effort for a character everybody understands’.

Persson noted that the team worked carefully to ensure that when Vader joins a player’s team, he behaves as a fearsome and aggressive ally—true to his cinematic persona.

However, the rollout wasn’t entirely smooth. In a live-streamed session, popular Fortnite creator Loserfruit prompted Vader to swear, exposing the system’s content filtering flaws. Epic responded quickly with patches and has since implemented multiple layers of safety checks.

‘We do our best job on day one,’ said Persson, ‘but more importantly, we’re ready to surround the problem and have fixes in place as fast as possible.’

Now, Fortnite creators will have access to the same suite of AI tools and safety systems used to develop Vader. They can control voice tone, dialogue, and NPC behaviour while relying on Epic’s safeguards to avoid inappropriate interactions.

The feature launch comes at a sensitive moment, as the actors’ union SAG-AFTRA has filed a complaint against Epic Games over its use of AI to recreate Vader’s voice.

The union claims that Llama Productions, an Epic subsidiary, employed the technology without consulting or bargaining with the union, replacing the work of human voice actors.

‘We must protect our right to bargain terms and conditions around uses of voice that replace the work of our members,’ SAG-AFTRA said, emphasising its support for actors and estates in managing the use of digital replicas.

As Epic expands its AI capabilities in gaming, it faces both the technical challenges of responsible deployment and the growing debate around AI’s impact on creative professions.

Japan plans to boost IP through AI and global talent

Japan has unveiled a new IP strategy aimed at boosting competitiveness through the use of AI and global talent.

The government hopes to strengthen the economy by leveraging the international appeal of Japanese anime and cultural content, with an expected economic impact of up to 1 trillion yen.

Prime Minister Shigeru Ishiba stressed that IP and technology are vital to maintaining Japan’s corporate strength. The plan also sets a long-term goal of reaching fourth place or higher in the Global Innovation Index by 2035, up from 13th in 2024.

To support innovation, Japan will explore recognising AI developers as patent holders and encourage cooperation between the public and private sectors across areas like disaster prevention and energy.

Efforts will focus on attracting foreign experts and standardising Japanese technologies globally.

AI copyright clash stalls UK data bill

A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.

At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.

The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.

Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Former film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.

Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.

If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a national map of underground pipes and cables.

Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.

New York Times partners with Amazon on AI integration

The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.

Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models.

‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.

According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.

Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.

The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.

By entering a formal licensing arrangement, The New York Times positions itself as one of the first major media outlets to publicly align with a technology company for AI-related content use.

The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.

AI Mode reshapes Google’s search results

One year after launching AI-generated search results via AI Overviews, Google has unveiled AI Mode—a new feature it claims will redefine online search.

Functioning as an integrated chatbot, AI Mode allows users to ask complex questions, receive detailed responses, and continue with follow-up queries, eliminating the need to click through traditional links.

Google’s CEO Sundar Pichai described it as a ‘total reimagining of search,’ noting significant changes in user behaviour during early trials.

Analysts suggest the company is attempting to disrupt its own search business before rivals do, following internal concerns sparked by the rise of tools like ChatGPT.

With AI Mode, Google is increasingly shifting from directing users to websites toward delivering instant answers itself. Critics fear it could dramatically reduce web traffic for publishers who depend on Google for visibility and revenue.

While Google insists the open web will continue to grow, many publishers remain unconvinced. The News/Media Alliance condemned the move, calling it theft of content without fair return.

Links were the last mechanism providing meaningful traffic, said the alliance’s CEO, Danielle Coffey, who urged the US Department of Justice to take action against what she described as monopolistic behaviour.

Meanwhile, Google is rapidly integrating AI across its ecosystem. Alongside AI Mode, it introduced developments in its Gemini model, with the aim of building a ‘world model’ capable of simulating and planning like the human brain.

Google DeepMind’s Demis Hassabis said the goal is to lay the foundations for an AI-native operating system.
