A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, in a pivotal decision for the generative AI industry.
The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.
The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.
The proposed class action is among several lawsuits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.
Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.
‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.
However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.
The judge ordered a trial, scheduled for December, to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.
Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.
The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.
Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.
The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.
Perplexity has added a feature to its AI chatbot on X (formerly Twitter) that allows users to generate short AI-created videos with sound.
By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.
However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.
The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.
The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.
Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.
Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.
Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.
The Alan Turing Institute has warned in a new report that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.
The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.
Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.
The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.
Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.
The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.
ABBA legend Björn Ulvaeus is working on a new musical with the help of AI, describing the technology as ‘an extension of your mind.’ Despite previously criticising AI companies’ unlicensed use of artists’ work, the 80-year-old Swedish songwriter believes AI can be a valuable creative partner.
At London’s inaugural SXSW, Ulvaeus explained how he uses AI tools to explore lyrical ideas and overcome writer’s block. ‘It is like having another songwriter in the room with a huge reference frame,’ he said.
‘You can prompt a lyric and ask where to go from there. It usually comes out with garbage, but sometimes something in it gives you another idea.’
Ulvaeus was among over 10,000 creatives who signed an open letter warning of the risks AI poses to artists’ rights. Still, he maintains that when used with consent and care, AI can support — not replace — human creativity. ‘It must not exclude the human,’ he warned.
Epic Games is launching new tools for Fortnite creators that enable them to build AI-powered non-player characters (NPCs), following the debut of an AI-generated Darth Vader that players can talk to in-game.
The feature, which reproduces the iconic voice of James Earl Jones using AI, marks a significant step in interactive gaming—but also comes with its share of challenges and controversy.
According to The Verge, Epic encountered several difficulties in fine-tuning Vader’s voice and responses to feel authentic and fit smoothly into gameplay. Saxs Persson, executive vice president of the Fortnite ecosystem, called it ‘the culmination of a very intense effort for a character everybody understands.’
Persson noted that the team worked carefully to ensure that when Vader joins a player’s team, he behaves as a fearsome and aggressive ally—true to his cinematic persona.
However, the rollout wasn’t entirely smooth. In a live-streamed session, popular Fortnite creator Loserfruit prompted Vader to swear, exposing the system’s content filtering flaws. Epic responded quickly with patches and has since implemented multiple layers of safety checks.
‘We do our best job on day one,’ said Persson, ‘but more importantly, we’re ready to surround the problem and have fixes in place as fast as possible.’
Now, Fortnite creators will have access to the same suite of AI tools and safety systems used to develop Vader. They can control voice tone, dialogue, and NPC behaviour while relying on Epic’s safeguards to avoid inappropriate interactions.
The feature launch comes at a sensitive moment, as the actors’ union SAG-AFTRA has filed a complaint against Epic Games over its use of AI to recreate Vader’s voice.
The union claims that Llama Productions, an Epic subsidiary, employed the technology without consulting or bargaining with the union, replacing the work of human voice actors.
‘We must protect our right to bargain terms and conditions around uses of voice that replace the work of our members,’ SAG-AFTRA said, emphasising its support for actors and estates in managing the use of digital replicas.
As Epic expands its AI capabilities in gaming, it faces both the technical challenges of responsible deployment and the growing debate around AI’s impact on creative professions.
Japan has unveiled a new IP strategy aimed at boosting competitiveness through the use of AI and global talent.
The government hopes to strengthen the economy by leveraging the international appeal of Japanese anime and cultural content, with an expected impact of up to 1 trillion yen.
Prime Minister Shigeru Ishiba stressed that IP and technology are vital to maintaining Japan’s corporate strength. The plan also sets a long-term goal of reaching fourth place or higher in the Global Innovation Index by 2035, up from 13th in 2024.
To support innovation, Japan will explore recognising AI developers as patent holders and encourage cooperation between the public and private sectors across areas like disaster prevention and energy.
Efforts will focus on attracting foreign experts and standardising Japanese technologies globally.
A bitter standoff over AI and copyright has returned to the House of Lords, as ministers and peers clash over how to protect creative workers while fostering technological innovation.
At the centre of the debate is the proposed Data (Use and Access) Bill, which was expected to pass smoothly but is now stuck in parliamentary limbo due to growing resistance.
The bill would allow AI firms to access copyrighted material unless rights holders opt out, a proposal that many artists and peers believe threatens the UK’s £124bn creative industry.
Nearly 300 Lords have called for AI developers to disclose what content they use and seek licences instead of relying on blanket access. Film director Baroness Kidron described the policy as ‘state-sanctioned theft’ and warned it would sacrifice British talent to benefit large tech companies.
Supporters of the bill, like former Meta executive Sir Nick Clegg, argue that forcing AI firms to seek individual permissions would severely damage the UK’s AI sector. The Department for Science, Innovation and Technology insists it will only consider changes if they are proven to benefit creators.
If no resolution is found, the bill risks being shelved entirely. That would also scrap unrelated proposals bundled into it, such as new NHS data-sharing rules and plans for a nationwide map of underground pipes and cables.
Despite the bill’s wide scope, the fight over copyright remains its most divisive and emotionally charged feature.
The New York Times Company and Amazon have signed a multi-year licensing agreement that will allow Amazon to integrate editorial content from The New York Times, NYT Cooking, and The Athletic into a range of its AI-powered services, the companies announced Wednesday.
Under the deal, Amazon will use licensed content for real-time display in consumer-facing products such as Alexa, as well as for training its proprietary foundation models. The agreement marks an expansion of the firms’ existing partnership.
‘The agreement expands the companies’ existing relationship, and will deliver additional value to Amazon customers while bringing Times journalism to broader audiences,’ the companies said in a joint statement.
According to the announcement, the licensing terms include ‘real-time display of summaries and short excerpts of Times content within Amazon products and services’ alongside permission to use the content in AI model development. Amazon platforms will also feature direct links to full Times articles.
Both companies described the partnership as a reflection of a shared commitment to delivering global news and information across Amazon’s AI ecosystem. Financial details of the agreement were not made public.
The announcement comes amid growing industry debate about the role of journalistic material in training AI systems.
By entering a formal licensing arrangement, The New York Times positions itself as one of the first major media outlets to publicly align with a technology company for AI-related content use.
The companies have yet to name additional Amazon products that will feature Times content, and no timeline has been disclosed for the rollout of the new integrations.
One year after launching AI-generated search results via AI Overviews, Google has unveiled AI Mode—a new feature it claims will redefine online search.
Functioning as an integrated chatbot, AI Mode allows users to ask complex questions, receive detailed responses, and continue with follow-up queries, eliminating the need to click through traditional links.
Google’s CEO Sundar Pichai described it as a ‘total reimagining of search,’ noting significant changes in user behaviour during early trials.
Analysts suggest the company is attempting to disrupt its own search business before rivals do, following internal concerns sparked by the rise of tools like ChatGPT.
With AI Mode, Google is increasingly shifting from directing users to websites toward delivering instant answers itself. Critics fear it could dramatically reduce web traffic for publishers who depend on Google for visibility and revenue.
While Google insists the open web will continue to grow, many publishers remain unconvinced. The News/Media Alliance condemned the move, calling it theft of content without fair return.
Links were the last mechanism providing meaningful traffic, said the alliance’s CEO, Danielle Coffey, who urged the US Department of Justice to take action against what she described as monopolistic behaviour.
Meanwhile, Google is rapidly integrating AI across its ecosystem. Alongside AI Mode, it introduced developments in its Gemini model, with the aim of building a ‘world model’ capable of simulating and planning like the human brain.
Google DeepMind’s Demis Hassabis said the goal is to lay the foundations for an AI-native operating system.
It feels like just yesterday that the internet was buzzing over the first images generated by OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.
But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives—both online and off? As it turns out, that process was already underway behind the scenes—and we were none the wiser.
AI in action: How the entertainment industry is using it today
Three years later, we have reached a point where AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace this technology, and starting with the 2025 Academy Awards, films that incorporate AI are now eligible for Oscar nominations.
That decision has been met with mixed reactions, to put it lightly. While some have praised the industry’s eagerness to explore new technological frontiers, others have claimed that AI greatly diminishes the human contribution to the art of filmmaking and therefore takes away the essence of the seventh art form.
The first wave of AI-enhanced storytelling
One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic—a move that sparked both technical admiration and creative scepticism.
With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.
Setting the stage: AI in the spotlight
The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.
In Rogue One: A Star Wars Story, Peter Cushing’s character was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.
Afterward, some speculated that studios tied to Peter Cushing’s legacy—such as Tyburn Film Productions—could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.
The digital Jedi: How AI helped recreate Luke Skywalker
As fate would have it, AI’s grand debut took place in a galaxy far, far away—with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise—but it was more than just fan service.
Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while Hamill’s de-aged voice was recreated with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.
Impressed by their work, Disney turned to Respeecher once again—this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.
AI in moviemaking: Preserving legacy or crossing a line?
The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.
In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.
A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors have voiced strong opposition, arguing that bringing back a performer without their consent reduces them to a brand or asset, rather than honouring them as an artist.
AI in Hollywood: Actors made redundant?
What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse—a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.
Filmmaking is a business, after all—and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.
Meta’s recent collaboration with Blumhouse Productions on MovieGen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.
AI in gaming: Automation or artistic collapse?
Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.
As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. Hoping to cut costs, AAA companies set their sights on AI as a saving grace.
Nvidia’s development of AI chips, along with Ubisoft’s and EA’s investments in AI and machine learning, has sent a clear signal to the industry: automation is no longer just a backend tool—it is a front-facing strategy.
With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are less inclined to sign away their rights for future projects.
AI voice acting in video games
In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to obtain high-quality voice work—but it is not that simple.
Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices—especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.
The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice—or its synthetic clone—are poorly defined, creating loopholes developers can exploit.
AI voice cloning challenges legal boundaries in gaming
The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.
A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.
Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty—with his family’s blessing—setting a respectful precedent for the ethical use of AI.
How AI is changing music production and artist identity
AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.
Imogen Heap, for one, is embracing the change with Mogen, an AI version of herself that can create music and interact with fans—blurring the line between human creativity and digital innovation.
Major labels are also experimenting: Universal Music recently used AI to reimagine Brenda Lee’s 1958 classic ‘Rockin’ Around the Christmas Tree’ in Spanish, preserving the spirit of the original while expanding its cultural reach.
AI and the future of entertainment
As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told—and who gets to tell them.
Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand-in-hand.