It feels like just yesterday that the internet was buzzing over the first iterations of OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.
But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives—both online and off? As it turns out, that process was already underway behind the scenes—and we were none the wiser.
AI in action: How the entertainment industry is using it today
Three years later, AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace the technology, and starting with the 2025 Academy Awards, films that incorporate AI are now eligible for Oscar nominations.
That decision has been met with mixed reactions, to put it lightly. While some have praised the industry’s eagerness to explore new technological frontiers, others argue that AI diminishes the human contribution to filmmaking and, in doing so, strips the seventh art of its essence.
The first wave of AI-enhanced storytelling
One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic—a move that sparked both technical admiration and creative scepticism.
With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.
Setting the stage: AI in the spotlight
The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.
In Rogue One: A Star Wars Story, Peter Cushing’s character was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.
Afterward, some speculated that studios tied to Peter Cushing’s legacy—such as Tyburn Film Productions—could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.
The digital Jedi: How AI helped recreate Luke Skywalker
As fate would have it, AI’s grand debut took place in a galaxy far, far away—with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise—but it was more than just fan service.
Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while Hamill’s de-aged voice was recreated with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.
Impressed by their work, Disney turned to Respeecher once again—this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.
AI in moviemaking: Preserving legacy or crossing a line?
The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.
In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.
A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors voiced strong opposition, arguing that bringing a performer back without their consent reduces them to a brand or asset, rather than honouring them as an artist.
AI in Hollywood: Actors made redundant?
What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse—a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.
Filmmaking is a business, after all—and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.
Meta’s recent collaboration with Blumhouse Productions on Movie Gen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.
AI in gaming: Automation or artistic collapse?
Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.
As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. In the hope of cutting costs, AAA companies set their sights on AI as a potential saving grace.
Nvidia’s development of AI chips, along with Ubisoft’s and EA’s investments in AI and machine learning, has sent a clear signal to the industry: automation is no longer just a backend tool—it is a front-facing strategy.
With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are increasingly reluctant to sign away their rights for future projects.
AI voice acting in video games
In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to access high-quality voice talent—but it is not quite that simple.
Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices—especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.
The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice—or its synthetic clone—are poorly defined, creating loopholes developers can exploit.
AI voice cloning challenges legal boundaries in gaming
The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.
A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.
Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty—with his family’s blessing—setting a respectful precedent for the ethical use of AI.
How AI is changing music production and artist identity
AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.
Artists such as Imogen Heap are embracing the change: her project Mogen, an AI version of herself, can create music and interact with fans—blurring the line between human creativity and digital innovation.
Major labels are also experimenting: Universal Music recently used AI to reimagine Brenda Lee’s 1958 classic Rockin’ Around the Christmas Tree in Spanish, preserving the spirit of the original while expanding its cultural reach.
AI and the future of entertainment
As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told—and who gets to tell them.
Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand in hand.