The rise of AI in Hollywood, gaming, and music

It feels like just yesterday that the internet was buzzing over the first renditions of OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.

But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives—both online and off? As it turns out, that process was already underway behind the scenes—and we were none the wiser.

AI in action: How the entertainment industry is using it today

Three years later, we have reached a point where AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace this technology, and starting with the 2025 Academy Awards, films that incorporate AI are now eligible for Oscar nominations.

That decision has been met with mixed reactions, to put it mildly. While some have praised the industry’s eagerness to explore new technological frontiers, others claim that AI greatly diminishes the human contribution to filmmaking and thereby strips away the essence of the seventh art.

The first wave of AI-enhanced storytelling

One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic—a move that sparked both technical admiration and creative scepticism.

With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.

Adrien Brody’s Hungarian dialogue in ‘The Brutalist’ was refined with generative AI to sound more authentic. Screenshot / YouTube / Oscars

Setting the stage: AI in the spotlight

The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.

In Rogue One: A Star Wars Story, Peter Cushing’s character was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.

Afterward, some speculated that studios tied to Peter Cushing’s legacy—such as Tyburn Film Productions—could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.

The digital Jedi: How AI helped recreate Luke Skywalker

As fate would have it, AI’s grand debut took place in a galaxy far, far away, with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise, but it was more than just fan service.

Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while Hamill’s de-aged voice was recreated with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.

Impressed by their work, Disney turned to Respeecher once again—this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.

Screenshot / YouTube / Star Wars

AI in moviemaking: Preserving legacy or crossing a line?

The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.

In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.

A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors voiced strong opposition, arguing that bringing back a performer without their consent reduces them to a brand or asset rather than honouring them as an artist.

AI in Hollywood: Actors made redundant?

What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse—a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.

Filmmaking is a business, after all—and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.

Meta’s recent collaboration with Blumhouse Productions on Movie Gen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.

AI in gaming: Automation or artistic collapse?

Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.

As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. Hoping to cut costs, AAA publishers set their sights on AI as a potential saving grace.

Nvidia’s development of AI chips, along with Ubisoft’s and EA’s investments in AI and machine learning, has sent a clear signal to the industry: automation is no longer just a backend tool but a front-facing strategy.

With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are less inclined to sign away their rights for future projects.

AI voice acting in video games

In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to get high-quality talent—but if only it were that simple.

Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices—especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.

The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice—or its synthetic clone—are poorly defined, creating loopholes developers can exploit.

AI voice cloning challenges legal boundaries in gaming

The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.

A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.

Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty—with his family’s blessing—setting a respectful precedent for the ethical use of AI.

How AI is changing music production and artist identity

AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.

Artists like Imogen Heap are embracing the change with projects like Mogen, an AI version of herself that can create music and interact with fans—blurring the line between human creativity and digital innovation.

Major labels are also experimenting: Universal Music has recently used AI to reimagine Brenda Lee’s 1958 classic in Spanish, preserving the spirit of the original while expanding its cultural reach.

AI and the future of entertainment

As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told—and who gets to tell them.

Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand-in-hand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Darth Vader in Fortnite sparks union dispute

The use of an AI-generated Darth Vader voice in Fortnite has triggered a legal dispute between SAG-AFTRA and Epic Games.

According to GamesIndustry.biz, the actors’ union filed an unfair labor practice complaint, claiming it was not informed or consulted about the decision to use an artificial voice model in the game.

In Fortnite’s Galactic Battle season, players who defeat Darth Vader in Battle Royale can recruit him, triggering limited voice interactions powered by conversational AI.

The voice used stems from a licensing agreement with the estate of James Earl Jones, who retired in 2022 and granted rights for AI use of his iconic performance.

While Epic Games has confirmed it had legal permission to use Jones’ voice, SAG-AFTRA alleges the company bypassed union protocols by not informing them or offering the role to a human actor.

The outcome of this dispute could have broader implications for how AI voices are integrated into video games and media going forward, particularly regarding labor rights and union oversight.


India probes Uber and Ola over iPhone pricing

The Indian government has issued notices to ride-hailing companies Ola and Uber, launching an investigation into allegations of price discrimination. Concerns have arisen over reports and user complaints suggesting that iPhone users are being charged significantly higher fares than Android users for the same rides. The investigation, led by the Central Consumer Protection Authority (CCPA), aims to determine whether these price discrepancies are occurring and whether they constitute unfair trade practices.

The government has previously expressed strong opposition to differential pricing, deeming it an unfair and discriminatory practice. India is a crucial market for both Ola and Uber, with intense competition among various ride-hailing services. The outcome of this investigation could have significant implications for the industry, potentially impacting pricing models and consumer trust.
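The core question in such an audit can be sketched in a few lines: given fare quotes collected at the same moment for the same route on two device types, how large is the average gap? The figures and the `average_fare_gap` helper below are purely illustrative assumptions, not data from the CCPA probe.

```python
from statistics import mean

def average_fare_gap(ios_fares, android_fares):
    """Return the mean fare difference (iOS minus Android) for quotes
    paired by route and time. A persistently positive value would be
    one signal of device-based differential pricing."""
    if len(ios_fares) != len(android_fares):
        raise ValueError("quotes must be paired per route and time")
    gaps = [i - a for i, a in zip(ios_fares, android_fares)]
    return mean(gaps)

# Hypothetical paired quotes (same routes, same times), in rupees
ios_quotes = [310, 295, 402, 250]
android_quotes = [289, 280, 377, 241]
print(average_fare_gap(ios_quotes, android_quotes))  # positive => iOS quoted higher
```

A real audit would of course need many more samples and controls for surge pricing and location, but the paired-quote comparison is the basic shape of the test.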

Beyond the ride-hailing sector, the CCPA will also examine potential pricing disparities in other sectors, including food delivery and online ticketing platforms. The broader investigation aims to identify and address any instances where consumers may be facing discriminatory pricing based on factors such as the device they use or other personal characteristics.

Ensuring fair and transparent pricing practices in the digital economy is crucial. As technology continues to shape our daily lives, it is essential to address concerns about potential algorithmic biases and discriminatory practices that may be embedded within digital platforms. The Indian government’s action sends a clear message that such practices will not be tolerated and that consumer protection remains a top priority.

Mystery of David Mayer and ChatGPT resolved

Social media buzzed over the weekend as ChatGPT, the popular AI chatbot, mysteriously refused to generate the name ‘David Mayer.’ Users reported responses halting mid-sentence or error messages when attempting to input the name, sparking widespread speculation about Mayer’s identity and theories that he might have requested privacy through legal means.

OpenAI, the chatbot’s developer, attributed the issue to a system glitch. A spokesperson clarified, ‘One of our tools mistakenly flagged this name, which shouldn’t have happened. We’re working on a fix.’ The company has since resolved the glitch for ‘David Mayer,’ but other names continue to trigger errors.
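The behaviour users described, replies halting mid-sentence the moment a flagged name appears, is consistent with an output-side blocklist filter. The sketch below is a hypothetical illustration of that mechanism, not OpenAI’s actual moderation code; `stream_with_name_filter` and the token stream are invented for the example.

```python
def stream_with_name_filter(tokens, flagged_names):
    """Yield tokens from a (hypothetical) model stream, aborting as soon
    as the accumulated text contains a flagged name -- mimicking how an
    over-eager safety filter can halt a reply mid-sentence."""
    emitted = []
    for token in tokens:
        emitted.append(token)
        text = "".join(emitted).lower()
        if any(name.lower() in text for name in flagged_names):
            raise RuntimeError("unable to produce a response")
        yield token

reply = ["The ", "person ", "you ", "asked ", "about, ", "David ", "Mayer", "..."]
shown = []
try:
    for tok in stream_with_name_filter(reply, ["David Mayer"]):
        shown.append(tok)
except RuntimeError:
    print("".join(shown) + "[response halted]")
```

Because the check runs on the accumulated text after each token, the user sees everything up to the offending name and then an abrupt cut-off, which matches the reported symptom.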

Conspiracy theories emerged online, with some suggesting a link to David Mayer de Rothschild, who denied involvement, and others speculating connections to a deceased academic with ties to a security list. Experts noted the potential relevance of GDPR’s ‘right to be forgotten’ privacy rules, which allow individuals to request the removal of their data from digital platforms.

However, privacy specialists highlighted AI systems’ challenges in fully erasing personal data due to their reliance on massive datasets from public sources. While the incident has drawn attention to the complexities of AI data handling and privacy compliance, OpenAI remains tight-lipped on whether the glitch stemmed from a deletion request under GDPR guidelines. The situation underscores the tension between advancing AI capabilities and safeguarding individual privacy.

Judge rules NYC food delivery data law unconstitutional

A federal judge has ruled that New York City’s law requiring food delivery companies to share customer data with restaurants is unconstitutional. The decision, handed down by US District Judge Analisa Torres, found the law violated the First Amendment by regulating commercial speech inappropriately.

The law, introduced in 2021 to support local restaurants recovering from the COVID-19 pandemic, required delivery platforms like DoorDash and UberEats to share customer details. The delivery companies argued that the law threatened both customer privacy and their business by allowing restaurants to use the data for their own marketing purposes.

Judge Torres stated that New York City failed to prove the law was necessary and suggested alternative methods to support restaurants, such as letting customers opt-in to share their data or providing financial incentives. City officials are reviewing the ruling, while delivery companies hailed it as a victory for data protection.

The New York City Hospitality Alliance expressed disappointment, claiming the ruling hurts small businesses and calling for the city to appeal the decision.

Meta complies with Brazil’s data protection demands

Meta Platforms, the parent company of Facebook and Instagram, announced on Tuesday that it will inform Brazilian users about how their data is used to train generative AI. The move comes in response to pressure from Brazil’s National Data Protection Authority (ANPD), which had previously suspended Meta’s new privacy policy over concerns about the use of personal data for AI training.

Starting this week, Meta users in Brazil will receive email and social media notifications, providing details on how their data might be used for AI development. Users will also have the option to opt out of this data usage. The ANPD had initially halted Meta’s privacy policy in July, but it lifted the suspension last Friday after Meta agreed to make these disclosures.

In response to the ANPD’s concerns, Meta had also temporarily suspended its generative AI tools in Brazil, including popular AI-generated stickers on WhatsApp, a platform with a significant user base. The suspension remained in place while Meta engaged in discussions with the ANPD to address the agency’s concerns.

Despite the ANPD lifting the suspension, Meta has yet to confirm whether it will immediately reinstate the AI tools in Brazil. When asked, the company reiterated that the suspension was a measure taken during its ongoing talks with the data protection authority.

The development marks an important step in Brazil’s efforts to ensure transparency and user control over personal data in the age of AI.

California passes new bill regulating digital replicas of performers

California’s efforts to regulate the use of digital replicas of performers took a significant step forward with the passage of AB 1836 in the state Senate. The new bill mandates that studios obtain explicit consent from the estates of deceased performers before creating digital replicas for use in films, TV shows, video games, and other media. The move comes just days after the California legislature passed AB 2602, which enforces similar consent requirements for living actors.

SAG-AFTRA, the union representing film and television performers, has strongly advocated for these measures, emphasising the importance of protecting performers’ rights in the digital age. In a statement released after the Senate’s approval of AB 1836, the union described the bill as a ‘legislative priority’ and urged Governor Gavin Newsom to sign it into law. The union’s stance highlights the growing concern over the unauthorised use of digital replicas, particularly as technology makes it increasingly easy to recreate performers’ likenesses long after they have passed away.

If signed into law, AB 1836 would ensure that the estates of deceased performers have control over how their likenesses are used, potentially setting a precedent for other states to follow. However, the bill also raises practical challenges, such as determining who has the authority to grant consent on behalf of the deceased, which could complicate its implementation. The bill reflects a broader push within the entertainment industry to establish clear legal protections against exploiting living and deceased performers in the rapidly evolving digital landscape.

Alongside the AI bill, the passage of AB 1836 underscores California’s role as a leader in entertainment industry legislation, particularly in areas where technology intersects with performers’ rights. As the debate over digital replicas continues, AB 1836 could have far-reaching implications for the industry and for the future of entertainment law.

Delhi High Court directs Google and Microsoft to challenge NCII images removal order

The Delhi High Court has directed Google and Microsoft to file a review petition seeking the recall of an earlier order that required search engines to promptly restrict access to non-consensual intimate images (NCII) without making victims repeatedly provide specific URLs. Both tech giants argued that proactively identifying and taking down NCII is technologically infeasible, even with the assistance of AI tools.

The court’s order stems from a 2023 ruling requiring search engines to remove NCII within 24 hours, as per the IT Rules, 2021, or risk losing their safe harbour protections under Section 79 of the IT Act, 2000. That ruling proposed issuing a unique token upon the initial takedown, with search engines responsible for taking down any resurfaced copies using pre-existing technology, sparing victims the burden of tracking and repeatedly reporting specific URLs. The court also suggested leveraging hash-matching technology and developing a ‘trusted third-party encrypted platform’ where victims could register NCII content or URLs, shifting the responsibility for identifying and removing resurfaced content from victims onto the platforms while ensuring transparency and accountability standards.
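The token-and-hash scheme the court proposed can be sketched as a small registry. This is a minimal illustration under the assumption of exact (byte-identical) matching; `TakedownRegistry` is a hypothetical name, and real systems would rely on perceptual hashes robust to resizing and re-encoding rather than plain SHA-256.

```python
import hashlib
import secrets

class TakedownRegistry:
    """Minimal sketch of the court's proposal: on first takedown, issue a
    unique token tied to the content's hash; later uploads matching a
    registered hash are blocked automatically, so victims need not
    re-report each resurfaced URL."""

    def __init__(self):
        self._hash_to_token = {}

    def register(self, content: bytes) -> str:
        """Record the content's fingerprint and return its takedown token."""
        digest = hashlib.sha256(content).hexdigest()
        return self._hash_to_token.setdefault(digest, secrets.token_hex(8))

    def is_blocked(self, content: bytes) -> bool:
        """Check whether this content matches a previously registered takedown."""
        return hashlib.sha256(content).hexdigest() in self._hash_to_token

registry = TakedownRegistry()
token = registry.register(b"reported-image-bytes")
print(registry.is_blocked(b"reported-image-bytes"))  # True: identical copy is caught
print(registry.is_blocked(b"different-image-bytes"))  # False
```

Exact matching only catches identical files, which is why hash-matching deployments in practice use perceptual fingerprinting so that cropped or re-compressed copies still match.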

However, Google expressed concerns regarding automated tools’ inability to discern consent in shared sexual content, potentially leading to unintended takedowns and infringing on free speech, echoing Microsoft’s apprehension about the implications of proactive monitoring on privacy and freedom of expression.

CJEU: Search engines to dereference allegedly inaccurate content

At the request of the German Federal Court of Justice, the Court of Justice of the European Union (CJEU) has held that search engine operators must dereference content that a user shows to be manifestly inaccurate, in the exercise of their right to be forgotten.

In the case at hand, two managers of a group of investment companies asked Google to dereference search results for their names that led to articles containing allegedly inaccurate claims about the group. They also requested the removal of their photos from the results of an image search based on their names.

The burden of proof lies with the requesting users, who must provide evidence capable of establishing the inaccuracy of the information; such evidence need not stem from a judicial decision. As for the photos, the CJEU stated that search engine operators must conduct a separate balancing of competing rights, taking the informative value of the photos into account without considering the context of their publication on the internet pages from which they are taken.