AI technology sparks debate in Hollywood

Hollywood is grappling with AI’s increasing role in filmmaking, with executives, actors, and developers exploring the technology’s potential. At a recent event, industry leaders discussed AI-generated video, heralded as the biggest breakthrough since the advent of sound in cinema.

Despite its growing presence, AI’s impact remains controversial, especially after recent strikes by actors and writers seeking protection from AI exploitation.

AI technology is making its way into movies and TV shows, with Oscar-nominated films like Emilia Pérez and The Brutalist using AI for voice alterations and actor de-aging. AI’s capacity to generate scripts, animation, and even actors has led to fears of job displacement, particularly for background actors.

However, proponents like Bryn Mooser of Moonvalley argue that AI can empower filmmakers, especially independent creators, to produce high-quality content at a fraction of traditional costs.

While Hollywood is still divided on AI’s potential, several tech companies, including OpenAI and Google, are lobbying for AI models to access copyrighted art to fuel their development, claiming it’s vital for national security.

The push has met resistance from filmmakers who fear it could undermine the creative industry, which provides millions of jobs. Despite the opposition, AI’s role in filmmaking is rapidly expanding, and its future remains uncertain.

Some in the industry believe AI, if used correctly, can enhance creativity by allowing filmmakers to create worlds and narratives beyond their imagination. However, there is a push to ensure that artists remain central to this transformation, and that AI’s role in cinema respects creators’ rights and protections.

As AI technology evolves, Hollywood faces a critical choice: embrace it responsibly or risk being overtaken by powerful tech companies.

For more information on these topics, visit diplomacy.edu.

Runway expands AI video capabilities with Gen-4

Runway has unveiled Gen-4, its most advanced AI-powered video generator yet, promising superior character consistency, realistic motion, and world understanding.

The model is now available to individual and enterprise users, allowing them to generate dynamic videos using visual references and text-based instructions.

Backed by investors such as Google and Nvidia, Runway faces fierce competition from OpenAI and Google in the AI video space. The company has differentiated itself by securing Hollywood partnerships and investing heavily in AI-generated filmmaking.

However, it remains tight-lipped about its training data, raising concerns over copyright issues.

Runway is currently embroiled in a lawsuit from artists accusing the company of training its models on copyrighted works without permission. The company claims fair use as a defence.

Meanwhile, it is reportedly seeking new funding at a $4 billion valuation, with hopes of reaching $300 million in annual revenue. As AI video tools advance, concerns grow over their impact on jobs in the entertainment industry, with thousands of positions at risk.

OpenAI expands image generator access to all users

OpenAI has made its image generator, powered by the GPT-4o model, accessible to all users, CEO Sam Altman announced on X. Previously, this feature was available only to paying ChatGPT subscribers.

While there is no clear indication of how many images free-tier users can create, Altman previously mentioned a possible limit of three per day.

The tool has seen massive demand since its launch, with Altman joking that OpenAI’s GPUs were ‘melting’ under the pressure. However, it has also sparked controversy, particularly after users began generating images in the style of Studio Ghibli, raising copyright concerns.

Others have used the generator to create fake receipts, such as restaurant bills. OpenAI has responded by stating that all AI-generated images contain metadata identifying them and that the company takes action when violations occur.

In a major financial development, OpenAI has secured $40 billion in funding from SoftBank, valuing the company at $300 billion. The company also revealed that ChatGPT now boasts 500 million weekly active users and 700 million monthly active users, marking a significant milestone in its growth.

Studio Ghibli AI trend overwhelms OpenAI

A wave of Studio Ghibli-style image generation has taken social media by storm, thanks to OpenAI’s new tool that lets users create art in the beloved animation style. The viral craze began in late March and quickly flooded platforms like TikTok and Instagram.

Initially amused, OpenAI CEO Sam Altman even joined in by updating his profile picture to a Ghibli-inspired version of himself. However, the trend’s popularity soon spiralled out of control, straining the company’s servers and pushing staff to their limits.

Altman has now urged users to ease off, describing the demand as ‘biblical’ and joking that his team needs sleep.

OpenAI plans to introduce temporary usage limits while it works to make the system more efficient. Fans, however, continue to flood Altman’s replies with memes and even more Ghibli art.

OpenAI faces copyright debate over Ghibli-style images

Studio Ghibli-style artwork has gone viral on social media, with users flocking to ChatGPT’s feature to create or transform images into Japanese anime-inspired versions. Celebrities have also joined the trend, posting Ghibli-style photos of themselves.

However, what began as a fun trend has sparked concerns over copyright infringement and the ethics of AI recreating the work of established artists without regard for their intellectual property.

OpenAI has made Ghibli-style image generation available to premium users, while users without subscriptions can still create up to three images for free.

The rise of this feature has led to debates over whether these AI-generated images violate copyright laws, particularly as the style is closely associated with renowned animator Hayao Miyazaki.

Intellectual property lawyer Evan Brown clarified that the style itself isn’t explicitly protected, but he raised concerns that OpenAI’s model may have been trained on Ghibli’s works themselves rather than on independent sources, which could present copyright issues.

OpenAI has responded by taking a more conservative approach with its tools, introducing a refusal mechanism when users attempt to generate images in the style of living artists.

Despite this, the controversy continues, as artists like Karla Ortiz are suing other AI generators for copyright infringement. Ortiz has criticised OpenAI for not valuing the work and livelihoods of artists, calling the Ghibli trend a clear example of such disregard.

EU softens AI copyright rules

The latest draft of the EU AI Act’s Code of Practice offers a more flexible approach to copyright rules, focusing on proportionate compliance based on a provider’s size and capabilities.

However, this change comes as model providers face looming deadlines under the Act.

AI developers must still avoid training on pirated content, respect opt-outs such as robots.txt, and make reasonable efforts to prevent models from reproducing copyrighted material.
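The robots.txt opt-out mentioned above can be honoured programmatically. Below is a minimal sketch using Python’s standard-library `urllib.robotparser`; the bot name `ExampleAIBot` and the rules shown are hypothetical, purely to illustrate how a training crawler could check an opt-out before fetching a page.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a publisher that opts out of an
# AI-training crawler while still allowing general-purpose bots.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# The AI-training crawler identifying itself as ExampleAIBot must skip the site...
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False
# ...while an ordinary crawler may proceed.
print(parser.can_fetch("OtherBot", "https://example.com/articles/1"))      # True
```

In practice a crawler would load the live file with `parser.set_url(...)` and `parser.read()` instead of parsing a hard-coded string.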

However, they are no longer expected to perform exhaustive copyright checks on every dataset.

With potential fines of up to 15 million euros or 3% of global turnover, stakes remain high. Still, stakeholders welcome the clearer, more practical path to compliance, with final feedback on the draft due by the end of this month.
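For scale, the penalty ceiling described above can be made concrete. This is a sketch assuming the “whichever is higher” rule the Act applies to such fines; the turnover figures are purely illustrative.

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Ceiling on the fine described above: the greater of a flat
    EUR 15 million or 3% of worldwide annual turnover."""
    return max(15_000_000.0, 0.03 * global_turnover_eur)

# Illustrative turnover of EUR 2 billion: the 3% prong dominates.
print(max_fine_eur(2_000_000_000))  # 60000000.0
# For a smaller provider with EUR 100 million turnover, the flat cap applies.
print(max_fine_eur(100_000_000))    # 15000000.0
```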

Judge rejects UMG’s bid to block Anthropic

A US federal judge has denied a request by Universal Music Group and other publishers to block AI firm Anthropic from using copyrighted song lyrics to train its chatbot, Claude.

Judge Eumi Lee ruled that the publishers failed to prove Anthropic’s actions caused them ‘irreparable harm’ and said their request was too broad. The lawsuit, filed in 2023, accuses Anthropic of infringing on lyrics from at least 500 songs by artists such as Beyoncé and the Rolling Stones without permission.

The case is part of a wider debate over AI training and copyright law, with companies like OpenAI and Meta arguing that their use of copyrighted material falls under ‘fair use.’

Publishers claim that Anthropic’s actions threaten the licensing market for lyrics, but the court ruled that defining such a market is premature while fair use remains unresolved.

Lee’s decision did not address whether AI training with copyrighted works constitutes fair use, leaving that question open for future legal battles.

Anthropic welcomed the ruling, calling the publishers’ request ‘disruptive and amorphous,’ while the publishers remain confident in their broader case against the AI company.

The lawsuit highlights the growing tension between content creators and AI firms as courts and lawmakers grapple with the legal and ethical implications of training AI on copyrighted material.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated legally licensing books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

OpenAI and Google face lawsuits while advocating for AI copyright exceptions

OpenAI and Google have urged the US government to allow AI models to be trained on copyrighted material under fair use.

The companies submitted feedback to the White House’s ‘AI Action Plan,’ arguing that restrictions could slow AI progress and give countries like China a competitive edge. Google stressed the importance of copyright and privacy exceptions, stating that text and data mining provisions are critical for innovation.

Anthropic also responded to the White House’s request but focused more on AI risks to national security and infrastructure rather than copyright concerns.

Meanwhile, OpenAI and Google are facing multiple lawsuits from news organisations and content creators, including Sarah Silverman and George R.R. Martin, who allege their works were used without permission for AI training.

Other companies, including Apple and Nvidia, have also been accused of improperly using copyrighted material, such as YouTube subtitles, to train AI models.

As legal challenges continue, major tech firms remain committed to pushing for regulations that support AI development while navigating the complexities of intellectual property rights.

Mark Zuckerberg confirms Llama’s soaring popularity

Meta’s open AI model family, Llama, has reached a significant milestone, surpassing 1 billion downloads, according to CEO Mark Zuckerberg. The announcement, made on Threads, highlights a rapid rise in adoption, with downloads increasing by 53% since December 2024. Llama powers Meta’s AI assistant across Facebook, Instagram, and WhatsApp, forming a crucial part of the company’s expanding AI ecosystem.

Despite its success, Llama has not been without controversy. Meta faces a lawsuit alleging the model was trained on copyrighted material without permission, while regulatory concerns have stalled its rollout in some European markets. Additionally, emerging competitors, such as China’s DeepSeek R1, have challenged Llama’s technological edge, prompting Meta to intensify its AI research efforts.

Looking ahead, Meta plans to launch several new Llama models, including those with advanced reasoning and multimodal capabilities. Zuckerberg has hinted at ‘agentic’ features, suggesting the AI could soon perform tasks autonomously. More details are expected at LlamaCon, Meta’s first AI developer conference, set for 29 April.
