Studio Ghibli AI trend overwhelms OpenAI

A wave of Studio Ghibli-style image generation has taken social media by storm, thanks to OpenAI’s new image-generation tool, which lets users create art in the beloved animation style. The viral craze began in late March and quickly flooded platforms like TikTok and Instagram.

Initially amused, OpenAI CEO Sam Altman even joined in by updating his profile picture to a Ghibli-inspired version of himself. However, the trend’s popularity soon spiralled out of control, straining the company’s servers and pushing staff to their limits.

Altman has now urged users to ease off, describing the demand as ‘biblical’ and joking that his team needs sleep.

OpenAI plans to introduce temporary usage limits while it works to make the system more efficient. Fans, however, continue to flood Altman’s replies with memes and even more Ghibli art.

OpenAI faces copyright debate over Ghibli-style images

Studio Ghibli-style artwork has gone viral on social media, with users flocking to ChatGPT’s image-generation feature to create or transform images into Japanese anime-inspired versions. Celebrities have also joined the trend, posting Ghibli-style photos of themselves.

However, what began as a fun trend has sparked concerns over copyright infringement and the ethics of AI recreating the styles of established artists without regard for their intellectual property.

OpenAI has made the feature available to premium subscribers, while users without subscriptions can still create up to three images for free.

The rise of this feature has led to debates over whether these AI-generated images violate copyright laws, particularly as the style is closely associated with renowned animator Hayao Miyazaki.

Intellectual property lawyer Evan Brown clarified that an artistic style itself is not explicitly protected by copyright, but he raised concerns that OpenAI’s model may have been trained on Ghibli’s works rather than on independent sources, which could present copyright issues.

OpenAI has responded by taking a more conservative approach with its tools, refusing requests to generate images in the style of individual living artists.

Despite this, the controversy continues, as artists like Karla Ortiz are suing other AI generators for copyright infringement. Ortiz has criticised OpenAI for not valuing the work and livelihoods of artists, calling the Ghibli trend a clear example of such disregard.

EU softens AI copyright rules

The latest draft of the EU AI Act’s Code of Practice offers a more flexible approach to copyright rules, focusing on proportionate compliance based on a provider’s size and capabilities.

However, this change comes as model providers face looming deadlines under the Act.

AI developers must still avoid training on pirated content, respect opt-outs such as robots.txt, and make reasonable efforts to prevent models from reproducing copyrighted material.

They are, however, no longer expected to perform exhaustive copyright checks on every dataset.
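
The robots.txt opt-out mentioned above is a machine-readable convention that crawlers can consult before fetching a page. As a rough illustration only, here is a minimal Python sketch using the standard library’s urllib.robotparser; the site URL and user-agent string are hypothetical:

```python
# Minimal sketch: check a site's robots.txt before fetching a page
# for a training corpus. The URL and user-agent are hypothetical.
from urllib import robotparser

CRAWLER_UA = "example-ai-crawler"  # hypothetical crawler user-agent

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # download and parse the site's robots.txt rules

page = "https://example.com/articles/story.html"
if parser.can_fetch(CRAWLER_UA, page):
    print(f"robots.txt permits fetching {page}")
else:
    print(f"robots.txt opt-out applies; skipping {page}")
```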

With potential fines of up to 15 million euros or 3% of global turnover, the stakes remain high. Still, stakeholders welcome the clearer, more practical path to compliance, with final feedback on the draft due by the end of this month.

Judge rejects UMG’s bid to block Anthropic

A US federal judge has denied a request by Universal Music Group and other publishers to block AI firm Anthropic from using copyrighted song lyrics to train its chatbot, Claude.

Judge Eumi Lee ruled that the publishers failed to prove Anthropic’s actions caused them ‘irreparable harm’ and said their request was too broad. The lawsuit, filed in 2023, accuses Anthropic of using lyrics from at least 500 songs by artists such as Beyoncé and the Rolling Stones without permission.

The case is part of a wider debate over AI training and copyright law, with companies like OpenAI and Meta arguing that their use of copyrighted material falls under ‘fair use.’

Publishers claim that Anthropic’s actions threaten the licensing market for lyrics, but the court ruled that defining such a market is premature while fair use remains unresolved.

Lee’s decision did not address whether AI training with copyrighted works constitutes fair use, leaving that question open for future legal battles.

Anthropic welcomed the ruling, calling the publishers’ request ‘disruptive and amorphous,’ while the publishers remain confident in their broader case against the AI company.

The lawsuit highlights the growing tension between content creators and AI firms as courts and lawmakers grapple with the legal and ethical implications of training AI on copyrighted material.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated licensing books and research papers legally but dismissed the option as too costly and slow.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

OpenAI and Google face lawsuits while advocating for AI copyright exceptions

OpenAI and Google have urged the US government to allow AI models to be trained on copyrighted material under fair use.

The companies submitted feedback to the White House’s ‘AI Action Plan,’ arguing that restrictions could slow AI progress and give countries like China a competitive edge. Google stressed the importance of copyright and privacy exceptions, stating that text and data mining provisions are critical for innovation.

Anthropic also responded to the White House’s request but focused more on AI risks to national security and infrastructure rather than copyright concerns.

Meanwhile, OpenAI and Google are facing multiple lawsuits from news organisations and content creators, including Sarah Silverman and George R.R. Martin, who allege their works were used without permission for AI training.

Other companies, including Apple and Nvidia, have also been accused of improperly using copyrighted material, such as YouTube subtitles, to train AI models.

As legal challenges continue, major tech firms remain committed to pushing for regulations that support AI development while navigating the complexities of intellectual property rights.

Mark Zuckerberg confirms Llama’s soaring popularity

Meta’s open AI model family, Llama, has reached a significant milestone, surpassing 1 billion downloads, according to CEO Mark Zuckerberg. The announcement, made on Threads, highlights a rapid rise in adoption, with downloads increasing by 53% since December 2024. Llama powers Meta’s AI assistant across Facebook, Instagram, and WhatsApp, forming a crucial part of the company’s expanding AI ecosystem.

Despite its success, Llama has not been without controversy. Meta faces a lawsuit alleging the model was trained on copyrighted material without permission, while regulatory concerns have stalled its rollout in some European markets. Additionally, emerging competitors, such as China’s DeepSeek R1, have challenged Llama’s technological edge, prompting Meta to intensify its AI research efforts.

Looking ahead, Meta plans to launch several new Llama models, including those with advanced reasoning and multimodal capabilities. Zuckerberg has hinted at ‘agentic’ features, suggesting the AI could soon perform tasks autonomously. More details are expected at LlamaCon, Meta’s first AI developer conference, set for 29 April.

UK Technology Secretary uses ChatGPT for advice on media and AI

Technology Secretary Peter Kyle has been using ChatGPT to seek advice on media appearances and to define technical terms related to his role.

His records, obtained by New Scientist through freedom of information laws, reveal that he asked the AI tool for recommendations on which podcasts to feature and for explanations of terms like ‘digital inclusion’ and ‘antimatter.’

ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists due to their broad reach and scientific focus.

Kyle also inquired why small and medium-sized businesses in the UK have been slow to adopt AI. The chatbot pointed to factors such as a lack of awareness about government initiatives, funding limitations, and concerns over data protection regulations like GDPR.

While AI adoption remains a challenge, Prime Minister Sir Keir Starmer has praised its potential, arguing that the UK government should embrace AI more to improve efficiency.

Despite Kyle’s enthusiasm for AI, he has faced criticism for allegedly prioritising the interests of Big Tech over Britain’s creative industries. Concerns have been raised over a proposed policy that could allow tech firms to train AI on copyrighted material without permission unless creators opt out.

His department defended his use of AI, stating that while he utilises the tool, it does not replace expert advice from officials.

Meta faces lawsuit in France over copyrighted AI training data

Leading French publishers and authors have filed a lawsuit against Meta, alleging the tech giant used their copyrighted content to train its artificial intelligence systems without permission.

The National Publishing Union (SNE), the National Union of Authors and Composers (SNAC), and the Société des Gens de Lettres (SGDL) argue that Meta’s actions constitute significant copyright infringement and economic ‘parasitism.’ The complaint was lodged earlier this week in a Paris court.

This lawsuit is the first of its kind in France but follows a wave of similar actions in the US, where authors and visual artists are challenging the use of their works by companies like Meta to train AI models.

As the issue of AI-generated content continues to grow, these legal actions highlight the mounting concerns over how tech companies utilise vast amounts of copyrighted material without compensation or consent from creators.

EU draft AI code faces industry pushback

The tech industry remains concerned about a newly released draft of the Code of Practice on General-Purpose Artificial Intelligence (GPAI), which aims to help AI providers comply with the EU’s AI Act.

The proposed rules, which cover transparency, copyright, risk assessment, and mitigation, have sparked significant debate, especially among copyright holders and publishers.

Industry representatives argue that the draft still presents serious issues, particularly regarding copyright obligations and external risk assessments, which they believe could hinder innovation.

Tech lobby groups, such as the CCIA and DOT Europe, have expressed dissatisfaction with the latest draft, highlighting that it continues to impose burdensome requirements beyond the scope of the original AI Act.

Notably, the mandatory third-party risk assessments both before and after deployment remain a point of contention. Despite some improvements in the new version, these provisions are seen as unnecessary and potentially damaging to the industry.

Copyright concerns remain central, with organisations like News Media Europe warning that the draft still fails to respect copyright law. They argue that AI companies should not merely be expected to make ‘best efforts’ to avoid using content without proper authorisation.

Additionally, the draft is criticised for failing to fully address fundamental rights risks, which, according to experts, should be a primary concern for AI model providers.

The draft is open for feedback until 30 March, with the final version expected to be released in May. However, the European Commission’s ability to formalise the Code under the AI Act, which comes into full effect in 2027, remains uncertain.

Meanwhile, the issue of copyright and AI is also being closely examined by the European Parliament.

For more information on these topics, visit diplomacy.edu.