Reddit’s new rules for AI and content use

Reddit has announced updates to its Robots Exclusion Protocol (robots.txt) file, which tells automated bots which parts of a site they may crawl. Traditionally used to let search engines index site content, the protocol now faces challenges from AI-driven scraping for model training, often without proper attribution.
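For context, robots.txt is a plain-text file served at a site's root that lists which user agents may crawl which paths. The directives below are a generic, hypothetical sketch of the pattern involved — not Reddit's actual file:

```
# Served at https://example.com/robots.txt (hypothetical example)

User-agent: Googlebot   # a named, identified crawler
Allow: /                # may crawl the whole site

User-agent: *           # every other bot
Disallow: /             # asked not to crawl anything
```

Compliance with these directives is voluntary — the file only signals a site's wishes — which is why Reddit is pairing it with enforcement measures such as rate limiting and blocking.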

In addition to the revised robots.txt file, Reddit will enforce rate limits and blocks on unidentified bots and crawlers. According to multiple sources, these measures apply to entities that do not comply with Reddit’s Public Content Policy or lack formal agreements with the platform. The changes are aimed at deterring AI companies from using Reddit content to train large language models without permission. Even so, AI crawlers could still disregard Reddit’s directives, as recent incidents have shown.

Recently, Wired uncovered that AI-powered startup Perplexity continued scraping Reddit content despite being blocked in the robots.txt file. Perplexity’s CEO argued that robots.txt isn’t legally binding, raising questions about the effectiveness of such protocols in regulating AI scraping practices.

Reddit’s updates will exempt authorised partners like Google, with whom Reddit has a substantial agreement allowing AI model training on its data. This move signals Reddit’s stance on controlling access to its content for AI training purposes, emphasising compliance with its policies to safeguard user interests.

These developments align with Reddit’s recent policy updates, underscoring its efforts to manage and regulate data access and use by commercial entities and partners.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models using vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

London cinema cancels AI-written film premiere after public backlash

A central London cinema has cancelled the premiere of a film written entirely by AI following a public backlash. The Prince Charles Cinema in Soho was set to host the world debut of ‘The Last Screenwriter,’ created by ChatGPT, but concerns about ‘the use of AI in place of a writer’ led to the screening being axed.

In a statement, the cinema explained that customer feedback highlighted significant concerns regarding AI’s role in the arts. The film, directed by Peter Luisi, was marketed as the first feature film written entirely by AI, and its plot centres on a screenwriter who grapples with an AI scriptwriting system that surpasses his abilities.

The cinema stated that the film was intended as an experiment to spark discussion about AI’s impact on the arts. However, the strong negative response from their audience prompted them to cancel the screening, emphasising their commitment to their patrons and the movie industry.

The controversy over AI’s role in the arts reflects broader industry concerns, as seen in last year’s SAG-AFTRA strike in Hollywood. The debate continues, with UK MPs now calling for measures to ensure fair compensation for artists whose work is used by AI developers.

Award-winning ‘AI’ headless flamingo photo found to be real

A controversial photo of a headless flamingo has ignited a heated debate over the ethical implications of AI in art and technology. The image, which was honoured in the AI category of the 1839 Awards’ Color Photography Contest, has drawn criticism and concern from artists, technologists, and ethicists alike.

The photo, titled ‘F L A M I N G O N E,’ depicts a flamingo without its head. Contrary to initial impressions, it was not generated by an AI model at all: the image is based on a real — and not at all beheaded — flamingo, its head tucked out of view, that photographer Miles Astray captured on the beaches of Aruba two years ago. After the photo won both third place in the category and the People’s Vote award, Astray revealed the truth, leading to his disqualification.

Proponents of AI-generated art assert that such creations push the boundaries of artistic expression, offering new and innovative ways to explore and challenge traditional concepts of art. They argue that AI’s ability to produce unconventional and provocative images can be seen as a form of artistic evolution, allowing for greater diversity and creativity in the art world.

However, detractors highlight the potential risks and ethical dilemmas posed by such technology. The headless flamingo photo, in particular, has been described as unsettling and inappropriate, sparking a broader conversation about the limits of AI-generated content. Concerns have been raised about the potential for AI to produce harmful or distressing images, and the need for guidelines and oversight to ensure responsible use.

The release of the headless flamingo photo has prompted a range of responses from the art and tech communities. Some artists view the image as a provocative statement on the nature of AI and its role in society, while others see it as a troubling example of the technology’s potential to create disturbing content. Tech experts emphasise the importance of developing ethical frameworks and guidelines for AI-generated art. They argue that while AI has the potential to revolutionise creative fields, it is crucial to establish clear boundaries and standards to prevent misuse and ensure that the technology is used responsibly.

‘“F L A M I N G O N E” accomplished its mission by sending a poignant message to a world grappling with ever-advancing, powerful technology and the profusion of fake images it brings. My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that. But, on the other hand, AI imagery has advanced to a point where it’s indistinguishable from real photography. So where does that leave us? What are the implications and the pitfalls of that? I think that is a very important conversation that we need to be having right now,’ Miles Astray told The Washington Post.

Why does it matter?

The controversy surrounding the headless flamingo photo highlights the broader ethical challenges posed by artificial intelligence in creative fields. As AI technology continues to advance, it is increasingly capable of producing highly realistic and complex images. That raises important questions about the role of AI in art, the responsibilities of creators and developers, and the need for ethical guidelines to navigate these new frontiers.

Adobe removes AI imitations after Ansel Adams estate complaint

Adobe faced backlash this weekend after the Ansel Adams estate criticised the company for selling AI-generated imitations of the famous photographer’s work. The estate posted a screenshot on Threads showing ‘Ansel Adams-style’ images on Adobe Stock, stating that Adobe’s actions had pushed them to their limit. Adobe allows AI-generated images on its platform but requires users to have appropriate rights and prohibits content created using prompts with other artists’ names.

In response, Adobe removed the offending content and reached out to the Adams estate, which claimed it had been contacting Adobe since August 2023 without resolution. The estate urged Adobe to respect intellectual property and support the creative community proactively. Adobe Stock’s Vice President, Matthew Smith, noted that moderators review all submissions, and the company can block users who violate rules.

Adobe’s Director of Communications, Bassil Elkadi, confirmed they are in touch with the Adams estate and have taken appropriate steps to address the issue. The Adams estate has thanked Adobe for the removal and expressed hope that the issue is resolved permanently.

Taiwan accuses Chinese firms of illegal operations and talent poaching

Taiwanese authorities have accused Luxshare Precision Industry, a Chinese Apple supplier, of illegally operating in Taiwan and attempting to poach tech talent. The Ministry of Justice Investigation Bureau identified Luxshare as one of eight companies from China engaging in these illegal activities but provided no further details. The crackdown is part of Taiwan’s broader efforts to protect its high-tech industry from Chinese firms trying to steal expertise and talent.

Additionally, the investigation bureau named Zhejiang Dahua Technology, a video surveillance equipment maker blacklisted by the US in 2019 for its role in the treatment of Muslim minorities in Xinjiang. Zhejiang Dahua allegedly set up covert operations in Taiwan and attempted to obscure its activities by listing employees under a different company name. Neither Luxshare nor Zhejiang Dahua has responded to these accusations.

Taiwan, home to semiconductor giant TSMC and a leader in advanced chip manufacturing, views these Chinese efforts as a significant threat to its technological edge. The bureau emphasised its commitment to cracking down on illegal operations and talent poaching, warning that it will enforce the law resolutely. This announcement follows a sweep conducted earlier this month targeting suspected illegal activities by Chinese tech firms.

Senators to introduce NO FAKES Act to regulate AI in music and film industries

US senators are set to introduce a bill in June to regulate AI in the music and movie industries amid rising tensions in Hollywood. The NO FAKES Act, an acronym for Nurture Originals, Foster Art, and Keep Entertainment Safe, aims to prohibit the unauthorised creation of AI-generated replicas of individuals’ likenesses or voices.

Senator Chris Coons (D-Del.) is leading the bipartisan effort with Senators Amy Klobuchar (D-Minn.), Marsha Blackburn (R-Tenn.), and Thom Tillis (R-N.C.). They are working with artists in the recording and movie industries on the bill’s details.

Musicians, in particular, are increasingly worried about the lack of protection for their names, likenesses, and voices from being used in AI-generated songs. During the Grammys on the Hill lobbying event, Sheryl Crow noted the urgency of establishing guidelines and safeguards considering the unsettling trend of artists’ voices being used without consent, even posthumously.

However, before considering a national AI bill, senators will need to address several issues, including whether the law will override existing state laws like Tennessee’s ELVIS Act, and how long licensing restrictions and postmortem rights for an artist’s digital replica should last.

As Senate discussions continue, the Recording Academy has supported the bill. Meanwhile, the movie industry also backs the regulation but has raised concerns about potential First Amendment infringements. A similar bill, the No AI Fraud Act, is being considered in the House. Senate Majority Leader Chuck Schumer is also pushing for AI legislation that respects First Amendment principles.

Why does it matter?

Concerns about AI’s impact on the entertainment industry escalated after a dispute between Scarlett Johansson and OpenAI. Johansson accused OpenAI of using an ‘eerily similar’ voice to hers for a new chatbot without her permission. Singers Ariana Grande and Lainey Wilson have also had their voices mimicked without consent. Last year, an anonymous artist released ‘Heart on My Sleeve,’ falsely impersonating Drake and The Weeknd, raising alarm bells across the industry.

OpenAI’s use of Scarlett Johansson’s voice faces Hollywood backlash

OpenAI’s use of Scarlett Johansson’s voice likeness in its ChatGPT chatbot has ignited controversy in Hollywood, with Johansson accusing the company of copying her performance from the movie ‘Her’ without consent. The dispute has intensified concerns among entertainment executives about the implications of AI technology for the creative industry, particularly regarding copyright infringement and the right to publicity.

Despite OpenAI’s claims that the voice in question was not intended to resemble Johansson’s, the incident has strained relations between content creators and tech companies. Some industry insiders view OpenAI’s actions as disrespectful and indicative of hubris, potentially hindering future collaborations between Hollywood and the tech giant.

The conflict with Johansson highlights broader concerns about using copyrighted material in OpenAI’s models and the need to protect performers’ rights. While some technologists see AI as a valuable tool for enhancing filmmaking processes, others worry about its potential misuse and infringement on intellectual property.

Johansson’s case could set a precedent for performers seeking to protect their voice and likeness rights in the age of AI. Legal experts and industry figures advocate for federal legislation to safeguard performers’ rights and address the growing impact of AI-generated content, signalling a broader dialogue about the need for regulatory measures in this evolving landscape.

Uganda minister urges stronger digital regulations for cultural diversity and artists’ rights

During World Culture Day in Kampala, Minister of State for Gender and Culture, Peace Mutuuzo, highlighted the urgent need for stronger regulation of digital platforms to protect cultural diversity, safeguard artists’ intellectual property, and ensure fair access to content. She noted the concern about the dominance of digital platforms in cultural content distribution, which poses challenges for artists in protecting their intellectual property and securing fair compensation.

This year’s World Culture Day, themed “Digital Transformation of the Culture and Creative Industries: Packaging Art and Culture as a National Public Good,” calls for updated legal structures to support digital transformation while ensuring accessibility and benefits for all.

Mutuuzo stressed that the government remains committed to strengthening the culture and creative industry through new and existing policies and legal frameworks. As part of this effort, the commemorative day aims to raise public awareness about culture’s role in development, deepen understanding of cultural diversity, and encourage appreciation of Uganda’s heritage, as guaranteed by its Constitution. It also seeks to advance the goals of the UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions, including sustainable governance for culture, balanced cultural exchanges, increased mobility for artists, the integration of culture into development, and the promotion of human rights.

Why does it matter?

In this context, the music industry, in particular, faces significant challenges with the growth of digital platforms, and Uganda is far from alone in these concerns. The rapid rise of AI-generated content, exemplified by the release of a song mimicking Drake and The Weeknd, underscored the need for the music industry to adapt to technological advancements. Earlier this year, the EU proposed changes to the music streaming industry to promote smaller artists and ensure fair compensation by addressing inadequate royalties and biased algorithms. Meanwhile, the Online Streaming Act has introduced new regulations for digital distributors and media in Canada, potentially including new CanCon requirements.

Scarlett Johansson slams OpenAI for voice likeness

Scarlett Johansson has accused OpenAI of creating a voice for its ChatGPT system that sounds ‘eerily similar’ to hers, despite her declining an offer to voice the chatbot herself. Johansson’s statement, released Monday, followed OpenAI’s announcement that it would withdraw the voice known as ‘Sky’.

OpenAI CEO Sam Altman clarified that Sky’s voice was performed by a different professional actress and was never meant to imitate Johansson’s. He expressed regret for not communicating better and paused the use of Sky’s voice out of respect for Johansson.

Johansson revealed that Altman had approached her last September with an offer to voice a ChatGPT feature, which she turned down. She stated that the resemblance of Sky’s voice to her own shocked and angered her, noting that even her friends and the public found the similarity striking. The actress suggested that Altman might have intentionally chosen a voice resembling hers, referencing his tweet about ‘Her’, a film where Johansson voices an AI assistant.

Why does it matter?

The controversy highlights a growing issue in Hollywood concerning the use of AI to replicate actors’ voices and likenesses. Johansson’s concerns reflect broader industry anxieties as AI technology advances, making computer-generated voices and images increasingly indistinguishable from human ones. She has hired legal counsel to investigate the creation process of Sky’s voice.

OpenAI recently introduced its latest AI model, GPT-4o, featuring audio capabilities that enable users to converse with the chatbot in real-time, showcasing a leap forward in creating more lifelike AI interactions. Scarlett Johansson’s accusations underline the ongoing challenges and ethical considerations of using AI in entertainment.