Time magazine partners with OpenAI for content access

Time magazine has entered into a multi-year agreement with OpenAI, granting the AI firm access to its news archives. The deal allows OpenAI’s ChatGPT to cite and link back to Time.com in responses to user queries, although financial details were not disclosed. OpenAI, led by Sam Altman, has forged similar partnerships with prominent media outlets such as the Financial Times, Axel Springer, Le Monde, and Prisa Media.

These collaborations help train and enhance OpenAI’s products while giving media companies access to AI technology for developing new products. Although some media companies are suing OpenAI over content usage, such partnerships are crucial for training AI models and offer a potential revenue stream for news publishers. The trend comes amid broader industry tensions, highlighted by Meta’s decision to block news sharing in Canada following new legislation requiring payment for news content.

Why does it matter?

The OpenAI-Time deal is part of a larger movement where publishers seek fair compensation for their content amid the rise of generative AI, which has prompted discussions on ethical content usage and compliance with web standards.

AI-generated Elon Musk hijacks Channel Seven’s YouTube

Channel Seven is investigating a significant breach of its YouTube channel, where unauthorised content featuring an AI-generated deepfake of Elon Musk was streamed repeatedly. In the incident on Thursday, the channel was altered to mimic Tesla’s official presence. Viewers were shown a fabricated live stream in which the AI-generated Musk promoted cryptocurrency investments via a QR code, claiming viewers could double their assets.

During the stream, the fake Musk engaged with the audience, urging them to take advantage of the purported investment opportunity. The footage also featured a chat box from the fake Tesla page, displaying comments and links that further promoted the fraudulent scheme. The incident affected several other channels under Channel Seven’s umbrella, including 7 News and Spotlight, with all content subsequently deleted from these platforms.

A spokesperson from Channel Seven acknowledged the issue, confirming they are investigating alongside YouTube to resolve the situation swiftly. The network’s main YouTube page appeared inaccessible following the breach, prompting the investigation into how the security lapse occurred. The incident comes amidst broader challenges for Seven West Media, which recently announced significant job cuts as part of a cost-saving initiative led by its new CEO.

Why does it matter?

The breach underscores growing concerns over cybersecurity on social media platforms, particularly as unauthorised access to high-profile channels can disseminate misleading or harmful information. Channel Seven’s efforts to address the issue highlight the importance of robust digital security measures in safeguarding against such incidents in the future.

YouTube seeks music licensing deals for AI generation tools

YouTube is negotiating with major record labels to license their songs for AI tools that clone popular artists’ music. The negotiations aim to secure the content needed to legally train AI song generators and launch new tools this year. Google-owned YouTube has offered upfront payments to major labels like Sony, Warner, and Universal to encourage artists to participate, but many remain opposed, fearing it could devalue their work.

Previously, YouTube tested an AI tool called ‘Dream Track,’ which allowed users to create music clips mimicking well-known artists. However, only a few artists participated, including Charli XCX and John Legend. YouTube now hopes to sign up dozens more artists to expand its AI song generator tool, though it won’t carry the Dream Track brand.

Why does it matter?

These negotiations come as AI companies like OpenAI are making licensing agreements with media groups. The proposed music deals would involve one-off payments to labels rather than royalty-based arrangements. YouTube’s AI tools could become part of its Shorts platform, competing with TikTok and other similar platforms. As these discussions continue, major labels are also suing AI startups for allegedly using copyrighted recordings without permission, seeking significant damages.

Reddit’s new rules for AI and content use

Reddit has announced updates to its robots.txt file, which implements the Robots Exclusion Protocol and regulates automated web bot access to the site. Traditionally used to tell search engines which content they may index, the protocol now faces challenges from AI-driven scraping for model training, often without proper attribution.
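
To illustrate how the protocol works in practice, here is a minimal Python sketch, using the standard library’s urllib.robotparser, of how a well-behaved crawler consults a site’s robots.txt before fetching a page (the ‘ExampleBot’ user agent is a placeholder, not a real crawler):

```python
# Minimal sketch: a compliant crawler checks robots.txt before fetching.
# "ExampleBot" is an illustrative user-agent string, not an actual crawler.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://www.reddit.com/robots.txt")
parser.read()  # download and parse the site's live directives

# can_fetch() answers: may this user agent request this URL?
allowed = parser.can_fetch("ExampleBot", "https://www.reddit.com/r/all/")
print("Fetch allowed:", allowed)
```

The parser only reports what the file requests; enforcement happens server-side, which is where the rate limits and blocks described below come in.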

In addition to the revised robots.txt file, Reddit will enforce rate limits and blocks on unidentified bots and crawlers. According to multiple sources, these measures apply to entities not complying with Reddit’s Public Content Policy or lacking formal agreements with the platform. The changes are aimed at deterring AI companies from using Reddit content to train large language models without permission. Despite these updates, AI crawlers could potentially disregard Reddit’s directives, as highlighted by recent incidents.

Recently, Wired uncovered that AI-powered startup Perplexity continued scraping Reddit content despite being blocked in the robots.txt file. Perplexity’s CEO argued that robots.txt isn’t legally binding, raising questions about the effectiveness of such protocols in regulating AI scraping practices.

Reddit’s updates will exempt authorised partners like Google, with whom Reddit has a substantial agreement allowing AI model training on its data. This move signals Reddit’s stance on controlling access to its content for AI training purposes, emphasising compliance with its policies to safeguard user interests.

These developments align with Reddit’s recent policy updates, underscoring its efforts to manage and regulate data access and use by commercial entities and partners.

Industry leaders unite for ethical AI data practices

Several companies that license music, images, videos, and other datasets for training AI systems have formed the first trade group in the sector, the Dataset Providers Alliance (DPA). The founding members of the DPA include Rightsify, vAIsual, Pixta, and Datarade. The group aims to advocate for ethical data sourcing, including protecting intellectual property rights and ensuring rights for individuals depicted in datasets.

The rise of generative AI technologies has led to backlash from content creators and numerous copyright lawsuits against major tech companies like Google, Meta, and OpenAI. Developers often train AI models using vast amounts of content, much of which is scraped from the internet without permission. To address these issues, the DPA will establish ethical standards for data transactions, ensuring that members do not sell data obtained without explicit consent. The alliance will also push for legislative measures such as the NO FAKES Act, which would penalise unauthorised digital replicas of voices or likenesses, and will support transparency requirements for AI training data.

The DPA plans to release a white paper in July outlining its positions and advocating for these standards and legislative changes to ensure ethical practices in AI data sourcing and usage.

London cinema cancels AI-written film premiere after public backlash

A central London cinema has cancelled the premiere of a film written entirely by AI following a public backlash. The Prince Charles Cinema in Soho was set to host the world debut of ‘The Last Screenwriter,’ created by ChatGPT, but concerns about ‘the use of AI in place of a writer’ led to the screening being axed.

In a statement, the cinema explained that customer feedback highlighted significant concerns regarding AI’s role in the arts. The film, directed by Peter Luisi, was marketed as the first feature film written entirely by AI, and its plot centres on a screenwriter who grapples with an AI scriptwriting system that surpasses his abilities.

The cinema stated that the film was intended as an experiment to spark discussion about AI’s impact on the arts. However, the strong negative response from their audience prompted them to cancel the screening, emphasising their commitment to their patrons and the movie industry.

The controversy over AI’s role in the arts reflects broader industry concerns, as seen in last year’s SAG-AFTRA strike in Hollywood. The debate continues, with UK MPs now calling for measures to ensure fair compensation for artists whose work is used by AI developers.

Award-winning ‘AI’ photo of headless flamingo found to be real

A controversial photo of a headless flamingo has ignited a heated debate over the ethical implications of AI in art and technology. The image, which was honored in the AI category of the 1839 Awards’ Color Photography Contest, has drawn criticism and concern from various sectors, including artists, technologists, and ethicists.

The photo, titled ‘F L A M I N G O N E,’ depicts a flamingo that appears to have no head. Contrary to initial impressions, it was not generated from a text prompt by a sophisticated AI model; it shows a real — and not at all beheaded — flamingo that photographer Miles Astray captured on the beaches of Aruba two years ago. After the photo won both third place in the category and the People’s Vote award, Astray revealed the truth, leading to his disqualification.

Proponents of AI-generated art assert that such creations push the boundaries of artistic expression, offering new and innovative ways to explore and challenge traditional concepts of art. They argue that an AI’s ability to produce unconventional and provocative images can be seen as a form of artistic evolution, allowing for greater diversity and creativity in the art world. Detractors, however, highlight the potential risks and ethical dilemmas posed by the technology. The headless flamingo photo, in particular, has been described as unsettling and inappropriate, sparking a broader conversation about the limits of AI-generated content. Concerns have been raised about the potential for AI to produce harmful or distressing images and about the need for guidelines and oversight to ensure responsible use.

The release of the headless flamingo photo has prompted a range of responses from the art and tech communities. Some artists view the image as a provocative statement on the nature of AI and its role in society, while others see it as a troubling example of the technology’s potential to create disturbing content. Tech experts emphasise the importance of developing ethical frameworks and guidelines for AI-generated art. They argue that while AI has the potential to revolutionize creative fields, it is crucial to establish clear boundaries and standards to prevent misuse and ensure that the technology is used responsibly.

“‘F L A M I N G O N E’ accomplished its mission by sending a poignant message to a world grappling with ever-advancing, powerful technology and the profusion of fake images it brings. My goal was to show that nature is just so fantastic and creative, and I don’t think any machine can beat that. But, on the other hand, AI imagery has advanced to a point where it’s indistinguishable from real photography. So where does that leave us? What are the implications and the pitfalls of that? I think that is a very important conversation that we need to be having right now,” Miles Astray told The Washington Post.

Why does it matter?

The controversy surrounding the headless flamingo photo highlights the broader ethical challenges posed by artificial intelligence in creative fields. As AI technology continues to advance, it is increasingly capable of producing highly realistic and complex images. That raises important questions about the role of AI in art, the responsibilities of creators and developers, and the need for ethical guidelines to navigate these new frontiers.

Adobe removes AI imitations after Ansel Adams estate complaint

Adobe faced backlash this weekend after the Ansel Adams estate criticised the company for selling AI-generated imitations of the famous photographer’s work. The estate posted a screenshot on Threads showing ‘Ansel Adams-style’ images on Adobe Stock, stating that Adobe’s actions had pushed them to their limit. Adobe allows AI-generated images on its platform but requires users to have appropriate rights and prohibits content created using prompts with other artists’ names.

In response, Adobe removed the offending content and reached out to the Adams estate, which claimed it had been contacting Adobe since August 2023 without resolution. The estate urged Adobe to respect intellectual property and support the creative community proactively. Adobe Stock’s Vice President, Matthew Smith, noted that moderators review all submissions, and the company can block users who violate rules.

Adobe’s Director of Communications, Bassil Elkadi, confirmed they are in touch with the Adams estate and have taken appropriate steps to address the issue. The Adams estate has thanked Adobe for the removal and expressed hope that the issue is resolved permanently.

Taiwan accuses Chinese firms of illegal operations and talent poaching

Taiwanese authorities have accused Luxshare Precision Industry, a Chinese Apple supplier, of illegally operating in Taiwan and attempting to poach tech talent. The Ministry of Justice Investigation Bureau identified Luxshare as one of eight companies from China engaging in these illegal activities but provided no further details. The crackdown is part of Taiwan’s broader efforts to protect its high-tech industry from Chinese firms trying to steal expertise and talent.

Additionally, the investigation bureau named Zhejiang Dahua Technology, a video surveillance equipment maker blacklisted by the US in 2019 for its role in the treatment of Muslim minorities in Xinjiang. Zhejiang Dahua allegedly set up covert operations in Taiwan and attempted to obscure its activities by listing employees under a different company name. Neither Luxshare nor Zhejiang Dahua has responded to the accusations.

Taiwan, home to semiconductor giant TSMC and a leader in advanced chip manufacturing, views these Chinese efforts as a significant threat to its technological edge. The bureau emphasised its commitment to cracking down on illegal operations and talent poaching, warning that it will enforce the law resolutely. The announcement follows a sweep conducted earlier this month targeting suspected illegal activities by Chinese tech firms.

Senators to introduce NO FAKES Act to regulate AI in the music and film industries

US senators are set to introduce a bill in June to regulate AI in the music and movie industries amid rising tensions in Hollywood. The NO FAKES Act, an acronym for Nurture Originals, Foster Art, and Keep Entertainment Safe, aims to prohibit the unauthorised creation of AI-generated replicas of individuals’ likenesses or voices.

Senator Chris Coons (D-Del.) is leading the bipartisan effort with Senators Amy Klobuchar (D-Minn.), Marsha Blackburn (R-Tenn.), and Thom Tillis (R-N.C.). They are working with artists in the recording and movie industries on the bill’s details.

Musicians, in particular, are increasingly worried about the lack of protection for their names, likenesses, and voices from being used in AI-generated songs. During the Grammys on the Hill lobbying event, Sheryl Crow stressed the urgency of establishing guidelines and safeguards, given the unsettling trend of artists’ voices being used without consent, even posthumously.

However, before considering a national AI bill, senators will need to address several issues, including whether the law would override existing state laws like Tennessee’s ELVIS Act, and how long licensing restrictions and postmortem rights to an artist’s digital replica should last.

As Senate discussions continue, the Recording Academy has voiced its support for the bill. Meanwhile, the movie industry also backs the regulation but has raised concerns about potential First Amendment infringements. A similar bill, the No AI Fraud Act, is being considered in the House. Senate Majority Leader Chuck Schumer is also pushing for AI legislation that respects First Amendment principles.

Why does it matter?

Concerns about AI’s impact on the entertainment industry escalated after a dispute between Scarlett Johansson and OpenAI. Johansson accused OpenAI of using an ‘eerily similar’ voice to hers for a new chatbot without her permission. Singers Ariana Grande and Lainey Wilson have also had their voices mimicked without consent. Last year, an anonymous artist released ‘Heart on my Sleeve,’ a song impersonating Drake and The Weeknd, raising alarm bells across the industry.