Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, the new policy mandates that creators disclose when realistic content is produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, a distinction that advanced generative AI tools have made increasingly difficult to draw.
Not all AI usage requires labelling; exemptions cover animated content, videos with obvious visual effects, and minor AI production assistance. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo's own AI tools, such as those that edit out long pauses, will also prompt labelling.
Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.
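Vimeo has not published an API for these labels, so the following is a purely hypothetical sketch of what the audio/visuals/both disclosure could look like against Vimeo's existing video-edit endpoint; the ai_disclosure field and its shape are invented here for illustration only.

```python
# pip install requests
import requests

ACCESS_TOKEN = "YOUR_TOKEN"   # placeholder
VIDEO_ID = "123456789"        # placeholder

# PATCH /videos/{id} is Vimeo's documented video-edit endpoint, but the
# "ai_disclosure" field below is invented purely to illustrate the
# audio/visuals/both choice described above.
resp = requests.patch(
    f"https://api.vimeo.com/videos/{VIDEO_ID}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"ai_disclosure": {"audio": True, "visuals": False}},
)
resp.raise_for_status()
```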
A recent survey by Tracklib reveals that 25% of music producers are now integrating AI into their creative processes, marking significant adoption of the technology within the industry. However, the majority remain resistant to AI, citing the loss of creative control as the primary barrier.
Among those using AI, the survey found that most employ it for stem separation (73.9%) rather than full song creation, which is used by only a small fraction (3%). Concerns among non-users primarily revolve around artistic integrity (82.2%) and doubts about AI’s ability to maintain quality (34.5%), with additional concerns including cost and copyright issues.
Interestingly, the survey highlights a stark divide between perceptions of assistive AI, which aids in music creation, and generative AI, which directly generates elements or entire songs. While some producers hold a positive view of assistive AI, generative AI faces stronger opposition, especially among younger respondents.
Overall, the survey underscores a cautious optimism about AI’s future impact on music production, with 70% of respondents expecting it to have a significant influence going forward. Despite current reservations, Tracklib predicts continued adoption of music AI, noting it is entering the “early majority” phase of adoption according to technology adoption models.
YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.
Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
Good news creators: our updated Erase Song tool helps you easily remove copyright-claimed music from your video (while leaving the rest of your audio intact). Learn more… https://t.co/KeWIw3RFeH
YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but had struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, giving creators the choice of muting all sound in the flagged segment or erasing only the music. Advancements like this are part of YouTube's broader efforts to leverage AI to enhance the user experience and compliance with copyright law.
In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.
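YouTube has not disclosed how the eraser tool's algorithm works, but the task it performs is essentially the stem separation that the Tracklib survey above found to be producers' most common AI use. A rough sketch of the same idea using the open-source Spleeter library (a minimal approximation; YouTube's production model is proprietary):

```python
# pip install spleeter
from spleeter.separator import Separator

# Split a video's soundtrack into two stems: "vocals" (speech/singing)
# and "accompaniment" (music). Dropping the accompaniment stem is a
# crude approximation of erasing a claimed song while keeping speech.
separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav into the output directory.
separator.separate_to_file("soundtrack.mp3", "output/")
```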
Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.
This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, came across deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.
Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me. Your dedication helps authenticity and integrity remain paramount. Grateful. #AI #scam #imitation #IdentityProtection
Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.
Although Judy Garland never recorded herself reading ‘The Wonderful Wizard of Oz,’ fans will soon be able to hear her rendition thanks to a new app by ElevenLabs. The AI company has launched the Reader app, which can convert text into voice-overs using digitally produced voices of deceased celebrities, including Garland, James Dean, and Burt Reynolds. The app can transform articles, e-books, and other text formats into audio.
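The Reader app itself is a consumer product, but ElevenLabs exposes the same text-to-speech capability through its public API. A minimal sketch, assuming the current Python SDK (method and parameter names may differ between SDK versions, and the voice ID below is a placeholder, not one of the celebrity voices):

```python
# pip install elevenlabs
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key="YOUR_API_KEY")  # placeholder key

# Convert a passage of text into narrated audio. The voice_id is a
# placeholder; the celebrity voices are licensed for the Reader app,
# not generally available through the API.
audio = client.text_to_speech.convert(
    voice_id="EXAMPLE_VOICE_ID",
    model_id="eleven_multilingual_v2",
    text="Dorothy lived in the midst of the great Kansas prairies.",
)

# The SDK streams the result back as byte chunks.
with open("voiceover.mp3", "wb") as f:
    for chunk in audio:
        f.write(chunk)
```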
Dustin Blank, head of partnerships at ElevenLabs, emphasised the company’s respect for the legacies of these celebrities. The company has made agreements with the actors’ estates, though compensation details remain undisclosed. The initiative highlights AI’s potential in Hollywood, especially for creating content with synthetic voices, but it also raises important questions about the licensing and ethical use of AI.
The use of AI-generated celebrity voices comes amid growing concerns about authenticity and copyright in creative industries. ElevenLabs had previously faced scrutiny when its tool was reportedly used to create a fake robocall from President Joe Biden. Similar controversies have arisen, such as OpenAI’s introduction of a voice similar to Scarlett Johansson’s, which she publicly criticised.
As AI technology advances, media companies are increasingly utilising it for voiceovers. NBC recently announced the use of an AI version of sportscaster Al Michaels for Olympics recaps on its Peacock streaming platform, with Michaels receiving compensation. While the market for AI-generated voices remains uncertain, the demand for audiobooks narrated by recognisable voices suggests a promising future for this technology.
The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.
The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.
China’s resolution acknowledges the UN’s role in AI capacity building and calls on Secretary-General António Guterres to report on the unique challenges developing countries face and to provide recommendations for addressing them.
The Center for Investigative Reporting (CIR), known for producing Mother Jones and Reveal, has sued OpenAI and Microsoft, accusing them of using its content without permission and compensation. The lawsuit, filed in New York federal court, claims that OpenAI’s business model is based on exploiting copyrighted works and argues that AI-generated summaries threaten the financial stability of news organisations by reducing direct engagement with their content.
CIR’s CEO, Monika Bauerlein, emphasised the danger of AI tools replacing direct relationships between readers and news organisations, potentially undermining the foundations of independent journalism. The lawsuit is part of a broader legal challenge faced by OpenAI and Microsoft, with similar suits filed by other media outlets and authors.
Why does it matter?
Some news organisations have opted to collaborate with OpenAI, signing deals to allow the use of their content for AI training in exchange for compensation. Despite OpenAI’s argument that its use of publicly accessible content falls under ‘fair use,’ CIR’s lawsuit highlights the financial and ethical implications of using copyrighted material without proper attribution or payment, warning of significant impacts on investigative journalism and democracy.
Time magazine has entered a multi-year agreement with OpenAI, granting the AI firm access to its news archives. The deal allows OpenAI’s ChatGPT to cite and link back to Time.com in user queries, although financial details were not disclosed. OpenAI, led by Sam Altman, has forged similar partnerships with prominent media outlets such as the Financial Times, Axel Springer, Le Monde, and Prisa Media.
These collaborations help train and enhance OpenAI’s products while giving media companies access to AI technology for developing new products. Although some media companies are suing OpenAI over content usage, such partnerships are crucial for training AI models and offer a potential revenue stream for news publishers. The trend comes amid broader industry tensions, highlighted by Meta’s decision to block news sharing in Canada following legislation requiring payment for news content.
Why does it matter?
The OpenAI-Time deal is part of a larger movement where publishers seek fair compensation for their content amid the rise of generative AI, which has prompted discussions on ethical content usage and compliance with web standards.
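One web standard at the centre of that discussion is the Robots Exclusion Protocol: OpenAI documents a GPTBot crawler that respects robots.txt, so publishers without a licensing deal can opt their archives out of training crawls. A minimal check using Python's standard library (example.com is a placeholder domain):

```python
from urllib.robotparser import RobotFileParser

# A publisher opting out of training crawls would add these lines to
# its robots.txt:
#   User-agent: GPTBot
#   Disallow: /
#
# This sketch checks whether a given site currently allows GPTBot.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()
print(rp.can_fetch("GPTBot", "https://example.com/archive/story.html"))
```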
Channel Seven is currently investigating a significant breach on its YouTube channel, where unauthorised content featuring an AI-generated deepfake version of Elon Musk was streamed repeatedly. The incident on Thursday involved the channel being altered to mimic Tesla’s official presence. Viewers were exposed to a fabricated live stream where the AI-generated Musk promoted cryptocurrency investments via a QR code, claiming a potential doubling of assets.
During the stream, the fake Musk engaged with the audience, urging viewers to take advantage of the purported investment opportunity. The footage also featured a chat box from the fake Tesla page, displaying comments and links that further promoted the fraudulent scheme. The incident affected several other channels under Channel Seven’s umbrella, including 7 News and Spotlight, with all content subsequently deleted from these platforms.
A spokesperson from Channel Seven acknowledged the issue, confirming they are investigating alongside YouTube to resolve the situation swiftly. The network’s main YouTube page appeared inaccessible following the breach, prompting the investigation into how the security lapse occurred. The incident comes amidst broader challenges for Seven West Media, which recently announced significant job cuts as part of a cost-saving initiative led by its new CEO.
Why does it matter?
The breach underscores growing concerns over cybersecurity on social media platforms, particularly as unauthorised access to high-profile channels can disseminate misleading or harmful information. Channel Seven’s efforts to address the issue highlight the importance of robust digital security measures in safeguarding against such incidents in the future.
YouTube is negotiating with major record labels to license their songs for AI tools that clone popular artists’ music. The negotiations aim to secure the content needed to legally train AI song generators and launch new tools this year. Google-owned YouTube has offered upfront payments to major labels like Sony, Warner, and Universal to encourage artists to participate, but many remain opposed, fearing it could devalue their work.
Previously, YouTube tested an AI tool called ‘Dream Track,’ which allowed users to create music clips mimicking well-known artists. However, only a few artists participated, including Charli XCX and John Legend. YouTube now hopes to sign up dozens more artists to expand its AI song generator tool, though it won’t carry the Dream Track brand.
Why does it matter?
These negotiations come as AI companies such as OpenAI strike licensing agreements with media groups. The proposed music deals would involve one-off payments to labels rather than royalty-based arrangements. YouTube’s AI tools could become part of its Shorts platform, competing with TikTok and similar services. As the discussions continue, major labels are also suing AI startups for allegedly using copyrighted recordings without permission, seeking significant damages.