The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy’. The TikTok video depicts Miles dancing under text about rising living costs and is clearly marked as AI-generated. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.
Miles expressed concerns about the potential dangers of AI in political communication, highlighting the need for caution because videos are more likely to be believed than doctored photos. Despite rejecting AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that the party has no intention of creating deepfake videos.
The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.
Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.
The Content Origin Protection and Integrity from Edited and Deepfaked Media Bill, also known as the COPIED Act, was introduced on 11 July 2024 by US Senators Marsha Blackburn, Maria Cantwell and Martin Heinrich. The bill is intended to safeguard the intellectual property of creatives, particularly journalists, publishers, broadcasters and artists.
According to the bill, images, videos, audio clips and texts are considered deepfakes if they contain ‘synthetic or synthetically modified content that appears authentic to a reasonable person and creates a false understanding or impression’. If enacted, the bill would apply to online platforms that serve US-based customers and either generate annual revenue of at least $50 million or have 25 million registered active users for three consecutive months.
Under the bill, companies that deploy or develop AI models must install a feature allowing users to tag such images with contextual or content provenance information, such as their source and history, in a machine-readable format. It would then be illegal to remove such tags for any purpose other than research, or to use tagged images to train subsequent AI models or generate content. Victims would have the right to sue offenders.
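The bill does not prescribe a concrete format for this provenance information, only that it be machine-readable. As a rough illustration of what such a tag could contain, here is a minimal sketch in Python; the field names are hypothetical and not taken from the bill’s text (real-world provenance systems such as C2PA define their own formal schemas):

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_tag(asset_bytes: bytes, source: str, history: list) -> str:
    """Build a minimal machine-readable provenance record for a media asset.

    Field names here are illustrative assumptions, not the COPIED Act's
    or any standard's schema.
    """
    record = {
        # Hash ties the record to the exact bytes of the asset.
        "content_hash": hashlib.sha256(asset_bytes).hexdigest(),
        "source": source,          # who originally created the asset
        "history": history,        # ordered list of edits applied to it
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

# Example: tag a (stand-in) image that was cropped and then AI-upscaled.
tag = make_provenance_tag(b"fake-image-bytes", "example-newsroom",
                          ["crop", "ai-upscale"])
parsed = json.loads(tag)
print(parsed["source"])               # example-newsroom
print(len(parsed["content_hash"]))    # 64 (hex digits of a SHA-256 digest)
```

Because the record is plain JSON keyed to a content hash, any downstream platform could verify that the tag still matches the asset and detect when the tag has been stripped or the bytes altered.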
The COPIED Act is backed by several artist-affiliated groups, including SAG-AFTRA, the National Music Publishers’ Association, the Songwriters Guild of America (SGA) and the National Association of Broadcasters, as well as the US National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO) and the US Copyright Office. The bill has also received bipartisan support.
AI is currently a hot topic in the K-Pop community, as several top groups, including Seventeen, have begun using the technology to create music videos and write lyrics. Seventeen, one of the most successful K-Pop acts, has incorporated AI-generated scenes in their latest single, ‘Maestro,’ and experimented with AI in songwriting. Band member Woozi expressed a desire to develop alongside technology rather than resist it.
The use of AI has divided fans. Some, like super fan Ashley Peralta, appreciate AI’s ability to help artists overcome creative blocks but worry it might disconnect fans from the artists’ authentic emotions. Podcaster Chelsea Toledo shares similar concerns, fearing AI-generated lyrics might dilute Seventeen’s reputation as a self-producing group known for their personal touch in songwriting and choreography.
Industry professionals, such as producer Chris Nairn, recognise South Korea’s progressive approach to music production. While he acknowledges AI’s potential, he doubts its ability to match top-tier songwriting’s innovation and uniqueness. Music journalist Arpita Adhya points out the immense pressure on K-Pop artists to produce frequent content, which may drive the adoption of AI.
Vimeo has joined TikTok, YouTube, and Meta in requiring creators to label AI-generated content. Announced on Wednesday, this new policy mandates that creators disclose when realistic content is produced using AI. The updated terms of service aim to prevent confusion between genuine and AI-created videos, addressing the challenge of distinguishing real from fake content due to advanced generative AI tools.
Not all AI usage requires labelling; animated content, videos with obvious visual effects, or minor AI production assistance are exempt. However, videos that feature altered depictions of celebrities or events must include an AI content label. Vimeo’s AI tools, such as those that edit out long pauses, will also prompt labelling.
Creators can manually indicate AI usage when uploading or editing videos, specifying whether AI was used for audio, visuals, or both. Vimeo plans to develop automated systems to detect and label AI-generated content to enhance transparency and reduce the burden on creators. CEO Philip Moyer emphasised the importance of protecting user-generated content from AI training models, aligning Vimeo with similar policies at YouTube.
A recent survey by Tracklib reveals that 25% of music producers are now integrating AI into their creative processes, marking a significant adoption of technology within the industry. However, most producers exhibit resistance towards AI, citing concerns over losing creative control as a primary barrier.
Among those using AI, the survey found that most employ it for stem separation (73.9%) rather than full song creation, which is used by only a small fraction (3%). Concerns among non-users primarily revolve around artistic integrity (82.2%) and doubts about AI’s ability to maintain quality (34.5%), with additional concerns including cost and copyright issues.
Interestingly, the survey highlights a stark divide between perceptions of assistive AI, which aids in music creation, and generative AI, which directly generates elements or entire songs. While some producers hold a positive view of assistive AI, generative AI faces stronger opposition, especially among younger respondents.
Overall, the survey underscores a cautious optimism about AI’s future impact on music production, with 70% of respondents expecting it to have a significant influence going forward. Despite current reservations, Tracklib predicts continued adoption of music AI, noting it is entering the “early majority” phase of adoption according to technology adoption models.
YouTube has introduced an updated eraser tool that allows creators to remove copyrighted music from their videos without affecting speech, sound effects, or other audio. Launched on 4 July, the tool uses an AI-powered algorithm to target only the copyrighted music, leaving the rest of the video intact.
Previously, videos flagged for copyrighted audio faced muting or removal. However, YouTube cautions that the tool might only be effective if the song is easy to isolate.
Good news creators: our updated Erase Song tool helps you easily remove copyright-claimed music from your video (while leaving the rest of your audio intact). Learn more… https://t.co/KeWIw3RFeH
YouTube chief Neal Mohan announced the launch on X, explaining that the company had been testing the tool for some time but struggled to remove copyrighted tracks accurately. The new AI algorithm represents a significant improvement, allowing users to mute all sound or erase the music in their videos. Advancements like this are part of YouTube’s broader efforts to leverage AI technology to enhance user experience and compliance with copyright laws.
In addition to the eraser tool, YouTube is making strides in AI-driven music licensing. The company has been negotiating with major record labels to roll out AI music licensing deals, aiming to use AI to create music and potentially offer AI voice imitations of famous artists. Following the launch of YouTube’s AI tool Dream Track last year, which allowed users to create music with AI-generated voices of well-known singers, YouTube continues to engage with major labels like Sony, Warner, and Universal to expand the use of AI in music creation and licensing.
Why does it matter?
The IATSE’s tentative agreement represents a significant step forward in securing fair wages and job protections for Hollywood’s behind-the-scenes workers, ensuring that the rapid advancements in technology do not come at the expense of human employment.
Actor Morgan Freeman, renowned for his distinctive voice, recently addressed concerns over a video circulating on TikTok featuring a voice purportedly his own but created using AI. The video, depicting a day in his niece’s life, prompted Freeman to emphasise the importance of reporting unauthorised AI usage. He thanked his fans on social media for their vigilance in maintaining authenticity and integrity, underscoring the need to protect against such deceptive practices.
This isn’t the first time Freeman has encountered unauthorised use of his likeness. Previously, his production company’s EVP, Lori McCreary, encountered deepfake videos attempting to mimic Freeman, including one falsely depicting him firing her. Such incidents highlight the growing prevalence of AI-generated content, prompting discussions about its ethical implications and the need for heightened awareness.
Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me. Your dedication helps authenticity and integrity remain paramount. Grateful. #AI #scam #imitation #IdentityProtection
Freeman’s case joins a broader trend of celebrities, from Taylor Swift to Tom Cruise, facing similar challenges with AI-generated deepfakes. These instances underscore ongoing concerns about digital identity theft and the blurred lines between real and fabricated content in the digital age.
Although Judy Garland never recorded herself reading ‘The Wonderful Wizard of Oz,’ fans will soon be able to hear her rendition thanks to a new app by ElevenLabs. The AI company has launched the Reader app, which can convert text into voice-overs using digitally produced voices of deceased celebrities, including Garland, James Dean, and Burt Reynolds. The app can transform articles, e-books, and other text formats into audio.
Dustin Blank, head of partnerships at ElevenLabs, emphasised the company’s respect for the legacies of these celebrities. The company has made agreements with the estates of the actors, though compensation details remain undisclosed. The initiative highlights AI’s potential in Hollywood, especially for creating content using synthetic voices, but it also raises important questions about the licensing and ethical use of AI.
The use of AI-generated celebrity voices comes amid growing concerns about authenticity and copyright in creative industries. ElevenLabs had previously faced scrutiny when its tool was reportedly used to create a fake robocall from President Joe Biden. Similar controversies have arisen, such as OpenAI’s introduction of a voice similar to Scarlett Johansson’s, which she publicly criticised.
As AI technology advances, media companies are increasingly utilising it for voiceovers. NBC recently announced the use of an AI version of sportscaster Al Michaels for Olympics recaps on its Peacock streaming platform, with Michaels receiving compensation. While the market for AI-generated voices remains uncertain, the demand for audiobooks narrated by recognisable voices suggests a promising future for this technology.
The UN General Assembly has adopted a resolution on AI capacity building, led by China. This non-binding resolution seeks to enhance developing countries’ AI capabilities through international cooperation and capacity-building initiatives. It also urges international organisations and financial institutions to support these efforts.
The resolution comes in the context of the ongoing technology rivalry between Beijing and Washington, as both nations strive to influence AI governance and portray each other as destabilising forces. Earlier this year, the US promoted a UN resolution advocating for ‘safe, secure, and trustworthy’ AI systems, gaining the support of over 110 countries, including China.
China’s resolution acknowledges the UN’s role in AI capacity-building and calls on Secretary-General Antonio Guterres to report on the unique challenges developing countries face and provide recommendations to address them.
The Center for Investigative Reporting (CIR), known for producing Mother Jones and Reveal, has sued OpenAI and Microsoft, accusing them of using its content without permission and compensation. The lawsuit, filed in New York federal court, claims that OpenAI’s business model is based on exploiting copyrighted works and argues that AI-generated summaries threaten the financial stability of news organisations by reducing direct engagement with their content.
CIR’s CEO, Monika Bauerlein, emphasised the danger of AI tools replacing direct relationships between readers and news organisations, potentially undermining the foundations of independent journalism. The lawsuit is part of a broader legal challenge faced by OpenAI and Microsoft, with similar suits filed by other media outlets and authors.
Why does it matter?
Some news organisations have opted to collaborate with OpenAI, signing deals to allow the use of their content for AI training in exchange for compensation. Despite OpenAI’s argument that its use of publicly accessible content falls under ‘fair use,’ CIR’s lawsuit highlights the financial and ethical implications of using copyrighted material without proper attribution or payment, warning of significant impacts on investigative journalism and democracy.