AI music faces legal challenges

AI-generated music faces strong opposition from musicians and major record labels over concerns about copyright infringement. Grammy-nominated artist Tift Merritt and other prominent musicians have criticised AI music platforms like Udio for producing imitations of their work without permission. Merritt argues that these AI-generated songs are not transformative but amount to theft, harming creativity and human artists.

Major record labels, including Sony, Universal, and Warner Music, have taken legal action against AI companies like Udio and Suno. These lawsuits claim that the companies have used copyrighted recordings to train their systems without proper authorisation, thus creating unfair competition by flooding the market with cheap imitations. The labels argue that such practices drain revenue from real artists and violate copyright laws.

The AI companies defend their technology, asserting that their systems do not infringe on copyrights and that their practices fall under ‘fair use.’ They liken the backlash to past industry fears over new technologies like synthesisers and drum machines. However, the record labels maintain that the AI systems misuse copyrighted material to mimic famous artists, including Mariah Carey and Bruce Springsteen, without appropriate licences.

Why does this matter?

These legal battles echo other high-profile copyright cases involving generative AI, such as those against chatbots like OpenAI’s ChatGPT. The outcome of these cases could set significant precedents for using AI in creative industries, with courts needing to address whether AI’s use of copyrighted material constitutes fair use or infringement.

OpenAI delays release of anti-cheating tool

OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but the company has yet to release it. The decision follows a two-year internal debate weighing the company’s commitment to transparency against the risk of deterring users: one survey found nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology.

Concerns have been raised that the tool could disproportionately affect non-native English speakers. OpenAI’s spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees supporting the tool argue that its benefits outweigh the risks, as AI-generated essays can be completed in seconds, posing a significant issue for educators.

The watermarking method would subtly alter token selection in AI-generated text, creating a detectable pattern invisible to human readers. That method is reported to be 99.9% effective, but there are concerns it could be bypassed through translation or text modifications. OpenAI is still determining how to provide access to the detector while preventing misuse.
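OpenAI has not published the details of its scheme, but the description matches ‘green-list’ watermarking techniques from the research literature (e.g., Kirchenbauer et al., 2023), in which the sampler is nudged towards a pseudo-random subset of the vocabulary that a detector can later recompute. The sketch below is a toy illustration of that idea, not OpenAI’s implementation; the vocabulary, candidate sampling and green-list fraction are all assumptions.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary standing in for a real tokeniser
GREEN_FRACTION = 0.5

def green_list(prev_token: str) -> set[str]:
    """Derive the 'green' half of the vocabulary from a hash of the
    previous token, so a detector can rebuild the same list later."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def watermarked_choice(prev_token: str, candidates: list[str], rng: random.Random) -> str:
    """Prefer a green-listed candidate when one exists; the bias is
    invisible to readers but statistically detectable over many tokens."""
    greens = [t for t in candidates if t in green_list(prev_token)]
    return rng.choice(greens or candidates)

def green_rate(tokens: list[str]) -> float:
    """Detector: the fraction of tokens falling in their predecessor's
    green list. Unwatermarked text hovers near GREEN_FRACTION;
    watermarked text scores far higher."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

# Generate 200 watermarked tokens, then measure the detector signal.
rng = random.Random(0)
text = ["tok0"]
for _ in range(200):
    candidates = rng.sample(VOCAB, 20)  # stand-in for the model's top-k proposals
    text.append(watermarked_choice(text[-1], candidates, rng))
print(f"green rate: {green_rate(text):.2f}")  # close to 1.0, versus roughly 0.5 for plain text
```

Paraphrasing or translating the output re-rolls the token sequence, which is why this kind of signal is fragile against the modifications mentioned above.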

Despite the watermark’s reported effectiveness, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared false accusations of AI use. OpenAI is exploring alternative approaches to address these concerns while maintaining AI transparency and credibility.

Suno claims AI music training on copyrighted songs is ‘fair use’

Following a recent lawsuit by the Recording Industry Association of America (RIAA) against music generation startups Udio and Suno, Suno admitted in a court filing that it trained its AI model using copyrighted songs. Suno claimed this was legal under the fair-use doctrine.

The RIAA’s lawsuit, filed on 24 June, alleges that both startups used copyrighted music without permission to train their models. Suno’s admission is the first direct acknowledgement of this practice. Suno CEO Mikey Shulman defended the use of copyrighted material on the open internet, comparing it to a kid learning to write rock songs after listening to the genre.

The RIAA responded by calling Suno’s actions ‘industrial scale infringement’ that does not qualify as fair use. They argued that such practices harm artists by repackaging their work and competing directly with the originals. The outcome of this case, still in its early stages, could set a significant precedent for AI model training and copyright law.

Formal complaint in Argentina challenges Meta’s data use for AI training

A formal complaint has been filed with Argentina’s Agency for Access to Public Information (AAIP) against Meta, the parent company of Facebook, WhatsApp and Instagram. The case fits an international pattern of increasing scrutiny of large technology companies’ data protection practices.

The filing was made by Facundo Malaureille and Daniel Monastersky, lawyers specialising in personal data protection and directors of the Diploma in Data Governance at CEMA University. The complaint challenges the company’s use of personal data for AI training.

The filing consists of 22 points and asks Meta Argentina to explain its practices for collecting and using personal data for AI training. The AAIP, as the enforcement authority for Argentina’s Personal Data Protection Law (Law 25,326), will evaluate and respond to it.

The country’s technological and legal community is closely watching the development of this case, given that the outcome of this complaint could impact innovation in AI and the protection of personal data in Argentina in the coming years.

Bechtle secures €770 million deal with German government

Bechtle has secured a significant framework agreement with the German government to provide up to 300,000 iPhones and iPads, all equipped with approved Apple security software. The contract, valued at €770 million ($835.22 million), will run until the end of 2027, according to an announcement on Thursday.

This deal aligns with Germany’s recent IT security law aimed at restricting untrustworthy suppliers and ensuring robust security measures for government officials. Bechtle’s partnership with Apple underscores the importance of reliable technology and security in government operations.

The agreement follows Apple’s earlier legal challenges in Germany, including a court injunction in a 2018 patent case. Despite those hurdles, the collaboration with Bechtle demonstrates Apple’s continued commitment to providing secure and trusted devices for essential functions within the public sector.

OpenAI CEO emphasises democratic control in the future of AI

Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI benefits widely, and an authoritarian one, led by nations like Russia and China that aim to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid advancements in AI technology and the high stakes involved.

Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy – robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and establishing international norms and safety protocols.

He emphasises proactive collaboration between the US government and the private sector to implement these measures swiftly. Altman believes that proactive efforts today in security, infrastructure, talent development, and global governance can secure a competitive advantage and broad societal benefits. Ultimately, Altman advocates for a democratic vision for AI, underpinned by strategic, timely, and globally inclusive actions to maximise the technology’s benefits while minimising risks.

OpenAI announces major reorganisation to bolster AI safety measures

OpenAI’s AI safety leader, Aleksander Madry, is moving to a significant new research project, according to CEO Sam Altman. OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team, which evaluates the readiness of the company’s models for artificial general intelligence. The move is part of a broader strategy to unify OpenAI’s safety efforts.

OpenAI’s preparedness team ensures the safety and readiness of its AI models. In his new role, Madry will take on an expanded remit within the research organisation. OpenAI is also addressing safety concerns surrounding its advanced chatbots, which can engage in human-like conversations and generate multimedia content from text prompts.

Under the new structure, researcher Tejal Patwardhan will manage much of the preparedness team’s day-to-day work, ensuring a continued focus on AI safety. The reorganisation follows the recent formation of a Safety and Security Committee, led by board members including Sam Altman.

The reshuffle comes amid rising safety concerns as OpenAI’s technologies become more powerful and widely used. The Safety and Security Committee was established earlier this year in preparation for training the next generation of AI models. These developments reflect OpenAI’s ongoing commitment to AI safety and responsible innovation.

Queensland premier criticises AI use in political advertising

The premier of the Australian state of Queensland, Steven Miles, has condemned an AI-generated video created by the LNP opposition, calling it a ‘turning point for our democracy.’ The TikTok video, clearly marked as AI-generated, depicts Miles dancing under text about rising living costs. Miles has stated that the state Labor party will not use AI-generated advertisements in the upcoming election campaign.

Miles expressed concerns about the potential dangers of AI in political communication, arguing that caution is needed because fabricated videos are more likely to be believed than doctored photos. Despite rejecting AI for Labor’s own content, Miles dismissed the need for truth-in-advertising laws, asserting that the party has no intention of creating deepfake videos.

The LNP defended their use of AI, emphasising that the video was clearly labelled and aimed at highlighting issues like higher rents and increased power prices under Labor. The Electoral Commission of Queensland noted that while the state’s electoral act does not specifically address AI, any false statements about a candidate’s character can be prosecuted.

Experts, including communications lecturer Susan Grantham and QUT’s Patrik Wikstrom, have warned about the broader implications of AI in politics. Grantham pointed out that politicians already using AI for lighter content are at greater risk of being targeted. Wikstrom stressed that the real issue is political communication designed to deceive, echoing concerns raised by a UK elections watchdog about AI deepfakes undermining elections. Australia is also planning to implement tougher laws focusing on deepfakes.

US senators introduce COPIED Act to combat intellectual property theft in creative industry

The Content Origin Protection and Integrity from Edited and Deepfaked Media Bill, also known as the COPIED Act, was introduced on 11 July 2024 by US Senators Marsha Blackburn, Maria Cantwell and Martin Heinrich. The bill is intended to safeguard the intellectual property of creatives, particularly journalists, publishers, broadcasters and artists.

In recent times, the work and images of creatives have been used or modified without consent, at times to generate income. The push for legislation in the area intensified in January after explicit AI-generated images of the US musician Taylor Swift surfaced on X.

According to the bill, images, videos, audio clips and texts are considered deepfakes if they contain ‘synthetic or synthetically modified content that appears authentic to a reasonable person and creates a false understanding or impression’. If enacted, the bill would apply to online platforms that serve US-based customers and either generate annual revenue of at least $50 million or register 25 million active users for three consecutive months.

Under the bill, companies that develop or deploy AI models must provide a feature allowing users to tag images with contextual or content provenance information, such as their source and history, in a machine-readable format. It would then be illegal to remove such tags for any purpose other than research, or to use tagged works to train subsequent AI models or generate content. Victims would have the right to sue offenders.
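The bill does not prescribe a particular tag format. Purely as an illustration of what machine-readable provenance could look like, the sketch below pairs content with a JSON record of its source and history; the field names are hypothetical, and real standards such as C2PA define far richer, cryptographically signed manifests.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_tag(content: bytes, source: str, history: list[str]) -> str:
    """Build a machine-readable provenance record for a piece of content.
    Field names are illustrative, not drawn from the COPIED Act or C2PA."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "source": source,
        "history": history,
        "tagged_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

def verify_provenance_tag(content: bytes, tag: str) -> bool:
    """Check that a tag still matches its content; a mismatch means the
    content was altered after the tag was created."""
    record = json.loads(tag)
    return record["content_sha256"] == hashlib.sha256(content).hexdigest()

# Usage: tag an image's bytes, then confirm the tag still matches them.
image = b"...raw image bytes..."
tag = make_provenance_tag(image, source="example.com/original", history=["captured", "resized"])
assert verify_provenance_tag(image, tag)
```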

The COPIED Act is backed by several artist-affiliated groups, including SAG-AFTRA, the National Music Publishers’ Association, the Songwriters Guild of America (SGA) and the National Association of Broadcasters, as well as the US National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO) and the US Copyright Office. The bill has also received bipartisan support.

K-Pop’s AI revolution divides fans

AI is currently a hot topic in the K-Pop community, as several top groups, including Seventeen, have begun using the technology to create music videos and write lyrics. Seventeen, one of the most successful K-Pop acts, has incorporated AI-generated scenes in their latest single, ‘Maestro,’ and experimented with AI in songwriting. Band member Woozi expressed a desire to develop alongside technology rather than resist it.

The use of AI has divided fans. Some, like super fan Ashley Peralta, appreciate AI’s ability to help artists overcome creative blocks but worry it might disconnect fans from the artists’ authentic emotions. Podcaster Chelsea Toledo shares similar concerns, fearing AI-generated lyrics might dilute Seventeen’s reputation as a self-producing group known for their personal touch in songwriting and choreography.

Industry professionals, such as producer Chris Nairn, recognise South Korea’s progressive approach to music production. While he acknowledges AI’s potential, Nairn doubts it can match the innovation and uniqueness of top-tier songwriting. Music journalist Arpita Adhya points out the immense pressure on K-Pop artists to produce frequent content, which may drive the adoption of AI.

Why does this matter?

The debate reflects broader concerns in the music industry, where Western artists like Billie Eilish and Nicki Minaj have called for regulation to protect human artistry from AI’s encroachment. Fans and industry insiders continue to grapple with the balance between embracing technological advancements and preserving the authenticity that connects artists with their audiences.