The Hollywood actors’ union SAG-AFTRA has struck a deal with the online talent marketplace Narrativ that allows actors to sell the rights to digitally replicate their voices using AI. The deal addresses growing concern among performers that their likenesses could be appropriated through AI, giving them a way to earn income while retaining control over how their voice replicas are used. Actors can set the price for their digital voice, provided it meets at least the union’s minimum pay standards, and advertisers must obtain consent for each use.
SAG-AFTRA has praised the agreement as a model for the ethical use of AI in advertising, emphasising the importance of safeguarding performers’ rights in the digital age. AI-driven voice replication has been a significant concern in Hollywood, highlighted by actress Scarlett Johansson’s accusation that OpenAI used a voice resembling hers without permission. The concern was also central to last year’s Hollywood strike and remains a key issue in ongoing labour disputes involving video game voice actors and motion-capture performers.
In response to the rise of AI-generated deepfakes and their potential misuse, the NO FAKES Act has been introduced in Congress, aiming to make unauthorised AI copying of a person’s voice and likeness illegal. The bill has gained support from major industry players, including SAG-AFTRA, Disney, and The Recording Academy, reflecting widespread concern over the implications of AI in entertainment and beyond.
Dutch copyright enforcement group BREIN has taken down a large language dataset that was being offered for training AI models without the rights holders’ permission. The dataset contained material gathered from tens of thousands of books, news sites, and Dutch-language subtitles from numerous films and TV series. BREIN’s director, Bastiaan van Ramshorst, noted the difficulty of determining whether, and how extensively, AI companies had already used the dataset.
The removal comes as the EU prepares to enforce its AI Act, which will require companies to disclose the datasets used to train AI models. The person responsible for offering the Dutch dataset complied with a cease-and-desist order and removed it from the website where it was hosted.
Why does this matter?
BREIN’s action follows similar moves in other countries, such as Denmark, where a copyright protection group took down the large ‘Books3’ dataset last year. BREIN did not disclose the identity of the individual behind the dataset, citing Dutch privacy regulations.
On Monday, the world’s largest music label, Universal Music Group (UMG), announced an agreement with Meta Platforms to create new opportunities for artists and songwriters on Meta’s social platforms. The multi-year global agreement includes Meta’s major platforms – Facebook, Instagram, Messenger, and WhatsApp.
The joint statement reads: ‘The new agreement reflects the two companies’ shared commitment to protecting human creators and artistry, including ensuring that artists and songwriters are compensated fairly. As part of their multifaceted partnership, Meta and UMG will continue working together to address, among other things, unauthorised AI-generated content that could affect artists and songwriters.’
In 2017, UMG and Meta Platforms signed an agreement licensing UMG’s music catalogues for use on Facebook’s platforms, creating a new revenue stream for artists from user-generated videos. Previously there was no way to monetise such content, and artists had to rely on complicated legal proceedings to remove unlicensed uses. The latest agreement further expands monetisation opportunities for Universal Music’s artists and songwriters, including licensed music for short-form videos.
AI-generated music faces strong opposition from musicians and major record labels over concerns about copyright infringement. Grammy-nominated artist Tift Merritt and other prominent musicians have criticised AI music platforms like Udio for producing imitations of their work without permission. Merritt argues that these AI-generated songs are not transformative but amount to theft, harming creativity and human artists.
Major record labels, including Sony, Universal, and Warner Music, have taken legal action against AI companies like Udio and Suno. These lawsuits claim that the companies have used copyrighted recordings to train their systems without proper authorisation, thus creating unfair competition by flooding the market with cheap imitations. The labels argue that such practices drain revenue from real artists and violate copyright laws.
The AI companies defend their technology, asserting that their systems do not infringe on copyrights and that their practices fall under ‘fair use.’ They liken the backlash to past industry fears over new technologies like synthesisers and drum machines. The record labels counter that AI systems use copyrighted material without appropriate licences to mimic famous artists, including Mariah Carey and Bruce Springsteen.
Why does this matter?
These legal battles echo other high-profile copyright cases involving generative AI, such as those against chatbots like OpenAI’s ChatGPT. The outcome of these cases could set significant precedents for using AI in creative industries, with courts needing to address whether AI’s use of copyrighted material constitutes fair use or infringement.
OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but the company has yet to release it. The hesitation stems from an internal debate that has lasted two years, weighing the company’s commitment to transparency against the risk of driving users away: one survey found nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology.
Concerns have been raised that the tool could disproportionately affect non-native English speakers. An OpenAI spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees who support the tool argue that its benefits outweigh the risks, as AI-generated essays can be produced in seconds, posing a significant problem for educators.
The watermarking method would subtly alter how tokens are selected in AI-generated text, creating a pattern that is detectable by software but invisible to human readers. The method is reported to be 99.9% effective, though there are concerns it could be bypassed through translation or other text modifications. OpenAI is still working out how to provide access to the detector while preventing misuse.
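OpenAI has not published the specifics of its scheme, but token-selection watermarks described in public research, such as the ‘green list’ approach of Kirchenbauer et al., follow a recognisable pattern: the preceding context pseudo-randomly splits the vocabulary, and the sampler is nudged towards one part of the split. The toy Python sketch below illustrates that general idea only; the vocabulary, the bias parameter, and all function names are invented for illustration and do not represent OpenAI’s actual method.

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "sofa"]

def green_list(prev_token, fraction=0.5):
    # Hash the previous token to seed a PRNG, so the generator and the
    # detector can reconstruct the same 'green' subset of the vocabulary
    # without sharing any extra state.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def sample_watermarked(prev_token, rng, bias=0.9):
    # Stand-in for model sampling: most of the time, pick the next token
    # from the context's green list instead of the full vocabulary.
    greens = sorted(green_list(prev_token))
    return rng.choice(greens) if rng.random() < bias else rng.choice(VOCAB)

def green_fraction(tokens):
    # Detection: count how often each token falls in its context's green
    # list. Unwatermarked text lands near the chance rate (50% here).
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / (len(tokens) - 1)

rng = random.Random(0)
watermarked = ["the"]
for _ in range(200):
    watermarked.append(sample_watermarked(watermarked[-1], rng))
unwatermarked = [rng.choice(VOCAB) for _ in range(200)]

print(f"watermarked:   {green_fraction(watermarked):.2f}")   # well above 0.5
print(f"unwatermarked: {green_fraction(unwatermarked):.2f}") # close to 0.5
```

The sketch also hints at why translation or heavy editing can defeat such schemes: once the tokens change, the statistical excess of ‘green’ choices that the detector relies on washes out.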
Despite the watermark’s reported effectiveness, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared being falsely accused of using AI. OpenAI is exploring alternative approaches to address these concerns while maintaining AI transparency and credibility.
Following a recent lawsuit by the Recording Industry Association of America (RIAA) against music generation startups Udio and Suno, Suno admitted in a court filing that it trained its AI model using copyrighted songs. Suno claimed this was legal under the fair-use doctrine.
The RIAA’s lawsuit, filed on 24 June, alleges that both startups used copyrighted music without permission to train their models. Suno’s admission is the first direct acknowledgement of this practice. Suno CEO Mikey Shulman defended the use of copyrighted material on the open internet, comparing it to a kid learning to write rock songs after listening to the genre.
The RIAA responded by calling Suno’s actions ‘industrial scale infringement’ that does not qualify as fair use. They argued that such practices harm artists by repackaging their work and competing directly with the originals. The outcome of this case, still in its early stages, could set a significant precedent for AI model training and copyright law.
A formal complaint has been filed with Argentina’s Agency for Access to Public Information (AAIP) against Meta, the parent company of Facebook, WhatsApp, and Instagram. The case fits an international pattern of increasing scrutiny of large technology companies’ data protection practices.
The complaint was filed by Facundo Malaureille and Daniel Monastersky, lawyers specialising in personal data protection and directors of the Diploma in Data Governance at CEMA University. It targets the company’s use of personal data for AI training.
The filing consists of 22 points and requests that Meta Argentina explain its practices for collecting and using personal data for AI training. The AAIP, as the enforcement authority for Argentina’s Personal Data Protection Law (Law 25,326), will evaluate and respond to it.
The country’s technological and legal community is watching the case closely, as its outcome could shape both AI innovation and the protection of personal data in Argentina in the coming years.
Bechtle has secured a significant framework agreement with the German government to provide up to 300,000 iPhones and iPads, all equipped with approved Apple security software. The contract, valued at €770 million ($835.22 million), will run until the end of 2027, according to an announcement on Thursday.
This deal aligns with Germany’s recent IT security law aimed at restricting untrustworthy suppliers and ensuring robust security measures for government officials. Bechtle’s partnership with Apple underscores the importance of reliable technology and security in government operations.
The agreement follows earlier legal challenges for Apple in Germany, including a court injunction in a 2018 patent case. Despite those hurdles, the collaboration with Bechtle demonstrates Apple’s continued commitment to providing secure, trusted devices for essential public sector functions.
Sam Altman, co-founder and CEO of OpenAI, raises a critical question: ‘Who will control the future of AI?’ He frames it as a choice between a democratic vision, led by the US and its allies to disseminate AI benefits widely, and an authoritarian one, led by nations like Russia and China, aiming to consolidate power through AI. Altman underscores the urgency of this decision, given the rapid advancements in AI technology and the high stakes involved.
Altman warns that while the United States currently leads in AI development, this advantage is precarious due to substantial investments by authoritarian governments. He highlights the risks if these regimes take the lead, such as restricted AI benefits, enhanced surveillance, and advanced cyber weapons. To prevent this, Altman proposes a four-pronged strategy – robust security measures to protect intellectual property, significant investments in physical and human infrastructure, a coherent commercial diplomacy policy, and establishing international norms and safety protocols.
He emphasises close collaboration between the US government and the private sector to implement these measures swiftly, believing that proactive efforts today in security, infrastructure, talent development, and global governance can secure a competitive advantage and broad societal benefits. Ultimately, Altman advocates a democratic vision for AI, underpinned by strategic, timely, and globally inclusive action to maximise the technology’s benefits while minimising its risks.
OpenAI’s AI safety leader, Aleksander Madry, is moving to a significant new research project, according to CEO Sam Altman. OpenAI executives Joaquin Quinonero Candela and Lilian Weng will take over the preparedness team, which evaluates the readiness of the company’s models for artificial general intelligence. The move is part of a broader strategy to unify OpenAI’s safety efforts.
The preparedness team ensures the safety and readiness of OpenAI’s AI models. In his new role, Madry will take on an expanded remit within the research organisation. OpenAI is also addressing safety concerns around its advanced chatbots, which can hold human-like conversations and generate multimedia content from text prompts.
Under the new leadership, researcher Tejal Patwardhan will manage much of the team’s day-to-day work, ensuring a continued focus on AI safety. The reorganisation follows the recent formation of a Safety and Security Committee, led by board members including Sam Altman.
The reshuffle comes amid rising safety concerns as OpenAI’s technologies become more powerful and widely used. The Safety and Security Committee was established earlier this year in preparation for training the next generation of AI models. These developments reflect OpenAI’s ongoing commitment to AI safety and responsible innovation.