In the world of video game development, the rise of AI has sparked concern among performers who fear it could threaten their jobs. Motion capture actors like Noshir Dalal, who perform the physical movements that bring game characters to life, worry that AI could be used to replicate their performances without their consent, potentially reducing job opportunities and diminishing the value of their work.
Dalal, who has played characters in popular video games such as ‘Star Wars Jedi: Survivor’, highlights the physical toll and skill required in motion capture work. He argues that AI could allow studios to bypass hiring actors for new projects by reusing data from past performances. This concern is central to the ongoing strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents video game performers and other media professionals. The union is demanding stronger protections against unregulated AI use in the industry.
Why does this matter?
AI’s ability to generate new animations and voices based on existing data is at the heart of the issue. While studios argue that they have offered meaningful AI protections, performers remain sceptical. They worry that the use of AI could lead to ethical dilemmas, such as their likenesses being used in ways they do not endorse, as seen in the controversy surrounding game modifications that use AI to create inappropriate content.
Video game companies have offered wage increases and other benefits as negotiations continue, but the debate over AI protections remains unresolved. Performers like Dalal argue that, without strict controls, AI could strip away the artistry and individuality that actors bring to their roles, leaving them vulnerable to exploitation. The outcome of this dispute could set a precedent for how AI is regulated in the entertainment industry, impacting the future of video game development and beyond.
Plans to introduce AI-powered textbooks in South Korean classrooms have sparked concerns among parents. The government aims to roll out tablets with these advanced textbooks next year, with the goal of using them across all subjects by 2028, excluding music, art, physical education, and ethics. The AI textbooks will be designed to adapt to different learning speeds, and teachers will monitor student progress through dashboards.
However, many parents are uneasy about the impact of this new technology on their children’s well-being. Over 50,000 parents have signed a petition urging the government to prioritise overall student health rather than focusing solely on technological advancement. They argue that excessive exposure to digital devices is already causing serious problems.
One concerned parent, Lee Sun-youn, highlighted worries about the potential negative effects on children’s brain development and concentration. She pointed out that students in South Korea are already heavily reliant on smartphones and tablets, and increased screen time in classrooms could exacerbate these problems.
The government has yet to provide detailed information on how the AI textbook program will be implemented. As the rollout approaches, the debate over the balance between technology and student welfare continues to intensify.
The Hollywood actors’ union, SAG-AFTRA, has reached an agreement with the online talent marketplace Narrativ, allowing actors to sell the rights to digitally replicate their voices using AI. The deal addresses growing concerns among performers about the potential theft of their likenesses through AI, providing them with a way to earn income and retain control over how their voice replicas are used. Actors can set the price for their digital voice, ensuring it meets at least the union’s minimum pay standards, and advertisers must obtain consent for each use.
SAG-AFTRA has praised this agreement as a model for the ethical use of AI in advertising, emphasising the importance of safeguarding performers’ rights in the digital age. The issue of AI-driven voice replication has been a significant concern in Hollywood, highlighted by actress Scarlett Johansson’s accusations against OpenAI for the unauthorised use of her voice. That concern was also central to last year’s Hollywood strike and remains a key issue in ongoing labour disputes involving video game voice actors and motion-capture performers.
In response to the rise of AI-generated deepfakes and their potential misuse, the NO FAKES Act has been introduced in Congress, aiming to make unauthorised AI copying of a person’s voice and likeness illegal. The bill has gained support from major industry players, including SAG-AFTRA, Disney, and The Recording Academy, reflecting widespread concern over the implications of AI in entertainment and beyond.
Dutch copyright enforcement group BREIN has successfully taken down a large language dataset used to train AI models without proper permission. The dataset contained information gathered from tens of thousands of books, news sites, and Dutch language subtitles from numerous films and TV series. BREIN’s Director, Bastiaan van Ramshorst, noted the difficulty in determining whether and how extensively AI companies had already used the dataset.
The removal comes as the EU prepares to enforce its AI Act, requiring companies to disclose the datasets used in training AI models. The person responsible for offering the Dutch dataset complied with a cease and desist order and removed it from the website where it was available.
Why does this matter?
The action follows similar moves in other countries, such as Denmark, where a copyright protection group took down a large dataset called ‘Books3’ last year. BREIN did not disclose the identity of the individual behind the dataset, citing Dutch privacy regulations.
On Monday, the world’s largest music label, Universal Music Group (UMG), announced an agreement with Meta Platforms to create new opportunities for artists and songwriters on Meta’s social platforms. The multi-year global agreement includes Meta’s major platforms – Facebook, Instagram, Messenger, and WhatsApp.
The joint statement reads, ‘The new agreement reflects the two companies’ shared commitment to protecting human creators and artistry, including ensuring that artists and songwriters are compensated fairly. As part of their multifaceted partnership, Meta and UMG will continue working together to address, among other things, unauthorised AI-generated content that could affect artists and songwriters.’
In 2017, UMG and Meta Platforms signed an agreement to license UMG’s music catalogues for use on Facebook’s platforms, creating a new revenue stream from user-generated videos. Previously, artists had no way to monetise such content and had to rely on complicated legal proceedings to remove unlicensed uses of their music. The latest agreement further expands monetisation opportunities for Universal Music’s artists and songwriters, including licensed music for short-form videos.
AI-generated music faces strong opposition from musicians and major record labels over concerns about copyright infringement. Grammy-nominated artist Tift Merritt and other prominent musicians have criticised AI music platforms like Udio for producing imitations of their work without permission. Merritt argues that these AI-generated songs are not transformative but amount to theft, harming creativity and human artists.
Major record labels, including Sony, Universal, and Warner Music, have taken legal action against AI companies like Udio and Suno. These lawsuits claim that the companies have used copyrighted recordings to train their systems without proper authorisation, thus creating unfair competition by flooding the market with cheap imitations. The labels argue that such practices drain revenue from real artists and violate copyright laws.
The AI companies defend their technology, asserting that their systems do not infringe on copyrights and that their practices fall under ‘fair use.’ They liken the backlash to past industry fears over new technologies like synthesisers and drum machines. However, the record labels maintain that AI systems misuse copyrighted material to mimic famous artists, including Mariah Carey and Bruce Springsteen, without appropriate licences.
Why does this matter?
These legal battles echo other high-profile copyright cases involving generative AI, such as those against chatbots like OpenAI’s ChatGPT. The outcome of these cases could set significant precedents for using AI in creative industries, with courts needing to address whether AI’s use of copyrighted material constitutes fair use or infringement.
OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but the company has yet to release it. The decision follows a two-year internal debate balancing the company’s commitment to transparency against the risk of deterring users. One survey found nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology.
Concerns have been raised that the tool could disproportionately affect non-native English speakers. OpenAI’s spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees supporting the tool argue that its benefits outweigh the risks, as AI-generated essays can be completed in seconds, posing a significant issue for educators.
The watermarking method would subtly alter token selection in AI-generated text, creating a detectable pattern invisible to human readers. That method is reported to be 99.9% effective, but there are concerns it could be bypassed through translation or text modifications. OpenAI is still determining how to provide access to the detector while preventing misuse.
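OpenAI has not published the details of its scheme, but the general idea of biasing token selection can be illustrated with a minimal sketch of a publicly known academic approach (a "green-list" watermark). All names, the toy vocabulary, and the hard always-pick-green rule below are illustrative assumptions, not OpenAI's actual method:

```python
import hashlib
import random

# Toy "green-list" watermark sketch: at each step, a hash of the previous
# token deterministically splits the vocabulary into a favoured "green"
# half and a "red" half. A watermarked generator prefers green tokens;
# a detector only needs the hashing rule, not the model, to spot the bias.

VOCAB = [f"tok{i}" for i in range(50)]  # stand-in for a real tokenizer vocabulary

def green_list(prev_token: str) -> set:
    """Deterministically select half the vocabulary as 'green', seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate(length: int, seed: int = 0) -> list:
    """Generate text that always picks a green token (a 'hard' watermark)."""
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out[1:]

def green_fraction(tokens: list) -> float:
    """Detector: fraction of tokens that lie in the green list of their predecessor."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(["<s>"] + tokens, tokens))
    return hits / len(tokens)

watermarked = generate(200)
rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(200)]  # ordinary, unbiased text

print(green_fraction(watermarked))  # 1.0: every token is green by construction
print(green_fraction(unmarked))     # ~0.5: chance level, no watermark present
```

The sketch also shows why the scheme is fragile: translating or paraphrasing the text replaces tokens, pushing the green fraction back towards chance level and defeating detection, which matches the bypass concern noted above.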
Despite the effectiveness of watermarking, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared false accusations of AI use. OpenAI is exploring alternative approaches to address these concerns while maintaining AI transparency and credibility.
Following a recent lawsuit by the Recording Industry Association of America (RIAA) against music generation startups Udio and Suno, Suno admitted in a court filing that it trained its AI model using copyrighted songs. Suno claimed this was legal under the fair-use doctrine.
The RIAA’s lawsuit, filed on 24 June, alleges that both startups used copyrighted music without permission to train their models. Suno’s admission is the first direct acknowledgement of this practice. Suno CEO Mikey Shulman defended the use of copyrighted material on the open internet, comparing it to a kid learning to write rock songs after listening to the genre.
The RIAA responded by calling Suno’s actions ‘industrial scale infringement’ that does not qualify as fair use. They argued that such practices harm artists by repackaging their work and competing directly with the originals. The outcome of this case, still in its early stages, could set a significant precedent for AI model training and copyright law.
A formal complaint has been filed with the Agency for Access to Public Information (AAIP) of Argentina against Meta, the parent company of Facebook, WhatsApp and Instagram. The case is in line with the international context of increasing scrutiny on the data protection practices of large technology companies.
The complaint was filed by Facundo Malaureille and Daniel Monastersky, lawyers specialising in personal data protection and directors of the Diploma in Data Governance at CEMA University. It targets the company’s use of personal data for AI training.
The filing consists of 22 points and requests that Meta Argentina explain its practices for collecting and using personal data for AI training. The AAIP, as the enforcement authority of Argentina’s Personal Data Protection Law (Law 25,326), will evaluate and respond to the complaint.
The country’s technological and legal community is closely watching the development of this case, given that the outcome of this complaint could impact innovation in AI and the protection of personal data in Argentina in the coming years.
Bechtle has secured a significant framework agreement with the German government to provide up to 300,000 iPhones and iPads, all equipped with approved Apple security software. The contract, valued at €770 million ($835.22 million), will run until the end of 2027, according to an announcement on Thursday.
This deal aligns with Germany’s recent IT security law aimed at restricting untrustworthy suppliers and ensuring robust security measures for government officials. Bechtle’s partnership with Apple underscores the importance of reliable technology and security in government operations.
The agreement comes after Apple’s earlier legal challenges in Germany, including a court injunction in a 2018 patent case. Despite those hurdles, the collaboration with Bechtle demonstrates Apple’s continued commitment to providing secure, trusted devices for essential functions within the public sector.