Google’s $250M deal to support California newsrooms

Google has entered a $250 million deal with the state of California to support local newsrooms, which have been struggling with widespread layoffs and declining revenues. The decision comes in the wake of proposed legislation that would have required tech companies to pay news providers when they run ads alongside news content. By securing this deal, Google has managed to sidestep such bills.

The Media Guild of the West, a local journalism union, has criticised the deal, calling it a ‘shakedown’ that fails to address the real issues plaguing the industry. They argue that the deal’s financial commitments are minimal compared to the wealth tech giants have allegedly ‘stolen’ from newsrooms.

The deal includes the creation of the News Transformation Fund, supported by Google and taxpayers, which will distribute funds to news organisations in California over five years. Additionally, the National AI Innovation Accelerator, funded by Google, will support various industries, including journalism, by exploring the use of AI in their work.

While some, including California Governor Gavin Newsom, have praised the initiative, others remain sceptical. Critics argue that the deal needs to be revised, pointing out that only Google contributes financially, with other tech giants like Meta and Amazon absent from the agreement.

The news industry’s challenges are significant, with California seeing a sharp decline in publishers and journalists over the past two decades. Big Tech’s dominance in the advertising market and its impact on publisher traffic have exacerbated these challenges, leading to calls for more robust solutions to sustain local journalism.

New appointment at Google’s AI division

Google has appointed Noam Shazeer, a former Google researcher and co-founder of Character.AI, as co-lead of its main AI project, Gemini. Shazeer will join Jeff Dean and Oriol Vinyals in overseeing the development of AI models at DeepMind, Google’s AI division, which are set to enhance products like Search and Pixel smartphones.

Shazeer rejoined Google after founding Character.AI in 2021. The tech giant secured his return by paying billions and striking a licensing agreement with his former company. Shazeer expressed excitement in a memo to staff, praising the team he has rejoined.

Originally joining Google in 2000, Shazeer was instrumental in the 2017 research that ignited the current AI boom. Character.AI, which leverages these advancements, has attracted significant venture capital, reaching a $1 billion valuation last year.

Google’s decision to bring Shazeer back echoes similar strategies by other tech giants, although these moves have drawn regulatory scrutiny. In related news, a US judge recently ruled that Google’s search engine violated antitrust laws by creating an illegal monopoly.

Anthropic faces lawsuit for copyright infringement

Three authors have filed a class-action lawsuit against the AI company Anthropic in a California federal court, accusing the firm of illegally using their books and hundreds of thousands of others to train its AI chatbot, Claude. The lawsuit, initiated by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claims that Anthropic utilised pirated versions of their works to develop the chatbot’s ability to respond to human prompts.

Anthropic, which has received financial backing from major companies like Amazon and Google, acknowledged the lawsuit but declined to comment further due to the ongoing litigation. The legal action against Anthropic is part of a broader trend, with other content creators, including visual artists and news outlets, also suing tech companies over using their copyrighted material in training AI models.

This is not the first time Anthropic has faced such accusations. Music publishers previously sued the company for allegedly misusing copyrighted song lyrics to train Claude. The authors involved in the current case argue that Anthropic has built a multibillion-dollar business by exploiting their intellectual property without permission.

The lawsuit demands financial compensation for the authors and a court order to permanently prevent Anthropic from using their work unlawfully. As the case progresses, it highlights the growing tension between content creators and AI companies over using copyrighted material in developing AI technologies.

Video game actors fight for job security amid AI’s impact on the industry

In the world of video game development, the rise of AI has sparked concern among performers who fear it could threaten their jobs. Motion capture actors like Noshir Dalal, who perform the physical movements that bring game characters to life, worry that AI could be used to replicate their performances without their consent, potentially reducing job opportunities and diminishing the value of their work.

Dalal, who has played characters in popular video games such as ‘Star Wars Jedi: Survivor,’ highlights the physical toll and skill required in motion capture work. He argues that AI could allow studios to bypass hiring actors for new projects by reusing data from past performances. The concern is central to the ongoing strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents video game performers and other media professionals. The union is demanding stronger protections against unregulated AI use in the industry.

Why does this matter?

AI’s ability to generate new animations and voices based on existing data is at the heart of the issue. While studios argue that they have offered meaningful AI protections, performers remain sceptical. They worry that the use of AI could lead to ethical dilemmas, such as their likenesses being used in ways they do not endorse, as seen in the controversy surrounding game modifications that use AI to create inappropriate content.

Video game companies have offered wage increases and other benefits as negotiations continue, but the debate over AI protections remains unresolved. Performers like Dalal argue that, without strict controls, AI could strip away the artistry and individuality that actors bring to their roles, leaving them vulnerable to exploitation. The outcome of this dispute could set a precedent for how AI is regulated in the entertainment industry, impacting the future of video game development and beyond.

Parents in South Korea question AI textbook program

Plans to introduce AI-powered textbooks in South Korean classrooms have sparked concerns among parents. The government aims to roll out tablets with these advanced textbooks next year, with the goal of using them across all subjects by 2028, excluding music, art, physical education, and ethics. The AI textbooks will be designed to adapt to different learning speeds, and teachers will monitor student progress through dashboards.

However, many parents are uneasy about the impact of this new technology on their children’s well-being. Over 50,000 parents have signed a petition urging the government to prioritise overall student health rather than focusing solely on technological advancements. They argue that excessive exposure to digital devices is already causing unprecedented issues.

One concerned parent, Lee Sun-youn, highlighted worries about the potential negative effects on children’s brain development and concentration. She pointed out that students in South Korea are already heavily reliant on smartphones and tablets, and increased screen time in classrooms could exacerbate these problems.

The government has yet to provide detailed information on how the AI textbook program will be implemented. As the rollout approaches, the debate over the balance between technology and student welfare continues to intensify.

Hollywood union secures agreement allowing AI voice replication for advertisers

The Hollywood actors’ union, SAG-AFTRA, has reached an agreement with the online talent marketplace Narrativ, allowing actors to sell the rights to digitally replicate their voices using AI. The deal addresses growing concerns among performers about the potential theft of their likenesses through AI, providing them with a way to earn income and retain control over how their voice replicas are used. Actors can set the price for their digital voice, ensuring it meets at least the union’s minimum pay standards, and advertisers must obtain consent for each use.

SAG-AFTRA has praised this agreement as a model for the ethical use of AI in advertising, emphasising the importance of safeguarding performers’ rights in the digital age. The issue of AI-driven voice replication has been a significant concern in Hollywood, highlighted by actress Scarlett Johansson’s accusations against OpenAI for the unauthorised use of her voice. That concern was also central to last year’s Hollywood strike and remains a key issue in ongoing labour disputes involving video game voice actors and motion-capture performers.

In response to the rise of AI-generated deepfakes and their potential misuse, the NO FAKES Act has been introduced in Congress, aiming to make unauthorised AI copying of a person’s voice and likeness illegal. The bill has gained support from major industry players, including SAG-AFTRA, Disney, and The Recording Academy, reflecting widespread concern over the implications of AI in entertainment and beyond.

Dutch copyright group shuts down AI training dataset

Dutch copyright enforcement group BREIN has successfully taken down a large language dataset used to train AI models without proper permissions. The dataset contained information gathered from tens of thousands of books, news sites, and Dutch language subtitles from numerous films and TV series. BREIN’s Director, Bastiaan van Ramshorst, noted the difficulty in determining whether and how extensively AI companies had already used the dataset.

The removal comes as the EU prepares to enforce its AI Act, requiring companies to disclose the datasets used in training AI models. The person responsible for offering the Dutch dataset complied with a cease and desist order and removed it from the website where it was available.

Why does this matter?

The takedown follows similar moves in other countries, such as Denmark, where a copyright protection group took down a large dataset called ‘Books3’ last year. BREIN did not disclose the identity of the individual behind the dataset, citing Dutch privacy regulations.

UMG and Meta sign expanded deal on music monetisation

On Monday, the world’s largest music label, Universal Music Group (UMG), announced an agreement with Meta Platforms to create new opportunities for artists and songwriters on Meta’s social platforms. The multi-year global agreement includes Meta’s major platforms – Facebook, Instagram, Messenger, and WhatsApp.

The joint statement states, ‘The new agreement reflects the two companies’ shared commitment to protecting human creators and artistry, including ensuring that artists and songwriters are compensated fairly. As part of their multifaceted partnership, Meta and UMG will continue working together to address, among other things, unauthorised AI-generated content that could affect artists and songwriters.’

In 2017, UMG and Meta Platforms signed an agreement licensing UMG’s music catalogues for use on Facebook’s platforms, creating a new revenue stream for artists from user-generated videos. Previously, there was no way to monetise such content, and artists had to rely on complicated legal proceedings to remove unlicensed material. The latest agreement further expands monetisation opportunities for Universal Music’s artists and songwriters, including licensed music for short-form videos.

AI music faces legal challenges

AI-generated music faces strong opposition from musicians and major record labels over concerns about copyright infringement. Grammy-nominated artist Tift Merritt and other prominent musicians have criticised AI music platforms like Udio for producing imitations of their work without permission. Merritt argues that these AI-generated songs are not transformative but amount to theft, harming creativity and human artists.

Major record labels, including Sony, Universal, and Warner Music, have taken legal action against AI companies like Udio and Suno. These lawsuits claim that the companies have used copyrighted recordings to train their systems without proper authorisation, thus creating unfair competition by flooding the market with cheap imitations. The labels argue that such practices drain revenue from real artists and violate copyright laws.

The AI companies defend their technology, asserting that their systems do not infringe on copyrights and that their practices fall under ‘fair use.’ They liken the backlash to past industry fears over new technologies like synthesisers and drum machines. However, the record labels maintain that AI systems misuse copyrighted material to mimic famous artists, including Mariah Carey and Bruce Springsteen, without appropriate licences.

Why does this matter?

These legal battles echo other high-profile copyright cases involving generative AI, such as those against chatbots like OpenAI’s ChatGPT. The outcome of these cases could set significant precedents for using AI in creative industries, with courts needing to address whether AI’s use of copyrighted material constitutes fair use or infringement.

OpenAI delays release of anti-cheating tool

OpenAI has developed a method to detect when ChatGPT is used to write essays or research papers, but has yet to release it. The decision follows an internal debate lasting two years, weighing the company’s commitment to transparency against the risk of deterring users. One survey found nearly a third of loyal ChatGPT users would be turned off by the anti-cheating technology.

Concerns have been raised that the tool could disproportionately affect non-native English speakers. OpenAI’s spokeswoman emphasised the need for a deliberate approach due to the complexities involved. Employees supporting the tool argue that its benefits outweigh the risks, as AI-generated essays can be completed in seconds, posing a significant issue for educators.

The watermarking method would subtly alter token selection in AI-generated text, creating a detectable pattern invisible to human readers. That method is reported to be 99.9% effective, but there are concerns it could be bypassed through translation or text modifications. OpenAI is still determining how to provide access to the detector while preventing misuse.
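OpenAI has not published the details of its watermark, but the mechanism described above resembles ‘green list’ schemes from the research literature, in which a secret key deterministically favours a subset of tokens at each step, and a detector with the key counts how often those favoured tokens appear. The sketch below is purely illustrative: the toy vocabulary, key, and function names are assumptions, not OpenAI’s implementation.

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary (illustrative)
SECRET_KEY = "illustrative-key"           # hypothetical watermark key
GREEN_FRACTION = 0.5                      # share of vocabulary favoured per step

def green_list(prev_token: str) -> set:
    """Derive the 'green' half of the vocabulary from the previous token
    and the secret key; without the key the pattern is invisible."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Stand-in for a language model that always picks a green-listed token,
    subtly biasing token selection as the watermark requires."""
    rng = random.Random(seed)
    tokens = ["tok0"]
    for _ in range(length):
        greens = green_list(tokens[-1])
        tokens.append(rng.choice(sorted(greens)))
    return tokens

def green_ratio(tokens: list) -> float:
    """Detector: fraction of tokens drawn from their predecessor's green list.
    Watermarked text scores near 1.0; unmarked text near GREEN_FRACTION."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / (len(tokens) - 1)
```

A detector would flag text whose green ratio is statistically far above 0.5. The sketch also shows why translation or paraphrasing can defeat the watermark: replacing tokens destroys the key-dependent pattern the detector counts.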

Despite the effectiveness of watermarking, internal discussions at OpenAI have been ongoing since before ChatGPT’s launch in 2022. A 2023 survey showed global support for AI detection tools, but many ChatGPT users feared false accusations of AI use. OpenAI explores alternative approaches to address these concerns while maintaining AI transparency and credibility.