Senators call for inquiry into AI content summarisation

A group of Democratic senators, led by Amy Klobuchar, has called on the United States Federal Trade Commission (FTC) and the Department of Justice (DOJ) to investigate whether AI tools that summarise online content are anti-competitive. The concern is that AI-generated summaries keep users on platforms like Google and Meta, preventing traffic from reaching the original content creators, which can result in lost advertising revenue for those creators.

The senators argue that platforms profit from using third-party content to generate AI summaries, while publishers are left with fewer opportunities to monetise their work. Content creators are often forced to choose between having their work summarised by AI tools and opting out of search indexing entirely, risking significant drops in traffic.

There is also a concern that AI features can misappropriate third-party content, passing it off as new material. The senators believe that the dominance of major online platforms is creating an unfair market for advertising revenue, as these companies control how content is monetised and limit the potential for original creators to benefit.

The letter calls for regulators to examine whether these practices violate antitrust laws. The FTC and DOJ will need to determine if the behaviour constitutes exclusionary conduct or unfair competition. The push from legislators could also lead to new laws if current regulations are deemed insufficient.

Elon Musk pushes for AI safety law in California

Elon Musk has urged California to pass an AI bill requiring tech companies to conduct safety testing on their AI models. Musk, who leads Tesla and owns the social media platform X, has long advocated for AI regulation, likening it to rules for any technology that could pose risks to the public. He specifically called for the passage of California’s SB 1047 to address these concerns.

California lawmakers have been busy with AI legislation, introducing 65 AI-related bills this session. These bills cover a range of issues, including algorithmic fairness and the protection of intellectual property from AI exploitation. However, many of them have yet to advance.

On the same day, Microsoft-backed OpenAI supported a different AI bill, AB 3211, which requires companies to label AI-generated content, particularly in light of growing concerns about deepfakes and misinformation, especially in an election year.

The push for AI regulation comes as countries representing a large share of the world’s population hold elections this year, raising concerns about the potential impact of AI-generated content on political processes.

Google’s $250M deal to support California newsrooms

Google has entered a $250 million deal with the state of California to support local newsrooms, which have been struggling with widespread layoffs and declining revenues. The decision comes in the wake of proposed legislation that would have required tech companies to pay news providers when they run ads alongside news content. By securing this deal, Google has managed to sidestep such bills.

The Media Guild of the West, a local journalism union, has criticised the deal, calling it a ‘shakedown’ that fails to address the real issues plaguing the industry. They argue that the deal’s financial commitments are minimal compared to the wealth tech giants have allegedly ‘stolen’ from newsrooms.

The deal includes the creation of the News Transformation Fund, supported by Google and taxpayers, which will distribute funds to news organisations in California over five years. Additionally, the National AI Innovation Accelerator, funded by Google, will support various industries, including journalism, by exploring the use of AI in their work.

While some, including California Governor Gavin Newsom, have praised the initiative, others remain sceptical. Critics argue that the deal should be revised, pointing out that only Google contributes financially, with other tech giants like Meta and Amazon absent from the agreement.

The news industry’s challenges are significant, with California seeing a sharp decline in publishers and journalists over the past two decades. Big Tech’s dominance in the advertising market and its impact on publisher traffic have exacerbated these challenges, leading to calls for more robust solutions to sustain local journalism.

New appointment at Google’s AI division

Google has appointed Noam Shazeer, a former Google researcher and co-founder of Character.AI, as co-lead of its main AI project, Gemini. Shazeer will join Jeff Dean and Oriol Vinyals in overseeing the development of AI models at DeepMind, Google’s AI division, which are set to enhance products like Search and Pixel smartphones.

Shazeer rejoined Google after founding Character.AI in 2021. The tech giant secured his return by paying billions and striking a licensing agreement with his former company. Shazeer expressed excitement in a memo to staff, praising the team he has rejoined.

Originally joining Google in 2000, Shazeer co-authored the landmark 2017 Transformer paper, ‘Attention Is All You Need’, which ignited the current AI boom. Character.AI, which builds on these advancements, has attracted significant venture capital, reaching a $1 billion valuation last year.

Google’s decision to bring Shazeer back echoes similar strategies by other tech giants, although these moves have drawn regulatory scrutiny. In related news, a US judge recently ruled that Google’s search engine violated antitrust laws by creating an illegal monopoly.

Anthropic faces lawsuit for copyright infringement

Three authors have filed a class-action lawsuit against the AI company Anthropic in a California federal court, accusing the firm of illegally using their books and hundreds of thousands of others to train its AI chatbot, Claude. The lawsuit, initiated by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claims that Anthropic utilised pirated versions of their works to develop the chatbot’s ability to respond to human prompts.

Anthropic, which has received financial backing from major companies like Amazon and Google, acknowledged the lawsuit but declined to comment further due to the ongoing litigation. The legal action against Anthropic is part of a broader trend, with other content creators, including visual artists and news outlets, also suing tech companies over the use of their copyrighted material to train AI models.

This is not the first time Anthropic has faced such accusations. Music publishers previously sued the company for allegedly misusing copyrighted song lyrics to train Claude. The authors involved in the current case argue that Anthropic has built a multibillion-dollar business by exploiting their intellectual property without permission.

The lawsuit seeks financial compensation for the authors and a court order permanently barring Anthropic from using their work unlawfully. As the case progresses, it highlights the growing tension between content creators and AI companies over the use of copyrighted material in developing AI technologies.

Video game actors fight for job security amid AI’s impact on the industry

In the world of video game development, the rise of AI has sparked concern among performers who fear it could threaten their jobs. Motion capture actors like Noshir Dalal, who perform the physical movements that bring game characters to life, worry that AI could be used to replicate their performances without their consent, potentially reducing job opportunities and diminishing the value of their work.

Dalal, who has played characters in popular video games such as ‘Star Wars Jedi: Survivor,’ highlights the physical toll and skill required in motion capture work. He argues that AI could allow studios to bypass hiring actors for new projects by reusing data from past performances. The concern is central to the ongoing strike by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which represents video game performers and other media professionals. The union is demanding stronger protections against unregulated AI use in the industry.

Why does this matter?

AI’s ability to generate new animations and voices based on existing data is at the heart of the issue. While studios argue that they have offered meaningful AI protections, performers remain sceptical. They worry that the use of AI could lead to ethical dilemmas, such as their likenesses being used in ways they do not endorse, as seen in the controversy surrounding game modifications that use AI to create inappropriate content.

Video game companies have offered wage increases and other benefits as negotiations continue, but the debate over AI protections remains unresolved. Performers like Dalal argue that, without strict controls, AI could strip away the artistry and individuality that actors bring to their roles, leaving them vulnerable to exploitation. The outcome of this dispute could set a precedent for how AI is regulated in the entertainment industry, impacting the future of video game development and beyond.

Parents in South Korea question AI textbook program

Plans to introduce AI-powered textbooks in South Korean classrooms have sparked concerns among parents. The government aims to roll out tablets with these advanced textbooks next year, with the goal of using them across all subjects by 2028, excluding music, art, physical education, and ethics. The AI textbooks will be designed to adapt to different learning speeds, and teachers will monitor student progress through dashboards.

However, many parents are uneasy about the impact of this new technology on their children’s well-being. Over 50,000 people have signed a petition urging the government to prioritise overall student health rather than focusing solely on technological advancements. They argue that excessive exposure to digital devices is already causing unprecedented problems.

One concerned parent, Lee Sun-youn, highlighted worries about the potential negative effects on children’s brain development and concentration. She pointed out that students in South Korea are already heavily reliant on smartphones and tablets, and increased screen time in classrooms could exacerbate these problems.

The government has yet to provide detailed information on how the AI textbook program will be implemented. As the rollout approaches, the debate over the balance between technology and student welfare continues to intensify.

Hollywood union secures agreement allowing AI voice replication for advertisers

The Hollywood actors’ union, SAG-AFTRA, has reached an agreement with the online talent marketplace Narrativ, allowing actors to sell the rights to digitally replicate their voices using AI. The deal addresses growing concerns among performers about the theft of their likenesses through AI, providing them with a way to earn income while retaining control over how their voice replicas are used. Actors can set the price for their digital voice, provided it meets at least the union’s minimum pay standards, and advertisers must obtain consent for each use.

SAG-AFTRA has praised this agreement as a model for the ethical use of AI in advertising, emphasising the importance of safeguarding performers’ rights in the digital age. The issue of AI-driven voice replication has been a significant concern in Hollywood, highlighted by actress Scarlett Johansson’s accusations against OpenAI for the unauthorised use of her voice. That concern was also central to last year’s Hollywood strike and remains a key issue in ongoing labour disputes involving video game voice actors and motion-capture performers.

In response to the rise of AI-generated deepfakes and their potential misuse, the NO FAKES Act has been introduced in Congress, aiming to make unauthorised AI copying of a person’s voice and likeness illegal. The bill has gained support from major industry players, including SAG-AFTRA, Disney, and The Recording Academy, reflecting widespread concern over the implications of AI in entertainment and beyond.

Dutch copyright group shuts down AI training dataset

Dutch copyright enforcement group BREIN has taken down a large language dataset that was being used to train AI models without proper permissions. The dataset contained information gathered from tens of thousands of books, news sites, and Dutch-language subtitles from numerous films and TV series. BREIN’s Director, Bastiaan van Ramshorst, noted the difficulty of determining whether, and how extensively, AI companies had already used the dataset.

The removal comes as the EU prepares to enforce its AI Act, requiring companies to disclose the datasets used in training AI models. The person responsible for offering the Dutch dataset complied with a cease and desist order and removed it from the website where it was available.

Why does this matter?

The takedown follows similar moves in other countries, such as Denmark, where a copyright protection group removed a large dataset called ‘Books3’ last year. BREIN did not disclose the identity of the individual behind the dataset, citing Dutch privacy regulations.

UMG and Meta sign expanded deal on music monetisation

On Monday, the world’s largest music label, Universal Music Group (UMG), announced an agreement with Meta Platforms to create new opportunities for artists and songwriters on Meta’s social platforms. The multi-year global agreement includes Meta’s major platforms – Facebook, Instagram, Messenger, and WhatsApp.

The joint statement reads: ‘The new agreement reflects the two companies’ shared commitment to protecting human creators and artistry, including ensuring that artists and songwriters are compensated fairly. As part of their multifaceted partnership, Meta and UMG will continue working together to address, among other things, unauthorised AI-generated content that could affect artists and songwriters.’

In 2017, UMG and Meta Platforms signed an agreement licensing UMG’s music catalogue for use on Facebook’s platforms, creating a new revenue stream for artists from user-generated videos. Previously, there was no way to monetise such content, and artists had to rely on complicated legal proceedings to remove unlicensed material. The latest agreement further expands monetisation opportunities for Universal Music’s artists and songwriters, including licensed music for short-form videos.