Celebrity voices of John Cena and Judi Dench coming to Meta’s AI Chatbot

Meta Platforms is preparing to introduce a new audio feature for its AI chatbot that will let users choose among the voices of five celebrities, including Judi Dench and John Cena. As part of its efforts to enhance user engagement, Meta will offer the voice options across its platforms, including Facebook, Instagram, and WhatsApp.

The announcement is expected at Meta’s annual Connect conference, where the company is also set to unveil augmented-reality glasses and provide updates on its Ray-Ban Meta smart glasses. These developments reflect Meta’s push to integrate AI more deeply into everyday interactions through its various products.

Celebrity voices are set to roll out this week in the US and other English-speaking markets. Meta hopes the new feature will appeal to users seeking a more personalised experience with its AI chatbot as it positions itself in competition with AI giants like Google and OpenAI.

As part of its broader AI strategy, Meta has shifted focus towards integrating celebrity voices after earlier text-based characters saw limited success. The company is committed to making its chatbot a core feature across its platforms, striving to stay ahead in the competitive AI landscape.

Amazon and Nokia face off in court over patent infringement

A German court has ruled that Amazon is using Nokia’s patented video technologies without a proper licence, according to a statement from Nokia. The Munich Regional Court found that Amazon’s streaming devices illegally use video-related technologies to which the Finnish company holds the rights.

Nokia’s Chief Licensing Officer, Arvin Patel, expressed satisfaction with the ruling, stating that Amazon has been selling these devices without the necessary licences in place. The ruling highlights ongoing disputes between tech giants over intellectual property.

In response to Nokia’s legal actions, Amazon filed a lawsuit of its own in July in a Delaware federal court, accusing the Finnish company of infringing a dozen Amazon patents related to cloud-computing technology.

This legal battle is part of a broader pattern of disputes between major tech companies, as patent rights continue to play a critical role in the development of new technologies and services.

Palworld faces lawsuit from Nintendo and Pokémon Co

Nintendo and The Pokémon Company have sued Pocketpair Inc., the maker of ‘Palworld’, for patent infringement. The lawsuit was filed in the Tokyo District Court and aims to halt the game’s distribution, claiming multiple patent violations. Nintendo and Pokémon Co are seeking damages from the Tokyo-based game studio.

‘Palworld’ gained attention as a survival adventure game in which players capture and train creatures using guns, a concept many fans dubbed ‘Pokémon with guns’. Pocketpair expressed surprise at the lawsuit, stating that it had not yet been informed of the specific patents in question.

The company confirmed it would begin the appropriate legal proceedings and investigate the claims, and expressed frustration at being forced to divert time from game development to the legal battle.

Earlier this year, The Pokémon Company had already warned it would pursue any intellectual property violations. Meanwhile, Pocketpair had partnered with Sony in July to promote the global licensing of ‘Palworld’.

EU’s AI Act faces tech giants’ resistance

As the EU finalises its groundbreaking AI Act, major technology firms are lobbying for lenient regulations to minimise the risk of multi-billion dollar fines. The AI Act, agreed upon in May, is the world’s first comprehensive legislation governing AI. However, the details on how general-purpose AI systems like ChatGPT will be regulated remain unclear. The EU has opened the process to companies, academics, and other stakeholders to help draft the accompanying codes of practice, receiving a surge of interest with nearly 1,000 applications.

A key issue at stake is how AI companies, including OpenAI and Stability AI, use copyrighted content to train their models. While the AI Act requires companies to disclose summaries of the data they use, businesses are divided over how much detail to include: some advocate protecting trade secrets, while content creators demand fuller transparency about whether their works were used. Major players like Google and Amazon have expressed their commitment to the process, but there are growing concerns about transparency, with some accusing tech giants of trying to avoid scrutiny.

The debate over transparency and copyright has sparked a broader discussion on the balance between regulation and innovation. Critics argue that the EU’s focus on regulation could stifle technological advancements, while others stress the importance of oversight in preventing abuse. Former European Central Bank chief Mario Draghi recently urged the EU to improve its industrial policy to compete with China and the US, emphasising the need for swift decision-making and significant investment in the tech sector.

The finalised code of practice, expected next year, will not be legally binding but will serve as a guideline for compliance. Companies will have until August 2025 to meet the new standards, with non-profits and startups also playing a role in drafting. Some fear that big tech firms could weaken essential transparency measures, underscoring the ongoing tension between innovation and regulation in the digital era.

Runway partners with Lionsgate to revolutionise film-making

Runway, a generative AI startup, has announced a significant partnership with Lionsgate, the studio behind popular franchises such as John Wick and Twilight. The collaboration will give Lionsgate’s creative teams, including filmmakers and directors, access to Runway’s AI video-generating models, which will be trained on the studio’s film catalogue and used to enhance their creative work. Michael Burns, vice chair of Lionsgate, emphasised the partnership’s potential to support creative talent.

Runway is considering new opportunities, including licensing its AI models to individual creators, allowing them to create and train custom models. This partnership represents the first public collaboration between a generative AI startup and a major Hollywood studio. Although Disney and Paramount have reportedly been discussing similar partnerships with AI providers, no official agreements have been reached yet.

The deal comes amid increased attention on AI in the entertainment industry, following California’s new laws regulating the use of AI digital replicas in film and television. Runway is also facing legal challenges over the alleged use of copyrighted works to train its models without permission.

Senators call for inquiry into AI content summarisation

A group of Democratic senators, led by Amy Klobuchar, has called on the United States Federal Trade Commission (FTC) and the Department of Justice (DOJ) to investigate whether AI tools that summarise online content are anti-competitive. The concern is that AI-generated summaries keep users on platforms like Google and Meta, preventing traffic from reaching the original content creators, which can result in lost advertising revenue for those creators.

The senators argue that platforms profit from using third-party content to generate AI summaries, while publishers are left with fewer opportunities to monetise their work. Content creators are often forced to choose between having their work summarised by AI tools and opting out of search indexing entirely, risking significant drops in traffic.

There is also a concern that AI features can misappropriate third-party content, passing it off as new material. The senators believe that the dominance of major online platforms is creating an unfair market for advertising revenue, as these companies control how content is monetised and limit the potential for original creators to benefit.

The letter calls for regulators to examine whether these practices violate antitrust laws. The FTC and DOJ will need to determine if the behaviour constitutes exclusionary conduct or unfair competition. The push from legislators could also lead to new laws if current regulations are deemed insufficient.

Elon Musk pushes for AI safety law in California

Elon Musk has urged California to pass a bill requiring tech companies to conduct safety testing on their AI models. Musk, who leads Tesla and owns the social media platform X, has long advocated for AI regulation, likening it to rules for any technology that could pose risks to the public. He specifically called for the passage of California’s SB 1047 to address these concerns.

California lawmakers have been busy with AI legislation, attempting to pass 65 AI-related bills this legislative session. These bills cover a range of issues, including ensuring algorithmic fairness and protecting intellectual property from AI exploitation. However, many of them have yet to advance.

On the same day, Microsoft-backed OpenAI endorsed a different AI bill, AB 3211, which would require companies to label AI-generated content, a measure aimed at growing concerns about deepfakes and misinformation in an election year.

The push for AI regulation comes as countries representing a large share of the global population hold elections, raising concerns about the potential impact of AI-generated content on political processes.

Google’s $250M deal to support California newsrooms

Google has entered into a $250 million deal with the state of California to support local newsrooms, which have been struggling with widespread layoffs and declining revenues. The agreement comes in the wake of proposed legislation that would have required tech companies to pay news providers when they run ads alongside news content; by securing the deal, Google has managed to sidestep such bills.

The Media Guild of the West, a local journalism union, has criticised the deal, calling it a ‘shakedown’ that fails to address the real issues plaguing the industry. They argue that the deal’s financial commitments are minimal compared to the wealth tech giants have allegedly ‘stolen’ from newsrooms.

The deal includes the creation of the News Transformation Fund, supported by Google and taxpayers, which will distribute funds to news organisations in California over five years. Additionally, the National AI Innovation Accelerator, funded by Google, will support various industries, including journalism, by exploring the use of AI in their work.

While some, including California Governor Gavin Newsom, have praised the initiative, others remain sceptical. Critics argue that the deal needs to be revised, pointing out that Google is the only tech company contributing financially, with other giants like Meta and Amazon absent from the agreement.

The news industry’s challenges are significant, with California seeing a sharp decline in publishers and journalists over the past two decades. Big Tech’s dominance in the advertising market and its impact on publisher traffic have exacerbated these challenges, leading to calls for more robust solutions to sustain local journalism.

New appointment at Google’s AI division

Google has appointed Noam Shazeer, a former Google researcher and co-founder of Character.AI, as co-lead of its main AI project, Gemini. Shazeer will join Jeff Dean and Oriol Vinyals in overseeing the development of AI models at DeepMind, Google’s AI division, which are set to enhance products like Search and Pixel smartphones.

Shazeer rejoined Google after founding Character.AI in 2021. The tech giant secured his return through a multibillion-dollar licensing agreement with his former company. Shazeer expressed excitement in a memo to staff, praising the team he has rejoined.

Shazeer first joined Google in 2000 and was instrumental in the 2017 transformer research, ‘Attention Is All You Need’, that ignited the current AI boom. Character.AI, which leverages these advancements, has attracted significant venture capital, reaching a $1 billion valuation last year.

Google’s decision to bring Shazeer back echoes similar strategies by other tech giants, although such moves have drawn regulatory scrutiny. In related news, a US judge recently ruled that Google violated antitrust law by illegally maintaining a monopoly in online search.

Anthropic faces lawsuit for copyright infringement

Three authors have filed a class-action lawsuit against the AI company Anthropic in a California federal court, accusing the firm of illegally using their books and hundreds of thousands of others to train its AI chatbot, Claude. The lawsuit, initiated by writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, claims that Anthropic utilised pirated versions of their works to develop the chatbot’s ability to respond to human prompts.

Anthropic, which has received financial backing from major companies like Amazon and Google, acknowledged the lawsuit but declined to comment further due to the ongoing litigation. The legal action against Anthropic is part of a broader trend, with other content creators, including visual artists and news outlets, also suing tech companies over using their copyrighted material in training AI models.

This is not the first time Anthropic has faced such accusations. Music publishers previously sued the company for allegedly misusing copyrighted song lyrics to train Claude. The authors in the current case argue that Anthropic has built a multibillion-dollar business by exploiting their intellectual property without permission.

The lawsuit demands financial compensation for the authors and a court order permanently preventing Anthropic from using their work unlawfully. As the case progresses, it highlights the growing tension between content creators and AI companies over the use of copyrighted material in developing AI technologies.