Google is testing an AI-driven feature for YouTube Shorts, allowing creators to alter licensed audio tracks to fit different styles or genres. Part of YouTube’s Dream Track experiment, the feature lets select creators customise songs by simply describing their desired transformation, such as changing the music genre. YouTube’s AI then generates a 30-second soundtrack for the creator’s short video, maintaining the original vocals and lyrics.
The experimental tool has clear attribution rules, ensuring viewers can see that the song has been restyled with AI. Videos featuring these AI-enhanced tracks will display the original song information and note that AI was used to alter the sound. This setup helps protect the rights of original music creators while giving video makers new artistic possibilities.
The Dream Track experiment has been in testing since last year, initially giving creators access to AI-generated artist voices with approved songs. This latest feature now expands to allow broader soundtrack customisation within Shorts, aiming to boost creator flexibility and viewer engagement.
Separately, YouTube is testing a swipe-up feature for its Android app, making it easier to navigate between videos. Available to a limited number of users, the gesture offers navigation similar to Instagram Reels, potentially signalling an interface shift across YouTube’s mobile platform.
French news publishers, including Le Monde, Le Figaro, and Le Parisien, have taken legal action against the social media platform X, formerly Twitter, over alleged unpaid content rights. They argue that X has distributed their material without compensation, as required by French ancillary rights laws, which mandate payments to news outlets when digital platforms use their content.
The dispute centres on X’s refusal to open negotiations with French media, contrasting with platforms like Google and Meta, which have reached agreements with publishers. French newspapers contend that X has ignored a May order from the Paris Court of Justice requiring the company to disclose the financial information needed to determine the amount owed.
In a statement, the publishers emphasised that revenue from these payments supports media independence, plurality, and quality, contributing to freedom of expression and the right to information in society. They argue that securing these funds is vital for sustaining a democratic press.
A representative of the Paris court has confirmed that a hearing will take place on May 15, 2025, where both parties will present their cases. X, owned by billionaire Elon Musk, has yet to comment on the legal challenge.
Disney is establishing a new division, the Office of Technology Enablement, dedicated to advancing the company’s use of AI and mixed reality (XR). Led by Jamie Voris, Disney’s former chief technology officer for its film studio, the unit will oversee projects across Disney’s film, television, and theme park segments to leverage these rapidly evolving technologies. This group will focus on coordinating various initiatives without centralising them, ensuring each project aligns with Disney’s broader technological strategy.
The new office, which will ultimately expand to about 100 employees, comes as Disney looks to tap into cutting-edge AI and augmented reality (AR) applications. Disney Entertainment Co-Chairman Alan Bergman emphasised the importance of exploring AI’s potential while mitigating risks, signalling Disney’s intention to create next-generation experiences for theme parks and home entertainment. Eddie Drake will succeed Voris as the film studio’s CTO.
Disney has been actively building expertise in AR and virtual reality (VR) as technology companies like Meta and Apple compete in the emerging AR/VR market. The company also rehired Kyle Laughlin, a specialist in these technologies, as Senior VP of Research and Development for Disney Imagineering, its theme park innovation branch. By assembling a team with expertise in advanced tech, Disney aims to create immersive, engaging experiences for its global audience.
Google has won a trademark lawsuit brought by Shorts International, a British company specialising in short films, over the use of the word ‘shorts’ in YouTube’s short video platform, YouTube Shorts. London’s High Court found no risk of consumer confusion between Shorts International’s brand and YouTube’s platform, which launched in 2020 as a response to TikTok’s popularity.
Shorts International, known for its short film television channel, argued that YouTube Shorts infringed on its established trademark. However, Google’s lawyer, Lindsay Lane, countered that it was clear the ‘Shorts’ platform belonged to YouTube, removing any chance of brand confusion.
Judge Michael Tappin ruled in favour of Google, stating that the use of ‘shorts’ by YouTube would not affect the distinctiveness or reputation of Shorts International’s trademark. The court’s decision brings the legal challenge to a close, dismissing all claims of infringement.
ForceField is unveiling its new technology at TechCrunch Disrupt 2024, introducing tools aimed at fighting deepfakes and manipulated content. Unlike platforms that flag AI-generated media, ForceField authenticates content directly from devices, ensuring the integrity of digital evidence. Using its HashMarq API, the startup verifies the authenticity of data streams by generating a secure digital signature in real time.
The company uses blockchain technology for smart contracts, safeguarding content without relying on cryptocurrencies or web3 solutions. This system authenticates data collected across various platforms, from mobile apps to surveillance cameras. By tracking metadata like time, location, and surrounding signals, ForceField provides insights that aid journalists, law enforcement, and organisations in verifying the accuracy of submitted media.
ForceField was inspired by CEO MC Spano’s personal experience in 2018, when she struggled to submit video evidence following an assault. Her frustration with the justice system sparked the creation of technology that could simplify evidence submission and ensure its acceptance. Now the startup is working with clients such as Erie Insurance and plans to launch commercially by early 2025, focusing initially on the insurance sector but with applications in media and law enforcement.
The company, which is entirely woman-led, has gained financial backing from several angel investors and strategic partnerships. Spano aims to raise a seed round by year’s end, highlighting the importance of diversity in tech leadership. As AI-generated content continues to flood the internet, ForceField’s tools offer a new way to validate authenticity and restore trust in digital information.
A new podcast titled Virtually Parkinson brings back the voice of Sir Michael Parkinson, using AI technology to simulate the late chat show host. Produced by Deep Fusion Films with support from Parkinson’s family, the series aims to recreate his interview style across eight episodes, featuring new conversations with prominent guests.
Mike Parkinson, son of the late broadcaster, explained that the family wanted listeners to know the voice is an AI creation, ensuring transparency. He noted the project was inspired by conversations he had with his father before he passed, saying Sir Michael would have found the concept intriguing, despite being a technophobe.
The release comes amid growing controversy around AI’s role in the creative arts, with many actors and presenters fearing it could undermine their careers. Though AI is often criticised for replacing real talent, Parkinson’s son argued that the podcast offers a unique way to extend his father’s legacy, without replacing a living presenter.
Co-creator Jamie Anderson clarified that the AI version acts as an autonomous host, conducting interviews in a way reflective of Sir Michael’s original style. The podcast seeks to introduce his legacy to younger audiences, while also raising ethical questions about the use of AI to recreate deceased individuals.
Universal Music Group (UMG) has announced a partnership with Los Angeles-based AI music company KLAY Vision to create AI tools designed with an ethical framework for the music industry. According to Universal, the initiative focuses on exploring new opportunities for artists and creating safeguards to protect the music ecosystem as AI continues to evolve in creative spaces. Michael Nash, Universal’s chief digital officer, emphasised the importance of ethical AI use for artists’ rights in a rapidly changing industry.
The collaboration comes as Universal Music faces ongoing legal battles with other AI companies, including Anthropic AI, Suno, and Udio, over the use of its recordings in training music-generating AI models without authorisation. These cases highlight the growing concerns surrounding AI technology’s impact on the creative sector, particularly with respect to artists’ rights and intellectual property.
With this partnership, Universal Music aims to establish AI technologies that support artists’ needs while navigating the complex ethical questions surrounding AI-generated music. By working alongside KLAY Vision, Universal hopes to shape the future of AI in music responsibly and to develop solutions that ensure fair treatment of artists and their work.
Perplexity has vowed to contest the copyright infringement claims filed by Dow Jones and the New York Post. The California-based AI company denied the accusations in a blog post, calling them misleading. News Corp, owner of both media entities, launched the lawsuit on Monday, accusing Perplexity of extensive illegal copying of its content.
The conflict began after the two publishers allegedly contacted Perplexity in July with concerns over unauthorised use of their work, proposing a licensing agreement. According to Perplexity, the startup replied the same day, but the media companies decided to move forward with legal action instead of continuing discussions.
CEO Aravind Srinivas expressed his surprise over the lawsuit at the WSJ Tech Live event on Wednesday, noting the company had hoped for dialogue instead. He emphasised Perplexity’s commitment to defending itself against what it considers an unwarranted attack.
Perplexity is challenging Google’s dominance in the search engine market by providing summarised information from trusted sources directly through its platform. The case reflects ongoing tensions between publishers and tech firms over the use of copyrighted content for AI development.
Google has released SynthID Text, a watermarking tool designed to help developers identify AI-generated content. Available for free on platforms like Hugging Face and Google’s Responsible GenAI Toolkit, this open-source technology aims to improve transparency around AI-written text. It works by embedding subtle patterns into the token distribution of text generated by AI models without affecting the quality or speed of the output.
SynthID Text has been integrated with Google’s Gemini models since earlier this year. While it can detect text that has been paraphrased or modified, the tool does have limitations, particularly with shorter text, factual responses, and content translated from other languages. Google acknowledges that its watermarking technique may struggle with these formats but emphasises the tool’s overall benefits.
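SynthID’s actual approach (tournament sampling during generation) is more sophisticated than can be shown here, but the general idea behind statistical text watermarking can be illustrated with a toy “green-list” sketch. This is purely illustrative code, not Google’s algorithm: a secret key deterministically marks a subset of the vocabulary as “green” for each preceding token, the generator prefers green tokens, and a detector holding the same key measures how often that preference shows up.

```python
import hashlib
import random


def green_list(prev_token, vocab, key, fraction=0.5):
    """Derive a deterministic 'green' subset of the vocabulary from the
    previous token and a secret key. Anyone holding the key can recompute it."""
    seed = int(hashlib.sha256((key + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def generate_watermarked(start, vocab, key, length=200):
    """Toy 'generator' that always samples from the green list, embedding
    the watermark into its sequence of token choices."""
    tokens = [start]
    rng = random.Random(0)
    for _ in range(length):
        tokens.append(rng.choice(sorted(green_list(tokens[-1], vocab, key))))
    return tokens


def watermark_score(tokens, vocab, key):
    """Fraction of tokens drawn from their predecessor's green list.
    Unwatermarked text scores near 0.5; watermarked text scores near 1.0."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev, vocab, key))
    return hits / len(pairs)
```

Detection is statistical rather than exact: a long watermarked passage scores far above the roughly 50% hit rate expected by chance, which is also why the technique weakens on short or heavily constrained text, as Google acknowledges.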
As the demand for AI-generated content grows, so does the need for reliable detection methods. Countries like China already mandate watermarking of AI-produced material, and similar regulations are under consideration in the US, notably in California. The urgency is clear, with predictions that AI-generated content could account for 90% of online text by 2026, creating new challenges in combating misinformation and fraud.
Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.
Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.
Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.
Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.