UK court sides with Google in YouTube Shorts trademark case

Google has won a trademark lawsuit brought by Shorts International, a British company specialising in short films, over the use of the word ‘shorts’ in YouTube’s short video platform, YouTube Shorts. London’s High Court found no risk of consumer confusion between Shorts International’s brand and YouTube’s platform, which launched in 2020 as a response to TikTok’s popularity.

Shorts International, known for its short film television channel, argued that YouTube Shorts infringed on its established trademark. However, Google’s lawyer, Lindsay Lane, countered that it was clear the ‘Shorts’ platform belonged to YouTube, removing any chance of brand confusion.

Judge Michael Tappin ruled in favour of Google, stating that the use of ‘shorts’ by YouTube would not affect the distinctiveness or reputation of Shorts International’s trademark. The court’s decision brings the legal challenge to a close, dismissing all claims of infringement.

ForceField offers new solution to combat deepfakes and AI deception

ForceField is unveiling its new technology at the 2024 TechCrunch Disrupt, introducing tools aimed at fighting deepfakes and manipulated content. Unlike platforms that flag AI-generated media, ForceField authenticates content directly from devices, ensuring the integrity of digital evidence. Using its HashMarq API, the startup verifies the authenticity of data streams by generating a secure digital signature in real time.
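ForceField has not published the internals of HashMarq, but the general idea of signing a data stream as it is captured can be illustrated with a minimal sketch. The chunked HMAC-SHA256 scheme and the `sign_stream`/`verify_stream` helpers below are assumptions for illustration, not the actual API:

```python
import hashlib
import hmac

def sign_stream(chunks, key: bytes) -> str:
    """Hash a data stream chunk by chunk, then sign the digest with a device key.

    Incremental hashing means the whole stream never needs to be buffered,
    which is what makes real-time signing of video or sensor feeds feasible.
    """
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    # Keyed signature ties the digest to the capturing device (hypothetical key)
    return hmac.new(key, h.digest(), hashlib.sha256).hexdigest()

def verify_stream(chunks, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_stream(chunks, key), signature)
```

Any later alteration of even one byte of the stream changes the digest, so the signature no longer verifies, which is the property that makes such content usable as evidence.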

The company uses blockchain technology for smart contracts, safeguarding content without relying on cryptocurrencies or web3 solutions. This system authenticates data collected across various platforms, from mobile apps to surveillance cameras. By tracking metadata like time, location, and surrounding signals, ForceField provides insights that aid journalists, law enforcement, and organisations in verifying the accuracy of submitted media.

ForceField was inspired by CEO MC Spano’s personal experience in 2018, when she struggled to submit video evidence following an assault. Her frustration with the justice system sparked the creation of technology that could simplify evidence submission and ensure its acceptance. Now the startup is working with clients such as Erie Insurance and plans to launch commercially by early 2025, focusing initially on the insurance sector but with applications in media and law enforcement.

The company, which is entirely woman-led, has gained financial backing from several angel investors and strategic partnerships. Spano aims to raise a seed round by year’s end, highlighting the importance of diversity in tech leadership. As AI-generated content continues to flood the internet, ForceField’s tools offer a new way to validate authenticity and restore trust in digital information.

AI podcast revives Sir Michael Parkinson

A new podcast titled Virtually Parkinson brings back the voice of Sir Michael Parkinson, using AI technology to simulate the late chat show host. Produced by Deep Fusion Films with support from Parkinson’s family, the series aims to recreate his interview style across eight episodes, featuring new conversations with prominent guests.

Mike Parkinson, son of the late broadcaster, explained that the family wanted listeners to know the voice is an AI creation, ensuring transparency. He noted the project was inspired by conversations he had with his father before his death, saying Sir Michael would have found the concept intriguing, despite being a technophobe.

The release comes amid growing controversy around AI’s role in the creative arts, with many actors and presenters fearing it could undermine their careers. Though AI is often criticised for replacing real talent, Parkinson’s son argued that the podcast offers a unique way to extend his father’s legacy, without replacing a living presenter.

Co-creator Jamie Anderson clarified that the AI version acts as an autonomous host, conducting interviews in a way reflective of Sir Michael’s original style. The podcast seeks to introduce his legacy to younger audiences, while also raising ethical questions about the use of AI to recreate deceased individuals.

Universal Music aims for ethical AI in new KLAY partnership

Universal Music Group (UMG) has announced a partnership with Los Angeles-based AI music company KLAY Vision to create AI tools designed with an ethical framework for the music industry. According to Universal, the initiative focuses on exploring new opportunities for artists and creating safeguards to protect the music ecosystem as AI continues to evolve in creative spaces. Michael Nash, Universal’s chief digital officer, emphasised the importance of ethical AI use for artists’ rights in a rapidly changing industry.

The collaboration comes as Universal Music faces ongoing legal battles with other AI companies, including Anthropic, Suno, and Udio, over the unauthorised use of its recordings to train music-generating AI models. These cases highlight the growing concerns surrounding AI technology’s impact on the creative sector, particularly with respect to artists’ rights and intellectual property.

With this partnership, Universal Music aims to establish AI technologies that support artists’ needs while navigating the complex ethical questions surrounding AI-generated music. By working alongside US-based KLAY Vision, Universal hopes to shape the future of AI in music responsibly and to develop solutions that ensure fair treatment of artists and their work.

Perplexity disputes copyright allegations

Perplexity has vowed to contest the copyright infringement claims filed by Dow Jones and the New York Post. The California-based AI company denied the accusations in a blog post, calling them misleading. News Corp, owner of both media entities, launched the lawsuit on Monday, accusing Perplexity of extensive illegal copying of its content.

The conflict began after the two publishers allegedly contacted Perplexity in July with concerns over unauthorised use of their work, proposing a licensing agreement. According to Perplexity, the startup replied the same day, but the media companies decided to move forward with legal action instead of continuing discussions.

CEO Aravind Srinivas expressed his surprise over the lawsuit at the WSJ Tech Live event on Wednesday, noting the company had hoped for dialogue instead. He emphasised Perplexity’s commitment to defending itself against what it considers an unwarranted attack.

Perplexity is challenging Google’s dominance in the search engine market by providing summarised information from trusted sources directly through its platform. The case reflects ongoing tensions between publishers and tech firms over the use of copyrighted content for AI development.

Google unveils open-source watermark for AI text

Google has released SynthID Text, a watermarking tool designed to help developers identify AI-generated content. Available for free on platforms like Hugging Face and Google’s Responsible GenAI Toolkit, this open-source technology aims to improve transparency around AI-written text. It works by embedding subtle patterns into the token distribution of text generated by AI models without affecting the quality or speed of the output.
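Google has not disclosed SynthID Text’s exact algorithm, but keyed watermarks that bias the token distribution can be sketched in miniature. The ‘green-list’ scheme and all function names below are illustrative assumptions, closer to published academic watermarking work than to SynthID itself:

```python
import hashlib
import random

def greenlist(context: str, vocab_size: int, fraction: float = 0.5) -> set:
    """Derive a keyed 'green' subset of the vocabulary from the preceding context."""
    seed = int.from_bytes(hashlib.sha256(context.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(vocab_size * fraction)))

def watermark_logits(logits, context, bias=2.0):
    """Nudge sampling toward green tokens; the bias is small enough that
    high-confidence tokens (e.g. factual answers) are rarely overridden."""
    green = greenlist(context, len(logits))
    return [x + bias if i in green else x for i, x in enumerate(logits)]

def detect(token_ids, contexts, vocab_size) -> float:
    """Fraction of tokens drawn from their green lists.

    Watermarked text scores well above the ~0.5 expected by chance;
    short texts give too few tokens for a statistically confident call,
    mirroring the limitation Google acknowledges.
    """
    hits = sum(1 for t, c in zip(token_ids, contexts)
               if t in greenlist(c, vocab_size))
    return hits / max(len(token_ids), 1)
```

Because the bias shifts probabilities rather than forcing specific words, the output reads naturally, while a detector holding the same key can still measure the statistical skew.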

SynthID Text has been integrated with Google’s Gemini models since earlier this year. While it can detect text that has been paraphrased or modified, the tool does have limitations, particularly with shorter text, factual responses, and content translated from other languages. Google acknowledges that its watermarking technique may struggle with these formats but emphasises the tool’s overall benefits.

As the demand for AI-generated content grows, so does the need for reliable detection methods. Countries like China already mandate watermarking of AI-produced material, and similar regulations are being considered in the US state of California. The urgency is clear, with predictions that AI-generated content could account for 90% of online text by 2026, creating new challenges in combating misinformation and fraud.

Meta prevails in shareholder child safety lawsuit

Meta Platforms and its CEO, Mark Zuckerberg, successfully defended against a lawsuit claiming the company misled shareholders about child safety on Facebook and Instagram. A US federal judge dismissed the case on Tuesday.

Judge Charles Breyer ruled that the plaintiff, Matt Eisner, failed to demonstrate that shareholders experienced financial harm due to Meta’s disclosures. He stated that federal law does not require companies to reveal all decisions regarding child safety measures or focus on their shortcomings.

Eisner had sought to delay Meta’s 2024 annual meeting and void its election results unless the company revised its proxy statement. However, the judge emphasised that many of Meta’s commitments in its proxy materials were aspirational and not legally binding. His dismissal, issued with prejudice, prevents Eisner from filing the same case again.

Meta still faces legal challenges from state attorneys general and hundreds of lawsuits from children, parents, and schools, accusing the company of fostering social media addiction. Other platforms, such as TikTok and Snapchat, also confront similar legal actions.

Thousands of artists protest AI’s unlicensed use of their work

Thousands of creatives, including Kevin Bacon, Thom Yorke, and Julianne Moore, have signed a petition opposing the unlicensed use of their work to train AI. The 11,500 signatories believe that such practices threaten their livelihoods and call for better protection of creative content.

The petition argues that using creative works without permission for AI development is an ‘unjust threat’ to the people behind those works. Signatories from various industries, including musicians, writers, and actors, are voicing concerns over how their work is being used by AI companies.

British composer Ed Newton-Rex, who organised the petition, has spoken out against AI companies, accusing them of ‘dehumanising’ art by treating it as mere ‘training data’. He highlighted the growing concerns among creatives about how AI may undermine their rights and income.

The United Kingdom government is currently exploring new regulations to address the issue, including a potential ‘opt out’ model for AI data scraping, as lawmakers look for ways to protect creative content in the digital age.

AI company Perplexity faces lawsuit from Dow Jones and New York Post

Dow Jones and the New York Post have taken legal action against AI startup Perplexity AI, accusing the company of unlawfully copying their copyrighted content. The lawsuit is part of a wider dispute between publishers and tech companies over the use of news articles and other content without permission to train and operate AI systems.

Perplexity AI, which aims to disrupt the search engine market, assembles information from websites it deems authoritative and presents AI-generated summaries. Publishers claim that Perplexity bypasses their websites, depriving them of advertising and subscription revenue, and undermines the work of journalists.

The lawsuit, filed in the Southern District of New York, argues that Perplexity’s AI generates answers based on a vast database of news articles, often copying content verbatim. News Corp, owner of Dow Jones and the New York Post, is asking the court to block Perplexity’s use of its articles and to destroy any databases containing copyrighted material.

Perplexity has also faced allegations from other media organisations, including Forbes and Wired. While the company has introduced a revenue-sharing programme with some publishers, many news outlets continue to resist, seeking stronger legal protections for their content.

Blade Runner producer takes legal action over AI image use

Alcon Entertainment, the producer behind Blade Runner 2049, has filed a lawsuit against Tesla and Warner Bros, accusing them of misusing AI-generated images that resemble scenes from the movie to promote Tesla’s new autonomous Cybercab. Filed in California, the lawsuit alleges violations of US copyright law and claims Tesla falsely implied a partnership with Alcon through the use of the imagery.

Alcon stated that it had rejected Warner Bros’ request to use official Blade Runner images for Tesla’s Cybercab event on October 10. Despite this, Tesla allegedly proceeded with AI-created visuals that mirrored the film’s style. Alcon is concerned this could confuse its brand partners, especially ahead of its upcoming Blade Runner 2099 series for Amazon Prime.

Though no specific damages were mentioned, Alcon emphasised that it has invested hundreds of millions in the Blade Runner brand and argued that Tesla’s actions had caused substantial financial harm.