EU files WTO complaint against China’s patent practices

The European Commission has filed a complaint with the World Trade Organization (WTO) against China, accusing the country of ‘unfair and illegal’ practices regarding worldwide royalty rates for European standard essential patents (SEPs). According to the Commission, China has empowered its courts to set global royalty rates for EU companies, particularly in the telecoms sector, without the consent of the patent holders.

The case focuses on SEPs, which are crucial for technologies like 5G, used in mobile phones. European companies such as Nokia and Ericsson hold many of these patents. The Commission claims that China’s actions force European companies to reduce their royalty rates globally, giving Chinese manufacturers unfairly cheap access to European technologies.

The European Union has requested consultations with China, marking the first step in WTO dispute resolution. If a resolution is not reached within 60 days, the EU can request the formation of an adjudicating panel, which typically takes about a year to issue a final report. This case is linked to a previous EU dispute at the WTO concerning China’s anti-suit injunctions, which restrict telecom patent holders’ ability to enforce intellectual property rights in courts outside China.

Zuckerberg defends AI training as copyright dispute deepens

Mark Zuckerberg has defended Meta’s use of a dataset containing copyrighted e-books to train its AI models, Llama. The statement emerged from a deposition linked to the ongoing Kadrey v. Meta Platforms lawsuit, which is one of many cases challenging the use of copyrighted content in AI training. Meta reportedly relied on the controversial dataset LibGen, despite internal concerns over potential legal risks.

LibGen, a platform known for providing unauthorised access to copyrighted works, has faced numerous lawsuits and shutdown orders. Newly unsealed court documents suggest that Zuckerberg approved using the dataset to develop Meta’s Llama models. Employees allegedly flagged the dataset as problematic, warning it might undermine the company’s standing with regulators. During questioning, Zuckerberg compared the situation to YouTube’s efforts to remove pirated content, arguing against blanket bans on datasets with copyrighted material.

Meta’s practices are under heightened scrutiny as legal battles pit AI companies against copyright holders. The deposition indicates that Meta considered balancing copyright concerns with practical AI development needs. However, the company faces mounting allegations that it disregarded ethical boundaries, sparking broader debates about fair use and intellectual property in AI training.

AFP partnership strengthens Mistral’s global reach

Mistral, a Paris-based AI company, has entered a groundbreaking partnership with Agence France-Presse (AFP) to enhance the accuracy of its chatbot, Le Chat. The deal signals Mistral’s determination to broaden its scope beyond foundational model development.

Through the agreement, Le Chat will gain access to AFP’s extensive archive, which includes over 2,300 daily stories in six languages and records dating back to 1983. The multi-year arrangement covers text content only; photos and videos are not included. By incorporating AFP’s multilingual and multicultural resources, Mistral aims to deliver more accurate and reliable responses tailored to business needs.

The partnership bolsters Mistral’s standing against AI leaders like OpenAI and Anthropic, which have secured similar content agreements. Le Chat’s enhanced features align with Mistral’s broader strategy to develop user-friendly applications that rival popular tools such as ChatGPT and Claude.

Mistral’s co-founder and CEO, Arthur Mensch, emphasised the importance of the partnership, describing it as a step toward offering clients a unique and culturally diverse AI solution. The agreement reinforces Mistral’s commitment to innovation and its global relevance in the rapidly evolving AI landscape.

Digital art website crippled by OpenAI bot scraping

Triplegangers was forced offline after a bot from OpenAI relentlessly scraped its website, treating it like a distributed denial-of-service (DDoS) attack. The AI bot sent tens of thousands of server requests, attempting to download hundreds of thousands of detailed 3D images and descriptions from the company’s extensive database of digital human models.

The sudden spike in traffic crippled the Ukrainian company’s servers and left CEO Oleksandr Tomchuk grappling with an unexpected problem. Triplegangers, which sells digital assets to video game developers and 3D artists, discovered that OpenAI’s bot operated across hundreds of IP addresses to gather its data. Despite having terms of service that forbid such scraping, the company had not configured the robots.txt file necessary to block the bot.

After days of disruption, Tomchuk implemented protective measures by updating the robots.txt file and using Cloudflare to block specific bots. However, he remains frustrated by the lack of transparency from OpenAI and the difficulty in determining exactly what data was taken. With rising costs and increased monitoring now necessary, he warns that other businesses remain vulnerable.
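For illustration (this is not the exact file Triplegangers deployed), OpenAI documents a robots.txt directive that instructs its GPTBot crawler to stay off a site entirely:

```
# Tell OpenAI's GPTBot crawler not to fetch any page on this site
User-agent: GPTBot
Disallow: /
```

Compliance with robots.txt is voluntary on the crawler’s part, which is why site operators often pair it with network-level measures such as Cloudflare’s bot-blocking rules, as Tomchuk did.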

Tomchuk criticised AI companies for placing the responsibility on small businesses to block unwanted scraping, comparing it to a digital shakedown. “They should be asking permission, not just scraping data,” he said, urging companies to take greater precautions against AI crawlers that can compromise their sites.

Meta accused of using pirated books for AI

A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.

The lawsuit, initially filed in 2023, claims Meta infringed on copyright by using the authors’ works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, which were ultimately overruled. Meta has not yet responded to the latest allegations.

The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.

US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.

Grok chatbot now available on iOS

Elon Musk’s AI company, xAI, has launched a standalone iOS app for its chatbot, Grok, marking a major expansion beyond its initial availability to X users. The app is now live in several countries, including the US, Australia, and India, allowing users to access the chatbot directly on their iPhones.

The Grok app offers features such as real-time data retrieval from the web and X, text rewriting, summarising long content, and even generating images from text prompts. xAI highlights Grok’s ability to create photorealistic images with minimal restrictions, including the use of public figures and copyrighted material.

In addition to the app, xAI is working on a dedicated website, Grok.com, which will soon make the chatbot available on browsers. Initially limited to X’s paying subscribers, Grok rolled out a free version in November and made it accessible to all users earlier this month. The launch marks a notable push by xAI to establish Grok as a versatile, widely available AI assistant.

Apple to settle Siri privacy lawsuit for $95 million amidst ongoing user consent concerns

Apple has agreed to pay $95 million to settle a class action lawsuit alleging its Siri voice assistant violated users’ privacy. The lawsuit claimed that Apple recorded users’ private conversations without consent when the ‘Hey Siri’ feature was unintentionally triggered. These recordings were allegedly shared with third parties, including advertisers, leading to targeted ads based on private discussions.

The class period for the lawsuit spans from 17 September 2014 to 31 December 2024 and applies to users of Siri-enabled devices like iPhones and Apple Watches. Affected users could receive up to $20 per device. Apple denied any wrongdoing but settled the case to avoid prolonged litigation.

The settlement amount is a small fraction of Apple’s annual profits, with the company making nearly $94 billion in net income last year. While the company and plaintiffs’ lawyers have yet to comment on the settlement, the plaintiffs may seek up to $28.5 million in legal fees and expenses. A similar lawsuit involving Google’s Voice Assistant is also underway in a California federal court.

Anthropic settles copyright infringement lawsuit with major music publishers over AI training practices

Anthropic, the company behind the Claude AI model, has agreed to resolve aspects of a copyright infringement lawsuit filed by major music publishers. The lawsuit, initiated in October 2023 by Universal Music Group, ABKCO, Concord Music Group, and others, alleged that Anthropic’s AI system unlawfully distributed lyrics from over 500 copyrighted songs, including tracks by Beyoncé and Maroon 5.

The publishers argued that Anthropic improperly used data from licensed platforms to train its models without permission. Under the settlement approved by US District Judge Eumi Lee, Anthropic will maintain and extend its guardrails designed to prevent copyright violations in existing and future AI models.

The company also agreed to collaborate with music publishers to address potential infringements and resolve disputes through court intervention if necessary. Anthropic reiterated its commitment to fair use principles and emphasised that its AI is not intended for copyright infringement.

Despite the agreement, the legal battle isn’t over. The music publishers have requested a preliminary injunction to prevent Anthropic from using their lyrics in future model training. A court decision on this request is expected in the coming months, keeping the spotlight on how copyright law applies to generative AI.

OpenAI delays Media Manager amid creator backlash

In May, OpenAI announced plans for ‘Media Manager,’ a tool to allow creators to control how their content is used in AI training, aiming to address intellectual property (IP) concerns. The project remains unfinished seven months later, with critics claiming it was never prioritised internally. The tool was intended to identify copyrighted text, images, audio, and video, allowing creators to include or exclude their work from OpenAI’s training datasets. However, its future remains uncertain, with no updates since August and missed deadlines.

The delay comes amidst growing backlash from creators and a wave of lawsuits against OpenAI. Plaintiffs, including prominent authors and artists, allege that the company trained its AI models on their works without authorisation. While OpenAI provides ad hoc opt-out mechanisms, critics argue these measures are cumbersome and inadequate.

Media Manager was seen as a potential solution, but experts doubt its effectiveness in addressing complex legal and ethical challenges, including global variations in copyright law and the burden placed on creators to protect their works. OpenAI continues to assert that its AI models transform, rather than replicate, copyrighted material, defending itself under ‘fair use’ protections.

While the company has implemented filters to minimise IP conflicts, the absence of a comprehensive tool like Media Manager leaves unresolved questions about compliance and compensation. As OpenAI battles legal challenges, the effectiveness and impact of Media Manager—if it ever launches—remain uncertain in the face of an evolving IP landscape.

LG unveils world’s first 5K2K bendable gaming monitor

LG aims to captivate attendees at CES 2025 with the introduction of the 45GX990A, a gaming monitor described as the ‘world’s first bendable 5K2K display’. The 45-inch OLED screen boasts a resolution of 5,120 x 2,160 pixels, a 21:9 aspect ratio, and an impressive ability to transition between a flat and 900R curvature for immersive gameplay.

Advanced WOLED technology powers the monitor, delivering vivid colours, true blacks, and reduced eye strain. LG has also incorporated its Anti-Glare & Low Reflection (AGLR) coating, designed to reduce distracting screen glare. The device includes a rapid 0.03ms response time and compatibility with Nvidia G-SYNC and AMD FreeSync Premium Pro.

Gamers can expect customisable presets tailored to various genres, such as FPS, RPG, and racing simulators. Connectivity is robust, featuring DisplayPort 2.1, HDMI 2.1, and USB-C with 90W power delivery, making the 45GX990A a versatile choice for PC and console users.

The Ultragear GX9 series will also introduce two additional models. These include the 45GX950A, featuring a fixed 800R curvature, and the 39GX90SA, a smaller yet equally striking 39-inch variant. All models will be showcased at the highly anticipated CES 2025 event.