AI startup Perplexity has expanded its publisher partnerships, adding media outlets such as the Los Angeles Times and The Independent. These new partners will benefit from a program that shares ad revenue when their content is referenced on the platform. The initiative also provides publishers with access to Perplexity’s API and analytics tools, enabling them to track content performance and trends.
The program, launched in July, has attracted notable partners from Japan, Spain, and Latin America, including Prisa Media and Newspicks. Existing collaborators include TIME, Der Spiegel, and Fortune. Perplexity highlighted the importance of diverse media representation, stating that the partnerships enhance the accuracy and depth of its AI-powered responses.
Backed by Amazon founder Jeff Bezos and Nvidia, Perplexity aims to challenge Google’s dominance in the search engine market. The company has also begun testing advertising on its platform, seeking to monetise its AI search capabilities.
Perplexity’s growth has not been without challenges. It faces lawsuits from News Corp-owned publishers, including Dow Jones and the New York Post, over alleged copyright violations. The New York Times has also issued a cease-and-desist notice, demanding the removal of its content from Perplexity’s generative AI tools.
Cate Blanchett has voiced her concerns about the societal implications of AI, describing the threat as ‘very real’. In an interview with the BBC, the Australian actress shared her scepticism about advancements like driverless cars and AI’s potential to replicate human voices, noting the broader risks for humanity. Blanchett emphasised that AI could replace anyone, not just actors, and criticised some technological advancements as ‘experimentation for its own sake’.
While promoting Rumours, her new apocalyptic comedy film, Blanchett described the plot as reflective of modern anxieties. The film, directed by Guy Maddin, portrays world leaders navigating absurd situations, offering both satire and a critique of detachment from reality. Blanchett highlighted how the story reveals the vulnerability and artificiality of political figures once removed from their structures of power.
Maddin shared that his characters emerged from initial disdain but evolved into figures of empathy as the narrative unfolded. Blanchett added that both actors and politicians face infantilisation within their respective systems, highlighting parallels in their perceived disconnection from the real world.
Five Canadian news companies have launched a lawsuit against OpenAI, claiming its AI systems violate copyright laws. Torstar, Postmedia, The Globe and Mail, The Canadian Press, and CBC/Radio-Canada allege the company uses their journalism without permission or compensation. The legal filing, made in Ontario’s superior court, seeks damages and a permanent ban on OpenAI using their materials unlawfully.
The companies argue that OpenAI has deliberately appropriated their intellectual property for commercial purposes. In their statement, they emphasised the public value of journalism and condemned OpenAI’s actions as illegal. OpenAI, however, defended its practices, stating that its models rely on publicly available data and comply with fair use and copyright principles. The firm also noted its efforts to collaborate with publishers and provide mechanisms for opting out.
The case follows a trend of lawsuits by various creators, including authors and artists, against AI companies over the use of copyrighted content. Notably, the Canadian lawsuit does not name Microsoft, a major OpenAI backer, though Elon Musk recently expanded a separate legal case accusing both companies of attempting to dominate the generative AI market unlawfully.
A Texas federal jury has ordered Samsung Electronics to pay $118M to Netlist, a US-based computer memory company, for patent infringement. The case centres on Netlist’s patented technology that boosts power efficiency and accelerates data processing in high-performance memory products used in cloud computing and data-intensive systems.
This ruling marks another major win for Netlist, which previously secured a $303M verdict against Samsung last year and $445M against Micron in May. The jury also determined Samsung’s actions were willful, leaving open the possibility of higher penalties.
Samsung denies the claims, asserting that the patents are invalid and that its technology operates differently from Netlist’s. Meanwhile, the legal battle continues, with Samsung filing a countersuit in Delaware accusing Netlist of failing to license the patents on fair terms.
Australia’s government has abandoned a proposal to fine social media platforms up to 5% of their global revenue for failing to curb online misinformation. The decision follows resistance from various political parties, making the legislation unlikely to pass the Senate.
Communications Minister Michelle Rowland stated the proposal aimed to enhance transparency and hold tech companies accountable for limiting harmful misinformation online. Despite broad public support for tackling misinformation, opposition from conservative and crossbench politicians stalled the plan.
The centre-left Labor government, currently lagging in polls, faces criticism for its approach. Greens senator Sarah Hanson-Young described the proposed law as a ‘half-baked option,’ adding to calls for more robust measures against misinformation.
Industry group DIGI, whose members include Meta, argued the proposal merely reinforced an existing code. Australia’s tech regulation efforts are part of broader concerns about foreign platforms undermining national sovereignty.
OpenAI is under scrutiny after engineers accidentally erased key evidence in an ongoing copyright lawsuit filed by The New York Times and Daily News. The publishers accuse OpenAI of using their copyrighted content to train its AI models without authorisation.
The issue arose when OpenAI provided virtual machines for the plaintiffs to search its training datasets for infringed material. On 14 November 2024, OpenAI engineers deleted the search data stored on one of these machines. While most of the data was recovered, the loss of folder structures and file names rendered the information unusable for tracing specific sources in the training process.
Plaintiffs are now forced to restart the time-intensive search, leading to concerns over OpenAI’s ability to manage its own datasets. Although the deletion is not suspected to be intentional, lawyers argue that OpenAI is best equipped to perform searches and verify its use of copyrighted material. OpenAI maintains that training AI on publicly available data falls under fair use, but it has also struck licensing deals with major publishers like the Associated Press and News Corp. The company has neither confirmed nor denied using specific copyrighted works for its AI training.
Actor and filmmaker Ben Affleck has weighed in on the ongoing debate over AI in the entertainment industry, arguing that AI poses little immediate threat to actors and screenwriters. Speaking to CNBC, Affleck stated that while AI can replicate certain styles, it lacks the creative depth required to craft meaningful narratives or performances, likening it to a poor substitute for human ingenuity.
Affleck, co-founder of a film studio with fellow actor Matt Damon, expressed optimism about AI’s role in Hollywood, suggesting it might even generate new opportunities for creative professionals. However, he raised concerns about its potential impact on the visual effects industry, which could face significant disruptions as AI technologies advance.
Strikes by Hollywood unions last year highlighted fears that AI could replace creative talent. Affleck remains sceptical of such a scenario, maintaining that storytelling and human performance remain uniquely human domains that AI is unlikely to master soon.
Asian News International (ANI), one of India’s largest news agencies, has filed a lawsuit against OpenAI, accusing it of using copyrighted news content to train its AI models without authorisation. ANI alleges that OpenAI’s ChatGPT generated false information attributed to the agency, including fabricated interviews, which it claims could harm its reputation and spread misinformation.
The case, filed in the Delhi High Court, is India’s first legal action against OpenAI on copyright issues. While the court summoned OpenAI to respond, it declined to grant an immediate injunction, citing the complexity of the matter. A detailed hearing is scheduled for January, and an independent expert may be appointed to examine the case’s copyright implications.
OpenAI has argued that copyright laws don’t protect factual data and noted that websites can opt out of data collection. ANI’s counsel countered that public access does not justify content exploitation, emphasising the risks posed by AI inaccuracies. The case comes amid growing global scrutiny of AI companies over their use of copyrighted material, with similar lawsuits ongoing in the US, Canada, and Germany.
Brussels is planning new rules requiring Chinese firms to transfer technology and build factories in Europe to qualify for EU subsidies. These measures will apply to a €1 billion battery development scheme launching in December, potentially setting a precedent for other clean technology initiatives.
The proposals echo China’s own approach to foreign businesses, which compels them to share intellectual property to access its markets. The European Commission has also implemented tariffs on Chinese electric vehicles and stricter rules for hydrogen technology, aimed at reducing reliance on cheaper imports that undercut local manufacturers.
Chinese companies such as CATL and Envision Energy are already investing heavily in European facilities. However, domestic challenges persist, with Sweden’s Northvolt struggling financially as it attempts to scale up battery production. Batteries are critical for electric vehicles, making supply chains essential for Europe’s transition to greener technologies.
Critics warn that these tougher trade policies could disrupt EU climate goals by driving up costs for consumers. While the measures aim to support European industries, experts suggest they risk creating uncertainty and hindering innovation.
David Attenborough has criticised American AI firms for cloning his voice to narrate partisan reports. Outlets such as The Intellectualist have used his distinctive voice for topics including US politics and the war in Ukraine.
The broadcaster described these acts as ‘identity theft’ and expressed profound dismay over losing control of his voice after decades of truthful storytelling. Scarlett Johansson has faced a similar issue, with AI mimicking her voice for an online persona called ‘Sky’.
Experts warn that such technology poses risks to reputations and legacies. Dr Jennifer Williams of Southampton University highlighted the troubling implications for Attenborough’s legacy and authenticity in the public eye.
Regulations to prevent voice cloning remain absent, raising concerns about its misuse. The Intellectualist has yet to comment on Attenborough’s allegations.