OpenAI locks down operations after DeepSeek model concerns

OpenAI has significantly tightened its internal security following reports that DeepSeek may have replicated its models. DeepSeek allegedly used distillation techniques to train a competing model launched earlier this year, prompting a swift response.

OpenAI has introduced strict access protocols to prevent information leaks, including fingerprint scans, offline servers, and a policy that blocks internet access unless explicitly approved. Sensitive projects, such as its o1 model, are now discussed only by approved staff within designated areas.

The company has also boosted cybersecurity staffing and reinforced its data centre defences. Confidential development work is now shielded through ‘information tenting’, which limits project details to small, need-to-know teams.

These actions coincide with OpenAI’s $30 billion deal with Oracle to lease 4.5 gigawatts of data centre capacity across the United States. The partnership plays a central role in OpenAI’s growing Stargate infrastructure strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Court ruling raises alarm over saved ChatGPT chats

A US federal court has ordered OpenAI to preserve nearly all user chats with ChatGPT, including those that users had deleted. The decision comes as part of The New York Times’s ongoing copyright lawsuit, triggering widespread privacy concerns.

The ruling means that millions of personal conversations, previously thought erased, will remain accessible during litigation. These exchanges may include medical queries, relationship issues, and other private matters shared in confidence.

Privacy advocates argue that users were neither notified nor given a chance to object. Critics warn the ruling sets a dangerous precedent, enabling mass preservation of personal data in lawsuits that have nothing to do with most of the users affected.

The Times claims users may have deleted chats to hide copyright infringement. Lawyers and privacy experts counter that people delete chats for legitimate, non-infringing reasons and should retain control over their data.

Legal experts call the preservation order excessive, noting that it undermines trust in AI tools and could chill their use. The decision could reshape how user privacy is treated in tech litigation for years to come.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google hit with EU complaint over AI Overviews

Google is facing an antitrust complaint in the European Union over its AI Overviews feature, following a formal filing by the Independent Publishers Alliance.

The group alleges that Google has been using web content without proper consent to power its AI-generated summaries, causing considerable harm to online publishers.

The complaint claims that publishers have lost traffic, readers and advertising revenue due to these summaries. It also argues that opting out of AI Overviews is not a real choice unless publishers are prepared to vanish entirely from Google’s search results.

AI Overviews launched over a year ago and now appear at the top of many search results pages, summarising information with AI. Although the tool has expanded rapidly, critics argue it diverts users away from original publisher websites, especially news outlets.

Google has responded by stating its AI search tools allow users to ask more complex questions and help businesses and creators get discovered. The tech giant also insisted that web traffic patterns are influenced by many factors and warned against conclusions based on limited data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which handles traffic for roughly 20% of the web, has launched a new marketplace called Pay per Crawl that aims to redefine how website owners interact with AI companies.

The platform lets publishers set a price for AI crawlers to access their content, rather than facing a binary choice between unrestricted scraping and outright blocking. For each crawler, site owners can charge a micropayment per crawl, permit free access, or block access altogether, gaining finer control over their material.
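
To make the three options concrete, here is a minimal, purely illustrative sketch of the per-crawler decision logic in Python. The policy table, prices, and handle_crawl function are hypothetical, not Cloudflare’s API; the real service is configured through Cloudflare’s own tools and reportedly revives the long-dormant HTTP 402 ‘Payment Required’ status code to signal that a crawl must be paid for.

```python
# Illustrative sketch only: a per-crawler access policy in the spirit of
# Pay per Crawl. Names, prices, and the policy table are hypothetical.
from dataclasses import dataclass

@dataclass
class CrawlPolicy:
    action: str           # 'allow', 'charge', or 'block'
    price_usd: float = 0  # micropayment per crawl when action == 'charge'

# Hypothetical policy table a site owner might configure.
POLICIES = {
    "GPTBot": CrawlPolicy("charge", 0.002),  # charge this crawler per request
    "Googlebot": CrawlPolicy("allow"),       # let search crawling stay free
}

def handle_crawl(user_agent: str, payment_offered: bool) -> tuple[int, str]:
    """Return an (HTTP status, message) pair for an incoming crawl request."""
    policy = POLICIES.get(user_agent, CrawlPolicy("block"))  # default: block
    if policy.action == "allow":
        return 200, "OK: content served free of charge"
    if policy.action == "block":
        return 403, "Forbidden: crawler not permitted"
    # 'charge': serve the page only if the crawler has agreed to pay.
    if payment_offered:
        return 200, f"OK: crawl billed at ${policy.price_usd}"
    return 402, f"Payment Required: ${policy.price_usd} per crawl"

print(handle_crawl("GPTBot", payment_offered=False))     # -> (402, ...)
print(handle_crawl("Googlebot", payment_offered=False))  # -> (200, ...)
```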

Over the past year, Cloudflare has introduced tools that let publishers monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers, including Condé Nast, TIME and The Associated Press, have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending comparable referral traffic in return. OpenAI’s crawler, for example, scraped sites 1,700 times for every referral it sent, against a ratio of roughly 14 to 1 for Google.

As AI agents evolve to gather and deliver information directly, publishers who rely on site visits for revenue face a growing challenge.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI rock band’s Spotify rise fuels calls for transparency

A mysterious indie rock band called The Velvet Sundown has shot to popularity on Spotify and may be powered by AI. Their debut track, Dust on the Wind, has racked up over 380,000 plays since 20 June and helped attract more than 470,000 monthly listeners.

The song bears a resemblance to the 1977 Kansas hit Dust in the Wind, prompting suspicion from Reddit users. The band’s profile picture and Instagram photos appear AI-generated, while the band members listed — such as ‘Milo Rains’ and ‘Rio Del Mar’ — have no online trace.

Despite the clues, Spotify does not label the group as AI-generated, and their songs appear in algorithmic playlists such as Discover Weekly. Only Deezer, a French streaming service, has identified The Velvet Sundown as likely created with generative AI tools such as Suno or Udio.

Deezer began tagging AI music in June and now detects over 20,000 entirely artificial tracks each day. Another AI band, The Devil Inside, has also gained traction: their song Bones in the River has over 1.6 million plays on Spotify but lacks credited creators.

On Deezer, the same track is labelled as AI-generated and linked to Hungarian musician László Tamási — a rare human credit for bot-made music. While Deezer takes a transparent approach, Spotify, Apple Music, and Amazon Music have not announced detection systems or labelling plans.

Deezer CEO Alexis Lanternier said AI is ‘not inherently good or bad,’ but called for transparency to protect artist rights and user trust. Legal battles are already underway. US record labels have sued Suno and Udio for mass copyright infringement, though the companies argue it falls under fair use.

As AI-generated music continues to rise, platforms face increasing pressure to inform users and draw more precise lines between human and machine-made art.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark proposes landmark law to protect citizens from deepfake misuse

Denmark’s Ministry of Culture has introduced a draft law aimed at safeguarding citizens’ images and voices under national copyright legislation, Azernews reports. The move marks a significant step in addressing the misuse of deepfake technologies.

The proposed bill prohibits using an individual’s likeness or voice without prior consent, enabling affected individuals to claim compensation. While satire and parody remain exempt, the legislation explicitly bans the unauthorised use of deepfakes in artistic performances.

Under the proposed framework, online platforms that fail to remove deepfake content upon request could face fines. The legislation will apply only within Denmark and is expected to pass with the support of roughly 90% of parliament.

The bill follows recent incidents involving manipulated videos of Denmark’s Prime Minister and legal challenges against the creators of pornographic deepfakes.

If adopted, Denmark would become the first country in the region to implement such legal measures. The proposal is expected to spark broader discussions across Europe on the ethical boundaries of AI-generated content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, yet found that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

Statutory damages for wilful copyright infringement in the US can reach $150,000 per work, so with millions of works at issue, total liability could run into the billions even at far lower per-work awards.

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

Copyright Agency CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta wins copyright case over AI training

Meta has won a copyright lawsuit brought by a group of authors who accused the company of using their books without permission to train its Llama generative AI.

A US federal judge in San Francisco ruled the AI training was ‘transformative’ enough to qualify as fair use under copyright law.

Judge Vince Chhabria noted, however, that future claims could be more successful. He warned that using copyrighted books to build tools capable of flooding the market with competing works may not always be protected by fair use, especially when such tools generate vast profits.

The case involved pirated copies of books, including Sarah Silverman’s memoir ‘The Bedwetter’ and Junot Díaz’s award-winning novel ‘The Brief Wondrous Life of Oscar Wao’. Meta defended its approach, stating that open-source AI drives innovation and relies on fair use as a key legal principle.

Chhabria clarified that the ruling does not confirm the legality of Meta’s actions, only that the plaintiffs made weak arguments. He suggested that stronger evidence and sharper legal framing might lead to a different outcome in future cases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI training upheld as fair use; pirated book storage heads to trial

A US federal judge has ruled that Anthropic’s use of books to train its AI model falls under fair use, marking a pivotal decision for the generative AI industry.

The ruling, delivered by US District Judge William Alsup in San Francisco, held that while AI training using copyrighted works was lawful, storing millions of pirated books in a central library constituted copyright infringement.

The case involves authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, who sued Anthropic last year. They claimed the Amazon- and Alphabet-backed firm had used pirated versions of their books without permission or compensation to train its Claude language model.

The proposed class action is among several suits filed by copyright holders against AI developers, including OpenAI, Microsoft, and Meta.

Judge Alsup stated that Anthropic’s training of Claude was ‘exceedingly transformative’, likening it to how a human reader learns to write by studying existing works. He concluded that the training process served a creative and educational function that US copyright law protects under the doctrine of fair use.

‘Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to replicate them but to create something different,’ the ruling said.

However, Alsup drew a clear line between fair use and infringement regarding storage practices. Anthropic’s copying and storage of over 7 million books in what the court described as a ‘central library of all the books in the world’ was not covered by fair use.

The judge has scheduled a trial for December to determine how much Anthropic may owe in damages. US copyright law permits statutory damages of up to $150,000 per work for wilful infringement.

Anthropic argued in court that its use of the books was consistent with copyright law’s intent to promote human creativity.

The company claimed that its system studied the writing to extract uncopyrightable insights and to generate original content. It also maintained that the source of the digital copies was irrelevant to the fair use determination.

Judge Alsup disagreed, noting that downloading content from pirate websites when lawful access was possible may not qualify as a reasonable step. He expressed scepticism that infringers could justify acquiring such copies as necessary for a later claim of fair use.

The decision is the first judicial interpretation of fair use in the context of generative AI. It will likely influence ongoing legal battles over how AI companies source and use copyrighted material for model training. Anthropic has not yet commented on the ruling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!