OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

As part of the deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with over 20 news publishers and 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI films are now eligible for the Oscars

The Academy of Motion Picture Arts and Sciences has officially made films that incorporate AI eligible for the Oscars, reflecting AI’s growing influence in cinema. Updated rules confirm that the use of generative AI or similar tools will neither help nor harm a film’s chances of nomination.

These guidelines, shaped with input from the Academy’s Science and Technology Council, aim to keep human creativity at the forefront, despite the increasing presence of digital tools in production.

Recent Oscar-winning films have already embraced AI. Adrien Brody’s performance in The Brutalist was enhanced using AI to refine his Hungarian accent, while Emilia Perez, a musical that claimed an award, used voice-cloning technology to support its cast.

Such tools can convincingly replicate voices and visual styles, making them an attractive alternative to traditional methods for filmmakers, though not without raising industry-wide concerns.

The 2023 Hollywood strikes highlighted the tension between artistic control and automation. Writers and actors protested the threat posed by AI to their livelihoods, leading to new agreements that limit the use of AI-generated content and protect individuals’ likenesses.

Actress Susan Sarandon voiced fears about unauthorised use of her image, and Scarlett Johansson echoed concerns about digital impersonation.

Despite some safeguards, many in the industry remain wary. Animators argue that AI lacks the emotional nuance needed for truly compelling storytelling, and Rokit Flix’s co-founder Jonathan Kendrick warned that AI might help draft scenes, but can’t deliver the depth required for an Oscar-worthy film.

Alongside the AI rules, the Academy also introduced a new voting requirement. Members must now view every nominated film in a category before casting their final vote, to encourage fairer decisions in this shifting creative environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup Cluely offers controversial cheating tool

A controversial new startup called Cluely has secured $5.3 million in seed funding to expand its AI-powered tool designed to help users ‘cheat on everything,’ from job interviews to exams.

Founded by 21-year-old Chungin ‘Roy’ Lee and Neel Shanmugam—both former Columbia University students—the tool works via a hidden browser window that remains invisible to interviewers or test supervisors.

The project began as ‘Interview Coder,’ originally intended to help users pass technical coding interviews on platforms like LeetCode.

Both founders faced disciplinary action at Columbia over the tool, eventually dropping out of the university. Despite ethical concerns, Cluely claims its technology has already surpassed $3 million in annual recurring revenue.

The company has drawn comparisons between its tool and past innovations like the calculator and spellcheck, arguing that it challenges outdated norms in the same way. A viral launch video showing Lee using Cluely on a date sparked backlash, with critics likening it to a scene from Black Mirror.

Cluely’s mission has sparked widespread debate over the use of AI in high-stakes settings. While some applaud its bold approach, others worry it promotes dishonesty.

Amazon, where Lee reportedly landed an internship using the tool, declined to comment on the case directly but reiterated that candidates must agree not to use unauthorised tools during the hiring process.

The startup’s rise comes amid growing concern over how AI may be used—or misused—in both professional and personal spheres.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup caught in Dev Mode trademark row

Figma has issued a cease-and-desist letter to Swedish AI startup Loveable over the use of the term ‘Dev Mode,’ a name Figma trademarked in 2023.

Loveable recently introduced its own Dev Mode feature, prompting the design platform to demand the startup stop using the name, citing its established use and intellectual property rights.

Figma’s version of Dev Mode helps bridge the gap between designers and developers, while Loveable’s tool allows users to preview and edit code without linking to GitHub.

Despite their differing functions, Figma insists on protecting the trademark, even though ‘developer mode’ is a widely used phrase across many software platforms. Companies such as Atlassian and Wix have used similar terminology long before Figma obtained the trademark.

The legal move arrives as Figma prepares for an initial public offering, following Adobe’s failed acquisition attempt in 2023. The sudden emphasis on brand protection suggests the company is taking extra care with its intellectual assets ahead of its potential stock market debut.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI site faces backlash for copying Southern Oregon news

A major publishing organisation has issued a formal warning to Good Daily News, an AI-powered news aggregator, demanding it cease the unauthorised scraping of content from local news outlets across Southern Oregon and beyond. The News Media Alliance, which represents 2,200 publishers, sent the letter on 25 March, urging the national operator to respect publishers’ rights and stop reproducing material without permission.

Good Daily runs over 350 online ‘local’ news websites across 47 US states, including Daily Medford and Daily Salem in Oregon. Though the platforms appear locally based, they are generated using AI and managed by one individual, Matt Henderson, who has registered mailing addresses in Ashland, Oregon, and Austin, Texas. Content is reportedly scraped from legitimate local news sites, rewritten by AI, and shared in newsletters, sometimes with source links but often without permission.

News Media Alliance president Danielle Coffey said such practices undermine the time, resources, and revenue of local journalism. Many publishers use digital tools to block automated scrapers, though this comes at a financial cost. The organisation is working with the Oregon Newspaper Publishers Association and exploring legal options. Others in the industry, including Heidi Wright of the Fund for Oregon Rural Journalism, have voiced strong support for the warning, calling for greater action to defend the integrity of local news.

For more information on these topics, visit diplomacy.edu.

Gerry Adams targets Meta over use of his books

Gerry Adams, the former president of Sinn Féin, is considering legal action against Meta for allegedly using his books to train AI. Adams claims that at least seven of his books were included in a large collection of copyrighted material Meta used to develop its AI systems.

He has handed the matter over to his solicitor. The books in question include his autobiography Before the Dawn, his prison memoir Cage Eleven, and Hope and History, his reflections on Northern Ireland’s peace process, among others.

Adams is not the only author voicing concerns about Meta’s use of copyrighted works. A group of writers filed a US court case in January, accusing Meta of using the controversial Library Genesis (LibGen) database, which hosts over 7.5 million books, many believed to be pirated.

The discovery followed a searchable database of titles from LibGen being published by The Atlantic, which led several authors to identify their works being used to train Meta’s Llama AI model.

The Society of Authors has condemned Meta’s actions, with chair Vanessa Fox O’Loughlin calling the move ‘shocking and devastating’ for authors.

Many authors are concerned that AI models like Llama, which power tools such as chatbots, could undermine their work by reproducing creative content without permission. Meta has defended its actions, claiming that its use of information to train AI models is in line with existing laws.

Adams, a prolific author and former MP, joins other Northern Irish writers, including Booker Prize winner Anna Burns, in opposing the use of their work for AI training without consent.

For more information on these topics, visit diplomacy.edu.

EU refuses to soften tech laws for Trump trade deal

The European Union has firmly ruled out dismantling its strict digital regulations in a bid to secure a trade deal with Donald Trump. Henna Virkkunen, the EU’s top official for digital policy, said the bloc remained fully committed to its digital rulebook instead of relaxing its standards to satisfy American demands.

While she welcomed a temporary pause in US tariffs, she made clear that the EU’s regulations were designed to ensure fairness and safety for all companies, regardless of origin, and were not intended as a direct attack on US tech giants.

Tensions have mounted in recent weeks, with Trump officials accusing the EU of unfairly targeting American firms through regulatory means. Executives like Mark Zuckerberg have criticised the EU’s approach, calling it a form of censorship, while the US has continued imposing tariffs on European goods.

Virkkunen defended the tougher obligations placed on large firms like Meta, Apple and Alphabet, explaining that greater influence came with greater responsibility.

She also noted that enforcement actions under the Digital Markets Act and Digital Services Act aim to ensure compliance instead of simply imposing large fines.

Although France has pushed for stronger retaliation, the European Commission has held back from launching direct countermeasures against US tech firms, instead preparing a range of options in case talks fail.

Virkkunen avoided speculation on such moves, saying the EU preferred cooperation to conflict. At the same time, she is advancing a broader tech strategy, including plans for five AI gigafactories, while also considering adjustments to the EU’s AI Act to better support small businesses and innovation.

Acknowledging creative industries’ concerns over generative AI, Virkkunen said new measures were needed to ensure fair compensation for copyrighted material used in AI training instead of leaving European creators unprotected.

The Commission is now exploring licensing models that could strike a balance between enabling innovation and safeguarding rights, reflecting the bloc’s intent to lead in tech policy without sacrificing democratic values or artistic contributions.

For more information on these topics, visit diplomacy.edu.

Blockchain app ARK fights to keep human creativity ahead of AI

Nearly 20 years after his AI career scare, screenwriter Ed Bennett-Coles and songwriter Jamie Hartman have developed ARK, a blockchain app designed to safeguard creative work from AI exploitation.

The platform lets artists register ownership of their ideas at every stage, from initial concept to final product, using biometric security and blockchain verification instead of traditional copyright systems.

ARK aims to protect human creativity in an AI-dominated world. ‘It’s about ring-fencing the creative process so artists can still earn a living,’ Hartman told AFP.

The app, backed by Claritas Capital and BMI, uses decentralised blockchain technology instead of centralised systems to give creators full control over their intellectual property.

Launching in summer 2025, ARK challenges AI’s ‘growth at all costs’ mentality by emphasising creative journeys over end products.

Bennett-Coles compares AI-generated content to online meat delivery: efficient but soulless. Human artistry, he suggests, is more like a grandfather’s trip to the butcher, where the experience matters as much as the result.

The duo hopes their solution will inspire industries to modernise copyright protections before AI erodes them completely.

For more information on these topics, visit diplomacy.edu.

OpenAI’s Sam Altman responds to Miyazaki’s AI animation concerns

The recent viral trend of AI-generated Ghibli-style images has taken the internet by storm. Using OpenAI’s GPT-4o image generator, users have been transforming photos, from historic moments to everyday scenes, into Studio Ghibli-style renditions.

The trend has caught the attention of notable figures, including celebrities and political personalities, sparking both excitement and controversy.

While some praise the trend for democratising art, others argue that it infringes on copyright and undermines the efforts of traditional artists. The debate intensified when Hayao Miyazaki, the co-founder of Studio Ghibli, became a focal point.

In a 2016 documentary, Miyazaki expressed his disdain for AI in animation, calling it ‘an insult to life itself’ and warning that humanity is losing faith in its creativity.

OpenAI’s CEO, Sam Altman, recently addressed these concerns, acknowledging the challenges posed by AI in art but defending its role in broadening access to creative tools. Altman believes that technology empowers more people to contribute, benefiting society as a whole, even if it complicates the art world.

Miyazaki’s comments and Altman’s response highlight a growing divide in the conversation about AI and creativity. As the debate continues, the future of AI in art remains a contentious issue, balancing innovation with respect for traditional artistic practices.

For more information on these topics, visit diplomacy.edu.

Copyright lawsuits against OpenAI and Microsoft combined in AI showdown

Twelve copyright lawsuits filed against OpenAI and Microsoft have been merged into a single case in the Southern District of New York.

The US Judicial Panel on Multidistrict Litigation decided to consolidate the cases, despite objections from many plaintiffs who argued their claims were too distinct.

The lawsuits claim that OpenAI and Microsoft used copyrighted books and journalistic works without consent to train AI tools like ChatGPT and Copilot.

The plaintiffs include high-profile authors—Ta-Nehisi Coates, Sarah Silverman, Junot Díaz—and major media outlets such as The New York Times and Daily News.

The panel justified the centralisation by citing shared factual questions and the benefits of unified pretrial proceedings, including streamlined discovery and avoidance of conflicting rulings.

OpenAI has defended its use of publicly available data under the legal doctrine of ‘fair use.’

A spokesperson stated the company welcomed the consolidation and looked forward to proving that its practices are lawful and support innovation. Microsoft has not yet issued a comment on the ruling.

The authors’ attorney, Steven Lieberman, countered that this is about large-scale theft. He emphasised that both Microsoft and OpenAI have, in their view, infringed on millions of protected works.

Some of the same authors are also suing Meta, alleging the company trained its models using books from the shadow library LibGen, which houses over 7.5 million titles.

Simultaneously, Meta faced backlash in the UK, where authors protested outside the company’s London office. The demonstration focused on Meta’s alleged use of pirated literature in its AI training datasets.

The Society of Authors has called the actions illegal and harmful to writers’ livelihoods.

Amazon also entered the copyright discussion this week, confirming its new Kindle ‘Recaps’ feature uses generative AI to summarise book plots.

While Amazon claims accuracy, concerns have emerged online about the reliability of AI-generated summaries.

In the UK, lawmakers are also reconsidering copyright exemptions for AI companies, facing growing pressure from creative industry advocates.

The debate over how AI models access and use copyrighted material is intensifying, and the decisions made in courtrooms and parliaments could radically change the digital publishing landscape.

For more information on these topics, visit diplomacy.edu.