OpenAI sunsets Sora app after 6 months of scrutiny

OpenAI is moving to shut down the Sora app, its consumer-facing AI video platform, according to an official X post on 24 March. The move follows months of scrutiny around AI-generated video, including concerns over deepfakes, copyright, and harmful synthetic media.

The reported shutdown comes shortly after OpenAI retired Sora 1 in the United States on 13 March 2026 and replaced it with Sora 2 as the default experience. OpenAI’s help documentation says the older version remains available only in countries where the newer one has not yet launched, while support pages for the standalone Sora app are still live. The product changes also follow the announcement of new copyright settings for the latest video generation model.

That makes the current picture more complex than a simple sunset. Public OpenAI help pages still describe tools on iOS, Android, and the web, while news reports say the company has now decided to wind down the app itself. OpenAI had also recently indicated that it plans to integrate Sora video generation into ChatGPT, which could help explain why the standalone product is being reconsidered.

Sora became one of OpenAI’s most visible consumer media products, but it also drew sustained scrutiny over deepfakes, non-consensual content, and copyrighted characters. Such concerns remained central even as OpenAI added additional controls to the platform, including new consent and traceability measures to enhance AI video safety. AP reported that pressure from advocacy groups, scholars, and entertainment-sector voices formed part of the backdrop to the shutdown decision.

For users, the immediate issue is preservation of existing content. OpenAI’s Sora 1 sunset FAQ says some legacy material may be exportable for a limited period before deletion, but the company has not yet published a detailed standalone help document explaining the full shutdown. Based on the information now available, the clearest distinction is that OpenAI first retired one legacy version in some markets and is now reportedly ending the standalone app more broadly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US releases national AI policy framework

The Trump Administration unveiled a national AI framework to boost competitiveness, security, and benefits for Americans. The plan seeks to ensure that AI innovation supports all citizens while maintaining public trust in the technology.

Six key objectives form the foundation of the policy. These include protecting children online, empowering parents with tools to manage digital safety, strengthening communities and small businesses, respecting intellectual property, defending free speech, and fostering innovation.

The framework also prioritises workforce development to prepare Americans for AI-driven job opportunities.

Federal uniformity is considered critical to the plan’s success. The Administration warns that a patchwork of state regulations could stifle innovation and reduce the United States’ ability to lead globally.

Congress is encouraged to collaborate closely to implement the framework nationwide.

The Administration emphasises that the United States must lead the AI race, ensuring the benefits of AI reach all Americans while addressing challenges such as privacy, security, and equitable access to opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK drops AI copyright opt-out plan amid growing industry divide

The UK Government has abandoned its previous preference for an AI copyright opt-out model, signalling a shift in policy following strong opposition from creative industries.

Ministers now acknowledge that there is no clear consensus on how AI developers should access copyrighted material.

Concerns from writers, artists and rights holders focused on the use of their work in training AI systems without permission.

Liz Kendall confirmed that extensive consultation exposed significant disagreement, prompting the government to step back from its earlier position that would have allowed the use of copyrighted content unless creators opted out.

A joint report from the Department for Science, Innovation and Technology and the Department for Culture, Media and Sport states that further evidence is required before any legislative change.

Policymakers in the UK will assess how copyright frameworks influence AI development, while also examining international regulation, licensing models and ongoing legal disputes.

Government strategy now centres on balancing innovation with fair compensation.

Officials emphasise that creators must retain control over how their work is used, while AI developers require access to high-quality data to remain competitive. Potential measures include labelling AI-generated content to reduce risks linked to disinformation and deepfakes.

No timeline has been set for reform, reflecting the complexity of aligning economic growth with intellectual property protection.

The debate unfolds alongside broader ambitions outlined by Rachel Reeves, who has identified AI as a central driver of future economic expansion, with the UK aiming to lead adoption across the G7.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in filmmaking raises job fears as creative roles face pressure

Growing concern over AI in filmmaking emerged at a major conference, where veteran director Steven Spielberg rejected its use as a replacement for human creativity. He emphasised that storytelling should remain in human hands rather than being driven by automation.

Rapid advances in AI video tools have unsettled the industry, raising fears among editors and visual effects workers. Joshua Davies, chief innovation officer at a video platform, pointed to concerns over jobs, copyright and future production methods.

Current tools remain limited, particularly when handling complex camera movements or maintaining consistency across scenes. AI is instead being used to support production by filling gaps where footage cannot be filmed due to time or budget limits.

Studios are already exploring how AI can be integrated into production pipelines following recent disruptions. A fast and low-cost Super Bowl advert highlighted its potential, although human creative input remained essential.

Lower production costs are expected, but full automation is still unlikely in the near term. AI could help independent creators compete, while strong storytelling continues to define success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU urged to push digital tax despite US opposition

Calls for an EU-wide digital services tax are growing, as Pasquale Tridico, chair of the European Parliament’s subcommittee on tax matters, urged Brussels to act despite strong opposition from the US. He argued that such a measure would make Europe’s tax system fairer in a market dominated by foreign tech firms.

Tensions have increased as Washington threatens tariffs on countries introducing digital taxes targeting major platforms. Existing national levies in countries like France contrast with the absence of a unified EU approach due to member state control over taxation.

The proposal comes amid wider strain in transatlantic relations, with disputes over trade, regulation and influence on EU policymaking. US criticism has also focused on European rules such as the Digital Services Act and the Digital Markets Act.

Supporters argue that a digital tax would apply equally to global companies, not only US firms, while addressing imbalances between sectors. Digital businesses can generate large profits without the same physical costs faced by traditional industries.

Further proposals include new approaches to taxing wealth, reflecting how digitalisation blurs the line between income and capital. Advocates say such reforms are needed to adapt taxation to the modern economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Human made labels emerge as industries react to AI expansion

Organisations around the world are developing certification labels designed to show that products or creative work were made by humans rather than AI. New badges such as ‘Human made’, ‘AI free’ and ‘Proudly Human’ are appearing across books, films, marketing and websites as industries respond to the rapid spread of AI tools.

At least eight initiatives are now attempting to create a label that could achieve global recognition similar to the Fair Trade mark. Experts warn that competing definitions and inconsistent certification systems could confuse consumers unless a universal standard is agreed upon.

Some schemes allow creators to download AI-free badges with little or no verification, while others use paid auditing processes that rely on analysts and AI detection tools. Researchers note that defining ‘human-made’ is increasingly difficult because AI technologies are embedded in many everyday software tools.

Creative industries are at the centre of the debate as generative AI rapidly produces books, films and music at lower cost and higher speed. Advocates of certification argue that verified human-created content may gain greater value if consumers can clearly distinguish it from AI-generated work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Writer files lawsuit against Grammarly over AI feature using experts’ identities

A journalist has filed a class action lawsuit against Grammarly after the company introduced an AI feature that generated editorial feedback by imitating well-known writers and public figures without their permission.

The legal complaint was submitted by investigative journalist Julia Angwin, who argued that the tool unlawfully used the identities and reputations of authors and commentators.

The feature, known as ‘Expert Review’, produced automated critiques presented as if they came from figures such as Stephen King, Carl Sagan and technology journalist Kara Swisher.

The feature was available to subscribers paying an annual fee and was designed to simulate professional editorial guidance.

Critics quickly questioned both the quality of the generated feedback and the decision to associate the tool with real individuals who had not authorised the use of their names or expertise.

Technology writer Casey Newton tested the system by submitting one of his own articles and received automated feedback attributed to an AI version of Swisher. The response appeared generic, casting doubt on the value of linking such commentary to prominent personalities.

Following criticism from writers and researchers, the feature was disabled. Shishir Mehrotra, chief executive of Grammarly’s parent company Superhuman, issued a public apology while defending the broader concept behind the tool.

The lawsuit reflects growing tensions around AI systems that replicate creative styles or professional expertise.

As generative AI technologies expand across writing and publishing industries, questions surrounding consent, intellectual labour and identity rights are becoming increasingly prominent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netflix AI filmmaking push grows with InterPositive acquisition

A deal valued at up to $600 million will see Netflix acquire InterPositive, the AI filmmaking company founded by actor and director Ben Affleck, according to people familiar with the matter.

The transaction, paid in cash, is expected to become one of the largest acquisitions made by the streaming company. The final upfront amount is reportedly lower, with additional payments tied to performance targets. Netflix has not publicly disclosed the financial terms of the deal.

The acquisition is intended to accelerate the use of AI in film production. InterPositive has developed software tools that enable filmmakers to modify existing footage, including removing unwanted elements or adjusting scene backgrounds. Director David Fincher has already used the technology in work on an upcoming film starring Brad Pitt.

The deal reflects a broader trend among entertainment companies exploring AI technologies to streamline production and improve efficiency. Companies including Netflix and Amazon are experimenting with AI tools in film and television production, while Disney has established a partnership with OpenAI.

The growing use of AI in Hollywood has raised concerns among industry workers. Some fear the technology could reduce jobs or allow studios to use creative work to train AI systems without compensation.

Affleck has said the InterPositive technology is designed to support filmmakers rather than replace them. The system requires directors first to shoot original footage before the software can train on the material. The tools can then assist with editing tasks, but do not generate films independently.

Netflix has traditionally avoided large-scale acquisitions, focusing instead on developing its technology internally. Even so, the purchase of InterPositive signals a step toward strengthening the company’s AI capabilities in film production.

‘The filmmaking process, really, since its inception, has been one long technological progression,’ Affleck said in a video released by Netflix. ‘We’ve always been seeking to make it feel more realistic, more honest, and InterPositive, I hope, is another iteration or step in keeping with that long and storied history.’

Affleck founded InterPositive with backing from investment firm RedBird Capital Partners and began seeking investment in 2025 before the company attracted interest from Netflix.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!