Nintendo denies lobbying the Japanese government over generative AI

Nintendo has denied reports that it lobbied the Japanese government over the use of generative AI. The video game company issued an official statement on its Japanese X account, clarifying that it has had no contact with the authorities.

The rumour originated from a post by Satoshi Asano, a member of Japan’s House of Representatives, who suggested that private companies had pressed the government on intellectual property protection concerning AI.

After Nintendo’s statement, Asano retracted his remarks and apologised for spreading misinformation.

Nintendo stressed that it would continue to protect its intellectual property against infringement, whether AI was involved or not. The company reaffirmed its cautious approach toward generative AI in game development, focusing on safeguarding creative rights rather than political lobbying.

The episode underscores the sensitivity around AI in Japan’s creative industries, where concerns about copyright and technological disruption are fuelling debate. Nintendo’s swift clarification signals how seriously it takes misinformation that touches its brand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI platforms barred from cloning Asha Bhosle’s voice without consent

The Bombay High Court has granted ad-interim relief to Asha Bhosle, barring AI platforms and sellers from cloning her voice or likeness without consent. The 90-year-old playback singer, whose career spans eight decades, approached the court to protect her identity from unauthorised commercial use.

Bhosle filed the suit after discovering platforms offering AI-generated voice clones mimicking her singing. Her plea argued that such misuse damages her reputation and goodwill. Justice Arif S. Doctor found a strong prima facie case and stated that such actions would cause irreparable harm.

The order restrains defendants, including US-based Mayk Inc, from using machine learning, face-morphing, or generative AI to imitate her voice or likeness. Google, also named in the case, has agreed to take down specific URLs identified by Bhosle’s team.

Defendants are required to share subscriber information, IP logs, and payment details to assist in identifying infringers. The court emphasised that cloning the voices of cultural icons risks misleading the public and infringing on individuals’ rights to their identity.

The ruling builds on recent cases in India affirming personality rights and sets an important precedent in the age of generative AI. The matter is scheduled to return to court on 13 October 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk escalates legal battle with new lawsuit against OpenAI

Elon Musk’s xAI has sued OpenAI, alleging a coordinated and unlawful campaign to steal its proprietary technology. The complaint claims OpenAI targeted former xAI staff to obtain source code, training methods, and data centre strategies.

The lawsuit claims OpenAI recruiter Tifa Chen offered large compensation packages to engineers who then allegedly uploaded xAI’s source code to personal devices. Cited incidents include Xuechen Li confessing to code theft and Jimmy Fraiture allegedly transferring confidential files via AirDrop on multiple occasions.

Legal experts note the case centres on employee poaching and the definition of xAI’s ‘secret sauce,’ including GPU racking, vendor contracts, and operational playbooks.

Liability may depend on whether OpenAI knowingly directed recruiters, while the company could defend itself by showing independent creation with time-stamped records.

xAI is seeking damages, restitution, and injunctions requiring OpenAI to remove its materials and destroy models built using them. The lawsuit is Musk’s latest legal action against OpenAI, following a recent antitrust case against Apple over alleged market dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spotify launches new policies on AI and music spam

Spotify announced new measures to address AI risks in music, aiming to protect artists’ identities and preserve trust on the platform. The company said AI can boost creativity but also enable harmful content like impersonations and spam that exploit artists and cut into royalties.

A new impersonation policy has been introduced, clarifying that AI-generated vocal clones of artists are only permitted with explicit authorisation. Spotify is strengthening processes to block fraudulent uploads and mismatches, giving artists quicker recourse when their work is misused.

The platform will launch a new spam filter this year to detect and curb manipulative practices like mass uploads and artificially short tracks. The system will be deployed cautiously, with updates added as new abuse tactics emerge, in order to safeguard legitimate creators.

In addition, Spotify will back an industry standard for AI disclosures in music credits, allowing artists and rights holders to show how AI was used in production. The company said these steps reflect its commitment to protecting artists, ensuring transparency, and preserving fair royalties as AI reshapes the music industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Indonesia free trade deal strengthens tech and digital supply chains

The European Union and Indonesia have concluded negotiations on a Comprehensive Economic Partnership Agreement (CEPA) and an Investment Protection Agreement (IPA), strongly emphasising technology, digitalisation and sustainable industries.

The agreements are designed to expand trade, secure critical raw materials, and drive the green and digital transitions.

Under the CEPA, tariffs on 98.5% of tariff lines will be removed, cutting costs by €600 million annually and giving EU companies greater access to Indonesia’s fast-growing technology sectors, including electric vehicles, electronics and pharmaceuticals.

European firms will also gain full ownership rights in key service areas such as computers and telecommunications, helping deepen integration of digital supply chains.

The deal embeds commitments to the Paris Agreement while promoting renewable energy and low-carbon technologies. It also includes cooperation on digital standards, intellectual property protections and trade facilitation for sectors vital to Europe’s clean tech and digital industries.

With Indonesia as a leading producer of critical raw materials, the agreement secures sustainable and predictable access to inputs essential for semiconductors, batteries and other strategic technologies.

Launched in 2016, the negotiations concluded after a political agreement reached in July 2025 between European Commission President Ursula von der Leyen and Indonesian President Prabowo Subianto. The texts will undergo legal review before ratification by the EU and Indonesia, opening a new chapter in tech-enabled trade and innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hollywood studios take legal action against MiniMax for AI copyright infringement

Disney, Warner Bros. Discovery and NBCUniversal have filed a lawsuit in California against Chinese AI company MiniMax, accusing it of large-scale copyright infringement.

The studios allege that MiniMax’s Hailuo AI service generates unauthorised images and videos featuring well-known characters such as Darth Vader, and that the company markets itself as a ‘Hollywood studio in your pocket’ while disregarding copyright law.

According to the complaint, MiniMax, reportedly worth $4 billion, ignored cease-and-desist requests and continues to profit from copyrighted works. The studios argue that the company could easily implement safeguards, pointing to existing controls that already block violent or explicit content.

MiniMax’s approach, the studios claim, represents a serious threat to both creators and the broader film industry, which contributes hundreds of billions of dollars to the US economy.

Plaintiffs, including Disney’s Marvel and Lucasfilm units, Universal’s DreamWorks Animation and Warner Bros.’ DC Comics, are seeking statutory damages of up to $150,000 per infringed work or unspecified compensation.

They are also asking for an injunction to stop the alleged violations, rather than relying on damages alone.

The Motion Picture Association has backed the lawsuit, with its chairman Charles Rivkin warning that unchecked copyright infringement could undermine millions of jobs and the cultural value created by the American film industry.

MiniMax, based in Shanghai, has not responded publicly to the claims but has previously described itself as a global AI foundation model company with over 157 million users worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Educators rethink assignments as AI becomes widespread

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to manage AI’s impact better. 

The inconsistency often leaves students unsure about acceptable use and challenges educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermining it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only works of human authorship are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting, a credit later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!