Educators rethink assignments as AI becomes widespread

Educators are confronting a new reality as AI tools like ChatGPT become widespread among students. Traditional take-home assignments and essays are increasingly at risk as students commonly use AI chatbots to complete schoolwork.

Schools are responding by moving more writing tasks into the classroom and monitoring student activity. Teachers are also integrating AI into lessons, teaching students how to use it responsibly for research, summarising readings, or improving drafts, rather than as a shortcut to cheat.

Policies on AI use still vary widely. Some classrooms allow AI tools for grammar checks or study aids, while others enforce strict bans. Teachers are shifting away from take-home essays, adopting in-class tests, lockdown browsers, or flipped classrooms to better manage AI’s impact.

The inconsistency often leaves students unsure about what counts as acceptable use and makes it harder for educators to uphold academic integrity.

Institutions like the University of California, Berkeley, and Carnegie Mellon have implemented policies promoting ‘AI literacy,’ explaining when and how AI can be used, and adjusting assessments to prevent misuse.

As AI continues improving, educators seek a balance between embracing technology’s potential and safeguarding academic standards. Teachers emphasise guidance, structured use, and supervision to ensure AI supports learning rather than undermines it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated film sparks copyright battle as it heads to Cannes

OpenAI has taken a significant step into entertainment by backing Critterz, the first animated feature film generated with GPT models.

Human artists sketch characters and scenes, while AI transforms them into moving images. The $30 million project, expected to finish in nine months, is far cheaper and faster than traditional animation and could debut at the Cannes Film Festival in 2026.

Yet the film has triggered a fierce copyright debate in India and beyond. Under India’s Copyright Act of 1957, only human works are protected.

Legal experts argue that while AI can be used as a tool when human skill and judgement are clearly applied, autonomously generated outputs may not qualify for copyright at all.

The uncertainty carries significant risks. Producers may struggle to combat piracy or unauthorised remakes, while streaming platforms and investors could hesitate to support projects without clear ownership rights.

A recent case in which an AI tool was credited as the co-author of a painting, a credit that was later revoked, shows how untested the law remains.

Global approaches vary. The US and the EU require human creativity for copyright, while the UK recognises computer-generated works under certain conditions.

In India, lawyers suggest contracts provide the safest path until the law evolves, with detailed agreements on ownership, revenue sharing and disclosure of AI input.

The government has already set up an expert panel to review the Copyright Act, even as AI-driven projects and trailers rapidly gain popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft brings Anthropic AI into Office 365 as OpenAI tensions rise

The US tech giant Microsoft is expanding its AI strategy by integrating Anthropic’s Claude models into Office 365, adding them to apps like Word, Excel and Outlook instead of relying solely on OpenAI.

Internal tests reportedly showed Anthropic’s systems outperforming OpenAI in specific reasoning and data-processing tasks, prompting Microsoft to adopt a hybrid approach while maintaining OpenAI as a frontier partner.

The shift reflects growing strain between Microsoft and OpenAI, with disputes over intellectual property and cloud infrastructure as well as OpenAI’s plans for greater independence.

By diversifying suppliers, Microsoft reduces risks, lowers costs and positions itself to stay competitive while OpenAI prepares for a potential public offering and develops its own data centres.

Anthropic, backed by Amazon and Google, has built its reputation on safety-focused AI, appealing to Microsoft’s enterprise customers wary of regulatory pressures.

Analysts believe the move could accelerate innovation, spark a ‘multi-model era’ of AI integration, and pressure OpenAI to enhance its technology faster.

The decision comes amid Microsoft’s push to broaden its AI ecosystem, including its in-house MAI-1 model and partnerships with firms like DeepSeek.

Regulators are closely monitoring these developments, given Microsoft’s dominant role in AI investment and the potential antitrust implications of its expanding influence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic AI faces legal setback in authors’ piracy lawsuit

A federal judge has rejected the $1.5 billion settlement Anthropic agreed to in a piracy lawsuit filed by authors.

Judge William Alsup expressed concerns that the deal was ‘nowhere close to complete’ and could be forced on writers without proper input.

The lawsuit involves around 500,000 authors whose works were allegedly used without permission to train Anthropic’s large language models. The proposed settlement would have granted $3,000 per work, a sum far exceeding previous copyright recoveries.

However, the judge criticised the lack of clarity regarding the list of works, authors, notification process, and claim forms.

Alsup instructed the lawyers to provide clear notice to class members and allow them to opt in or out. He also emphasised that Anthropic must be shielded from future claims on the same issue. The court set deadlines for a final list of works by September 15 and approval of all related documents by October 10.

The ruling highlights the ongoing legal challenges facing AI companies that train large language models on copyrighted material rather than relying solely on licensed or public-domain data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Orson Welles lost film reconstructed with AI

More than 80 years after Orson Welles’ The Magnificent Ambersons was cut and lost, AI is being used to restore 43 missing minutes of the film.

Amazon-backed Showrunner, led by Edward Saatchi, is experimenting with AI technology to rebuild the destroyed sequences as part of a broader push to reimagine how Hollywood might use AI in storytelling.

The project is not intended for commercial release, since Showrunner has not secured rights from Warner Bros. or Concord, but instead aims to explore what could have been the director’s original vision.

The initiative marks a shift in the role of AI in filmmaking. Rather than serving only as a tool for effects, dubbing or storyboarding, it is being positioned as a foundation for long-form narrative creation.

Showrunner is developing AI models capable of sustaining complex plots, with the goal of eventually generating entire films. Saatchi envisions the platform as a type of ‘Netflix of AI,’ where audiences might one day interact with intellectual property and generate their own stories.

To reconstruct The Magnificent Ambersons, the company is combining traditional techniques with AI tools. New sequences will be shot with actors, while AI will be used for face and pose transfer to replicate the original cast.

Thousands of archival set photographs are being used to digitally recreate the film’s environments.

Filmmaker Brian Rose, who has rebuilt 30,000 missing frames over five years, has reconstructed set movements and timing to match the lost scenes, while VFX expert Tom Clive will assist in refining the likenesses of the original actors.

The project underlines both the creative possibilities and the ethical tensions surrounding AI in cinema. While the reconstructed footage will not be commercially exploited, it raises questions about the use of copyrighted material in training AI and the risk of replacing human creators.

For many, however, the experiment offers a glimpse of what Welles’ ambitious work might have looked like had it survived intact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trilateral quantum talks highlight innovation and security priorities

The United States, Japan, and South Korea held two Trilateral Quantum Cooperation meetings this week in Seoul and Tokyo. Officials and experts from government and industry gathered to discuss securing quantum ecosystems against cyber, physical, and intellectual property threats.

The US State Department stressed that joint efforts will ensure breakthroughs in quantum computing benefit citizens while safeguarding innovation. Officials said cooperation is essential as quantum technologies could reshape industries, global power balances, and economic prosperity.

The President of South Korea, Lee Jae Myung, described the partnership as entering a ‘golden era’, noting that Seoul, Washington, and Tokyo must work together both to address North Korea and to drive technological progress.

The talks come as Paul Dabbar, the former CEO of Bohr Quantum Technology, begins his role as US Deputy Secretary of Commerce. Dabbar brings experience in deploying emerging quantum network technologies to the new trilateral framework.

North Korea has also signalled interest in quantum computing for economic development. Analysts note that quantum computers’ lower energy demand compared with supercomputers could appeal to a country plagued by chronic power shortages.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple sued over use of pirated books in AI training

Apple is facing a new copyright lawsuit after two authors alleged the company used pirated copies of their books to train its OpenELM AI models. Filed in Northern California, the case claims Apple used the authors’ works without permission, payment, or credit.

The lawsuit seeks class-action status, adding Apple to a growing list of technology firms accused of misusing copyrighted works for AI training.

The action comes amid a wider legal storm engulfing AI companies. Anthropic recently agreed to a $1.5 billion settlement with authors who alleged its Claude chatbot was trained on their works without authorisation, in what lawyers called the most significant copyright recovery in history.

Microsoft, Meta, and OpenAI also face similar lawsuits over alleged reliance on unlicensed material in their datasets.

Analysts warn Apple could face heightened scrutiny given its reputation as a privacy-focused company. Any finding that its AI models were trained on pirated material could cause serious reputational harm alongside potential financial penalties.

The case also underscores the broader unresolved debate over whether AI training constitutes fair use or unlawful exploitation of creative works.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic settles $1.5 billion copyright case with authors

The AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing the company of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book plus interest, while agreeing to destroy datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that sourcing AI training data from pirate websites is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge said using the books to train Claude was ‘transformative’ and qualified as fair use.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding at a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!