Copyright lawsuits against OpenAI and Microsoft combined in AI showdown

Twelve copyright lawsuits filed against OpenAI and Microsoft have been consolidated into a single proceeding in the Southern District of New York.

The US Judicial Panel on Multidistrict Litigation decided to consolidate the cases despite objections from many plaintiffs, who argued their suits were too distinct.

The lawsuits claim that OpenAI and Microsoft used copyrighted books and journalistic works without consent to train AI tools like ChatGPT and Copilot.

The plaintiffs include high-profile authors—Ta-Nehisi Coates, Sarah Silverman, Junot Díaz—and major media outlets such as The New York Times and Daily News.

The panel justified the centralisation by citing shared factual questions and the benefits of unified pretrial proceedings, including streamlined discovery and avoidance of conflicting rulings.

OpenAI has defended its use of publicly available data under the legal doctrine of ‘fair use.’

A spokesperson stated the company welcomed the consolidation and looked forward to proving that its practices are lawful and support innovation. Microsoft has not yet issued a comment on the ruling.

The authors’ attorney, Steven Lieberman, countered that this is about large-scale theft. He emphasised that both Microsoft and OpenAI have, in their view, infringed on millions of protected works.

Some of the same authors are also suing Meta, alleging the company trained its models using books from the shadow library LibGen, which houses over 7.5 million titles.

Simultaneously, Meta faced backlash in the UK, where authors protested outside the company’s London office. The demonstration focused on Meta’s alleged use of pirated literature in its AI training datasets.

The Society of Authors has called the actions illegal and harmful to writers’ livelihoods.

Amazon also entered the copyright discussion this week, confirming its new Kindle ‘Recaps’ feature uses generative AI to summarise book plots.

While Amazon claims accuracy, concerns have emerged online about the reliability of AI-generated summaries.

In the UK, lawmakers are also reconsidering copyright exemptions for AI companies, facing growing pressure from creative industry advocates.

The debate over how AI models access and use copyrighted material is intensifying, and the decisions made in courtrooms and parliaments could radically change the digital publishing landscape.

Sam Altman’s AI cricket post fuels India speculation

A seemingly light-hearted social media post by OpenAI CEO Sam Altman has stirred a wave of curiosity and scepticism in India. Altman shared an AI-generated anime image of himself as a cricket player dressed in an Indian jersey, which quickly went viral among Indian users.

While some saw it as a fun gesture, others questioned the timing and motives, speculating whether it was part of a broader strategy to woo Indian audiences. This isn’t the first time Altman has publicly praised India.

In recent weeks, he lauded the country’s rapid adoption of AI technology, calling it ‘amazing to watch’ and even said it was outpacing the rest of the world. His comments marked a shift from a more dismissive stance during a 2023 visit when he doubted India’s potential to compete with OpenAI’s large-scale models.

However, during his return visit in February 2025, he expressed interest in collaborating with Indian authorities on affordable AI solutions. The timing of Altman’s praise coincides with a surge in Indian users on OpenAI’s platforms, now the company’s second-largest market.

Meanwhile, OpenAI faces a legal tussle with several Indian media outlets over the alleged misuse of their content. Despite this, the potential of India’s booming AI market, projected to hit $8 billion by 2025, makes the country a critical frontier for global tech firms.

Experts argue that Altman’s overtures are more about business than sentiment. With increasing competition from rival AI models like DeepSeek and Gemini, maintaining and growing OpenAI’s Indian user base has become vital. As technology analyst Nikhil Pahwa said, ‘There’s no real love; it’s just business.’

Thailand strengthens cybersecurity with Google Cloud

Thailand’s National Cyber Security Agency (NCSA) has joined forces with Google Cloud to strengthen the country’s cyber resilience, using AI-based tools and shared threat intelligence instead of relying solely on traditional defences.

The collaboration aims to better protect public agencies and citizens against increasingly sophisticated cyber threats.

A key part of the initiative involves deploying Google Cloud Cybershield for centralised monitoring of security events across government bodies. Instead of having fragmented monitoring systems, this unified approach will help streamline incident detection and response.

The partnership also brings advanced training for cybersecurity personnel in the public sector, alongside regular threat intelligence sharing.

Google Cloud Web Risk will be integrated into government operations to automatically block websites hosting malware and phishing content, instead of relying on manual checks.
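
For context, a URL check of this kind is exposed through the Web Risk API. Below is a minimal sketch using the google-cloud-webrisk Python client (valid Google Cloud credentials are assumed, and the URL is Google’s public Safe Browsing test page); it illustrates the mechanism rather than describing the NCSA’s actual deployment:

```python
# Minimal sketch: checking a URL against Google Web Risk threat lists.
# Assumes the google-cloud-webrisk package and valid Google Cloud credentials.
from google.cloud import webrisk_v1

client = webrisk_v1.WebRiskServiceClient()

# Google's public Safe Browsing test page, used here as a harmless example.
response = client.search_uris(
    uri="http://testsafebrowsing.appspot.com/s/malware.html",
    threat_types=[
        webrisk_v1.ThreatType.MALWARE,
        webrisk_v1.ThreatType.SOCIAL_ENGINEERING,
    ],
)

if response.threat.threat_types:
    print("Unsafe URL, block it:", list(response.threat.threat_types))
else:
    print("No known threats for this URL.")
```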

Google further noted the impact of its anti-scam technology in Google Play Protect, which has prevented over 6.6 million high-risk app installation attempts in Thailand since its 2024 launch—enhancing mobile safety for millions of users.

TikTok deal stalled amid US-China trade tensions

Negotiations to divest TikTok’s US operations have been halted following China’s indication that it would not approve the deal. The development came after President Donald Trump announced increased tariffs on Chinese imports.

The proposed arrangement involved creating a new US-based company to manage TikTok’s American operations, with US investors holding a majority stake and ByteDance retaining less than 20%. This plan had received approvals from existing and new investors, ByteDance, and the US government.

In response to the stalled negotiations, President Trump extended the deadline for ByteDance to sell TikTok’s US assets by 75 days, aiming to allow more time for securing necessary approvals.

He emphasised the desire to continue collaborating with TikTok and China to finalise the deal, expressing a preference to avoid shutting down the app in the US.

The future of TikTok in the US remains unpredictable as geopolitical tensions and trade disputes continue to influence the negotiations.

On one side, such a reaction from the Chinese government was to be expected in response to the increase in US tariffs on Chinese products. On the other, by extending the deadline, Trump can maintain his protectionist policy while winning sympathy from the app’s 170 million US users, in whose eyes TikTok is now a victim facing a potential ban if the US-China trade war does not ease and no resolution is reached within the extended timeframe.

Meta unveils Llama 4 models to boost AI across platforms

Meta has launched Llama 4, its latest and most advanced family of open-weight AI models, aiming to enhance the intelligence of Meta AI across services like WhatsApp, Instagram, and Messenger.

Instead of keeping these models cloud-restricted, Meta has made them available for download through its official Llama website and Hugging Face, encouraging wider developer access.

Two models, Llama 4 Scout and Maverick, are now publicly available. Scout, the lighter model with 17 billion active parameters, supports a 10 million-token context window and can run on a single Nvidia H100 GPU.

It outperforms rivals like Google’s Gemma 3 and Mistral 3.1 in benchmark tests. Maverick, the more capable model, uses the same number of active parameters but with 128 experts, offering competitive performance against GPT-4o and DeepSeek v3 while being more efficient.

Meta also revealed the Llama 4 Behemoth model, still in training, which serves as a teacher for the rest of the Llama 4 line. Instead of targeting lightweight use, Behemoth focuses on heavy multimodal tasks with 288 billion active parameters and nearly two trillion in total.

Meta claims it outpaces GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro in key STEM-related evaluations.

These open-weight AI models allow local deployment instead of relying on cloud APIs, though some licensing limits may apply. With Scout and Maverick already accessible, Meta is gradually integrating Llama 4 capabilities into its messaging and social platforms worldwide.
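
As an illustration of that local deployment path, the sketch below loads Scout with the Hugging Face transformers library. The model identifier is an assumption (check the hub for the exact name), access requires accepting Meta’s licence, and the weights demand substantial GPU memory (Meta cites a single Nvidia H100 for Scout):

```python
# Minimal sketch: running Llama 4 Scout locally via Hugging Face transformers.
# The model identifier below is an assumption; downloading the weights
# requires accepting Meta's licence on the Llama site or the Hugging Face hub.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed identifier
    device_map="auto",  # spread the weights across available GPUs
)

result = generator("The Llama 4 model family introduces", max_new_tokens=64)
print(result[0]["generated_text"])
```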

National Crime Agency responds to AI crime warning

The National Crime Agency (NCA) has pledged to ‘closely examine’ recommendations from the Alan Turing Institute after a recent report highlighted the UK’s insufficient preparedness for AI-enabled crime.

The report, from the Centre for Emerging Technology and Security (CETaS), urges the NCA to create a task force to address AI crime within the next five years.

Despite AI-enabled crime being in its early stages, the report warns that criminals are rapidly advancing their use of AI, outpacing law enforcement’s ability to respond.

CETaS claims that UK police forces have been slow to adopt AI themselves, which could leave them vulnerable to increasingly sophisticated crimes, such as child sexual abuse, cybercrime, and fraud.

The Alan Turing Institute emphasises that although AI-specific legislation may be needed eventually, the immediate priority is for law enforcement to integrate AI into their crime-fighting efforts.

Such an initiative would involve using AI tools to combat AI-enabled crime effectively, as fraudsters and other criminals increasingly exploit AI’s potential to deceive.

While AI crime remains a relatively new phenomenon, recent examples such as the $25 million deepfake CFO fraud show the growing threat.

The report also highlights the role of AI in phishing scams, romance fraud, and other deceptive practices, warning that future AI-driven crimes may become harder to detect as technology evolves.

New Jersey criminalises the harmful use of AI deepfakes

New Jersey has become one of several US states to criminalise the creation and distribution of deceptive AI-generated media, commonly known as deepfakes. Governor Phil Murphy signed the legislation on Wednesday, introducing civil and criminal penalties for those who produce or share such media.

If deepfakes are used to commit further crimes like harassment, they may now be treated as a third-degree offence, punishable by fines of up to $30,000 or up to five years in prison.

The bill was inspired by a disturbing incident at a New Jersey school where students shared explicit AI-generated images of a classmate.

Governor Murphy had initially vetoed the legislation in March, calling for changes to reduce the risk of constitutional challenges. Lawmakers later amended the bill, which passed with overwhelming support in both chambers.

Instead of ignoring the threat posed by deepfakes, the law aims to deter their misuse while preserving legitimate applications of AI.

‘This legislation takes a proactive approach,’ said Representative Lou Greenwald, one of the bill’s sponsors. ‘We are safeguarding New Jersey residents and offering justice to victims of digital abuse.’

A growing number of US states are taking similar action, particularly around election integrity and online harassment. While 27 states now target AI-generated sexual content, others have introduced measures to limit political deepfakes.

States like Texas and Minnesota have banned deceptive political media outright, while Florida and Wisconsin require clear disclosures. New Jersey’s move reflects a broader push to keep pace with rapidly evolving technology and its impact on public trust and safety.

Anthropic introduces Claude to revolutionise learning and teaching

Claude for Education, launched by Anthropic, introduces a specialised AI for higher education, aiming to support universities in teaching, learning, and administration.

The initiative includes key features like Learning mode, full campus access for top universities, and partnerships with organisations like Internet2 and Instructure to integrate AI into academic tools.

Learning mode helps students develop critical thinking by guiding them through problems with Socratic questioning instead of providing direct answers. It also offers templates for research and study.
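
For illustration only, and not Anthropic’s actual implementation: a Socratic tutoring behaviour of this kind can be approximated with a system prompt through the Anthropic API. The model name and prompt wording below are assumptions:

```python
# Illustrative sketch: approximating a Socratic 'Learning mode' with a system
# prompt via the Anthropic Python SDK. Not Anthropic's implementation; the
# model name is an assumption. Reads ANTHROPIC_API_KEY from the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-latest",  # assumed model identifier
    max_tokens=512,
    system=(
        "You are a tutor. Do not give final answers directly. Guide the "
        "student with Socratic questions that surface the next reasoning "
        "step, and ask them to attempt it before you continue."
    ),
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.content[0].text)
```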

Key academic partnerships include Northeastern University, London School of Economics, and Champlain College, all of which will benefit from campus-wide access to Claude. These partnerships ensure AI’s responsible integration and accessibility for all students.

New student programmes, such as the Claude Campus Ambassadors and API credit initiatives, provide opportunities for students to engage with and build on AI tools.

The launch also coincides with efforts to integrate AI into the academic plans of institutions like Northeastern University, which is pioneering AI adoption in higher education with its ‘Northeastern 2025’ initiative.

AppLovin joins TikTok takeover frenzy

As the 5 April deadline approaches for TikTok to secure a non-Chinese buyer or face a US ban, the list of potential acquirers continues to grow.

Marketing platform AppLovin has submitted a preliminary bid to acquire TikTok’s operations outside of China, aiming to expand its footprint in the global digital advertising arena.

AppLovin’s move adds to the mounting interest in TikTok, with Amazon and a consortium led by OnlyFans founder Tim Stokely also entering the fray.

These developments come amid US government concerns over TikTok’s Chinese ownership, which officials argue poses national security risks, a claim that TikTok and its parent company, ByteDance, have consistently denied.

The White House has taken an unusually active role in facilitating the sale.

President Donald Trump has indicated openness to a deal in which China approves the transaction in exchange for relief from US tariffs on Chinese imports.

This intertwining of trade negotiations and tech acquisitions underscores the complex geopolitical landscape influencing the fate of TikTok in the US.

Private equity firm Blackstone is also evaluating a minority investment in TikTok’s US operations, potentially joining non-Chinese shareholders like Susquehanna International Group and General Atlantic in contributing fresh capital.

The future of TikTok, an app used by nearly half of all Americans, remains uncertain as the deadline looms and negotiations continue.

Authors in London protest Meta’s copyright violations

A wave of protest has hit Meta’s London headquarters today as authors and publishing professionals gather to voice their outrage over the tech giant’s reported use of pirated books to develop AI tools.

Among the protesters are acclaimed novelists Kate Mosse and Tracy Chevalier and poet Daljit Nagra, who assembled in Granary Square near Meta’s King’s Cross office to deliver a complaint letter from the Society of Authors (SoA).

At the heart of the protest is Meta’s alleged reliance on LibGen, a so-called ‘shadow library’ known for hosting over 7.5 million books, many without the consent of their authors.

A recent searchable database published by The Atlantic revealed that thousands of copyrighted works, including those by renowned authors, may have been used to train Meta’s AI models, provoking public outcry and legal action in the US.

Vanessa Fox O’Loughlin, chair of the SoA, condemned Meta’s reported actions as ‘illegal, shocking, and utterly devastating for writers,’ arguing that such practices devalue authors’ time and creativity.

‘A book can take a year or longer to write. Meta has stolen books so that their AI can reproduce creative content, potentially putting these same authors out of business,’ she said.

Meta has denied any wrongdoing, with a spokesperson stating that the company respects intellectual property rights and believes its AI training practices comply with existing laws.

Still, the damage to trust within the creative community appears significant. Author AJ West, who discovered his novels were listed on LibGen, described the experience as a personal violation:

‘I was horrified to see that my novels were on the LibGen database, and I’m disgusted by the government’s silence on the matter,’ he said, adding, ‘To have my beautiful books ripped off like this without my permission and without a penny of compensation then fed to the AI monster feels like I’ve been mugged.’

Legal action is already underway in the US, where a group of high-profile writers, including Ta-Nehisi Coates, Junot Díaz, and Sarah Silverman, have filed a lawsuit against Meta for copyright infringement.

The suit alleges that Meta CEO Mark Zuckerberg and other top executives knew that LibGen hosted pirated content when they greenlit its use for AI development.

The protest is also aimed at UK lawmakers. Authors like Richard Osman and Kazuo Ishiguro have joined the call for British officials to summon Meta executives before parliament.

The Society of Authors has launched a petition on Change.org that has already attracted over 7,000 signatures.

Demonstrators were urged to bring placards and spread their message online using hashtags like #MetaBookThieves and #MakeItFair as they rally against alleged copyright violations and for broader protection of creative work in the age of AI.

The case, one of many, illustrates the increasingly tense relationship between the tech industry and policies governing the content and data used to train AI systems. Such systems depend heavily on the written word, drawing on the widest range of literature, facts, and information in order to respond to varied user requests and to remain accurate in their answers.

For more information on these topics, visit diplomacy.edu.