Canadian news publishers clash with OpenAI in landmark copyright case

OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.

The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.

OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.

The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.

Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Nepal lifts social media ban after protests

The Nepali government has lifted its ban on major social media platforms following days of nationwide protests led largely by youth demanding action against corruption.

The ban, which blocked access to 26 social media sites including WhatsApp, Facebook, Instagram, LinkedIn, and YouTube, was introduced in an effort to curb misinformation, online fraud, and hate speech, according to government officials.

However, critics accused the administration of using the restrictions to stifle dissent and silence public outrage.

Thousands of demonstrators took to the streets in Kathmandu and other major cities in Nepal, voicing frustration over rising unemployment, inflation, and what they described as a lack of accountability among political leaders.

The protests quickly gained momentum, with digital freedom becoming a central theme alongside anti-corruption demands.

The Office of the UN High Commissioner for Human Rights addressed the situation, stating: “We have received several deeply worrying allegations of unnecessary or disproportionate use of force by security forces during protests organized by youth groups demonstrating against corruption and the recent Government ban on social media platforms.”

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK publishers fear Google AI summaries hit revenues

UK publishers warn that Google’s AI Overviews significantly cut website traffic, threatening fragile online revenues.

Reach, owner of the Mirror and Daily Express, said readers often settle for the AI summary instead of visiting its sites. DMG Media told regulators that click-through rates had fallen by up to 89% since the rollout.

Publishers argue that they provide accurate reporting that fuels Google’s search results, yet they see no financial return when users no longer click through. Concerns are growing over Google’s conversational AI Mode, which displays even fewer links.

Google insists that search traffic has remained stable year-on-year and claims that AI Overviews offer users more opportunities to find quality links. Still, a coalition of publishers has filed a complaint with the UK Competition and Markets Authority, alleging misuse of their content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project founded in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Publishers set to earn from Comet Plus, Perplexity’s new initiative

Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.

Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon founder Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.

The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.

The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Trump threatens sanctions on EU over Digital Services Act

Only five days after the Joint Statement on a United States-European Union Framework on an Agreement on Reciprocal, Fair and Balanced Trade (the ‘Framework Agreement’), the Trump administration is weighing an unprecedented step against the EU over its new tech rules.

According to The Japan Times and Reuters, US officials are discussing sanctions on the EU or member state representatives responsible for implementing the Digital Services Act (DSA), a sweeping law that forces online platforms to police illegal content. Washington argues the regulation censors Americans and unfairly burdens US companies.

While governments often complain about foreign rules they deem restrictive, directly sanctioning allied officials would mark a sharp escalation. So far, discussions have centred on possible visa bans, though no decision has been made.

Last week, internal State Department meetings focused on whom such measures might target. Secretary of State Marco Rubio has ordered US diplomats in Europe to lobby against the DSA, urging allies to amend or repeal the law.

Washington insists that the EU is curbing freedom of speech under the banner of combating hate speech and misinformation, while the EU maintains that the act is designed to protect citizens from illegal material such as child exploitation and extremist propaganda.

‘Freedom of expression is a fundamental right in the EU. It lies at the heart of the DSA,’ an EU Commission spokesperson said, rejecting US accusations as ‘completely unfounded.’

Trump has framed the dispute in broader terms, threatening tariffs and export restrictions on any country that imposes digital regulations he deems discriminatory. In recent months, he has repeatedly warned that measures like the DSA, or national digital taxes, are veiled attacks on US companies and conservative voices online. At the same time, the administration has not hesitated to sanction foreign officials in other contexts, including a Brazilian judge overseeing cases against Trump ally Jair Bolsonaro.

US leaders, including Vice President JD Vance, have accused European authorities of suppressing right-wing parties and restricting debate on issues such as immigration. In contrast, European officials argue that their rules are about fairness and safety and do not silence political viewpoints. At a transatlantic conference earlier this year, Vance stunned European counterparts by charging that the EU was undermining democracy, remarks that underscored the widening gap.

The question remains whether Washington will take the extraordinary step of sanctioning officials in Brussels or the EU capitals. Such action could further destabilise an already fragile trade relationship while putting the US squarely at odds with Europe over the future of digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from socioeconomic disadvantage to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the rise of AI across the globe also signals a deeper digitalisation of our legal systems.

While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles. 

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR has piloted an AI agent in Jordan to reduce legal communication barriers: the tool transcribes, translates, and organises refugee queries, helping staff streamline caseload management and keep operations running smoothly even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to give victims of serious sexual crimes access to judges’ sentencing remarks and explanations of legal language. These tools enhance transparency for victims, especially those seeking emotional closure.

Even though these programmes are still at the pilot stage, a UNESCO survey found that 44% of judicial workers across 96 countries are already using AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.

AI tools help judges prepare judgments for various cases, as well as streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis, based on prior case data, may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.

Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black-box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment, rather than outright replacing it. Whether AI is biased by its training data or simply becomes a black box over time, its use requires foresighted governance and meaningful human oversight.

Image via Pixabay / jessica45

Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Legal professionals often lack the training and skills needed to use AI tools effectively. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully. It may also lead to misuse of the technology, such as unverified translations that result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned about this concern in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.

The incorporation of AI into the legal system threatens to undermine what public faith remains. Meanwhile, those without access to digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.

The policy path forward

As noted above, for AI to be integrated into legal systems and help bridge the justice gap, it must take on the role of assisting human judges, lawyers, and other legal actors; it cannot replace them. For AI to assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the speed at which AI is being developed and integrated. They belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems. 

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing cost. AI has the potential to be a tool for the application of justice and create powerful improvements to inclusion in our legal systems.

However, it also poses the risk of deepening inequalities and eroding public trust. AI integration must be governed by human rights norms of transparency and accountability. Regulation is possible through education and discussion predicated on adherence to ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools remain contestable and serve as human-centric support.

Image via Pixabay / souandresantana

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot




Elon Musk calls Grok’s brief suspension a dumb error

Elon Musk’s AI chatbot Grok was briefly suspended from X, then returned without its verification badge and with a controversial video pinned to its replies. Confusing and contradictory explanations appeared in multiple languages, leaving users puzzled.

English posts blamed hateful conduct and Israel-Gaza comments, while French and Portuguese messages mentioned crime stats or technical bugs. Musk called the situation a ‘dumb error’ and admitted Grok was unsure why it had been suspended.

Grok’s suspension follows earlier controversies, including antisemitic remarks and introducing itself as ‘MechaHitler.’ xAI blamed outdated code and internet memes, revealing that Grok often referenced Musk’s public statements on sensitive topics.

The company has updated the chatbot’s prompts and promised ongoing monitoring, amid internal tensions and staff resignations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI news summaries to affect the future of journalism

Generative AI tools like ChatGPT significantly impact traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search, users click suggested links half as often as in traditional search formats.

Matt Karolian of Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away.’

While some, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, these numbers pale compared to other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so that AI tools are more likely to use and cite it. Some have blocked crawlers to prevent data harvesting, while others have reopened access to retain visibility.
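For readers curious what such blocking looks like in practice, a minimal sketch of a publisher's robots.txt file is shown below. GPTBot and Google-Extended are crawler tokens documented by OpenAI and Google respectively; which agents any particular publisher actually blocks is their own choice and is not specified in the reporting above.

    # Ask OpenAI's training crawler to stay off the whole site
    User-agent: GPTBot
    Disallow: /

    # Opt out of Google's AI training uses without affecting normal search indexing
    User-agent: Google-Extended
    Disallow: /

A directive like this only asks compliant crawlers to stay away; it does not technically prevent scraping, which is one reason some publishers pair it with licensing negotiations or legal action.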

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X challenges India’s expanded social media censorship in court

Tensions have escalated between Elon Musk’s social media platform, X, and the Indian government over extensive online content censorship measures.

The dispute was triggered by a seemingly harmless post describing a senior politician as ‘useless’ and quickly spiralled into a significant legal confrontation.

X has accused Prime Minister Narendra Modi’s administration of overstepping constitutional bounds by empowering numerous government bodies to issue content-removal orders, significantly expanding the scope of India’s digital censorship.

At the heart of the dispute lies India’s increased social media content regulation since 2023, including launching the Sahyog platform, a centralised portal facilitating direct content-removal orders from officials to tech firms.

X refused to participate in Sahyog, labelling it a ‘censorship portal,’ and subsequently filed a lawsuit in the Karnataka High Court earlier this year, contesting the legality of India’s directives and the Sahyog portal, which it claims undermine free speech.

Indian authorities justify their intensified oversight by pointing to the need to control misinformation, safeguard national security, and prevent societal discord. They argue that the measures have broad support within the tech community. Indeed, major players like Google and Meta have reportedly complied without public protest, though both companies have declined to comment on their stance.

However, the court documents reveal that the scope of India’s censorship requests extends far beyond misinformation.

Authorities have reportedly targeted satirical cartoons depicting politicians unfavourably, criticism of the government’s preparedness for natural disasters, and even media coverage of serious public incidents such as a deadly stampede at a railway station.

While Musk and Prime Minister Modi maintain an outwardly amicable relationship, the conflict presents significant implications for X’s operations in India, one of its largest user bases.

Musk, a self-proclaimed free speech advocate, finds himself at a critical juncture, navigating between principles and the imperative to expand his business ventures within India’s substantial market.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!