The decision follows calls from the technology sector for an exemption allowing AI developers to use copyrighted material without permission or payment.
Attorney-General Michelle Rowland said the Government aims to support innovation and creativity but will not weaken existing copyright protections. The Government plans to explore fair licensing options to support AI innovation while ensuring creators are paid fairly.
The Copyright and AI Reference Group will focus on fair AI use, clearer copyright rules for AI-generated works, and simpler enforcement through a possible small claims forum.
The Government said Australia must prepare for AI-related copyright challenges while keeping strong protections for creators. Collaboration between the technology and creative sectors will be essential to ensure that AI development benefits everyone.
A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut the damages owed to Meta from $168 million to $4 million.
The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.
An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’
Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.
Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.
The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or the protection of civil liberties, but often do so under deregulatory frameworks or with only voluntary oversight.
For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.
Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are formally placed at the centre of laws and frameworks on copyright, free speech, and democratic values, yet real influence, rights protection, and liability often remain minimal.
He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.
YouTube has agreed to a $24.5 million settlement to resolve a lawsuit filed by President Donald Trump, stemming from the platform’s decision to suspend his account after the 6 January 2021 Capitol riot.
The lawsuit was part of a broader legal push by Trump against major tech companies over what he calls politically motivated censorship.
As part of the deal, YouTube will donate $22 million to the Trust for the National Mall on Trump’s behalf, funding a new $200 million White House ballroom project. Another $2.5 million will go to co-plaintiffs, including the American Conservative Union and author Naomi Wolf.
The settlement includes no admission of wrongdoing by YouTube and was intended to avoid further legal costs. The move follows similar multimillion-dollar settlements by Meta and X, which also suspended Trump’s accounts post-January 6.
Critics argue the settlement signals a retreat from consistent content moderation. Media scholar Timothy Koskie warned it sets a troubling precedent for global digital governance and selective enforcement.
Business Insider has issued a memo saying journalists may use AI to help draft stories, while making it clear that authors remain fully responsible for what is published under their names.
The guidelines define what kinds of AI use are permitted, such as assisting with research or generating draft text, but stress that final edits, fact-checking, and the author’s voice must be preserved.
Some staff welcomed the clarity after months of uncertainty, saying the new policy could help speed up routine work. Others raised concerns about preserving editorial quality and resisting over-reliance on AI for creativity or original insight.
OpenAI is set to argue in an Ontario court that a copyright lawsuit by Canadian news publishers should be heard in the United States. The case, the first of its kind in Canada, alleges that OpenAI scraped Canadian news content to train ChatGPT without permission or payment.
The coalition of publishers, including CBC/Radio-Canada, The Globe and Mail, and Postmedia, says the material was created and hosted in Ontario, making the province the proper venue. They warn that accepting OpenAI’s stance would undermine Canadian sovereignty in the digital economy.
OpenAI, however, says the training of its models and web crawling occurred outside Canada and that the Copyright Act cannot apply extraterritorially. It argues the publishers are politicising the case by framing it as a matter of sovereignty rather than jurisdiction.
The dispute reflects a broader global clash over how generative AI systems use copyrighted works. US courts are already handling several similar cases, though no clear precedent has been established on whether such use qualifies as fair use.
Publishers argue Canadian courts must decide the matter domestically, while OpenAI insists it belongs in US courts. The outcome could shape how copyright laws apply to AI training and digital content across borders.
The ban, which blocked access to 26 social media sites including WhatsApp, Facebook, Instagram, LinkedIn, and YouTube, was introduced in an effort to curb misinformation, online fraud, and hate speech, according to government officials.
However, critics accused the administration of using the restrictions to stifle dissent and silence public outrage.
Thousands of demonstrators took to the streets in Kathmandu and other major cities in Nepal, voicing frustration over rising unemployment, inflation, and what they described as a lack of accountability among political leaders.
The protests quickly gained momentum, with digital freedom becoming a central theme alongside anti-corruption demands.
The Office of the UN High Commissioner for Human Rights addressed the situation, stating: “We have received several deeply worrying allegations of unnecessary or disproportionate use of force by security forces during protests organized by youth groups demonstrating against corruption and the recent Government ban on social media platforms.”
UK publishers warn that Google’s AI Overviews significantly cut website traffic, threatening fragile online revenues.
Reach, owner of the Mirror and Daily Express, said readers often settle for the AI summary instead of visiting its sites. DMG Media told regulators that click-through rates had fallen by up to 89% since the rollout.
Publishers argue that they provide accurate reporting that fuels Google’s search results, yet they see no financial return when users no longer click through. Concerns are growing over Google’s conversational AI Mode, which displays even fewer links.
Google insists that search traffic has remained stable year-on-year and claims that AI Overviews offer users more opportunities to find quality links. Still, a coalition of publishers has filed a complaint with the UK Competition and Markets Authority, alleging misuse of their content.
Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.
Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.
His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.
The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.
Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project launched in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.
Perplexity has announced Comet Plus, a new service that will pay premium publishers to provide high-quality news content as an alternative to clickbait. The company has not disclosed its roster of partners or payment structure, though reports suggest a pool of $42.5 million.
Publishers have long criticised AI services for exploiting their work without compensation. Perplexity, backed by Amazon founder Jeff Bezos, said Comet Plus will create a fairer system and reward journalists for producing trusted content in the era of AI.
The platform introduces a revenue model based on three streams: human visits, search citations, and agent actions. Perplexity argues this approach better reflects how people consume information today, whether by browsing manually, seeking AI-generated answers, or using AI agents.
The company stated that the initiative aims to rebuild trust between readers and publishers, while ensuring that journalism thrives in a changing digital economy. The initial group of publishing partners will be revealed later.