Rethinking AI in journalism with global cooperation

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by RNW Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media.

Central to the discussion was the unveiling of an ‘ethical AI checklist’ designed to help organisations uphold human rights, transparency, and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making.

Ernst Noorman, the Dutch Ambassador for Cyber Affairs, called for AI policies rooted in international human rights law, highlighting Europe’s Digital Services Act and AI Act as potential models. Meanwhile, grassroots organisations from the Global South shared real-world challenges, including algorithmic bias, language exclusion, and environmental impacts.

Taysir Mathlouthi of 7amleh detailed efforts to build localised AI models in Arabic and Hebrew, while Nepal’s Yuva organisation, represented by Sanskriti Panday, explained how small NGOs balance the ethical use of generative tools like ChatGPT with limited resources. The Global Forum for Media Development’s Laura Becana Ball introduced the Journalism Cloud Alliance, a collective aimed at making AI tools more accessible and affordable for newsrooms.

Despite enthusiasm, participants acknowledged hurdles such as checklist fatigue, lack of capacity, and the need for AI literacy training. Still, there was a shared sense of urgency and optimism, with the consensus that ethical frameworks must be embedded from the outset of AI development and not bolted on as an afterthought.

In closing, organisers invited civil society and media groups to endorse the Haarlem Declaration and co-create practical tools for ethical AI governance. While challenges remain, the forum set a clear agenda: ethical AI in media must be inclusive, accountable, and co-designed by those most affected by its implementation.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Parliamentarians at IGF 2025 call for action on information integrity

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global lawmakers and experts gathered to confront one of the most pressing challenges of our digital era: the societal impact of misinformation and disinformation, especially amid the rapid advance of AI. Framed by the UN Global Principles for Information Integrity, the session spotlighted the urgent need for resilient, democratic responses to the online erosion of public trust.

AI’s disruptive power took centre stage, with speakers citing alarming trends: deepfakes were used to manipulate election narratives in more than a third of the national elections held in 2024 alone. Experts like Lindsay Gorman of the German Marshall Fund warned of a polluted digital ecosystem in which fabricated video and audio now threaten core democratic processes.

UNESCO’s Marjorie Buchser expanded on the concern, noting that generative AI not only enables manipulation but also redefines how people access information, often diverting users from traditional journalism toward context-stripped AI outputs. Regulation alone, however, was not touted as a panacea.

Instead, panellists promoted ‘democracy-affirming technologies’ that embed transparency, accountability, and human rights at their foundation. The conversation urged greater investment in open, diverse digital ecosystems, particularly those supporting low-resource languages and underrepresented cultures. At the same time, multiple voices called for more equitable research, warning that Western-centric data and governance models skew current efforts.

In the end, a recurring theme echoed across the room: tackling information manipulation is a collective endeavour that demands multistakeholder cooperation. From enforcing technical standards to amplifying independent journalism and bolstering AI literacy, participants called for governments, civil society, and the tech industry to build unified, future-proof solutions that protect democratic integrity while preserving the fundamental right to free expression.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Social media overtakes TV as main news source in the US

Social media and video platforms have officially overtaken traditional television and news websites as the primary way Americans consume news, according to new research from the Reuters Institute. Over half of respondents (54%) now turn to platforms like Facebook, YouTube, and X (formerly Twitter) for their news, surpassing TV (50%) and dedicated news websites or apps (48%).

The study highlights the growing dominance of personality-driven news, particularly through social video, with figures like podcaster Joe Rogan reaching nearly a quarter of the population weekly. That shift poses serious challenges for traditional media outlets as more users gravitate toward influencers and creators who present news in a casual or partisan style.

There is concern, however, about the accuracy of this new media landscape. Nearly half of global respondents identify online influencers as major sources of false or misleading information, on par with politicians.

At the same time, populist leaders are increasingly using sympathetic online hosts to bypass tough questions from mainstream journalists, often spreading unchecked narratives. The report also notes a rise in AI tools for news consumption, especially among Gen Z, though public trust in AI’s ability to deliver reliable news remains low.

Meanwhile, alternative platforms such as Threads and Mastodon have struggled to gain traction. Even as user habits change, one constant remains: people still value reliable news sources, even if they turn to them less often.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Graphite spyware used against European reporters, experts warn

A new surveillance scandal has emerged in Europe as forensic evidence confirms that the Israeli spyware firm Paragon used its Graphite tool to target journalists through zero-click attacks on iOS devices. The attacks, which required no user interaction, exposed sensitive communications and location data.

Citizen Lab, whose findings were relayed by Schneier on Security, identified the spyware on multiple journalists’ devices on 29 April 2025. The discovery marks the first confirmed use of Paragon’s spyware against members of the press, raising alarms over digital privacy and press freedom.

Backed by US investors, Paragon has operated out of Israel under claims of aiding national security. But its spyware is now at the centre of a widening controversy, particularly in Italy, where the government recently ended its contract with the company after two journalists were targeted.

Experts warn that such attacks undermine the confidentiality crucial to journalism and could erode democratic safeguards. Even Apple’s secure devices proved vulnerable, according to Bleeping Computer, highlighting the advanced nature of Graphite.

The incident has sparked calls for tighter international regulation of spyware firms. Without oversight, critics argue, tools meant for fighting crime risk being used to silence dissent and target civil society.

The Paragon case underscores the urgent need for transparency, accountability, and stronger protections in an age of powerful, invisible surveillance tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rights groups condemn Jordan’s media crackdown

At least 12 independent news websites in Jordan have been blocked by the authorities without any formal legal justification or opportunity for appeal. Rights groups have condemned the move as a serious violation of constitutional and international protections for freedom of expression.

The Jordanian Media Commission issued the directive on 14 May 2025, citing vague claims such as ‘spreading media poison’ and ‘targeting national symbols’, without providing evidence or naming the sites publicly.

The timing of the ban suggests it was a retaliatory act against investigative reports alleging profiteering by state institutions in humanitarian aid efforts to Gaza. Affected outlets were subjected to intimidation, and the blocks were imposed without judicial oversight or a transparent legal process.

Observers warn this sets a dangerous precedent, reflecting a broader pattern of repression under Jordan’s Cybercrime Law No. 17 of 2023, which grants sweeping powers to restrict online speech.

Civil society organisations are calling for the immediate reversal of the ban, transparency over its legal basis, and access to judicial remedies for the affected platforms.

They urge a comprehensive review of the cybercrime law to align it with international human rights standards. Press freedom, they argue, is a pillar of democratic society and must not be sacrificed under the guise of combating disinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram founder Durov to address Oslo Freedom Forum remotely amid legal dispute

Telegram founder Pavel Durov will deliver a livestreamed keynote at the Oslo Freedom Forum, following a French court decision barring him from international travel. The Human Rights Foundation (HRF), which organises the annual event, expressed disappointment at the court’s ruling.

Durov, currently under investigation in France, was arrested in August 2024 on charges related to child sexual abuse material (CSAM) distribution and failure to assist law enforcement.

He was released on €5 million bail but ordered to remain in the country and report to police twice a week. Durov maintains the charges are unfounded and says Telegram complies with law enforcement when possible.

Recently, Durov accused French intelligence chief Nicolas Lerner of pressuring him to censor political voices ahead of elections in Romania. France’s DGSE denies the allegation, saying meetings with Durov focused solely on national security threats.

The claim has sparked international debate, with figures like Elon Musk and Edward Snowden defending Durov’s stance on free speech.

Supporters say the legal action against Durov may be politically motivated and warn it could set a dangerous precedent of holding tech executives personally liable for user content. Critics counter that Telegram must do more to moderate harmful material.

Despite legal restrictions, HRF says Durov’s remote participation is vital for ongoing discussions around internet freedom and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry names linked to questionable content. Marco Buscaglia, credited for one piece, admitted to using AI ‘for background’ but said he had failed to verify the sources this time, calling the oversight ‘completely embarrassing.’

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

Under its deal with The Washington Post, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. In all, OpenAI has secured such partnerships with more than 20 news publishers, covering over 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI site faces backlash for copying Southern Oregon news

A major publishing organisation has issued a formal warning to Good Daily News, an AI-powered news aggregator, demanding it cease the unauthorised scraping of content from local news outlets across Southern Oregon and beyond. The News Media Alliance, which represents 2,200 publishers, sent the letter on 25 March, urging the national operator to respect publishers’ rights and stop reproducing material without permission.

Good Daily runs over 350 online ‘local’ news websites across 47 US states, including Daily Medford and Daily Salem in Oregon. Though the platforms appear locally based, they are developed using AI and managed by one individual, Matt Henderson, who has registered mailing addresses in both Ashland, Oregon, and Austin, Texas. Content is reportedly scraped from legitimate local news sites, rewritten by AI, and shared in newsletters, sometimes with source links but often without permission.

News Media Alliance president Danielle Coffey said such practices undermine the time, resources, and revenue of local journalism. Many publishers use digital tools to block automated scrapers, though this comes at a financial cost. The organisation is working with the Oregon Newspaper Publishers Association and exploring legal options. Others in the industry, including Heidi Wright of the Fund for Oregon Rural Journalism, have voiced strong support for the warning, calling for greater action to defend the integrity of local news.
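For context, the first line of defence against scraping is usually the voluntary robots.txt convention: publishers list the crawlers they refuse, and a well-behaved crawler checks the file before fetching a page. Hostile scrapers can simply skip that check, which is why many publishers also pay for bot-mitigation services. Below is a minimal sketch of the compliant side of that handshake, using only Python’s standard library; the site address and user-agent name are hypothetical, not real Good Daily or publisher endpoints.

    # Sketch: how a compliant crawler consults a publisher's robots.txt
    # before fetching an article. Site and user-agent are hypothetical.
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser()
    robots.set_url("https://example-local-news.com/robots.txt")
    robots.read()  # downloads and parses the robots.txt file

    # can_fetch() returns False if this user-agent is disallowed for the URL;
    # a scraper that never performs this check is what publishers object to.
    if robots.can_fetch("ExampleNewsBot", "https://example-local-news.com/article/123"):
        print("Fetch permitted by robots.txt")
    else:
        print("Publisher has opted out; do not scrape")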

For more information on these topics, visit diplomacy.edu.

Russia fines Telegram over extremist content

A Moscow court has fined the messaging platform Telegram 7 million roubles (approximately $80,000) for failing to remove content allegedly promoting terrorist acts and inciting anti-government protests, according to the Russian state news agency TASS.

The court ruled that Telegram did not comply with legal obligations to take down materials deemed extremist, including calls to sabotage railway systems in support of Ukrainian forces and to overthrow the Russian government.

The judgement cited specific Telegram channels accused of distributing such content. Authorities argue that these channels played a role in encouraging public unrest and potentially supporting hostile actions against the Russian state.

The decision adds to the long-standing tension between Russia’s media watchdogs and Telegram, which remains one of the most widely used messaging platforms across Russia and neighbouring countries.

Telegram has not issued a statement in response to the fine, and it is unclear whether the company plans to challenge the court’s ruling.

The platform was founded by Russian-born entrepreneur Pavel Durov and is currently headquartered in Dubai, boasting close to a billion users globally. 

Telegram’s decentralised nature and encrypted messaging features have made it popular among users seeking privacy, but it has also drawn criticism from governments citing national security concerns.

Durov himself returned to Dubai in March after months in France following his 2024 arrest linked to accusations that Telegram was used in connection with fraud, money laundering, and the circulation of illegal content.

Although he has denied any wrongdoing, the incident has further strained the company’s relationship with authorities in Russia.

This latest legal action reflects Russia’s ongoing crackdown on digital platforms accused of facilitating dissent or undermining state control.

With geopolitical tensions still high, especially surrounding the conflict in Ukraine, platforms like Telegram face increasing scrutiny and legal pressure in multiple jurisdictions.