New IRIS report links AI narratives to civic action

A report by the International Resource for Impact and Storytelling (IRIS) examines how organisations worldwide are adapting to AI and algorithm-driven platforms. It focuses on how technology and storytelling are being used to support democracy and counter harmful narratives.

The study draws on insights from 10 organisations, identifying key approaches such as co-opting technology, countering surveillance and disinformation, and innovating in storytelling. These strategies aim to reshape narratives and challenge authoritarian pressures.

Examples include campaigns addressing digital surveillance, projects using journalism to amplify marginalised voices, and creative approaches to civic engagement. The report also highlights the role of artists and storytellers in influencing how AI is understood.

The findings highlight the growing importance of narrative and culture in the digital landscape, as organisations experiment with new forms of communication and resistance. The research reflects global efforts to align AI with democratic values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Ombudsman criticises Commission over X risk report access

The European Ombudsman has criticised the European Commission’s handling of a request for public access to a risk assessment report submitted by social media platform X under the Digital Services Act.

The case concerned a journalist’s request to access X’s 2023 risk assessment report, which large online platforms must provide under the DSA. The Commission refused to assess the report for possible disclosure, arguing that access could undermine X’s commercial interests, an ongoing DSA investigation and an independent audit.

The Ombudsman found it unreasonable for the Commission to rely on a general presumption of non-disclosure rather than individually assessing the report. She said the circumstances in which the EU courts have allowed such presumptions differ from the rules applying to DSA risk assessment reports.

Although X has since made the report public with redactions, the Ombudsman recommended that the Commission conduct its own assessment and aim to give the journalist the widest access possible, including potentially to parts redacted by the company. If access is refused for any sections, the Commission must explain why.

The finding of maladministration highlights the importance of transparency in the oversight of very large online platforms under the DSA, particularly where documents are relevant to public scrutiny of platform risk management and regulatory enforcement.

Why does it matter?

The case tests how far transparency obligations around very large online platforms can be limited by broad claims of commercial sensitivity or ongoing investigations. DSA risk assessment reports are central to understanding how platforms identify and manage systemic risks, so access decisions affect public oversight of EU digital regulation as much as the rights of individual requesters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-driven disinformation threatens public trust, Nobel economist warns

Research by Nobel Prize-winning economist Joseph Stiglitz and Columbia University’s Maxim Ventura-Bolet argues that AI could worsen the economics of misinformation by making low-quality and misleading content cheaper and easier to produce at scale.

According to an analysis in The Strategist, their economic modelling suggests that digital markets reward misleading and emotionally charged content because it attracts engagement, advertising revenue and data collection. The analysis argues that without regulation, markets are likely to produce more disinformation and less reliable information as AI lowers the cost of content production.
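
The supply-side logic of that argument can be illustrated with a toy model. Everything below is a hypothetical sketch for intuition only, not the authors' actual modelling: producers keep publishing while an item's engagement revenue exceeds its production cost, so when AI slashes the cost of misleading content, its equilibrium volume jumps.

```python
# Toy content-market model (illustrative only, not the
# Stiglitz & Ventura-Bolet model): producers enter until marginal
# engagement revenue, which falls as the feed gets crowded, equals
# the per-item production cost.

def equilibrium_items(revenue_per_item: float, cost_per_item: float,
                      crowding: float = 0.01) -> float:
    """Items produced when revenue_per_item / (1 + crowding * n)
    equals cost_per_item; zero if production never breaks even."""
    if cost_per_item >= revenue_per_item:
        return 0.0
    return (revenue_per_item / cost_per_item - 1) / crowding

# Reliable journalism: modest engagement revenue, high production cost.
reliable = equilibrium_items(revenue_per_item=2.0, cost_per_item=1.0)

# Misleading content before AI: higher engagement, moderate cost.
misleading_pre_ai = equilibrium_items(revenue_per_item=3.0, cost_per_item=0.6)

# Misleading content after AI cuts production cost sharply.
misleading_post_ai = equilibrium_items(revenue_per_item=3.0, cost_per_item=0.1)

print(reliable, misleading_pre_ai, misleading_post_ai)
```

In this stylised setup the revenue and cost numbers are arbitrary; the point is only that lowering the cost term multiplies the equilibrium volume of misleading content without any change in demand.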

The article says social media platforms and AI systems have reshaped how people consume information. Instead of visiting original news sources, users increasingly rely on algorithm-driven feeds, search summaries and AI-generated overviews, reducing traffic and revenue for original publishers.

It also argues that AI systems can intensify the problem by producing large volumes of convincing but unreliable material quickly and cheaply. Since AI tools depend on online information for training and outputs, distorted or misleading data can feed back into the information ecosystem and further reduce quality.

The analysis links the issue to political polarisation, warning that audiences are more likely to engage with information that reinforces existing beliefs. That demand can further reward producers of misleading content while putting additional pressure on public-interest journalism.

Stiglitz and Ventura-Bolet argue that market forces alone will not correct the decline in information quality. The article says possible responses include stronger platform accountability for content amplification, obligations to address coordinated disinformation campaigns and intellectual property protections for news producers.

The analysis also points to Australia’s memorandum of understanding with Anthropic as a sign of engagement between government and AI companies, while stressing that voluntary cooperation is not a substitute for regulation.

Why does it matter?

The analysis highlights how AI and platform algorithms can affect the economic incentives behind public information, not only the speed at which false content spreads. If engagement-based systems continue to reward misleading material while weakening the revenue base for quality journalism, the risks extend beyond individual misinformation incidents to the overall reliability of the online information environment.

That matters for democratic debate, public trust and informed decision-making. It also raises regulatory questions about platform accountability, the use of news content by AI systems and whether voluntary agreements with technology companies are enough to protect the information ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO, Lebanon and Télé Liban launch campaign to promote media literacy

Lebanon’s Ministry of Information, UNESCO and Télé Liban have launched a nationwide media and information literacy campaign aimed at raising public awareness of misinformation and encouraging more responsible information sharing.

Funded by UNIFIL, the initiative, titled ‘Share Responsibly: Be Part of the Truth, Not Misinformation’, uses short episodes inspired by daily life in Lebanon to show how misleading information can spread and shape public perception.

The campaign features Yara Bou Monsef in scenarios set in taxis, shops, elevators and other public spaces, illustrating how people encounter and respond to misinformation in everyday situations. Through these examples, the organisers aim to encourage audiences to verify information before sharing it online or offline.

The initiative forms part of broader efforts to strengthen media and information literacy, promote critical thinking and support more resilient and informed communities.

Why does it matter?

Misinformation campaigns are often discussed in relation to elections, conflict or online platforms, but public resilience also depends on everyday information habits. By using familiar public spaces and locally recognisable scenarios, the campaign frames media literacy as a civic skill rather than only a technical or platform-governance issue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Philippines presses Meta for faster action on online disinformation

The Philippine government is intensifying pressure on Meta to act more quickly against harmful online disinformation. Officials argue that the company’s current enforcement approach is insufficient to address rapidly spreading false content that can affect public order, economic confidence and national security. The latest move comes in the form of a formal response from the Department of Information and Communications Technology, following an earlier joint request involving the Presidential Communications Office and the Department of Justice.

Officials acknowledged Meta’s willingness to engage and its existing moderation policies, but said broad descriptions of enforcement mechanisms fall short of what the situation requires. According to the DICT, the government is seeking clear commitments, faster intervention processes, and measurable outcomes rather than general assurances about existing platform rules.

The pressure campaign is tied to concerns that false and misleading online content can trigger real-world harm, especially during politically and economically sensitive periods. Government statements have linked the problem to panic-inducing disinformation that could affect fuel prices, economic stability, and public trust, and have warned that inadequate action from Meta could lead to legal and regulatory consequences.

The latest DICT response sharpens that message by tying it to the government’s wider ‘Kontra Fake News’ campaign, which officials say is intended to protect access to accurate information while holding those who deliberately spread falsehoods accountable.

The dispute is also part of a broader institutional shift. The DICT, Presidential Communications Office, and Department of Justice have moved towards a more coordinated response to digital disinformation, including a memorandum of agreement aimed at a whole-of-government approach to false content and related threats such as deepfakes. That makes the Meta case more than a platform-specific complaint: it is becoming part of a wider governance and enforcement strategy.

In the meantime, officials of the Philippines have tried to draw a line between legitimate expression and harmful manipulation. The government says freedom of expression remains protected, but that protection does not extend to coordinated or deliberately harmful disinformation that can trigger panic or erode confidence in public institutions. That distinction is likely to become more important if talks with Meta fail and the government moves towards tougher intervention.

The broader significance of the case lies in what it says about platform governance. Rather than accepting general assurances about moderation systems, governments are increasingly demanding faster, more transparent, and more locally responsive enforcement from major technology companies. In the Philippine case, that pressure is now being expressed through a formal inter-agency effort that could test how far states are willing to go when platforms are seen as too slow to respond to politically and economically sensitive disinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Minnesota weighs AI free speech limits

The National Constitution Center reports that Minnesota lawmakers are considering a constitutional amendment to exclude AI systems from free speech protections. The proposal would clarify that such rights apply to people, not machines.

According to the National Constitution Center, the amendment would add language stating that AI does not have the right to speak, write or publish sentiments freely. Human free speech protections would remain unchanged under the proposal.

The article highlights ongoing debate around the measure, with supporters arguing it distinguishes human rights from technological tools, while critics warn it could affect how AI-generated content is treated under the law.

The National Constitution Center notes that the proposal reflects broader tensions over how legal systems should address AI and free expression as the issue develops in Minnesota.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital equity advances as UNESCO promotes Universal Acceptance

UNESCO has reinforced the importance of Universal Acceptance as a foundation for multilingual digital inclusion during a global event hosted in Hyderabad.

The initiative seeks to ensure that all languages and scripts function equally across the internet, strengthening digital access and participation.

The discussion linked linguistic diversity with broader principles such as digital rights, media literacy, and freedom of expression.

Universal Acceptance was presented as a core element of digital equality, enabling users to access online services regardless of language or script.

Through its partnership with ICANN, UNESCO is advancing efforts to ensure that domain names and email systems support all valid linguistic formats. These initiatives aim to remove technical barriers that limit participation in the digital economy.
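
The technical barrier in question is easy to see at the protocol level: internationalised domain names must be converted to an ASCII-compatible ‘Punycode’ form before DNS lookup, and software that rejects the Unicode form instead of converting it fails Universal Acceptance. A minimal sketch using Python’s built-in IDNA codec (which implements the older IDNA 2003 rules):

```python
# Internationalised domain names (IDNs) travel through DNS in an
# ASCII-compatible encoding (Punycode). Universal Acceptance means
# software converts the Unicode form rather than rejecting it.

domain = "münchen.de"               # a domain with a non-ASCII label
ascii_form = domain.encode("idna")  # Python's built-in IDNA 2003 codec

print(ascii_form)                   # b'xn--mnchen-3ya.de'

# Round-trip: decoding the ASCII form recovers the Unicode domain.
assert ascii_form.decode("idna") == "münchen.de"
```

Modern scripts not covered by the 2003 rules typically need the third-party `idna` package (IDNA 2008), which is precisely the kind of gap Universal Acceptance work targets.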

The initiative reflects a broader global effort to create a more inclusive and accessible internet. Strengthening multilingual infrastructure is expected to play a key role in shaping a more equitable and representative digital environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OHCHR seeks inputs on protecting human rights defenders in the digital age

The Office of the UN High Commissioner for Human Rights has issued a call for inputs to support a report on how new and emerging technologies are affecting human rights defenders, including women human rights defenders, in the digital age.

Issued under Human Rights Council resolution 58/23, the call seeks submissions by 31 March 2026 and forms part of a wider effort to examine how digital technologies are reshaping the conditions under which defenders work, communicate, and stay safe.

According to the OHCHR, the report will look at how digital and emerging technologies affect the work, privacy, communications, and security of human rights defenders. The call notes that digital tools have transformed both how defenders operate and the threats they face, with consequences for their safety online and offline.

The questions set out in the call are organised into four broad areas: legislative and regulatory measures, digital communications, privacy restrictions, and corporate responses. The OHCHR specifically asks for information on online safety and cybercrime laws, internet shutdowns, platform attacks, content moderation, surveillance tools, biometric surveillance, encryption, AI-related risks, and how companies assess and respond to harms affecting human rights defenders on their services.

The OHCHR has invited member states, civil society, industry, and other stakeholders to submit written inputs in English, French, or Spanish. Those submissions will inform online consultations in April and the preparation of a report to the Human Rights Council under resolution 58/23.

Why does it matter?

Because the call treats the digital environment facing human rights defenders as a governance issue in its own right, rather than only as a technical or security concern. It brings together surveillance, platform accountability, encryption, AI, online harassment, and internet shutdowns under a single human rights framework, while signalling that the OHCHR wants evidence not only on state conduct, but also on how private companies shape civic space in the digital age.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.
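
The mechanism as reported can be sketched in outline. The names, scores and threshold below are hypothetical illustrations of the general idea (comparing an objective metric against a simulated stakeholder preference and flagging divergence), not MIT’s actual SEED-SET code:

```python
# Hypothetical sketch of the idea described above: score each scenario
# on an objective performance metric and on a stand-in for an
# LLM-simulated stakeholder preference, then flag scenarios where the
# two diverge. None of these names or numbers come from SEED-SET itself.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    cost_efficiency: float   # objective metric (higher = cheaper to run)
    fairness_proxy: float    # stand-in for a simulated stakeholder rating

def divergent_scenarios(scenarios, threshold=0.4):
    """Return scenarios that score well objectively but poorly on the
    simulated stakeholder preference, i.e. candidates for ethical review."""
    return [s for s in scenarios
            if s.cost_efficiency - s.fairness_proxy > threshold]

grid = [
    Scenario("route power around district A", 0.95, 0.30),  # efficient, unfair
    Scenario("balanced distribution",         0.70, 0.75),
    Scenario("prioritise hospitals",          0.60, 0.90),
]

flagged = divergent_scenarios(grid)
print([s.name for s in flagged])  # ['route power around district A']
```

The design point this illustrates is the separation of concerns: the objective metric and the value proxy are computed independently, so a scenario cannot hide an ethical cost inside an aggregate efficiency score.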

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France moves toward social media restrictions for children under 15

Legislative efforts in France signal a shift toward stricter governance of youth access to digital platforms, with policymakers preparing to debate a ban on social media use for children under 15.

The proposal forms part of a broader strategy to address concerns over online harms and excessive screen exposure among adolescents.

The draft law in France extends beyond access restrictions, proposing a digital curfew for older teenagers and expanding existing school phone bans to include high schools.

These measures reflect increasing reliance on regulatory intervention instead of voluntary platform safeguards, as evidence links prolonged digital engagement with risks such as cyberbullying, disrupted sleep patterns and exposure to harmful content.

Political backing for the initiative has emerged from figures aligned with Emmanuel Macron, reinforcing the government’s position that stronger oversight of digital environments is necessary. The proposal also mirrors developments in Australia, where similar restrictions have already entered into force.

The debate is further influenced by legal actions targeting major platforms, including TikTok and Meta, amid allegations that algorithmic systems contribute to harmful user experiences.

The outcome of the parliamentary discussions in France is expected to shape future approaches to child safety, platform accountability and digital rights governance across Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!