Navigating disinformation

Communication is the cornerstone of societal interaction, shaping how we exchange and understand information. In our increasingly digital world, however, the spread of misinformation and disinformation poses a significant threat. This challenge affects public trust, democratic processes, and human rights, making it crucial to understand and address the complexities of information disorder.

Here, you can navigate disinformation and content policy, starting with the AI assistant based on Diplo’s report ‘Decoding Disinformation: Lessons from Case Studies’.


1. Understanding Information Disorder

There is no consensus on how to define problems related to information disorder. This lack of clarity may negatively impact the effectiveness of responses and also create tensions between policies to combat information disorder and freedom of expression. Information disorder is an umbrella term that encompasses misinformation, disinformation, and hate speech:

  • Misinformation refers to incorrect or misleading information shared without the intent to deceive. This can arise from misunderstandings or errors, where individuals believe they are sharing accurate information.
  • Disinformation, in contrast, is deliberately false information spread with the intent to deceive and manipulate. Disinformation can be used to influence public opinion, disrupt social systems, or undermine political stability.
  • Hate speech is another vector that threatens information integrity. Also a contested concept, it has been defined by the UN as ‘any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor’.

As seen in the definitions above, the difference between mis- and disinformation lies in intent. While disinformation is deliberately spread with hostile intent (to deceive and inflict harm), misinformation refers to the unintentional spread of inaccurate information shared in good faith. As pointed out by the United Nations (2023), however, the distinction between mis- and disinformation can be difficult to determine in practice. Misinformation can be rooted in disinformation, as deliberate lies and misleading narratives are weaponised, fed into public discourse, and passed on unwittingly.

All forms of information disorder can significantly damage democratic processes, erode public trust, and contribute to social discord. For example, disinformation campaigns may distort election outcomes or exacerbate societal divisions, undermining the integrity of democratic institutions.

The image shows a Venn diagram with three sections labelled Misinformation, Disinformation, and Hate Speech. The centre, where they overlap, is labelled ‘pollutes the information ecosystem and threatens human progress’.
Threats to information integrity (adapted from United Nations, 2023)

The phenomenon of ‘information disorder’ isn’t new; it has historical roots dating back to ancient times. However, the digital era has amplified its reach and impact, posing unprecedented challenges. The internet and social media have revolutionised information dissemination, enabling rapid and widespread sharing. Social media platforms’ algorithms prioritise sensational and emotionally charged content, which can lead to the viral spread of misinformation. This environment creates echo chambers where false information can flourish, influencing public opinion on a large scale.

2. Interplay with other areas of digital governance 

The issue of mis- and disinformation is transversal and interplays with several other digital policy areas. These interplays can be mapped using the taxonomy of digital policy developed by Diplo and adopted by the Digital Watch Observatory of the Geneva Internet Platform. Mapping them is important in order to prevent potential unintended spillovers that policies to tackle false information may generate, as well as to identify pressure points that could be leveraged in the context of holistic policies to counter information disorder.

The image shows a diagram labelled with seven dimensions of disinformation. These are: Regulatory issues with impact on disinformation; Disinformation and sociocultural issues; Disinformation, shutdowns, and the impact on development; Disinformation and human rights; Online business models and economic incentives to disinformation; Cybersecurity, information influence operations, and national security; and finally, Disinformation and infrastructure.

Disinformation and infrastructure

Infrastructure is fundamental to the spread of information, including misinformation. Internet service providers and digital standards impact how content travels and is managed. While these operators generally don’t deal with content directly, they play a role in whether and how content is regulated. For instance, AI-generated content, such as deepfakes, is increasingly a concern, prompting efforts to create standards for transparency and detection (World Standards Cooperation, 2024).

Cybersecurity and national security

Disinformation campaigns often target societal vulnerabilities to create distrust and disrupt democratic processes. Notable examples include alleged Russian interference in US elections and the use of deepfakes during the Ukraine conflict. Different countries address these threats in varied ways: some through state-led initiatives to combat false news, while others advocate for public-private partnerships focusing on content management, psychological defence, and media literacy.

Economic incentives and disinformation

Disinformation often has financial, political, or social motives. Financial gains from disinformation can come from direct donations or advertising revenue. Social media platforms, driven by ad revenue, often prioritise sensational content, including disinformation, due to their algorithms. The impact of these algorithms on disinformation and the role of user preferences remain areas needing more transparency and research.

Disinformation and human rights

Disinformation impacts human rights, particularly the right to hold opinions and freedom of expression. Article 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights (ICCPR) guarantees the right to form opinions without interference. Disinformation blurs the line between fact and falsehood, impairing individuals’ ability to form their own opinions. It can also manipulate thinking processes without consent.

Freedom of expression includes the right to seek and share all kinds of information, whether true or false, and can only be restricted for reasons like protecting others’ rights or national security, as outlined in Article 19(3) of the ICCPR. Disinformation can distort public perception and electoral outcomes, threatening fair elections and democratic processes.

While combating disinformation is crucial, policies may be misused to suppress freedom of expression, allowing governments to impose arbitrary limits. Disinformation often overlaps with hate speech, targeting marginalised groups and inciting violence. Article 20(2) of the ICCPR prohibits advocacy that incites discrimination, hostility, or violence, regardless of its truthfulness.

Disinformation, shutdowns, and development

Internet access is essential for development, but internet shutdowns, increasingly used to control online communication and manage disinformation, harm both freedom of speech and economic growth. Shutdowns, seen in both autocratic and democratic nations, impact the digital economy and restrict access to information.

Disinformation and sociocultural issues

In the context of fighting disinformation, content moderation refers to policies oriented towards preventing the production and dissemination of disinformation, the removal of existing disinformation, or the downgrading of disinformation-related content through algorithms, so it becomes less visible to users. 
Due to the significant increase in the spread of mis- and disinformation, a considerable number of national and regional legal frameworks, as well as private-led initiatives, have been introduced to curb or remove false content. Although content policy has a key role in fighting disinformation, these measures should be considered in tandem with other areas, such as human rights protection and media literacy policy.

Regulatory issues impacting disinformation

Data protection is crucial in the fight against disinformation. By collecting data, platforms can target users with tailored ads, which can enhance disinformation campaigns. High data protection standards help mitigate this issue by limiting the misuse of personal data for disinformation purposes.

3. The role of social media and technology

Social media platforms have become a primary source of news for many people. In 19 developed countries, 84% of Pew Research respondents believe that access to the internet and social media has made people easier to manipulate with false information and rumours. Moreover, 70% of those surveyed consider the spread of false information online to be a major threat, second only to climate change. Research indicates that false information spreads more quickly and widely than the truth. This phenomenon is driven by the business models of social media platforms and their algorithms, which favour emotionally engaging and sensational content, thereby amplifying misinformation.


One of the key mechanisms behind the social media phenomenon is the algorithmic curation of content. Social media platforms use sophisticated algorithms to determine which posts appear in a user’s feed, based on factors like past behaviour, interests, and interactions. These algorithms are designed to keep users engaged by showing them content that is most likely to capture their attention and prompt interaction. As a result, posts that provoke strong emotional responses—such as anger, fear, or outrage—tend to be favoured. Disinformation, with its often sensational and inflammatory nature, fits perfectly into this model, leading to its widespread dissemination.
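
As a rough illustration of this dynamic, the Python sketch below ranks a toy feed with a hypothetical scoring function in which interactions are boosted by a post's emotional intensity. The weights and the emotional_intensity signal are invented for the example; real ranking systems rely on far more signals, but the underlying logic of rewarding predicted engagement is similar.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    emotional_intensity: float  # 0.0-1.0, assumed output of an arousal/sentiment model

def engagement_score(post: Post) -> float:
    """Toy ranking score: interactions boosted by emotional intensity.

    Real feed-ranking systems use many more signals (recency, user history,
    predicted watch time), but the basic pattern is similar: content predicted
    to provoke interaction is ranked higher.
    """
    interactions = post.likes + 3 * post.comments + 5 * post.shares
    return interactions * (1.0 + post.emotional_intensity)

posts = [
    Post("Measured policy analysis", likes=120, shares=10, comments=15,
         emotional_intensity=0.2),
    Post("Outrage-inducing (false) claim", likes=90, shares=40, comments=60,
         emotional_intensity=0.9),
]

# The sensational post tops the feed despite having fewer likes.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(post), 1), post.text)
```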

This amplification effect is compounded by the phenomenon of ‘echo chambers’ and ‘filter bubbles.’ Social media algorithms tend to reinforce users’ existing beliefs by showing them content that aligns with their views while filtering out opposing perspectives. This creates an environment where users are primarily exposed to information that confirms their biases, making them more susceptible to disinformation that supports their pre-existing opinions. In these echo chambers, false narratives can quickly gain traction, as they are continually reinforced by like-minded individuals and groups.

The viral nature of social media further exacerbates the problem. Disinformation can spread rapidly across networks, reaching large audiences. This speed of dissemination makes it difficult for fact-checkers and other countermeasures to keep up, allowing false information to gain a foothold before it can be debunked. Moreover, once disinformation has been shared widely, it can be challenging to correct the record, as retractions or corrections often do not receive the same level of attention as the original falsehoods. Social media’s defining feature—its ability to connect billions of people and facilitate rapid communication—also makes it an ideal vehicle for the spread of disinformation. 

In parallel, more research is necessary to understand the spread of disinformation, and how social media algorithms interplay with individuals’ active search for content and personal preferences. While some researchers point out that exposure to disinformation seems to be heavily concentrated among a small minority of people who already have extreme views and actively seek this type of content, they also recognise that further research needs to be conducted in non-Western and non-English speaking countries, especially in countries from the Global South. Against this backdrop, policy and regulation requiring companies to share data and information on algorithms with researchers and other vetted actors could be an important step towards a deeper understanding of information disorder.


4. Curbing disinformation and upholding human rights 

Disinformation impacts human rights, particularly the right to hold opinions and freedom of expression. Disinformation blurs the line between fact and falsehood, impairing individuals’ ability to form their own opinions without interference. It can also manipulate thinking processes without consent.

The right to hold opinions and the right to freedom of expression are intertwined, since the former entails the capacity to freely access information necessary to form one’s opinion and to change one’s mind. The right to freedom of expression is broad and encapsulates the freedom to seek, receive, and impart information and ideas of all kinds, regardless of frontiers and through any media, offline or online. 

The right to freedom of expression applies to all kinds of information and ideas, including those that may shock, offend, or disturb, irrespective of the truth or falsehood of the content. Nevertheless, freedom of expression is not absolute and may be restricted if certain conditions are respected. Article 19(3) of the ICCPR requires restrictions to be provided by law and to be necessary for a legitimate aim, such as protecting the rights and reputations of others, or protecting national security, public order, or public health or morals. Measures to limit disinformation must establish a close and concrete connection to the protection of one of these legitimate aims. In addition, all restrictions must be exceptional and narrowly construed.

Vague laws that confer excessive discretion to fight disinformation are particularly concerning, as they can lead to arbitrary decision-making. Policies initially aimed at fighting disinformation may be misused and abused by public authorities, giving governments greater control and discretion over public discourse and allowing them to impose arbitrary or politically motivated limits on freedom of expression.

There is also a grey area between disinformation and hate speech, where disinformation may incite violence, fuel discrimination, and marginalise vulnerable groups. Any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence is to be prohibited by law, regardless of any assessment of its truthfulness, according to article 20 (2) of the ICCPR.

5. The interplay between AI and disinformation

The introduction of AI into this mix adds another layer of complexity. AI technologies, particularly those capable of generating deepfakes—highly realistic but entirely fabricated images, videos, or audio—pose significant threats to the integrity of information. Deepfakes have already been used to manipulate public opinion in various countries, raising concerns about their potential to undermine democratic processes, such as elections, by eroding trust in public figures and institutions.

Manipulated video of Volodymyr Zelensky

The danger of AI-generated disinformation lies not only in its ability to deceive but also in its potential to be mass-produced and disseminated with unprecedented speed and scale. AI-driven bots can amplify disinformation across social media platforms, making it appear more widespread and credible than it is. These bots can engage in coordinated campaigns to flood social media with false narratives, creating an artificial sense of consensus or urgency around a particular issue. This is particularly problematic in environments where critical thinking and media literacy are lacking, as individuals may find it difficult to distinguish between genuine information and AI-generated falsehoods.
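
One simple heuristic for surfacing this kind of coordination is to look for identical messages posted by many distinct accounts within a short time window. The sketch below illustrates the idea on invented data; real detection systems combine many more signals, such as account age, posting cadence, and network structure.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical (account, text, timestamp) records
posts = [
    ("bot_01", "Candidate X rigged the vote!", datetime(2024, 5, 1, 10, 0)),
    ("bot_02", "Candidate X rigged the vote!", datetime(2024, 5, 1, 10, 1)),
    ("bot_03", "Candidate X rigged the vote!", datetime(2024, 5, 1, 10, 2)),
    ("alice", "Polling stations are open until 8pm.", datetime(2024, 5, 1, 10, 5)),
]

def coordinated_bursts(posts, window=timedelta(minutes=10), min_accounts=3):
    """Flag identical messages posted by many distinct accounts within a short window."""
    by_text = defaultdict(list)
    for account, text, timestamp in posts:
        by_text[text].append((timestamp, account))
    flagged = []
    for text, entries in by_text.items():
        entries.sort()
        accounts = {account for _, account in entries}
        if len(accounts) >= min_accounts and entries[-1][0] - entries[0][0] <= window:
            flagged.append((text, sorted(accounts)))
    return flagged

for text, accounts in coordinated_bursts(posts):
    print(f"Possible coordinated push by {accounts}: {text!r}")
```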

Moreover, AI algorithms themselves can inadvertently contribute to the spread of disinformation. Social media platforms use AI to curate content for users, prioritising posts that are likely to generate engagement. Unfortunately, this often means that sensational or emotionally charged content—whether true or false—gets amplified, while more accurate but less engaging information is downplayed.

Despite these risks, AI also holds significant promise as a tool to combat disinformation. When used responsibly, AI can help detect and mitigate the spread of false information. One of the primary ways AI can be leveraged in this fight is through the development of advanced detection systems that identify and flag disinformation.

AI-powered fact-checking tools are already being used to scan content for inaccuracies. These tools can cross-reference statements with verified databases to assess their veracity, helping to quickly identify and debunk false claims. For example, AI algorithms can analyse patterns in text, detect inconsistencies, and compare content against a corpus of trusted information. 
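
A minimal sketch of this cross-referencing step is shown below: a claim is compared against a tiny, made-up table of verified statements using simple word-overlap similarity. Production fact-checking systems rely on large claim databases and trained semantic models rather than the bag-of-words matching used here, but the overall flow (match a claim, return a verdict, or refer it to a human) is comparable.

```python
import math
import re
from collections import Counter

# Tiny stand-in for a database of verified statements; real systems use large
# claim databases and trained semantic-similarity models.
TRUSTED_CORPUS = {
    "Voting by mail is counted the same way as in-person ballots.": True,
    "The city's water supply meets national safety standards.": True,
    "The new vaccine was withdrawn over safety concerns.": False,  # debunked claim
}

def word_vector(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[term] * b[term] for term in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def check_claim(claim: str, threshold: float = 0.5):
    """Return the closest verified statement and its verdict, if similar enough."""
    claim_vec = word_vector(claim)
    best, best_score = None, 0.0
    for statement in TRUSTED_CORPUS:
        score = cosine(claim_vec, word_vector(statement))
        if score > best_score:
            best, best_score = statement, score
    if best_score >= threshold:
        return best, TRUSTED_CORPUS[best], round(best_score, 2)
    return None, None, round(best_score, 2)  # no confident match: refer to a human fact-checker

print(check_claim("Mail-in voting is counted the same way as in-person ballots"))
```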

AI can enhance the ability to detect deepfakes and other forms of synthetic media. By analysing subtle inconsistencies in video or audio files—such as unnatural facial movements, mismatched lighting, or anomalies in voice modulation—AI systems can identify content that has been manipulated. Several organisations and researchers are working on developing AI models that can not only detect deepfakes but also watermark AI-generated content to distinguish it from authentic media.
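
Structurally, many of these detectors score individual frames and then aggregate the scores into a decision about the whole video. The sketch below shows only that skeleton; the per-frame detector is a placeholder, since a real system would use a trained model analysing facial landmarks, lighting, or audio rather than anything that can be written in a few lines.

```python
from statistics import mean

def frame_manipulation_score(frame) -> float:
    """Placeholder for a trained per-frame detector (e.g. a network checking
    facial-landmark consistency, lighting, or compression artefacts).
    Returns a probability in [0, 1] that the frame has been manipulated."""
    raise NotImplementedError("plug in a real detector model here")

def classify_video(frames, threshold: float = 0.7) -> bool:
    """Aggregate per-frame scores into a video-level decision.

    Averaging over many frames is more robust than trusting any single frame;
    real systems also examine temporal inconsistencies between frames and the
    audio track.
    """
    scores = [frame_manipulation_score(frame) for frame in frames]
    return mean(scores) >= threshold
```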

AI can also play a crucial role in tracking the spread of disinformation across networks. By mapping the pathways through which disinformation travels, AI can help identify the sources of false narratives and the actors behind coordinated disinformation campaigns.
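
A simplified way to picture this is to reconstruct the share cascade and ask which accounts seeded a narrative and which amplified it furthest. The sketch below does this on invented share records; real analyses work from platform data and much richer network graphs.

```python
from collections import defaultdict

# Simplified share records: (account, shared_from, timestamp). In practice such
# data would come from platform APIs or researcher data-access programmes.
shares = [
    ("seed_account", None, 0),
    ("amplifier_1", "seed_account", 5),
    ("amplifier_2", "seed_account", 7),
    ("user_a", "amplifier_1", 12),
    ("user_b", "amplifier_1", 15),
    ("user_c", "amplifier_2", 20),
]

def cascade_stats(shares):
    """Reconstruct the share cascade: who seeded the narrative, who amplified it most."""
    children = defaultdict(list)
    seeds = []
    for account, source, _ in shares:
        if source is None:
            seeds.append(account)
        else:
            children[source].append(account)

    def reach(account):
        # Number of downstream accounts reached from this one.
        return sum(1 + reach(child) for child in children.get(account, []))

    return {
        "seed_accounts": seeds,
        "amplifiers_by_reach": sorted(children, key=reach, reverse=True),
    }

print(cascade_stats(shares))
```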

Another promising application of AI is in the area of content moderation. Social media platforms are increasingly using AI to monitor and filter content in real-time, removing posts that violate community guidelines or spread harmful disinformation. While human moderators are still essential for making nuanced decisions, AI can assist by handling the sheer volume of content generated on these platforms, ensuring that harmful information is flagged and addressed more quickly.
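
A common pattern here is confidence-based triage: the system acts automatically only when the model is very sure, and everything ambiguous is routed to a human reviewer. The sketch below shows the routing logic with illustrative thresholds; the risk scores themselves would come from a trained classifier.

```python
def triage(risk_score: float) -> str:
    """Route content by model confidence: act automatically only at the
    extremes, and send the ambiguous middle ground to human reviewers."""
    if risk_score >= 0.95:
        return "remove or label"        # near-certain policy violation
    if risk_score <= 0.10:
        return "publish"                # near-certain benign content
    return "queue for human review"     # nuanced judgement needed

# risk_score would come from a trained classifier; these values are illustrative
for score in (0.98, 0.03, 0.62):
    print(score, "->", triage(score))
```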

6. The impact of disinformation on elections

At its core, disinformation undermines the very principles of democracy: it amplifies distrust and polarisation, erodes the credibility of factually accurate news, and negatively affects attitudes towards fact-finding and evidence-based research. Mis- and disinformation can shape voter opinions in ways that are neither fair nor transparent. The Parliamentary Assembly of the Council of Europe (PACE) has expressed deep concern about the rising tide of disinformation, emphasising the threat it poses to free and fair elections. The World Economic Forum’s Global Risks Report 2024 identified disinformation as one of the most significant global risks, especially in countries preparing for elections. The integrity of these elections—and by extension, the legitimacy of their governments—is at stake.


One of the most alarming developments in the disinformation landscape is the use of AI to create sophisticated and convincing false content, such as deepfakes. While the full impact of these deepfakes on voter behaviour remains unclear, their increasing prevalence signals a significant threat to future elections.

In response to these challenges, governments around the world are stepping up their efforts to combat disinformation, particularly in the context of elections. The United States has been at the forefront of this fight, with several states, including Michigan, Minnesota, and California, enacting laws that restrict the use of AI in political communications. These laws aim to prevent the spread of false information and ensure that voters have access to accurate and reliable information.

In the UK, the Elections Act 2022 introduced digital imprints for online political campaigning, requiring campaign materials to clearly state who is behind the content. This measure is designed to increase transparency and help voters identify the sources of political messages they encounter online.

The European Union has also been proactive in addressing the issue of disinformation. The Digital Services Act (DSA) and other initiatives focus on promoting media literacy, monitoring online content, and ensuring that political advertising is transparent and free from external influence. The EU has introduced specific guidelines to protect elections from online threats, emphasising collaboration between digital platforms, authorities, and civil society.

7. Initiatives to tackle disinformation and misinformation 

Efforts to combat information disorder vary across regions and fall within two main fields:

  • Media Literacy Initiatives: Enhancing media literacy is a crucial strategy for combating mis- and disinformation. Initiatives aim to equip individuals with the skills necessary to critically evaluate information sources. 
  • Content Regulation: Various regulations tackle online content, seeking to curb mis- and disinformation. The EU’s DSA, for example, requires large online platforms to address illegal content and handle socially harmful material, establishing higher transparency standards.

7.1. Policy and regulatory frameworks

Online content policy is one of the areas that displays the largest number of regulatory and policy interventions introduced by governments. Among them, the regulatory models of the United States and the European Union stand out. While the USA—where most of the prominent companies that own social media platforms are based—has set the first benchmark in terms of intermediary liability, the EU DSA, which is relevant in the context of fighting mis- and disinformation, has explicitly stated its extraterritorial application, becoming a global reference point. 

Very large platforms (defined according to the criteria established by the DSA) are obliged to prevent the dissemination of harmful content, which does not necessarily have to be illegal. These companies have the obligation to assess the systemic risks arising from the design, operation, and use of their services, as well as from the potential misuse of services (Article 34). In the light of the DSA, disinformation may potentially entail systemic risks, related to its impact on democratic processes, public security, and public health, for example.

The DSA is complemented by commitments encompassed within the framework of the strengthened Code of Practice on Disinformation. This voluntary code has been signed by large tech companies such as Google, Meta, Microsoft, and TikTok. It brings about certain obligations, such as demonetising advertisements containing disinformation, labelling political advertising more clearly, and empowering fact-checkers and users to spot and flag non-factual information. 

7.2. Media and information literacy

Spotting mis- and disinformation online has become a major challenge. While technology increases the capacity of individuals to receive and impart information, it may also be misused to propagate false content faster. In this scenario, media and information literacy (MIL) has been identified as a key strategy to fight mis- and disinformation.

UNESCO (2013) defined MIL as ‘a set of competencies that empowers citizens to access, retrieve, evaluate and use, create as well as share information and media content in all formats, using various tools, in a critical, ethical and effective way, in order to participate and engage in personal, professional and societal activities’. A list of seven non-exhaustive competencies related to media literacy can be found in the image below.

By promoting agency, media literacy fosters prebunking and debunking practices. While debunking involves exposing an already disseminated false claim, prebunking tackles disinformation before it has been spread. Audiences are ‘inoculated’ against misleading information, enabling them to recognise it and prevent its amplification. This can be achieved, for example, by revealing the main mechanisms and techniques employed in disinformation strategies, and by using pedagogical tools, such as simulations and games.

Broad non-exhaustive media and information literacy competencies (adapted from Grizzle et al., 2021)

Some countries have placed significant emphasis on MIL as a strategy to combat disinformation. Finland, for example, is recognised for its comprehensive policies and educational strategies aimed at equipping citizens with critical thinking skills essential for navigating a complex media landscape. The Swedish approach to media literacy places emphasis on strengthening resilience at a societal level. The Swedish Psychological Defence Agency plays a role in strengthening the population’s resilience, including through media literacy activities.

The growing use of AI creates further challenges in the fight against mis- and disinformation. The ease with which AI can generate and spread false information surpasses the capabilities of traditional regulatory and oversight mechanisms. AI literacy can help individuals discern truth from falsehood. In the context of fighting mis- and disinformation, AI literacy rests on three pillars:

Instilling a sense of ethical responsibility. Integrating discussions about AI’s ethical implications, including its role in misinformation, helps students consider the broader societal impacts of AI. This entails questions about who is accountable when AI is used to misinform, and the ethical obligations of AI developers to prevent misuse of their technologies.

Enhancing critical thinking. It is important to raise awareness about the capabilities of AI to produce convincing, yet false, information. This involves teaching individuals to critically assess online information, distinguish between reliable and unreliable sources, and recognise the mechanisms through which AI can generate and propagate false information. 

Demystifying the technology itself. This includes providing basic knowledge of how AI algorithms are trained and operate, providing individuals with the insights needed to question AI-generated content. 

7.3. Private sector initiatives

The private sector, particularly major tech companies, has also recognised the need to address the threat of disinformation. In 2018, Meta (then Facebook) introduced the Ad Library, a database that allows the public to view details of all political ads running on its platforms, in a move to increase transparency and help users understand the origins of the content they encounter.

Other companies have taken different approaches. In 2023, X (formerly Twitter), which had initially banned political ads, began to relax its restrictions, allowing certain types of political content to be promoted again. The company argues that political influence should be earned through genuine engagement rather than purchased amplification, but the debate over the best approach to managing political ads continues.

In 2024, a coalition of 20 major tech companies, including OpenAI, Microsoft, and TikTok, launched a joint initiative to combat deceptive AI content that could threaten global elections. The initiative focuses on developing tools to identify and mitigate the spread of false information, raising public awareness, and ensuring that AI-generated content is properly labelled and traceable. Platforms also often collaborate with fact-checking initiatives. 

7.4. International organisations and global coordination 

One of the key roles of international organisations is to provide a platform for dialogue, where countries can share challenges and exchange good practices among a wide range of stakeholders. When discussions mature, international organisations develop guidelines and frameworks that can help synchronise and align national efforts to combat disinformation, while respecting fundamental rights and freedoms. 

Another important role of international organisations is to facilitate research and data sharing on disinformation trends and impacts. By aggregating and disseminating research findings, they help stakeholders understand the evolving nature of disinformation and the effectiveness of different countermeasures. 

Several bodies and organisations across the UN system have contributed to sharpening the understanding of problems related to information disorder. The UN Global Principles for Information Integrity stand out, as they provide a framework for multistakeholder action against disinformation.

According to the UN Global Principles for Information Integrity, the fight against disinformation should be guided by five key ideas, underpinned by respect for human rights and fundamental freedoms:

Transparency and research. Increased transparency by tech companies and other information providers can enable a better understanding of how information is spread. Ensuring privacy-preserving data access for a diverse range of researchers helps to fill research gaps and inequalities. Academics, journalists and civil society must be protected and supported in carrying out their vital work free from fear or harassment. 

Societal trust and resilience. In this context, trust refers to the confidence that people have in the sources and reliability of the information that they access, including official sources and information, and in the mechanisms that allow information to flow throughout the ecosystem. Resilience refers to the ability of societies to handle disruptions or manipulative actions within the information ecosystem. 

Healthy incentives. This includes addressing the critical implications for information integrity resulting from business models that depend on targeted advertising and other forms of content monetisation. The framework calls for a fundamental shift in incentive structures, and for the adoption of human rights-driven business models that do not rely on algorithmically targeted advertising based on behavioural tracking and personal data.

Public empowerment. People should have control over their online experience, should be able to make informed decisions as to the media they choose to consume, and should be able to express themselves freely. Public empowerment requires consistent access to diverse and reliable sources of information. It also requires tech companies to enhance user control and choice, including interoperability with a range of services from different providers. 

Independent, free, and pluralistic media. A free press is a cornerstone of information integrity and democratic societies. There are ongoing threats to press freedom, such as online and offline harassment of media workers, as well as the migration of advertising revenue to the digital space, dominated by large tech companies. Robust responses should include support for public interest news organisations, journalists, and media workers, and media development assistance. 

Are there any solutions?

The internet has transformed the way we communicate and access information, offering unprecedented opportunities for connection and engagement. However, it has also become a powerful tool for the spread of disinformation, with far-reaching consequences for society. A major challenge lies in harnessing the positive potential of social media while mitigating its risks.

To achieve this balance, a multifaceted approach is necessary—one that combines technological innovation, regulatory oversight, media literacy, and user responsibility. Social media platforms must continue to evolve, implementing robust measures to detect and counter disinformation while ensuring that their algorithms do not inadvertently promote harmful content. At the same time, governments and civil society must work together to create a regulatory environment that promotes transparency and accountability, without stifling free expression. Any action to combat disinformation should be aligned with international human rights law, in order to protect the pillars of democratic societies.

More should be done to curb economic incentives to disinformation. Companies are expected to conduct human rights risk assessments and due diligence, ensuring their business models and operations do not negatively impact human rights. This includes sharing data and information on algorithms, which could make an assessment of the correlation between the spread of disinformation and ‘ad tech’ business models possible. 

Users, too, have a role to play in combating disinformation on social media. By becoming more informed and critical consumers of information, users can reduce the spread of false narratives and help create a more trustworthy online environment. Media and information literacy may help users recognise the signs of disinformation and verify sources. This is essential for empowering users to navigate the digital landscape responsibly.

Ultimately, the future of social media will depend on our collective ability to navigate its complexities and leverage its power for the greater good. By recognising the dual role of social media as both an amplifier of disinformation and a potential solution, we can work towards a digital ecosystem that supports informed, healthy, and democratic societies.
