Public diplomacy v. disinformation: Are there red lines?

27 Nov 2019 09:30h - 11:00h

Event report


The pressing issue of disinformation has created new challenges. The panellists agreed that while disinformation is not a new phenomenon, its scale and its effect on society, particularly on democratic societies, now go well beyond what has been dealt with in the past, and new approaches are needed to address its consequences. The focus of this session was to discuss where, if at all, the line can be drawn between public diplomacy and illegitimate interference.

To determine where to draw the line, some basic definitions had to be discussed first. First, what is disinformation? Mr Felix Kartte (Intergovernmental Organization, Western Europe and Others) pointed out that many scholarly definitions are content-based: they describe disinformation operators as those who deliberately spread false content. Kartte argued that such definitions are not sufficient for detecting disinformation, because the potential threats are much broader than the spread of false content. They include, for instance, the manipulation of divisive domestic debates in ways that do not necessarily involve content that fact-checkers could debunk. He pointed to NewsGuard, which applies useful criteria for detecting disinformation: it rates news websites against journalistic criteria, against a number of factors related to the website itself, and against the DIDI diagnosis (Deception, Intention, Disruption, Interference).


Ms Marilia Maciel (Digital Policy Senior Researcher) added that it is useful to distinguish three phases of information disorder, following the framework proposed in a Council of Europe report: the creation of information, the reproduction of information (transforming an idea into media that is consumed online), and the dissemination and further replication of information. The motives of the actors behind each of these phases are often very different and can range from political to economic, and it is necessary to consider what is driving the spread of disinformation in order to tackle the problem. Mr Goetz Frommholz (Open Society Foundation) highlighted the importance of not focusing on single instances of false information, but rather of placing disinformation in the larger context of stories. Stories and the construction of narratives are what ultimately change perceptions and values within societies.

Given these considerations regarding disinformation operations, what constitutes public diplomacy and what constitutes illegitimate interference? The main point of consensus throughout the session was that transparency is key to the lawful policing of actors at any level. Maciel mentioned the need to apply the principles of international law and to update them to this current phenomenon. She also stressed the relevance of stability for preserving security in cyberspace. Guaranteeing stability in the context of elections would entail securing the technical infrastructure that enables democratic processes (electronic ballots, software). According to a norm developed by the Global Commission on the Stability of Cyberspace, state and non-state actors should not pursue, support, or allow cyber operations intended to disrupt the technical infrastructure essential to elections, referenda, or plebiscites.

Finally, the session discussed frameworks and criteria that could help the relevant actors understand how to counter disinformation legitimately and how to build democratic resilience without overstepping their institutional boundaries. However, the issue holds many more complexities. As Frommholz put it: ‘We still need to understand a lot more about the connection between online behaviour and offline behaviour.’

By Paula Szewach