Deliberating Governance Approaches to Disinformation

Report


Session:
Workshop 218

[Read more session reports and updates from the 14th Internet Governance Forum]

The session assessed the trade-offs of various recent European policy instruments that address disinformation; the session also debated the role of the different stakeholders involved in shaping and evaluating policies directed towards disinformation. Mr Max Senges (Private Sector, Western European and Others Group (WEOG)) launched the debate on content regulation based on the deliberative polling experiment that was conducted at the IGF in 2015 and 2016.

A representative sample of individuals rated their opposition to or support for a set of statements on a scale from 0 to 10, which gauged the sample members’ understanding of the subject matter. After deliberation, participants took the same survey again and researchers noted the differences. The ultimate aim was to move the dialogue beyond general consensus statements, to assess the pros and cons of specific proposals, and to clarify where better understanding and room for policy action were possible.

The briefing materials of the deliberative poll on disinformation at this year’s IGF were submitted to the audience for discussion. The materials summarised the three main policy responses adopted by European governments to tackle disinformation: self-regulatory codes of practice, direct regulation of online content, and the development of ad hoc legislation. The preliminary results of the survey showed that respondents favoured platforms taking responsibility for removing hateful content on their own over specific regulation obliging them to do so. Moreover, participants seemed positive towards the concept that platforms should be subject to a duty of care principle, as is the case in recent UK legislation.

Three main contributions arose from the floor. First, it was stressed that when addressing policy responses to disinformation, we should distinguish between individual disinformation and industrial disinformation; the latter describes a broader phenomenon in which, for example, automated bots spread vast amounts of false information. Second, a debate arose on whether the discussion should also challenge the business models motivating online platforms, since such models ‘make it easy to manipulate content’ and ‘make disinformation a business model in itself’. Comments from participants ranged from demanding more transparency from platforms regarding their business models (e.g. the collection and use of data and the functioning of algorithms) to an invitation to completely rethink the core of platform business models.

Third, other comments were made on the survey design. It was said that the questionnaire should also include statements covering users’ responsibility for how platforms are used. Policies addressing content moderation are tricky in nature, as they may touch upon the enjoyment of user rights such as freedom of expression. In response, Ms Vidushi Marda (Civil Society, Asia-Pacific Group) argued that freedom of expression is not an absolute right; rather, it is provided and regulated by law. More clarity is needed on platforms’ existing community guidelines, which often lack transparency.

The session concluded by recalling the project We, the Internet, launched by Missions Publiques last year. This global project aims to make citizens’ voices and perspectives heard. Using the deliberative polling technique, around one hundred participants in each partner country were invited to join the dialogue on digital identity, disinformation, and Internet governance.

By Marco Lotti
