Countering misinformation online: Policies and solutions

14 Nov 2018 11:45h - 13:15h

Event report


This session considered the rise of misinformation as a threat to democracy and political systems, and addressed the existing mechanisms for tackling fake news. On the one hand, self-regulation by Internet platforms cannot be the solution, because it depends on the platforms’ business purpose (namely profit). On the other hand, stronger regulation by governments is often, in practice, a means to repress freedom of expression and prevent political opponents from criticising the establishment under the pretext of fighting fake news. The session concluded that there is no single solution for regulating misinformation, but all the panellists agreed on the importance of media literacy programmes.

Mr Asad Baig, Media Matters for Democracy, Pakistan, moderated the session and clarified that the workshop focused on the evolution of online content control policies. Addressing disinformation cannot be done with a simple binary approach, namely either no regulation at all or the outright removal of problematic content, because the former is not effective in tackling fake news and the latter violates human rights provisions such as freedom of expression (FoE).

He considered that misinformation spreads easily and can have serious consequences. One example is the numerous lynchings in India that happened as a result of misinformation spread on social media.

He also warned against the use of AI in picture and video manipulation, as it makes it possible to create a new (fake, non-existent) picture of a person by feeding many existing images of that person into an algorithm. He concluded that misinformation is not only spreading through social media but is also affecting traditional media outlets. This is partly due to the profit-based business model which, by putting traditional media outlets in competition with social media, encourages them to publish ‘click-bait’ content quickly. As a consequence, the quality of information suffers, contributing to the erosion of trust in traditional media. He noted that governments have increasingly intervened to regulate fake news (e.g. a government-funded fact-checking body in Pakistan), but this raises questions about the extent to which such restrictions are compatible with the existing human rights framework.

A speaker from the Center for Policy in Sri Lanka talked about the importance of placing misinformation in a country’s context. For example, in Sri Lanka, after three decades of civil war, there is widespread mistrust of political institutions. Moreover, during the last elections in 2015, the Prime Minister highly politicised the media, and information was thus manipulated for specific political purposes. The problem also concerns some major media outlets in the country, which have carried misleading information, such as claims that upper-class people refused to donate blood for fear that it would be given to lower-class people. She concluded by reflecting on how fake news can be tackled when the media are not independent from politicians and when fake news spreads in a country with a low literacy rate.

Baig believes that Sri Lanka’s case is similar to that of the last elections in Pakistan, when fake hashtags were generated to create more resonance around specific issues. Media Matters for Democracy used Trends Monitor and, with the help of an algorithm, found that such hashtags were generated by ‘human bots’, namely fake accounts that publish several tweets per minute. In two months, around 225 political hashtags embedded in 68 million tweets were found. An additional complexity is that such content is created in the local language (e.g. Urdu), so it cannot be found online through a search engine. A further challenge is content spread on WhatsApp, which is not accessible for research yet spreads just as fast.
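As an illustration only (the actual Trends Monitor algorithm was not described in the session), the following minimal Python sketch flags accounts whose posting rate exceeds a threshold of several tweets per minute. The account identifiers, timestamps, and threshold are hypothetical, and a real bot-detection system would combine many more signals.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sample data: (account_id, timestamp) pairs for tweets using a tracked hashtag
tweets = [
    ("acct_1", "2018-07-01 10:00:05"),
    ("acct_1", "2018-07-01 10:00:20"),
    ("acct_1", "2018-07-01 10:00:41"),
    ("acct_2", "2018-07-01 10:03:00"),
]

def flag_high_frequency_accounts(tweets, min_tweets_per_minute=3.0):
    """Flag accounts whose average posting rate suggests automated or
    coordinated behaviour ('several tweets per minute')."""
    by_account = defaultdict(list)
    for account, ts in tweets:
        by_account[account].append(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"))

    flagged = []
    for account, times in by_account.items():
        if len(times) < 2:
            continue  # too little activity to estimate a rate
        times.sort()
        # Avoid division by zero when all tweets share one timestamp
        span_minutes = max((times[-1] - times[0]).total_seconds() / 60, 1 / 60)
        rate = len(times) / span_minutes
        if rate >= min_tweets_per_minute:
            flagged.append((account, round(rate, 1)))
    return flagged

print(flag_high_frequency_accounts(tweets))  # e.g. [('acct_1', 5.0)]
```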

The proposed solution of self-regulation by Internet platforms is simply not effective, because asking Twitter and Facebook to take down a few thousand accounts means outsourcing that activity not only to an American company but, in effect, to a foreign government, namely the American one. Moreover, this approach has proven ineffective in Pakistan for two reasons. First, even when some fake accounts are taken down, many others keep running simultaneously. Second, even though Facebook partnered with a reliable fact-checking agency, misleading content in Urdu was removed while the corrected information was spread only in English.

Ms Roslyn Moore, DW Akademie, talked about the development of media literacy programmes aimed at educating citizens to analyse information, understand media systems, and recognise dysfunction and propaganda. She stated that disinformation is a problem affecting all countries. DW Akademie is currently running 20 media literacy projects in Africa, Asia, and North America. Moreover, in Cambodia and Palestine, media literacy education has been included in school curricula.

Mr Padraig Hughes, Media Legal Defence Initiative, explained that his organisation engages with and assists local lawyers in bringing legal challenges against human rights violations before national courts. In particular, they work on cases in which governments suppress freedom of expression in their fight against fake news. He said that such regulations usually aim at preventing people from criticising the government or exposing corruption of the government or the state.

He reflected that such repressive legislation has spread since the post-2001 ‘war on terror’ and is usually characterised by vague and overly broad provisions in manifest violation of well-established international freedom of expression standards. He further specified that in a democratic country, any restriction of FoE must pass a three-part test: it must be provided by law, pursue a legitimate aim, and be necessary and proportionate.

 

By Marco Lotti