Assessing the Role of Algorithms in Electoral Processes

Session ID: Workshop 112

[Read more session reports and updates from the 14th Internet Governance Forum]

There is broad consensus that exposure to online disinformation in the context of political processes brings new challenges to democracy. However, it is hard to keep track of the extent to which disinformation has proliferated, or the magnitude of the effect it has on citizens.

Disinformation in political processes such as elections is a long-standing phenomenon. Ms Lorena Jaume-Palasi (The Ethical Tech Society) pointed out that it is closely linked to what scholars studied as propaganda in the pre-Internet era. At the beginning of the 20th century, Harold Dwight Lasswell developed a theory suggesting that an intended message is directly received and wholly accepted by the receiver (the hypodermic needle model). His model was very influential in the study of media effects at the time, but nowadays there is wide academic consensus that it does not hold. There is no scientific evidence to support the claim that an individual will change their mind on a given topic simply by being exposed to a piece of information (be it false or accurate).

Nevertheless, there is still a lot we do not understand about the effects of disinformation during elections. For instance, to what degree is disinformation being spread? Even if citizens do not change their voting behaviour due to online disinformation, what effect does it have on democratic values in general? It might bring forth new problems such as political distrust, political disengagement, and confusion over basic facts, among others.

Electoral processes in democratic countries currently seem unprepared to deal with the many issues associated with new technologies. The lack of transparency around the algorithms used by large social media platforms makes it impossible for governments and civil society to understand the full extent of the problem, let alone attempt to monitor and audit such platforms. Jaume-Palasi also pointed out that large tech companies are not the only ones profiting from non-transparent business practices in the realm of algorithms. For instance, there are French, Italian, and German companies exporting algorithmic technologies to African countries that might alter their electoral processes; these activities are often not publicly reported.

During the session, concern was raised not only about the lack of transparency within the private sector, but also about interference by foreign governments in elections and democratic processes. It was argued that this is an issue requiring international attention.

So, what should governments do? Mr Chris Marsden (University of Sussex) suggested a number of good practices for governments to implement in order to deal with online disinformation. First and foremost, in-depth training in media literacy is necessary. Additionally, strong human-review and appeal processes are needed wherever political processes are mediated by automated decision-making. Finally, states urgently need to develop multistakeholder entities that can independently audit large tech companies.

By Paula Szewach
