Robots against disinformation: Automated trust building?

12 Nov 2020 13:20h - 14:20h

Event report

The session, moderated by Mr Christian Perrone (Lawyer, Institute of Technology and Society of Rio de Janeiro), focused on online disinformation campaigns carried out through social bots, and on how to address the imbalances they cause in public debate. It explored the possibility of using the same tools to automate the countering of online disinformation, and highlighted the main challenges and opportunities of using helpful bots to fight harmful bots.

The first part of the session addressed how different stakeholders involved in the issue of automated disinformation leverage social bots. Mr Christopher Tuckwood (Executive Director, Sentinel Project) drew on the work of the Sentinel Project in conflict-affected countries. Tuckwood argued that if malicious actors use automation to push manipulative, fabricated content to large audiences, the only practical way to counter them is to use similar approaches, at least for monitoring and identifying such trends. Mr Jan Gerlach (Public Policy Manager, Wikimedia Foundation) detailed the impact of bots on the management of Wikipedia, showing how they are helpful for efficiency, quality assurance, and the flagging of copyright violations. Wikipedia also deploys machine-learning tools, such as the Objective Revision Evaluation Service (ORES), deployed on several language versions of Wikipedia, which flags potentially erroneous edits and articles to the community of editors. Ms Jenna Fung (Community Engagement Lead, DotAsia) argued that bots can increase the accuracy of fact-checking mechanisms. Drawing on recent examples from Hong Kong, Fung showed how bots can indeed help flag misinformation and false information on social media platforms. Ms Debora Albu (Programme Coordinator, Institute of Technology and Society of Rio de Janeiro) presented PegaBot, a recent project of the institute that creates more transparency around bot usage in Brazil: on the platform, users can check a social media account's activity and the probability that it is a bot.
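
To illustrate how a scoring service of this kind can be used programmatically, the following is a minimal Python sketch that queries ORES's public v3 REST API for the 'damaging' model on a single English Wikipedia revision. The endpoint and response structure follow ORES's documented API; the revision ID is a placeholder, and error handling is kept to a minimum.

    # Minimal sketch: ask ORES how likely a given English Wikipedia
    # revision is to be damaging. The revision ID below is an arbitrary
    # placeholder, not a real flagged edit.
    import requests

    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"

    def damaging_probability(rev_id: int) -> float:
        """Return the ORES-estimated probability that a revision is damaging."""
        resp = requests.get(
            ORES_URL,
            params={"models": "damaging", "revids": rev_id},
            timeout=10,
        )
        resp.raise_for_status()
        # Response shape: {"enwiki": {"scores": {"<rev_id>": {"damaging": {"score": ...}}}}}
        score = resp.json()["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
        return score["probability"]["true"]

    if __name__ == "__main__":
        rev = 123456789  # placeholder revision ID
        print(f"P(damaging) for revision {rev}: {damaging_probability(rev):.2f}")

In practice, community tools consume such scores as one signal among several, leaving the final decision about an edit to human editors.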

The second part of the session focused on how approaches to these challenges differ depending on the content at stake and the context in which the tools are used. Gerlach explained that, in the context of Wikipedia, the ultimate goal is to provide verifiable and trustworthy information from a neutral point of view. Social bots can free up capacity for editors, and can also be part of the mix of tools and tactics used to address disinformation campaigns. Tuckwood argued that, at this point, the greatest value of the technology lies in better monitoring and recognising disinformation and hate speech for early warning, rather than in direct intervention. The Sentinel Project addresses this through Hatebase, a multilingual automated system for recognising hate speech online on platforms such as Twitter. Albu argued that such tools can indeed contribute to countering online disinformation, but must be accompanied by improved media literacy, transparency, and access to reliable information.
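
As a simplified illustration of the early-warning monitoring approach described above, the sketch below scans posts for terms from a small multilingual lexicon. This mirrors the general keyword-based technique behind lexicon projects such as Hatebase, but the lexicon entries, posts, and matching logic here are invented for illustration and do not come from Hatebase itself.

    # Toy lexicon-based monitoring: flag posts containing dehumanising
    # terms from a multilingual word list. All entries and posts are
    # invented placeholders, not Hatebase data.
    import re

    LEXICON = {
        "vermin": "en",     # a real lexicon would hold thousands of terms,
        "cucaracha": "es",  # each annotated with language, meaning, and
        "cafard": "fr",     # offensiveness metadata
    }

    def flag_post(text: str) -> list[str]:
        """Return lexicon terms found in a post (case-insensitive, whole words)."""
        hits = []
        for term in LEXICON:
            if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
                hits.append(term)
        return hits

    posts = [
        "These people are vermin and should leave.",  # invented example
        "What a lovely day in Geneva.",
    ]
    for post in posts:
        matches = flag_post(post)
        if matches:
            print(f"FLAGGED ({', '.join(matches)}): {post}")

Keyword matching of this kind is deliberately coarse: it serves as an early-warning filter that surfaces candidate content for human review, which is consistent with the monitoring-first, intervention-last approach described by the speakers.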