Fake News, AI Trolls, and Disinformation: How can the Internet Community Deal with Poison in the System (WS68)

Session: WS68 

19 Dec 2017 - 15:00 to 16:00

Report

[Read more session reports and live updates from the 12th Internet Governance Forum]

The moderator, Mr Chris Doten, Chief Innovation Officer, National Democratic Institute (NDI), started the session by outlining the core points of the discussion, and invited the panellists to introduce themselves and share their experience with the session topic.

Mr Matt Chessen, Digital Diplomacy Team Member, US State Department, expressed concern that many emerging artificial intelligence tools could dramatically amplify the impact of computational propaganda. However, he emphasised that this conversation, and any possible solutions, must focus on preserving freedom of expression and avoiding censorship. He argued that governments should not use disinformation or other tools of manipulation, and suggested alternative approaches for governments:

  • Having a credible message based on facts and evidence that acknowledge underlying grievances.
  • Partnering with credible independent and trusted messengers, especially local messengers.
  • Using technology to identify the audiences they are trying to reach, and effective approaches for reaching them.
  • Using analytics to evaluate the effectiveness of messages, and feeding that information back into the communication process.

Mr Chessen noted that many people consume disinformation because they find it emotionally pleasing. Truthful information therefore needs to be packaged in a way that is more emotionally appealing than disinformation.

Chessen also mentioned the economic dimension of the spread of fake news. It is very cheap to produce lies and disseminate them through the Internet, while producing high-quality news and information costs money. One possible solution would be to tax platforms that systematically propagate disinformation, once it has been proven that the disinformation has caused harm.

Ms Alina Polyakova, Brookings Institution, focused on the evolution of the tools of political warfare. Looking at Ukraine, Georgia, and other post-Soviet states, it is clear that the Russian government, among others, is heavily invested in tools of propaganda. These tools are becoming cheaper and more easily deployed, while at the same time becoming much harder to detect, not just for civil society actors, investigative researchers and reporters, and governments, but also for the platforms themselves: Twitter, Facebook, and Google.

Besides this, Ms Polyakova touched on the question of how to maintain openness, democratic values, freedom of expression, and freedom of the press, while still allowing voices of dissent. She emphasised that there is a lot that civil society and independent media can do to counter misinformation, including participating in shaping the regulatory environment.

Mr Donatien Niyongendako, DefendDefenders, emphasised the importance of discussing fake news and finding possible solutions, especially for his region: in African countries, fake news has become a constant feature of election processes. There is a need to identify what practices can be adopted in the face of fake news, and how to counteract disinformation. It is no less important to understand what objectives fake news pursues, and who its sponsors are.

Mr Samuel Woolley, Director, Digital Intelligence Lab at the Institute for the Future, identified three main types of computational propaganda attacks:

  • Attacks that target specific social groups online.
  • Back-end and front-end social attacks, which occur not only through the front end of sites, by sharing fake news or disinformation, but also through attempts to manipulate the algorithms that determine trends or what appears in the newsfeed.
  • Government-sponsored trolling through the use of bots, which often results in chilling effects or censorship, especially of journalists.

Woolley noted that research is underway to determine the actual effects of bots used in computational propaganda attacks.

In conclusion, Mr Doten thanked the speakers and attendees, and invited them to participate in further discussions.

By Nazgul Kurmanalieva
