AI will solve all problems, but can it?

13 Nov 2018 09:00h - 10:30h

Event report

The session discussed artificial intelligence (AI) as a tool for online content moderation and removal, focusing on how to deal with hate speech, disinformation, and extremist content online. While the use of some AI applications is inevitable given the breadth of online content, participants cautioned that AI will not be able to judge context and cultural factors appropriately.

Using an open debate format, the session broke up into smaller groups, each focusing on one of the three topics. The debates were led by experts representing different stakeholder communities: Ms Fanny Hidvegi, Access Now; Mr Jan Gerlach, Wikimedia Foundation; Ms Charlotte Altenhöner-Dion, Council of Europe; and Mr Nicolas Suzor, Queensland University of Technology.

Leading into the discussion, Hidvegi stressed that awareness of illegal and dangerous content online is increasing, and that in a number of debates AI is presented as a silver bullet, a claim that needs closer scrutiny. Suzor emphasised the need to design AI that addresses content issues in a way that respects human rights.

After the discussion, each group reported back on the most relevant pros and cons of AI in addressing hate speech, disinformation, and extremist content online. Regarding hate speech, the group argued that AI-based solutions allow for faster identification of content at a larger scale. Moderation filters on social media can be better designed through the use of AI, and fewer people will be needed to identify hate speech. More critical voices emphasised that, as the definition of hate speech is diverse and evolving, AI cannot be relied on entirely to identify it. This means that humans are still needed, and the question of whether the use of AI is justified from a cost-benefit perspective remains open. Participants acknowledged that civil society needs to be involved in working towards a definition and in developing content policies throughout the process.

Regarding disinformation, it was pointed out that there is a lack of alternatives to AI: humans simply cannot cope with the vast amount of content and the speed at which new content appears. In this sense, AI provides a breadth, depth, and speed that is impossible for humans to match. The group also argued that AI will be crucial in identifying the sources of misinformation. Further, via sorting algorithms, AI can also be utilised to curate – as opposed to delete – content: items could be demoted or promoted depending on algorithmic assessments of the reliability and authority of their sources. Other participants were more critical and pointed out that the technology is not capable of judging context, including cultural context, irony, and satire. AI still faces challenges in the area of language translation and in cross-checking content across linguistic boundaries. Participants also raised concerns about the misuse of AI for de-anonymisation, and the dangers of group profiling and identifying individuals.
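To make the curation-by-ranking idea described above concrete, the following is a minimal sketch in Python. It is purely illustrative: the source names, reliability scores, and scoring formula are invented assumptions, not a description of any platform's actual system. The point it demonstrates is that content from low-reliability sources is demoted in the ordering rather than deleted.

```python
# Illustrative toy example: curating content by demoting, not deleting.
# All source names and reliability scores below are hypothetical.

# Assumed reliability scores per source, on a 0.0-1.0 scale.
SOURCE_RELIABILITY = {
    "established-newspaper.example": 0.9,
    "personal-blog.example": 0.5,
    "known-disinfo-site.example": 0.1,
}

def rank_items(items):
    """Order items by engagement weighted by source reliability.

    Low-reliability sources are surfaced less prominently, but their
    content remains accessible: this is curation, not removal.
    """
    def score(item):
        # Unknown sources get a neutral reliability of 0.5.
        reliability = SOURCE_RELIABILITY.get(item["source"], 0.5)
        return item["engagement"] * reliability

    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "Viral rumour", "source": "known-disinfo-site.example", "engagement": 900},
    {"title": "Investigative report", "source": "established-newspaper.example", "engagement": 400},
    {"title": "Opinion piece", "source": "personal-blog.example", "engagement": 600},
]

for item in rank_items(feed):
    print(item["title"], "-", item["source"])
```

Even this toy version makes the session's concerns visible: the reliability scores themselves encode contested editorial judgements, and nothing in the code can account for context, irony, or satire.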

In the area of extremist content, participants again highlighted the problem of finding and agreeing on a definition. While extreme cases are obvious, there are also grey areas that need careful discussion. Participants pointed to jurisdictional challenges arising from differing national laws, policies, and cultural contexts; the limits of freedom of speech were a prominent example in this respect. Issues of transparency with regard to how platforms make content-related decisions were also raised.

In conclusion, discussants agreed that AI is no silver bullet, that its application to content policy needs to be carefully designed, and that no single solution will apply to everything.

 

By Katharina E. Höne