Facebook talks about using both AI and human expertise to tackle online terrorism content

15 Jun 2017

In a recent blog post signed by Monika Bickert, Facebook’s Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, the company explains how it uses artificial intelligence (AI) to ‘keep terrorist content off Facebook’. Facebook currently applies AI to terrorist content related to ISIS, Al Qaeda, and their affiliates, and plans to expand to other organisations soon. Examples of such uses include image matching, understanding language that might be advocating terrorism, and identifying and removing terror clusters. But Facebook also admits that ‘AI can’t catch everything’ and that human expertise is still needed to understand more nuanced cases of possible terrorist content.

Explore the issues

One of the main sociocultural issues is content policy, often addressed from the standpoints of human rights (freedom of expression and the right to communicate), government (content control), and technology (tools for content control). Discussions usually focus on three groups of content: content for which there is a global consensus on control (such as child abuse material and incitement to or organisation of terrorist acts); content that is sensitive for particular countries, regions, or ethnic groups because of religious and cultural values; and political and ideological censorship on the Internet.
