Zuckerberg: Facebook algorithms ‘will identify terrorists’

In a letter discussing the future of Facebook, the social media platform’s CEO Mark Zuckerberg sketched out a plan to let artificial intelligence software review content posted on the social network. Zuckerberg writes that the software’s algorithms will be able to identify ‘terrorism, violence, bullying and even prevent suicide’. Although artificial intelligence might already be able to handle some of these cases in 2017, ‘others will not be possible for many years’. The plan would also allow users to filter their own content, within the scope of the law: ‘Where is your line on nudity? On violence? On graphic content? On profanity? What you decide will be your personal settings’, and ‘For those who don’t make a decision, the default will be whatever the majority of people in your region selected, like a referendum’.
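
Zuckerberg’s letter does not say how such settings would work in practice, but the idea of personal content settings that fall back to a regional majority can be sketched in a few lines. The snippet below is purely illustrative: the category names, the ‘show’/‘hide’ values and the function names are assumptions for the sake of the example, not anything Facebook has published.

```python
from collections import Counter

# Hypothetical illustration only: the data model and names below are assumed,
# mirroring the idea of personal content settings that fall back to the
# regional majority ('like a referendum') when a user makes no choice.

CATEGORIES = ["nudity", "violence", "graphic_content", "profanity"]

def regional_default(region_settings, category):
    """Return the most common explicit choice for a category in a region."""
    votes = Counter(
        user[category] for user in region_settings if category in user
    )
    if not votes:
        return "hide"  # assumed conservative fallback when nobody has chosen
    return votes.most_common(1)[0][0]

def effective_setting(user_settings, region_settings, category):
    """A user's own choice wins; otherwise use the regional majority."""
    if category in user_settings:
        return user_settings[category]
    return regional_default(region_settings, category)

# Example usage with made-up data.
region = [
    {"nudity": "hide", "violence": "show"},
    {"nudity": "hide"},
    {"violence": "hide", "profanity": "show"},
]
alice = {"profanity": "hide"}  # Alice made one explicit choice
print(effective_setting(alice, region, "profanity"))  # 'hide' (her own setting)
print(effective_setting(alice, region, "nudity"))     # 'hide' (regional majority)
```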