In its most recent Community Standards Enforcement Report, Facebook announced a significant increase in the amount of content removed for breaching its rules on hate speech: the number of pieces of content removed rose from 5.7 million in the fourth quarter of 2019 to 9.6 million in the first quarter of 2020. The company attributes this increase to better-performing artificial-intelligence-powered technology: 'On Facebook, we continued to expand our proactive detection technology for hate speech to more languages, and improved our existing detection systems. Our proactive detection rate for hate speech increased by more than 8 points over the past two quarters totalling almost a 20-point increase in just one year. As a result, we are able to find more content and can now detect almost 90% of the content we remove before anyone reports it to us.'

In a related update, Facebook announced that it is now sharing a data set aimed at helping researchers develop new systems to identify multimodal hate speech: content that combines different modalities, such as text and image, which makes it difficult for technological tools to classify as hate speech.