AI ‘is not a silver bullet’ for moderating online content, report stresses

A report on the ‘Use of AI in online content moderation’, prepared by Cambridge Consultants and commissioned by the UK Office of Communications (Ofcom), concludes that artificial intelligence (AI) ‘shows promise in moderating online content, but raises some issues’. On the one hand, the report notes that AI can have a significant impact on the content moderation workflow. The technology can be used to improve the pre-moderation stage and to flag content for review by humans, thus increasing moderation accuracy. It can also be used to synthesise training data to improve pre-moderation performance. Moreover, AI can assist human moderators by increasing their productivity and reducing the potentially harmful effects of content moderation on individual moderators.

On the other hand, the report stresses that AI ‘is not a silver bullet’ for content moderation, as it also raises a number of issues, such as unintentional bias and a lack of transparency in how decisions are made. The report warns that ‘even if it can be successfully applied, there will be weaknesses which will be exploited by others to subvert the moderation system’. The report also highlights several policy implications, noting, for example, that the availability of online content moderation services from third-party providers should be encouraged, and that a better understanding is needed of how AI-based content moderation performs across individual platforms and moderation services.
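The pre-moderation workflow the report describes can be pictured as a simple triage step: an AI model scores incoming content, confidently classified items are handled automatically, and uncertain items are routed to human moderators. The sketch below is purely illustrative and is not taken from the report; the thresholds, the `score_fn` model and all other names are hypothetical assumptions.

```python
# A minimal sketch (assumptions, not the report's design) of AI-assisted
# pre-moderation triage: the model scores each item, clearly benign content
# is published, clearly harmful content is blocked, and uncertain cases are
# flagged for human review.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ModerationDecision:
    content_id: str
    harm_score: float   # model's estimated probability that the item is harmful
    action: str         # "publish", "block" or "human_review"


def pre_moderate(
    items: List[dict],
    score_fn: Callable[[str], float],   # hypothetical AI model returning P(harmful)
    block_threshold: float = 0.95,
    publish_threshold: float = 0.05,
) -> List[ModerationDecision]:
    """Route each item based on the model's confidence.

    Only items the model is unsure about are sent to human moderators,
    which is how AI can raise throughput while keeping humans in the loop.
    """
    decisions = []
    for item in items:
        score = score_fn(item["text"])
        if score >= block_threshold:
            action = "block"
        elif score <= publish_threshold:
            action = "publish"
        else:
            action = "human_review"
        decisions.append(ModerationDecision(item["id"], score, action))
    return decisions
```

The interesting design choice such a pipeline forces is where to set the two thresholds: widening the uncertain band increases the human review workload, while narrowing it shifts more weight onto an imperfect model, which is one concrete form of the trade-off the report points to.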