Facebook outlines its work on preventing bias in AI

3 May 2018

At its F8 developer conference, held on 1–2 May in California, Facebook outlined its work on avoiding bias in how its artificial intelligence (AI) systems function. Among other things, the company announced that it has a dedicated team focused on ethics in AI, and that it has developed tools to avoid bias in AI-based decisions. One such tool is Fairness Flow, which can detect potential biases for or against particular groups of people. According to research scientist Isabel Kloumann, cited by CNET, Fairness Flow was initially used to ensure that job recommendations were not biased against certain groups. The company has since integrated the tool into its machine learning platform, and is working on applying it 'to evaluate the personal and societal implications of every product' it builds. Facebook also described, in a blog post by Guy Rosen (Vice-President of Product Management), how it is using AI to detect and remove 'bad content, like terrorist videos, hate speech, porn or violence' from the platform.
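
Facebook has not published Fairness Flow's internals, but the general technique of checking a model for group bias can be sketched. The Python example below is a minimal illustration, with entirely hypothetical data, group labels, and threshold: it compares a classifier's positive-prediction rate (demographic parity) and its true-positive rate (equal opportunity) across demographic groups, and flags any gap larger than a chosen tolerance.

    # Illustrative sketch only: Fairness Flow's internals are not public.
    # All data, group names, and the max_gap threshold are hypothetical.
    from collections import defaultdict

    def rate_by_group(records, predicate):
        """Fraction of records per group for which `predicate` holds."""
        hits, totals = defaultdict(int), defaultdict(int)
        for r in records:
            totals[r["group"]] += 1
            if predicate(r):
                hits[r["group"]] += 1
        return {g: hits[g] / totals[g] for g in totals}

    def fairness_report(records, max_gap=0.1):
        """Flag between-group gaps larger than `max_gap` for two common metrics."""
        # Demographic parity: P(prediction = 1 | group)
        parity = rate_by_group(records, lambda r: r["pred"] == 1)
        # Equal opportunity: P(prediction = 1 | label = 1, group)
        positives = [r for r in records if r["label"] == 1]
        opportunity = rate_by_group(positives, lambda r: r["pred"] == 1)
        for name, rates in [("demographic parity", parity),
                            ("equal opportunity", opportunity)]:
            gap = max(rates.values()) - min(rates.values())
            status = "FLAG" if gap > max_gap else "ok"
            print(f"{name}: {rates} (gap={gap:.2f}, {status})")

    # Hypothetical model outputs for two groups, A and B
    records = [
        {"group": "A", "label": 1, "pred": 1},
        {"group": "A", "label": 0, "pred": 0},
        {"group": "A", "label": 1, "pred": 1},
        {"group": "B", "label": 1, "pred": 0},
        {"group": "B", "label": 0, "pred": 0},
        {"group": "B", "label": 1, "pred": 1},
    ]
    fairness_report(records)

In this toy run, group A receives positive predictions at twice the rate of group B, so both metrics would be flagged; a real evaluation tool would apply checks of this kind to production models and datasets.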
