Twitter starts new initiative to analyse harmful impact of algorithms

Twitter is introducing a new initiative to study the harmful impact of its algorithms: the Responsible Machine Learning initiative. The company stated that responsible machine learning rests on four main pillars: (a) taking responsibility for its algorithmic decisions, (b) equity and fairness of outcomes, (c) transparency about its decisions and how it arrives at them, and (d) enabling agency and algorithmic choice.

The working group behind the project is interdisciplinary, comprising engineers, researchers and data scientists. They are collaborating to assess current unintentional harms in the algorithms Twitter uses. In the coming months, Twitter will release analyses covering (a) gender and racial bias in its image cropping algorithm, (b) a fairness assessment of its home timeline recommendations across racial subgroups, and (c) an analysis of content recommendations for different political ideologies across seven countries.

The Responsible ML project may result in changes to Twitter itself. These could include removing certain algorithms, giving users more control over the images they tweet and, ultimately, new standards for how the company designs and builds policies that have a significant impact on a particular community.