Twitter to pay hackers to find bias in automatic image crops

Twitter announced a competition inviting hackers and researchers to identify biases in its image cropping algorithm. The platform hopes that giving teams access to its code and image cropping model will let them find ways the algorithm could be harmful, including cropping in a way that stereotypes or erases the image's subject. Twitter is looking for both unintentional and intentional biases and harms. It defines unintentional harm as problematic crops that result from a well-intentioned user posting a regular image on the platform, whereas intentional harm is problematic cropping behavior that could be exploited by someone posting maliciously designed images.
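Twitter has said its cropping model works by predicting a saliency map (an estimate of where people look first) and cropping around the highest-scoring region. The sketch below is a minimal, hypothetical illustration of that idea, not Twitter's actual code: given a 2-D array of saliency scores, it slides a fixed-height window down the image and returns the window with the largest total saliency. Bias can enter such a system whenever the saliency model scores some subjects higher than others, since the lower-scoring subject simply gets cropped out.

```python
import numpy as np

def best_crop_row(saliency, crop_height):
    """Return the top row of the fixed-height crop window with the
    highest total saliency.

    Illustrative only: Twitter's real model predicts saliency with a
    neural network; here `saliency` is just a 2-D array of scores.
    """
    row_scores = saliency.sum(axis=1)  # total saliency per row
    # Sum of each sliding window of `crop_height` consecutive rows.
    window = np.convolve(row_scores, np.ones(crop_height), mode="valid")
    return int(np.argmax(window))      # top edge of the best window

# Toy example: a tall image whose salient region sits near the bottom.
sal = np.zeros((10, 4))
sal[7:9, :] = 1.0                      # "interesting" content at rows 7-8
top = best_crop_row(sal, crop_height=4)
print(top)                             # crop spans rows top .. top+3
```

If two faces appear in the same tall image, whichever one the saliency model scores higher wins the window, and the other is cut off entirely. That is exactly the class of behavior the competition asks participants to probe.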