Introducing FACET: A benchmark for fairness in computer vision evaluation

FACET’s primary goal is to assess the fairness of AI models that classify and detect objects, including people, in photos and videos.

Meta today released a new AI benchmark, FACET, designed to tackle bias within computer vision systems. FACET is an acronym for ‘FAirness in Computer Vision EvaluaTion.’

What is a computer vision system? It is a technology that allows computers to interpret and understand visual information from the world, much like the human visual system. It enables computers to process and make sense of images and videos by identifying objects, recognising patterns, and extracting valuable information from visual data.

What does it do?

FACET is a dataset of 32,000 images containing roughly 50,000 people labelled by human annotators, which developers can use to test whether computer vision systems are biased. Its core objective is to give developers meaningful benchmarks for bias detection, with a particular focus on gender- and race-related biases. Meta claims the tool can surface many kinds of computer vision bias, including gender, race, age, disability, and cultural bias, and that FACET is more thorough than any computer vision bias benchmark that came before it.
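To illustrate how a benchmark like this is typically used, here is a minimal sketch of comparing a model’s accuracy across annotated demographic groups. The function, group names, and toy data are hypothetical illustrations, not FACET’s actual API or schema:

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute per-group accuracy for parallel lists of model
    predictions, ground-truth labels, and annotated group attributes."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy example: a classifier that performs worse on one group.
preds  = ["skater", "skater", "doctor", "nurse"]
labels = ["skater", "skater", "doctor", "doctor"]
groups = ["group_a", "group_a", "group_b", "group_b"]

per_group = accuracy_by_group(preds, labels, groups)
# A large gap between the best- and worst-served groups flags
# a potential bias worth investigating.
disparity = max(per_group.values()) - min(per_group.values())
```

A benchmark such as FACET supplies the human-verified labels and group annotations; the developer supplies the model predictions and then inspects gaps like the disparity above.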

How was it built?

To build FACET, Meta had a team of annotators label each of the 32,000 images with details about the people in them, such as perceived gender and age. They also noted attributes like skin tone, lighting, tattoos, head and eye coverings, hairstyle, and facial hair. These details were then combined with other person-related labels from Meta’s large Segment Anything 1 Billion dataset.
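The attributes described above can be pictured as one record per labelled person. The field names and values below are illustrative guesses at such a schema, not FACET’s actual data format:

```python
from dataclasses import dataclass

@dataclass
class PersonAnnotation:
    """One human-labelled person in an image.

    Field names are hypothetical, chosen to mirror the attribute
    categories described in the article."""
    perceived_gender: str
    perceived_age_group: str
    skin_tone: str
    lighting: str
    hairstyle: str
    facial_hair: bool
    head_covering: bool
    eyewear: bool
    tattoos: bool

# Example record for a single annotated person.
ann = PersonAnnotation(
    perceived_gender="woman",
    perceived_age_group="middle",
    skin_tone="medium",
    lighting="well_lit",
    hairstyle="curly",
    facial_hair=False,
    head_covering=True,
    eyewear=False,
    tattoos=False,
)
```

Storing the attributes as structured records like this is what makes slicing a model’s results by gender, age, or skin tone straightforward during evaluation.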

Why does it matter?

  • Addressing bias in AI algorithms is a critical concern. While tools for detecting it have been in development for some time, ongoing efforts are essential to create fair and unbiased AI systems.
  • FACET is a continuation of Meta’s open-source approach. This, in turn, means that anyone can look under its bonnet to evaluate its capabilities and address vulnerabilities.