Google’s new AI sparks concerns over emotion detection

PaliGemma 2, Google’s latest AI innovation, claims to analyse emotions through images, but experts and researchers warn about the risks of misuse and the scientific uncertainty surrounding emotion recognition.


Google’s newest AI, the PaliGemma 2 model, has drawn attention for its ability to interpret emotions in images, a feature unveiled in a recent blog post. Unlike basic image recognition, PaliGemma 2 offers detailed captions and insights about people and scenes. However, its emotion detection capability has sparked heated debates about ethical implications and scientific validity.

Critics argue that emotion recognition is fundamentally flawed, relying on outdated psychological theories and subjective visual cues that fail to account for cultural and individual differences. Studies have shown that such systems often exhibit biases, with one report highlighting how similar models assign negative emotions more frequently to certain racial groups. Google says it performed extensive testing on PaliGemma 2 for demographic biases, but details of these evaluations remain sparse.

Experts also worry about the risks of releasing this AI technology to the public, citing potential misuse in areas like law enforcement, hiring, and border control. While Google emphasises its commitment to responsible innovation, critics like Oxford's Sandra Wachter caution that without robust safeguards, tools like PaliGemma 2 could reinforce harmful stereotypes and discriminatory practices. The debate underscores the need for a careful balance between technological advancement and ethical responsibility.