WhatsApp’s AI image generator shows boy holding a gun when prompted with ‘Palestine’

Users have criticized Meta for enforcing biased moderation policies, which they consider censorship.


WhatsApp’s AI image generator is facing scrutiny after it allegedly returned images of guns, or of boys holding guns, in response to prompts such as ‘Palestinian,’ ‘Palestine,’ or ‘Muslim boy Palestinian.’

In contrast, prompts for ‘Israeli boy’ returned images of children engaged in harmless activities such as playing soccer or reading, with no guns depicted. Similarly, prompts related to the ‘Israel army’ generated not guns but cartoon illustrations of soldiers smiling and praying.

The Guardian independently verified the issue, and Meta employees have reportedly raised concerns about it internally. Results varied among users, suggesting potential bias in the AI-generated images.

Meta-owned WhatsApp encourages users to ‘create a sticker’ with its AI image generator. The alleged bias in the results, however, raises concerns about the accuracy and appropriateness of the output. It remains unclear how the algorithm produces these results or whether specific biases are built into it.

Why does it matter?

An earlier controversy emerged when Instagram’s translation feature reportedly rendered ‘Palestinian’ followed by the phrase ‘Praise be to Allah’ as ‘Palestinian terrorist.’ Meta apologized, attributing the error to a glitch. These incidents, together with the allegations of biased image results, fuel criticism of Meta from Palestinian creators, activists, and journalists, particularly during periods of escalated violence against Palestinians in Gaza and the West Bank.

The issue also echoes criticism Meta has faced from Instagram and Facebook users who accuse the company of employing biased moderation policies that amount to censorship. Users claim their posts have been hidden from others without explanation, leading to a significant drop in engagement. Meta has previously acknowledged that during ongoing conflicts, content that does not violate its policies may be unintentionally removed because of the higher volume of reported content.