Meta’s Imagine AI faces scrutiny over historical errors

Imagine AI has produced historically inaccurate images, such as Black popes and Asian women placed in settings where they would not have appeared, raising concerns about how to balance creative freedom with accuracy in AI development.

Image: a person's hand holding Meta's infinity logo.

Meta’s Imagine AI image generator has been making historical errors similar to those of Google’s Gemini chatbot, apparently as a side effect of efforts to increase diversity in its output. After Gemini generated controversial images, Google faced a public backlash and a drop in its share price, leading it to pause Gemini’s generation of images of people.

Meta’s Imagine AI, however, has continued to produce problematic results, such as depicting Black popes and Asian women in historically inaccurate settings. Despite attempts to improve these tools, their behaviour remains hard to control, and they stay prone to generating inappropriate content.

The challenge for AI developers lies in balancing creative freedom with accuracy and diversity, without perpetuating biases or introducing historical inaccuracies.

Why does it matter?

These episodes show how tech giants’ efforts to combat bias have inadvertently led to over-corrected, politically cautious outputs. This, however, looks less like the product of any ‘woke’ ideology than a consequence of prioritising growth over thorough testing of their products.

AI experts suggest that deciding what counts as a suitable output is a problem with no easy fix. The issue may be deeply entrenched in training data and algorithms, underscoring the need for continued human oversight of AI systems.