Google rushes to fix AI image tool amid bias concerns

Users flagged instances where the tool produced historically inaccurate images, prompting Google to acknowledge the issue and pledge swift improvements.


Google is urgently working to fix its new AI-powered image generation tool, Gemini, amid concerns that it overcorrects in an effort to avoid racial bias. Users criticized the tool for depicting genders and ethnicities inaccurately, such as showing women and people of color when asked for images of America’s founding fathers.

Google acknowledged the issue, stating that while Gemini generates a diverse range of people, it was “missing the mark” in this context. The company pledged to improve these depictions promptly and temporarily suspended the tool’s ability to generate images of people while it addresses the problem.

Critics, particularly in right-wing circles in the US, accused the company of being overly politically correct. Google emphasized its commitment to representation and bias mitigation, promising to adjust the tool based on feedback so that it better reflects its global user base and historical contexts.

Why does it matter?

This isn’t the first time AI has faced criticism over diversity issues: Google previously apologized after its photo app mislabeled Black individuals as “gorillas,” and OpenAI has also faced accusations of perpetuating harmful stereotypes. Experts note that AI image generators struggle with diversity because their training data is biased and lacks representation from all backgrounds. Google’s commitment to fixing this in Gemini before re-enabling the feature is a positive step towards fair and inclusive representation.