AI tools risk gender bias in women’s health care
Google’s AI model Gemma described men’s health issues more severely than women’s, while Meta’s model showed no gender bias.

AI tools used by more than half of England’s local councils may be downplaying women’s physical and mental health issues. Research from the London School of Economics (LSE) found that Google’s AI model, Gemma, used harsher terms such as ‘disabled’ and ‘complex’ more often for men than for women with similar care needs.
The LSE study analysed thousands of AI-generated summaries of adult social care case notes, with researchers swapping only the patient’s gender in otherwise identical records to reveal disparities.
In one example, an 84-year-old man was described as having a ‘complex medical history’ and ‘poor mobility’, while the same notes with the gender changed suggested the woman was ‘independent’ despite her limitations.
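To make the gender-swap methodology concrete, the sketch below shows one simple way such a counterfactual audit could be structured. It is an illustration only, not the LSE team’s actual pipeline: the severity lexicon, the swap table, and the audit and summarise functions are all assumptions introduced here, and a real audit would plug in the model under test and a far richer coding scheme.

```python
import re
from typing import Callable, Dict

# Hypothetical severity lexicon -- a stand-in for whatever coding the study used.
SEVERITY_TERMS = ("disabled", "complex", "unable", "poor mobility", "high risk")

# One-directional (male -> female) word swaps; a real audit would also handle
# names, further titles and the reverse direction.
SWAPS = {
    "he": "she", "him": "her", "his": "her", "himself": "herself",
    "mr": "mrs", "man": "woman", "male": "female",
}

def swap_gender(note: str) -> str:
    """Flip gendered words in a case note, preserving capitalisation."""
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"

    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped

    return re.sub(pattern, repl, note, flags=re.IGNORECASE)

def severity_score(summary: str) -> int:
    """Count severity-coded terms that appear in a generated summary."""
    text = summary.lower()
    return sum(term in text for term in SEVERITY_TERMS)

def audit(note: str, summarise: Callable[[str], str]) -> Dict[str, int]:
    """Summarise both versions of the note and compare the severity wording."""
    return {
        "original": severity_score(summarise(note)),
        "gender_swapped": severity_score(summarise(swap_gender(note))),
    }

if __name__ == "__main__":
    def identity(text: str) -> str:
        return text  # placeholder for a call to the model being audited

    note = ("Mr Smith, 84, has a complex medical history and poor mobility. "
            "He is unable to cook for himself.")
    print(audit(note, identity))
```

In the study itself, the comparison was run over thousands of summaries generated from real case notes, and the differences in language were analysed systematically rather than with a simple term count.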
Among the models tested, Google’s Gemma showed the most pronounced gender bias, while Meta’s Llama 3 used gender-neutral language.
Lead researcher Dr Sam Rickman warned that biased AI tools risk creating unequal care provision. Local authorities increasingly rely on such systems to ease social workers’ workloads.
Calls have grown for greater transparency, mandatory bias testing, and legal oversight to ensure fairness in long-term care.
Google said the findings would be reviewed, adding that the Gemma model is now in its third generation and is not intended for medical use.