Why AI systems privilege Western perspectives: ‘The Silicon Gaze’
As AI tools influence healthcare and employment, evidence of Western-centric bias intensifies calls for stronger accountability and global AI governance frameworks.
A new study from the University of Oxford argues that large language models reproduce a distinctly Western hierarchy when asked to evaluate countries, reinforcing long-standing global inequalities through automated judgment.
Analysing more than 20 million English-language responses from ChatGPT’s GPT-4o mini model, researchers found consistent favouring of wealthy Western nations across subjective comparisons such as intelligence, happiness, creativity, and innovation.
Low-income countries, particularly across Africa, were systematically placed at the bottom of rankings, while Western Europe, the US, and parts of East Asia dominated positive assessments.
According to the study, generative models rely heavily on data availability and dominant narratives, leading to flattened representations that recycle familiar stereotypes instead of reflecting social complexity or cultural diversity.
The researchers describe the phenomenon as the ‘silicon gaze’, a worldview shaped by the priorities of platform owners, developers, and historically uneven training data.
Because large language models are trained on material produced within centuries of structural exclusion, bias emerges not as a malfunction but as an embedded feature of contemporary AI systems.
The findings sharpen global debates around AI governance, accountability, and cultural representation, particularly as such systems increasingly influence healthcare, employment screening, education, and public decision-making.
While models are continuously updated, the study underlines the limits of technical mitigation without broader political, regulatory, and epistemic interventions.
