Why AI systems privilege Western perspectives: ‘The Silicon Gaze’

A new study from the University of Oxford argues that large language models reproduce a distinctly Western hierarchy when asked to evaluate countries, reinforcing long-standing global inequalities through automated judgment.

Analysing more than 20 million English-language responses from ChatGPT’s GPT-4o mini model, researchers found that it consistently favoured wealthy Western nations in subjective comparisons such as intelligence, happiness, creativity, and innovation.

Low-income countries, particularly across Africa, were systematically placed at the bottom of rankings, while Western Europe, the US, and parts of East Asia dominated positive assessments.

According to the study, generative models rely heavily on data availability and dominant narratives, leading to flattened representations that recycle familiar stereotypes instead of reflecting social complexity or cultural diversity.

The researchers describe the phenomenon as the ‘silicon gaze’, a worldview shaped by the priorities of platform owners, developers, and historically uneven training data.

Because large language models are trained on material shaped by centuries of structural exclusion, bias emerges not as a malfunction but as an embedded feature of contemporary AI systems.

The findings intensify global debates around AI governance, accountability, and cultural representation, particularly as such systems increasingly influence healthcare, employment screening, education, and public decision-making.

While models are continuously updated, the study underlines the limits of technical mitigation without broader political, regulatory, and epistemic interventions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How Microsoft is shaping UN reform through digital infrastructure

Microsoft has announced a multi-year pledge to support the United Nations’ UN80 reform initiative, positioning AI and digital infrastructure as central tools for modernising multilateral governance.

The commitment follows agreement among all UN member states on efficiency and financial-stability measures, as the organisation faces growing operational demands amid constrained resources.

The initiative includes a dedicated innovation fund, preferential pricing for digital services, and free AI training for UN staff across agencies and missions.

Rather than focusing on policy direction, Microsoft frames its role as enabling institutional capacity, from procurement and logistics to humanitarian response and development planning, while encouraging other private-sector actors to align behind UN80 priorities.

Microsoft also plans to mobilise partners such as EY to support reform efforts, reinforcing a model where large technology firms contribute expertise, infrastructure, and coordination capacity to global governance systems.

Previous collaborations with UNICEF, UNHCR, ITU, and the ILO are cited as evidence that AI-driven tools can accelerate service delivery at scale.

The pledge highlights how multilateral reform increasingly depends on private technological ecosystems instead of purely intergovernmental solutions.

As AI becomes embedded in the core operations of international institutions, questions around accountability, influence, and long-term dependency are likely to shape debates about the future balance between public authority and corporate power.

ChatGPT introduces age prediction to strengthen teen safety

OpenAI is introducing new safeguards in ChatGPT that use age prediction to identify accounts likely to belong to under-18s. Flagged accounts receive extra protections that limit exposure to harmful content, while adults retain full access.

The age prediction model analyses behavioural and account-level signals, including usage patterns, activity times, account age, and stated age information. OpenAI says these indicators help estimate whether an account belongs to a minor, enabling the platform to apply age-appropriate safeguards.
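The signals described above can be combined into a simple classifier. The sketch below is a hypothetical illustration only, assuming made-up signal names, weights, and thresholds; OpenAI has not disclosed its actual model or features.

```python
# Hypothetical sketch of signal-based age estimation, loosely mirroring
# the signals named in the article (usage patterns, activity times,
# account age, stated age). All names and thresholds are illustrative
# assumptions, not OpenAI's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    stated_age: Optional[int]   # self-reported age, may be absent
    account_age_days: int       # how long the account has existed
    late_night_ratio: float     # share of activity between 22:00 and 06:00
    school_hours_ratio: float   # share of activity on weekdays 08:00-15:00

def likely_minor(s: AccountSignals) -> bool:
    """Combine weak behavioural signals into a conservative under-18 estimate."""
    if s.stated_age is not None and s.stated_age < 18:
        return True  # a stated minor age is taken at face value
    score = 0.0
    if s.account_age_days < 180:
        score += 0.2  # newer accounts skew younger in this toy model
    if s.late_night_ratio > 0.3:
        score += 0.2  # heavy late-night use is a weak youth signal here
    if s.school_hours_ratio < 0.1:
        score += 0.4  # little weekday-daytime activity suggests school hours
    return score >= 0.5
```

A score-based design like this errs on the side of flagging, which matches the article’s point that misclassified adults can later restore access through verification.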

When an account is flagged as potentially under 18, ChatGPT limits access to content involving graphic violence, sexual role play, dangerous viral challenges, self-harm, and unhealthy body image. The safeguards reflect research on teen development, including differences in risk perception and impulse control.

ChatGPT users who are incorrectly classified can restore full access by confirming their age through a selfie check using Persona, a secure identity verification service. Account holders can review safeguards and begin the verification process at any time via the settings menu.

Parental controls allow further customisation, including quiet hours, feature restrictions, and notifications for signs of distress. OpenAI says the system will continue to evolve, with EU-specific deployment planned in the coming weeks to meet regional regulatory requirements.

UK toy industry trends show promising market recovery amid social media challenges

The UK toy market is recovering, but the industry faces challenges from proposed social media regulations for children.

After Australia introduced a ban on social media for under-16s, UK toy sellers are monitoring the possibility of similar policies.

The UK toy market is rebounding, with sales value rising 6 percent last year, the first growth since 2020. Despite cost-of-living pressures, families continue to prioritise spending on toys, especially during holidays like Christmas.

A major driver of UK toy industry trends is the growth of the ‘kidult’ market. Older children and adults now account for around 30 percent of toy sales and spend more on items such as Lego sets, collectable figurines, and pop-culture merchandise. That shift shows that the sector is no longer reliant solely on younger children.

Social media shapes UK toy industry trends, as platforms promote toys from films, games, music, and sports, with franchises like Pokémon and Minecraft driving consumer interest.

Potential social media restrictions could force the industry to adapt, relying more on in-store promotions, traditional media, or franchise collaborations. The sector must balance child-protection policies with its growing dependence on digital platforms to maintain growth.

Amodei warns US AI chip exports to China risk national security

Anthropic chief executive Dario Amodei has criticised the US decision to allow the export of advanced AI chips to China, warning it could undermine national security. Speaking at the World Economic Forum 2026 in Davos, he questioned whether selling US-made hardware abroad strengthens American influence.

Amodei compared the policy to ‘selling nuclear weapons to North Korea’, arguing that exporting cutting-edge chips risks narrowing the technological gap between the United States and China. He said Washington currently holds a multi-year lead in advanced chipmaking and AI infrastructure.

Sending powerful hardware overseas could accelerate China’s progress faster than expected, Amodei told Bloomberg. He warned that AI development may soon concentrate unprecedented intelligence within data centres controlled by individual states.

Amodei said AI should not be treated like older technologies such as telecoms equipment. While spreading US technology abroad may have made sense in the past, he argued AI carries far greater strategic consequences.

The debate follows recent rule changes allowing some advanced chips, including Nvidia’s H200 and AMD’s MI325X, to be sold to China. The US administration later announced plans for a 25% tariff on AI chip exports, adding uncertainty for US semiconductor firms.

WhatsApp faces growing pressure in Russia

Authorities in Russia are increasing pressure on WhatsApp, one of the country’s most widely used messaging platforms. The service remains popular despite years of tightening digital censorship.

Officials argue that WhatsApp refuses to comply with national laws on data storage and cooperation with law enforcement. Meta has no legal presence in Russia and continues to reject requests for user information.

State-backed alternatives such as the national messenger Max are being promoted through institutional pressure. Critics warn that restricting WhatsApp targets private communication rather than crime or security threats.

Davos 2026 reveals competing visions for AI

AI has dominated debates at Davos 2026, rivalling long-standing concerns such as geopolitics and global trade while prompting deeper reflection on how the technology is reshaping work, governance, and society.

Political leaders, executives, and researchers agreed that AI development has moved beyond experimentation towards widespread implementation.

Microsoft chief executive Satya Nadella argued that AI should deliver tangible benefits for communities and economies, while warning that adoption will remain uneven due to disparities in infrastructure and investment.

Access to energy networks, telecommunications, and capital was identified as a decisive factor in determining which regions can fully deploy advanced systems.

Other voices at Davos 2026 struck a more cautious tone. AI researcher Yoshua Bengio warned against designing systems that appear too human-like, stressing that people may overestimate machine understanding.

Philosopher Yuval Noah Harari echoed those concerns, arguing that societies lack experience in managing human and AI coexistence and should prepare mechanisms to correct failures.

The debate also centred on labour and global competition.

Anthropic’s Dario Amodei highlighted geopolitical risks and predicted disruption to entry-level white-collar jobs. At the same time, Google DeepMind chief Demis Hassabis forecast new forms of employment alongside calls for shared international safety standards.

Together, the discussions underscored growing recognition that AI governance will shape economic and social outcomes for years ahead.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

UNESCO raises alarm over government use of internet shutdowns

Yesterday, UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods.

Recent data indicate that more than 300 shutdowns occurred in at least 54 countries over the past two years, with 2024 recorded as the most severe year since 2016.

According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life.

Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability.

Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news.

Instead of improving public order, shutdowns fracture information flows and contribute to the spread of unverified or harmful content, increasing confusion and mistrust among affected populations.

UNESCO continues to call on governments to adopt policies that strengthen connectivity and digital access rather than imposing barriers.

The organisation argues that maintaining open and reliable internet access during crises remains central to protecting democratic rights and safeguarding the integrity of information ecosystems.

UK study tests social media restrictions on children’s mental health

A major UK research project will examine how restricting social media use affects children’s mental health, sleep, and social lives, as governments debate tougher rules for under-16s.

The trial involves around 4,000 pupils from 30 secondary schools in Bradford and represents one of the first large-scale experimental studies of its kind.

Participants aged 12 to 15 will have their social media use either monitored or restricted through a research app that caps use of major platforms at one hour per day and imposes a night-time curfew.
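The restriction rules in the trial can be sketched as a simple access check. The curfew hours and limit below are assumptions for illustration; the article specifies only the one-hour daily cap and a night-time curfew, not the exact window.

```python
# Illustrative sketch of the trial's restriction rules: a 60-minute
# daily allowance for major platforms plus a night-time curfew.
# The 22:00-07:00 curfew window is an assumed value, not from the study.
from datetime import time

DAILY_LIMIT_MINUTES = 60                             # one hour per day
CURFEW_START, CURFEW_END = time(22, 0), time(7, 0)   # assumed curfew window

def access_allowed(minutes_used_today: int, now: time) -> bool:
    """True if a restricted participant may open a major platform now."""
    in_curfew = now >= CURFEW_START or now < CURFEW_END  # window spans midnight
    return not in_curfew and minutes_used_today < DAILY_LIMIT_MINUTES
```

Because the curfew window crosses midnight, the check uses an `or` over the two boundaries rather than a single range comparison.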

Messaging services such as WhatsApp will remain unrestricted, reflecting their role in family communication.

Researchers from the University of Cambridge and the Bradford Centre for Health Data Science will assess changes in anxiety, depression, sleep patterns, bullying, and time spent with friends and family.

Entire year groups within each school will experience the same conditions to capture social effects across peer networks rather than isolated individuals.

The findings, expected in summer 2027, arrive as UK lawmakers consider proposals for a nationwide ban on social media use by under-16s.

Although independent from government policy debates, the study aims to provide evidence to inform decisions in the UK and other countries weighing similar restrictions.

New EU cybersecurity package strengthens resilience and ENISA powers

The European Commission has unveiled a broad cybersecurity package that moves the EU beyond certification reform towards systemic resilience across critical digital infrastructure.

Building on plans to expand EU cybersecurity certification beyond products and services, the revised Cybersecurity Act introduces a risk-based framework for securing ICT supply chains, with particular focus on dependencies, foreign interference, and high-risk third-country suppliers.

A central shift concerns supply-chain security as a geopolitical issue. The proposal enables mandatory derisking of mobile telecommunications networks, reinforcing earlier efforts under the 5G security toolbox.

Certification reform continues through a redesigned European Cybersecurity Certification Framework, promising clearer governance, faster scheme development, and voluntary certification that can cover organisational cyber posture alongside technical compliance.

The package also tackles regulatory complexity. Targeted amendments to the NIS2 Directive aim to ease compliance for tens of thousands of companies by clarifying jurisdictional rules, introducing a new ‘small mid-cap’ category, and streamlining incident reporting through a single EU entry point.

Enhanced ransomware data collection and cross-border supervision are intended to reduce fragmentation while strengthening enforcement consistency.

ENISA’s role is further expanded from coordination towards operational support. The agency would issue early threat alerts, assist in ransomware recovery with national authorities and Europol, and develop EU-wide vulnerability management and skills attestation schemes.

Together, the measures signal a shift from fragmented safeguards towards a more integrated model of European cyber sovereignty.
