AI model maps how humans form emotions

Researchers in Japan have developed an AI framework designed to model how humans form emotional experiences by integrating bodily signals, sensory input and language. The work was led by scientists at Nara Institute of Science and Technology in collaboration with Osaka University.

The AI model draws on the theory of constructed emotion, which suggests emotions are built by the brain rather than being hard-wired responses. Physiological data, visual cues and spoken descriptions were analysed together to replicate how people experience feelings in real situations.

Using unlabeled data from volunteers exposed to emotion-evoking images and videos, the system identified emotional patterns without predefined categories. Results showed about 75 percent alignment with participants’ own emotional assessments, well above chance levels.

The Japanese researchers say the approach could support emotion-aware AI applications in healthcare, robotics and mental health support. Findings were published in IEEE Transactions on Affective Computing, with potential benefits for understanding emotions that are difficult to express verbally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos where their face appears altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.


Snapchat settles social media addiction lawsuit as landmark trial proceeds

Snapchat’s parent company has settled a social media addiction lawsuit in California just days before the first major trial examining platform harms was set to begin.

The agreement removes Snapchat from one of the three bellwether cases consolidating thousands of claims, while Meta, TikTok and YouTube remain defendants.

These lawsuits mark a legal shift away from debates over user content and towards scrutiny of platform design choices, including recommendation systems and engagement mechanics.

A US judge has already ruled that such features may be responsible for harm, opening the door to liability that Section 230 protections may not cover.

Legal observers compare the proceedings to historic litigation against tobacco and opioid companies, warning of substantial damages and regulatory consequences.

A ruling against the remaining platforms could force changes in how social media products are designed, particularly in relation to minors and mental health risks.


Why AI systems privilege Western perspectives: ‘The Silicon Gaze’

A new study from the University of Oxford argues that large language models reproduce a distinctly Western hierarchy when asked to evaluate countries, reinforcing long-standing global inequalities through automated judgment.

Analysing more than 20 million English-language responses from OpenAI’s GPT-4o mini model, researchers found consistent favouring of wealthy Western nations across subjective comparisons such as intelligence, happiness, creativity, and innovation.

Low-income countries, particularly across Africa, were systematically placed at the bottom of rankings, while Western Europe, the US, and parts of East Asia dominated positive assessments.

According to the study, generative models rely heavily on data availability and dominant narratives, leading to flattened representations that recycle familiar stereotypes instead of reflecting social complexity or cultural diversity.

The researchers describe the phenomenon as the ‘silicon gaze’, a worldview shaped by the priorities of platform owners, developers, and historically uneven training data.

Because large language models are trained on material produced over centuries of structural exclusion, bias emerges not as a malfunction but as an embedded feature of contemporary AI systems.

The findings intensify global debates around AI governance, accountability, and cultural representation, particularly as such systems increasingly influence healthcare, employment screening, education, and public decision-making.

While models are continuously updated, the study underlines the limits of technical mitigation without broader political, regulatory, and epistemic interventions.


New AI system helps improve cross-neurotype communication

Researchers at Tufts University have developed an AI-based learning tool designed to improve communication between autistic and neurotypical people. The project focuses on helping non-autistic users better understand autistic communication preferences.

The tool, called NeuroBridge, uses large language models to simulate everyday conversations and highlight how wording, tone and clarity can be interpreted differently. Users are guided towards more direct and unambiguous communication styles that reduce misunderstanding.

Unlike many interventions, NeuroBridge does not aim to change how autistic people communicate. The AI system instead trains neurotypical users to adapt their own communication, reflecting principles from the social model of disability.

The research, presented at the ACM SIGACCESS Conference on Computers and Accessibility, received a best student paper award. Early testing showed users gained clearer insight into how everyday language choices can affect cross-neurotype interactions.


WhatsApp faces growing pressure in Russia

Authorities in Russia are increasing pressure on WhatsApp, one of the country’s most widely used messaging platforms. The service remains popular despite years of tightening digital censorship.

Officials argue that WhatsApp refuses to comply with national laws on data storage and cooperation with law enforcement. Meta has no legal presence in Russia and continues to reject requests for user information.

State-backed alternatives such as the national messenger Max are being promoted through institutional pressure. Critics warn that restricting WhatsApp targets private communication rather than crime or security threats.


Davos 2026 reveals competing visions for AI

AI has dominated debates at Davos 2026, rivalling traditional concerns such as geopolitics and global trade while prompting deeper reflection on how the technology is reshaping work, governance, and society.

Political leaders, executives, and researchers agreed that AI development has moved beyond experimentation towards widespread implementation.

Microsoft chief executive Satya Nadella argued that AI should deliver tangible benefits for communities and economies, while warning that adoption will remain uneven due to disparities in infrastructure and investment.

Access to energy networks, telecommunications, and capital was identified as a decisive factor in determining which regions can fully deploy advanced systems.

Other voices at Davos 2026 struck a more cautious tone. AI researcher Yoshua Bengio warned against designing systems that appear too human-like, stressing that people may overestimate machine understanding.

Philosopher Yuval Noah Harari echoed those concerns, arguing that societies lack experience in managing human and AI coexistence and should prepare mechanisms to correct failures.

The debate also centred on labour and global competition.

Anthropic’s Dario Amodei highlighted geopolitical risks and predicted disruption to entry-level white-collar jobs. At the same time, Google DeepMind chief Demis Hassabis forecast new forms of employment alongside calls for shared international safety standards.

Together, the discussions underscored growing recognition that AI governance will shape economic and social outcomes for years ahead.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.


UNESCO raises alarm over government use of internet shutdowns

Yesterday, UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods.

Recent data indicate that more than 300 shutdowns have occurred in more than 54 countries during the past two years, with 2024 recorded as the most severe year since 2016.

According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life.

Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability.

Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news.

Instead of improving public order, shutdowns fracture information flows and contribute to the spread of unverified or harmful content, increasing confusion and mistrust among affected populations.

UNESCO continues to call on governments to adopt policies that strengthen connectivity and digital access rather than imposing barriers.

The organisation argues that maintaining open and reliable internet access during crises remains central to protecting democratic rights and safeguarding the integrity of information ecosystems.


UK study tests social media restrictions on children’s mental health

A major UK research project will examine how restricting social media use affects children’s mental health, sleep, and social lives, as governments debate tougher rules for under-16s.

The trial involves around 4,000 pupils from 30 secondary schools in Bradford and represents one of the first large-scale experimental studies of its kind.

Participants aged 12 to 15 will either have their social media use monitored only, or restricted through a research app that limits access to major platforms to one hour per day and imposes a night-time curfew.

Messaging services such as WhatsApp will remain unrestricted, reflecting their role in family communication.

Researchers from the University of Cambridge and the Bradford Centre for Health Data Science will assess changes in anxiety, depression, sleep patterns, bullying, and time spent with friends and family.

Entire year groups within each school will experience the same conditions to capture social effects across peer networks rather than isolated individuals.

The findings, expected in summer 2027, arrive as UK lawmakers consider proposals for a nationwide ban on social media use by under-16s.

Although independent from government policy debates, the study aims to provide evidence to inform decisions in the UK and other countries weighing similar restrictions.


Experts warn over unreliable AI medical guidance

AI tools used for health searches are facing growing scrutiny after reports found that some systems provide incorrect or potentially harmful medical advice. Wider public use of generative AI for health queries raises concerns over how such information is generated and verified.

An investigation by The Guardian found that Google’s AI Overviews feature has sometimes produced guidance contrary to established medical advice. Attention has also focused on data sources, as platforms like ChatGPT frequently draw on user-generated or openly edited material.

Medical experts warn that unverified or outdated information poses risks, especially where clinical guidance changes rapidly. The European Lung Foundation has stressed that health-related AI outputs should meet the same standards as professional medical sources.

Efforts to counter misinformation are now expanding. The European Respiratory Society and its partners are running campaigns to protect public trust in science and encourage people to verify health information with qualified professionals.
