New AI system helps improve cross-neurotype communication

Researchers at Tufts University have developed an AI-based learning tool designed to improve communication between autistic and neurotypical people. The project focuses on helping non-autistic users better understand autistic communication preferences.

The tool, called NeuroBridge, uses large language models to simulate everyday conversations and highlight how wording, tone and clarity can be interpreted differently. Users are guided towards more direct and unambiguous communication styles that reduce misunderstanding.

Unlike many interventions, NeuroBridge does not aim to change how autistic people communicate. The AI system instead trains neurotypical users to adapt their own communication, reflecting principles from the social model of disability.

The research, presented at the ACM SIGACCESS Conference on Computers and Accessibility, received a best student paper award. Early testing showed users gained clearer insight into how everyday language choices can affect cross-neurotype interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New consortium applies AI to early drug research

A new AI-driven drug discovery initiative with a budget exceeding €60 million has launched, bringing together academic and industry partners across Europe and North America. University College London is acting as the lead academic partner in the UK.

The five-year LIGAND-AI programme is funded through the Innovative Health Initiative and aims to speed up early drug discovery. Researchers will generate large open datasets showing how molecules bind to human proteins, supporting the training of advanced AI models.

The consortium, led by Pfizer and the Structural Genomics Consortium, includes 18 partners in nine countries. Work will focus on proteins linked to diseases such as cancer, neurological conditions and rare disorders.

UK-based UCL scientists will help build global research networks and promote open sharing of protein samples and machine learning models. Organisers say the project supports open science and the long-term goal of mapping chemical modulators for every human protein.


Davos 2026 reveals competing visions for AI

AI has dominated debates at Davos 2026, rivalling traditional concerns such as geopolitics and global trade while prompting deeper reflection on how the technology is reshaping work, governance, and society.

Political leaders, executives, and researchers agreed that AI development has moved beyond experimentation towards widespread implementation.

Microsoft chief executive Satya Nadella argued that AI should deliver tangible benefits for communities and economies, while warning that adoption will remain uneven due to disparities in infrastructure and investment.

Access to energy networks, telecommunications, and capital was identified as a decisive factor in determining which regions can fully deploy advanced systems.

Other voices at Davos 2026 struck a more cautious tone. AI researcher Yoshua Bengio warned against designing systems that appear too human-like, stressing that people may overestimate machine understanding.

Philosopher Yuval Noah Harari echoed those concerns, arguing that societies lack experience in managing human and AI coexistence and should prepare mechanisms to correct failures.

The debate also centred on labour and global competition.

Anthropic’s Dario Amodei highlighted geopolitical risks and predicted disruption to entry-level white-collar jobs. At the same time, Google DeepMind chief Demis Hassabis forecast new forms of employment alongside calls for shared international safety standards.

Together, the discussions underscored growing recognition that AI governance will shape economic and social outcomes for years ahead.

Diplo is reporting live on all sessions from the World Economic Forum 2026 in Davos.


UNESCO raises alarm over government use of internet shutdowns

Yesterday, UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods.

Recent data indicate that more than 300 shutdowns have occurred across more than 54 countries during the past two years, with 2024 recorded as the most severe year since 2016.

According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life.

Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability.

Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news.

Instead of improving public order, shutdowns fracture information flows and contribute to the spread of unverified or harmful content, increasing confusion and mistrust among affected populations.

UNESCO continues to call on governments to adopt policies that strengthen connectivity and digital access rather than imposing barriers.

The organisation argues that maintaining open and reliable internet access during crises remains central to protecting democratic rights and safeguarding the integrity of information ecosystems.


UK study tests social media restrictions on children’s mental health

A major UK research project will examine how restricting social media use affects children’s mental health, sleep, and social lives, as governments debate tougher rules for under-16s.

The trial involves around 4,000 pupils from 30 secondary schools in Bradford and represents one of the first large-scale experimental studies of its kind.

Participants aged 12 to 15 will either have their social media use monitored or restricted through a research app limiting access to major platforms to one hour per day and imposing a night-time curfew.

Messaging services such as WhatsApp will remain unrestricted, reflecting their role in family communication.

Researchers from the University of Cambridge and the Bradford Centre for Health Data Science will assess changes in anxiety, depression, sleep patterns, bullying, and time spent with friends and family.

Entire year groups within each school will experience the same conditions to capture social effects across peer networks rather than isolated individuals.

The findings, expected in summer 2027, arrive as UK lawmakers consider proposals for a nationwide ban on social media use by under-16s.

Although independent from government policy debates, the study aims to provide evidence to inform decisions in the UK and other countries weighing similar restrictions.


New EU cybersecurity package strengthens resilience and ENISA powers

The European Commission has unveiled a broad cybersecurity package that moves the EU beyond certification reform towards systemic resilience across critical digital infrastructure.

Building on plans to expand EU cybersecurity certification beyond products and services, the revised Cybersecurity Act introduces a risk-based framework for securing ICT supply chains, with particular focus on dependencies, foreign interference, and high-risk third-country suppliers.

A central shift concerns supply-chain security as a geopolitical issue. The proposal enables mandatory derisking of mobile telecommunications networks, reinforcing earlier efforts under the 5G security toolbox.

Certification reform continues through a redesigned European Cybersecurity Certification Framework, promising clearer governance, faster scheme development, and voluntary certification that can cover organisational cyber posture alongside technical compliance.

The package also tackles regulatory complexity. Targeted amendments to the NIS2 Directive aim to ease compliance for tens of thousands of companies by clarifying jurisdictional rules, introducing a new ‘small mid-cap’ category, and streamlining incident reporting through a single EU entry point.

Enhanced ransomware data collection and cross-border supervision are intended to reduce fragmentation while strengthening enforcement consistency.

ENISA’s role is further expanded from coordination towards operational support. The agency would issue early threat alerts, assist in ransomware recovery with national authorities and Europol, and develop EU-wide vulnerability management and skills attestation schemes.

Together, the measures signal a shift from fragmented safeguards towards a more integrated model of European cyber sovereignty.


Cisco and OpenAI push AI-native software development

Cisco has deepened its collaboration with OpenAI to embed agentic AI into enterprise software engineering. The approach reflects a broader shift towards treating AI as operational infrastructure rather than an experimental tool.

Integrating Codex into production workflows exposed it to complex, multi-repository, and security-critical environments. Codex operated across interconnected codebases, running autonomous build and testing loops within existing compliance and governance frameworks.

Operational use delivered measurable results. Engineering teams reported faster builds, higher defect-resolution throughput, and quicker framework migrations, cutting work from weeks to days.

Real-world deployment shaped Codex’s enterprise roadmap, especially around compliance, long-running tasks, and pipeline integration. The collaboration will continue as both organisations pursue AI-native engineering at scale, including within Cisco’s Splunk teams.


UK names industry leaders to steer safe AI adoption in finance

The UK government has appointed two senior industry figures as AI Champions to support safe and effective adoption of AI across financial services, as part of a broader push to boost growth and productivity.

Harriet Rees of Starling Bank and Dr Rohit Dhawan of Lloyds Banking Group will work with firms and regulators to help turn rapid AI uptake into practical delivery. Both will report directly to Lucy Rigby, the Economic Secretary to the Treasury.

AI is already widely deployed across the sector, with around three-quarters of UK financial firms using the technology. Analysis indicates AI could add tens of billions of pounds to financial services by 2030, while improving customer services and reducing costs.

The Champions will focus on accelerating trusted adoption, speeding up innovation, and removing barriers to scale. Their remit includes protecting consumers, supporting financial stability, and strengthening the UK’s role as a global economic and technology hub.


EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may breach the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and to reinforce safeguards across digital platforms.


AI travel influencers begin reshaping digital storytelling

India’s first AI-generated travel influencer, Radhika Subramaniam, has begun attracting sustained audience engagement since her launch in mid-2025, signalling growing acceptance of virtual creators in travel content.

Developed by Collective Artists Network, a talent management company based in India, Radhika initially drew attention through curiosity, but followers increasingly interact with her posts much as they would with a human influencer's, according to the company's leadership.

Industry observers say AI travel influencers offer brands greater efficiency, lower production costs, and more control over storytelling, as virtual creators can be deployed without logistical constraints.

Some creators remain sceptical about whether artificial personas can replicate the emotional authenticity and sensory experiences that shape real-world travel storytelling.

Marketing specialists expect AI and human influencers to coexist, with virtual avatars serving as consistent brand voices while human creators retain value through spontaneity, trust, and personal perspective.
