UNESCO adopts first global ethical framework for neurotechnology

UNESCO has approved the world’s first global framework on the ethics of neurotechnology, setting new standards to ensure that advances in brain science respect human rights and dignity. The Recommendation, adopted by member states and entering into force on 12 November, establishes safeguards so that neurotechnological innovation benefits those in need without compromising mental privacy.

Launched in 2019 under Director-General Audrey Azoulay, the initiative builds on UNESCO’s earlier work on AI ethics. Azoulay described neurotechnology as a ‘new frontier of human progress’ that demands strict ethical boundaries to protect the inviolability of the human mind. The framework reflects UNESCO’s belief that technology should serve humanity responsibly and inclusively.

Neurotechnology, which enables direct interaction with the nervous system, is rapidly expanding, with investment in the sector rising by 700% between 2014 and 2021. While medical uses, such as deep brain stimulation and brain–computer interfaces, offer hope for people with Parkinson’s disease or disabilities, consumer devices that read neural data pose serious privacy concerns. Many users unknowingly share sensitive information about their emotions or mental states through everyday gadgets.

The Recommendation calls on governments to regulate these technologies, ensure they remain accessible, and protect vulnerable groups, especially children and workers. It urges bans on non-therapeutic use in young people and warns against monitoring employees’ mental activity or productivity without explicit consent.

UNESCO also stresses the need for transparency and better regulation of products that may alter behaviour or foster addiction.

Developed after consultations with over 8,000 contributors from academia, industry, and civil society, the framework was drafted by an international group of experts led by scientists Hervé Chneiweiss and Nita Farahany. UNESCO will now help countries translate the principles into national laws, as it has done with its 2021 AI ethics framework.

The Recommendation’s adoption, finalised at the General Conference in Samarkand, marks a new milestone in the global governance of emerging technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI outlines roadmap for AI safety, accountability and global cooperation

OpenAI has published new recommendations for managing rapid advances in AI, stressing the need for shared safety standards, public accountability, and resilience frameworks.

The company warned that while AI systems are increasingly capable of solving complex problems and accelerating discovery, they also pose significant risks that must be addressed collaboratively.

According to OpenAI, the next few years could bring systems capable of discoveries once thought centuries away.

The firm expects AI to transform health, materials science, drug development and education, while acknowledging that economic transitions may be disruptive and could require a rethinking of social contracts.

To ensure safe development, OpenAI proposed shared safety principles among frontier labs, new public oversight mechanisms proportional to AI capabilities, and the creation of a resilience ecosystem similar to cybersecurity.

It also called for regular reporting on AI’s societal impact to guide evidence-based policymaking.

OpenAI reiterated that the goal should be to empower individuals by making advanced AI broadly accessible, within limits defined by society, and to treat access to AI as a foundational public utility in the years ahead.

India’s AI roadmap could add $500 billion to economy by 2035

According to the Business Software Alliance, India could add over $500 billion to its economy by 2035 through the widespread adoption of AI.

At the BSA AI Pre-Summit Forum in Delhi, the group unveiled its ‘Enterprise AI Adoption Agenda for India’, which aligns with the goals of the India–AI Impact Summit 2026 and the government’s vision for a digitally advanced economy by 2047.

The agenda outlines a comprehensive policy framework across three main areas: talent and workforce, infrastructure and data, and governance.

It recommends expanding AI training through national academies, fostering industry–government partnerships, and establishing innovation hubs with global companies to strengthen talent pipelines.

BSA also urged greater government use of AI tools, reforms to data laws, and the adoption of open industry standards for content authentication. It called for coordinated governance measures to ensure responsible AI use, particularly under the Digital Personal Data Protection Act.

BSA has introduced similar policy roadmaps in other major markets, including the US, Japan, and ASEAN countries, as part of its global effort to promote trusted and inclusive AI adoption.

LIBE backs new Europol Regulation despite data protection and discrimination warnings

The European Parliament’s civil liberties committee (LIBE) voted to endorse a new Europol Regulation, part of the ‘Facilitators Package’, by 59–10 with four abstentions.

Rights groups and the European Data Protection Supervisor had urged MEPs to reject the proposal, arguing the law fuels discrimination and grants Europol and Frontex unprecedented surveillance capabilities with insufficient oversight.

If approved in plenary later this month, the reform would grant Europol broader powers to collect, process and share data, including biometrics such as facial recognition, and enable exchanges with non-EU states.

Campaigners note the proposal advanced without an impact assessment, contrary to the Commission’s Better Regulation guidance.

Civil society groups warn that the changes risk normalising surveillance in migration management. Access Now’s Caterina Rodelli said MEPs had ‘greenlighted the European Commission’s long-term plan to turn Europe into a digital police state’, while Equinox’s Sarah Chander called the vote proof the EU has ‘abandoned’ humane, evidence-based policy.

EDRi’s Chloé Berthélémy said the reform legitimises ‘unaccountable and opaque data practices’, creating a ‘data black hole’ that undermines rights and the rule of law. More than 120 organisations called on MEPs to reject the text, arguing it is ‘unlawful, unsafe, and unsubstantiated’.

New AI tool helps identify suicide-risk individuals

Researchers at Touro University have found that an AI tool can identify suicide risk that standard diagnostic methods often miss. The study, published in the Journal of Personality Assessment, shows that LLMs can analyse speech to detect patterns linked to perceived suicide risk.

Current assessment methods, such as multiple-choice questionnaires, often fail to capture the nuances of an individual’s experience.

The study used Claude 3.5 Sonnet to analyse 164 participants’ audio responses, examining future self-continuity, a key factor linked to suicide risk. The AI detected subtle cues in speech, including coherence, emotional tone, and detail, which traditional tools overlooked.

While the research focused on perceived risk rather than actual suicide attempts, identifying individuals who feel at risk is crucial for timely intervention. LLM predictions could be used in hospitals, hotlines, or therapy sessions as a new tool for mental health professionals.

Beyond suicide risk, large language models may also help detect other mental health conditions such as depression and anxiety, providing faster, more nuanced insights into patients’ mental well-being and supporting early intervention strategies.

Naver expands physical AI ambitions with $690 million GPU investment

South Korean technology leader Naver is deepening its AI ambitions with a $690 million investment in graphics processing units from 2025.

The move aims to strengthen its AI infrastructure and drive the development of physical AI, a field merging digital intelligence with robotics, logistics, and autonomous systems.

Beyond its internal use, Naver plans to monetise its expanded computing power by offering GPU-as-a-Service to clients across sectors, creating new revenue opportunities aligned with its AI ecosystem.

Chief Executive Choi Soo-yeon described physical AI as the firm’s next growth pillar, combining robotics, data, and generative AI to reshape both digital and industrial environments. The company already holds a significant share of the global robotics operating system market, underlining its technological maturity.

The investment marks a strategic shift from software-based AI to infrastructure-driven intelligence, positioning Naver as a leader in integrating AI with real-world applications.

As global competition intensifies, Naver’s model of coupling high-performance computing with robotics innovation signals the emergence of South Korea as a centre for applied AI technology.

UK mobile networks and the Government launch a fierce crackdown on scam calls

Britain’s largest mobile networks have joined the Government to tackle scam calls and texts. Through the second Telecommunications Fraud Charter, they aim to make the UK harder for fraudsters to target.

To achieve this, networks will upgrade systems within a year to prevent foreign call centres from spoofing UK numbers. Additionally, advanced call tracing and AI technology will detect and block suspicious calls and texts before they reach users.

Moreover, clear commitments are in place to support fraud victims, cutting the time they wait for help from their network to two weeks. Consequently, victims will receive prompt, specialist assistance to recover quickly and confidently.

Furthermore, improved data sharing with law enforcement will enable them to track down scammers and dismantle their operations. By collaborating across sectors, organised criminal networks can be disrupted and prevented from targeting the public.

Fraud is the UK’s most reported crime, causing financial losses and emotional distress. Scam calls also erode public trust in essential services and cost the telecom industry millions of dollars annually.

Therefore, the Telecoms Charter sets measurable goals, ongoing monitoring, and best practice guidance for networks. Through AI tools, staff training, and public messaging, networks aim to stay ahead of evolving scam tactics.

Finally, international collaboration, such as UK-US actions against Southeast Asian fraud centres, complements these efforts.

Overall, this initiative forms part of a wider Fraud Strategy and Government plan to safeguard citizens.

AI brain atlas reveals unprecedented detail in MRI scans

Researchers at University College London have developed NextBrain, an AI-assisted brain atlas that visualises the human brain in unprecedented detail. The tool links microscopic tissue imaging with MRI, enabling rapid and precise analysis of living brain scans.

NextBrain maps 333 brain regions using high-resolution post-mortem tissue data, which is combined into a digital 3D model with the aid of AI. The atlas was created over the course of six years by dissecting, photographing, and digitally reconstructing five human brains.

AI played a crucial role in aligning microscope images with MRI scans, ensuring accuracy while significantly reducing the time required for manual labelling. The atlas detects subtle changes in brain sub-regions, such as the hippocampus, crucial for studying diseases like Alzheimer’s.

Testing on thousands of MRI scans demonstrated that NextBrain reliably identifies brain regions across different scanners and imaging conditions, enabling detailed analysis of ageing patterns and early signs of neurodegeneration.

All data, tools, and annotations are openly available through the FreeSurfer neuroimaging platform. The public release of NextBrain aims to accelerate research, support diagnosis, and improve treatment for neurological conditions worldwide.

Tinder tests AI feature that analyses photos for better matches

Tinder is introducing an AI feature called Chemistry, designed to better understand users through interactive questions and optional access to their Camera Roll. The system analyses personal photos and responses to infer hobbies and preferences, offering more compatible match suggestions.

The feature is being tested in New Zealand and Australia ahead of a broader rollout as part of Tinder’s 2026 product revamp. Match Group CEO Spencer Rascoff said Chemistry will become a central pillar in the app’s evolving AI-driven experience.

Privacy concerns have surfaced as the feature requests permission to scan private photos, similar to Meta’s recent approach to AI-based photo analysis. Critics argue that such expanded access offers limited benefits to users compared to potential privacy risks.

Match Group expects a short-term financial impact, projecting a $14 million revenue decline due to Tinder’s testing phase. The company continues to face user losses despite integrating AI tools for safer messaging, better profile curation and more interactive dating experiences.

EU conference highlights the need for collaboration in digital safety and growth

European politicians and experts gathered in Billund for the conference ‘Towards a Safer and More Innovative Digital Europe’, hosted by the Danish Parliament.

The discussions centred on how to protect citizens online while strengthening Europe’s technological competitiveness.

Lisbeth Bech-Nielsen, Chair of the Danish Parliament’s Digitalisation and IT Committee, stated that the event demonstrated the need for the EU to act more swiftly to harness its collective digital potential.

She emphasised that only through cooperation and shared responsibility can the EU match the pace of global digital transformation and fully benefit from its combined strengths.

The first theme addressed online safety and responsibility, focusing on the enforcement of the Digital Services Act, child protection, and the accountability of e-commerce platforms importing products from outside the EU.

Participants highlighted the importance of listening to young people and improving cross-border collaboration between regulators and industry.

The second theme examined Europe’s competitiveness in emerging technologies such as AI and quantum computing. Speakers called for more substantial investment, harmonised digital skills strategies, and better support for businesses seeking to expand within the single market.

The Billund conference emphasised that Europe’s digital future depends on striking a balance between safety, innovation, and competitiveness, achievable only through joint action and long-term commitment.
