UN General Assembly appoints experts to the Independent International Scientific Panel on AI

The UN General Assembly has appointed 40 experts to serve on a newly created Independent International Scientific Panel on Artificial Intelligence, marking the launch of the first global scientific body dedicated to assessing the technology’s impact. The panel, established by a 2025 Assembly resolution, will produce annual evidence-based reports examining AI’s opportunities, risks and broader societal effects.

The members, selected from more than 2,600 candidates, will serve in their personal capacity for a three-year term running from February 2026 to February 2029. According to UN Secretary-General António Guterres, ‘we now have a multidisciplinary group of leading AI experts from across the globe, geographically diverse and gender-balanced, who will provide independent and impartial assessments of AI’s opportunities, risks and impacts, including to the new Global Dialogue on AI Governance’.

The appointments were approved by a recorded vote of 117 in favour to 2 against (Paraguay and the United States), with 2 abstentions (Tunisia and Ukraine). The United States requested a recorded vote, strongly objecting to the panel’s creation and arguing that it represents an ‘overreach of the UN’s mandate and competence’.

Other countries pushed back against that view. Uruguay, speaking on behalf of the Group of 77 and China, stressed the call for ‘comprehensive international frameworks that guarantee the fair inclusion of developing countries in shaping the future of AI governance’.

Several delegations highlighted the technology’s potential to improve public services, expand access to education and healthcare, and accelerate progress towards the Sustainable Development Goals.

Supporters of the initiative argued that AI’s global and interconnected nature requires coordinated governance. Spain, co-facilitator of the resolution that created the panel, stressed that AI ‘generates an interdependence that demands governance frameworks that no State can build by itself’ and offered to host the panel’s first in-person meeting.

The European Union and others underlined the importance of scientific excellence, independence and integrity to ensure the panel’s credibility.

The United Kingdom emphasised that the Panel’s independence, scientific rigour, integrity and ability to reflect diverse perspectives are ‘essential ingredients for the Panel’s legitimacy and for its reports to be widely utilised’. China urged the Panel to treat capacity-building as a ‘core issue’ in its future work, while Iran insisted that the ‘voice of developing countries must be heard’ and that such states be empowered to benefit from impartial scientific guidance.

Ukraine, while supporting the initiative, expressed concerns about a potential conflict of interest involving an expert nominated by Russia.

In parallel with the AI appointments, the Assembly named two new members to the Joint Inspection Unit, the UN’s independent oversight body responsible for evaluations and investigations across the system. It also noted that Ghana, Saint Vincent and the Grenadines, and Togo had reduced their arrears below the threshold set by Article 19 of the UN Charter, under which a country can lose its vote in the Assembly when its arrears equal or exceed its contributions for the preceding two full years.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU decision regulates researcher access to data under the DSA

A document released by the Republican-led House Judiciary Committee has revived claims that EU digital rules amount to censorship. The document concerns a €120 million fine against X under the Digital Services Act (DSA) and was framed as a ‘secret censorship ruling’, even though the DSA requires such decisions to be published.

The document provides insight into how the European Commission interprets Article 40 of the DSA, which governs researcher access to platform data. The rule requires very large online platforms to grant qualified researchers access to publicly accessible data needed to study systemic risks in the EU.

Investigators found that X failed to comply with Article 40.12, in force since 2023 and covering public data access. The Commission said X applied restrictive eligibility rules, delayed reviews, imposed tight quotas, and blocked independent researcher access, including scraping.

The decision confirms that platforms cannot use pricing to restrict research access, deny access based on a researcher’s affiliation or location, or ban scraping by contract. The European Commission also rejected X’s narrow reading of ‘systemic risk’, allowing broader research contexts.

The ruling also highlights weak internal processes and limited staffing for handling access requests. X must submit an action plan by mid-April 2026, with the decision expected to shape future enforcement of researcher access across major platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI governance becomes urgent for mortgage lenders

Mortgage lenders face growing pressure to govern AI as regulatory uncertainty persists across the United States. States and federal authorities continue to contest oversight, but accountability for how AI is used in underwriting, servicing, marketing, and fraud detection already rests with lenders.

Effective AI risk management requires more than policy statements. Mortgage lenders need operational governance that inventories AI tools, documents training data, and assigns accountability for outcomes, including bias monitoring and escalation when AI affects borrower eligibility, pricing, or disclosures.

Vendor risk has become a central exposure. Many technology contracts predate AI scrutiny and lack provisions on audit rights, explainability, and data controls, leaving lenders responsible when third-party models fail regulatory tests or transparency expectations.

Leading US mortgage lenders are using staged deployments, starting with lower-risk use cases such as document processing and fraud detection, while maintaining human oversight for high-impact decisions. Incremental rollouts generate performance and fairness evidence that regulators increasingly expect.

Regulatory pressure is rising as states advance AI rules and federal authorities signal the development of national standards. Even as boundaries are debated, lenders remain accountable, making early governance and disciplined scaling essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI anxiety strains the modern workforce

Mounting anxiety is reshaping the modern workplace as AI alters job expectations and career paths. Pew Research indicates more than a third of employees believe AI could harm their prospects, fuelling tension across teams.

Younger workers feel particular strain, with 92% of Gen Z saying it is vital to speak openly about mental health at work. Communicators and managers must now deliver reassurance while coping with their own pressure.

Leadership expert Anna Liotta points to generational intelligence as a practical way to reduce friction and improve trust. She highlights how tailored communication can reduce misunderstanding and conflict.

Her latest research connects neuroscience, including the role of the vagus nerve, with practical workplace strategies. By combining emotional regulation with thoughtful messaging, she suggests that organisations can calm anxiety and build more resilient teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Northumbria graduate uses AI to revolutionise cardiovascular diagnosis

Jack Parker, a Northumbria University alumnus and co-founder and CEO of AIATELLA, is leading a pioneering effort to speed up cardiovascular disease diagnosis using artificial intelligence, cutting diagnostic times from over 30 minutes to under three minutes, a potentially lifesaving improvement in clinical settings.

His motivation stems from witnessing delays in diagnosis that affected his own father, as well as broader health disparities in the North East, where cardiovascular issues often go undetected until later stages.

Parker’s company, now UK-Finnish, is undergoing clinical evaluation with three NHS trusts in the North East (Northumbria, Newcastle, Sunderland), comparing the AI tool’s performance against cardiologists and radiologists.

The technology has already helped identify individuals needing urgent intervention while working with community organisations in the UK and Finland.

Parker credits Northumbria University’s practical and inclusive education pathway, including a foundation degree and biomedical science degree, with providing the grounding to translate academic knowledge into real-world impact.

Support from the university’s Incubator Hub also helped AIATELLA navigate early business development and access funding networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Institute of AI Education marks significant step for responsible AI in schools

The Institute of AI Education was officially launched at York St John University, bringing together education leaders, teachers, and researchers to explore practical and responsible approaches to AI in schools.

Discussions at the event focused on critical challenges, including fostering AI literacy, promoting fairness and inclusion, and empowering teachers and students to have agency over how AI tools are used.

The institute will serve as a collaborative hub, offering research-based guidance, professional development, and practical support to schools. A central message emphasised that AI should enhance the work of educators and learners, rather than replace them.

The launch featured interactive sessions with contributions from both education and technology leaders, as well as practitioners sharing real-world experiences of integrating AI into classrooms.

Strong attendance and active participation underscored the growing interest in AI across the education sector, with representatives from the Department for Education highlighting notable progress in early years and primary school settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia signals no immediate Google ban as Android dependence remains critical

Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.

Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.

He argued that a block on Google would disrupt essential digital services rather than encourage the company to resolve ongoing legal disputes over unpaid fines.

Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.

The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.

Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

A clear stance from the Parliament is still pending, and a path toward agreement is far from assured.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers tackle LLM regression with on-policy training

Researchers at MIT, the Improbable AI Lab and ETH Zurich have proposed a fine-tuning method to address catastrophic forgetting in large language models, an issue that often causes models to lose earlier skills when trained on new tasks.

The technique, called self-distillation fine-tuning, lets a model act as both teacher and student during training. In experiments run in Cambridge and Zurich, the approach preserved prior capabilities while improving accuracy on new tasks.
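The paper’s exact objective is not reproduced here, but the core idea of a model serving as its own teacher can be sketched as a blended loss: cross-entropy on the new task plus a distillation term that penalises drift from a frozen pre-update copy of the model. The sketch below is a toy, pure-Python illustration; the function names and the `alpha` weighting are our own assumptions, not the authors’ implementation.

```python
import math

def softmax(logits):
    # Convert raw logits to a probability distribution
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q drifts from the teacher's p
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def self_distillation_loss(student_logits, teacher_logits, target_id, alpha=0.5):
    """Blend new-task cross-entropy with a distillation term that keeps the
    student close to a frozen copy of itself (the 'teacher')."""
    student_probs = softmax(student_logits)
    ce = -math.log(student_probs[target_id])                    # learn the new task
    kl = kl_divergence(softmax(teacher_logits), student_probs)  # preserve old behaviour
    return (1 - alpha) * ce + alpha * kl
```

With `alpha=1.0` and identical student and teacher logits, the loss is zero, since there is no drift to penalise; in practice the weighting trades new-task accuracy against retention of earlier skills.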

Enterprise teams often manage separate model variants to prevent regression, increasing operational complexity. The researchers argue that their method could reduce fragmentation and support continual learning within a single production model.

However, the method requires around 2.5 times more computing power than standard supervised fine-tuning. Analysts note that real-world deployment will depend on governance controls, training costs and suitability for regulated industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latam-GPT signals new AI ambition in Latin America

Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI.

The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the US or Europe.

President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development.

Latam-GPT is designed not as a consumer chatbot but as a foundational model on which future applications can be built. More than eight terabytes of data have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

The first version has been trained on Amazon Web Services, while future work will run on a new supercomputer at the University of Tarapacá, supported by millions of dollars in regional funding.

The model reflects growing interest among countries outside the major AI hubs of the US, China and Europe in developing their own technology instead of relying on foreign systems.

Researchers in Chile argue that global models often include Latin American data in tiny proportions, which can limit accurate representation. Despite questions about resources and scale, supporters believe Latam-GPT can deliver practical benefits tailored to local needs.

Early adoption is already underway, with the Chilean firm Digevo preparing customer service tools based on the model.

These systems will operate in regional languages and recognise local expressions, offering a more natural experience than products trained on data from other parts of the world.

Developers say the approach could reduce bias and promote more inclusive AI across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!