EU watchdog urges limits on US data access

The European Union’s data protection watchdog has urged stronger safeguards as negotiations continue with the US over access to biometric databases. European Data Protection Supervisor Wojciech Wiewiórowski said limits must ensure Europeans’ data is used only for agreed purposes.

Talks between the EU and the US involve potential arrangements that would allow US authorities to query national biometric systems. Databases across the EU contain sensitive information, including fingerprints and facial recognition data.

Past transatlantic data-sharing agreements have faced legal challenges over insufficient safeguards. European regulators are closely monitoring the Data Privacy Framework amid ongoing concerns about oversight.

Officials also warned that emerging AI technologies could create new surveillance risks linked to US data access. European authorities said they must negotiate as a unified bloc when dealing with the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sovereign AI becomes a strategic question for governments

Governments across the world are increasingly treating AI as a strategic capability that shapes economic development, public services and national security. Momentum behind the idea of ‘sovereign AI’ is growing as countries reassess who controls the chips, cloud infrastructure, data and models powering modern technology.

Complete control over the entire AI stack remains unrealistic for most economies because of the enormous financial and technological costs involved. Global infrastructure continues to rely heavily on US technology firms, which still operate a large share of data centres and AI systems worldwide.

Policy makers are therefore exploring different approaches to sovereignty across the AI ecosystem rather than pursuing total independence. Strategies range from building domestic computing capacity to adapting global AI models for national languages, regulations and public services.

Several countries already illustrate different approaches. The EU is investing billions in AI infrastructure, Canada protects sensitive computing resources while using global models, and India prioritises applications that serve its multilingual population through public digital systems.

Nokia and Google Cloud bring AI agents to telecom network APIs

Telecom networks have traditionally been run through scripts, manual rules, and layered software tools. A new collaboration between Google Cloud and Nokia suggests a shift: software agents that respond to goals rather than detailed instructions.

The companies are integrating agent-based AI into Nokia’s Network as Code platform, which exposes telecom capabilities through application programming interfaces (APIs). The system allows developers to build applications that interact directly with network features such as connectivity quality, device location checks, or network slicing.

The Google-Nokia partnership introduces an AI layer that lets software agents determine which network functions to use to achieve a goal. This makes development more efficient: the agent can interpret instructions, automatically select the appropriate network capabilities, and reduce the need for developers to manually call APIs one step at a time.
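As a rough illustration of this goal-driven pattern, the sketch below shows an agent layer mapping a high-level intent to individual network API calls. Every capability and endpoint name here is invented for illustration; none of this reflects the actual Network as Code SDK.

```python
# Hypothetical sketch only: the capability names below are invented and do not
# come from Nokia's real Network as Code APIs.

# Simulated catalogue of network capabilities exposed as APIs.
NETWORK_APIS = {
    "quality_on_demand": lambda device, profile: f"QoS '{profile}' applied to {device}",
    "device_location": lambda device: f"location verified for {device}",
    "network_slice": lambda device, slice_name: f"{device} attached to slice '{slice_name}'",
}

def plan_calls(goal: str) -> list[tuple]:
    """Stand-in for the AI agent: map a high-level goal to API invocations."""
    goal = goal.lower()
    calls = []
    if "video" in goal or "stream" in goal:
        calls.append(("quality_on_demand", ("drone-42", "low-latency")))
    if "where" in goal or "location" in goal:
        calls.append(("device_location", ("drone-42",)))
    if "private" in goal or "isolate" in goal:
        calls.append(("network_slice", ("drone-42", "enterprise-1")))
    return calls

def run_agent(goal: str) -> list[str]:
    """Execute the planned API calls and collect their results."""
    return [NETWORK_APIS[name](*args) for name, args in plan_calls(goal)]

results = run_agent("stream video from the drone and verify its location")
```

The developer states an outcome once; the agent decides that the goal needs both a quality-of-service request and a location check, rather than the developer wiring up each API call by hand.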

Such automation is increasingly being explored as telecom infrastructure grows more complex with 5G, edge computing, and billions of connected devices. New features such as network slicing provide flexibility for industrial applications, private enterprise networks, and specialised connectivity, but also add operational complexity for operators.

Industry groups, including the GSMA and the 3rd Generation Partnership Project, are developing frameworks to support network APIs and automation. While agent-based AI could help networks operate more like programmable platforms, telecom operators must still address questions around reliability, security, and interoperability before large-scale deployment becomes feasible.

UK to launch new lab for breakthrough AI research

Researchers in the UK will gain a new AI lab designed to drive transformational breakthroughs in healthcare, transport, science, and everyday technology, supported by government funding.

The lab will be backed by up to £40 million in funding over six years, alongside substantial access to large-scale computing resources, and will invite UK researchers to pitch their most ambitious ideas.

The Fundamental AI Research Lab will focus on tackling core AI challenges, including hallucinations, unreliable memory, and unpredictable reasoning.

The lab will support high-risk, blue-sky research rather than simply scaling existing systems. Its goal is to unlock entirely new capabilities that could improve medical diagnoses, infrastructure resilience, scientific discovery, and public services.

UK officials highlighted the country’s strength in world-class universities, AI talent, and a thriving sector attracting over £100 billion in private investment. Experts, including Raia Hadsell of Google DeepMind, will peer-review funding applications, prioritising bold, high-reward proposals.

The initiative is part of the UKRI AI Strategy, which is backed by £1.6 billion and aims to strengthen research and ensure AI benefits society and the economy. UK AI projects like RADAR for rail faults and the IXI Brain Atlas for Alzheimer’s research demonstrate the approach’s potential impact.

AI adoption and jobs debated at India summit

Governments, companies and international organisations gathered in India in February for the AI Impact Summit to discuss the future of AI governance and adoption. Participants focused on economic impacts, labour market changes and sector-specific uses of AI.

Delegates also highlighted growing interest in international cooperation on AI governance. Ninety-one countries endorsed a declaration supporting shared tools, global collaboration and people-centred development of AI.

Language diversity became a central topic during the discussions. India’s government announced eight foundation AI models designed to support generative AI across the country’s 22 recognised languages.

The debate also reflected the growing influence of the Global South in AI policy discussions. Policymakers and experts emphasised the infrastructure gaps, language diversity and local economic realities shaping AI adoption.

Growing risks from AI meeting transcription tools

Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.

Privacy specialists note that organisations previously focused on rules controlling what employees upload into AI systems. Governance efforts are now shifting towards monitoring what AI tools record during daily work.

AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters. Companies face legal exposure when third-party platforms store recordings without strict controls.

Governance teams are being urged to introduce clear rules on meeting recordings and the retention of transcripts. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter oversight of data storage.

OpenAI tracks how AI shapes student performance over time

AI is increasingly shaping education, offering tools like ChatGPT that provide personalised learning support for students anywhere. Early studies suggest features such as study mode can enhance exam performance, yet understanding AI’s long-term effect on learning remains a challenge.

Traditional research often focuses on test scores, overlooking how students interact with AI over time in real-world settings.

OpenAI, in partnership with Estonia’s University of Tartu and Stanford’s SCALE Initiative, created the Learning Outcomes Measurement Suite to track longitudinal learning outcomes. The framework assesses interactions, engagement, cognitive growth, and alignment with pedagogical principles.

Large-scale trials involve tens of thousands of students, combining AI-driven insights with traditional classroom measures such as exams and observations.

Research shows that guided AI interactions can strengthen understanding, persistence, and problem-solving. Microeconomics students using study mode achieved around 15% higher exam scores than those relying on traditional online resources.

Beyond short-term results, the measurement suite evaluates deeper learning effects, including motivation, metacognition, and productive engagement, helping educators and developers optimise AI tools for meaningful outcomes.

The suite will be validated through ongoing studies and eventually made available to schools, universities, and education systems worldwide. OpenAI aims to share findings broadly to ensure AI contributes effectively to student learning and cognitive development.

New UNESCO and CENIA agreement targets AI literacy and ethical standards

The UNESCO Regional Office in Santiago and the National Centre for Artificial Intelligence (CENIA) signed a cooperation agreement at the end of February 2026 to promote ethical AI in education across Chile and Latin America.

The framework supports joint initiatives aimed at strengthening digital skills, improving AI literacy and advancing people-centred development models for AI.

Projects under the partnership will focus on training programmes and educational resources designed for a wide range of audiences, including the general public, educators, technical specialists and policymakers.

Collaborative efforts will also encourage dialogue between institutions, governments and industry to support responsible innovation and reinforce regional ecosystems linked to emerging technologies.

An early outcome includes Latam-GPT, the first open large language model for Latin America and the Caribbean. The system will aid education ministries and the UNESCO Regional Observatory on AI, helping guide responsible adoption and monitor developments.

‘Artificial Intelligence represents a historic opportunity to transform our education and productive systems, but its development must be guided by clear ethical principles and a people-centred vision. This partnership with CENIA will enable us to support countries in building capacities and governance frameworks that ensure AI effectively contributes to the common good,’ stated Esther Kuisch Laroche, Director of the UNESCO Regional Office in Santiago.

‘At CENIA, we have been working consistently on applied research and capacity-building, advancing knowledge generation, technology transfer and scientific evidence.

‘This experience allows us to contribute from both a technical and training perspective to ensure that the development of Artificial Intelligence in the region is grounded in robust and ethical standards, thereby impacting education and productive development. We are convinced that technological progress must be accompanied by training, responsible frameworks and multi-sector collaboration.

‘For this reason, this agreement with UNESCO represents a strategic step towards strengthening capacity development and the ethical, people-centred adoption of Artificial Intelligence in Latin America and the Caribbean.’

Qualcomm pushes Europe to take the lead in the 6G revolution

Europe is being urged to take a leading role in developing sixth-generation wireless technology as global competition intensifies over the future of connectivity and AI.

Speaking at the Mobile World Congress in Barcelona, Wassim Chourbaji of Qualcomm argued that 6G will represent a technological revolution rather than a gradual improvement over existing networks.

The company expects early pre-commercial deployments to begin around 2028, with broader commercialisation targeted for 2029.

Next-generation wireless networks are expected to support physical AI systems capable of interacting with the real world, including robotics, smart glasses, connected vehicles, and advanced sensing technologies.

High-capacity uploads and faster processing between devices and data centres will allow AI systems to analyse video streams and real-time data more efficiently.

Qualcomm has also launched a coalition aimed at accelerating 6G development with partners including Nokia, Ericsson, Amazon, Google and Microsoft.

Advocates argue that combining European industrial strengths with advanced wireless and AI technologies could allow the continent to secure a leading position in the next phase of global digital infrastructure.

China expands oversight of youth online safety

China has introduced new measures to regulate online information that could affect the physical and mental health of minors. Authorities said the rules take effect on 1 March and aim to improve protection for young internet users.

The regulators identified four categories of online information that may harm minors. The authorities have also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework requires internet platforms and content creators to prevent and respond to harmful material. Regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children. Officials also stressed greater responsibility for platforms that manage digital content used by minors.
