New UNESCO and CENIA agreement targets AI literacy and ethical standards

The UNESCO Regional Office in Santiago and the National Centre for Artificial Intelligence (CENIA) signed a cooperation agreement at the end of February 2026 to promote ethical AI in education across Chile and Latin America.

The framework supports joint initiatives aimed at strengthening digital skills, improving AI literacy and advancing people-centred development models for AI.

Projects under the partnership will focus on training programmes and educational resources designed for a wide range of audiences, including the general public, educators, technical specialists and policymakers.

Collaborative efforts will also encourage dialogue between institutions, governments and industry to support responsible innovation and reinforce regional ecosystems linked to emerging technologies.

An early outcome includes Latam-GPT, the first open large language model for Latin America and the Caribbean. The system will aid education ministries and the UNESCO Regional Observatory on AI, helping guide responsible adoption and monitor developments.

‘Artificial Intelligence represents a historic opportunity to transform our education and productive systems, but its development must be guided by clear ethical principles and a people-centred vision. This partnership with CENIA will enable us to support countries in building capacities and governance frameworks that ensure AI effectively contributes to the common good,’ stated Esther Kuisch Laroche, Director of the UNESCO Regional Office in Santiago.

‘At CENIA, we have been working consistently on applied research and capacity-building, advancing knowledge generation, technology transfer and scientific evidence.

‘This experience allows us to contribute from both a technical and training perspective to ensure that the development of Artificial Intelligence in the region is grounded in robust and ethical standards, thereby impacting education and productive development. We are convinced that technological progress must be accompanied by training, responsible frameworks and multi-sector collaboration.

‘For this reason, this agreement with UNESCO represents a strategic step towards strengthening capacity development and the ethical, people-centred adoption of Artificial Intelligence in Latin America and the Caribbean.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qualcomm pushes Europe to take the lead in the 6G revolution

Europe is being urged to take a leading role in developing sixth-generation wireless technology as global competition intensifies over the future of connectivity and AI.

Speaking at the Mobile World Congress in Barcelona, Wassim Chourbaji of Qualcomm argued that 6G will represent a technological revolution rather than a gradual improvement over existing networks.

The company expects early pre-commercial deployments to begin around 2028, with broader commercialisation targeted for 2029.

Next-generation wireless networks are expected to support physical AI systems capable of interacting with the real world, including robotics, smart glasses, connected vehicles, and advanced sensing technologies.

High-capacity uploads and faster processing between devices and data centres will allow AI systems to analyse video streams and real-time data more efficiently.

Qualcomm has also launched a coalition aimed at accelerating 6G development with partners including Nokia, Ericsson, Amazon, Google and Microsoft.

Advocates argue that combining European industrial strengths with advanced wireless and AI technologies could allow the continent to secure a leading position in the next phase of global digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China expands oversight of youth online safety

China has introduced new measures to regulate online information that could harm the physical and mental health of minors. Authorities said the rules take effect on 1 March and aim to strengthen protection for young internet users.

The regulators identified four categories of online information that may harm minors. The authorities have also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework requires internet platforms and content creators to prevent and respond to harmful material. Regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children. Officials also stressed greater responsibility for platforms that manage digital content used by minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US introduces ratepayer protection pledge for AI data centres

The United States government has announced a new policy initiative to ensure that the rapid expansion of data centres and AI infrastructure does not increase electricity costs for American households.

The measure, known as the Ratepayer Protection Pledge, places responsibility for additional energy demand on technology companies operating large-scale data centres.

Officials emphasised that reliable data centre infrastructure is critical to maintaining the country’s economic competitiveness and technological leadership. Facilities that power cloud computing, internet services and AI development are expected to continue expanding rapidly, driven by growing demand for advanced digital services.

At the same time, policymakers warned that rising electricity consumption linked to AI could place pressure on energy systems and consumer utility bills. Under the new pledge, hyperscale technology firms and AI companies commit to covering the full cost of the electricity and infrastructure required to operate their data centres.

Participating companies have agreed to finance new power generation resources, upgrade electricity delivery infrastructure and negotiate separate electricity rate structures with utilities and state authorities. The arrangement is designed to ensure that additional energy demand from large data centres does not translate into higher prices for residential consumers.

Seven major technology companies have formally accepted the terms of the pledge. Authorities argue that the initiative will support continued investment in domestic AI and cloud infrastructure while protecting households from rising energy costs and strengthening the resilience of the national power grid.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

The most widely used ChatGPT model has received an update from OpenAI, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful, and natural.

The upgrade focuses on improving tone, contextual understanding, and the flow of dialogue rather than benchmark performance alone.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Guterres convenes global UN panel of 40 experts to assess AI risks

UN Secretary-General António Guterres told the inaugural meeting of a new independent group of experts on AI convened by the UN that they have a huge responsibility to help shape how the technology is used ‘for the benefit of humanity’.

‘Individually, you come from diverse regions and disciplines, bringing outstanding expertise in AI and related fields. Collectively, you represent something the world has never seen before,’ the UN chief told scientists on Tuesday at the first meeting of the Independent International Scientific Panel on AI.

The panel brings together 40 experts who aim to help close ‘the AI knowledge gap’ and assess the real impact the frontier technology will have across economies and societies so that countries can act with the same ‘clarity’ on a level playing field.

The experts will provide scientific assessments independent of any government, company or institution – including the UN itself. ‘AI is advancing at lightning speed… no country, no company, and no field of research can see the full picture alone,’ Guterres said. ‘The world urgently needs a shared, global understanding of artificial intelligence; grounded not in ideology, but in science.’

Warning about the stakes involved as AI evolves rapidly, Guterres said the technology will shape peace and security, human rights, and sustainable development for decades to come. ‘I have seen how quickly fear can take hold when facts are missing or distorted – how trust breaks down, and division deepens,’ he said. At a time when ‘geopolitical tensions are rising, and conflicts are raging,’ he stressed that the need for shared understanding and ‘safe and responsible AI could not be greater.’

As AI development accelerates, the Secretary-General also warned the panel that it is ‘in a race against time.’ Addressing concerns about the pace of technological change, he said: ‘Never in the future will we move as slowly as we are moving now. We are indeed in a high level of acceleration.’

Guterres also pointed to earlier work through the UN High-Level Advisory Body on AI, noting that the new scientific panel does not ‘start from zero’. Concluding his remarks, Guterres told the experts: ‘I can think of no more important assignment for our world today.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI Readiness Assessment Report highlights India’s progress and gaps in ethical AI

UNESCO and India’s Ministry of Electronics and Information Technology (MeitY) have launched the India AI Readiness Assessment Report during the India AI Impact Summit 2026. The report evaluates the country’s progress in building an ethical and human-centred AI ecosystem.

Developed by UNESCO with the IndiaAI Mission and Ikigai Law as implementing partner, the report draws on consultations with more than 600 stakeholders from government, academia, industry, and civil society. The assessment examined governance, workforce readiness, and infrastructure development.

Principal Scientific Adviser to the Government of India, Dr Ajay Kumar Sood, emphasised the importance of embedding ethics throughout the technology lifecycle. ‘AI is here to make an impact. The question is not how fast we adopt AI, but how thoughtfully we shape it,’ he said.

The report highlights the country’s growing role in global AI development, noting that it accounts for around 16% of the world’s AI talent and has filed more than 86,000 related patents since 2010. It also points to progress in multilingual AI systems and digital public services.

The assessment also identifies policy priorities, including stronger legal frameworks, inclusive workforce transitions, and better access to high-quality datasets. UNESCO officials said the recommendations aim to support responsible AI governance and strengthen public trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.

Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces powerful and transformative voice mode for Claude Code

Anthropic has introduced a voice mode capability for Claude Code, its AI coding assistant for developers. The feature enables users to interact with the system through spoken commands, marking a step toward more conversational and hands-free coding workflows.

Voice interaction allows developers to execute programming tasks using natural language. By activating voice mode, users can verbally request actions, reflecting a broader shift toward intuitive human-AI collaboration in software development.

The rollout is currently limited, with voice mode available to a small percentage of users before wider deployment. Technical details remain unclear, including potential usage limits and whether external voice AI providers contributed to the feature’s development.

The update builds on Anthropic’s earlier integration of voice interaction in its Claude chatbot. This expansion suggests a wider strategy to embed voice interfaces across AI tools and enhance multimodal interaction experiences.

Competition in AI coding assistants continues to intensify, with multiple technology companies developing similar tools. Within this environment, Claude Code has gained strong adoption and a growing market presence among developers.

User growth and revenue indicators highlight the growing momentum of Anthropic’s AI ecosystem. The company also experienced heightened public visibility following its decision to restrict certain military uses of its AI systems, contributing to a surge in app popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when asked for factual information and with no intent to persuade.

Researchers tested 1,912 participants, comparing their responses to AI-generated summaries of historical events with responses to Wikipedia entries, and found measurable differences in opinion.

The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models that subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!