UNESCO launches regional observatory on AI in education in Latin America and the Caribbean

UNESCO has launched a new regional platform on AI in education for Latin America and the Caribbean, aiming to help governments respond to both a deepening learning crisis and the rapid spread of AI tools in schools and universities.

Called the Observatory on Artificial Intelligence in Education for Latin America and the Caribbean, the initiative was launched on 14 April in Santiago, Chile, during the 2026 Forum of the Countries of Latin America and the Caribbean on Sustainable Development.

UNESCO presents the Observatory as the first regional platform anchored in the UN system dedicated to AI in education in Latin America and the Caribbean. It is designed as a multistakeholder mechanism bringing together the region’s 33 ministries of education, along with universities, research centres, teachers, and strategic partners, to generate evidence, strengthen capacities, and support public decision-making on how AI should be used in education.

The initiative is being framed as a response to two pressures at once. UNESCO says the region faces a serious learning crisis, while AI tools are spreading rapidly through classrooms and education systems, with uneven guidance and limited institutional preparedness. In that context, the Observatory is meant to support more context-specific policy development, stronger teacher training, and classroom-tested innovation within ethical frameworks, rather than leaving AI adoption to fragmented local experimentation.

That gives the launch a significance beyond a standard education technology initiative. The core argument is not simply that AI should be introduced into schools, but that governments need a shared regional capacity to shape its use. UNESCO sums that up with a simple principle: AI should not govern education; education should govern AI.

The Observatory is being developed with a broad coalition of regional and international partners, including the Development Bank of Latin America and the Caribbean, Chile’s National Centre for Artificial Intelligence, the Regional Centre for Studies on the Development of the Information Society, ECLAC, the Ceibal Foundation, Fundación Santillana, Tecnológico de Monterrey, ProFuturo, the Universidad del Desarrollo in Chile, and the International Research Centre on Artificial Intelligence. Its advisory council also includes the OECD, the Organisation of Ibero-American States, experts from Harvard University, and the UN Independent International Scientific Panel on AI.

Why does it matter?

The story shows UNESCO moving from broad principles on ethical AI to a more concrete regional governance model. Rather than issuing another general call for responsible AI in education, it is trying to build an institutional platform that can connect evidence, policy, teacher capacity, and public oversight across Latin America and the Caribbean.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Singapore proposes more tailored capital rules for crypto assets

Singapore’s central bank has launched a consultation on new capital rules for crypto-asset exposures, proposing a more differentiated approach than treating all blockchain-based assets as equally risky.

Under the draft framework, tokenised traditional assets and certain stablecoins would fall into a lower-risk category with lighter capital treatment. The proposal also leaves room for some assets on permissionless blockchains to qualify for that category if they meet principle-based risk conditions.

At the same time, the approach remains cautious. Singapore-incorporated banks would face strict exposure limits, including a cap of 2% of Tier 1 capital for qualifying crypto-asset exposures and a 5% Tier 1 capital limit for exposures that give rise to liabilities.
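The proposed percentage limits translate into absolute exposure caps in a straightforward way. The sketch below is illustrative arithmetic only, not the regulatory framework itself; the bank's Tier 1 capital figure is hypothetical, while the 2% and 5% ratios come from the consultation.

```python
# Illustrative arithmetic only, not the actual prudential framework.
# Proposed caps: 2% of Tier 1 capital for qualifying crypto-asset
# exposures, 5% for exposures that give rise to liabilities.

def exposure_caps(tier1_capital: float) -> dict[str, float]:
    """Translate the proposed percentage limits into absolute caps."""
    return {
        "qualifying_crypto_assets": tier1_capital * 0.02,
        "liability_generating": tier1_capital * 0.05,
    }

# A hypothetical bank with S$10 billion in Tier 1 capital:
caps = exposure_caps(10_000_000_000)
print(caps)  # {'qualifying_crypto_assets': 200000000.0, 'liability_generating': 500000000.0}
```

On these hypothetical numbers, qualifying crypto-asset exposures would be capped at S$200 million and liability-generating exposures at S$500 million.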

The consultation suggests Singapore is not trying to open the door widely to bank crypto activity, but rather to create a more workable prudential framework for selected forms of tokenised finance. That would allow regulators to distinguish between higher-risk crypto exposures and assets that more closely resemble traditional financial instruments in tokenised form.

The move is significant because it points to a more tailored interpretation of international prudential standards rather than a one-size-fits-all approach. If adopted, it could reduce uncertainty for banks seeking to engage with tokenised assets while preserving tight capital safeguards around the sector.

More broadly, the proposal reflects a cautious effort to integrate parts of the crypto and tokenisation market into mainstream finance without weakening the core logic of bank capital regulation. In that sense, the consultation is less a loosening of rules than an attempt to apply them with greater precision.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Bank for International Settlements warns of gaps in global stablecoin rules

The Bank for International Settlements has warned that diverging national approaches to stablecoin regulation could create openings for regulatory arbitrage as stablecoins become more closely linked to the traditional financial system.

In a recent bulletin, the BIS says the growth of stablecoins is creating policy challenges ranging from anti-money laundering and financial integrity to broader risks to financial stability. It argues that inconsistent regulatory treatment across jurisdictions could allow firms to exploit gaps between rulebooks, making supervision less effective and fragmenting cross-border financial activity.

The BIS also points to broader systemic concerns as stablecoins move closer to mainstream finance. Their expanding role could reshape how funds move through the financial system, with implications for bank funding, credit intermediation, and the transmission of stress during market volatility. Separate BIS research has also found that stablecoins are playing a growing role in safe asset markets, with implications for financial stability and monetary policy transmission.

One key concern is how stablecoin structures could behave under pressure. If large numbers of users redeem at once, issuers may need to liquidate reserve assets quickly, potentially transmitting stress into underlying markets.

The BIS bulletin frames these risks as part of a broader challenge: stablecoins are no longer crypto instruments operating in isolation, but are increasingly linked to core parts of the financial system.

The BIS also warns that regulation is made harder by the fact that many stablecoins circulate on public blockchains. In that environment, conventional controls such as anti-money laundering checks and identity verification are often weakest at the points where users move between crypto markets and traditional finance.

That is why the bulletin stresses the importance of stronger controls at entry and exit points, rather than relying only on rules aimed at issuers themselves.

For some jurisdictions, the concerns go beyond prudential supervision. The BIS says the wider use of foreign-currency-denominated stablecoins could raise concerns about monetary sovereignty and weaken existing foreign exchange controls. That risk is especially relevant in countries where domestic monetary and exchange rate frameworks are more exposed to external pressures.

The broader significance of the warning is that the BIS is pushing for more coordinated and tailored regulation at a moment when stablecoins are moving closer to mainstream use.

Its message is not that all stablecoins should be regulated identically, but that fragmented oversight could undermine policy effectiveness, increase systemic vulnerabilities, and make cross-border risks harder to contain.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Australian regulator highlights rising AI use across various industries

The Australian Communications and Media Authority reports that AI use is accelerating across telecommunications, media and online gambling sectors. The findings highlight growing adoption alongside increasing complexity in how the technology is applied.

According to the Authority, AI is being used in media to personalise advertising and streamline content production. However, concerns have been raised about misinformation risks and the use of copyrighted material.

In the gambling sector, AI supports predictive analytics, promotions and detection of harmful behaviour, while telecommunications companies use it to improve efficiency, detect scams and strengthen network resilience.

The Authority states that despite efficiency gains, stakeholders are calling for stronger governance, transparency and safeguards as AI adoption expands in Australia.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK regulator selects firms for second cohort of AI testing programme in financial services

The Financial Conduct Authority (FCA) has selected eight firms to join the second cohort of its AI Live Testing programme, with trials beginning in April 2026. The announcement was made at UK FinTech Week.

The initiative allows participants to test AI applications under regulatory oversight, with a focus on risk management and live monitoring. The FCA is working with AI assurance specialist Advai to support the deployment of systems across financial markets.

Jessica Rusu, chief data, information and intelligence officer at the FCA, said the programme reflects collaboration between regulators and industry. She added that the FCA continues to work with firms to support the safe and responsible development of AI in UK financial markets.

The second cohort includes Barclays, Experian, Lloyds Banking Group, UBS, Aereve, Coadjute, GoCardless and Palindrome. The FCA noted that use cases include targeted investment support, credit scoring insights, anti-money laundering detection and agentic payments.

The FCA will also use the programme to examine emerging concepts, such as targeted support, a lighter-touch regulatory category aimed at addressing the UK’s advice gap. It reported that applications to its innovation services, including the Regulatory Sandbox and Innovation Pathways, increased by 49 percent year on year. A report on AI adoption practices is expected later in 2026, with a full evaluation of the cohort due in 2027.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India forms expert committee to support AI governance framework

India’s Ministry of Electronics and Information Technology has constituted a Technology and Policy Expert Committee to support the country’s AI governance architecture. The committee will advise the AI Governance and Economic Group (AIGEG) on policy design, regulatory measures, and international engagement.

The committee is chaired by the ministry’s Secretary and includes experts from academia, industry, and digital policy. Its mandate is to provide informed input grounded in technological developments, regulatory approaches, and global practices.

AIGEG will set strategic direction and coordinate policy across government. The expert committee will translate technical and policy issues into actionable insights for decision-making.

The framework aims to ensure a dynamic and adaptive approach to AI governance. It also seeks to align strategic, technical, and policy considerations with India’s social and economic context.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU launches protected data register

The European Commission has introduced a European Register of protected data to improve access to public sector information. The initiative is presented through the data.europa.eu platform as part of wider data-sharing efforts.

According to the Commission, the register provides a central point for discovering protected data held by public authorities. It is designed to make such datasets more visible and easier to locate.

The platform helps users identify conditions under which protected data can be accessed and reused. This includes guidance on legal and technical requirements linked to sensitive datasets.

The European Commission states that the register aims to strengthen transparency and data-driven innovation while supporting access to public sector information across the European Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

DIFC unveils plan to build ‘AI-native’ financial centre in Dubai

Dubai International Financial Centre has announced plans to become what it describes as the world’s first ‘AI-native’ financial centre, embedding AI into regulation, business operations, and physical infrastructure rather than treating it as a stand-alone tool.

The initiative is being presented as a broader redesign of how a financial centre functions. Instead of limiting AI to back-office support or isolated digital services, DIFC says it wants AI to shape legal frameworks, compliance processes, client management, and the wider operation of the financial ecosystem.

The plan builds on DIFC’s longer-term AI strategy, launched in 2023 and already tied to changes in data governance and the centre’s wider innovation agenda.

According to DIFC, AI is already being used in areas such as compliance and client services, with further expansion planned across financial workflows, supervisory processes, and institutional decision-making.

DIFC also says the initiative will be supported by a broader ecosystem designed to attract investment, talent, and experimentation. That includes training programmes, venture support, accelerators, and the continued development of its AI-focused innovation infrastructure. The aim is not only to encourage firms to use AI, but to make Dubai a base for building and scaling AI-driven financial services.

The project also extends beyond software and regulation. DIFC says physical infrastructure will evolve alongside digital systems, with plans linked to smart buildings, robotics, autonomous mobility, and digital twins by the end of the decade.

That gives the announcement a broader urban and economic dimension, positioning AI as part of the district’s future design rather than simply a tool used by firms within it.

The broader significance of the move lies in how Dubai is trying to position itself in the global race to shape AI in finance. Rather than focusing only on innovation-friendly rhetoric, DIFC is presenting regulation, infrastructure, skills, and ecosystem-building as part of a single strategy.

If realised in practice, that could strengthen Dubai’s role as a hub for AI-driven financial services and as a testing ground for new governance models.

At the same time, the claim to be the world’s first ‘AI-native’ financial centre should be understood as DIFC’s own description of the project, rather than an independently established category.

The more solid story is that Dubai is trying to make AI part of the operating logic of a financial centre itself, using policy, infrastructure, and investment to support that ambition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom closes initial online safety fees notification window

Ofcom has closed the notification window for the initial 2026/27 charging year under the UK’s online safety fees regime, marking an important administrative step in implementing the Online Safety Act. Providers that were expected to submit a Qualifying Worldwide Revenue return for the first charging year but have not yet done so have been asked to contact the regulator as soon as possible.

Under the Online Safety Act 2023, Ofcom’s costs of regulating online safety are to be recovered through fees charged to certain providers of regulated services. Those duties apply to providers whose Qualifying Worldwide Revenue for the relevant period meets or exceeds the threshold set by the Secretary of State, unless they qualify for an exemption.

For the initial 2026/27 charging year, the relevant qualifying period is the 2024 calendar year. The proposed Qualifying Worldwide Revenue threshold is £250 million, while providers are exempt from fee-related duties if their UK referable revenue for that period is below £10 million.

Ofcom says the fees regime is designed to recover its online safety regulatory costs, without exceeding them. The regulator will calculate fees using a single percentage approach based on the total amount to be recovered and the combined revenue base of providers that are liable to pay.

For planning purposes, Ofcom has indicated an annual tariff in the region of 0.02% to 0.03%. However, the final tariff for 2026/27 can only be confirmed once submitted revenue notifications have been assessed. Invoices for the first charging year are expected to be issued by September 2026.

The closure of the notification window is not, in itself, a major policy shift. Its significance lies in showing that the UK’s online safety regime is moving further into its operational phase, where compliance no longer concerns only safety duties and codes, but also the financial architecture needed to support long-term enforcement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK invests £500 million in Sovereign AI fund to boost startups

The UK government has launched a £500 million Sovereign AI initiative to support domestic startups, aiming to strengthen national capabilities and reduce reliance on foreign technology providers.

The programme is designed to help companies start, scale and compete globally while remaining rooted in Britain.

The initiative combines direct investment with broader support, including fast-track visas, access to high-performance computing, and assistance in navigating regulation and procurement.

Early investments target firms working on advanced AI infrastructure, life sciences and next-generation computing, reflecting a strategic focus on sectors with long-term economic and security implications.

A central feature is access to national supercomputing resources, addressing one of the most significant barriers to AI development.

By providing large-scale compute capacity and linking it to potential future investment, the programme aims to accelerate research, testing and deployment within the UK ecosystem.

Essentially, the policy signals a shift toward a more interventionist approach, positioning the state as an active investor rather than a passive regulator.

The objective is to anchor innovation domestically, ensuring that intellectual property, talent and economic value remain within the UK as global competition in AI intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!