UK’s FCA steps up enforcement on crypto

The UK’s Financial Conduct Authority has led its first coordinated crackdown on illegal crypto trading, targeting firms operating without authorisation. The action forms part of wider efforts to enforce compliance in the sector.

According to the Authority, the operation involved identifying and taking action against companies that unlawfully promoted or offered crypto services. The move aims to protect consumers from potential risks.

The regulator stated that illegal crypto promotions can expose users to financial harm and undermine market trust. It emphasised the importance of ensuring firms meet regulatory requirements before operating.

The Authority said the crackdown reflects a stronger enforcement approach to unauthorised crypto activity, with further action expected to support market integrity in the UK.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum analysis explains what drives startup growth today

Findings from the World Economic Forum (WEF) highlight a shift in how early-stage ventures grow from pilot projects into fully operational businesses.

Evidence gathered by UpLink, the WEF’s early-stage innovation initiative, from more than 200 start-ups, together with input from investors and policymakers, suggests that scaling no longer depends primarily on innovation itself, but on the conditions enabling deployment.

Core and emerging technologies already exist across sectors, yet barriers remain in market adoption, coordination, and institutional readiness.

Resilience has moved from a strategic ambition to an immediate operational requirement. Start-ups are increasingly built around urgent, clearly defined problems, allowing them to adapt quickly in volatile environments shaped by geopolitical tensions, supply chain disruption, and climate pressures.

Strong partnerships have emerged as a central priority, with a significant majority of ventures seeking collaboration with larger corporate actors to gain access to infrastructure, regulatory pathways, and credibility.

Collaboration at early stages is proving essential in reducing risk and accelerating adoption. Traditional scaling models, based on proving technology before securing buyers, are losing effectiveness in complex sectors with high institutional risk.

Shared responsibility across multiple stakeholders enables innovation to move beyond demonstration phases into real-world application, particularly when aligned with procurement systems and regulatory frameworks.

Commercial viability has also become central to scaling success. Impact alone is no longer sufficient, as investors and buyers increasingly prioritise measurable financial outcomes such as cost efficiency, risk reduction, and resilience.

Market signals, including early contracts and partnerships, now outweigh funding rounds as indicators of credibility.

Why does it matter?

The WEF analysis underscores that scalable growth depends less on innovation alone and more on coordinated ecosystems that turn pilots into real-world adoption.

Crypto derivatives rules face overhaul in Thailand consultation

Thailand is moving to simplify access to crypto derivatives markets through proposed regulatory changes aimed at reducing operational barriers for digital asset firms. The Securities and Exchange Commission of Thailand has opened a consultation on letting licensed crypto firms access derivatives without separate corporate entities. 

Current regulations require firms to operate distinct legal structures for derivatives activity, increasing compliance costs and limiting market expansion. The proposed framework consolidates licensing under a single regulatory umbrella while maintaining oversight through internal controls and conflict management rules. 

The reform reflects a broader international shift towards integrating crypto and traditional financial markets within unified trading environments. Similar momentum is visible in the United States, where discussions on crypto perpetual futures are advancing alongside increased institutional activity in derivatives infrastructure.

Market activity is already responding to anticipated changes, including acquisitions of regulated trading platforms to support expanded product offerings. These developments indicate growing alignment between regulatory evolution and industry expansion in digital asset derivatives markets.

Why does it matter? 

These changes represent a broader move toward integrating crypto and traditional markets under unified regulatory frameworks. Reducing structural barriers may improve efficiency and innovation while preserving oversight.

Parallel developments across key jurisdictions also point to growing global competition to set standards for crypto derivatives, with implications for liquidity, access, and institutional participation worldwide. 

ILO sets first global framework for AI use in manufacturing sector

The International Labour Organization (ILO) has adopted its first-ever tripartite conclusions on AI in manufacturing, marking a significant policy step in addressing the sector’s digital transformation.

Agreed following a five-day technical meeting in Geneva, the framework brings together governments, employers and workers to shape how AI is integrated into one of the world’s largest employment sectors.

These ILO conclusions respond to the growing impact of AI on manufacturing, which employs nearly 500 million people globally.

Rather than focusing solely on productivity gains, the framework emphasises the need to align technological adoption with labour standards, ensuring that innovation supports decent work, strengthens enterprises and contributes to inclusive economic growth.

Key provisions address skills development, lifelong learning and occupational safety, alongside the protection of fundamental rights at work.

The framework also highlights the importance of social dialogue, recognising that collaboration between stakeholders is essential to managing AI-driven change and mitigating potential disruptions to employment and working conditions.

The agreement reflects a broader effort to balance efficiency with worker protection, rejecting the notion that productivity and labour rights are competing priorities.

Instead, it positions AI as a tool that, if properly governed, can enhance both economic performance and job quality within the manufacturing sector.

The conclusions will be submitted to the ILO Governing Body in November 2026 for formal approval, with the intention of guiding national policies and international approaches to AI deployment in industry.

UNESCO launches regional observatory on AI in education in Latin America and the Caribbean

UNESCO has launched a new regional platform on AI in education for Latin America and the Caribbean, aiming to help governments respond to both a deep learning crisis and the rapid spread of AI tools in schools and universities.

Called the Observatory on Artificial Intelligence in Education for Latin America and the Caribbean, the initiative was launched on 14 April in Santiago, Chile, during the 2026 Forum of the Countries of Latin America and the Caribbean on Sustainable Development.

UNESCO presents the Observatory as the first regional platform anchored in the UN system dedicated to AI in education in Latin America and the Caribbean. It is designed as a multistakeholder mechanism bringing together the region’s 33 ministries of education, along with universities, research centres, teachers, and strategic partners, to generate evidence, strengthen capacities, and support public decision-making on how AI should be used in education.

The initiative is being framed as a response to two pressures at once. UNESCO says the region faces a serious learning crisis, while AI tools are spreading rapidly through classrooms and education systems, with uneven guidance and limited institutional preparedness. In that context, the Observatory is meant to support more context-specific policy development, stronger teacher training, and classroom-tested innovation within ethical frameworks, rather than leaving AI adoption to fragmented local experimentation.

That gives the launch a significance beyond a standard education technology initiative. The core argument is not simply that AI should be introduced into schools, but that governments need a shared regional capacity to shape its use. UNESCO sums that up with a simple principle: AI should not govern education; education should govern AI.

The Observatory is being developed with a broad coalition of regional and international partners, including the Development Bank of Latin America and the Caribbean, Chile’s National Centre for Artificial Intelligence, the Regional Centre for Studies on the Development of the Information Society, ECLAC, the Ceibal Foundation, Fundación Santillana, Tecnológico de Monterrey, ProFuturo, the Universidad del Desarrollo in Chile, and the International Research Centre on Artificial Intelligence. Its advisory council also includes the OECD, the Organisation of Ibero-American States, experts from Harvard University, and the UN Independent International Scientific Panel on AI.

Why does it matter?

The story shows UNESCO moving from broad principles on ethical AI to a more concrete regional governance model. Rather than issuing another general call for responsible AI in education, it is trying to build an institutional platform that can connect evidence, policy, teacher capacity, and public oversight across Latin America and the Caribbean.

Singapore proposes more tailored capital rules for crypto assets

Singapore’s central bank has launched a consultation on new capital rules for crypto-asset exposures, proposing a more differentiated approach than treating all blockchain-based assets as equally risky.

Under the draft framework, tokenised traditional assets and certain stablecoins would fall into a lower-risk category with lighter capital treatment. The proposal also leaves room for some assets on permissionless blockchains to qualify for that category if they meet principle-based risk conditions.

At the same time, the approach remains cautious. Singapore-incorporated banks would face strict exposure limits, including a cap of 2% of Tier 1 capital for qualifying crypto-asset exposures and a 5% Tier 1 capital limit for exposures that give rise to liabilities.
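As a rough illustration of how those proposed caps would translate into exposure limits, the sketch below applies the 2% and 5% Tier 1 ratios from the consultation to a hypothetical bank; the Tier 1 capital figure and the function name are invented for the example.

```python
# Hypothetical illustration of the caps proposed in Singapore's consultation.
# Only the 2% and 5% Tier 1 ratios come from the proposal; the bank figure
# below is invented for the example.

def exposure_caps(tier1_capital: float) -> dict:
    """Return the proposed exposure limits for a Singapore-incorporated bank."""
    return {
        # Cap on qualifying (lower-risk) crypto-asset exposures: 2% of Tier 1
        "qualifying_exposure_cap": tier1_capital * 2 / 100,
        # Cap on exposures that give rise to liabilities: 5% of Tier 1
        "liability_exposure_cap": tier1_capital * 5 / 100,
    }

# A bank with a hypothetical S$10bn Tier 1 capital base:
caps = exposure_caps(tier1_capital=10_000_000_000)
print(caps["qualifying_exposure_cap"])  # 200000000.0 (S$200m)
print(caps["liability_exposure_cap"])   # 500000000.0 (S$500m)
```

Under those assumptions, even a large bank’s permitted crypto exposure would remain a small fraction of its capital base, which is the point of the caps.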

The consultation suggests Singapore is not trying to open the door widely to bank crypto activity, but rather to create a more workable prudential framework for selected forms of tokenised finance. That would allow regulators to distinguish between higher-risk crypto exposures and assets that more closely resemble traditional financial instruments in tokenised form.

The move is significant because it points to a more tailored interpretation of international prudential standards rather than a one-size-fits-all approach. If adopted, it could reduce uncertainty for banks seeking to engage with tokenised assets while preserving tight capital safeguards around the sector.

More broadly, the proposal reflects a cautious effort to integrate parts of the crypto and tokenisation market into mainstream finance without weakening the core logic of bank capital regulation. In that sense, the consultation is less a loosening of rules than an attempt to apply them with greater precision.

Global stablecoin rule gaps raise concerns in Bank for International Settlements warning

The Bank for International Settlements has warned that diverging national approaches to stablecoin regulation could create openings for regulatory arbitrage as stablecoins become more closely linked to the traditional financial system.

In a recent bulletin, the BIS says the growth of stablecoins is creating policy challenges ranging from anti-money laundering and financial integrity to broader risks to financial stability. It argues that inconsistent regulatory treatment across jurisdictions could allow firms to exploit gaps between rulebooks, making supervision less effective and fragmenting cross-border financial activity.

The BIS also points to broader systemic concerns as stablecoins move closer to mainstream finance. Their expanding role could reshape how funds move through the financial system, with implications for bank funding, credit intermediation, and the transmission of stress during market volatility. Separate BIS research has also found that stablecoins are playing a growing role in safe asset markets, with implications for financial stability and monetary policy transmission.

One key concern is how stablecoin structures could behave under pressure. If large numbers of users redeem at once, issuers may need to liquidate reserve assets quickly, potentially transmitting stress into underlying markets.

The BIS bulletin frames these risks as part of a broader challenge: stablecoins are no longer crypto instruments operating in isolation, but are increasingly linked to core parts of the financial system.

The BIS also warns that regulation is made harder by the fact that many stablecoins circulate on public blockchains. In that environment, conventional controls such as anti-money laundering checks and identity verification are often weakest at the points where users move between crypto markets and traditional finance.

That is why the bulletin stresses the importance of stronger controls at entry and exit points, rather than relying only on rules aimed at issuers themselves.

For some jurisdictions, the concerns go beyond prudential supervision. The BIS says the wider use of foreign-currency-denominated stablecoins could raise concerns about monetary sovereignty and weaken existing foreign exchange controls. That risk is especially relevant in countries where domestic monetary and exchange rate frameworks are more exposed to external pressures.

The broader significance of the warning is that the BIS is pushing for more coordinated and tailored regulation at a moment when stablecoins are moving closer to mainstream use.

Its message is not that all stablecoins should be regulated identically, but that fragmented oversight could undermine policy effectiveness, increase systemic vulnerabilities, and make cross-border risks harder to contain.

Australian regulator highlights rising AI use across various industries

The Australian Communications and Media Authority reports that AI use is accelerating across telecommunications, media and online gambling sectors. The findings highlight growing adoption alongside increasing complexity in how the technology is applied.

According to the Authority, AI is being used in media to personalise advertising and streamline content production. However, concerns have been raised about misinformation risks and the use of copyrighted material.

In the gambling sector, AI supports predictive analytics, promotions and detection of harmful behaviour, while telecommunications companies use it to improve efficiency, detect scams and strengthen network resilience.

The Authority states that despite efficiency gains, stakeholders are calling for stronger governance, transparency and safeguards as AI adoption expands in Australia.

UK regulator selects firms for second cohort of AI testing programme in financial services

The Financial Conduct Authority (FCA) has selected eight firms to join the second cohort of its AI Live Testing programme, with trials beginning in April 2026. The announcement was made at UK FinTech Week.

The initiative allows participants to test AI applications under regulatory oversight, with a focus on risk management and live monitoring. The FCA is working with AI assurance specialist Advai to support the deployment of systems across financial markets.

Jessica Rusu, chief data, information and intelligence officer at the FCA, said the programme reflects collaboration between regulators and industry. She added that the FCA continues to work with firms to support the safe and responsible development of AI in UK financial markets.

The second cohort includes Barclays, Experian, Lloyds Banking Group, UBS, Aereve, Coadjute, GoCardless and Palindrome. The FCA noted that use cases include targeted investment support, credit scoring insights, anti-money laundering detection and agentic payments.

The FCA will also use the programme to examine emerging concepts, such as targeted support, a lighter-touch regulatory category aimed at addressing the UK’s advice gap. It reported that applications to its innovation services, including the Regulatory Sandbox and Innovation Pathways, increased by 49 percent year on year. A report on AI adoption practices is expected later in 2026, with a full evaluation of the cohort due in 2027.

India forms expert committee to support AI governance framework

India’s Ministry of Electronics and Information Technology has constituted a Technology and Policy Expert Committee to support the country’s AI governance architecture. The committee will advise the AI Governance and Economic Group (AIGEG) on policy design, regulatory measures, and international engagement.

The committee is chaired by the ministry’s Secretary and includes experts from academia, industry, and digital policy. Its mandate is to provide informed input grounded in technological developments, regulatory approaches, and global practices.

AIGEG will set strategic direction and coordinate policy across government. The expert committee will translate technical and policy issues into actionable insights for decision-making.

The framework aims to ensure a dynamic and adaptive approach to AI governance. It also seeks to align strategic, technical, and policy considerations with India’s social and economic context.
