UNESCO initiative drives new digital platform governance frameworks in South Asia

South Asia is strengthening digital platform governance through a rights-based approach shaped by regional cooperation and international guidance.

A workshop led by UNESCO brought together policymakers, civil society and academics to align platform regulation with principles of freedom of expression and access to information.

The discussions focused on addressing governance gaps linked to misinformation, platform accountability and transparency. Participants examined national experiences and identified shared regulatory challenges, emphasising the need for coordinated regional responses instead of fragmented national measures.

An initiative that also validated regional toolkits designed for policymakers and civil society, translating global principles into practical guidance. These tools aim to support the implementation of governance frameworks that reflect local contexts while upholding international human rights standards.

The process builds on UNESCO’s Internet for Trust guidelines, reinforcing a human-centred model of digital governance. Continued collaboration across South Asia is expected to strengthen regulatory capacity and ensure that digital platforms operate with greater accountability and public trust.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

AI capacity partnership links UNDP and Intel in Lesotho and Liberia

The United Nations Development Programme and Intel are working together to expand AI training and digital skills in Lesotho and Liberia under a Memorandum of Understanding signed in March 2025. According to UNDP, the partnership is intended to combine global technical expertise with local leadership as both countries pursue broader digital transformation goals.

Lesotho and Liberia are approaching the issue from different starting points.UNDP says Lesotho is aiming for universal digital access by 2030, while Liberia is investing in AI in higher education and governance systems to prepare for the future digital economy. Through its partnership with Intel, the UN’s global development network says it is helping close gaps in AI literacy and capacity-building so communities can better understand how AI may affect everyday life.

In Lesotho, UNDP says it has already helped establish 40 Digital Skills Learning Labs and train 40 Digital Ambassadors, including teachers, religious leaders, and local influencers. Intel’s ‘AI for Citizens (AI Community Experiences)’ programme was introduced to provide locally relevant training materials for low-connectivity environments. UNDP says the onboarding included virtual sessions using games and storytelling, while analogue activities and puzzles were used to explain concepts such as computer vision.

Liberia’s work has focused more on higher education and the public sector. UNDP says it supported the University of Liberia in designing its first Master of AI programme through six online sessions with global experts and in-person workshops involving 20 faculty members. The collaboration also extended to government, with targeted training for nearly 100 officials on how AI could improve public service delivery and inform policy decisions.

Anshul Sonak, Global Head of Intel Digital Readiness Programs, said: ‘We are deeply honoured to be a part of the AI training collaboration in Liberia with UNDP. Bringing AI skills and digital literacy to a country rich in history and potential was an amazing experience. We look forward to more collaborations in the future and finding more opportunities for Intel to be a player in the region.’

UNDP says future phases may include expanding training to more communities and countries, adapting content to local languages and contexts, and adding online components as connectivity improves. Dhani Spiller, Head of UNDP’s Digital Capacity Lab, said: ‘This partnership shows what’s possible when we combine UNDP’s development mandate with the innovation and technical depth of private-sector leaders.’

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare adds LLM layer to client-side security detection pipeline

Cloudflare has announced two changes to its client-side security offering, making Client-Side Security Advanced available to self-serve customers and offering domain-based threat intelligence at no extra cost to all users on the free Client-Side Security bundle. The update is focused on browser-based attacks that can steal data via malicious scripts without visibly disrupting a website’s normal operation.

Cloudflare says its client-side security system assesses 3.5 billion scripts per day and monitors an average of 2,200 scripts per enterprise zone. According to the company, the product relies on browser reporting, including Content Security Policy signals, rather than scanners or application instrumentation, and requires only that traffic be proxied through Cloudflare.

A central part of the announcement is a new detection pipeline combining a Graph Neural Network (GNN) with a Large Language Model (LLM). Cloudflare says the GNN analyses the Abstract Syntax Tree of JavaScript code to identify malicious intent even when scripts are minified or obfuscated. Scripts flagged as suspicious are then passed to an open-source LLM running on Workers AI for a second-stage semantic assessment intended to reduce false positives.

Cloudflare says the GNN is tuned for high recall to identify novel and zero-day threats, but that false alarms remain a challenge at internet scale. Internal evaluation results cited by the company show that the secondary LLM layer reduced false positives in the JS Integrity threat category by nearly three times across the total analysed traffic, lowering the rate from about 0.3% to about 0.1%. On unique scripts, Cloudflare says the false-positive rate fell from about 1.39% to 0.007%.

The company also describes a recent case involving a heavily obfuscated malicious script named core.js. According to Cloudflare, the payload targeted Xiaomi OpenWrt-based home routers, altered DNS settings, and attempted to change admin passwords. Cloudflare says the script was injected through compromised browser extensions rather than by directly compromising a website, and adds that its GNN detected the malicious structure while the LLM confirmed the intent.

Cloudflare argues that the two-stage design provides structural detection via the GNN and broader semantic filtering via the LLM, enabling the company to lower the GNN decision threshold without sharply increasing alert volume. Every script flagged by the GNN is also logged to Cloudflare R2 for later auditing, which the company says helps it review cases where the LLM overrode the initial verdict.

Domain-based threat intelligence is now being made available to all Client-Side Security customers, including those not using the Advanced tier. Cloudflare says the move is partly a response to attacks seen in 2025 against smaller online shops, especially on Magento, where client-side compromises continued for days or weeks after public disclosure. By extending domain-based signals more broadly, the company says site owners can more quickly identify malicious JavaScript or suspicious connections and investigate possible compromises.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Why Geneva’s AI week matters more than a single summit

Geneva will host far more than another technology summit in July 2026. Over the course of a single week, the city will bring together three processes that are usually treated as separate tracks: ITU’s ‘AI for Good Global Summit‘, the inaugural ‘Global Dialogue on AI Governance‘ under UN auspices, and the ‘WSIS Forum 2026‘.

Swiss authorities have already laid out a timetable that shows how closely these strands are now aligned. The Global Dialogue on AI Governance is scheduled for 6 and 7 July, AI for Good will run from 7 to 10 July, and the WSIS Forum will take place from 6 to 10 July.

That overlap is more than a matter of scheduling. A more important signal lies in the fact that the same city will briefly host three different approaches to the global AI debate. The first is the innovation and demonstration layer. AI for Good has long brought together companies, researchers, startups, and international organisations to explore practical uses of AI across healthcare and education, as well as climate and development.

AI for Good and a UN governance dialogue will bring policy and technology discussions together in Geneva.

Recent trade coverage suggests that the 2026 edition will again combine live demonstrations, standards discussions, national strategies, and skills-related conversations, making the summit more than a conventional conference. It is increasingly becoming a showcase for both technological ambition and the policy language surrounding it.

The second layer is diplomatic. The Global Dialogue on AI Governance, which will be held in Geneva for the first time, carries far more weight than a ceremonial UN gathering. As CSIS has argued, the forum should be read as a sign of broader realignment in global AI politics, especially in relation to the US, China, and countries in the Global South.

The questions at stake go beyond safe and responsible AI development. They also include the interoperability of national regulatory approaches, the capacity of developing countries to engage with AI governance, and the distribution of political influence in shaping future rules.

The third layer is developmental and institutional. The WSIS Forum has long served as a platform for debates on the information society, digital cooperation, and development policy. It’s running in parallel to AI for Good, and the new UN dialogue shows that AI is no longer a subject that can remain confined to technical or commercial circles. Instead, AI is being folded more directly into wider debates on inclusion, digital capacity, development, and international cooperation.

That is what makes Geneva’s July calendar noteworthy. The significance lies not simply in the fact that three events are happening at once, but in what their convergence represents. For a few days, technology showcases, multilateral governance talks, and long-running digital development agendas will be forced into the same conversation.

If earlier AI debates could still be treated as separate tracks, July 2026 suggests they are beginning to merge. That convergence may prove to be the more important story.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK backs quantum technology with £2 billion plan

The UK government has unveiled a £2 billion package to accelerate quantum technology development and deploy large-scale quantum computers. The plan aims to position the United Kingdom as a global leader in a field expected to rival AI.

Ministers said the programme will support research, skills and infrastructure while creating high-paid jobs. A new procurement scheme will invite companies to build prototype quantum systems, with the most advanced designs scaled for national use.

The initiative will integrate research, manufacturing and investment to speed up commercial applications in the UK. Officials believe quantum technology could transform sectors such as healthcare, energy and cybersecurity while boosting long-term economic growth.

Industry partnerships and university collaborations will play a central role in delivering the strategy. Experts say the approach could unlock major breakthroughs, though success will depend on sustained investment and global competition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Brazil study maps age assurance practices across 25 digital services

A new study by CGI.br and NIC.br examines how digital services in Brazil implement age assurance measures. Presented in Brasília during an event on the Digital Child and Adolescent Statute (ECA Digital), the study reviewed 25 popular online services used by children and adolescents.

The study found that most of the services analysed do not apply age checks at the point of registration, including some platforms aimed at adults. According to the release, age assurance usually appears later, when users try to access specific features such as livestreaming or monetisation.

Titled ‘Age assurance practices in 25 digital services used by children in Brazil’, the study analysed governance documents published before the ECA Digital entered into force. From 18 March, the law requires information-society services aimed at children and adolescents in Brazil, or likely to be accessed by them, to adopt effective age-assurance measures and parental supervision.

The study found that 11 of the 25 platforms relied on third-party age-assurance services, particularly social media and generative AI platforms. Official identity document submission was the most common verification method, while selfie-based checks were the most common age-estimation tool. Differences were also found between the minimum ages stated by services and those listed in app stores, and some adult-oriented platforms could still be accessed by younger users with parental consent.

Parental supervision tools were available in 15 of the 25 services, but activation was usually optional and depended on parents or guardians. Transparency also emerged as a weakness: only six services published Brazil-specific reports, and only one explained how its minimum-age policy was applied. Policies were often spread across multiple pages, averaging 22 pages per service, and around 40% of the services provided related information in other languages.

Fábio Senne, General Research Coordinator at Cetic.br | NIC.br, said: ‘One of the study’s central aims was to verify the integrity of the information made available by digital services in Brazil. It is essential that data on age protection be communicated clearly and accessibly, allowing more informed and effective parental supervision.’

Juliana Cunha, manager of the Digital Public Policy Advisory Office at CGI.br | NIC.br, said: ‘This survey was developed to support the debate on implementation of the ECA Digital and to offer a clear understanding of the current landscape. This initiative forms part of a broader set of actions by CGI.br and NIC.br aimed at providing technical evidence to support effective enforcement of the law. Our commitment is to foster a safer and more responsible digital ecosystem for children and adolescents in Brazil.’

The release says the study used as a methodological reference the OECD technical paper ‘Age assurance practices of 50 online services used by children’, published in 2025. Information was collected between 10 and 30 January 2026 from public documents made available by the services in Brazil, totalling 550 pages analysed. The event also marked the launch of TIC Kids Online Brazil 2025, a publication on internet use by children and adolescents aged 9 to 17 in Brazil.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ILO and World Bank paper says GenAI may deepen labour-market divides

A joint working paper by the International Labour Organization (ILO) and the World Bank says generative AI is likely to reshape labour markets globally, but not in the same way across countries.

The paper finds that advanced economies face greater overall exposure, while developing economies may see disruption arrive faster than productivity gains due to weaker digital infrastructure and differences in how work is organised.

Prepared as a background study for the World Development Report 2026, the paper examines labour-market exposure to GenAI across 135 countries, covering about two-thirds of global employment. According to the study, digital infrastructure and job-task composition are among the main factors shaping the distribution of risks and opportunities between advanced and developing economies.

Exposure is highest in advanced economies, especially in clerical and professional occupations. Lower-income countries are less exposed overall, but the paper says structural constraints reduce their ability to benefit from the technology. A central concern is that workers in jobs vulnerable to automation are often already online, even in poorer settings, meaning displacement could happen relatively quickly.

The paper also says many of the jobs most exposed to automation in developing economies are relatively higher-quality roles, including clerical and administrative work that has often provided a route into decent employment, especially for women and young workers. AI-driven automation, the study warns, could narrow those pathways.

Potential gains are also uneven. Many workers in jobs that could benefit from GenAI lack reliable internet access in lower-income settings. The paper adds that the same occupation title can involve different tasks depending on the country, with workers in poorer economies often carrying out fewer non-routine analytical tasks, relying less on computers, and doing more routine or manual work. Such differences reduce the scope for productivity gains from GenAI deployment.

ILO and the World Bank conclude in the paper that GenAI’s labour-market effects will depend not only on the technology itself, but also on digital connectivity, skills, task organisation, labour-market institutions, and social protection. Expanded digital access, stronger skills policies, and better labour protections are presented as necessary if the gains from GenAI are to be shared more broadly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea sets ambition to become AI leader

South Korea has unveiled a national strategy to become one of the world’s top three AI powers by 2028. The plan combines investment in digital infrastructure, data systems and next-generation connectivity.

Authorities aim to expand networks by advancing 5G capabilities and preparing for the commercial deployment of 6G by 2030. Cybersecurity and data integration are also key priorities to support a stronger digital ecosystem.

The strategy includes developing talent across education levels and investing in core technologies such as semiconductors and quantum computing. AI adoption is expected to expand across sectors, including manufacturing, healthcare and agriculture.

The South Korean officials also plan to promote digital inclusion through learning centres and assistive technologies. Coordination between ministries will be strengthened to ensure effective delivery of the long-term roadmap.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Campaign highlights risks of profit-driven digital platforms

A global campaign led by the Norwegian Consumer Council (NCC) has drawn attention to the decline in quality across digital platforms, a phenomenon widely referred to as ‘enshitification’, in which services deteriorate over time as companies prioritise monetisation over user experience.

The initiative has gained momentum through a viral video and coordinated advocacy efforts across multiple regions.

Inshitification is a term coined by journalist Cory Doctorow that describes a pattern in which platforms initially serve users well, then shift towards extracting value from both users and business partners.

In practice, it often results in increased advertising, paywalls, and reduced functionality, with platforms leveraging user dependence to introduce less favourable conditions.

More than 70 advocacy groups across the EU, the US and Norway have urged policymakers to take stronger action, arguing that declining competition and market concentration allow platforms to degrade services without losing users.

Network effects and high switching costs further limit consumer choice, making it difficult to move to alternative platforms even when dissatisfaction grows.

Existing frameworks, such as the Digital Markets Act and the Digital Services Act, aim to address some of these issues by promoting interoperability, transparency, and accountability.

However, experts argue that enforcement remains too slow and insufficient to deter harmful practices, suggesting that stronger regulatory intervention will be necessary to restore balance between consumers, platforms, and competition in the digital economy.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Stanford study warns about the risks of ‘sycophantic’ AI chatbots

A new study from Stanford University has raised concerns about the growing use of AI chatbots for personal advice, highlighting risks linked to a behaviour known as ‘sycophancy’, where systems validate users’ views instead of challenging them.

Researchers argue that such responses are not merely stylistic but have broader consequences for decision-making and social behaviour.

The analysis examined multiple leading models, including ChatGPT, Claude, and Gemini, and found that chatbot responses supported user perspectives far more often than human feedback.

In scenarios involving questionable or harmful actions, systems frequently endorsed behaviour that human evaluators would criticise, raising concerns about reliability in sensitive contexts such as relationships or ethical decisions.

Further experiments involving thousands of participants showed that users tend to prefer and trust sycophantic responses, increasing the likelihood of repeated use.

However, such interactions also appeared to reinforce self-centred thinking and reduce willingness to reconsider or apologise, suggesting a deeper impact on social judgement and interpersonal skills.

Researchers warn that users’ tendency to favour agreeable responses may create incentives for developers to prioritise engagement over accuracy or ethical balance.

The findings highlight the need for oversight and caution, with experts advising against relying on AI systems as substitutes for human guidance in complex personal situations.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!