UNESCO and Oxford University launch global AI course for courts

UNESCO, in partnership with the University of Oxford, has launched a free online course aimed at preparing judicial systems for the growing role of AI in legal decision-making.

AI is already shaping court processes, influencing evidence assessment, and affecting access to justice. Yet, many legal professionals lack structured guidance to evaluate such systems within a rule-of-law framework.

The UNESCO programme introduces a practical, human rights-based approach to AI, combining legal, ethical, and operational perspectives.

Developed with institutions including Oxford’s Saïd Business School and Blavatnik School of Government, the course equips participants with tools to assess algorithmic outputs, manage risks of bias, and maintain judicial independence in increasingly digital court environments.

Central to UNESCO’s initiative is a newly developed AI and Rule of Law Checklist, designed to help courts scrutinise AI systems and their outputs, including use as evidence.

The course also addresses broader concerns, including fairness, transparency, accountability, and the protection of vulnerable groups, reflecting rising global reliance on AI across justice systems.

Supported by the EU, the course is available globally, free of charge, with certification from the University of Oxford. As AI becomes embedded in judicial processes, capacity-building efforts aim to ensure technological adoption strengthens rather than undermines the rule of law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes Android changes to open AI competition

The European Commission has outlined draft measures requiring Google to improve interoperability on Android as part of ongoing proceedings under the Digital Markets Act. Regulators are focusing on how third-party AI services can interact with hardware and software features controlled by the Android operating system.

The proposed measures are intended to give competing AI services access to key Android features already used by Google’s own AI services, including Gemini. In practice, that could allow rival services to support actions such as sending messages, sharing content, or completing tasks through user-preferred applications rather than being limited by Google’s default ecosystem.

The Commission’s approach could also make it easier for users to activate alternative AI assistants through customised interactions and device-level features, reducing dependence on default system tools. The broader aim is to give third-party providers a more equal opportunity to innovate and compete in the fast-moving market for AI services on mobile devices.

Feedback on the proposed measures is being gathered as part of the Commission’s specification proceedings under the DMA. The consultation forms part of a wider regulatory effort to enforce fair access to core platform features and strengthen digital competition across European markets, including in the AI sector.

Why does it matter?

The move targets one of the most important control points in the digital economy: the operating system layer. Opening Android features to competing AI services could reduce the structural advantage held by Google and shift power towards a more competitive, multi-provider mobile ecosystem. This is an inference based on the Commission’s stated objective of giving third-party AI services access equivalent to that available to Google’s own AI tools.

Greater interoperability under the Digital Markets Act could reshape how AI reaches users, turning smartphones into more open platforms rather than tightly controlled default environments. At the same time, the case also shows how strongly the EU is trying to apply competition law to the next phase of AI distribution, not only to search, app stores, and browsers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK backs self-learning AI push to advance scientific discovery

The UK’s Sovereign AI Fund has invested in Ineffable Intelligence, a British startup developing self-learning AI systems designed to generate new knowledge rather than rely solely on existing data. The investment is being made alongside the British Business Bank.

The company is building algorithms intended to improve through interaction with their environment, refining outcomes through iterative experimentation. The approach is aimed at enabling AI systems to identify new patterns and solutions for use in science, engineering, and healthcare.
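
Ineffable Intelligence has not published technical details, so the sketch below is only a generic illustration of learning through interaction: a minimal tabular Q-learning loop in which an agent improves a toy policy purely from trial-and-error experience rather than from a fixed dataset. The environment, reward, and hyperparameters are invented for the example and are not the company's method.

```python
# Minimal tabular Q-learning on a toy corridor environment.
# Illustrative only: a generic reinforcement-learning sketch,
# not Ineffable Intelligence's actual approach.
import random

N_STATES = 6          # corridor cells 0..5; reaching cell 5 gives a reward
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(state):
    """Best-known action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(200):
    state, done = 0, False
    while not done:
        # Explore occasionally, otherwise exploit current value estimates.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Update the estimate from experience gathered by interacting.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

print({s: greedy(s) for s in range(N_STATES - 1)})  # learned policy: move right
```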

Led by AI researcher David Silver, known for his work in reinforcement learning, the project reflects a broader shift towards more autonomous and exploratory forms of AI. Support from the Sovereign AI Fund is intended to help the company scale its development from within the UK and strengthen longer-term domestic innovation capacity.

The investment forms part of a wider strategy to strengthen sovereign AI capability in the UK, reduce reliance on external technologies, and reinforce domestic expertise. In that context, infrastructure support and talent development are being positioned as part of a broader effort to support the growth of next-generation AI systems and expand the UK’s role in frontier research.

Why does it matter?

Investment in self-learning AI reflects a broader shift in how advanced AI is being developed, from systems that mainly analyse existing information towards systems intended to generate new insights through exploration and interaction. If those approaches prove effective, they could accelerate discovery in fields where conventional modelling and data-driven methods have clear limits. This is an inference based on the company’s stated aims and the government’s framing of the investment.

More broadly, sovereign investment in advanced AI highlights a growing focus on technological independence and strategic control over critical digital capability. Strengthening domestic capacity could help ensure that future AI innovation is developed within national ecosystems, with implications for economic competitiveness and long-term research direction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would be required from 2027 to use age verification systems, rather than self-declared age data, to block access for users under 15.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.
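
The underlying protocol is not specified here, but the idea of sharing only age eligibility can be sketched as a signed attestation carrying a single over-15 flag instead of a birth date. The field names below are hypothetical, and the shared-key HMAC merely stands in for a real digital signature to keep the example self-contained; this is not the actual Kids Wallet design.

```python
# Illustrative selective-disclosure sketch: the platform learns only an
# over-15 eligibility flag, never the underlying birth date.
# Hypothetical fields; HMAC stands in for a public-key signature.
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-key"  # placeholder for the issuer's signing key

def issue_age_token(birth_date: date) -> dict:
    """Issuer checks the birth date privately and signs only the result."""
    over_15 = (date.today() - birth_date).days >= 15 * 365
    claim = json.dumps({"over_15": over_15}, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_age_token(token: dict) -> bool:
    """Platform verifies the signature and reads only the eligibility flag."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return json.loads(token["claim"])["over_15"]

token = issue_age_token(date(2015, 3, 1))
print(verify_age_token(token))  # False: under 15, so access would be blocked
```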

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.
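
Neither the Code nor the taskforce is described here as prescribing a particular forecasting technique. As a rough sketch of what a structured, trend-based forecast could look like, the example below fits an exponential growth curve to invented training-compute figures and estimates when the trend would cross an assumed threshold; all numbers are illustrative, not official GPAI methodology.

```python
# Illustrative trend extrapolation: fit exponential growth to hypothetical
# training-compute figures and estimate when a threshold would be crossed.
# Invented numbers; not an official methodology or real provider data.
import math

observations = [(2022, 1e24), (2023, 3e24), (2024, 1e25), (2025, 3e25)]  # (year, FLOP)
threshold = 1e26  # assumed systemic-risk compute threshold

# Least-squares fit of log10(compute) = a * year + b
xs = [year for year, _ in observations]
ys = [math.log10(c) for _, c in observations]
n = len(xs)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
    sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean

# Year at which the fitted trend reaches the threshold
crossing_year = (math.log10(threshold) - b) / a
print(f"Trend crosses {threshold:.0e} FLOP around {crossing_year:.1f}")
```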

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN prepares first Global Dialogue on AI governance ahead of Geneva meeting

The United Nations is advancing preparations for the first Global Dialogue on Artificial Intelligence Governance, set to take place in Geneva on 6–7 July 2026 alongside the AI for Good Summit.

Speaking at a UN Geneva press briefing, Egriselda López, Permanent Representative of El Salvador and co-chair of the Dialogue, said the initiative was established by UN member states as a universal forum to discuss AI governance. The process is intended to bring together governments and stakeholders with the aim of producing tangible outcomes.

López said the initial meeting will be structured around thematic clusters, including one focusing on AI opportunities and implications and another addressing the digital divide. She added that consultations with member states and stakeholders are ongoing to ensure an inclusive format for the discussions.

Rein Tammsaar, Permanent Representative of Estonia and co-chair of the Dialogue, said the forum aims to connect existing AI initiatives and best practices from around the world. He stressed the importance of interoperability and coordination, noting that the Dialogue seeks to create synergies rather than duplicate existing efforts.

According to Tammsaar, additional thematic areas will include interoperability, safety, and human rights. While human rights are expected to be a cross-cutting issue, stakeholders have also called for them to be addressed as a standalone theme.

Amandeep Gill, UN Secretary-General’s Envoy on Technology, described the initiative as part of a broader approach to ensuring that AI benefits humanity as a whole. He said the Dialogue is designed as a ‘dialogue of dialogues’, enabling governments, experts and other stakeholders to exchange knowledge in a rapidly evolving technological environment.

Gill also highlighted the role of the Independent International Scientific Panel on AI, which is expected to present its findings at the Geneva meeting. He noted that global capacity to both use and govern AI remains uneven, underlining the need to address disparities between countries.

Officials emphasised that the Dialogue is intended to complement existing initiatives rather than centralise governance efforts. It will focus on issues such as safety and human rights, while discussions on military uses of AI fall outside its mandate.

A second Global Dialogue on AI Governance meeting is planned for May 2027 in New York, as part of ongoing efforts to develop a more coordinated and inclusive global approach to AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Saudi initiative attempts to link AI with sustainability goals

A new AI-enabled sustainability platform developed with support from the World Economic Forum aims to strengthen partnerships across sectors. The initiative is led by Saudi Arabia’s Ministry of Economy and Planning as part of its wider development agenda.

The platform, known as SUSTAIN, uses AI to match organisations with potential partners and opportunities. It is designed to connect government, businesses, academia, and civil society more efficiently and to help move sustainability projects from planning to implementation.
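
How SUSTAIN's matching actually works has not been detailed. A very simple stand-in for AI-assisted partner matching is to rank candidate organisations by the overlap between their stated sustainability interests, as in the sketch below; the organisations and keyword lists are invented for illustration and are not SUSTAIN data.

```python
# Illustrative partner matching via cosine similarity over keyword counts.
# Organisations and interests are invented examples, not SUSTAIN data.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

seeker = Counter(["solar", "water", "desalination", "finance"])
candidates = {
    "DesertGrid Energy": Counter(["solar", "storage", "grid"]),
    "BlueDrop Labs": Counter(["water", "desalination", "membranes"]),
    "AgriSense": Counter(["agriculture", "sensors", "irrigation"]),
}

# Rank potential partners by interest overlap with the seeking organisation
ranked = sorted(candidates.items(), key=lambda kv: cosine(seeker, kv[1]), reverse=True)
for name, interests in ranked:
    print(f"{name}: {cosine(seeker, interests):.2f}")
```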

Developers say the system could accelerate collaboration and support the delivery of higher-impact sustainability projects. Official estimates suggest it could help unlock partnerships worth up to $20 billion in Saudi Arabia and significantly more across the wider region.

The initiative forms part of broader efforts to advance long-term sustainability goals through more coordinated action and practical uses of AI. The project is being developed in Saudi Arabia and presented as a tool to strengthen cross-sector cooperation rather than a stand-alone sustainability programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan advances digital economy with AI business assistant

Kazakhstan has introduced an AI-powered assistant designed to simplify the process of starting a business, according to Zhaslan Madiyev. Developed in cooperation with the Ministry of Finance, the platform aims to provide data-driven guidance to early-stage entrepreneurs.

Built around a digital mapping system, the assistant evaluates factors such as nearby businesses, customer flow, and competition. Its recommendations aim to help users choose more viable locations and avoid oversaturated sectors, thereby reducing the risk of duplicating businesses in the same area.
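
The assistant's scoring model is not public; the toy example below shows the general shape of such a recommendation, combining nearby competitors, estimated foot traffic, and complementary businesses into a single viability score. The weights and figures are assumptions, not the platform's actual parameters.

```python
# Toy location-viability score combining the factors described above.
# Weights and input figures are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    competitors_nearby: int    # same-sector businesses within walking distance
    daily_foot_traffic: int    # estimated passers-by per day
    complementary_nearby: int  # businesses that attract similar customers

def viability_score(loc: Location) -> float:
    """Higher is better: reward traffic and complements, penalise saturation."""
    return (0.5 * loc.daily_foot_traffic / 1000
            + 0.3 * loc.complementary_nearby
            - 0.8 * loc.competitors_nearby)

locations = [
    Location("Market square", competitors_nearby=6, daily_foot_traffic=9000, complementary_nearby=4),
    Location("New residential block", competitors_nearby=1, daily_foot_traffic=3000, complementary_nearby=2),
]
# The less saturated location ranks higher despite lower foot traffic
for loc in sorted(locations, key=viability_score, reverse=True):
    print(f"{loc.name}: {viability_score(loc):.2f}")
```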

Officials say the tool could reduce startup operating costs by up to half while improving long-term business sustainability. Alongside it, a second AI assistant already provides continuous guidance on tax reporting and regulatory compliance, translating complex requirements into clearer, more practical steps for users. According to Kazakhstani reporting, the tax assistant has already processed more than 5,000 requests.

The development forms part of Kazakhstan’s wider digital transformation agenda, which aims to modernise public services and strengthen the country’s digital economy through practical AI deployment. The government says more than 50 AI-powered services are now being developed to support citizens and businesses.

Why does it matter?

Kazakhstan’s AI assistant points to a shift from basic digital services towards more active, real-time decision support for entrepreneurs. Data-driven recommendations can help reduce startup risks, limit market oversaturation, and support more efficient resource allocation across local economies.

Simplified tax and compliance guidance also targets one of the main barriers facing early-stage businesses: administrative complexity. Placed within Kazakhstan’s broader AI-first digital strategy, the initiative signals a wider move towards a more competitive and operationally AI-driven digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

England’s Ofqual advises awarding bodies on AI risks in qualifications

Ofqual, the regulator responsible for qualifications, exams, and assessments in England, has issued an advice note to help awarding organisations assess and manage the risks of AI-related malpractice.

The note explains how existing Conditions of Recognition and related Guidance apply where learners use AI tools in ways that could undermine assessment validity. It does not create new regulatory requirements, but is intended to help awarding organisations understand how current expectations apply in this context.

The risks, Ofqual notes, will vary depending on the qualification and assessment design. Relevant factors include who sets the task, how specific it is, the type of output being assessed, the length and timing of the assessment, the level of supervision, access to digital devices and internet connectivity, and differences in delivery across centres.

The advice also points to wider contextual factors, including the stakes attached to an assessment, its weighting within a qualification, and norms around technology use in particular subject areas. Awarding organisations are advised to consider whether changes introduced to reduce vulnerability to AI-related malpractice could, in turn, affect the construct being assessed or assessment validity more broadly.

The note states that awarding organisations must consider the reasonable steps needed to prevent malpractice and manage its effects, with measures proportionate to the identified risks. Possible responses include adapting assessment design, clarifying acceptable and unacceptable uses of AI, introducing supervision or controls on digital access, and requiring authenticity declarations from learners.

Ofqual also advises awarding organisations to review how they detect and investigate suspected malpractice. Statistical or technological tools may support that process, but should be treated as sources of evidence rather than sole determinants, given the risks of false positives and false negatives.
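
The concern about false positives can be made concrete with a simple base-rate calculation: when genuine AI misuse is relatively rare, even an apparently accurate detector will flag many honest learners. The rates below are assumptions chosen for illustration, not Ofqual figures.

```python
# Why detector output is evidence, not proof: a base-rate calculation.
# All rates below are assumed for illustration, not Ofqual figures.
prevalence = 0.05          # assumed share of submissions actually misusing AI
true_positive_rate = 0.90  # detector catches 90% of genuine misuse
false_positive_rate = 0.10 # detector wrongly flags 10% of honest work

flagged_and_guilty = prevalence * true_positive_rate
flagged_but_honest = (1 - prevalence) * false_positive_rate

# Probability a flagged submission actually involved misuse (Bayes' rule)
ppv = flagged_and_guilty / (flagged_and_guilty + flagged_but_honest)
print(f"Share of flagged submissions that are genuine misuse: {ppv:.0%}")
# ~32%: under these assumptions most flags point at honest learners.
```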

The advice also notes that some qualifications may legitimately require the use of AI as part of the construct being assessed. In such cases, awarding organisations should set clear parameters for how AI may be used and how that use should be demonstrated or referenced.

Ofqual says awarding organisations should keep their arrangements under review as AI tools and patterns of learner use continue to evolve, and should use any cases of malpractice or maladministration to identify weaknesses and prevent recurrence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia expands national AI strategy through Microsoft partnership

Malaysia is strengthening its national AI strategy through an expanded partnership with Microsoft, launching the Microsoft Elevate initiative to accelerate AI readiness across society.

The programme aligns with the country’s AI Nation 2030 ambitions and extends digital skills development beyond traditional sectors.

The initiative targets educators, public sector institutions, small businesses, and wider communities, aiming to embed practical AI capabilities into everyday economic and social activity.

Early deployment has already reached tens of thousands of learners, reflecting a shift from pilot programmes to large-scale national implementation.

Government and industry leaders in Malaysia emphasise that long-term competitiveness depends not only on technological investment but on widespread adoption and understanding of AI tools.

The programme therefore prioritises workforce activation, institutional capacity and sustainable integration across sectors.

Malaysia’s approach reflects a broader global trend where public–private partnerships are increasingly central to AI development, focusing on inclusive access, responsible use and real-world application rather than purely technological advancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!