UK backs self-learning AI push to advance scientific discovery

The UK’s Sovereign AI Fund has invested in Ineffable Intelligence, a British startup developing self-learning AI systems designed to generate new knowledge rather than rely solely on existing data. The investment is being made alongside the British Business Bank.

The company is building algorithms intended to improve through interaction with their environment, refining outcomes through iterative experimentation. The approach is aimed at enabling AI systems to identify new patterns and solutions for use in science, engineering, and healthcare.

Led by AI researcher David Silver, known for his work in reinforcement learning, the project reflects a broader shift towards more autonomous and exploratory forms of AI. Support from the Sovereign AI Fund is intended to help the company scale its development from within the UK and strengthen longer-term domestic innovation capacity.

The investment forms part of a wider strategy to strengthen sovereign AI capability in the UK, reduce reliance on external technologies, and reinforce domestic expertise. In that context, infrastructure support and talent development are being positioned as part of a broader effort to support the growth of next-generation AI systems and expand the UK’s role in frontier research.

Why does it matter?

Investment in self-learning AI reflects a broader shift in how advanced AI is being developed, from systems that mainly analyse existing information towards systems intended to generate new insights through exploration and interaction. If those approaches prove effective, they could accelerate discovery in fields where conventional modelling and data-driven methods have clear limits, though that remains an inference based on the company’s stated aims and the government’s framing of the investment.

More broadly, sovereign investment in advanced AI highlights a growing focus on technological independence and strategic control over critical digital capability. Strengthening domestic capacity could help ensure that future AI innovation is developed within national ecosystems, with implications for economic competitiveness and long-term research direction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Greece accelerates digital governance with AI enforcement and social media age restrictions

Greece is moving to tighten online child protection and expand AI-based public enforcement as part of a broader digital governance agenda, Digital Governance and Artificial Intelligence Minister Dimitris Papastergiou has said.

Under the plan, social media platforms would be required, from 2027, to block access for users under 15 using age verification systems rather than self-declared age data.

The policy includes tools such as Kids Wallet, built on privacy-preserving verification methods that share only age eligibility. Authorities say the aim is to address risks linked to digital addiction while strengthening protections for minors across online environments.
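The selective-disclosure idea behind tools like Kids Wallet can be illustrated with a minimal sketch: an issuer derives eligibility locally and attests only to a boolean flag, so the platform never sees a birthdate. The function names, fields, and HMAC-based signing below are illustrative assumptions, not the actual Kids Wallet protocol (a real deployment would use public-key signatures so that verifiers hold no signing secret).

```python
import hmac, hashlib, json

# Illustrative issuer key; in practice the issuer would sign with a private
# key and platforms would verify with the matching public key.
ISSUER_KEY = b"issuer-secret-key"

def issue_attestation(birth_year: int, current_year: int, min_age: int = 15) -> dict:
    """Issuer computes eligibility locally and signs ONLY the boolean result."""
    claim = json.dumps({"age_ok": current_year - birth_year >= min_age})
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_attestation(att: dict) -> bool:
    """Platform checks the signature and the flag; no birthdate is disclosed."""
    expected = hmac.new(ISSUER_KEY, att["claim"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["sig"]) and json.loads(att["claim"])["age_ok"]

att = issue_attestation(birth_year=2008, current_year=2026)
print(verify_attestation(att))  # True: eligible, yet only the flag was shared
```

The point of the design is that the attestation carries nothing but the signed eligibility flag, which is what distinguishes this approach from platforms collecting identity documents or birthdates directly.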

Alongside these measures, AI is already being deployed in road safety enforcement. Smart cameras are being used to issue digital fines through government platforms, with a nationwide rollout planned to expand monitoring and improve compliance.

These measures form part of a wider effort to digitise public administration, reduce inefficiencies, and strengthen accountability. By embedding technology more deeply into everyday governance, Greece is trying to reshape how citizens interact with the state while also addressing long-standing systemic problems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU advances GPAI framework with focus on forecasting systemic risks

At the third meeting of the Signatory Taskforce, the European Commission advanced discussions on how to strengthen oversight of advanced AI systems through the General-Purpose AI Code of Practice, with a particular focus on risk forecasting and harmful manipulation.

The latest GPAI taskforce meeting focused on improving how providers assess and anticipate systemic risks linked to high-impact AI models. A central proposal would require providers to estimate when future systems may exceed the highest systemic risk tier already reached by any of their existing models, using structured forecasting methods.

The Commission is also considering using aggregate forecasts across the industry to provide a broader view of technological trends, including compute capacity, algorithmic efficiency, and data availability. The aim is to improve visibility into how capabilities may evolve across the sector rather than only at the level of individual providers.

Attention was also directed towards harmful manipulation, which the Code treats as a recognised systemic risk. Discussions focused on how providers should develop realistic scenarios for testing and evaluating model behaviour, including in deployment settings such as chatbot interfaces, third-party applications, and agentic systems.

The initiative reflects a wider EU regulatory approach centred on transparency, accountability, and proactive governance in AI development. Rather than waiting for harms to materialise, the Code of Practice is being used to push providers to identify risks earlier and to adopt more structured safety planning for general-purpose AI models with systemic risk.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN prepares first Global Dialogue on AI governance ahead of Geneva meeting

The United Nations is advancing preparations for the first Global Dialogue on Artificial Intelligence Governance, set to take place in Geneva on 6–7 July 2026 alongside the AI for Good Summit.

Speaking at a UN Geneva press briefing, Egriselda López, Permanent Representative of El Salvador and co-chair of the Dialogue, said the initiative was established by UN member states as a universal forum to discuss AI governance. The process is intended to bring together governments and stakeholders with the aim of producing tangible outcomes.

López said the initial meeting will be structured around thematic clusters, including one focusing on AI opportunities and implications and another addressing the digital divide. She added that consultations with member states and stakeholders are ongoing to ensure an inclusive format for the discussions.

Rein Tammsaar, Permanent Representative of Estonia and co-chair of the Dialogue, said the forum aims to connect existing AI initiatives and best practices from around the world. He stressed the importance of interoperability and coordination, noting that the Dialogue seeks to create synergies rather than duplicate existing efforts.

According to Tammsaar, additional thematic areas will include interoperability, safety, and human rights. While human rights are expected to be a cross-cutting issue, stakeholders have also called for them to be addressed as a standalone theme.

Amandeep Gill, UN Secretary-General’s Envoy on Technology, described the initiative as part of a broader approach to ensuring that AI benefits humanity as a whole. He said the Dialogue is designed as a ‘dialogue of dialogues’, enabling governments, experts and other stakeholders to exchange knowledge in a rapidly evolving technological environment.

Gill also highlighted the role of the Independent International Scientific Panel on AI, which is expected to present its findings at the Geneva meeting. He noted that global capacity to both use and govern AI remains uneven, underlining the need to address disparities between countries.

Officials emphasised that the Dialogue is intended to complement existing initiatives rather than centralise governance efforts. It will focus on issues such as safety and human rights, while discussions on military uses of AI fall outside its mandate.

A second Global Dialogue on AI Governance meeting is planned for May 2027 in New York, as part of ongoing efforts to develop a more coordinated and inclusive global approach to AI governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

Saudi initiative attempts to link AI with sustainability goals

A new AI-enabled sustainability platform developed with support from the World Economic Forum aims to strengthen partnerships across sectors. The initiative is led by Saudi Arabia’s Ministry of Economy and Planning as part of its wider development agenda.

The platform, known as SUSTAIN, uses AI to match organisations with potential partners and opportunities. It is designed to connect government, businesses, academia, and civil society more efficiently and to help move sustainability projects from planning to implementation.

Developers say the system could accelerate collaboration and support the delivery of higher-impact sustainability projects. Official estimates suggest it could help unlock partnerships worth up to $20 billion in Saudi Arabia and significantly more across the wider region.

The initiative forms part of broader efforts to advance long-term sustainability goals through more coordinated action and practical uses of AI. The project is being developed in Saudi Arabia and presented as a tool to strengthen cross-sector cooperation rather than a stand-alone sustainability programme.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Kazakhstan advances digital economy with AI business assistant

Kazakhstan has introduced an AI-powered assistant designed to simplify the process of starting a business, according to Zhaslan Madiyev. Developed in cooperation with the Ministry of Finance, the platform aims to provide data-driven guidance to early-stage entrepreneurs.

Built around a digital mapping system, the assistant evaluates factors such as nearby businesses, customer flow, and competition. Its recommendations aim to help users choose more viable locations and avoid oversaturated sectors, thereby reducing the risk of duplicating businesses in the same area.

Officials say the tool could reduce startup operating costs by up to half while improving long-term business sustainability. Alongside it, a second AI assistant already provides continuous guidance on tax reporting and regulatory compliance, translating complex requirements into clearer, more practical steps for users. According to Kazakhstani reporting, the tax assistant has already processed more than 5,000 requests.

The development forms part of Kazakhstan’s wider digital transformation agenda, which aims to modernise public services and strengthen the country’s digital economy through practical AI deployment. The government says more than 50 AI-powered services are now being developed to support citizens and businesses.

Why does it matter?

Kazakhstan’s AI assistant points to a shift from basic digital services towards more active, real-time decision support for entrepreneurs. Data-driven recommendations can help reduce startup risks, limit market oversaturation, and support more efficient resource allocation across local economies.

Simplified tax and compliance guidance also targets one of the main barriers facing early-stage businesses: administrative complexity. Placed within Kazakhstan’s broader AI-first digital strategy, the initiative signals a wider move towards a more competitive and operationally AI-driven digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

England’s Ofqual advises awarding bodies on AI risks in qualifications

Ofqual, the regulator responsible for qualifications, exams, and assessments in England, has issued an advice note to help awarding organisations assess and manage the risks of AI-related malpractice.

The note explains how existing Conditions of Recognition and related Guidance apply where learners use AI tools in ways that could undermine assessment validity. It does not create new regulatory requirements, but is intended to help awarding organisations understand how current expectations apply in this context.

The risks, Ofqual notes, will vary depending on the qualification and assessment design. Relevant factors include who sets the task, how specific it is, the type of output being assessed, the length and timing of the assessment, the level of supervision, access to digital devices and internet connectivity, and differences in delivery across centres.

The advice also points to wider contextual factors, including the stakes attached to an assessment, its weighting within a qualification, and norms around technology use in particular subject areas. Awarding organisations are advised to consider whether changes introduced to reduce vulnerability to AI-related malpractice could, in turn, affect the construct being assessed or assessment validity more broadly.

The note states that awarding organisations must consider the reasonable steps needed to prevent malpractice and manage its effects, with measures proportionate to the identified risks. Possible responses include adapting assessment design, clarifying acceptable and unacceptable uses of AI, introducing supervision or controls on digital access, and requiring authenticity declarations from learners.

Ofqual also advises awarding organisations to review how they detect and investigate suspected malpractice. Statistical or technological tools may support that process, but should be treated as sources of evidence rather than sole determinants, given the risks of false positives and false negatives.

The advice also notes that some qualifications may legitimately require the use of AI as part of the construct being assessed. In such cases, awarding organisations should set clear parameters for how AI may be used and how that use should be demonstrated or referenced.

Ofqual says awarding organisations should keep their arrangements under review as AI tools and patterns of learner use continue to evolve, and should use any cases of malpractice or maladministration to identify weaknesses and prevent recurrence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia expands national AI strategy through Microsoft partnership

Malaysia is strengthening its national AI strategy through an expanded partnership with Microsoft, launching the Microsoft Elevate initiative to accelerate AI readiness across society.

The programme aligns with the country’s AI Nation 2030 ambitions and extends digital skills development beyond traditional sectors. It targets educators, public sector institutions, small businesses, and wider communities, aiming to embed practical AI capabilities into everyday economic and social activity.

Early deployment has already reached tens of thousands of learners, reflecting a shift from pilot programmes to large-scale national implementation.

Government and industry leaders in Malaysia emphasise that long-term competitiveness depends not only on technological investment but also on widespread adoption and understanding of AI tools. The programme therefore prioritises workforce activation, institutional capacity, and sustainable integration across sectors.

Malaysia’s approach reflects a broader global trend in which public–private partnerships are increasingly central to AI development, focusing on inclusive access, responsible use, and real-world application rather than purely technological advancement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNIDIR highlights the security implications of the shift from classical to quantum technologies

The United Nations Institute for Disarmament Research (UNIDIR) has outlined the evolution of digital technologies from early internet systems to emerging quantum capabilities, highlighting their growing impact on global systems and security.

In its analysis, UNIDIR traces the progression from dial-up connectivity and classical computing to advanced technologies such as AI and quantum computing, noting that innovation cycles are accelerating and becoming increasingly interconnected. The organisation states that the transition to quantum technologies represents a significant shift in how data is processed, stored and secured.

Unlike classical systems, quantum computing introduces new capabilities that could transform fields ranging from scientific research to communications.

However, UNIDIR warns that these advances also present risks, particularly in cybersecurity. Quantum technologies could challenge existing encryption methods and expose vulnerabilities in digital infrastructure, with implications for governments, businesses and critical systems.

The analysis also links emerging technologies to broader geopolitical dynamics, noting that competition over technological leadership is becoming a key factor in international security. As digital and physical systems converge, technological developments are increasingly shaping strategic stability.

Why does it matter?

UNIDIR emphasises the need for forward-looking governance, international cooperation and policy coordination to manage these challenges. It calls for stronger dialogue among states and stakeholders to ensure that technological progress supports global security rather than undermines it.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Azerbaijan explores regulatory framework for AI and intellectual property

Azerbaijani lawmakers and experts discussed the legal status of AI systems and their implications for intellectual property (IP) at a policy roundtable in Baku, Trend News Agency reported.

Speaking at the event marking World Intellectual Property Day, Member of the Azerbaijani Parliament Hijran Huseynova said that defining the legal nature of AI remains a key issue as the technology advances.

Participants highlighted differing views on whether AI should be treated as a legal entity or regarded solely as a tool. While some experts argued that AI lacks independent legal standing, others suggested that its ability to make autonomous decisions requires deeper legal examination.

The discussion also addressed whether outputs generated by AI systems can qualify for patent protection, an issue that remains under debate in many jurisdictions.

Huseynova noted that the growing use of AI is raising complex questions about ownership and rights, as traditional intellectual property frameworks are based on human creativity.

Why does it matter?

The debate comes as Azerbaijan advances its national AI strategy for 2025–2028, which includes efforts to establish legal and institutional frameworks for the development and regulation of AI technologies. Officials say these measures aim to address emerging legal challenges and support the responsible adoption of AI as part of the country’s broader digital transformation agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot