AI needs digital public infrastructure to work for citizens, World Economic Forum says

The World Economic Forum says AI will only improve public services at scale if governments build on strong digital public infrastructure rather than fragmented systems and isolated pilot projects.

In a new analysis, the WEF points to digital identity, payments, and data exchange as the core layers that already support service delivery in many countries.

It argues that AI can make those systems more responsive by speeding up tasks such as identity verification, record retrieval, and payment processing.

But the Forum also warns that combining AI with digital public infrastructure will not work without clear safeguards. Interoperability, trust, and consent-based data use are presented as essential to making AI systems effective across public institutions while protecting users.

The wider message is that AI in government is no longer just a question of adoption. For countries hoping to scale public-sector AI, the bigger challenge is whether the underlying digital infrastructure is strong enough to support it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Study suggests AI reliance may weaken short-term problem-solving

A recent study by researchers from Carnegie Mellon University, the University of Oxford, MIT, and UCLA suggests that reliance on AI for basic tasks may temporarily weaken cognitive performance.

Participants who used AI tools to complete simple maths and reading exercises initially performed better than those working without assistance. However, once the technology was removed, their accuracy declined, and they were less likely to persist with the tasks.

The findings suggest that even brief exposure to AI support can reduce a person’s willingness to engage in sustained problem-solving, which remains essential to learning and skill development.

Researchers found that participants became more likely to abandon tasks and less able to complete them independently after relying on AI assistance.

The results add to wider concerns about how AI may be reshaping learning habits and intellectual development. Related research from MIT has described a phenomenon called ‘cognitive debt’, in which heavy reliance on AI tools may weaken retention, understanding, and independent reasoning over time.

Taken together, the studies point to a growing tension in AI design. While such tools can improve speed and convenience, they may also reduce the mental effort needed to build lasting cognitive skills. That suggests AI systems may need to be designed to support learning without replacing independent thought altogether.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK Defence Innovation opens Biosecurity Frontiers competition with up to £2 million

UK Defence Innovation has opened the Biosecurity Frontiers themed competition, run by the Cabinet Office on behalf of the UK government, and is seeking innovative proposals to help deliver the ambitions of the 2023 UK Biological Security Strategy and the 2025 National Security Strategy.

The competition document states that proposals may be used by multiple government departments, sectors, and frontline users, including the police, the military, and NHS/public health bodies.

Up to £2 million excluding VAT is available, with the government expecting to fund five to seven proposals across three challenge areas: biodetection and biosurveillance; AI and diagnostics, therapeutics, and vaccines; and non-pharmaceutical protective systems.

Individual awards are expected to be in the region of £100,000 to £500,000, though the document states proposals at higher or lower values may also be funded.

The submission deadline is midday (12:00 BST) on 10 June 2026. Projects are expected to start in September 2026 and run for no longer than 12 months. Proposals must progress through at least one Technology Readiness Level (TRL). For Challenges 1 and 3, projects must reach TRL 4-6, while Challenge 2 projects may reach TRL 7.

For biodetection and biosurveillance, the competition seeks capabilities to detect and monitor traditional and novel biological threats, including portable surveillance technologies, computational tools for analysing complex datasets, and permanently installed air surveillance systems in high-footfall locations.

For AI and diagnostics, therapeutics, and vaccines, the document refers to AI-based support for identifying and developing new diagnostic, therapeutic, and vaccine candidates, including structure-based discovery and development tools.

For non-pharmaceutical protective systems, the competition covers lower-cost personal protective equipment, respiratory protective equipment with improved fit, decontamination and disinfection approaches, biodegradable PPE materials, and solutions that remove humans from operations in contaminated areas.

The competition document says it is funded by the Integrated Security Fund, which supports priority national security themes in the UK 2025 National Security Strategy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Software sector faces AI disruption

An analysis from Goodbody Stockbrokers UC suggests that generative AI is reshaping the technology sector, raising questions about the long-term outlook for software as a service models. The report highlights shifting investor sentiment towards software companies.

According to the firm, increased AI investment by major technology firms is driving demand for infrastructure and data processing, while also changing how software is developed and used. This shift is influencing spending patterns across the sector.

The report notes that software services have recently underperformed, particularly in the UK and Europe, due to weaker demand, pricing pressure and longer sales cycles. These trends reflect broader uncertainty as AI adoption accelerates.

The firm indicates that AI is creating both disruption and opportunity, with the sector adapting to new technology layers and investment priorities as the industry evolves globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI practice note issued by Federal Court of Australia

The Federal Court of Australia published its Generative Artificial Intelligence Practice Note, setting out the Court’s expectations and guidance for the use of generative AI in proceedings before it.

According to the Notice to the Profession, the Practice Note explains what generative AI is, recognises its potential benefits for efficiency, cost reduction, and access to justice, and states that its use must remain consistent with existing legal and professional obligations.

The Practice Note identifies areas requiring particular caution, including the preparation of pleadings, submissions, evidence, and dealings with confidential or protected information. It also explains when disclosure of the use of generative AI may be required.

The Federal Court of Australia says it considered the implications of generative AI for court proceedings throughout 2024 and 2025. That work drew on a public statement by the Chief Justice on 28 March 2025, and a Notice to the Profession issued on 29 April 2025.

The Notice says the Court sought to balance the administration of justice with the responsible adoption of emerging technologies, while maintaining parties’ accountability for material filed. It also says the Court plans to convene a symposium in the coming months on the challenges and benefits of generative AI in Federal Court proceedings.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Researchers flag risks in EU AI changes

A research paper by Hannah van Kolfschooten, Barry Solaiman and Daria Onitiu examines how recent European Union policy proposals could affect safeguards for medical AI under the EU AI Act. The study focuses on changes linked to broader simplification initiatives.

According to the authors, the reforms could maintain the classification of AI-enabled medical devices as high risk while removing key obligations tied to that classification. These include requirements on data governance, risk management and human oversight.

The paper argues that this shift would separate risk classification from the safeguards that give it practical meaning. It suggests that reliance may move back towards existing medical device laws without equivalent AI-specific protections.

The authors warn that such changes could weaken oversight, increase legal uncertainty and affect patient safety where AI systems influence clinical decisions in the European Union.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ghana expands AI skills in partnership with the UN and the Government of Japan

The Ghanaian Ministry of Communication, Digital Technology and Innovations has launched a public-sector AI capacity development programme in collaboration with the Government of Japan and the United Nations Development Programme. The initiative aims to strengthen digital skills across government institutions.

According to the Ministry, the programme is designed to equip public officials with knowledge of AI and its applications in governance. It focuses on improving decision-making and service delivery, drawing on experience from the UN and Japan.

Why does it matter?

The initiative includes training, practical sessions and policy discussions to support responsible adoption of AI technologies. It also aims to help institutions identify relevant use cases and implementation strategies.

The Ministry presents the interdisciplinary programme as part of broader efforts to advance digital transformation and strengthen institutional capacity in Ghana.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands cyber defence programme with trusted access and industry partnerships

The US AI research and deployment company OpenAI has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation and accountability.

The initiative also includes financial support through a cybersecurity grant programme, allocating resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google expands AI partnerships to support digital transformation in Latin America

Google, in partnership with the Inter-American Development Bank, has announced a series of initiatives aimed at supporting AI adoption across Latin America.

These measures focus on public sector capacity, digital infrastructure and policy development as governments seek to integrate AI into economic and administrative systems.

The initiatives include the release of a policy-oriented report outlining how AI could contribute to regional economic growth, alongside guidance on workforce development, infrastructure expansion and regulatory frameworks.

The approach emphasises responsible adoption, balancing innovation with risk management.

A further component involves the creation of an AI training academy for public officials, designed to improve institutional capacity to manage and deploy AI technologies.

In parallel, funding support has been allocated to expand digital public infrastructure (DPI), including cross-border digital identity systems intended to improve service delivery and administrative efficiency.

Google's programme reflects broader trends in international cooperation on digital transformation, where public and private actors collaborate to scale AI adoption while addressing structural gaps in skills, infrastructure and governance across emerging economies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan introduces mandatory audits for high-risk AI systems

Kazakhstan has introduced new rules requiring audits of high-risk AI systems before they are included in official government lists. The framework sets out procedures for identifying and publishing trusted AI systems across sectors.

Sectoral authorities will compile and update lists of high-risk AI systems based on applications submitted by system owners. These lists will be published on official government websites to promote transparency and trust.

Applicants must submit formal requests, documents confirming intellectual property rights and a positive audit conclusion. Authorities will review submissions within ten working days, assessing system purpose, functionality and required documentation.

Systems that meet all criteria will be added to the list and published within five working days. If inconsistencies are identified, applicants will be notified and may resubmit documents for review within a shortened timeframe.

Updated versions of the lists will be released as revisions occur, ensuring ongoing oversight of AI systems. The measures aim to support structured monitoring and responsible use of AI technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!