Ghana expands AI skills in partnership with the UN and the Government of Japan

The Ghanaian Ministry of Communication, Digital Technology and Innovations has launched a public-sector AI capacity development programme in collaboration with the Government of Japan and the United Nations Development Programme. The initiative aims to strengthen digital skills across government institutions.

According to the Ministry, the programme is designed to equip public officials with knowledge of AI and its applications in governance. It focuses on improving decision-making and service delivery, drawing on experience from the UN and Japan.

Why does it matter?

The initiative includes training, practical sessions and policy discussions to support responsible adoption of AI technologies. It also aims to help institutions identify relevant use cases and implementation strategies.

The Ministry presents the interdisciplinary programme as part of broader efforts to advance digital transformation and strengthen institutional capacity in Ghana.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI expands cyber defence programme with trusted access and industry partnerships

The US AI research and deployment company, OpenAI, has introduced an expanded cyber defence initiative aimed at strengthening collaboration across the cybersecurity ecosystem.

The programme, known as Trusted Access for Cyber, is designed to provide advanced AI capabilities to vetted organisations while maintaining safeguards based on trust, validation and accountability.

The initiative also includes financial support through a cybersecurity grant programme, which allocates resources to organisations working on software supply chain security and vulnerability research.

By enabling broader access to advanced tools, the programme seeks to support developers and smaller teams that may lack continuous security capacity.

A range of industry participants, including Cisco, Cloudflare and NVIDIA, are involved in testing and applying these capabilities within complex digital environments.

Public sector collaboration is also reflected through partnerships with institutions focused on evaluating AI safety and security standards.

The initiative reflects a broader approach to cybersecurity as a distributed responsibility, where public and private actors contribute to resilience.

It also highlights the increasing role of AI systems in identifying vulnerabilities and supporting defensive research across critical infrastructure and digital services.

Google expands AI partnerships to support digital transformation in Latin America

Google, in partnership with the Inter-American Development Bank, has announced a series of initiatives aimed at supporting AI adoption across Latin America.

These measures focus on public sector capacity, digital infrastructure and policy development as governments seek to integrate AI into economic and administrative systems.

The initiatives include the release of a policy-oriented report outlining how AI could contribute to regional economic growth, alongside guidance on workforce development, infrastructure expansion and regulatory frameworks.

The approach emphasises responsible adoption, with attention to balancing innovation against risk management.

A further component involves the creation of an AI training academy for public officials, designed to improve institutional capacity to manage and deploy AI technologies.

In parallel, funding support has been allocated to expand digital public infrastructure (DPI), including cross-border digital identity systems intended to improve service delivery and administrative efficiency.

Google's programme reflects broader trends in international cooperation on digital transformation, in which public and private actors collaborate to scale AI adoption while addressing structural gaps in skills, infrastructure and governance across emerging economies.

Kazakhstan introduces mandatory audits for high-risk AI systems

Kazakhstan has introduced new rules requiring audits of high-risk AI systems before they are included in official government lists. The framework sets out procedures for identifying and publishing trusted AI systems across sectors.

Sectoral authorities will compile and update lists of high-risk AI systems based on applications submitted by system owners. These lists will be published on official government websites to promote transparency and trust.

Applicants must submit formal requests, documents confirming intellectual property rights and a positive audit conclusion. Authorities will review submissions within ten working days, assessing system purpose, functionality and required documentation.

Systems that meet all criteria will be added to the list and published within five working days. If inconsistencies are identified, applicants will be notified and may resubmit documents for review within a shortened timeframe.

Updated versions of the lists will be released as revisions occur, ensuring ongoing oversight of AI systems. The measures aim to support structured monitoring and responsible use of AI technologies.

UK’s NCSC chief warns frontier AI will speed up cybersecurity threats

Dr Richard Horne, chief executive of the United Kingdom’s National Cyber Security Centre (NCSC), said advances in frontier AI models will make it easier, faster, and cheaper to find and exploit software vulnerabilities, increasing pressure on organisations to strengthen their security baseline.

In a piece published on the NCSC website, Horne said the longer-term effect of AI-assisted vulnerability discovery could be positive if technology suppliers use such tools to identify and fix weaknesses across the lifecycle of products and services. He also warned that the path to that outcome brings immediate risks and requires urgent action.

Horne said organisations that have not taken appropriate steps to safeguard their systems will increasingly be exposed as AI lowers the time, skill, and resources needed to identify exploitable weaknesses. He added that pressure to apply security patches quickly will become more acute as these capabilities develop.

Horne added that organisations should follow established NCSC guidance, including reducing unnecessary exposure to attack, applying security updates rapidly, and monitoring for and responding quickly to malicious activity.

Horne also said these measures must be championed by leaders and boards, describing cyber risk as business risk. He added that government-backed schemes such as Cyber Essentials can help organisations and their customers gain confidence that core security practices are being followed.

EU proposes data sharing measures for Google under Digital Markets Act

The European Commission has issued preliminary findings proposing measures for Google under the Digital Markets Act, focusing on access to search engine data.

These measures aim to ensure that third-party services can compete more effectively in digital markets characterised by high concentration.

The proposal would require Google to provide access to key categories of search data, including ranking, query, click and view data, on fair, reasonable and non-discriminatory terms.

Eligible recipients may include competing search engines as well as AI-based services with search functionalities.

Additional provisions address how data should be shared, including frequency, technical access conditions and pricing parameters. The framework also includes safeguards for anonymisation, reflecting the need to balance competition objectives with data protection requirements.

The Commission has opened a public consultation to gather stakeholder input on the proposed measures.

The case illustrates ongoing efforts to operationalise the Digital Markets Act by addressing structural imbalances in access to data within the platform economy.

Indonesia calls for targeted strategy to close AI development gap

Indonesia is seeking to narrow gaps in AI development through targeted strategies in knowledge, investment and infrastructure. The approach was outlined by Deputy Minister Stella Christie during a policy discussion.

Christie said AI capabilities remain concentrated in developed countries, particularly in research output and patent production. She noted that understanding these gaps is essential to shaping effective national strategies.

She emphasised the need to build specialised capabilities aligned with national strengths, citing areas such as seaweed research, and said investment decisions should focus on areas that match domestic needs and priorities.

On infrastructure, Christie highlighted the importance of data management and local capacity as key components of AI systems. She added that data availability could support development if managed securely and effectively.

Infrastructure expansion, including data centres, must consider a stable and sustainable energy supply. She said coordinated efforts across education, investment and infrastructure are required to strengthen competitiveness.

Microsoft highlights healthcare AI use in emergency response, diagnosis, and hospital operations

Microsoft has published a feature presenting seven examples of how AI is being used in healthcare and well-being settings in different countries.

The piece frames the examples around pressures on health systems facing tight budgets, rising demand, and growing administrative workloads, and says AI tools are being deployed to reduce documentation burdens, improve information flows, and support working conditions for clinicians and pharmacists.

According to the feature, one example comes from the Munich Fire Department, where an AI operator is being tested to handle non-emergency patient transport calls while handing cases to human staff when needed. Microsoft says the system is intended to free dispatchers to focus on life-threatening emergencies and is currently in beta testing at LMU Klinikum in Munich.

The article also points to the use of ambient clinical documentation technology in the United Kingdom. At Manchester University NHS Foundation Trust, Microsoft says clinicians are using Dragon Copilot to turn clinical conversations into structured medical notes, aiming to reduce paperwork and increase time with patients. The feature cites hospital estimates that the time savings could allow treatment of up to a quarter of a million additional patients each year.

In Kenya, Microsoft highlights an AI-powered app called Zendawa used by independent pharmacies to track inventory, reduce waste, and support business planning. The feature says the app helps forecast stock needs and uses sales data to support loan applications.

Another example comes from Spain, where Microsoft says DxGPT, a diagnostic support tool built on Microsoft Azure, is being used to help identify rare diseases more quickly. The feature links the tool to Foundation 29 and states that it is already integrated into Madrid’s public health system and is expanding to two additional Spanish regions.

Microsoft also points to clinician burnout and documentation pressures in the United States. At Intermountain Health, the article says Dragon Copilot has been integrated into electronic health records and rolled out to more than 2,500 clinicians, with the organisation reporting faster documentation, lower cognitive load, and improved clinician satisfaction and patient engagement.

Cybersecurity recovery is another theme in the feature. Microsoft says Osaka General Medical Center in Japan adopted Microsoft security and cloud tools after a 2022 ransomware attack that disrupted access to servers, patient data, and internal communications. The article presents the case as a broader hospital security reset rather than only a clinical AI deployment example.

A final example focuses on Ribera, a private hospital operator active in Spain, Portugal, and Central Europe. Microsoft says Ribera uses a mix of AI and digital tools to monitor chronic patients, predict risks such as pressure ulcers and falls, and test generative AI for discharge letters in routine procedures, with the stated aim of redirecting clinician time back to patient care.

Romania initiates consortium selection for Black Sea AI gigafactory project

Romania's Ministry of Energy and Ministry of Finance have launched an expression of interest process to select a consortium leader for the Black Sea AI Gigafactory project. The announcement marks a new step in developing large-scale AI infrastructure.

According to the Ministry of Energy, the selected leader will be responsible for structuring, developing and implementing the project. The process aims to identify partners with strong financial capacity and relevant technical expertise.

The project is described as a strategic initiative to build an advanced AI computing infrastructure, supporting digital and industrial capabilities while strengthening integration within the European AI ecosystem.

The project is expected to drive the development of digital infrastructure such as data centres, cloud facilities, semiconductor manufacturing campuses with high-availability power utility systems, large-scale telecom facilities, and other comparable power- and cooling-intensive facilities integrating critical digital systems.

Authorities state that the initiative is intended to position the Black Sea region as a key location for next-generation AI infrastructure and to expand technological capacity in Romania.

Minnesota weighs AI free speech limits

The National Constitution Center reports that Minnesota lawmakers are considering a constitutional amendment to exclude AI systems from free speech protections. The proposal would clarify that such rights apply to people, not machines.

According to the National Constitution Center, the amendment would add language stating that AI does not have the right to speak, write or publish sentiments freely. Human free speech protections would remain unchanged under the proposal.

The article highlights ongoing debate around the measure, with supporters arguing it distinguishes human rights from technological tools, while critics warn it could affect how AI-generated content is treated under the law.

The National Constitution Center notes that the proposal reflects broader tensions over how legal systems should address AI and free expression as the issue develops in Minnesota.
