Governments urged to build learning systems for the AI era

Governments are facing increased pressure to govern AI effectively, prompting calls for continuous institutional learning. Researchers argue that the public sector must develop adaptive capacity to keep pace with rapid technological change.

Past digital reforms often stalled because administrations focused on minor upgrades rather than redesigning core services. Slow adaptation now carries greater risks, as AI transforms decisions, systems and expectations across government.

Experts emphasise the need for a learning infrastructure that facilitates a reliable flow of knowledge across institutions. Singapore and the UAE have already invested heavily in large-scale capability-building programmes.

Public servants require stronger technical and institutional literacy, supported through ongoing training and open collaboration with research communities. Advocates say that states that embed learning deeply will govern AI more effectively and maintain public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan plans large scale investment to boost AI capability

Japan plans to increase generative AI usage to 80 percent as officials push national adoption. Current uptake remains far lower than in the United States and China.

The government intends to raise usage to 50 percent as an early milestone and to stimulate private investment. A trillion-yen target underlines efforts to expand infrastructure and accelerate deployment across Japanese sectors.

Guidelines stress risk reduction and stronger oversight through an enhanced AI Safety Institute. Critics argue that measures lack detail and fail to address misuse with sufficient clarity.

Authorities expect broader AI use in health care, finance and agriculture through coordinated public-private work. Annual updates will monitor progress as Japan seeks to enhance its competitiveness and strategic capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Mistral AI unveils new open models with broader capabilities

Yesterday, Mistral AI introduced Mistral 3 as a new generation of open multimodal and multilingual models that aim to support developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.

The organisation highlighted that Large 3 was trained with extensive resources on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.
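
As a rough illustration of that kind of deployment path, the sketch below serves an open-weight checkpoint with vLLM in a lower-precision format. The model identifier and quantization choice are placeholders for illustration, not details confirmed in the announcement.

```python
# Illustrative sketch: serving an open-weight model with vLLM in a
# lower-precision format. The model name and quantization option are
# placeholders, not details confirmed by the release.
from vllm import LLM, SamplingParams

llm = LLM(
    model="org/open-weight-model",   # hypothetical Hugging Face identifier
    quantization="fp8",              # compressed execution, if the checkpoint supports it
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarise the benefits of open-weight models."], params)
print(outputs[0].outputs[0].text)
```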

Attention was also given to the Ministral 3 series, which includes models designed for local or edge settings in three sizes. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to strike a balance between accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps detect congenital heart defects in unborn babies

Mount Sinai doctors in New York City are the first to utilise AI to enhance prenatal ultrasounds and detect congenital heart defects more effectively. BrightHeart’s FDA-approved technology is now used at Mount Sinai-affiliated Carnegie Imaging for Women across three Manhattan locations.

Congenital heart defects affect about 1 in 500 newborns and often require urgent intervention.

A study in Obstetrics & Gynecology found AI-assisted ultrasounds detected major defects with over 97 percent accuracy, cut reading time by 18 percent, and raised confidence scores by 19 percent.

In the study, obstetricians and maternal-fetal medicine specialists reviewed 200 fetal ultrasounds from 11 centres in two countries, both with and without AI assistance.

AI improved detection, confidence, and efficiency, especially in centres without specialised fetal heart experts.

Experts say AI can level the field of prenatal diagnosis and optimise patient care. Dr Lam-Rachlin and Dr Rebarber emphasised AI’s potential to standardise detection and urged further research for routine clinical use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

NVIDIA platform lifts leading MoE models

Frontier developers are adopting a mixture-of-experts architecture as the foundation for their most advanced open-source models. Designers now rely on specialised experts that activate only when needed instead of forcing every parameter to work on each token.
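
To make the routing idea concrete, here is a minimal, illustrative PyTorch sketch of a mixture-of-experts layer with top-k routing. It is not any vendor's implementation, and the layer sizes and expert counts are arbitrary.

```python
# Minimal illustrative mixture-of-experts layer with top-k routing (PyTorch).
# Sizes and expert counts are arbitrary; this is not a production design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)   # router scores every expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                             # x: (tokens, d_model)
        scores = self.gate(x)                         # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # mix only the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                # each token visits just top_k experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(TinyMoE()(tokens).shape)  # torch.Size([16, 64]); only 2 of 8 experts run per token
```

In production systems those experts are sharded across many GPUs, which is why the interconnect bandwidth discussed below matters so much.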

Major models, such as DeepSeek-R1, Kimi K2 Thinking, and Mistral Large 3, rise to the top of the Artificial Analysis leaderboard by utilising this pattern to combine greater capability with lower computational strain.

Scaling the architecture has always been the main obstacle. Expert parallelism requires high-speed memory access and near-instant communication between multiple GPUs, yet traditional systems often create bottlenecks that slow down training and inference.

NVIDIA has shifted toward extreme hardware and software codesign to remove those constraints.

The GB200 NVL72 rack-scale system links seventy-two Blackwell GPUs via fast shared memory and a dense NVLink fabric, enabling experts to exchange information rapidly, rather than relying on slower network layers.

Model developers report significant improvements once they deploy MoE designs on NVL72. Performance leaps of up to ten times have been recorded for frontier systems, improving latency, energy efficiency and the overall cost of running large-scale inference.

Cloud providers integrate the platform to support customers in building agentic workflows and multimodal systems that route tasks between specialised components, rather than duplicating full models for each purpose.

Industry adoption signals a shift toward a future where efficiency and intelligence evolve together. MoE has become the preferred architecture for state-of-the-art reasoning, and NVL72 offers a practical route for enterprises seeking predictable performance gains.

NVIDIA positions its roadmap, including the forthcoming Vera Rubin architecture, as the next step in expanding the scale and capability of frontier AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS launches frontier agents to boost software development

AWS has launched frontier agents, autonomous AI tools that extend software development teams. The first three – Kiro, AWS Security Agent, and AWS DevOps Agent – enhance development, security, and operations while working independently for extended periods.

Kiro functions as a virtual developer, maintaining context, learning from feedback, and managing tasks across multiple repositories. AWS Security Agent automates code reviews and penetration testing while enforcing organisational security standards.

AWS DevOps Agent identifies root causes of incidents, reduces alerts, and provides proactive recommendations to improve system reliability.

These agents operate autonomously, scale across multiple tasks, and free teams from repetitive work, allowing focus on high-priority projects. Early users, including SmugMug and Commonwealth Bank of Australia, report quicker development, stronger security, and more efficient operations.

By integrating frontier agents into the software development lifecycle, AWS is shifting AI from task assistance to completing complex projects independently, marking a significant step forward in what AI can achieve for development teams.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Honolulu in the US pushes for transparency in government AI use

Growing pressure from residents of Honolulu in the US is prompting city leaders to consider stricter safeguards around the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.

Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.

Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK ministers advance energy plans for AI expansion

The final AI Energy Council meeting of 2025 took place in London, led by AI Minister Kanishka Narayan alongside energy ministers Lord Vallance and Michael Shanks.

Regulators and industry representatives reviewed how the UK can expedite grid connections and support the necessary infrastructure for expanding AI activity nationwide.

Council members examined progress on government measures intended to accelerate connections for AI data centres. Plans include support for AI Growth Zones, with discounted electricity available for sites able to draw on excess capacity, which is expected to reduce pressure on the wider network.

Ministers underlined AI’s role in national economic ambitions, noting recent announcements of new AI Growth Zones in North East England and in North and South Wales.

They also discussed how forthcoming reforms are expected to help deliver AI-related infrastructure by easing access to grid capacity.

The meeting concluded with a focus on long-term energy needs for AI development. Participants explored ways to unlock additional capacity and considered innovative options for power generation, including self-build solutions.

The council will reconvene in early 2026 to continue work on sustainable approaches for future AI infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI growth threatens millions of jobs across Asia

UN economists warned millions of jobs in Asia could be at risk as AI widens the gap between digitally advanced nations and those lacking basic access and skills. The report compared the AI revolution to 19th-century industrialisation, which created a wealthy few and left many behind.

Women and young adults face the most significant threat from AI in the workplace, while the benefits in health, education, and income are unevenly distributed.

Countries such as China, Singapore, and South Korea have invested heavily in AI and reaped significant benefits. Still, entry-level workers in many South Asian nations remain highly vulnerable to automation and technological advancements.

The UN Development Programme urged governments to consider ethical deployment and inclusivity when implementing AI. Countries such as Cambodia, Papua New Guinea, and Vietnam are focusing on developing simple digital tools to help health workers and farmers who lack reliable internet access.

AI could generate nearly $1 trillion in economic gains across Asia over the next decade, boosting regional GDP growth by about two percentage points. Yet income disparities mean the benefits remain concentrated in wealthier countries, leaving poorer nations at a disadvantage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Quantum money meets Bitcoin: Building unforgeable digital currency

Quantum money might sound like science fiction, yet it is rapidly emerging as one of the most compelling frontiers in modern digital finance. Initially a theoretical concept, it was far ahead of the technology of its time, making practical implementation impossible. Today, thanks to breakthroughs in quantum computing and quantum communication, scientists are reviving the idea, investigating how the principles of quantum physics could finally enable unforgeable quantum digital money. 

Comparisons between blockchain and quantum money are frequent and, on the surface, appear logical, yet can these two visions of new-generation cash genuinely be measured by the same yardstick? 

Origins of quantum money 

Quantum money was first proposed by physicist Stephen Wiesner in the late 1960s. Wiesner envisioned a system in which each banknote would carry quantum particles encoded in specific states, known only to the issuing bank, making the notes inherently secure. 

Due to the peculiarities of quantum mechanics, these quantum states could not be copied, offering a level of security fundamentally impossible with classical systems. At the time, however, quantum technologies were purely theoretical, and devices capable of creating, storing, and accurately measuring delicate quantum states simply did not exist. 

For decades, Wiesner’s idea remained a fascinating thought experiment. Today, the rise of functional quantum computers, advanced photonic systems, and reliable quantum communication networks is breathing new life into the concept, allowing researchers to explore practical applications of quantum money in ways that were once unimaginable.

A new battle for the digital throne is emerging as quantum money shifts from theory to possibility, challenging whether Bitcoin’s decentralised strength can hold its ground in a future shaped by quantum technology.

The no-cloning theorem: The physics that makes quantum money impossible to forge

At the heart of quantum money lies the no-cloning theorem, a cornerstone of quantum mechanics. The principle establishes that it is physically impossible to create an exact copy of an unknown quantum state. Any attempt to measure a quantum state inevitably alters it, meaning that copying or scanning a quantum banknote destroys the very information that ensures its authenticity. 
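
For readers who want to see why copying fails, the standard one-step argument relies only on the linearity of quantum operations; a compact sketch in conventional Dirac notation is given below.

```latex
% No-cloning theorem, standard linearity argument (sketch).
% Suppose a unitary U copied every state: U(|psi>|0>) = |psi>|psi>. Then
U(\lvert 0\rangle\lvert 0\rangle) = \lvert 0\rangle\lvert 0\rangle,
\qquad
U(\lvert 1\rangle\lvert 0\rangle) = \lvert 1\rangle\lvert 1\rangle.
% Linearity fixes the action on the superposition |+> = (|0> + |1>)/sqrt(2):
U(\lvert +\rangle\lvert 0\rangle)
  = \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr),
% while perfect cloning would instead require
U(\lvert +\rangle\lvert 0\rangle)
  = \lvert +\rangle\lvert +\rangle
  = \tfrac{1}{2}\bigl(\lvert 00\rangle + \lvert 01\rangle + \lvert 10\rangle + \lvert 11\rangle\bigr).
% The two expressions differ, so no universal copying operation U can exist.
```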

The unique property makes quantum money exceptionally secure: unlike blockchain, which relies on cryptographic algorithms and distributed consensus, quantum money derives its protection directly from the laws of physics. In theory, a quantum banknote cannot be counterfeited, even by an attacker with unlimited computing resources, which is why quantum money is considered one of the most promising approaches to unforgeable digital currency.

How quantum money works in theory

Quantum money schemes are typically divided into two main types: private and public. 

In private quantum money systems, a central authority, such as a bank, creates quantum banknotes and remains the only entity capable of verifying them. Each note carries a classical serial number alongside a set of quantum states known solely to the issuer. The primary advantage of this approach is its absolute immunity to counterfeiting, as no one outside the issuing institution can replicate the banknote. However, such systems are fully centralised and rely entirely on the security and infrastructure of the issuing bank, which inherently limits scalability and accessibility.
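
The logic of such a private scheme can be illustrated with a toy classical simulation of Wiesner-style verification. The snippet below only mimics the measurement statistics of real qubits, and every name and size in it is arbitrary.

```python
# Toy classical simulation of Wiesner-style private quantum money.
# Real schemes use physical qubits; this only mimics measurement statistics.
import random

N_QUBITS = 20

def issue_note():
    """Bank issues a note: a serial number plus a secret (basis, bit) per qubit."""
    secret = [(random.choice("ZX"), random.randint(0, 1)) for _ in range(N_QUBITS)]
    return {"serial": random.getrandbits(32), "secret": secret}

def measure(prepared_basis, prepared_bit, measurement_basis):
    """Matching bases reproduce the encoded bit; mismatched bases give a random outcome."""
    return prepared_bit if prepared_basis == measurement_basis else random.randint(0, 1)

def bank_verifies(note, presented_qubits):
    """The bank measures each qubit in its secretly recorded basis and checks the bit."""
    return all(
        measure(p_basis, p_bit, s_basis) == s_bit
        for (p_basis, p_bit), (s_basis, s_bit) in zip(presented_qubits, note["secret"])
    )

note = issue_note()

# Honest holder: the presented qubits match the bank's record, so verification passes.
print("genuine note accepted:", bank_verifies(note, note["secret"]))

# A forger who never held the note must guess every state; each qubit check then
# passes with probability 1/2, so a 20-qubit forgery survives roughly 1 in a million runs.
forgery = [(random.choice("ZX"), random.randint(0, 1)) for _ in range(N_QUBITS)]
print("forged note accepted:", bank_verifies(note, forgery))
```

The simulation also makes the centralisation visible: only the bank, which holds the secret bases, is able to run the check at all.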

Public quantum money, by contrast, pursues a more ambitious goal: allowing anyone to verify a quantum banknote without consulting a central authority. Developing this level of decentralisation has proven exceptionally difficult. Numerous proposed schemes have been broken by researchers who have managed to extract information without destroying the quantum states. Despite these challenges, public quantum money remains a major focus of quantum cryptography research, with scientists actively pursuing secure and scalable methods for open verification. 

Beyond theoretical appeal, quantum money faces substantial practical hurdles. Quantum states are inherently fragile and susceptible to decoherence, meaning they can lose their information when interacting with the surrounding environment. 

Maintaining stable quantum states demands highly specialised and costly equipment, including photonic processors, quantum memory modules, and sophisticated quantum error-correction systems. Any error or loss could render a quantum banknote completely worthless, and no reliable method currently exists to store these states over long periods. In essence, the concept of quantum money is groundbreaking, yet real-world implementation requires technological advances that are not yet mature enough for mass adoption. 

Bitcoin solves the duplication problem differently

While quantum money relies on the laws of physics to prevent counterfeiting, Bitcoin tackles the duplication problem through cryptography and distributed consensus. Each transaction is verified across thousands of nodes, and SHA-256 hash functions secure the blockchain against double spending without the need for a central authority. 

Unlike elliptic curve cryptography, which could eventually be vulnerable to large-scale quantum attacks, SHA-256 has proven remarkably resilient; even quantum algorithms such as Grover’s offer only a marginal advantage, reducing the search space from 2²⁵⁶ to 2¹²⁸, still far beyond any realistic brute-force attempt.
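
To put that quadratic speed-up in perspective, here is a back-of-the-envelope calculation; the query rate is an arbitrary, optimistic assumption rather than a real benchmark.

```python
# Back-of-the-envelope arithmetic on Grover's quadratic speed-up against SHA-256.
# The assumed query rate is arbitrary and optimistic, not a measured figure.
import hashlib
import math

classical_space = 2 ** 256                     # brute-force preimage search space
grover_queries = math.isqrt(classical_space)   # quadratic speed-up: sqrt(2^256) = 2^128

print(f"classical effort: 2^{classical_space.bit_length() - 1} hash evaluations")
print(f"Grover effort:    2^{grover_queries.bit_length() - 1} quantum queries")

# Even at a hypothetical 10^12 Grover iterations per second, 2^128 queries would
# take on the order of 10^19 years, vastly longer than the age of the universe.
years = grover_queries / 1e12 / (60 * 60 * 24 * 365)
print(f"time at 1e12 queries per second: ~{years:.1e} years")

# SHA-256 itself, hashing an arbitrary piece of data:
print(hashlib.sha256(b"example block header").hexdigest())
```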

Bitcoin’s security does not hinge on unbreakable mathematics alone but on a combination of decentralisation, network verification, and robust cryptographic design. Many experts therefore consider Bitcoin effectively quantum-proof, with most of the dramatic threats attributed to quantum computers unlikely to materialise in practice.

Software-based and globally accessible, Bitcoin operates independently of specialised hardware, allowing users to send, receive, and verify value anywhere in the world without the fragility and complexity inherent in quantum systems. Furthermore, the network can evolve to adopt post-quantum cryptographic algorithms, ensuring long-term resilience, making Bitcoin arguably the most battle-hardened digital financial instrument in existence. 

Could quantum money be a threat to Bitcoin?

In reality, quantum money and Bitcoin address entirely different challenges, meaning the former is unlikely to replace the latter. Bitcoin operates as a global, decentralised monetary network with established economic rules and governance, while quantum money represents a technological approach to issuing physically unforgeable tokens. Bitcoin is not designed to be physically unclonable; its strength lies in verifiability, decentralisation, and network-wide trust.

Crucially, SHA-256, the hashing algorithm that underpins Bitcoin mining and block creation, remains highly resistant to quantum threats. Quantum computers achieve only a quadratic speed-up through Grover’s algorithm, which is insufficient to break SHA-256 in practical terms. Bitcoin also retains the ability to adopt post-quantum cryptographic standards as they mature, whereas quantum money is limited by rigid physical constraints that are far harder to update.

Quantum money also remains too fragile, complex, and costly for widespread use. Its realistic applications are limited to state institutions, military networks, or highly secure financial environments rather than everyday payments. Bitcoin, by contrast, already benefits from extensive global infrastructure, strong market adoption, and deep liquidity, making it far more practical for daily transactions and long-term digital value transfer. 

Where quantum money and blockchain could coexist

Although fundamentally different, quantum money and blockchain technologies have the potential to complement one another in meaningful ways. Quantum key distribution could strengthen the security of blockchain networks by protecting communication channels from advanced attacks, while quantum-generated randomness may enhance cryptographic protocols used in decentralised systems. 

Researchers have also explored the idea of using ‘quantum tokens’ to provide an additional privacy layer within specialised blockchain applications. Both technologies ultimately aim to deliver secure and verifiable forms of digital value. Their coexistence may offer the most resilient future framework for digital finance, combining the physics-based protection of quantum money with the decentralisation, transparency, and global reach of blockchain technology. 

Quantum physics meets blockchain for the future of secure currency

Quantum money remains a remarkable concept, originally decades ahead of its time, and now revived by advances in quantum computing and quantum communication. Although it promises theoretically unforgeable digital currency, its fragility, technical complexity, and demanding infrastructure make it impractical for large-scale use. 

Bitcoin, by contrast, stands as the most resilient and widely adopted model of decentralised digital money, supported by a mature global network and robust cryptographic foundations. 

Quantum money and Bitcoin stand as twin engines of a new digital finance era, where quantum physics is reshaping value creation, powering blockchain innovation, and driving next-generation fintech solutions for secure and resilient digital currency. 

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot