Governments urged to build learning systems for the AI era

Governments are facing increased pressure to govern AI effectively, prompting calls for continuous institutional learning. Researchers argue that the public sector must develop adaptive capacity to keep pace with rapid technological change.

Past digital reforms often stalled because administrations focused on minor upgrades rather than redesigning core services. Slow adaptation now carries greater risks, as AI transforms decisions, systems and expectations across government.

Experts emphasise the need for a learning infrastructure that facilitates the reliable flow of knowledge across institutions. Singapore and the UAE have already invested heavily in large-scale capability-building programmes.

Public servants require stronger technical and institutional literacy, supported through ongoing training and open collaboration with research communities. Advocates say that states that embed learning deeply will govern AI more effectively and maintain public trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan plans large-scale investment to boost AI capability

Japan plans to increase generative AI usage to 80 percent as officials push national adoption. Current uptake remains far lower than in the United States and China.

The government intends to raise early usage to 50 percent and stimulate private investment. A trillion-yen target underscores efforts to expand infrastructure and accelerate deployment across Japanese sectors.

Guidelines stress risk reduction and stronger oversight through an enhanced AI Safety Institute. Critics argue that measures lack detail and fail to address misuse with sufficient clarity.

Authorities expect broader AI use in health care, finance and agriculture through coordinated public-private work. Annual updates will monitor progress as Japan seeks to enhance its competitiveness and strategic capabilities.


Mistral AI unveils new open models with broader capabilities

Yesterday, Mistral AI introduced Mistral 3 as a new generation of open multimodal and multilingual models that aim to support developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.

The organisation highlighted that Large 3 was trained with extensive resources on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.
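The point of lower-precision execution is that storing weights in fewer bits cuts memory and bandwidth costs at a small, bounded accuracy cost. A minimal sketch of symmetric INT8 quantisation illustrates the trade-off; the function names and shapes here are illustrative, not from Mistral's or NVIDIA's actual kernels:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantisation: keep one float scale
    and store the weights themselves in 8 bits instead of 32."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor on the fly at inference."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal((256, 256)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

print(q.nbytes / w.nbytes)                  # 0.25: one quarter of the memory
print(float(np.abs(w - w_hat).max()) < s)   # True: error within one quant step
```

Production systems use finer-grained scales (per channel or per block) and formats such as FP8, but the memory-versus-error trade-off is the same.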

Attention was also given to the Ministral 3 series, which includes models designed for local or edge settings in three sizes. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to strike a balance between accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.


NVIDIA platform lifts leading MoE models

Frontier developers are adopting a mixture-of-experts architecture as the foundation for their most advanced open-source models. Designers now rely on specialised experts that activate only when needed instead of forcing every parameter to work on each token.
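The core idea, activating only a few experts per token rather than every parameter, can be sketched in a few lines. This is a toy top-k router with illustrative names and shapes, not the architecture of any specific model:

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route each token to its top-k experts; only those experts compute.

    x: (tokens, dim) activations; experts: list of (dim, dim) weight
    matrices; gate_w: (dim, n_experts) router weights.
    """
    logits = x @ gate_w                            # router score per expert
    topk = np.argsort(logits, axis=-1)[:, -k:]     # indices of the k best experts
    sel = np.take_along_axis(logits, topk, axis=-1)
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)             # softmax over selected scores
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for j in range(k):
            e = topk[t, j]
            out[t] += w[t, j] * (x[t] @ experts[e])  # k of n experts run per token
    return out

rng = np.random.default_rng(0)
dim, n_experts = 8, 4
x = rng.standard_normal((3, dim))
experts = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
gate_w = rng.standard_normal((dim, n_experts))
y = moe_forward(x, experts, gate_w)
print(y.shape)  # (3, 8)
```

With k of n experts active, per-token compute scales with k while total capacity scales with n, which is why the architecture combines greater capability with lower computational strain.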

Major models such as DeepSeek-R1, Kimi K2 Thinking and Mistral Large 3 have risen to the top of the Artificial Analysis leaderboard by using this pattern to combine greater capability with lower computational strain.

Scaling the architecture has always been the main obstacle. Expert parallelism requires high-speed memory access and near-instant communication between multiple GPUs, yet traditional systems often create bottlenecks that slow down training and inference.

NVIDIA has shifted toward extreme hardware and software codesign to remove those constraints.

The GB200 NVL72 rack-scale system links seventy-two Blackwell GPUs via fast shared memory and a dense NVLink fabric, enabling experts to exchange information rapidly, rather than relying on slower network layers.

Model developers report significant improvements once they deploy MoE designs on NVL72. Performance leaps of up to ten times have been recorded for frontier systems, improving latency, energy efficiency and the overall cost of running large-scale inference.

Cloud providers integrate the platform to support customers in building agentic workflows and multimodal systems that route tasks between specialised components, rather than duplicating full models for each purpose.

Industry adoption signals a shift toward a future where efficiency and intelligence evolve together. MoE has become the preferred architecture for state-of-the-art reasoning, and NVL72 offers a practical route for enterprises seeking predictable performance gains.

NVIDIA positions its roadmap, including the forthcoming Vera Rubin architecture, as the next step in expanding the scale and capability of frontier AI.


Honolulu in the US pushes for transparency in government AI use

Growing pressure from Honolulu residents in the US is prompting city leaders to consider stricter safeguards surrounding the use of AI. Calls for greater transparency have intensified as AI has quietly become part of everyday government operations.

Several city departments already rely on automated systems for tasks such as building-plan screening, customer service support and internal administrative work. Advocates now want voters to decide whether the charter should require a public registry of AI tools, human appeal rights and routine audits.

Concerns have deepened after the police department began testing AI-assisted report-writing software without broad consultation. Supporters of reform argue that stronger oversight is crucial to maintain public trust, especially if AI starts influencing high-stakes decisions that impact residents’ lives.


UK ministers advance energy plans for AI expansion

The final AI Energy Council meeting of 2025 took place in London, led by AI Minister Kanishka Narayan alongside energy ministers Lord Vallance and Michael Shanks.

Regulators and industry representatives reviewed how the UK can expedite grid connections and support the necessary infrastructure for expanding AI activity nationwide.

Council members examined progress on government measures intended to accelerate connections for AI data centres. Plans include support for AI Growth Zones, with discounted electricity available for sites able to draw on excess capacity, which is expected to reduce pressure in the broader network.

Ministers underlined AI’s role in national economic ambitions, noting recent announcements of new AI Growth Zones in North East England and in North and South Wales.

They also discussed how forthcoming reforms are expected to help deliver AI-related infrastructure by easing access to grid capacity.

The meeting concluded with a focus on long-term energy needs for AI development. Participants explored ways to unlock additional capacity and considered innovative options for power generation, including self-build solutions.

The council will reconvene in early 2026 to continue work on sustainable approaches for future AI infrastructure.


OpenAI expands investment in mental health safety research

Yesterday, OpenAI launched a new grant programme to support external research on the connection between AI and mental health.

The initiative aims to expand independent inquiry into how people express distress, how AI interprets complex emotional signals and how different cultures shape the language used to discuss sensitive experiences.

OpenAI also hopes that broader participation will strengthen collective understanding, rather than keeping progress confined to internal studies.

The programme encourages interdisciplinary work that brings together technical specialists, mental health professionals and people with lived experience. OpenAI is seeking proposals that can offer clear outputs, such as datasets, evaluation methods, or practical insights, that improve safety and guidance.

Researchers may focus on patterns of distress in specific communities, the influence of slang and vernacular, or the challenges that appear when mental health symptoms manifest in ways that current systems fail to recognise.

The grants also aim to expand knowledge of how providers use AI within care settings, including where tools are practical, where limitations appear and where risks emerge for users.

Additional areas of interest include how young people respond to different tones or styles, how grief is expressed in language and how visual cues linked to body image concerns can be interpreted responsibly.

OpenAI emphasises that better evaluation frameworks, ethical datasets and annotated examples can support safer development across the field.

Applications are open until 19 December, with decisions expected by mid-January. The programme forms part of OpenAI’s broader effort to invest in well-being and safety research, offering financial support to independent teams working across diverse cultural and linguistic contexts.

The company argues that expanding evidence and perspectives will contribute to a more secure and supportive environment for future AI systems.


Greek businesses urged to accelerate AI adoption

AI is becoming a central factor in business development, according to Google Cloud executives visiting Athens for Google Cloud Day.

Marianne Janik and Boris Georgiev explained that AI is entering daily life more quickly than many expected, creating an urgent need for companies to strengthen their capabilities. Their visit coincided with the international launch of Gemini 3, the latest version of the company’s AI model.

They argued that enterprises in Greece should accelerate their adoption of AI tools to remain competitive. A slow transition could limit their position in both domestic and international markets.

They also underlined the importance of employees developing new skills that support digital transformation, noting that risk-taking has become a necessary element of strategic progress.

The financial sector is advancing at a faster pace, aided by its long-standing familiarity with digital and analytical tools.

Banks are investing heavily in compliance functions and customer onboarding. Retail is also undergoing a similar transformation, driven by consumer expectations and operational pressures.

Google Cloud Day in Athens brought together a large number of participants, highlighting the sector’s growing interest in practical AI applications and the role of advanced models in shaping business processes.


Safran and UAE institute join forces on AI geospatial intelligence

Safran.AI, the AI division of Safran Electronics & Defence, and the UAE’s Technology Innovation Institute have formed a strategic partnership to develop a next-generation agentic AI geospatial intelligence platform.

The collaboration aims to transform high-resolution satellite imagery into actionable intelligence for defence operations.

The platform will combine human oversight with advanced geospatial reasoning, enabling operators to interpret and respond to emerging situations faster and with greater precision.

Key initiatives include agentic reasoning systems powered by large language models, a mission-specific AI detector factory, and an autonomous multimodal fusion engine for persistent, all-weather monitoring.

Under the agreement, a joint team operating across France and the UAE will accelerate innovation within a unified operational structure.

Leaders from both organisations emphasise that the alliance strengthens sovereign geospatial intelligence capabilities and lays the foundations for decision intelligence in national security.


Valentino faces backlash over AI-generated handbag campaign

Italian fashion house Valentino has come under intense criticism after posting AI-generated advertisements for its DeVain handbag, with social media users calling the imagery ‘disturbing’ and ‘sloppy’. The BBC report describes how the brand’s digital-creative collaboration produced a surreal promotional video that quickly drew hundreds of negative comments on Instagram.

The campaign features morphing models, swirling bodies and shifting Valentino logos, all rendered by generative AI. Although the post clearly labels the material as AI-produced, many viewers noted that the brand’s reliance on the technology made the luxury product appear less appealing.

Commenters accused the company of prioritising efficiency over artistry and argued that advertising should showcase human creativity rather than automated visuals. Industry analysts have noted that the backlash reflects broader tensions within the creative economy.

Getty Images executive Dr Rebecca Swift said audiences often view AI-generated material as ‘less valuable’, particularly when used by luxury labels. Others warned that many consumers interpret the use of generative AI as a sign of cost-cutting rather than innovation.

Brands including H&M and Guess have faced similar criticism for recent AI-based promotional work, fuelling broader concerns about the displacement of models, photographers and stylists.

While AI is increasingly adopted across fashion to streamline design and marketing, experts say brands risk undermining the emotional connection that drives luxury purchasing. Analysts argue that without a compelling artistic vision at its core, AI-generated campaigns may make high-end labels feel less human at a time when customers are seeking more authenticity, not less.
