Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country, and it emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

UNESCO launches AI guidelines for courts and tribunals

UNESCO has launched new Guidelines for the Use of AI Systems in Courts and Tribunals to ensure AI strengthens rather than undermines human-led justice. The initiative arrives as courts worldwide face millions of pending cases and limited resources.

In Argentina, AI-assisted legal tools have increased case processing by nearly 300%, while automated transcription in Egypt is improving court efficiency.

Judicial systems are increasingly encountering AI-generated evidence, AI-assisted sentencing, and automated administrative processes. AI misuse can have serious consequences, as seen in the UK High Court where false AI-generated arguments caused delays, extra costs, and fines.

UNESCO’s Guidelines aim to prevent such risks by emphasising human oversight, auditability, and ethical AI use.

The Guidelines outline 15 principles and provide recommendations for judicial organisations and individual judges throughout the AI lifecycle. They also serve as a benchmark for developing national and regional standards.

UNESCO’s Judges’ Initiative, which has trained over 36,000 judicial operators in 160 countries, played a key role in shaping and peer-reviewing the Guidelines.

The official launch will take place at the Athens Roundtable on AI and the Rule of Law in London on 4 December 2025. UNESCO aims for the standards to ensure responsible AI use, improve court efficiency, and uphold public trust in the judiciary.

FCA launches AI Live Testing for UK financial firms

The UK’s Financial Conduct Authority has launched an AI Live Testing initiative to help firms safely deploy AI in financial markets. Major companies, including NatWest, Monzo, Santander, Scottish Widows, Gain Credit, Homeprotect, and Snorkl, are participating in the first cohort.

Firms receive tailored guidance from the FCA and its technical partner, Advai, to develop and assess AI applications responsibly.

The testing focuses on retail financial services, exploring uses such as debt resolution, financial advice, customer engagement, streamlined complaints handling, and support for smarter spending and saving decisions.

The project aims to answer key questions around evaluation frameworks, governance, live monitoring, and risk management to protect both consumers and markets.

Jessica Rusu, FCA chief data officer, said the initiative helps firms use AI safely while guiding the FCA on its impact in UK financial services. The project complements the FCA’s Supercharged Sandbox, which supports firms in earlier experimentation phases.

Applications for the second AI Live Testing cohort open in January 2026, with participating firms able to start testing in April. Insights from the initiative will inform FCA AI policy, supporting innovation while ensuring responsible deployment.

AI model boosts accuracy in ranking harmful genetic variants

Researchers have unveiled a new AI model that ranks genetic variants based on their severity. The approach combines deep evolutionary signals with population data to highlight clinically relevant mutations.

The popEVE system integrates protein-scale models with constraints drawn from major genomic databases. Its combined scoring separates harmful missense variants more accurately than leading diagnostic tools.
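
The paper's exact formulation is not spelled out in this summary, but the core idea of blending an evolutionary-model score with population-level constraint can be sketched. In the illustration below, the weighting scheme, the helper function, and the third variant are invented for the example; only the general principle (damaging and rare ranks highest) reflects the description above.

```python
import math

# Loose illustration only: popEVE's real scoring is more sophisticated.
# Assumed inputs per missense variant:
#   evo_score - deleteriousness from a protein evolutionary model (higher = worse)
#   pop_freq  - allele frequency from population databases such as gnomAD

def combined_severity(evo_score: float, pop_freq: float, eps: float = 1e-9) -> float:
    """Blend evolutionary deleteriousness with population rarity.

    Variants that look damaging to the evolutionary model and are rare
    in the population rank highest; common variants are down-weighted.
    """
    rarity = -math.log10(pop_freq + eps)  # rarer variants score higher
    return evo_score * rarity

# Rank a few illustrative variants by combined severity
variants = {
    "TP53:p.R175H": (0.88, 5e-7),  # rare and model-predicted damaging
    "APOE:p.C130R": (0.40, 0.14),  # common variant, down-weighted
    "GENE:p.A10V":  (0.15, 2e-3),  # hypothetical mild variant
}
ranked = sorted(variants, key=lambda v: combined_severity(*variants[v]), reverse=True)
print(ranked)  # ['TP53:p.R175H', 'GENE:p.A10V', 'APOE:p.C130R']
```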

Clinical tests showed strong performance in developmental disorder cohorts, where damaging mutations clustered clearly. The model also pinpointed likely causal variants in unsolved cases without parental genomes.

Researchers identified hundreds of credible candidate genes with structural and functional support. Findings suggest that AI could accelerate rare disease diagnoses and inform precision counselling worldwide.

Meta expands global push against online scam networks

US tech giant Meta outlined an expanded strategy to limit online fraud by combining technical defences with stronger collaboration across industry and law enforcement.

The company described scams as a threat to user safety and as a direct risk to the credibility of its advertising ecosystem, which remains central to its business model.

Executives emphasised that large criminal networks continue to evolve and that a fast, coordinated response is essential rather than fragmented efforts.

Meta presented recent progress, noting that more than 134 million scam advertisements were removed in 2025 and that reports about misleading advertising fell significantly in the last fifteen months.

It also provided details about disrupted criminal networks that operated across Facebook, Instagram and WhatsApp.

Facial recognition tools played a crucial role in detecting scam content that used images of public figures, increasing the volume of removals during testing before such content could circulate widely.

Cooperation with law enforcement remains central to Meta’s approach. The company supported investigations that targeted criminal centres in Myanmar and illegal online gambling operations connected to transfers through anonymous accounts.

Information shared with financial institutions and partners in the Global Signal Exchange contributed to the removal of thousands of accounts. At the same time, legal action continued against those who used impersonation or bulk messaging to deceive users.

Meta stated that it backs bipartisan legislation designed to support a national response to online fraud. The company argued that new laws are necessary to weaken transnational groups behind large-scale scam operations and to protect users more effectively.

The broader aim is to strengthen trust across Meta’s services and prevent criminal activity from undermining user confidence and advertiser investment.

New findings reveal untrained AI can mirror human brain responses

Researchers at Johns Hopkins report that brain-inspired AI architectures can display human-like neural activity before any training. Structural design may provide stronger starting points than data-heavy methods. The findings challenge long-held views about how machine intelligence forms.

Researchers tested modified transformers, fully connected networks, and convolutional networks across multiple variants. They compared untrained model responses with neural data from humans and primates viewing identical images. The approach allowed a direct measure of architectural influence.
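
The team's exact analysis pipeline is not described in this summary; a standard way to compare model activations with neural recordings is representational similarity analysis (RSA), sketched below under that assumption, with toy data standing in for real activations and recordings.

```python
# Hedged sketch: compares the representational geometry of an untrained
# model with neural recordings via RSA. Whether the Johns Hopkins team
# used exactly this method is an assumption made for illustration.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses: np.ndarray) -> np.ndarray:
    """Condensed representational dissimilarity matrix.

    responses: (n_images, n_units) model activations or neural responses.
    """
    return pdist(responses, metric="correlation")

# Toy stand-ins: 50 images, untrained-model activations vs. recordings
rng = np.random.default_rng(0)
model_acts = rng.standard_normal((50, 512))   # e.g. untrained CNN layer
neural_data = rng.standard_normal((50, 100))  # e.g. recorded responses

# Higher rank correlation between RDMs = more brain-like representation
score, _ = spearmanr(rdm(model_acts), rdm(neural_data))
print(f"RSA similarity: {score:.3f}")
```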

Transformers and fully connected networks showed limited change when scaled. Convolutional models, by contrast, produced patterns that aligned more closely with human brain activity. Architecture appears to be a decisive factor early in development.

Untrained convolutional models matched aspects of systems trained on millions of images. The results suggest brain-like structures could cut reliance on vast datasets and energy-intensive computation. The implications may reshape how advanced models are engineered.

Further research will examine simple, biologically inspired learning rules. The team plans to integrate these mechanisms into future AI frameworks. The goal is to combine architecture and biology to accelerate meaningful advances.

Governments urged to build learning systems for the AI era

Governments are facing increased pressure to govern AI effectively, prompting calls for continuous institutional learning. Researchers argue that the public sector must develop adaptive capacity to keep pace with rapid technological change.

Past digital reforms often stalled because administrations focused on minor upgrades rather than redesigning core services. Slow adaptation now carries greater risks, as AI transforms decisions, systems and expectations across government.

Experts emphasise the need for a learning infrastructure that facilitates the reliable flow of knowledge across institutions. Singapore and the UAE have already invested heavily in large-scale capability-building programmes.

Public servants require stronger technical and institutional literacy, supported through ongoing training and open collaboration with research communities. Advocates say that states that embed learning deeply will govern AI more effectively and maintain public trust.

Japan plans large-scale investment to boost AI capability

Japan plans to increase generative AI usage to 80 percent as officials push national adoption. Current uptake remains far lower than in the United States and China.

The government intends to raise usage to 50 percent in the near term and stimulate private investment. A trillion-yen target underpins efforts to expand infrastructure and accelerate deployment across Japanese sectors.

Guidelines stress risk reduction and stronger oversight through an enhanced AI Safety Institute. Critics argue that measures lack detail and fail to address misuse with sufficient clarity.

Authorities expect broader AI use in health care, finance and agriculture through coordinated public-private work. Annual updates will monitor progress as Japan seeks to enhance its competitiveness and strategic capabilities.

Mistral AI unveils new open models with broader capabilities

Yesterday, Mistral AI introduced Mistral 3 as a new generation of open multimodal and multilingual models that aim to support developers and enterprises through broader access and improved efficiency.

The company presented both small dense models and a new mixture-of-experts system called Mistral Large 3, offering open-weight releases to encourage wider adoption across different sectors.

Developers are encouraged to build on models in compressed formats that reduce deployment costs, rather than relying on heavier, closed solutions.
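
As a concrete, hedged illustration of what building on an open-weight model in a compressed format can look like, the sketch below loads a model in 4-bit quantised form with the Hugging Face transformers library. The repo id is an earlier open-weight Mistral model used as a stand-in; the actual Mistral 3 repository names should be taken from Mistral AI's official Hugging Face organisation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.3"  # stand-in, not a Mistral 3 id

# 4-bit quantisation cuts memory use roughly fourfold versus full precision
quant_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPUs/CPU
)

prompt = "List three benefits of open-weight language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```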

The organisation highlighted that Large 3 was trained with extensive resources on NVIDIA hardware to improve performance in multilingual communication, image understanding and general instruction tasks.

Mistral AI underlined its cooperation with NVIDIA, Red Hat and vLLM to deliver faster inference and easier deployment, providing optimised support for data centres along with options suited for edge computing.

The partnership introduced lower-precision execution and improved kernels to increase throughput for frontier-scale workloads.

Attention was also given to the Ministral 3 series, which includes models designed for local or edge settings in three sizes. Each version supports image understanding and multilingual tasks, with instruction and reasoning variants that aim to strike a balance between accuracy and cost efficiency.

Moreover, the company stated that these models produce fewer tokens in real-world use cases, rather than generating unnecessarily long outputs, a choice that aims to reduce operational burdens for enterprises.

Mistral AI continued by noting that all releases will be available through major platforms and cloud partners, offering both standard and custom training services. Organisations that require specialised performance are invited to adapt the models to domain-specific needs under the Apache 2.0 licence.

The company emphasised a long-term commitment to open development and encouraged developers to explore and customise the models to support new applications across different industries.

AI helps detect congenital heart defects in unborn babies

Mount Sinai doctors in New York City are the first to utilise AI to enhance prenatal ultrasounds and detect congenital heart defects more effectively. BrightHeart’s FDA-approved technology is now used at Mount Sinai-affiliated Carnegie Imaging for Women across three Manhattan locations.

Congenital heart defects affect about 1 in 500 newborns and often require urgent intervention.

A study in Obstetrics & Gynecology found AI-assisted ultrasounds detected major defects with over 97 percent accuracy, cut reading time by 18 percent, and raised confidence scores by 19 percent.

In the study, obstetricians and maternal-fetal medicine specialists reviewed 200 fetal ultrasounds from 11 centres across two countries, both with and without AI assistance.

AI improved detection, confidence, and efficiency, especially in centres without specialised fetal heart experts.

Experts say AI can level the field of prenatal diagnosis and optimise patient care. Dr Lam-Rachlin and Dr Rebarber emphasised AI’s potential to standardise detection and urged further research for routine clinical use.
