Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework ensures AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and gives organisations a practical tool for implementing inclusive AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central to decision-making as AI and automation expand across society. Collaborative intelligence, which combines AI experts, domain specialists and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Legal sector urged to plan for cultural change around AI

A digital agency has released new guidance to help legal firms prepare for wider AI adoption. The report urges practitioners to assess cultural readiness before committing to major technology investment.

Sherwen Studios collected views from lawyers, who raised ethical and practical concerns. Their experiences shaped recommendations intended to ensure AI serves real operational needs across the sector.

The agency argues that firms must invest in oversight, governance and staff capability. Leaders are encouraged to anticipate regulatory change and build multidisciplinary teams that blend legal and technical expertise.

Industry analysts expect AI to reshape client care and compliance frameworks over the coming years. Firms prepared for structural shifts are likely to benefit most from long-term transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sega cautiously adopts AI in game development

Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese company aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AstraZeneca backs Pangaea’s AI platform to scale precision healthcare

Pangaea Data, a health-tech firm specialising in patient-intelligence platforms, announced a strategic, multi-year partnership with AstraZeneca to deploy multimodal artificial intelligence in clinical settings. The goal is to bring AI-driven, data-rich clinical decision-making to scale, improving how patients are identified, diagnosed, treated and connected to therapies or clinical trials.

The collaboration will see AstraZeneca sponsoring the configuration, validation and deployment of Pangaea’s enterprise-grade platform, which merges large-scale clinical, imaging, genomic, pathology and real-world data. It will also leverage generative and predictive AI capabilities from Microsoft and NVIDIA for model training and deployment.

Among the planned applications are supporting point-of-care treatment decisions and identifying patients who are undiagnosed, undertreated or misdiagnosed across diseases ranging from chronic conditions to cancer.

Pangaea’s CEO said the partnership aims to efficiently connect patients to life-changing therapies and trials in a compliant, financially sustainable way. For AstraZeneca, the effort reflects a broader push to integrate AI-driven precision medicine across its R&D and healthcare delivery pipeline.

From a policy and health-governance standpoint, this alliance is important. It demonstrates how multimodal AI, combining different data types beyond standard medical records, is being viewed not just as a research tool, but as a potentially transformative element of clinical care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO launches AI guidelines for courts and tribunals

UNESCO has launched new Guidelines for the Use of AI Systems in Courts and Tribunals to ensure AI strengthens rather than undermines human-led justice. The initiative arrives as courts worldwide face millions of pending cases and limited resources.

In Argentina, AI-assisted legal tools have increased case processing by nearly 300%, while automated transcription in Egypt is improving court efficiency.

Judicial systems are increasingly encountering AI-generated evidence, AI-assisted sentencing, and automated administrative processes. AI misuse can have serious consequences, as seen in the UK High Court, where false AI-generated arguments caused delays, extra costs, and fines.

UNESCO’s Guidelines aim to prevent such risks by emphasising human oversight, auditability, and ethical AI use.

The Guidelines outline 15 principles and provide recommendations for judicial organisations and individual judges throughout the AI lifecycle. They also serve as a benchmark for developing national and regional standards.

UNESCO’s Judges’ Initiative, which has trained over 36,000 judicial operators in 160 countries, played a key role in shaping and peer-reviewing the Guidelines.

The official launch will take place at the Athens Roundtable on AI and the Rule of Law in London on 4 December 2025. UNESCO aims for the standards to ensure responsible AI use, improve court efficiency, and uphold public trust in the judiciary.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FCA launches AI Live Testing for UK financial firms

The UK’s Financial Conduct Authority has launched an AI Live Testing initiative to help firms safely deploy AI in financial markets. Major companies, including NatWest, Monzo, Santander, Scottish Widows, Gain Credit, Homeprotect, and Snorkl, are participating in the first cohort.

Firms receive tailored guidance from the FCA and its technical partner, Advai, to develop and assess AI applications responsibly.

AI testing focuses on retail financial services, exploring uses such as debt resolution, financial advice, improving customer engagement, streamlining complaints handling, and supporting smarter spending and saving decisions.

The project aims to answer key questions around evaluation frameworks, governance, live monitoring, and risk management to protect both consumers and markets.

Jessica Rusu, FCA chief data officer, said the initiative helps firms use AI safely while guiding the FCA on its impact in UK financial services. The project complements the FCA’s Supercharged Sandbox, which supports firms in earlier experimentation phases.

Applications for the second AI Live Testing cohort open in January 2026, with participating firms able to start testing in April. Insights from the initiative will inform FCA AI policy, supporting innovation while ensuring responsible deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI model boosts accuracy in ranking harmful genetic variants

Researchers have unveiled a new AI model that ranks genetic variants based on their severity. The approach combines deep evolutionary signals with population data to highlight clinically relevant mutations.

The popEVE system integrates protein-scale models with constraints drawn from major genomic databases. Its combined scoring distinguishes harmful missense variants from benign ones more accurately than leading diagnostic tools.
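
To make the idea of combined scoring concrete, here is a minimal illustrative sketch; it is not popEVE’s actual implementation, and the inputs, weights, scaling and variant labels are all assumptions. It simply shows how a deleteriousness score from an evolutionary model might be blended with a population-rarity prior so that variants that both look damaging and are rare in the population rank highest.

```python
# Illustrative sketch only: popEVE's real method is more sophisticated;
# the score sources, weights and scaling below are hypothetical.
import math
from dataclasses import dataclass

@dataclass
class Variant:
    name: str            # hypothetical variant label
    evo_score: float     # deleteriousness from an evolutionary model, 0..1
    allele_freq: float   # frequency in a population database, 0..1

def combined_score(v: Variant) -> float:
    """Blend model deleteriousness with a rarity prior.

    Rare variants (low allele frequency) get a boost, since common
    variants are unlikely to cause severe monogenic disease.
    """
    rarity = -math.log10(max(v.allele_freq, 1e-6)) / 6.0  # scale to ~0..1
    return 0.7 * v.evo_score + 0.3 * rarity               # weights assumed

variants = [
    Variant("VAR_A", evo_score=0.92, allele_freq=1e-5),
    Variant("VAR_B", evo_score=0.88, allele_freq=0.02),
    Variant("VAR_C", evo_score=0.35, allele_freq=1e-6),
]

# Rank most-likely-harmful first: a high model score plus rarity wins.
for v in sorted(variants, key=combined_score, reverse=True):
    print(f"{v.name}: {combined_score(v):.3f}")
```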

Clinical tests showed strong performance in developmental disorder cohorts, where damaging mutations clustered clearly. The model also pinpointed likely causal variants in unsolved cases without parental genomes.

Researchers identified hundreds of credible candidate genes with structural and functional support. Findings suggest that AI could accelerate rare disease diagnoses and inform precision counselling worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!