Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework ensures AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and gives organisations a practical tool for implementing inclusive AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central in decision-making as AI and automation expand across society. Collaborative intelligence, combining AI experts, domain specialists and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

People-First AI Fund awards support to 208 US nonprofits

OpenAI Foundation has named the first recipients of the People-First AI Fund, awarding $40.5 million to 208 community groups across the United States. The grants will be disbursed by the end of the year, with a further $9.5 million in Board-directed funding to follow.

Nationwide listening sessions and recommendations from an independent Nonprofit Commission shaped the application process. Nearly 3,000 organisations applied, underscoring strong demand for support across US communities. Final selections were made following a multi-stage human review involving external experts.

Grantees span digital literacy programmes, rural health initiatives and Indigenous media networks. Many operate with limited exposure to AI, reflecting the fund’s commitment to trusted, community-centred groups. California features prominently, consistent with the Foundation’s ties to its home state.

Funded projects span primary care, youth training in agricultural areas, and Tribal AI literacy work. Groups are also applying AI to food networks, disability education, arts and local business support. Each organisation sets priorities through flexible grants.

The programme focuses on AI literacy, community innovation and economic opportunity, with further grants targeting sector-level transformation. OpenAI Foundation says it will continue learning alongside grantees and supporting efforts that broaden opportunity while grounding AI adoption in local US needs.

Legal sector urged to plan for cultural change around AI

A digital agency has released new guidance to help legal firms prepare for wider AI adoption. The report urges practitioners to assess cultural readiness before committing to major technology investment.

Sherwen Studios collected views from lawyers who raised ethical worries and practical concerns. Their experiences shaped recommendations intended to ensure AI serves real operational needs across the sector.

The agency argues that firms must invest in oversight, governance and staff capability. Leaders are encouraged to anticipate regulatory change and build multidisciplinary teams that blend legal and technical expertise.

Industry analysts expect AI to reshape client care and compliance frameworks over the coming years. Firms prepared for structural shifts are likely to benefit most from long-term transformation.

FCA begins live AI testing with UK financial firms

The UK’s Financial Conduct Authority has started a live testing programme for AI with major financial firms. The initiative aims to explore AI’s benefits and risks in retail financial services while ensuring safe and responsible deployment.

Participating firms, including NatWest, Monzo, Santander and Scottish Widows, receive guidance from FCA regulators and technical partner Advai. Use cases being trialled range from debt resolution and financial advice to customer engagement and smarter spending tools.

Insights from the testing will help the FCA shape future regulations and governance frameworks for AI in financial markets. The programme complements the regulator’s Supercharged Sandbox, with a second cohort of firms due to begin testing in April 2026.

Sega cautiously adopts AI in game development

Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese company aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

AstraZeneca backs Pangaea’s AI platform to scale precision healthcare

Pangaea Data, a health-tech firm specialising in patient-intelligence platforms, announced a strategic, multi-year partnership with AstraZeneca to deploy multimodal artificial intelligence in clinical settings. The goal is to bring AI-driven, data-rich clinical decision-making to scale, improving how patients are identified, diagnosed, treated and connected to therapies or clinical trials.

The collaboration will see AstraZeneca sponsoring the configuration, validation and deployment of Pangaea’s enterprise-grade platform, which merges large-scale clinical, imaging, genomic, pathology and real-world data. It will also leverage generative and predictive AI capabilities from Microsoft and NVIDIA for model training and deployment.

Among the planned applications are supporting point-of-care treatment decisions and identifying patients who are undiagnosed, undertreated or misdiagnosed, across diseases ranging from chronic conditions to cancer.

Pangaea’s CEO said the partnership aims to efficiently connect patients to life-changing therapies and trials in a compliant, financially sustainable way. For AstraZeneca, the effort reflects a broader push to integrate AI-driven precision medicine across its R&D and healthcare delivery pipeline.

From a policy and health-governance standpoint, this alliance is important. It demonstrates how multimodal AI, combining different data types beyond standard medical records, is being viewed not just as a research tool, but as a potentially transformative element of clinical care.

EU opens antitrust probe into Meta’s WhatsApp AI rollout

Brussels has opened an antitrust inquiry into Meta over how AI features were added to WhatsApp, focusing on whether the updated access policies hinder market competition. Regulators say scrutiny is needed as integrated assistants become central to messaging platforms.

Meta AI has been built into WhatsApp across Europe since early 2025, prompting questions about whether external AI providers face unfair barriers. Meta rejects the accusations and argues that users can reach rival tools through other digital channels.

Italy launched a related proceeding in July and expanded it in November, examining claims that Meta curtailed access for competing chatbots. Authorities worry that dominance in messaging could influence the wider AI services market.

EU officials confirmed the case will proceed under standard antitrust rules rather than the Digital Markets Act. Investigators aim to understand how embedded assistants reshape competitive dynamics in services used by millions.

European regulators say outcomes could guide future oversight as generative AI becomes woven into essential communications. The case signals growing concern about concentrated power in fast-evolving AI ecosystems.

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework is part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

UNESCO launches AI guidelines for courts and tribunals

UNESCO has launched new Guidelines for the Use of AI Systems in Courts and Tribunals to ensure AI strengthens rather than undermines human-led justice. The initiative arrives as courts worldwide face millions of pending cases and limited resources.

In Argentina, AI-assisted legal tools have increased case processing by nearly 300%, while automated transcription in Egypt is improving court efficiency.

Judicial systems are increasingly encountering AI-generated evidence, AI-assisted sentencing, and automated administrative processes. AI misuse can have serious consequences, as seen in the UK High Court where false AI-generated arguments caused delays, extra costs, and fines.

UNESCO’s Guidelines aim to prevent such risks by emphasising human oversight, auditability, and ethical AI use.

The Guidelines outline 15 principles and provide recommendations for judicial organisations and individual judges throughout the AI lifecycle. They also serve as a benchmark for developing national and regional standards.

UNESCO’s Judges’ Initiative, which has trained over 36,000 judicial operators in 160 countries, played a key role in shaping and peer-reviewing the Guidelines.

The official launch will take place at the Athens Roundtable on AI and the Rule of Law in London on 4 December 2025. UNESCO aims for the standards to ensure responsible AI use, improve court efficiency, and uphold public trust in the judiciary.
