Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework ensures AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, underscoring Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and gives organisations a practical tool for implementing inclusive AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central to decision-making as AI and automation expand across society. Collaborative intelligence, which combines AI experts, domain specialists and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

People-First AI Fund awards support to 208 US nonprofits

OpenAI Foundation has named the first recipients of the People-First AI Fund, awarding $40.5 million to 208 community groups across the United States. The grants will be disbursed by the end of the year, with a further $9.5 million in Board-directed funding to follow.

Nationwide listening sessions and recommendations from an independent Nonprofit Commission shaped the application process. Nearly 3,000 organisations applied, underscoring strong demand for support across US communities. Final selections were made through a multi-stage human review involving external experts.

Grantees span digital literacy programmes, rural health initiatives and Indigenous media networks. Many operate with limited exposure to AI, reflecting the fund’s commitment to trusted, community-centred groups. California features prominently, consistent with the Foundation’s ties to its home state.

Funded projects span primary care, youth training in agricultural areas, and Tribal AI literacy work. Groups are also applying AI to food networks, disability education, arts and local business support. Each organisation sets priorities through flexible grants.

The programme focuses on AI literacy, community innovation and economic opportunity, with further grants targeting sector-level transformation. OpenAI Foundation says it will continue learning alongside grantees and supporting efforts that broaden opportunity while grounding AI adoption in local US needs.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT users gain Jira and Confluence access through Atlassian’s MCP connector

Atlassian has launched a new connector that lets ChatGPT users access Jira and Confluence data via the Model Context Protocol. The company said the Rovo MCP Connector supports task summarisation, issue creation and workflow automation directly inside ChatGPT.

Atlassian noted rising demand for integrations beyond its initial beta ecosystem. Users in Europe and elsewhere can now draw on Jira and Confluence data without switching interfaces, while partners such as Figma and HubSpot continue to expand the MCP network.

Engineering, marketing and service teams can request summaries, monitor task progress and generate issues from within ChatGPT. Users can also automate multi-step actions, including bulk updates. Jira write-back support enables changes to be pushed directly into project workflows.
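Under the hood, MCP clients invoke server-side tools via JSON-RPC 2.0 `tools/call` messages. A minimal sketch of how such a request is built (the tool name and arguments here are hypothetical illustrations, not Atlassian’s actual schema):

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' message, the method MCP defines
    for invoking a named tool on a server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments, for illustration only.
msg = make_tool_call(1, "createJiraIssue", {
    "project": "ENG",
    "summary": "Investigate login timeout",
})
print(json.loads(msg)["method"])  # tools/call
```

The actual Rovo MCP Connector handles transport, OAuth and permissions on top of this message layer.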

Security updates sit alongside the connector release. Atlassian said the Rovo MCP Server uses OAuth authentication and respects existing permissions across Jira and Confluence spaces. Administrators can also enforce an allowlist to control which clients may connect.

Atlassian frames the initiative as part of its long-term focus on open collaboration. The company said the connector reflects demand for tools that unify context, search and automation, positioning the MCP approach as a flexible extension of existing team practices.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sega cautiously adopts AI in game development

Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese studio aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Uzbekistan sets principles for responsible AI

A new ethical framework for the development and use of AI technologies has been adopted by Uzbekistan.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country. It also emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO launches AI guidelines for courts and tribunals

UNESCO has launched new Guidelines for the Use of AI Systems in Courts and Tribunals to ensure AI strengthens rather than undermines human-led justice. The initiative arrives as courts worldwide face millions of pending cases and limited resources.

In Argentina, AI-assisted legal tools have increased case processing by nearly 300%, while automated transcription in Egypt is improving court efficiency.

Judicial systems are increasingly encountering AI-generated evidence, AI-assisted sentencing, and automated administrative processes. AI misuse can have serious consequences, as seen in the UK High Court where false AI-generated arguments caused delays, extra costs, and fines.

UNESCO’s Guidelines aim to prevent such risks by emphasising human oversight, auditability, and ethical AI use.

The Guidelines outline 15 principles and provide recommendations for judicial organisations and individual judges throughout the AI lifecycle. They also serve as a benchmark for developing national and regional standards.

UNESCO’s Judges’ Initiative, which has trained over 36,000 judicial operators in 160 countries, played a key role in shaping and peer-reviewing the Guidelines.

The official launch will take place at the Athens Roundtable on AI and the Rule of Law in London on 4 December 2025. UNESCO aims for the standards to ensure responsible AI use, improve court efficiency, and uphold public trust in the judiciary.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FCA launches AI Live Testing for UK financial firms

The UK’s Financial Conduct Authority has launched an AI Live Testing initiative to help firms safely deploy AI in financial markets. Major companies, including NatWest, Monzo, Santander, Scottish Widows, Gain Credit, Homeprotect, and Snorkl, are participating in the first cohort.

Firms receive tailored guidance from the FCA and its technical partner, Advai, to develop and assess AI applications responsibly.

AI testing focuses on retail financial services, exploring uses such as debt resolution, financial advice, improving customer engagement, streamlining complaints handling, and supporting smarter spending and saving decisions.

The project aims to answer key questions around evaluation frameworks, governance, live monitoring, and risk management to protect both consumers and markets.

Jessica Rusu, FCA chief data officer, said the initiative helps firms use AI safely while guiding the FCA on its impact in UK financial services. The project complements the FCA’s Supercharged Sandbox, which supports firms in earlier experimentation phases.

Applications for the second AI Live Testing cohort open in January 2026, with participating firms able to start testing in April. Insights from the initiative will inform FCA AI policy, supporting innovation while ensuring responsible deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Texas makes historic investment with $5 million Bitcoin purchase

Texas has become the first US state to fund a strategic cryptocurrency reserve, purchasing approximately $5 million in Bitcoin through BlackRock’s iShares Bitcoin Trust ETF.

The move follows Governor Greg Abbott’s signing of Senate Bill 21, which allows the comptroller’s office to create a public crypto reserve. Other states, such as New Hampshire and Arizona, have passed similar bills, but Texas is the first to execute an actual purchase.

The ETF acquisition acts as a temporary measure while the state finalises a contract with a cryptocurrency custodian. Comptroller representatives called the purchase a ‘placeholder investment’ while reviewing bids for a permanent custodian.

Lawmakers have allocated $10 million to the reserve, a small portion of Texas’ $338 billion budget, yet supporters argue it marks an important step for the growing crypto industry.

Bitcoin prices have fluctuated significantly this year, peaking above $126,000 in October before dropping to around $85,000 recently. The state’s purchase at roughly $87,000 per bitcoin reflects ongoing market volatility.
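A back-of-the-envelope calculation on the reported figures suggests the purchase amounted to roughly 57 BTC (both inputs are approximate, as the article notes):

```python
purchase_usd = 5_000_000   # approximate size of the Texas purchase
price_per_btc = 87_000     # approximate price per bitcoin at purchase

btc_acquired = purchase_usd / price_per_btc
print(round(btc_acquired, 1))  # roughly 57.5 BTC
```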

Advocates see the investment as forward-looking, citing potential long-term benefits in job creation, tax revenue, and digital asset adoption.

Critics remain sceptical, warning that public crypto investments carry high risk and may favour industry interests over taxpayers. Some economists criticised the move as conflicting with Texas’ conservative fiscal approach and risky government speculation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!