UK positions itself for leadership in the quantum computing race

Quantum computing is advancing as governments and industry pursue new frontiers beyond AI. The UK benefits from strong research traditions and skilled talent. Policymakers see early planning as vital for long-term competitiveness.

Companies across finance, energy and logistics are testing quantum methods for optimisation and modelling. Early pilots suggest that quantum techniques may offer advantages where classical approaches slow down or fail to scale. Interest in practical applications is rising across Europe.

The country's university spinouts and deep industrial partnerships add further momentum. Joint programmes are accelerating work on molecular modelling and drug discovery. Many researchers argue that early experimentation helps build a more resilient quantum workforce.

New processors promise higher connectivity and lower error rates as the field moves closer to quantum advantage. Research teams are refining designs for future error-corrected systems. Hardware roadmaps indicate steady progress towards more reliable architectures.

Policy support will shape how quickly the UK can translate research into real-world capability. Long-term investments, open scientific collaboration and predictable regulation will be critical. Momentum suggests a decisive period for the country’s quantum ambitions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta begins removing underage users in Australia

Meta has begun removing Australian users under 16 from Facebook, Instagram and Threads ahead of a national ban taking effect on 10 December. Canberra requires major platforms to block younger users or face substantial financial penalties.

Meta says it is deleting accounts it reasonably believes belong to users under 16, while allowing account holders to download their data first. Authorities expect hundreds of thousands of adolescents to be affected, given Instagram’s large cohort of 13-to-15-year-olds.

Regulators argue the law addresses harmful recommendation systems and exploitative content, though YouTube has warned that safety filters will weaken for unregistered viewers. The Australian communications minister has insisted platforms must strengthen their own protections.

Rights groups have challenged the law in court, claiming unjust limits on expression. Officials concede teenagers may try using fake identification or AI-altered images, yet still expect platforms to deploy strong countermeasures.

Cyber Resilience Act signals a major shift in EU product security

EU regulators are preparing to enforce the Cyber Resilience Act, setting core security requirements for digital products in the European market. The law spans software, hardware, and firmware, establishing shared expectations for secure development and maintenance.

The scope captures apps, embedded systems, and cloud-linked features. Risk classes run from default to critical, directing firms either to self-assess or to undergo third-party checks. Any product placed on the EU market from December 2027 onwards must comply with the regulation.

Obligations apply to manufacturers, importers, distributors, and developers. Duties include secure-by-design practices, documented risk analysis, disclosure procedures, and long-term support. Firms must notify ENISA within 24 hours of active exploitation and provide follow-up reports on a strict timeline.

Compliance requires technical files covering threat assessments, update plans, and software bills of materials. High-risk categories demand third-party evaluation, while lower-risk segments may rely on internal checks. Existing certifications help, but cannot replace CRA-specific conformity work.

Non-compliance risks fines, market restrictions, and reputational damage. Organisations preparing early are urged to classify products, run gap assessments, build structured roadmaps, and align development cycles with CRA guidance. EU authorities plan to provide templates and support as firms transition.
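
The reporting duties above are deadline-driven, so they are easy to mis-track. The sketch below, in Python, turns an exploitation-awareness timestamp into absolute reporting deadlines. Only the 24-hour ENISA notification window comes from the text above; the 72-hour and 14-day follow-up intervals are assumptions about the follow-up schedule, used here purely for illustration.

```python
from datetime import datetime, timedelta

# Hedged sketch of a CRA-style reporting timeline for an actively
# exploited vulnerability. The 24-hour early warning is from the article;
# the 72-hour notification and 14-day final report are assumptions.
DEADLINES = {
    "early_warning_to_enisa": timedelta(hours=24),
    "vulnerability_notification": timedelta(hours=72),
    "final_report": timedelta(days=14),
}

def reporting_schedule(awareness: datetime) -> dict[str, datetime]:
    """Return the absolute due date for each report, counted from the
    moment the manufacturer becomes aware of active exploitation."""
    return {name: awareness + delta for name, delta in DEADLINES.items()}

aware = datetime(2028, 1, 10, 9, 0)
for name, due in reporting_schedule(aware).items():
    print(f"{name}: due by {due:%Y-%m-%d %H:%M}")
```

In practice such deadlines would feed an incident-response runbook rather than a standalone script, but the point stands: the clock starts at awareness, not at remediation.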

Canada sets national guidelines for equitable AI

Yesterday, Canada released the CAN-ASC-6.2 – Accessible and Equitable Artificial Intelligence Systems standard, marking the first national standard focused specifically on accessible AI.

The framework ensures AI systems are inclusive, fair, and accessible from design through deployment. Its release coincides with the International Day of Persons with Disabilities, emphasising Canada’s commitment to accessibility and inclusion.

The standard guides organisations and developers in creating AI that accommodates people with disabilities, promotes fairness, prevents exclusion, and maintains accessibility throughout the AI lifecycle.

It provides practical processes for equity in AI development and encourages education on accessible AI practices.

The standard was developed by a technical committee composed largely of people with disabilities and members of equity-deserving groups, incorporating public feedback from Canadians of diverse backgrounds.

Approved by the Standards Council of Canada, CAN-ASC-6.2 meets national requirements for standards development and aligns with international best practices.

Moreover, the standard is available for free in both official languages and accessible formats, including plain language, American Sign Language and Langue des signes québécoise.

By setting clear guidelines, Canada aims to ensure AI serves all citizens equitably and strengthens workforce inclusion, societal participation, and technological fairness.

The initiative highlights Canada’s leadership in accessible technology and provides a practical tool for organisations to implement inclusive AI systems.

AI and automation need human oversight in decision-making

Leaders from academia and industry in Hyderabad, India, are stressing that humans must remain central in decision-making as AI and automation expand across society. Collaborative intelligence, combining AI experts, domain specialists and human judgement, is seen as essential for responsible adoption.

Universities are encouraged to treat students as primary stakeholders, adapting curricula to integrate AI responsibly and avoid obsolescence. Competency-based, values-driven learning models are being promoted to prepare students to question, shape and lead through digital transformation.

Experts highlighted that modern communication is co-produced by humans, machines and algorithms. Designing AI to augment human agency rather than replace it ensures a balance between technology and human decision-making across education and industry.

People-First AI Fund awards support to 208 US nonprofits

OpenAI Foundation has named the first recipients of the People-First AI Fund, awarding $40.5 million to 208 community groups across the United States. The grants will be disbursed by the end of the year, with a further $9.5 million in Board-directed funding to follow.
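
For a sense of scale, a quick back-of-envelope calculation is possible from the figures above. The even split assumed below is illustrative only: the article does not say grants are equal-sized, and each organisation sets its own priorities under flexible grants.

```python
# Illustrative arithmetic from the announced figures; an equal split
# is an assumption, not a statement about actual grant sizes.
total_announced = 40_500_000   # first-round grants (USD)
board_directed = 9_500_000     # further Board-directed funding (USD)
recipients = 208

average_grant = total_announced / recipients
print(f"Average first-round grant: ${average_grant:,.0f}")
print(f"Total committed so far: ${total_announced + board_directed:,}")
```

An even split would put the average first-round grant just under $200,000, with roughly $50 million committed overall once the Board-directed funding follows.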

Nationwide listening sessions and recommendations from an independent Nonprofit Commission shaped the application process. Nearly 3,000 organisations applied, underscoring strong demand for support across US communities. Final selections followed a multi-stage human review involving external experts.

Grantees span digital literacy programmes, rural health initiatives and Indigenous media networks. Many operate with limited exposure to AI, reflecting the fund’s commitment to trusted, community-centred groups. California features prominently, consistent with the Foundation’s ties to its home state.

Funded projects cover primary care, youth training in agricultural areas, and Tribal AI literacy work. Groups are also applying AI to food networks, disability education, arts and local business support. Each organisation sets priorities through flexible grants.

The programme focuses on AI literacy, community innovation and economic opportunity, with further grants targeting sector-level transformation. OpenAI Foundation says it will continue learning alongside grantees and supporting efforts that broaden opportunity while grounding AI adoption in local US needs.

ChatGPT users gain Jira and Confluence access through Atlassian’s MCP connector

Atlassian has launched a new connector that lets ChatGPT users access Jira and Confluence data via the Model Context Protocol. The company said the Rovo MCP Connector supports task summarisation, issue creation and workflow automation directly inside ChatGPT.

Atlassian noted rising demand for integrations beyond its initial beta ecosystem. Users in Europe and elsewhere can now draw on Jira and Confluence data without switching interfaces, while partners such as Figma and HubSpot continue to expand the MCP network.

Engineering, marketing and service teams can request summaries, monitor task progress and generate issues from within ChatGPT. Users can also automate multi-step actions, including bulk updates. Jira write-back support enables changes to be pushed directly into project workflows.
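
Under the hood, MCP is built on JSON-RPC 2.0, and tool invocations such as issue creation travel as `tools/call` requests. The sketch below shows the general shape of such a message; the `tools/call` method is part of the MCP specification, but the tool name `createJiraIssue` and its arguments are hypothetical placeholders, not Atlassian's actual Rovo tool schema.

```python
import json

# Hedged sketch of an MCP tool invocation. MCP is JSON-RPC 2.0 based;
# the tool name and argument fields below are illustrative assumptions.
def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialise a JSON-RPC 2.0 "tools/call" request for an MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

msg = build_tool_call(1, "createJiraIssue", {
    "projectKey": "ENG",
    "summary": "Investigate login timeout",
    "issueType": "Bug",
})
print(msg)
```

In a real client this payload would be sent over an authenticated transport to the MCP server, which maps the call onto the underlying Jira API.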

Security updates sit alongside the connector release. Atlassian said the Rovo MCP Server uses OAuth authentication and respects existing permissions across Jira and Confluence spaces. Administrators can also enforce an allowlist to control which clients may connect.

Atlassian frames the initiative as part of its long-term focus on open collaboration. The company said the connector reflects demand for tools that unify context, search and automation, positioning the MCP approach as a flexible extension of existing team practices.

FCA begins live AI testing with UK financial firms

The UK’s Financial Conduct Authority has started a live testing programme for AI with major financial firms. The initiative aims to explore AI’s benefits and risks in retail financial services while ensuring safe and responsible deployment.

Participating firms, including NatWest, Monzo, Santander and Scottish Widows, receive guidance from FCA regulators and technical partner Advai. Use cases being trialled range from debt resolution and financial advice to customer engagement and smarter spending tools.

Insights from the testing will help the FCA shape future regulations and governance frameworks for AI in financial markets. The programme complements the regulator’s Supercharged Sandbox, with a second cohort of firms due to begin testing in April 2026.

Sega cautiously adopts AI in game development

Game development is poised to transform as Sega begins to incorporate AI selectively. The Japanese company aims to enhance efficiency across production processes while preserving the integrity of creative work, such as character design.

Executives emphasised that AI will primarily support tasks such as content transcription and workflow optimisation, avoiding roles that require artistic skills. Careful evaluation of each potential use case will guide its implementation across projects.

The debate over generative AI continues to divide the gaming industry, with some developers raising concerns that candidates may misrepresent AI-generated work during the hiring process. Studios are increasingly requiring proof of actual creative ability to avoid productivity issues.

Other developers, including Arrowhead Game Studios, emphasise the importance of striking a balance between AI use and human creativity. By reducing repetitive tasks rather than replacing artistic roles, studios aim to enhance efficiency while preserving the unique contributions of human designers.

Uzbekistan sets principles for responsible AI

Uzbekistan has adopted a new ethical framework for the development and use of AI technologies.

The rules, prepared by the Ministry of Digital Technologies, establish unified standards for developers, implementing organisations and users of AI systems, ensuring AI respects human rights, privacy and societal trust.

The framework forms part of presidential decrees and resolutions aimed at advancing AI innovation across the country, and emphasises legality, transparency, fairness, accountability, and continuous human oversight.

AI systems must avoid discrimination based on gender, nationality, religion, language or social origin.

Developers are required to ensure algorithmic clarity, assess risks and bias in advance, and prevent AI from causing harm to individuals, society, the state or the environment.

Users of AI systems must comply with legislation, safeguard personal data, and operate technologies responsibly. Any harm caused during AI development or deployment carries legal liability.

The Ministry of Digital Technologies will oversee standards, address ethical concerns, foster industry cooperation, and improve digital literacy across Uzbekistan.

The initiative aligns with broader efforts to prepare Uzbekistan for AI adoption in healthcare, education, transport, space, and other sectors.

By establishing clear ethical principles, the country aims to strengthen trust in AI applications and ensure responsible and secure use nationwide.
