Google supports UK quantum innovation push

UK researchers will soon be able to work with Google’s advanced quantum chip Willow through a partnership with the National Quantum Computing Centre. The initiative aims to help scientists tackle problems that classical computers cannot solve.

The agreement will allow academics to compete for access to the processor and collaborate with experts from both organisations. Google hopes the programme will reveal practical uses for quantum computing in science and industry.

Quantum technology remains experimental, yet progress from Google, IBM, Amazon and UK firms has accelerated rapidly. Breakthroughs could lead to impactful applications within the next decade.

Government investment has supported the UK’s growing quantum sector, which hosts several cutting-edge machines. Officials estimate the industry could add billions to the UK economy as real-world uses emerge.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously offered little clarity in the era of generative AI.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan strengthens its role in global semiconductors

Taiwan will continue to produce the world’s most advanced semiconductors domestically to remain a vital player globally. Deputy Foreign Minister Francois Chih-chung Wu said the island’s expertise cannot be easily replicated abroad.

Taiwan has invested in fabs in the US, Japan and Germany, but warned that moving production overseas is complex. The island plans to foster international partnerships while maintaining core technology in-house to safeguard its supply chains.

China’s military pressure on Taiwan has increased concerns over regional stability and global chip supply. Wu emphasised that preventing conflict is the most effective way to secure the semiconductor industry.

Washington and Europe share strategic interests with Taiwan, including the semiconductor industry and navigation in the Taiwan Strait. Wu expressed confidence that the international community would defend these interests, maintaining Taiwan’s essential role in technology.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Image of Eleanor Roosevelt

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025

The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign

The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’

Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights

AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.
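
As a rough illustration of the trend-detection idea, the sketch below flags weeks in which reported incidents surge far above a recent baseline. The data, window size and threshold are all invented for the example; it is a minimal statistical sketch, not any specific monitoring system.

```python
from statistics import mean, stdev

# Hypothetical weekly counts of reported incidents for one region.
weekly_reports = [12, 9, 14, 11, 13, 10, 12, 15, 11, 38, 41, 44]

WINDOW = 8        # weeks used to establish a baseline (assumed value)
THRESHOLD = 3.0   # flag weeks more than 3 standard deviations above the baseline

def flag_spikes(counts, window=WINDOW, threshold=THRESHOLD):
    """Return indices of weeks whose counts rise sharply above the recent baseline."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

print(flag_spikes(weekly_reports))  # [9] – the onset of the surge is flagged
```

A signal like this does not establish a violation; it only tells human analysts where to look first, which is the foresight role described above.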

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. Addressing them requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.
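
One way to make that oversight concrete is a routing rule that never lets a model act alone on high-stakes cases. The sketch below is a generic illustration, not a description of any deployed system: the case types, confidence threshold and log format are assumptions made for the example.

```python
import json
import time

# Assumed policy values; in practice these would be set by governance, not hard-coded.
CONFIDENCE_FLOOR = 0.85
HIGH_IMPACT = {"asylum_decision", "benefit_termination"}

def route_decision(case_id, decision_type, model_label, confidence, audit_log):
    """Escalate low-confidence or high-impact cases to a human reviewer
    and record every routing decision in an auditable log."""
    needs_human = confidence < CONFIDENCE_FLOOR or decision_type in HIGH_IMPACT
    record = {
        "case_id": case_id,
        "decision_type": decision_type,
        "model_label": model_label,
        "confidence": confidence,
        "route": "human_review" if needs_human else "auto",
        "timestamp": time.time(),
    }
    audit_log.append(json.dumps(record))  # the trail auditors can inspect later
    return record["route"]

log = []
print(route_decision("C-102", "benefit_termination", "deny", 0.97, log))  # human_review
print(route_decision("C-103", "document_check", "valid", 0.93, log))      # auto
```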

The summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance

Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

Image of UN Human Rights Council

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Users gain new control with Instagram feed algorithm

Instagram has unveiled a new AI-powered feature called ‘Your Algorithm’, giving users control over the topics shown in their Reels feed. The tool analyses viewing history and allows users to indicate which subjects they want to see more or less of.

The feature displays a summary of each user’s top interests and allows typing in specific topics to fine-tune recommendations in real time. Instagram plans to expand the tool beyond Reels to Explore and other areas of the app.
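
Instagram has not published how ‘Your Algorithm’ works internally, but the general mechanism, letting explicit topic preferences re-weight ranking scores, can be sketched. The scores, topics and weights below are invented purely to illustrate the idea.

```python
# Hypothetical topic preferences a user might set ("more" boosts, "less" dampens).
preferences = {"astronomy": "more", "celebrity gossip": "less"}
WEIGHTS = {"more": 1.5, "less": 0.5}

# Candidate Reels with a base ranking score and the topics they were tagged with.
candidates = [
    {"id": "reel-1", "score": 0.62, "topics": ["astronomy", "science"]},
    {"id": "reel-2", "score": 0.71, "topics": ["celebrity gossip"]},
    {"id": "reel-3", "score": 0.55, "topics": ["cooking"]},
]

def rerank(items, prefs, weights=WEIGHTS):
    """Scale each item's base score by the weight of any topic the user adjusted."""
    adjusted = []
    for item in items:
        factor = 1.0
        for topic in item["topics"]:
            factor *= weights.get(prefs.get(topic, ""), 1.0)
        adjusted.append({**item, "score": item["score"] * factor})
    return sorted(adjusted, key=lambda x: x["score"], reverse=True)

for item in rerank(candidates, preferences):
    print(item["id"], round(item["score"], 2))  # boosted topics rise, dampened ones fall
```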

The launch began in the US, with a global rollout in English expected soon. The initiative comes amid growing calls for social media platforms to provide greater transparency over algorithmic content and avoid echo chambers.

By enabling users to adjust their feeds directly, Instagram aims to offer more personalised experiences while responding to regulatory pressures and societal concerns over harmful content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google brings proactive features to Jules AI

Jules AI has been updated to work proactively, helping developers manage routine tasks and fix issues automatically while they focus on complex coding projects. The agent now suggests improvements and prepares fixes without requiring direct input.

Google AI Pro and Ultra subscribers can enable Suggested Tasks, which scan code for actionable improvements starting with #todos comments. Scheduled Tasks let users automate predictable maintenance, keeping projects up to date with minimal effort.
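
Google has not detailed how Suggested Tasks parses a repository, but the first step it describes, turning #todo-style comments into candidate work items, is straightforward to sketch. The snippet below is a generic illustration limited to Python files; the file pattern, comment syntax and task format are assumptions for the example.

```python
import re
from pathlib import Path

# Matches comments such as "# TODO: tighten validation" or "# todo refactor".
TODO_PATTERN = re.compile(r"#\s*todo[:\s](.*)", re.IGNORECASE)

def collect_todos(root="."):
    """Walk a source tree and collect each TODO comment as a candidate task."""
    tasks = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            match = TODO_PATTERN.search(line)
            if match:
                tasks.append({"file": str(path), "line": lineno, "task": match.group(1).strip()})
    return tasks

for task in collect_todos():
    print(f"{task['file']}:{task['line']}  {task['task']}")
```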

A new integration with Render streamlines the handling of failed deployments by analysing logs, identifying issues, and generating pull requests for review. The integration reduces the time developers spend troubleshooting and helps maintain workflow momentum.

By combining proactive task management and automated fixes, Jules aims to be an intelligent partner that supports developers throughout the entire development lifecycle, ensuring smoother, more efficient coding.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DeVry improves student support with AI

In the US, DeVry University has upgraded its student support system by deploying Salesforce Agentforce 360, aiming to offer faster and more personalised assistance to its 32,000 learners.

The new AI agents provide round-the-clock support for DeVryPro, the university’s online learning programme, ensuring students receive timely guidance.

The platform also simplifies course enrolment through a self-service website, allowing learners to manage enrolment and payments efficiently. Real-time guidance replaces the previous chatbot, helping students access course information and support outside regular hours.

With Data 360 integrating information from multiple systems, DeVry can deliver personalised recommendations while automating time-consuming tasks such as weekly onboarding.

Advisors can now focus on building stronger connections with students and supporting the development of workforce skills.

University leaders emphasise that these advancements reflect a commitment to preparing learners for an AI-driven workforce, combining innovative technology with personalised academic experiences. The initiative positions DeVry as a leader in integrating AI into higher education.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China pushes global leadership on AI governance

Global discussions on artificial intelligence have multiplied, yet the world still lacks a coherent system to manage the technology’s risks. China is attempting to fill that gap by proposing a new World Artificial Intelligence Cooperation Organisation to coordinate regulation internationally.

Countries face mounting concerns over unsafe AI development, with the US relying on fragmented rules and voluntary commitments from tech firms. The EU has introduced binding obligations through its AI Act, although companies continue to push for weaker oversight.

China’s rapid rollout of safety requirements, including pre-deployment checks and watermarking of AI-generated content, is reshaping global standards as many firms overseas adopt Chinese open-weight models.

A coordinated international framework similar to the structure used for nuclear oversight could help governments verify compliance and stabilise the global AI landscape.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Snowflake launches AI platform for Japan enterprises

Japan’s businesses are set to gain new AI capabilities with the arrival of Snowflake Intelligence, a platform designed to let employees ask complex data questions using natural language.

The tool integrates structured and unstructured data into a single environment, enabling faster and more transparent decision-making.

Early adoption worldwide has seen more than 15,000 AI agents deployed in recent months, reflecting growing demand for enterprise AI. Snowflake Intelligence builds on this momentum by offering rapid text-to-SQL responses, advanced agent management and strong governance controls.
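
Snowflake’s internals are not public, so the sketch below only shows the general shape of a text-to-SQL flow: a natural-language question becomes a SQL query (here via a hard-coded stand-in where a real system would call a language model with the schema), the query runs against governed tables, and the generated SQL is surfaced so users can verify it. The table, data and question are invented for the example.

```python
import sqlite3

# Toy data standing in for governed enterprise tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL, order_month TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("Kanto", 120.0, "2025-10"), ("Kansai", 90.0, "2025-10"), ("Kanto", 200.0, "2025-11")],
)

def question_to_sql(question: str) -> str:
    """Stand-in for the model step: a real text-to-SQL system would prompt a language
    model with the question and the table schema. The translation is hard-coded here
    so the sketch stays self-contained and runnable."""
    assert question == "Which region had the highest total sales?"
    return (
        "SELECT region, SUM(amount) AS total "
        "FROM orders GROUP BY region ORDER BY total DESC LIMIT 1"
    )

sql = question_to_sql("Which region had the highest total sales?")
print(sql)                           # surfacing the generated query supports transparency
print(conn.execute(sql).fetchone())  # ('Kanto', 320.0)
```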

Japanese enterprises are expected to benefit from streamlined workflows, increased productivity, and improved competitiveness as AI agents uncover patterns across various sectors, including finance and manufacturing.

Snowflake aims to showcase the platform’s full capabilities during its upcoming BUILD event in December while promoting broader adoption of data-driven innovation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Launch of Qai advances Qatar’s AI strategy globally

Qatar has launched Qai, a new national AI company designed to strengthen the country’s digital capabilities and accelerate sustainable development. The initiative supports Qatar’s plans to build a knowledge-based economy and deepen economic diversification under Qatar National Vision 2030.

The company will develop, operate and invest in AI infrastructure both domestically and internationally, offering high-performance computing and secure tools for deploying scalable AI systems. Its work aims to drive innovation while ensuring that governments, companies and researchers can adopt advanced technologies with confidence.

Qai will collaborate closely with research institutions, policymakers and global partners to expand Qatar’s role in data-driven industries. The organisation promotes an approach to AI that prioritises societal benefit, with leaders stressing that people and communities must remain central to technological progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!