Time honours leading AI architects worldwide

Time magazine has named the so-called architects of AI as its Person of the Year, recognising leading technologists reshaping global industries. Figures highlighted include Sam Altman, Jensen Huang, Elon Musk, Mark Zuckerberg, Lisa Su, Demis Hassabis, Dario Amodei and Fei-Fei Li.

Time emphasises that major AI developers have placed enormous bets on infrastructure and capability. Their competition and collaboration have accelerated rapid adoption across businesses and households.

The magazine also examined negative consequences linked to rapid deployment, including mental health concerns and reported chatbot-related lawsuits. Economists warn of significant labour disruption as companies adopt automated systems widely.

The editorial team framed 2025 as a tipping point when AI moved into everyday life. The publication resisted using AI-generated imagery for its cover, choosing traditional artists instead. Industry observers say the selection reflects AI’s central role in shaping economic and social priorities throughout the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025

The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights and solution-focused dialogue that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign

The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’

Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights

AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. Mitigating them requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The Summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance

Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU survey shows strong public backing for digital literacy in schools

A new Eurobarometer survey finds that Europeans want digital skills to hold the same status in schools as reading, mathematics and science.

Citizens view digital competence as essential for learning, future employment and informed participation in public life.

Nine in ten respondents believe that schools should guide pupils on how to handle the harmful effects of digital technologies on their mental health and well-being, rather than treating such issues as secondary concerns.

Most Europeans also support a more structured approach to online information. Eight in ten say digital literacy helps them avoid misinformation, while nearly nine in ten want teachers to be fully prepared to show students how to recognise false content.

A majority continues to favour restrictions on smartphones in schools, yet an even larger share supports the use of digital tools specifically designed for learning.

More than half find that AI brings both opportunities and risks for classrooms, which they believe should be examined in greater depth.

Almost half want the EU to shape standards for the use of educational technologies, including rules on AI and data protection.

The findings will inform the European Commission’s 2030 Roadmap on digital education and skills, scheduled for release next year as part of the Union of Skills initiative.

The survey, carried out across all member states, reflects a growing expectation that digital education should become a central pillar of Europe’s teaching systems, rather than an optional enhancement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Seven teams advance in XPRIZE contest backed by Google

XPRIZE has named seven finalist teams in its three-year, $5 million Quantum Applications competition, a global challenge backed by Google Quantum AI, Google.org, and GESDA to accelerate real-world quantum computing use cases.

Selected from 133 submissions, the finalists are developing quantum algorithms that could outperform classical systems on practical tasks linked to sustainability, science, and industry. They will share a $1 million prize at this stage, ahead of a $4 million award pool in 2027.

Google says the competition supports its goal of finding concrete problems where quantum systems can beat leading classical methods. The finalists span materials science, chemistry, optimisation, and biomedical modelling, showing growing momentum behind application-driven research.

The teams include Calbee Quantum, Gibbs Samplers, Phasecraft’s materials group, QuMIT, Xanadu, Q4Proteins, and QuantumForGraphproblem, each proposing algorithms with potential impact ranging from clean-energy materials and advanced semiconductors to drug discovery and molecular analysis.

Finalists now proceed to Phase II, which focuses on benchmarking against classical tools, assessing feasibility, and demonstrating pathways to real-world advantage. A wildcard round in 2026 will offer re-entry for other teams.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Multimodal AI reveals new immune patterns across cancer types

A recent study examined the capabilities of GigaTIME, a multimodal AI framework that models the tumour immune microenvironment by converting routine H&E (haematoxylin and eosin) slides into virtual multiplex immunofluorescence images.

Researchers aimed to solve long-standing challenges in profiling tumour ecosystems by using a scalable and inexpensive technique instead of laboratory methods that require multiple samples and extensive resources.

The study focused on how large image datasets could reveal patterns of protein activity that shape cancer progression and therapeutic response.

GigaTIME was trained on millions of matched cells and applied to more than 14,000 slides drawn from a wide clinical network. The system generated nearly 300,000 virtual images and uncovered over 1,000 associations between protein channels and clinical biomarkers.

Spatial features such as sharpness, entropy and signal variability were often more informative than density alone, revealing immune interactions that differ strongly across cancer types.

When tested on external tumour collections, the framework maintained strong performance and consistently exceeded the results of comparator models.

The study reported that GigaTIME could identify patterns linked to tumour invasion, survival and stage. Protein combinations offered a clearer view of immune behaviour than single markers, and the virtual signatures aligned with known and emerging genomic alterations.

Certain proteins were easier to infer than others, which reflected structural differences at the cellular level rather than model limitations. The research also suggested that immune evasion mechanisms may shift during advanced disease, altering how proteins such as PD-L1 contribute to tumour progression.

The authors argued that virtual multiplex imaging could expand access to spatial proteomics for both research and clinical practice.

Wider demographic representation and broader protein coverage are necessary for future development, yet the approach demonstrated clear potential to support large population studies instead of the restricted datasets produced through traditional staining methods.

Continued work seeks to build a comprehensive atlas and refine cell-level segmentation to deepen understanding of immune and tumour interactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pinterest’s generative tools showcased in AWS report

AWS has released a new case study showing how Pinterest scaled into an AI-powered discovery platform, emphasising the cloud provider’s role in supporting rapid growth and responsible use of generative tools across the service.

AWS says Pinterest now serves around 600 million users and relies on cloud infrastructure to process large visual datasets. Its systems analyse terabytes of content each day and generate high volumes of personalised suggestions across search, shopping, and inspiration features.

The case study details Pinterest’s move from early machine-learning models to multimodal and generative systems built on AWS. It highlights Canvas for image enhancement, improved visual search, and the conversational Pinterest Assistant.

AWS also points to Pinterest’s use of Amazon EKS, EC2 GPU instances, and Bedrock-powered moderation tools as a full-stack approach to responsible AI. Pinterest states that these systems help maintain a safe and positive environment while supporting new commercial and creative features.

AWS cites recent performance metrics as evidence of effective scaling, noting gains in revenue, user activity, and search quality. The company presents the case study as evidence that cloud-based AI infrastructure can support innovation on a global scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Norges Bank says digital krone not required for now

Norway’s central bank has concluded that a central bank digital currency is not needed for now, ending several years of research and reaffirming that the country’s existing payment system remains secure, efficient, and widely used.

Norges Bank stated that it found no current requirement for a digital krone to maintain confidence in payments. Cash usage in Norway is among the lowest globally, but authorities argue the present system continues to serve consumers, merchants, and banks effectively.

The decision is not final. Governor Ida Wolden Bache said the assessment reflects timing rather than a rejection of CBDCs, noting the bank could introduce one if conditions change or if new risks emerge in the domestic payments landscape.

Norges Bank continues to examine both retail and wholesale CBDC models. It also sees potential in tokenisation, which could deliver efficiency gains and lower settlement risk even if a full CBDC is not introduced.

Experiments with tokenised platforms will continue in collaboration with industry partners. At the same time, the bank prepares a new report for early next year and monitors international work on shared digital currency infrastructure, including a possible digital euro.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Adobe brings its leading creative tools straight into ChatGPT

Yesterday, Adobe opened a new chapter for digital creativity by introducing Photoshop, Adobe Express and Adobe Acrobat inside ChatGPT.

The integration gives ChatGPT’s 800 million weekly users direct access to trusted creative and productivity tools through a conversational interface. Adobe aims to make creative work easier for newcomers by linking its technology to simple written instructions.

Photoshop inside ChatGPT offers selective edits, tone adjustments and creative effects, while Adobe Express brings quick design templates and animation features to people who want polished content without switching between applications.

Acrobat adds powerful document controls, allowing users to organise, edit or redact PDFs inside the chat. Each action blends conversation with Adobe’s familiar toolsets, giving users either simple text-driven commands or fine control through intuitive sliders.

The launch reflects Adobe’s broader investment in agentic AI and its Model Context Protocol. Earlier releases such as Acrobat Studio and AI Assistants for Photoshop and Adobe Express signalled Adobe’s ambition to expand conversational creative experiences.

Adobe also plans to extend an upcoming Firefly AI Assistant across multiple apps to support faster movement from an idea to a finished design.

All three apps are now available to ChatGPT users on desktop, web and iOS, with Android support expanding soon. Adobe positions the integration as an entry point for new audiences who may later move into the full desktop versions for deeper control.

The company expects the partnership to widen access to creative expression by letting anyone edit images, produce designs or transform documents simply by describing what they want to achieve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposes easing environmental rules for datacentres and housing

The European Commission has proposed a significant overhaul of environmental rules, potentially exempting datacentres, AI facilities, and affordable housing from mandatory impact assessments.

Member states would retain discretion over whether such projects require full environmental scrutiny, as part of a broader plan to expedite permitting and reduce reporting obligations for businesses.

The package also repeals a hazardous chemical database, eases polluter obligations, and moves environmental management rules from individual plants to whole companies. The commission states that the changes could save firms €1 billion annually, but green groups warn of potential costs to health and biodiversity.

The proposals align with plans to modernise the EU electricity grid and new climate targets to reduce emissions by 90% compared to 1990. Experts have cautioned that loopholes allowing international carbon credits could weaken domestic emissions reductions.

Corporate sustainability laws are also being scaled back. The revised rules limit the number of companies covered, postpone compliance deadlines to 2029, and remove obligations to implement climate transition plans.

Business lobby groups have welcomed the changes as a more realistic approach to corporate social responsibility and due diligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam passes first AI law with strict safeguards

Vietnam’s National Assembly has passed its first AI Law, advancing the regulation and development of AI nationwide. The legislation was approved with overwhelming support, alongside amendments to the Intellectual Property Law and a revised High Technology Law.

The AI Law will take effect on 1 March 2026.

The law establishes core principles, prohibits certain acts, and outlines a risk management framework for AI systems. It combines safeguards for high-risk AI with incentives for innovation, including sandbox testing, a National AI Development Fund, and startup vouchers.

AI oversight will be centralised under the Government, led by the Ministry of Science and Technology, with assessments needed only for high-risk systems approved by the Prime Minister. The law allows real-time updates to this list to keep pace with technological advances.

Flexible provisions prevent obsolescence by avoiding fixed technology lists or rigid risk classifications. Lawmakers emphasised the balance between regulation and innovation, aiming to create a safe yet supportive environment for AI growth in Vietnam.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!