Africa urged to focus on practical, local AI solutions rather than frontier models

Africa cannot realistically compete with the massive capital and computing resources driving frontier AI research in the United States and China, and it does not need to do so.

Instead, Nicholas Okumu contends, the continent’s AI strategy should pivot toward building efficient, practical systems tailored to local needs, from healthcare triage tools in referral hospitals to agriculture, education and public finance solutions grounded in African contexts.

Large, resource-intensive models require infrastructure and ecosystems that most African nations cannot marshal, but smaller, efficient models can perform high-value, domain-specific tasks on ordinary hardware.

Drawing on discussions at innovation forums and real-world examples, the columnist argues that Africa’s historical experience of innovation under constraint positions it well to lead in relevant, efficient AI applications rather than replicating the ambitions of frontier labs.

The article outlines a three-phase pathway: first, building foundational datasets governed by national or regional frameworks; second, deploying AI where it can deliver transformative value; and third, scaling successful tools to regions with similar development constraints.

If this strategy is followed, the piece argues, African-designed AI systems, particularly those that work well in low-resource environments, could become globally valuable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Libraries lead UK government push to improve digital inclusion and AI confidence

Libraries Connected, supported by a £310,400 grant from the UK Government’s Digital Inclusion Innovation Fund administered by the Department for Science, Innovation and Technology (DSIT), is launching Innovating in Trusted Spaces: Libraries Advancing the Digital Inclusion Action Plan.

The programme will run from November 2025 to March 2026 across 121 library branches in Newcastle, Northumberland, Nottingham City and Nottinghamshire, targeting older people, low-income families and individuals with disabilities to ensure they are not left behind amid rapid digital and AI-driven change.

Public libraries are already leading providers of free internet access and basic digital skills support, offering tens of thousands of public computers and learning opportunities each year. However, only around 27 percent of UK adults currently feel confident in recognising AI-generated content online, underscoring the need for improved digital and media literacy.

The project will create and test a new digital inclusion guide for library staff, focusing on the benefits and risks of AI tools, misinformation and emerging technologies, as well as building a national network of practice for sharing insights.

Partners in the programme include Good Things Foundation and WSA Community, which will help co-design materials and evaluate the initiative’s impact to inform future digital inclusion efforts across communities.


AI content flood drives ‘slop’ to word of the year

Merriam-Webster has chosen ‘slop’ as its 2025 word of the year, reflecting the rise of low-quality digital content produced by AI. The term originally meant soft mud, but now describes absurd or fake online material.

Greg Barlow, Merriam-Webster’s president, said the word captures how AI-generated content has fascinated, annoyed and sometimes alarmed people. Tools like AI video generators can produce deepfakes and manipulated clips in seconds.

The spike in searches for ‘slop’ shows growing public awareness of poor-quality content and a desire for authenticity. People want real, genuine material rather than AI-driven junk content.

AI-generated slop includes everything from absurd videos to fake news and junky digital books. Merriam-Webster selects its word of the year by analysing search trends and cultural relevance.


Google supports UK quantum innovation push

UK researchers will soon be able to work with Google’s advanced quantum chip Willow through a partnership with the National Quantum Computing Centre. The initiative aims to help scientists tackle problems that classical computers cannot solve.

The agreement will allow academics to compete for access to the processor and collaborate with experts from both organisations. Google hopes the programme will reveal practical uses for quantum computing in science and industry.

Quantum technology remains experimental, yet progress from Google, IBM, Amazon and UK firms has accelerated rapidly. Breakthroughs could lead to impactful applications within the next decade.

Government investment has supported the UK’s growing quantum sector, which hosts several cutting-edge machines. Officials estimate the industry could add billions to the UK economy as real-world uses emerge.


New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which had offered little clarity in the era of generative AI.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.


Taiwan strengthens its role in global semiconductors

Taiwan will continue to produce the world’s most advanced semiconductors domestically to remain a vital player globally. Deputy Foreign Minister Francois Chih-chung Wu said the island’s expertise cannot be easily replicated abroad.

Taiwan has invested in fabs in the US, Japan and Germany, but warned that moving production overseas is complex. The island plans to foster international partnerships while maintaining core technology in-house to safeguard its supply chains.

China’s military pressure on Taiwan has increased concerns over regional stability and global chip supply. Wu emphasised that preventing conflict is the most effective way to secure the semiconductor industry.

Washington and Europe share strategic interests with Taiwan, including the semiconductor industry and navigation in the Taiwan Strait. Wu expressed confidence that the international community would defend these interests, maintaining Taiwan’s essential role in technology.


RightsX Summit 2025: Governing technology through human rights

Human Rights Day takes place on 10 December each year to commemorate the Universal Declaration of Human Rights (UDHR), adopted by the UN in 1948. It functions as a reminder of shared international commitments to dignity, equality and freedom, and seeks to reaffirm the relevance of these principles to contemporary challenges.

In 2025, the theme ‘Human Rights: Our Everyday Essentials’ aimed to reconnect people with how rights shape daily life, emphasising that rights remain both positive and practical foundations for individual and collective well-being.

Image of Eleanor Roosevelt

Human Rights Day also serves as a moment for reflection and action. In a world shaped by rapid technological change, geopolitical instability and social inequalities, the day encourages institutions, governments and civil society to coordinate on priorities that respond to contemporary threats and opportunities.

In this context, the RightsX Summit was strategically scheduled. By centring discussions on human rights, technology, data and innovation around Human Rights Day, the event reinforced that digital governance issues are central to rights protection in the twenty-first century. The alignment elevated technology from a technical topic to a political and ethical concern within human rights debates.

The RightsX Summit 2025


The summit brought together governments, the UN system, civil society, private sector partners and innovators to explore how technology can advance human rights in the digital age. Its aim was to produce practical insights, solution-focused dialogues and discussions that could inform a future human rights toolbox shaped by technology, data, foresight and partnerships.

Central themes included AI, data governance, predictive analytics, digital security, privacy and other emerging technologies. Discussions analysed how these tools can be responsibly used to anticipate risks, improve monitoring, and support evidence-based decision-making in complex rights contexts.

The summit also examined the challenge of aligning technological deployment with internationally recognised human rights norms, exploring the mechanisms by which innovation can reinforce equity, justice and accountability in digital governance.

The summit emphasised that technological innovation is inseparable from global leadership in human rights. Aligning emerging tools with established norms was highlighted as critical to ensure that digital systems do not exacerbate existing inequalities or create new risks.

Stakeholders were encouraged to consider not only technical capabilities but also the broader social, legal and ethical frameworks within which technology operates.

The 30x30x30 Campaign


The 30x30x30 initiative represents an ambitious attempt to operationalise human rights through innovation. Its objective is to deliver 30 human rights innovations for 30 communities by 2030, aligned with the 30 articles of the UDHR.

The campaign emphasises multistakeholder collaboration by uniting countries, companies and communities as co-creators of solutions that are both technologically robust and socially sensitive. A distinctive feature of 30x30x30 is its focus on scalable, real-world tools that address complex rights challenges.

Examples include AI-based platforms for real-time monitoring, disaster tracking systems, digital storytelling tools and technologies for cyber peace. These tools are intended to serve both institutional responders and local communities, demonstrating how technology can amplify human agency in rights contexts.

The campaign also highlights the interdependence of innovation and human rights. Traditional approaches alone cannot address multidimensional crises such as climate displacement, conflict, or systemic inequality, and innovation without human-rights grounding risks reinforcing existing disparities.

‘Innovation is Political’


Volker Türk, UN High Commissioner for Human Rights, emphasised that ‘innovation is political’. He noted that the development and deployment of technology shape who benefits and how, and that decisions regarding access, governance and application of technological tools carry significant implications for equity, justice and human dignity.

This framing highlights the importance of integrating human rights considerations into innovation policy. By situating human rights at the centre of technological development, the summit promoted governance approaches that ensure innovation contributes positively to societal outcomes.

It encouraged multistakeholder responsibility, including governments, companies and civil society, to guide technology in ways that respect and advance human rights.

Human Rights Data Exchange (HRDx)

HRDx is a proposed global platform intended to improve the ethical management of human rights data. It focuses on creating systems where information is governed responsibly, ensuring that privacy, security and protection of personal data are central to its operation.

The platform underlines that managing data is not only a technical issue but also a matter of governance and ethics. By prioritising transparency, accountability and data protection, it aims to provide a framework that supports the responsible use of information without compromising human rights.

Through these principles, HRDx highlights the importance of embedding ethical oversight into technological tools. Its success relies on maintaining the balance between utilising data to inform decision-making and upholding the rights and dignity of individuals. That approach ensures that technology can contribute to human rights protection while adhering to rigorous ethical standards.

Trustworthy AI in human rights


AI offers significant opportunities to enhance human rights monitoring and protection. For example, AI can help to analyse large datasets to detect trends, anticipate crises, and identify violations of fundamental freedoms. Predictive analytics can support human rights foresight, enabling early interventions to prevent conflicts, trafficking, or discrimination.

At the same time, trust in AI for decision-making remains a significant challenge. AI systems trained on biased or unrepresentative data can produce discriminatory outcomes, undermine privacy and erode public trust.

These risks are especially acute in applications where algorithmic decisions affect access to services or determine individual liberties. That requires governance frameworks that ensure transparency, accountability and ethical oversight.

In the human rights context, trustworthy AI means designing systems that are explainable, auditable and accountable. Human oversight remains essential, particularly in decisions with serious implications for individuals’ rights.

The summit highlighted the importance of integrating human rights principles such as non-discrimination, equality and procedural fairness into AI development and deployment processes.

Ethics, Accountability and Governance


Aligning technology with human rights necessitates robust ethical frameworks, effective governance, and transparent accountability. Digital systems must uphold fairness, transparency, inclusivity, and human dignity throughout their lifecycle, from design to deployment and ongoing operation.

Human rights impact assessments at the design stage help identify potential risks and guide responsible development. Engaging users and affected communities ensures technologies meet real needs.

Continuous monitoring and audits maintain compliance with ethical standards and highlight areas for improvement.

Effective governance ensures responsibilities are clearly defined, decisions are transparent, and corrective actions can be taken when rights are compromised. By combining ethical principles with robust governance and accountability, technology can actively protect and support human rights.

Future pathways for rights-centred innovation

Image of UN Human Rights Council

The integration of human rights into technology represents a long-term project. Establishing frameworks that embed accountability, transparency and ethical oversight ensures that emerging tools enhance freedom, equality and justice.

Digital transformation, when guided by human rights, creates opportunities to address complex challenges. RightsX 2025 demonstrated that innovation, governance and ethical foresight can converge to shape a digital ecosystem that safeguards human dignity while fostering progress.


Users gain new control with Instagram feed algorithm

Instagram has unveiled a new AI-powered feature called ‘Your Algorithm’, giving users control over the topics shown in their Reels feed. The tool analyses viewing history and allows users to indicate which subjects they want to see more or less of.

The feature displays a summary of each user’s top interests and lets users type in specific topics to fine-tune recommendations in real time. Instagram plans to expand the tool beyond Reels to Explore and other areas of the app.

The rollout began in the US, with a global launch in English expected soon. The initiative comes amid growing calls for social media platforms to provide greater transparency over algorithmic content and avoid echo chambers.

By enabling users to adjust their feeds directly, Instagram aims to offer more personalised experiences while responding to regulatory pressures and societal concerns over harmful content.


Google brings proactive features to Jules AI

Jules AI has been updated to work proactively, helping developers manage routine tasks and fix issues automatically while they focus on complex coding projects. The agent now suggests improvements and prepares fixes without requiring direct input.

Google AI Pro and Ultra subscribers can enable Suggested Tasks, which scan code for actionable improvements starting with #todos comments. Scheduled Tasks let users automate predictable maintenance, keeping projects up to date with minimal effort.

A new integration with Render streamlines the handling of failed deployments by analysing logs, identifying issues, and generating pull requests for review. The integration reduces the time developers spend troubleshooting and helps maintain workflow momentum.

By combining proactive task management and automated fixes, Jules aims to be an intelligent partner that supports developers throughout the entire development lifecycle, ensuring smoother, more efficient coding.


DeVry improves student support with AI

In the US, DeVry University has upgraded its student support system by deploying Salesforce Agentforce 360, aiming to offer faster and more personalised assistance to its 32,000 learners.

The new AI agents provide round-the-clock support for DeVryPro, the university’s online learning programme, ensuring students receive timely guidance.

The platform also simplifies course enrolment through a self-service website, allowing learners to manage enrolment and payments efficiently. Real-time guidance replaces the previous chatbot, helping students access course information and support outside regular hours.

With Data 360 integrating information from multiple systems, DeVry can deliver personalised recommendations while automating time-consuming tasks such as weekly onboarding.

Advisors can now focus on building stronger connections with students and supporting the development of workforce skills.

University leaders emphasise that these advancements reflect a commitment to preparing learners for an AI-driven workforce, combining innovative technology with personalised academic experiences. The initiative positions DeVry as a leader in integrating AI into higher education.
