UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. While AI offers productivity gains, the UN says these benefits must outweigh the social and economic risks.
UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.
Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.
Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.
Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has unveiled AlphaGenome, a new AI research tool designed to analyse the human genome and uncover the genetic roots of disease. The announcement was made in Paris, where researchers described the model as a major step forward.
AlphaGenome focuses on non-coding DNA, which makes up most of the human genome and plays a key role in regulating genes. Google scientists said the system can analyse extremely long DNA sequences at high resolution.
The model was developed by Google DeepMind using public genomic datasets from humans and mice. Researchers said the tool predicts how genetic changes influence biological processes inside cells.
Independent experts in the UK welcomed the advance but urged caution. Scientists at the University of Cambridge and the Francis Crick Institute noted that environmental factors still limit what AI models can explain.
Anthropic engineers are increasingly relying on AI to write the code behind the company’s products, with senior staff now delegating nearly all programming tasks to AI systems.
Claude Code lead Boris Cherny said he has not written any software by hand for more than two months, with all recent updates generated by Anthropic’s own models. Similar practices are reportedly spreading across internal teams.
Company leadership has previously suggested AI could soon handle most software engineering work from start to finish, marking a shift in how digital products are built and maintained.
The adoption of AI coding tools has accelerated across the technology sector, with firms citing major productivity gains and faster development cycles as automation expands.
Industry observers note the transition may reshape hiring practices and entry-level engineering roles, as AI increasingly performs core implementation tasks previously handled by human developers.
French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.
Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.
The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.
Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.
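Deezer has not published its detection internals, but the moderation behaviour described above (tag fully AI-generated tracks, drop them from recommendations, demonetise their streams, flag suspicious activity) can be sketched as a simple rule. All thresholds, score ranges and action names below are illustrative assumptions, not Deezer's actual system.

```python
# Illustrative sketch only: thresholds, score ranges and action names are
# assumptions, not Deezer's actual detection or moderation logic.
def moderate_track(ai_score, streams, distinct_listeners):
    """ai_score: a detector's probability (0-1) that the track is AI-generated."""
    actions = []
    if ai_score >= 0.95:  # treated as fully AI-generated
        actions += ["tag_ai_generated",
                    "exclude_from_recommendations",
                    "demonetise_streams"]
    # Crude fraud heuristic: an implausibly high stream-to-listener ratio.
    if distinct_listeners and streams / distinct_listeners > 100:
        actions.append("flag_fraudulent_activity")
    return actions

fully_ai = moderate_track(0.99, 50_000, 120)   # high score, suspicious ratio
human = moderate_track(0.4, 1_000, 800)        # no action taken
```

Hybrid tracks mixing human and AI elements sit in the middle of any such score threshold, which is exactly the difficulty the article notes.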
Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.
Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.
The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.
Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.
Director Darren Aronofsky’s creative studio, Primordial Soup, has released the first episodes of On This Day… 1776, a short-form animated series that uses generative AI technology from Google DeepMind to visualise pivotal events from the American Revolution. The series arrives ahead of the 250th anniversary of the Declaration of Independence.
Episodes are published weekly on TIME’s YouTube channel throughout 2026, with each one focusing on a specific date in 1776.
The project combines AI-generated visuals with traditional post-production elements, including colour grading and voice performances by SAG-AFTRA actors, to expand narrative possibilities while retaining human creative input.
Aronofsky and collaborators describe the series as an example of how thoughtful, artist-led AI use can enhance storytelling rather than replace artistic craft.
The initiative is part of a broader trend in entertainment where AI tools are being explored as creative accelerators, though reactions have been mixed on social media, with some viewers questioning the quality and artistic decisions in early episodes.
Millions of South Africans are set to gain access to AI and digital skills through a partnership between Microsoft South Africa and SABC Plus, the streaming platform of the national broadcaster SABC. The initiative will deliver online courses, assessments, and recognised credentials directly to learners’ devices.
Building on Microsoft Elevate and the AI Skills Initiative, which have trained 1.4 million people and credentialed nearly half a million citizens since 2025, the programme will use SABC Plus and its more than 1.9 million registered users to reach diverse communities nationwide.
AI and data skills are increasingly critical for employability, with global demand for AI roles growing rapidly. Microsoft and SABC aim to equip citizens with practical, future-ready capabilities, ensuring learning opportunities are not limited by geography or background.
The collaboration also complements Microsoft’s broader initiatives in South Africa, including Ikamva Digital, ElevateHer, Civic AI, and youth certification programmes, all designed to foster inclusion and prepare the next generation for a digital economy.
OpenAI has developed an internal AI data agent designed to help employees move from complex questions to reliable insights in minutes. The tool allows teams to analyse vast datasets using natural language instead of manual SQL-heavy workflows.
Across engineering, finance, research and product teams, the agent reduces friction by locating the right tables, running queries and validating results automatically. Built on GPT-5.2, it adapts as it works, correcting errors and refining its approach without constant human input.
Context plays a central role in the system’s accuracy, combining metadata, human annotations, code-level insights and institutional knowledge. A built-in memory function stores non-obvious corrections, helping the agent improve over time and avoid repeated mistakes.
To maintain trust, OpenAI evaluates the agent continuously using automated tests that compare generated results with verified benchmarks. Strong access controls and transparent reasoning ensure the system remains secure, reliable and aligned with existing data permissions.
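The loop described above (locate the right table from metadata, run a query, validate the result against a verified benchmark, and remember non-obvious corrections) can be sketched in miniature. Everything below, including the `DataAgent` class, catalog layout and validation scheme, is an illustrative assumption rather than OpenAI's implementation, and a simple keyword match stands in for the model's reasoning.

```python
import sqlite3

# Illustrative sketch only: this class, its catalog layout and its validation
# scheme are assumptions, not OpenAI's internal data agent.
class DataAgent:
    def __init__(self, conn, catalog):
        self.conn = conn
        self.catalog = catalog   # table name -> human-written description (metadata)
        self.memory = {}         # question -> corrected SQL ("non-obvious corrections")

    def locate_table(self, question):
        # Stand-in for model reasoning: pick the table whose description
        # overlaps most with the question's words.
        words = set(question.lower().split())
        return max(self.catalog,
                   key=lambda t: len(words & set(self.catalog[t].lower().split())))

    def answer(self, question, sql, benchmark=None):
        # Prefer a remembered correction over the freshly generated query.
        sql = self.memory.get(question, sql)
        result = self.conn.execute(sql).fetchone()[0]
        # Automated evaluation: compare against a verified benchmark value.
        validated = benchmark is None or result == benchmark
        return {"result": result, "validated": validated}

    def correct(self, question, fixed_sql):
        # Memory function: store the fix so the same mistake is not repeated.
        self.memory[question] = fixed_sql

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE signups (day TEXT, n INTEGER)")
conn.executemany("INSERT INTO signups VALUES (?, ?)", [("mon", 3), ("tue", 5)])

agent = DataAgent(conn, {"signups": "table of daily user signups"})
picked = agent.locate_table("how many signups per day")
checked = agent.answer("total signups", "SELECT SUM(n) FROM signups", benchmark=8)
agent.correct("total signups", "SELECT SUM(n) FROM signups WHERE n > 0")
```

In a real deployment the SQL would be model-generated rather than hand-written, and the benchmark comparison would run continuously as part of an evaluation suite, as the article describes.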
A survey of contact centre and customer experience (CX) leaders finds that AI has become ‘non-negotiable’ for organisations seeking to deliver efficient, personalised, and data-driven customer service.
Respondents reported widespread use of AI-enabled tools such as chatbots, virtual agents, and conversational analytics to handle routine queries, triage requests and surface insights from large volumes of interaction data.
CX leaders emphasised AI’s ability to boost service quality and reduce operational costs, enabling faster response times and better outcomes across channels.
Many organisations are investing in AI platforms that integrate with existing systems to automate workflows, assist human agents, and personalise interactions based on real-time customer context.
Despite the optimism, leaders also noted challenges, including data quality, governance, skills gaps, and the need to maintain human oversight, stressing that AI should augment, not replace, human agents.
The article underscores that today’s competitive CX landscape increasingly depends on strategic AI adoption rather than optional experimentation.
A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.
The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.
The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.
Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.
The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.