Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases. Users must therefore prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Education and rights central to UN AI strategy

UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.

UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.

Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.

Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.

Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AlphaGenome AI tool

Google has unveiled AlphaGenome, a new AI research tool designed to analyse the human genome and uncover the genetic roots of disease. The announcement was made in Paris, where researchers described the model as a major step forward.

AlphaGenome focuses on non-coding DNA, which makes up most of the human genome and plays a key role in regulating genes. Google scientists said the system can analyse extremely long DNA sequences at high resolution.

The model was developed by Google DeepMind using public genomic datasets from humans and mice. Researchers said the tool predicts how genetic changes influence biological processes inside cells.

Independent experts in the UK welcomed the advance but urged caution. Scientists at the University of Cambridge and the Francis Crick Institute noted that environmental factors still limit what AI models can explain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deezer opens AI detection tool to rivals

French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.

Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.

The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify has introduced its own measures but relies more heavily on creator disclosure.

Challenges remain, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic challenges Pentagon over military AI use

Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.

Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.

The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.

Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and SABC Plus drive digital skills access in South Africa

Millions of South Africans are set to gain access to AI and digital skills through a partnership between Microsoft South Africa and the national broadcaster SABC Plus. The initiative will deliver online courses, assessments, and recognised credentials directly to learners’ devices.

Building on Microsoft Elevate and the AI Skills Initiative, the programme follows the training of 1.4 million people and the credentialing of nearly half a million citizens since 2025. SABC Plus, with over 1.9 million registered users, provides an ideal platform to reach diverse communities nationwide.

AI and data skills are increasingly critical for employability, with global demand for AI roles growing rapidly. Microsoft and SABC aim to equip citizens with practical, future-ready capabilities, ensuring learning opportunities are not limited by geography or background.

The collaboration also complements Microsoft’s broader initiatives in South Africa, including Ikamva Digital, ElevateHer, Civic AI, and youth certification programmes, all designed to foster inclusion and prepare the next generation for a digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US cloud dominance sparks debate about Europe’s digital sovereignty

European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.

At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.

He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.

Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.

He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open AI tools for robotics

NVIDIA has unveiled a new suite of open physical AI models and frameworks aimed at accelerating robotics and autonomous systems development. The announcement was made at CES 2026 in the US.

The new tools span simulation, synthetic data generation, training orchestration and edge deployment. NVIDIA said the stack enables robots and autonomous machines to reason, learn and act in real-world environments using shared 3D standards.

Developers showcased applications ranging from construction and factory robots to surgical and service systems. Companies including Caterpillar and NEURA Robotics demonstrated how digital twins and open AI models improve safety and efficiency.

NVIDIA said open-source collaboration is central to advancing physical AI. The company aims to shorten development cycles while supporting safer deployment of autonomous machines across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings AI agent to Chrome in the US

Google is rolling out an AI-powered browsing agent inside Chrome, allowing users to automate routine online tasks. The feature is being introduced in the US for AI Pro and AI Ultra subscribers.

The Gemini agent can interact directly with websites, including opening pages, clicking buttons and completing complex online forms. Testers reported successful use for tasks such as tax paperwork and licence renewals.

Google said Gemini AI integrates with password management tools while requiring user confirmation for payments and final transactions. Security safeguards and fraud detection systems have been built into Chrome for US users.

The update reflects Alphabet’s strategy to reposition Chrome as an intelligent operating agent. Google aims to move beyond search toward AI-driven personal task management.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Physical AI becomes central to LG’s robotics and automation ambitions

LG Group affiliates are expanding into physical AI by combining robotics hardware, industrial data, and advanced AI models. The strategy aims to deliver integrated autonomous systems across industries. The group is positioning itself along the complete robotics value chain.

LG Electronics is strengthening its role in robotic actuators that enable precise humanoid movement. Leveraging decades of motor engineering, it recently launched the AXIUM actuator brand. The company has also expanded its investments across robotics manufacturers.

The company’s AI Research division is developing models that help machines understand the physical world. A dedicated lab is integrating vision and language capabilities into robots and factory systems, with the aim of enabling machines to predict and act autonomously in real time.

The CNS division is training robots in task-specific skills, while LG Display is producing robot displays built on bendable panels that withstand harsh environments. Both divisions are drawing on their automotive and factory experience to build robots.

Power and sensing components round out the group’s robotics plans. LG Energy Solution produces high-performance batteries for mobile robots, while LG Innotek develops cameras and sensors. Group leaders see intelligent machines as key to future growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!