Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases. Under the ruling, users must therefore prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, with a claimed 1.5 million users, was also largely artificial: a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.
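
The right remediation depends on each deployment, but one part of that advice, auditing for keys that may already have leaked into code or configuration files, can be illustrated with a short script. The sketch below is a generic, hypothetical example rather than anything specific to Moltbook or Octane AI: the file types and key patterns are assumptions, and dedicated secret scanners cover far more formats.

```python
# Minimal sketch of auditing a project directory for strings that look like
# exposed API keys, so they can be revoked and rotated. Patterns and file
# types are illustrative assumptions, not Moltbook-specific formats.
import re
from pathlib import Path

KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                              # generic "sk-" style secrets
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),   # api_key = "..." style assignments
]
TEXT_SUFFIXES = {".py", ".js", ".ts", ".json", ".yaml", ".yml", ".toml", ".txt"}

def scan_for_keys(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, matched text) for every suspected key."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        if path.suffix not in TEXT_SUFFIXES and path.name != ".env":
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern in KEY_PATTERNS:
                match = pattern.search(line)
                if match:
                    findings.append((str(path), lineno, match.group(0)))
    return findings

if __name__ == "__main__":
    for file, lineno, snippet in scan_for_keys("."):
        print(f"{file}:{lineno}: possible exposed key: {snippet[:12]}...")
```

Any hits would then be revoked at the issuing service and replaced, while sandboxing agents and scoping keys narrowly limits the damage a future leak can cause.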

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

The incident underscores the growing need for stronger governance of AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation rather than signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why smaller AI models may be the smarter choice

Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog post ‘Do we really need frontier AI for everyday work?’. While frontier AI systems dominate headlines with ever-growing capabilities, their real-world value for routine professional tasks is often limited. For many people, much of daily work remains simple, repetitive, and predictable.

Kurbalija points out that large parts of professional life, from administration and law to healthcare and corporate management, operate within narrow linguistic and cognitive boundaries. Daily communication relies on a small working vocabulary, and most decision-making follows familiar mental patterns.

In this context, highly complex AI models are often unnecessary. Smaller, specialised systems can handle these tasks more efficiently, at lower cost and with fewer risks.

Using frontier AI for routine work, the author suggests, is like using a sledgehammer to crack a nut. These large models are designed to handle almost anything, but that breadth comes with higher costs, heavier governance requirements, and stronger dependence on major technology platforms.

In contrast, small language models tailored to specific tasks or organisations can be faster, cheaper, and easier to control, while still delivering strong results.

Kurbalija compares this to professional expertise itself. Most jobs never required having the Encyclopaedia Britannica open on the desk. Real expertise lives in procedures, institutions, and communities, not in massive collections of general knowledge.

Similarly, the most useful AI tools are often those designed to draft standard documents, summarise meetings, classify requests, or answer questions based on a defined body of organisational knowledge.
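
To make that concrete, the sketch below shows one such routine task, classifying incoming requests into a few predefined categories with a small, openly available model instead of a frontier system. The model choice and labels are illustrative assumptions, not recommendations from the original post.

```python
# Minimal sketch of routing routine work to a small, specialised model:
# zero-shot classification of incoming requests. The model and labels are
# illustrative assumptions; any comparable small model would serve.
from transformers import pipeline

# A few hundred million parameters, runnable on ordinary office hardware,
# in contrast to a frontier-scale model served from a large platform.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

LABELS = ["invoice query", "meeting request", "technical support", "other"]

def classify_request(text: str) -> str:
    """Return the most likely category for an incoming request."""
    result = classifier(text, candidate_labels=LABELS)
    return result["labels"][0]  # labels are returned sorted by score

if __name__ == "__main__":
    print(classify_request("Could we schedule a call next Tuesday to review the draft?"))
    # expected output: "meeting request"
```

A model of this size runs locally and keeps request data inside the organisation, which speaks to Kurbalija’s points about cost, control, and reduced dependence on major technology platforms.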

Diplomacy, an area Kurbalija knows well, illustrates both the strengths and limits of AI. Many diplomatic tasks are highly ritualised and can be automated using rules-based systems or smaller models. But core diplomatic skills, such as negotiation, persuasion, empathy, and trust-building, remain deeply human and resistant to automation. The lesson, he argues, is to automate routines while recognising where AI should stop.

The broader paradox is that large AI platforms may benefit more from users than users benefit from frontier AI. By sitting at the centre of workflows, these platforms collect valuable data and organisational knowledge, even when their advanced capabilities are not truly needed.

As Kurbalija concludes, a more common-sense approach would prioritise smaller, specialised models for everyday work, reserving frontier AI for genuinely complex tasks, and moving beyond the assumption that bigger AI is always better.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China gives DeepSeek conditional OK for Nvidia H200 chips

China has conditionally approved its leading AI startup DeepSeek to buy Nvidia’s H200 AI chips, with regulatory requirements still being finalised. The decision would add DeepSeek to a growing list of Chinese firms seeking access to the H200, one of Nvidia’s most powerful data-centre chips.

The reported approval follows earlier developments in which ByteDance, Alibaba and Tencent were allowed to purchase more than 400,000 H200 chips in total, suggesting Beijing is moving from broad caution to selective, case-by-case permissions. Separate coverage has described the approvals as a shift after weeks of uncertainty over whether China would allow imports, even as US export licensing was moving forward.

Nvidia’s CEO Jensen Huang, speaking in Taipei, said the company had not received confirmation of DeepSeek’s clearance and indicated the licensing process is still being finalised, underscoring the uncertainty for suppliers and buyers. China’s industry and commerce ministries have been involved in approvals, with conditions reportedly shaped by the state planner, the National Development and Reform Commission.

The H200 has become a high-stakes flashpoint in US-China tech ties because access to top-tier chips directly affects AI capability and competitiveness. US political scrutiny is also rising: a senior US lawmaker has alleged Nvidia provided technical support that helped DeepSeek develop advanced models later used by China’s military, according to a letter published by the House Select Committee on China; Nvidia has pushed back against such claims in subsequent reporting.

DeepSeek is also preparing a next-generation model, V4, expected in mid-February according to reporting that cited people familiar with the matter, a timeline that makes access to high-end compute especially consequential for the model’s schedule and performance.

Why does it matter?

If China’s conditional approvals translate into real shipments, they could ease a key bottleneck for Chinese AI development while extending Nvidia’s footprint in a market constrained by geopolitics. At the same time, the episode highlights how AI hardware is now regulated not only by Washington’s export controls but also by Beijing’s import approvals, with companies caught between shifting policy priorities.

Education and rights central to UN AI strategy

UN experts are intensifying efforts to shape a people-first approach to AI, warning that unchecked adoption could deepen inequality and disrupt labour markets. AI offers productivity gains, but benefits must outweigh social and economic risks, the organisation says.

UN Secretary-General António Guterres has repeatedly stressed that human oversight must remain central to AI decision-making. UN efforts now focus on ethical governance, drawing on the Global Digital Compact to align AI with human rights.

Education sits at the heart of the strategy. UNESCO has warned against prioritising technology investment over teachers, arguing that AI literacy should support, not replace, human development.

Labour impacts also feature prominently, with the International Labour Organization predicting widespread job transformation rather than inevitable net losses.

Access and rights remain key concerns. The UN has cautioned that AI dominance by a small group of technology firms could widen global divides, while calling for international cooperation to regulate harmful uses, protect dignity, and ensure the technology serves society as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AlphaGenome AI tool

Google has unveiled AlphaGenome, a new AI research tool designed to analyse the human genome and uncover the genetic roots of disease. The announcement was made in Paris, where researchers described the model as a major step forward.

AlphaGenome focuses on non-coding DNA, which makes up most of the human genome and plays a key role in regulating genes. Google scientists said the system can analyse extremely long DNA sequences at high resolution.

The model was developed by Google DeepMind using public genomic datasets from humans and mice. Researchers said the tool predicts how genetic changes influence biological processes inside cells.

Independent experts in the UK welcomed the advance but urged caution. Scientists at the University of Cambridge and the Francis Crick Institute noted that environmental factors still limit what AI models can explain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Engineers at Anthropic rely on AI for most software creation

Anthropic engineers are increasingly relying on AI to write the code behind the company’s products, with senior staff now delegating nearly all programming tasks to AI systems.

Claude Code lead Boris Cherny said he has not written any software by hand for more than two months, with all recent updates generated by Anthropic’s own models. Similar practices are reportedly spreading across internal teams.

Company leadership has previously suggested AI could soon handle most software engineering work from start to finish, marking a shift in how digital products are built and maintained.

The adoption of AI coding tools has accelerated across the technology sector, with firms citing major productivity gains and faster development cycles as automation expands.

Industry observers note the transition may reshape hiring practices and entry-level engineering roles, as AI increasingly performs core implementation tasks previously handled by human developers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deezer opens AI detection tool to rivals

French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.

Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.

The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.

Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic challenges Pentagon over military AI use

Pentagon officials are at odds with AI developer Anthropic over restrictions designed to prevent autonomous weapons targeting and domestic surveillance. The disagreement has stalled discussions under a $200 million contract.

Anthropic has expressed concern about its tools being used in ways that could harm civilians or breach privacy. The company emphasises that human oversight is essential for national security applications.

The dispute reflects broader tensions between Silicon Valley firms and government use of AI. Pentagon officials argue that commercial AI can be deployed as long as it follows US law, regardless of corporate guidelines.

Anthropic’s stance may affect its Pentagon contracts as the firm prepares for a public offering. The company continues to engage with officials while advocating for ethical AI deployment in defence operations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!