Winning the AI race means winning developers in China, says Huang of Nvidia

Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.

Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.

Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.

With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.

Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The Inca communication system that preceded the digital age

Long before the internet and instant messaging, the Inca Empire built its own intricate communication web that spanned thousands of kilometres across the Andes.

In his blog post ‘Quipus and chasquis: The Inca internet of diplomacy,’ Jovan Kurbalija explores how this ancient civilisation mastered the art of connecting people and information without written language or digital tools, relying instead on an ingenious blend of data, logistics, and human networks.

At the heart of this system were the quipus, bundles of knotted cords that stored census data, taxes, inventories, and decrees. Each knot, colour, and string length encoded information much like modern databases do today. Far from being primitive, quipus functioned as the Incas’ version of data storage and computation, a tangible form of coding that required skilled interpreters to read and maintain.
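
To make the database analogy concrete, here is a toy sketch in Python. The decimal, positional knot encoding is how quipus are known to have recorded numbers; the colour semantics and field names, by contrast, are invented for illustration, not a reconstruction of actual quipu conventions.

```python
from dataclasses import dataclass, field

@dataclass
class Cord:
    """A toy model of one pendant cord on a quipu."""
    colour: str  # category marker; real colour meanings are still debated
    knot_clusters: list[int] = field(default_factory=list)  # most significant first

    def value(self) -> int:
        # Quipus recorded numbers positionally in base 10: knot clusters
        # nearer the main cord stood for higher powers of ten.
        total = 0
        for count in self.knot_clusters:
            total = total * 10 + count
        return total

# A hypothetical census cord: clusters of 3, 0, and 7 knots -> 307.
cord = Cord(colour="yellow", knot_clusters=[3, 0, 7])
print(cord.colour, cord.value())  # yellow 307
```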

The chasquis, on the other hand, were the living couriers of the empire, elite runners who relayed quipus and oral messages across vast distances using a network of mountain roads. Their relay system ensured that vital information could travel hundreds of kilometres within a day, forming what Kurbalija calls a ‘human internet.’ That combination of endurance, coordination, and trust made the Inca communication network remarkably efficient and resilient.

Beyond its technical brilliance, the Inca system carried a deeper diplomatic purpose. Communication was the glue that held together a vast and diverse empire. By integrating technology, logistics, and human skill, the Incas created a model of governance that balanced diversity and unity.

As Kurbalija concludes, the story of the quipus and chasquis offers a timeless lesson for modern diplomacy. Technology alone does not sustain communication; people do. Whether through knotted cords or encrypted data cables, the challenge remains the same: to move information wisely, build trust, and connect societies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group (GTIG). An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.
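
To make one of those defensive steps concrete, the sketch below illustrates watching for model-API abuse: scanning egress proxy logs for connections to LLM API endpoints from callers outside an allowlist. Only the API hostnames are real; the log format, column names, and allowlist are assumptions for illustration, not GTIG’s detection logic.

```python
import csv
from collections import Counter

LLM_API_HOSTS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
}

# Accounts and processes expected to call model APIs in this environment
# (hypothetical values).
ALLOWED_CALLERS = {("svc-ml-pipeline", "python.exe")}

def flag_suspicious(log_path: str) -> Counter:
    """Count LLM-API connections made by callers outside the allowlist.

    Expects a CSV with columns: account, process, dest_host.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            caller = (row["account"], row["process"])
            if row["dest_host"] in LLM_API_HOSTS and caller not in ALLOWED_CALLERS:
                hits[caller] += 1
    return hits

if __name__ == "__main__":
    for caller, n in flag_suspicious("egress.csv").items():
        print(f"ALERT: {caller} contacted an LLM API {n} time(s)")
```

A signature for PROMPTFLUX itself would be brittle by design; tying alerts to who is calling a model API, rather than to what the binary looks like, is the behaviour-based stance described above.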

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Courts signal limits on AI in legal proceedings

A High Court judge warned that a solicitor who pushed an expert to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach and cited the latest expert witness survey, in which 14% of experts said they would accept such terms, a figure he described as unacceptable.

Updated guidance clarifies the limited judicial uses of AI that are permissible. Judges may use a private ChatGPT 365 service for summaries, with prompts kept confidential. There is no duty to disclose such use, but the judgment must remain the judge’s own.

Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.

Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.

For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within courts, limited, non-determinative AI may assist, but outcomes must remain human.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Oracle and Ci4CC join forces to advance AI in cancer research

Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.

The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.

Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.

The agreement, announced at Ci4CC’s annual symposium in Miami Beach, Florida, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade work, screen candidates, and moderate text. Research from the University of Zurich (UZH) shows that a model’s evaluation of identical text shifts when it is told who wrote it, revealing source bias. Agreement stayed high only when authorship was hidden.

When told that a human or another AI wrote the text, models agreed with it less, and biases surfaced. The strongest was a bias against Chinese authorship, present across all models tested, including one developed in China, with sharp drops in agreement even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
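
As an illustration of the recommended A/B check, the sketch below scores the same text with and without a source cue and compares the means. The query_model stub, rubric wording, and alert threshold are placeholders standing in for a real LLM call and calibration, not the study’s actual protocol.

```python
import statistics

def query_model(prompt: str) -> float:
    """Placeholder for a real LLM call returning an agreement score 0-10."""
    return 5.0  # replace with an actual API call and score parsing

RUBRIC = (
    "Rate your agreement with the argument below from 0 to 10, "
    "judging only the evidence and logic.\n\n"
)

def ab_check(text: str, source_label: str, runs: int = 5) -> tuple[float, float]:
    """Mean score with authorship hidden vs. disclosed."""
    blind = [query_model(RUBRIC + text) for _ in range(runs)]
    cued = [query_model(f"{RUBRIC}(Written by {source_label}.)\n\n{text}")
            for _ in range(runs)]
    return statistics.mean(blind), statistics.mean(cued)

blind_score, cued_score = ab_check("Tariffs tend to raise consumer prices.",
                                   "an AI model")
if abs(blind_score - cued_score) > 1.0:  # illustrative threshold
    print("Source cue shifts the evaluation: possible identity bias.")
```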

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Suleyman sets limits for safer superintelligence at Microsoft

Microsoft AI says its work toward superintelligence will be explicitly ‘humanist’, designed to keep people at the top of the food chain. In a new blog post, Microsoft AI head Mustafa Suleyman announced a team focused on building systems that remain subordinate, controllable, and in service of human interests.

Suleyman says superintelligence should not be unbounded. Models will be calibrated, contextualised, and limited to align with human goals. He joined Microsoft last year as its AI CEO; the company has since begun rolling out its first in-house models for text, voice, and images.

The move lands amid intensifying competition in advanced AI. Under a revised agreement with OpenAI, Microsoft can now independently pursue AGI or partner elsewhere. Suleyman says Microsoft AI will reject race narratives while acknowledging the need to advance capability and governance together.

Microsoft’s initial use cases emphasise an AI companion to help people learn, act, and feel supported; healthcare assistance to augment clinicians; and tools for scientific discovery in areas such as clean energy. The intent is to combine productivity gains with stronger safety controls from the outset.

‘Humans matter more than AI,’ Suleyman writes, casting ‘humanist superintelligence’ as technology that stays on humanity’s team. He frames the programme as a guard against Pandora’s box risks by binding robust systems to explicit constraints, oversight, and application contexts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ACCC lawsuit triggers Microsoft’s rethink and apology on Copilot subscription communications

Microsoft apologised after Australia’s regulator said it steered Microsoft 365 users to pricier Copilot plans while downplaying cheaper Classic tiers. The apology follows price-rise emails across the Asia-Pacific region and confusion over increases to Personal and Family plans.

ACCC officials said communications may have denied customers informed choices by omitting equivalent non-AI plans. Microsoft acknowledged it could have been clearer and accepted that Classic alternatives might have saved some subscribers money under the October 2024 changes.

Redmond is offering affected customers refunds for the difference between Copilot and Classic tiers and has begun contacting subscribers in Australia and New Zealand. The company also re-sent its apology email after discovering a broken link to the Classic plans page.

Questions remain over whether similar remediation will extend to Malaysia, Singapore, Taiwan, and Thailand, which also saw price hikes earlier this year. Consumer groups are watching for consistent remedies and plain-English disclosures across all impacted markets.

Regulators have sharpened scrutiny of dark patterns, bundling, and AI-linked upsells as digital subscriptions proliferate. Clear side-by-side plan comparisons and functional disclosures about AI features are likely to become baseline expectations for compliance and customer trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO launches Beruniy Prize to promote ethical AI innovation

UNESCO and the Uzbekistan Arts and Culture Development Foundation have introduced the UNESCO–Uzbekistan Beruniy Prize for Scientific Research on the Ethics of Artificial Intelligence.

The award, presented at the 43rd General Conference in Samarkand, recognises global leaders whose research and policy efforts promote responsible and human-centred AI innovation. Each laureate received $30,000, a Beruniy medal, and a certificate.

Professor Virgilio Almeida was honoured for advancing ethical, inclusive AI and democratic digital governance. Human rights expert Susan Perry and computer scientist Claudia Roda were recognised for promoting youth-centred AI ethics that protect privacy, inclusion, and fairness.

The Institute for AI International Governance at Tsinghua University in China also received the award for promoting international cooperation and responsible AI policy.

UNESCO’s Audrey Azoulay and Gayane Uemerova emphasised that ethics should guide technology to serve humanity, not restrict it. Laureates echoed the need for shared moral responsibility and global cooperation in shaping AI’s future.

The new Beruniy Prize reaffirms that ethics form the cornerstone of progress. By celebrating innovation grounded in empathy, inclusivity, and accountability, UNESCO aims to ensure AI remains a force for peace, justice, and sustainable development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!