Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026, after Meta introduced new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Medical group hit with £100,000 penalty after cyberattack exposes patient data

Emails containing sensitive health data were stolen from the Medical Specialist Group (MSG) in a 2021 cyberattack. The data was later used in phishing campaigns, prompting the Office of the Data Protection Authority (ODPA) to fine MSG £100,000 for failing to adequately safeguard personal data, in breach of data protection legislation.

Investigators found that the clinic’s email server was compromised in August 2021 and that the breach went undetected for more than three months. Health data is sensitive information requiring stringent protection, yet the ODPA found MSG had neglected routine security updates for thirteen months, and weaknesses in its threat-detection system led to multiple missed chances to identify unauthorised access to the server.

The ODPA has ordered MSG to pay £75,000 within 60 days and a further £25,000 after 14 months, with the final instalment waived if it completes an agreed security action plan. MSG said it has since invested in new technology, system monitoring and staff training. The exact number of stolen emails remains unclear, though thousands were left exposed to unauthorised access.

The breach adds to a growing list of cyberattacks on the healthcare sector over the past year, including the Anne Arundel Dermatology cyberattack, which affected nearly two million patients, and the McLaren Health Care ransomware attack, which affected over 700,000 individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AWS outage turned a mundane DNS slip into global chaos

Cloudflare’s boss summed up the mood after Monday’s chaos, relieved his firm wasn’t to blame as outages rippled across more than 1,000 companies. Snapchat, Reddit, Roblox, Fortnite, banks, and government portals faltered together, exposing how much of the web leans on Amazon Web Services.

AWS is the backbone for a vast slice of the internet, renting compute, storage, and databases so firms avoid running their own stacks. However, a mundane Domain Name System error in its Northern Virginia region scrambled routing, leaving services online yet unreachable as traffic lost its map.

Engineers call it a classic failure mode: ‘It’s always DNS.’ Misconfigurations, maintenance slips, or server faults can cascade quickly across shared platforms. AWS says teams moved to mitigate, but the episode showed how a small mistake at scale becomes a global headache in minutes.

Experts warned of concentration risk: when one hyperscaler stumbles, many fall. Yet few true alternatives exist at AWS’s scale beyond Microsoft Azure and Google Cloud, with smaller rivals from IBM to Alibaba, and fledgling European plays, far behind.

Calls for UK and EU cloud sovereignty are growing, but timelines and costs are steep. Monday’s outage is a reminder that resilience requires multi-region and multi-cloud designs, tested failovers, and clear incident communications, not just faith in a single provider.
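
As a minimal sketch of that multi-region idea (not AWS’s actual setup, and with hypothetical hostnames), the Python snippet below checks whether a primary endpoint still resolves in DNS and routes traffic to a secondary region when it does not:

```python
import socket

# Hypothetical endpoints for illustration only; substitute your own
# primary and secondary (multi-region or multi-cloud) hostnames.
PRIMARY = "api.us-east-1.example.com"
FALLBACK = "api.eu-west-1.example.com"


def resolves(hostname: str) -> bool:
    """Return True if the hostname currently resolves in DNS."""
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        # 'It's always DNS': the backend may be healthy, yet clients
        # cannot reach it once its name stops resolving.
        return False


def pick_endpoint() -> str:
    """Prefer the primary region; fail over if its DNS record is gone."""
    return PRIMARY if resolves(PRIMARY) else FALLBACK


if __name__ == "__main__":
    print(f"Routing traffic to: {pick_endpoint()}")
```

In practice the same check would sit behind health probes and weighted DNS or load-balancer rules, and the failover path would be exercised regularly rather than trusted to work on the day it is needed.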

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IAEA launches initiative to protect AI in nuclear facilities

The International Atomic Energy Agency (IAEA) has launched a new research project to strengthen computer security for AI in the nuclear sector. The initiative aims to support safe adoption of AI technologies in nuclear facilities, including small modular reactors and other applications.

AI and machine learning systems are increasingly used in the nuclear industry to improve operational efficiency and enhance security measures, such as threat detection. These technologies bring risks like data manipulation or misuse, requiring strong cybersecurity and careful oversight.

The Coordinated Research Project (CRP) on Enhancing Computer Security of Artificial Intelligence Applications for Nuclear Technologies will develop methodologies to identify vulnerabilities, implement protection mechanisms, and create AI-enabled security assessment tools.

Training frameworks will also be established to develop human resources capable of managing AI securely in nuclear environments.

Research organisations from all IAEA member states are invited to join the CRP. Proposals must be submitted by 30 November 2025, and participation by women and young researchers is particularly encouraged. The IAEA provides further details on its CRP contact page.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China leads global generative AI adoption with 515 million users

The use of generative AI in China has expanded at an unprecedented pace, reaching 515 million users in the first half of 2025.

The figure, released by the China Internet Network Information Centre, is more than double the number recorded in December 2024 and represents an adoption rate of 36.5 per cent.
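
As a quick back-of-the-envelope check (assuming the adoption rate is expressed as a share of China’s total population, which the report does not state explicitly), the two figures are consistent:

```python
# Rough consistency check between the reported user count and adoption rate.
users = 515_000_000      # generative AI users, first half of 2025
adoption_rate = 0.365    # 36.5 per cent

implied_base = users / adoption_rate
print(f"Implied population base: {implied_base / 1e9:.2f} billion")  # ~1.41 billion
```

The implied base of roughly 1.41 billion lines up with China’s total population.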

Such growth is driven by strong digital infrastructure and the state’s determination to make AI a central tool of national development.

The country’s ‘AI Plus’ strategy aims to integrate AI across all sectors of society and the economy. The majority of users rely on domestic platforms such as DeepSeek, Alibaba Cloud’s Qwen and ByteDance’s Doubao, as access to leading Western models remains restricted.

Young and well-educated citizens dominate the user base, underlining the government’s success in promoting AI literacy among key demographics.

Microsoft’s recent research confirms that China has the world’s largest AI market, surpassing the US in total users. While US adoption has remained steady, China’s domestic ecosystem continues to accelerate, fuelled by policy support and public enthusiasm for generative tools.

China also leads the world in AI-related intellectual property, with over 1.5 million patent applications accounting for nearly 39 per cent of the global total.

The rapid adoption of home-grown AI technologies reflects a strategic drive for technological self-reliance and positions China at the forefront of global digital transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud and NVIDIA join forces to accelerate enterprise AI and industrial digitalisation

NVIDIA and Google Cloud are expanding their collaboration to bring advanced AI computing to a wider range of enterprise workloads.

The new Google Cloud G4 virtual machines, powered by NVIDIA RTX PRO 6000 Blackwell GPUs, are now generally available, combining high-performance computing with scalability for AI, design, and industrial applications.

The announcement also makes NVIDIA Omniverse and Isaac Sim available on the Google Cloud Marketplace, offering enterprises new tools for digital twin development, robotics simulation, and AI-driven industrial operations.

These integrations enable customers to build realistic virtual environments, train intelligent systems, and streamline design processes.

Powered by the Blackwell architecture, the RTX PRO 6000 GPUs support next-generation AI inference and advanced graphics capabilities. Enterprises can use them to accelerate complex workloads ranging from generative and agentic AI to high-fidelity simulations.

The partnership strengthens Google Cloud’s AI infrastructure and reinforces NVIDIA’s position as a leading provider of end-to-end computing for enterprise transformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s Unitree reveals next-generation humanoid ahead of major IPO

Unitree Robotics has unveiled its most lifelike humanoid robot to date, marking a bold step forward in China’s rapidly advancing robotics industry.

The new H2 humanoid model, showcased in a short social media video, demonstrated remarkable agility and expressiveness, performing intricate dance moves with striking humanlike grace.

The 180cm-tall, 70kg robot features a silver face with defined eyes, lips and nose, alongside the tagline ‘Destiny Awakening – born to serve everyone safely and friendly’.

The model represents the company’s growing ambition as it prepares for a mainland listing valued at around US$7 billion.

Unitree’s progress underscores China’s growing strength in humanoid robotics, a field increasingly dominated by domestic innovation and manufacturing capabilities.

As global competition intensifies, the company aims to position itself at the forefront of human-robot interaction and industrial automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Startup raises $9m to orchestrate Gulf digital infrastructure

Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm has raised a $9m seed round from CIV, General Catalyst and Lux Capital, with angel investors including Chris Ré, Amjad Masad and Amira Sajwani.

Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.

Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.

Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.

First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!