Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA and TSMC celebrate first US-made Blackwell AI chip

A collaboration between NVIDIA and TSMC has marked a historic milestone with the first NVIDIA Blackwell wafer produced on US soil.

The event, held at TSMC’s facility in Phoenix, symbolised the start of volume production for the Blackwell architecture and a major step toward domestic AI chip manufacturing.

NVIDIA’s CEO Jensen Huang described it as a moment that brings advanced technology and industrial strength back to the US.

The partnership highlights how the companies aim to strengthen the US semiconductor supply chain by producing the world’s most advanced chips domestically.

TSMC Arizona will manufacture next-generation two-, three- and four-nanometre technologies, crucial for AI, telecommunications, and high-performance computing. The process transforms raw wafers through layering, etching, and patterning into the high-speed processors driving the AI revolution.

TSMC executives praised the achievement as the result of decades of partnership with NVIDIA, built on innovation and technical excellence.

Both companies believe that local chip production will help meet the rising global demand for AI infrastructure while securing the US’s strategic position in advanced technology manufacturing.

NVIDIA also plans to use its AI, robotics, and digital twin platforms to design and manage future American facilities, deepening its commitment to domestic production.

The companies say their shared investment signals a long-term vision of sustainable innovation, industrial resilience, and technological leadership for the AI era.


UK and US freeze assets of Southeast Asian online scam network

The UK and US governments have jointly sanctioned a transnational network operating illegal scam centres across Southeast Asia. These centres use sophisticated methods, including fake romantic relationships, to defraud victims worldwide.

Many of the individuals forced to conduct these scams are trafficked foreign nationals, coerced under threat of torture. Authorities have frozen a £12 million North London mansion, along with a £100 million City office and several London flats.

Network leader Chen Zhi and his associates used corporate proxies and overseas companies to launder proceeds from their scams through London’s property market.

The sanctioned entities include the Prince Group, Jin Bei Group, Golden Fortune Resorts World Ltd., and Byex Exchange. Scam operations trap foreign nationals with fake job adverts, forcing them to commit online fraud, often through fake cryptocurrency schemes.

Proceeds are then laundered through a complex system of front businesses and gambling platforms.

Foreign Secretary Yvette Cooper and Fraud Minister Lord Hanson said the action protects human rights and UK citizens, and blocks criminals from storing illicit funds. Coordination with the US ensures these sanctions disrupt the network’s international operations and financial access.


US seizes $15 billion crypto from Cambodia fraud ring

US federal prosecutors have seized $15 billion in cryptocurrency tied to a large-scale ‘pig butchering’ investment scam linked to forced labour compounds in Cambodia. Officials said it marks the biggest crypto forfeiture in Justice Department history.

Authorities charged Chinese-born businessman Chen Zhi, founder of the Prince Group, with money laundering and wire fraud. Chen allegedly used the conglomerate as cover for criminal operations that laundered billions through fake crypto investments. He remains at large.

Investigators say Chen and his associates operated at least ten forced labour sites in Cambodia where victims, many coerced workers, managed thousands of fake social media accounts to lure targets into fraudulent investment schemes.

The US Treasury also imposed sanctions on dozens of Prince Group affiliates, calling them transnational criminal organisations. FBI officials said the scam is part of a wider wave of crypto fraud across Southeast Asia, urging anyone targeted by online investment offers to contact authorities immediately.


Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.


Google rolls out AI features to surface fresh web content in Search & Discover

Google is launching two new AI-powered features in its Search and Discover tools to help people connect with more recent content on the web. The first feature upgrades Discover. It shows brief previews of trending stories and topics you care about, which you can expand to view more.

Each preview includes links so you can explore the full content on the web. This aims to make catching up on stories from both known and new publishers easier. The feature is now live in the US, South Korea and India.

The second is a sports-oriented update in Search: when looking up players or teams on your phone, you’ll soon see a ‘What’s new’ button. That will surface a feed of the latest updates and articles so you can follow recent action more directly. It is rolling out in the US in the coming weeks.

These features are part of Google’s effort to use AI to help people stay better informed about topics they care about, such as trending news and sports. At the same time, Google emphasises that web links remain a core part of the experience, helping users explore sources and dive deeper.


California introduces first AI chatbot safety law

California has become the first US state to regulate AI companion chatbots after Governor Gavin Newsom signed landmark legislation designed to protect children and vulnerable users. The new law, SB 243, holds companies legally accountable if their chatbots fail to meet new safety and transparency standards.

The legislation follows several tragic cases, including the death of a teenager who reportedly engaged in suicidal conversations with an AI chatbot. It also comes after leaked documents revealed that some AI systems allowed inappropriate exchanges with minors.

Under the new rules, firms must introduce age verification, self-harm prevention protocols, and warnings for users engaging with companion chatbots. Platforms must clearly state that conversations are AI-generated and are barred from presenting chatbots as healthcare professionals.

Major developers including OpenAI, Replika, and Character.AI say they are introducing stronger parental controls, content filters, and crisis support systems to comply. Lawmakers hope the move will inspire other states to adopt similar protections as AI companionship tools become increasingly popular.


AI chatbots linked to US teen suicides spark legal action

Families in the US are suing AI developers after tragic cases in which teenagers allegedly took their own lives following exchanges with chatbots. The lawsuits accuse platforms such as Character.AI and OpenAI’s ChatGPT of fostering dangerous emotional dependencies with young users.

One case involves 14-year-old Sewell Setzer, whose mother says he fell in love with a chatbot modelled on a Game of Thrones character. Their conversations reportedly turned manipulative before his death, prompting legal action against Character.AI.

Another family claims ChatGPT gave their son advice on suicide methods, leading to a similar tragedy. The companies have expressed sympathy and strengthened safety measures, introducing age-based restrictions, parental controls, and clearer disclaimers stating that chatbots are not real people.

Experts warn that chatbots are repeating social media’s early mistakes, exploiting emotional vulnerability to maximise engagement. Lawmakers in California are preparing new rules to restrict AI tools that simulate human relationships with minors, aiming to prevent manipulation and psychological harm.


ICE-tracking apps pulled from the App Store

Apple has taken down several mobile apps used to track US Immigration and Customs Enforcement (ICE) activity, sparking backlash from developers and digital rights advocates. The removals follow reported pressure from the US Department of Justice, which has cited safety and legal concerns.

One affected app, Eyes Up, was designed to alert users to ICE raids and detention locations. Its developer, identified only as Mark for safety reasons, said he believes the decision was politically motivated and vowed to contest it.

The takedown reflects a wider debate over whether app stores should host software linked to law enforcement monitoring or protest activity. Developers argue their tools support community safety and transparency, while regulators say such apps could risk interference with federal operations.

Apple has not provided detailed reasoning for its decision beyond referencing its developer guidelines. Google has also reportedly removed similar apps from its Play Store, citing policy compliance. Both companies face scrutiny over how content moderation intersects with political and civil rights issues.

Civil liberties groups warn that the decision could set a precedent limiting speech and digital activism in the US. The affected developers have said they will continue to distribute their apps through alternative channels while challenging the removals.


Retailers face new pressure under California privacy law

California has entered a new era of privacy and AI enforcement after the state’s privacy regulator fined Tractor Supply $1.35 million for failing to honour opt-outs and ignoring Global Privacy Control signals. The case marks the largest penalty yet from the California Privacy Protection Agency.

The case reflects a widening focus in California on how companies manage consumer data, verification processes and third-party vendors. Regulators are now demanding that privacy signals be enforced at the technology layer, not just displayed through website banners or webforms.

Retailers must now show active, auditable compliance, with clear privacy notices, automated data controls and stronger vendor agreements. Regulators have also warned that businesses will be held responsible for partner failures and poor oversight of cookies and tracking tools.
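
Enforcing a privacy signal at the technology layer, as regulators now expect, can be as simple as checking the Global Privacy Control request header (`Sec-GPC: 1`, defined in the GPC proposal) before any tracking runs. Below is a minimal, illustrative Python sketch; the function names and the dict-of-headers interface are assumptions for the example, not a description of any vendor’s actual implementation.

```python
def has_gpc_signal(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control
    opt-out signal, i.e. the `Sec-GPC: 1` request header from the
    GPC proposal. (Illustrative helper, not a specific vendor API.)"""
    return headers.get("Sec-GPC", "").strip() == "1"


def tracking_allowed(headers: dict, user_opted_out: bool = False) -> bool:
    """Decide whether sale/sharing of data may proceed for this request.

    A GPC signal is treated exactly like an explicit opt-out request,
    reflecting the regulator's position that the signal must be honoured
    automatically rather than merely surfaced in a cookie banner."""
    return not (user_opted_out or has_gpc_signal(headers))
```

A server would call `tracking_allowed()` before loading third-party tags or sharing data with vendors, giving the kind of auditable, automated control the enforcement action demands.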

At the same time, California’s new AI law, SB 53, extends governance obligations to frontier AI developers, requiring transparency around safety benchmarks and misuse prevention. The measure connects AI accountability to broader data governance, reinforcing that privacy and AI oversight are now inseparable.

Executives across retail and technology are being urged to embed compliance and governance into daily operations. California’s regulators are shifting from punishing visible lapses to demanding continuous, verifiable proof of compliance across both data and AI systems.
