Customer data stolen in cyber attacks on Cartier and North Face

Fashion brand The North Face and luxury jeweller Cartier have confirmed recent cyber attacks that exposed customer data, including names and email addresses.

Neither company reported breaches of financial or password information.

The North Face identified the attack as a credential stuffing attempt, in which previously stolen passwords are used to break into other accounts.
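
Credential stuffing works because people reuse passwords across services. The sketch below is a purely hypothetical illustration of that mechanism (the names, passwords and services are invented and have nothing to do with The North Face's systems):

```python
# Hypothetical illustration of why credential stuffing succeeds:
# credentials leaked from one breach are replayed against another service.
# All data here is invented for demonstration purposes.

leaked_credentials = {("alice@example.com", "Winter2023!")}  # pair exposed in an earlier breach

other_site_accounts = {  # a different service where one user reused the same password
    "alice@example.com": "Winter2023!",
    "bob@example.com": "unique-long-passphrase",
}

for email, password in leaked_credentials:
    if other_site_accounts.get(email) == password:
        print(f"{email}: reused password, account exposed to credential stuffing")
    else:
        print(f"{email}: leaked pair does not work on this service")
```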

Affected customers are being advised to change their login details, while the company’s owner, VF Corporation, continues recovering from an earlier incident.

Cartier said the breach allowed brief access to limited client data but insisted that it quickly secured its systems.

Retailers such as Adidas, Victoria’s Secret, Harrods, and M&S have all been hit in recent months, prompting warnings that the industry remains an attractive target for cyber criminals.

OpenAI turns ChatGPT into AI gateway

OpenAI plans to reinvent ChatGPT as an all-in-one ‘super assistant’ that knows its users and becomes their primary gateway to the internet.

Details emerged from a partly redacted internal strategy document shared during the US government’s antitrust case against Google.

Rather than limiting ChatGPT to existing apps and websites, OpenAI envisions a future where the assistant supports everyday life—from suggesting recipes at home to taking notes at work or guiding users while travelling.

The company says the AI should evolve into a reliable, emotionally intelligent helper capable of handling various personal and professional tasks.

OpenAI also believes hardware will be key to this transformation. It recently acquired io, a start-up founded by former Apple designer Jony Ive, for $6.4 billion to develop AI-powered devices.

The company’s strategy outlines how upcoming models like o2 and o3, alongside capabilities such as multimodality and generative user interfaces, could make ChatGPT capable of taking meaningful action instead of simply offering responses.

The document also reveals OpenAI’s intention to back a regulation requiring tech platforms to allow users to set ChatGPT as their default assistant. Confident in its fast growth, research lead, and independence from ads, the company aims to maintain its advantage through bold decisions, speed, and self-disruption.

WhatsApp fixes deleted message privacy gap

WhatsApp is rolling out a privacy improvement that ensures deleted messages no longer linger in quoted replies, addressing a long-standing issue that exposed partial content users had intended to remove.

The update applies automatically, with no toggle required, and has begun reaching iOS users through version 25.12.73, with wider availability expected soon.

Until now, deleting a message for everyone in a chat has not removed it from quoted replies. That allowed fragments of deleted content to remain visible, undermining the purpose of deletion.

With the update, WhatsApp removes the associated quote entirely rather than leaving it in conversation threads, even in group or community chats.

WABetaInfo, which first spotted the update, noted that users delete messages for privacy or personal reasons, and that leaving quoted traces behind conflicted with those intentions.

The change ensures conversations reflect user expectations by entirely erasing deleted content, not only from the original message but also from any references.
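
One minimal way to picture the change, under an assumed data model that is purely illustrative and not WhatsApp's actual implementation: quoted replies keep a snapshot of the quoted text, so a "delete for everyone" must also scrub those snapshots.

```python
# Purely illustrative model of the behaviour change, not WhatsApp's implementation.
# Quoted replies store a snapshot of the quoted text, so deleting a message for
# everyone must also clear those snapshots to leave no trace behind.
from dataclasses import dataclass, field

@dataclass
class Message:
    msg_id: int
    text: str
    quoted_id: int | None = None
    quoted_snapshot: str | None = None   # copy of the quoted text shown in the reply

@dataclass
class Chat:
    messages: dict[int, Message] = field(default_factory=dict)

    def delete_for_everyone(self, msg_id: int) -> None:
        self.messages.pop(msg_id, None)
        for m in self.messages.values():          # new behaviour: scrub quoted traces too
            if m.quoted_id == msg_id:
                m.quoted_id = None
                m.quoted_snapshot = None
```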

Meta continues to develop new features for WhatsApp. Recent additions include voice chat in groups and a native interface for iPad. The company is also testing tools like AI-generated wallpapers, message summaries, and more refined privacy settings to enhance user control and experience further.

DeepSeek claims R1 model matches OpenAI

Chinese AI start-up DeepSeek has announced a major update to its R1 reasoning model, claiming it now performs on par with leading systems from OpenAI and Google.

The R1-0528 version, released following the model’s initial launch in January, reportedly surpasses Alibaba’s Qwen3, which debuted only weeks earlier in April.

According to DeepSeek, the upgrade significantly enhances reasoning, coding, and creative writing while cutting hallucination rates by half.

These improvements stem largely from greater computational resources applied after the training phase, allowing the model to outperform domestic rivals in benchmark tests involving maths, logic, and programming.

Unlike many Western competitors, DeepSeek takes an open-source approach. The company recently shared eight GitHub projects detailing methods to optimise computing, communication, and storage efficiency during training.

Its transparency and resource-efficient design have attracted attention, especially since its smaller distilled model rivals Alibaba’s Qwen3-235B while being nearly 30 times lighter.

Major Chinese tech firms, including Tencent, Baidu and ByteDance, plan to integrate R1-0528 into their cloud services for enterprise clients. DeepSeek’s progress signals China’s continued push into globally competitive AI, driven by a young team determined to offer high performance with fewer resources.

NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing that it is more than 376 times the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.
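
The ratio NSO cites follows directly from the jury's figures reported above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the ratio NSO cites, using the reported awards.
compensatory = 444_719          # USD, compensatory damages
punitive = 167_250_000          # USD, punitive damages

ratio = punitive / compensatory
print(f"punitive-to-compensatory ratio: {ratio:.0f}:1")  # roughly 376:1, versus the ~4:1 guidance
```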

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.

184 million passwords exposed in massive data breach

A major data breach has exposed over 184 million user credentials, including emails, passwords, and account details for platforms such as Google, Microsoft and government portals. It is still unclear whether this was due to negligence or deliberate criminal activity.

The unencrypted, unprotected database was discovered online by cybersecurity researcher Jeremiah Fowler, who confirmed many of the credentials were current and accurate. The breach highlights ongoing failures by data handlers to apply even the most basic security measures.

Fowler believes the data was gathered using infostealer malware, which silently extracts login information from compromised devices and sells it on the dark web. After the database was reported, the hosting provider took it offline, but the source remains unknown.

Security experts urge users to update passwords across all platforms, enable two-factor authentication, and use password managers and data removal services. In today’s hyper-connected world, the exposure of such critical information without encryption is seen as both avoidable and unacceptable.
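
One practical way to act on that advice, not mentioned in the article itself, is to check whether a password already appears in known breach corpora. The sketch below uses the public Pwned Passwords range API, which accepts only the first five characters of the password's SHA-1 hash, so the password itself never leaves the device:

```python
# Hedged sketch: query the Pwned Passwords k-anonymity range API to see how often
# a password appears in known breaches. Only a 5-character hash prefix is sent.
import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():               # each line is "HASH_SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_password_breached("password123"))  # a reused password shows a large count
```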

Microsoft takes down massive Lumma malware network

Microsoft has dismantled a major cybercrime operation centred around the Lumma Stealer malware, which had infected over 394,000 Windows devices globally.

In partnership with global law enforcement and industry partners, Microsoft seized more than 1,300 domains linked to the malware.

The malware was known for stealing sensitive data such as login credentials, bank details and cryptocurrency information, making it a go-to tool for cybercriminals since 2022.

The takedown followed an order from a US federal court and included help from the US Department of Justice, Europol, and Japan’s cybercrime unit.

Microsoft’s Digital Crimes Unit also received assistance from firms like Cloudflare and Bitsight to disrupt the infrastructure that supported Lumma’s Malware-as-a-Service network.

The operation is being hailed as a significant win against a sophisticated threat that had evolved to target Windows and Mac users. Security experts urge users to adopt strong cyber hygiene, including antivirus software, two-factor authentication, and password managers.

Microsoft’s action is part of a broader effort to tackle infostealers, which have fuelled a surge in data breaches and identity theft worldwide.

IIT Bombay and BharatGen lead AI push with cultural datasets

In a landmark effort to support AI research grounded in Indian knowledge systems, IIT Bombay has digitised 30 ancient textbooks covering topics such as astronomy, medicine and mathematics—some of them up to 18 centuries old.

The initiative, part of the government-backed AIKosh portal, has produced a dataset comprising approximately 218,000 sentences and 1.5 million words, now available to researchers across the country.

Launched in March, AIKosh serves as a national repository for datasets, models and toolkits to foster home-grown AI innovation.

Alongside BharatGen—a consortium led by IIT Bombay and comprising IIT Kanpur, IIT Madras, IIT Hyderabad, IIT Mandi, IIM Indore and IIIT Hyderabad—the institute has contributed 37 diverse models and datasets to the platform.

These contributions include 16 culturally significant datasets from IIT Bombay alone, as well as 21 AI models from BharatGen, which is supported by the Department of Science and Technology.

Professor Ganesh Ramakrishnan, who leads the initiative, said the team is developing sovereign AI models for India, trained from scratch and not merely fine-tuned versions of existing tools.

These models aim to be data- and compute-efficient while being culturally and linguistically relevant. The collection also includes datasets for audio-visual learning—such as tutorials on organic farming and waste-to-toy creation—mathematical reasoning in Hindi and English, image-based question answering, and video-text recognition.

One dataset even features question-answering derived from the works of historian Dharampal. ‘This is about setting benchmarks for the AI ecosystem in India,’ said Ramakrishnan, noting that the resources are openly available to researchers, enterprises and academic institutions alike.

China creates AI to detect real nuclear warheads

Chinese scientists have created the world’s first AI-based system capable of identifying real nuclear warheads from decoys, marking a significant step in arms control verification.

The breakthrough, developed by the China Institute of Atomic Energy (CIAE), could strengthen Beijing’s hand in stalled disarmament talks, although it also raises difficult questions about AI’s growing role in managing weapons of mass destruction.

The technology builds on a long-standing US–China proposal but faces key obstacles: how to train the AI on sensitive nuclear data, gain military approval without risking leaks of secrets, and persuade sceptical nations like the US to move past Cold War-era inspection methods.

So far, only the AI training has been completed, with the rest of the process still pending international acceptance.

The AI system uses deep learning and cryptographic protocols to analyse scrambled radiation signals from warheads behind a polythene wall, ensuring the weapons’ internal designs remain hidden.

The machine can verify a warhead’s chain-reaction potential without accessing classified details. According to CIAE, repeated randomised tests reduce the chance of deception to nearly zero.
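
The "nearly zero" claim rests on a standard probabilistic argument: if a single randomised test exposes a decoy with some probability, repeating independent tests drives the chance of a decoy slipping through towards zero. A minimal sketch of that argument, using an assumed per-test detection rate rather than any figure published by CIAE:

```python
# Illustrative probability argument behind repeated randomised verification tests.
# The per-test detection rate below is an assumption for demonstration,
# not a figure published by CIAE.

per_test_detection = 0.5          # assumed chance a single test exposes a decoy
for n in (1, 5, 10, 20):
    escape_probability = (1 - per_test_detection) ** n  # decoy passes all n tests
    print(f"{n:>2} tests: chance a decoy goes undetected = {escape_probability:.6f}")
```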

While both China and the US have pledged not to let AI control nuclear launch decisions, the new system underlines AI’s expanding role in national defence.

Beijing insists the AI can be jointly trained and sealed before use to ensure transparency, but sceptics remain wary of trust, backdoor access and growing militarisation of AI.

Uber’s product chief turns to AI for reports and research

Uber’s chief product officer, Sachin Kansal, is embracing AI to streamline his daily workflow—particularly through tools like ChatGPT, Google Gemini, and, soon, NotebookLM.

Speaking on ‘Lenny’s Podcast,’ Kansal revealed how AI summarisation helps him digest lengthy 50- to 100-page reports he otherwise wouldn’t have time to read. He uses AI to understand market trends and rider feedback across regions such as Brazil, South Korea, and South Africa.

Kansal also relies on AI as a research assistant. For instance, when exploring new driver features, he used ChatGPT’s deep research capabilities to simulate possible driver reactions and generate brainstorming ideas.

‘It’s an amazing research assistant,’ he said. ‘It’s absolutely a starting point for a brainstorm with my team.’

He’s now eyeing Google’s NotebookLM, a note-taking and research tool, as the next addition to his AI toolkit—especially its ‘Audio Overview’ feature, which turns documents into AI-generated podcast-style discussions.

Uber CEO Dara Khosrowshahi previously noted that too few of Uber’s 30,000+ employees are using AI and stressed that mastering AI tools, especially for coding, would soon be essential.
