Fashion sector targeted again as Louis Vuitton confirms data breach

Louis Vuitton Hong Kong is under investigation after a data breach potentially exposed the personal information of around 419,000 customers, according to the South China Morning Post.

The company informed Hong Kong’s privacy watchdog on 17 July, more than a month after its French office first detected suspicious activity on 13 June. The Office of the Privacy Commissioner has now launched a formal inquiry.

Early findings suggest that compromised data includes names, passport numbers, birth dates, phone numbers, email addresses, physical addresses, purchase histories, and product preferences.

Although no complaints have been filed so far, the regulator is examining whether the reporting delay breached data protection rules and how the unauthorised access occurred. Louis Vuitton stated that it responded quickly with the assistance of external cybersecurity experts and confirmed that no payment details were involved.

The incident adds to a growing list of cyberattacks targeting fashion and retail brands in 2025. In May, fast fashion giant Shein confirmed a breach that affected customer support systems.

[Correction] Contrary to some reports, Puma was not affected by a ransomware attack in 2025; the claim is not corroborated by any verified public disclosure or statement from the company. Please disregard previous mentions suggesting otherwise.

Security experts have warned that the sector remains a growing target due to high-value customer data and limited cyber defences. Louis Vuitton said it continues to upgrade its security systems and will notify affected individuals and regulators as the investigation continues.

‘We sincerely regret any concern or inconvenience this situation may cause,’ the company said in a statement.


How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta CEO unveils plan to spend hundreds of billions on AI data centres

Mark Zuckerberg has pledged to invest hundreds of billions of dollars to build a network of massive data centres focused on superintelligent AI. The initiative forms part of Meta’s wider push to lead the race in developing machines capable of outperforming humans in complex tasks.

The first of these centres, called Prometheus, is set to launch in 2026. Another facility, Hyperion, is expected to scale up to 5 gigawatts. Zuckerberg said the company is building several more AI ‘titan clusters’, each with a footprint covering a significant portion of Manhattan.

He also cited Meta’s strong advertising revenue as the reason it can afford such bold spending despite investor concerns.

Meta recently regrouped its AI projects under a new division, Superintelligence Labs, following internal setbacks and high-profile staff departures.

The company hopes the division will generate fresh revenue streams through Meta AI tools, video ad generators, and wearable smart devices. It is reportedly considering abandoning its most powerful open-source model, Behemoth, in favour of a closed alternative.

The firm has increased its 2025 capital expenditure to up to $72 billion and is actively hiring top talent, including former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman.

Analysts say Meta’s AI investments are paying off in advertising but warn that the real return on long-term AI dominance will take time to emerge.


DuckDuckGo adds new tool to block AI-generated images from search results

Privacy-focused search engine DuckDuckGo has launched a new feature that allows users to filter out AI-generated images from search results.

Although the company admits the tool is not perfect and may miss some content, it claims it will significantly reduce the number of synthetic images users encounter.

The new filter uses open-source blocklists, including a more aggressive ‘nuclear’ option, sourced from tools like uBlock Origin and uBlacklist.

Users can access the setting via the Images tab after performing a search or use a dedicated link — noai.duckduckgo.com — which keeps the filter always on and also disables AI summaries and the browser’s chatbot.
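Domain blocklist filtering of this kind can be illustrated with a minimal sketch. The domains and function names below are hypothetical, and this is not DuckDuckGo's actual implementation; it only shows the general idea of dropping search results whose hostname appears on (or under) a blocklisted domain.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of AI-image domains (illustrative only).
BLOCKLIST = {"ai-images.example", "genart.example"}

def is_blocked(url: str, blocklist: set[str]) -> bool:
    """Return True if the URL's hostname is a blocklisted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blocklist)

def filter_results(urls: list[str], blocklist: set[str]) -> list[str]:
    """Keep only the results that do not match the blocklist."""
    return [u for u in urls if not is_blocked(u, blocklist)]
```

For example, `filter_results(["https://photos.example/peacock.jpg", "https://ai-images.example/peacock.png"], BLOCKLIST)` keeps only the first URL. Real blocklists such as those shipped with uBlock Origin or uBlacklist use richer pattern syntax than plain domain matching.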

The update responds to growing frustration among internet users. Platforms like X and Reddit have seen complaints about AI content flooding search results.

In one example, users searching for ‘baby peacock’ reported seeing as many AI-generated images as real ones, or more, making it harder to distinguish authentic content from fake.

DuckDuckGo isn’t alone in trying to tackle unwanted AI material. In 2024, Hiya launched a Chrome extension aimed at spotting deepfake audio across major platforms.

Microsoft’s Bing has also partnered with groups like StopNCII to remove explicit synthetic media from its results, showing that the fight against AI content saturation is becoming a broader industry trend.


Nearly 2 million patients affected in healthcare cyberattack

Anne Arundel Dermatology, a network of over 100 clinics across seven states, has confirmed a cyberattack that compromised patient data for nearly 1.9 million individuals.

The breach, which occurred between 14 February and 13 May 2025, may have exposed sensitive personal and medical records.

The company responded swiftly by isolating affected systems, working with forensic experts and completing a full file review by 27 June.

While there is no evidence that the data was accessed or misused, patients were notified and offered 24 months of identity-theft protection.

The incident ranks among the largest reported healthcare data breaches this year, prompting mandatory notifications to state attorneys general and the HHS Office for Civil Rights.

Affected individuals are advised to monitor statements and credit reports carefully.


Drug‑testing firm exposes 748,000 records in breach

In a massive data breach revealed in July 2025, the Texas Alcohol & Drug Testing Service (TADTS) admitted that hackers had gained access to sensitive information belonging to 748,763 individuals.

Attackers remained inside the network for five days in July 2024 before detection, later leaking hundreds of gigabytes of data via the BianLian ransomware group.

Exposed records include a dangerous mix of personal and financial data—names, Social Security and passport numbers, driver’s licence and bank account details, biometric information, health‑insurance files and login credentials.

The breadth of this data presents a significant risk of identity theft and financial fraud.

Despite identifying the breach shortly after it occurred, TADTS delayed notifying those affected until July 2025 and provided no credit monitoring or identity theft protection services.

The company is now under class action scrutiny, with law firms investigating its response and breach notification delays.

Security experts warn that the extended timeline and broad data exposure could lead to scams, account takeovers and sustained damage to victims.

Affected individuals are urged to monitor statements, access free credit reports, and remain alert for suspicious activity.


Salt Typhoon targets routers in sweeping campaign

Since early 2025, the Chinese-linked hacking group Salt Typhoon has aggressively targeted telecom infrastructure worldwide, compromising routers, switches and edge devices used by clients of major operators such as Comcast, MTN and LG Uplus.

Exploiting known but unpatched vulnerabilities, attackers gained persistent access to these network devices, potentially enabling further intrusions into core telecom systems.

The pattern suggests a strategic shift: the group broadly sweeps telecom infrastructure to establish ready-made access across critical communication channels.

Affected providers emphasised that only client-owned hardware was breached and confirmed no internal networks were compromised, but the campaign raises deeper concerns.

Experts warn that such indiscriminate telecommunications targeting could threaten data security and disrupt essential services, revealing a long-term cyber‑espionage strategy.


Nvidia’s container toolkit patched after critical bug

Cloud security researchers at Wiz have uncovered a critical misconfiguration in Nvidia’s Container Toolkit, used widely across managed AI services, that could allow a malicious container to break out and gain full root privileges on the host system.

The vulnerability, tracked as CVE‑2025‑23266 and nicknamed ‘NVIDIAScape’, arises from unsafe handling of OCI hooks. Attackers can bypass container boundaries using a simple three‑line Dockerfile, gaining access to host files, memory and GPU resources.

With Nvidia’s toolkit integral to GPU‑accelerated cloud offerings, the risk is systemic. A single compromised container could steal or corrupt sensitive data and AI models belonging to other tenants on the same infrastructure.

Nvidia has released a security advisory alongside updated toolkit versions. Users are strongly advised to apply patches immediately. Experts also recommend deploying additional isolation measures, such as virtual machines, to protect against container escape threats in multi-tenant AI environments.


Crypto crime surges to record levels in 2025

The cryptocurrency industry faces a record-breaking year for theft in 2025, with losses surpassing $2.17 billion by mid-July, according to a Chainalysis report. That figure already exceeds the total stolen in all of 2024, highlighting a concerning rise in digital asset crime.

A large proportion, around $1.5 billion, stems from the North Korea-linked Bybit hack, which accounts for nearly 70% of thefts targeting crypto services this year.

While centralised exchanges remain prime targets, personal wallets now represent almost a quarter of stolen funds. The report highlights a rise in violent ‘wrench attacks,’ where criminals coerce Bitcoin holders into revealing private keys through threats or physical force.

Kidnappings of crypto executives and their family members have also increased, with 2025 on track to see twice as many such physical attacks as in previous years.

Sophistication in laundering stolen crypto varies depending on the target. Hackers focusing on exchanges use advanced techniques like chain-hopping and mixers to obscure transactions.

Conversely, attackers targeting personal wallets often employ simpler methods. Notably, some criminals are holding stolen assets for longer, while others pay fees up to 14.5 times the average to move illicit funds quickly and avoid detection.


Quantum tech could break online security, warns India

The Indian Computer Emergency Response Team (CERT-In), alongside cybersecurity firm SISA, cautions that quantum computers could soon break the encryption used to protect everything from online banking to personal identity systems.

CERT-In’s new white paper outlines how attackers may already be stockpiling encrypted data to unlock later using quantum tools, a tactic called ‘harvest now, decrypt later’. If left unaddressed, this strategy could expose sensitive data stored today once quantum technology matures.

AI is adding to the urgency. As it becomes more deeply embedded in digital systems, it also widens access to user data, raising the stakes if encryption is compromised. India’s biggest digital systems, including Aadhaar, cryptocurrencies, and smart devices, are seen as particularly exposed to this looming risk.

Everyday users are advised to take precautions: update devices regularly, use strong passwords with multi-factor authentication, and avoid storing sensitive data online long-term. Services like Signal or ProtonMail, which use strong encryption, are also recommended.
