Surging AI use drives utility upgrades

The rapid rise of AI is placing unprecedented strain on the US power grid, as the electricity demands of massive data centres continue to surge.

Utilities nationwide are struggling to keep up, expanding infrastructure and revising rate structures to accommodate an influx of power-hungry facilities.

Regions like Northern Virginia have become focal points, where clusters of data centres each drawing tens of megawatts have created years-long waits for new grid connections.

Some next-generation AI systems are expected to require between 1 and 5 gigawatts of constant power, roughly the output of multiple Hoover Dams, posing significant challenges for energy suppliers and regulators alike.

In response, tech firms and utilities are considering a mix of solutions, including on-site natural gas generation, investments in small nuclear reactors, and greater reliance on renewable sources.

At the federal level, streamlined permitting and executive actions are being used to fast-track grid expansion and power plant development.

‘The scale of AI’s power appetite is unprecedented,’ said Dr Elena Martinez, senior grid strategist at the Centre for Energy Innovation. ‘Utilities must pivot now, combining smart-grid tech, diverse energy sources and regulatory agility to avoid systemic bottlenecks.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea joins US-led multinational cyber exercise

South Korea’s Cyber Operations Command is participating in a US-led multinational cyber exercise this week, the Ministry of National Defence in Seoul announced on Monday.

Seven personnel from the command are taking part in the five-day Cyber Flag exercise, which began in Virginia, United States. This marks South Korea’s fourth participation in the exercise since first joining in 2022.

Launched in 2011, Cyber Flag is an annual exercise designed to enhance cooperation between the United States and its allies, particularly the Five Eyes intelligence alliance, which includes Australia, Canada, New Zealand, the United Kingdom, and the United States. The exercise provides a platform for partner nations to strengthen their collective ability to detect, respond to, and mitigate cyber threats through practical, scenario-based training.

According to the Ministry, Cyber Flag, together with bilateral exercises between South Korean and US cyber commands and the exchange of personnel and technologies, is expected to further advance cooperation between the two countries in the cyber domain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fashion sector targeted again as Louis Vuitton confirms data breach

Louis Vuitton Hong Kong is under investigation after a data breach potentially exposed the personal information of around 419,000 customers, according to the South China Morning Post.

The company informed Hong Kong’s privacy watchdog on 17 July, more than a month after its French office first detected suspicious activity on 13 June. The Office of the Privacy Commissioner has now launched a formal inquiry.

Early findings suggest that compromised data includes names, passport numbers, birth dates, phone numbers, email addresses, physical addresses, purchase histories, and product preferences.

Although no complaints have been filed so far, the regulator is examining whether the reporting delay breached data protection rules and how the unauthorised access occurred. Louis Vuitton stated that it responded quickly with the assistance of external cybersecurity experts and confirmed that no payment details were involved.

The incident adds to a growing list of cyberattacks targeting fashion and retail brands in 2025. In May, fast fashion giant Shein confirmed a breach that affected customer support systems.

Security experts have warned that the sector remains a growing target due to high-value customer data and limited cyber defences. Louis Vuitton said it continues to upgrade its security systems and will notify affected individuals and regulators as the investigation continues.

‘We sincerely regret any concern or inconvenience this situation may cause,’ the company said in a statement.

[Dear readers, a previous version of this article highlighted incorrect information about a cyberattack on Puma. The information has been removed from our website, and we hereby apologise to Puma and our readers.]

How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.
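That ‘stranger test’ can also be applied mechanically before a prompt ever leaves your machine. The sketch below is a minimal, illustrative Python example (not from the article): it masks a few common identifier patterns, such as email addresses, US-style Social Security numbers and payment card numbers, before text is pasted into a chatbot. The pattern names and regular expressions are simplified assumptions; a real deployment would rely on dedicated PII-scrubbing tools.

```python
import re

# Illustrative only: simplified patterns for a few identifier types the article
# warns against sharing (ID numbers, financial data, contact details).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known identifier pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    sample = "My card 4111 1111 1111 1111 was declined; reach me at jane.doe@example.com"
    print(redact(sample))
    # -> My card [CARD_NUMBER REDACTED] was declined; reach me at [EMAIL REDACTED]
```

Even a crude filter like this limits what a model provider ever receives, which is the spirit of Baron’s advice.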

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta CEO unveils plan to spend hundreds of billions on AI data centres

Mark Zuckerberg has pledged to invest hundreds of billions of dollars to build a network of massive data centres focused on superintelligent AI. The initiative forms part of Meta’s wider push to lead the race in developing machines capable of outperforming humans in complex tasks.

The first of these centres, called Prometheus, is set to launch in 2026. Another facility, Hyperion, is expected to scale up to 5 gigawatts. Zuckerberg said the company is building several more AI ‘titan clusters’, each one covering an area comparable to a significant part of Manhattan.

He also cited Meta’s strong advertising revenue as the reason it can afford such bold spending despite investor concerns.

Meta recently regrouped its AI projects under a new division, Superintelligence Labs, following internal setbacks and high-profile staff departures.

The company hopes the division will generate fresh revenue streams through Meta AI tools, video ad generators, and wearable smart devices. It is reportedly considering dropping its most powerful open-source model, Behemoth, in favour of a closed alternative.

The firm has raised its 2025 capital expenditure forecast to as much as $72 billion and is actively hiring top talent, including former Scale AI CEO Alexandr Wang and ex-GitHub chief Nat Friedman.

Analysts say Meta’s AI investments are paying off in advertising but warn that the real return on long-term AI dominance will take time to emerge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DuckDuckGo adds new tool to block AI-generated images from search results

Privacy-focused search engine DuckDuckGo has launched a new feature that allows users to filter out AI-generated images from search results.

Although the company admits the tool is not perfect and may miss some content, it claims it will significantly reduce the number of synthetic images users encounter.

The new filter uses open-source blocklists, including a more aggressive ‘nuclear’ option, sourced from tools like uBlock Origin and uBlacklist.

Users can access the setting via the Images tab after performing a search or use a dedicated link — noai.duckduckgo.com — which keeps the filter always on and also disables AI summaries and the browser’s chatbot.

The update responds to growing frustration among internet users. Platforms like X and Reddit have seen complaints about AI content flooding search results.

In one example, users searching for ‘baby peacock’ reported seeing as many AI-generated images as real ones, if not more, making it harder to distinguish authentic content from fakes.

DuckDuckGo isn’t alone in trying to tackle unwanted AI material. In 2024, Hiya launched a Chrome extension aimed at spotting deepfake audio across major platforms.

Microsoft’s Bing has also partnered with groups like StopNCII to remove explicit synthetic media from its results, showing that the fight against AI content saturation is becoming a broader industry trend.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nearly 2 million patients affected in healthcare cyberattack

Anne Arundel Dermatology, a network of over 100 clinics across seven states, has confirmed a cyberattack that compromised patient data for nearly 1.9 million individuals.

The breach, which took place between 14 February and 13 May 2025, may have exposed sensitive personal and medical records.

The company responded swiftly by isolating affected systems, working with forensic experts and completing a full file review by 27 June.

While there is no evidence that the data was accessed or misused, patients were notified and offered 24 months of identity-theft protection.

The incident ranks among the largest reported healthcare data breaches this year, prompting mandatory notifications to state attorneys general and the HHS Office for Civil Rights.

Affected individuals are advised to monitor statements and credit reports carefully.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Drug‑testing firm exposes 748,000 records in breach

In a massive data breach revealed in July 2025, the Texas Alcohol & Drug Testing Service (TADTS) admitted hackers gained access to sensitive information belonging to approximately 748,763 individuals.

Attackers remained inside the network for five days in July 2024 before being detected, and hundreds of gigabytes of data were later leaked via the BianLian ransomware group.

Exposed records include a dangerous mix of personal and financial data—names, Social Security and passport numbers, driver’s licence and bank account details, biometric information, health‑insurance files and login credentials.

The breadth of this data presents a significant risk of identity theft and financial fraud.

Despite identifying the breach shortly after it occurred, TADTS delayed notifying those affected until July 2025 and offered no credit monitoring or identity theft protection services.

The company is now under class action scrutiny, with law firms investigating its response and the delays in breach notification.

Security experts warn that the extended timeline and broad data exposure could lead to scams, account takeovers and sustained damage to victims.

Affected individuals are urged to monitor statements, access free credit reports, and remain alert for suspicious activity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon targets routers in sweeping campaign

Since early 2025, the Chinese-linked hacking group Salt Typhoon has aggressively targeted telecom infrastructure worldwide, compromising routers, switches and edge devices used by clients of major operators such as Comcast, MTN and LG Uplus.

Exploiting known but unpatched vulnerabilities, attackers gained persistent access to these network devices, potentially enabling further intrusions into core telecom systems.

The pattern suggests a strategic shift: the group broadly sweeps telecom infrastructure to establish ready-made access across critical communication channels.

Affected providers emphasised that only client-owned hardware was breached and confirmed no internal networks were compromised, but the campaign raises deeper concerns.

Experts warn that such indiscriminate telecommunications targeting could threaten data security and disrupt essential services, revealing a long-term cyber‑espionage strategy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s container toolkit patched after critical bug

Cloud security researchers at Wiz have uncovered a critical vulnerability in Nvidia’s Container Toolkit, used widely across managed AI services, that could allow a malicious container to break out and gain full root privileges on the host system.

The vulnerability, tracked as CVE-2025-23266 and nicknamed ‘NVIDIAScape’, arises from unsafe handling of OCI hooks. Attackers can bypass container boundaries using a simple three-line Dockerfile, gaining access to host files, memory and GPU resources.

With Nvidia’s toolkit integral to GPU‑accelerated cloud offerings, the risk is systemic. A single compromised container could steal or corrupt sensitive data and AI models belonging to other tenants on the same infrastructure.

Nvidia has released a security advisory alongside updated toolkit versions. Users are strongly advised to apply patches immediately. Experts also recommend deploying additional isolation measures, such as virtual machines, to protect against container escape threats in multi-tenant AI environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!