New GLOBAL GROUP ransomware targets all major operating systems

A sophisticated new ransomware threat, dubbed GLOBAL GROUP, has emerged on cybercrime forums, designed to target Windows, Linux, and macOS systems with a single cross-platform toolset.

In June 2025, a threat actor operating under the alias ‘Dollar Dollar Dollar’ launched the GLOBAL GROUP Ransomware-as-a-Service (RaaS) platform on the Ramp4u forum. The campaign offers affiliates scalable tools, automated negotiations, and generous profit-sharing, creating an appealing setup for monetising cybercrime at scale.

GLOBAL GROUP is written in Go (Golang) and compiled into monolithic binaries, enabling seamless execution across varied operating environments in a single campaign. The strategy expands attackers’ reach, allowing them to exploit hybrid infrastructures while improving operational efficiency and scalability.

Golang’s concurrency model and static linking make it an attractive option for rapid, large-scale encryption without relying on external dependencies. However, forensic analysis by Picus Security Labs suggests GLOBAL GROUP is not an entirely original threat but rather a rebrand of previous ransomware operations.

Researchers linked its code and infrastructure to the now-defunct Mamona RIP and BlackLock families, revealing continuity in tactics and tooling. Evidence includes a reused mutex string, ‘Global\Fxo16jmdgujs437’, also found in earlier Mamona RIP samples, confirming code inheritance.

The re-use of such technical markers highlights how threat actors often evolve existing malware rather than building from scratch, streamlining development and deployment.

Beyond its cross-platform flexibility, GLOBAL GROUP integrates modern cryptography to make its encryption fast and robust. It employs the ChaCha20-Poly1305 authenticated encryption algorithm, which provides both confidentiality and message integrity at high processing speed.

The malware leverages Golang’s goroutines to encrypt all system drives simultaneously, reducing execution time and limiting defenders’ reaction window. Encrypted files receive customised extensions like ‘.lockbitloch’, with filenames also obscured to hinder recovery efforts without the correct decryption key.

Ransom note logic is embedded directly within the binary, generating tailored communication instructions and linking to Tor-based leak sites. The approach simplifies extortion for affiliates while preserving operational security and ensuring anonymous negotiations with victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.

AI governance needs urgent international coordination

A GIS Reports analysis emphasises that as AI systems become pervasive, they create significant global challenges, including surveillance risks, algorithmic bias, cyber vulnerabilities, and environmental pressures.

Unlike legacy regulatory regimes, AI technology blurs the lines among privacy, labour, environmental, security, and human rights domains, demanding a uniquely coordinated governance approach.

The report highlights that leading AI research and infrastructure remain concentrated in advanced economies: over half of general‑purpose AI models originated in the US, exacerbating global inequalities.

Meanwhile, technologies such as facial recognition and deepfake generators threaten civic trust, amplify disinformation, and could even provoke geopolitical incidents if weaponised in defence systems.

The analysis calls for urgent public‑private cooperation and a new regulatory paradigm to address these systemic issues.

Recommendations include forming international expert bodies akin to the IPCC, and creating cohesive governance that bridges labour rights, environmental accountability, and ethical AI frameworks.

Hidden malware in DNS records bypasses defences

Security researchers at DomainTools have revealed a novel and stealthy cyberattack method: embedding malware within DNS records. Attackers are storing tiny, encoded pieces of malicious code inside TXT records across multiple subdomains.

The fragments are individually benign, but once fetched and reassembled, typically using PowerShell, they form fully operational malware, including Joke Screenmate prankware and a more serious PowerShell stager that can download further payloads.

DNS traffic is often treated as trustworthy and bypasses many security controls. The growing use of encrypted DNS services like DoH and DoT makes visibility even harder, creating an ideal channel for covert malware delivery.

Reported cases include the fragmentation of Joke Screenmate across hundreds of subdomain TXT records and instances of Covenant C2 stagers hidden in this manner.

Security teams are urged to ramp up DNS analytics, monitor uncommon TXT query patterns, and use comprehensive threat intelligence feeds. While still rare in the wild, the technique’s simplicity and stealth suggest it could soon gain traction.

Co-op confirms massive data breach as retail cyberattacks surge

All 6.5 million members of the Co-op had their personal data compromised in a cyberattack carried out on 30 April, the company’s chief executive has confirmed.

Shirine Khoury-Haq said the breach felt ‘personal’ after seeing the toll it took on IT teams fighting off the intrusion. She spoke in her first interview since the breach, broadcast on BBC Breakfast.

Initial statements from the Co-op described the incident as having only a ‘small impact’ on internal systems, including call centres and back-office operations.

Alleged hackers soon contacted media outlets and claimed to have accessed both employee and customer data, prompting the company to update its assessment.

The Co-op later admitted that data belonging to a ‘significant number’ of current and former members had been stolen. Exposed information included names, addresses, and contact details, though no payment data was compromised.

Restoration efforts are still ongoing as the company works to rebuild affected back-end systems. In some locations, operational disruption led to empty shelves and prolonged outages.

Khoury-Haq recalled meeting employees during the remediation phase and said she was ‘incredibly sorry’ for the incident. ‘I will never forget the looks on their faces,’ she said.

The attackers’ movements were closely tracked. ‘We were able to monitor every mouse click,’ Khoury-Haq added, noting that this helped authorities in their investigation.

The company reportedly disconnected parts of its network in time to prevent ransomware deployment, though not in time to avoid significant damage. Police said four individuals were arrested earlier this month in connection with the Co-op breach and related retail incidents. All have been released on bail.

Marks & Spencer and Harrods were also hit by cyberattacks in early 2025, with M&S still restoring affected systems. Researchers believe the same threat actor is responsible for all three attacks.

The group, identified as Scattered Spider, has previously disrupted other high-profile targets, including major US casinos in 2023.

Fashion sector targeted again as Louis Vuitton confirms data breach

Louis Vuitton Hong Kong is under investigation after a data breach potentially exposed the personal information of around 419,000 customers, according to the South China Morning Post.

The company informed Hong Kong’s privacy watchdog on 17 July, more than a month after its French office first detected suspicious activity on 13 June. The Office of the Privacy Commissioner has now launched a formal inquiry.

Early findings suggest that compromised data includes names, passport numbers, birth dates, phone numbers, email addresses, physical addresses, purchase histories, and product preferences.

Although no complaints have been filed so far, the regulator is examining whether the reporting delay breached data protection rules and how the unauthorised access occurred. Louis Vuitton stated that it responded quickly with the assistance of external cybersecurity experts and confirmed that no payment details were involved.

The incident adds to a growing list of cyberattacks targeting fashion and retail brands in 2025. In May, fast fashion giant Shein confirmed a breach that affected customer support systems.

[Correction] Contrary to some reports, Puma was not affected by a ransomware attack in 2025. This claim appears to be inaccurate and is not corroborated by any verified public disclosures or statements by the company. Please disregard any previous mentions suggesting otherwise.

Security experts have warned that the sector remains a growing target due to high-value customer data and limited cyber defences. Louis Vuitton said it continues to upgrade its security systems and will notify affected individuals and regulators as the investigation continues.

‘We sincerely regret any concern or inconvenience this situation may cause,’ the company said in a statement.

[Dear readers, a previous version of this article highlighted incorrect information about a cyberattack on Puma. The information has been removed from our website, and we hereby apologise to Puma and our readers.]

How to keep your data safe while using generative AI tools

Generative AI tools have become a regular part of everyday life, both professionally and personally. Despite their usefulness, concern is growing about how they handle private data shared by users.

Major platforms like ChatGPT, Claude, Gemini, and Copilot collect user input to improve their models. Much of this data handling occurs behind the scenes, raising transparency and security concerns.

Anat Baron, a generative AI expert, compares AI models to Pac-Man—constantly consuming data to enhance performance. The more information they receive, the more helpful they become, often at the expense of privacy.

Many users ignore warnings not to share sensitive information. Baron advises against sharing anything with AI that one would not give to a stranger, including ID numbers, financial data, and medical results.

Some platforms offer options to reduce data collection. ChatGPT users can disable training under ‘Data Controls’, while Claude collects data only if users opt in. Perplexity and Gemini offer similar, though less transparent, settings.

Microsoft’s Copilot protects organisational data when logged in, but risks increase when used anonymously on the web. DeepSeek, however, collects user data automatically with no opt-out—making it a risky choice.

Users still retain control, but must remain alert. AI tools are evolving, and with digital agents on the horizon, safeguarding personal information is becoming even more critical. Baron sums it up simply: ‘Privacy always comes at a cost. We must decide how much we’re willing to trade for convenience.’

Agentic AI gains ground as GenAI maturity grows in public sector

Public sector organisations around the world are rapidly moving beyond experimentation with generative AI (GenAI), with up to 90% now planning to explore, pilot, or implement agentic AI systems within the next two years.

Capgemini’s latest global survey of 350 public sector agencies found that most already use or trial GenAI, while agentic AI is being recognised as the next step — enabling autonomous, goal-driven decision-making with minimal human input.

Unlike GenAI, which generates content subject to human oversight, agentic AI can act independently, creating new possibilities for automation and public service delivery.

Dr Kirti Jain of Capgemini explained that GenAI depends on human-in-the-loop (HITL) processes, where users review outputs before acting. By contrast, agentic AI completes the final step itself, representing a future phase of automation. However, data governance remains a key barrier to adoption.

Data sovereignty emerged as a leading concern for 64% of surveyed public sector leaders. Fewer than one in four said they had sufficient data to train reliable AI systems. Dr Jain emphasised that governance must be embedded from the outset — not added as an afterthought — to ensure data quality, accountability, and consistency in decision-making.

A proactive approach to governance offers the only stable foundation for scaling AI responsibly. Managing the full data lifecycle — from acquisition and storage to access and application — requires strict privacy and quality controls.

Significant risks arise when flawed AI-generated insights influence decisions affecting entire populations. Capgemini’s support for government agencies focuses on three areas: secure infrastructure, privacy-led data usability, and smarter, citizen-centric services.

EPA Victoria CTO Abhijit Gupta underscored the need for timely, secure, and accessible data as a prerequisite for AI in the public sector. Accuracy and consistency, Dr Jain noted, are essential whether outcomes are delivered by humans or machines. Governance, he added, should remain technology-agnostic yet agile.

With strong data foundations in place, only minor adjustments are needed to scale agentic AI capable of managing full decision-making cycles. Capgemini’s model of ‘active data governance’ aims to enable public sector AI to scale safely and sustainably.

Singapore was highlighted as a leading example of responsible innovation, driven by rapid experimentation and collaborative development. The AI Trailblazers programme, co-run with the private sector, is tackling over 100 real-world GenAI challenges through a test-and-iterate model.

Minister for Digital Josephine Teo recently reaffirmed Singapore’s commitment to sharing lessons and best practices in sustainable AI development. According to Dr Jain, the country’s success lies not only in rapid adoption, but in how AI is applied to improve services for citizens and society.

Human rights must anchor crypto design

Crypto builders face growing pressure to design systems that protect fundamental human rights from the outset. As concerns mount over surveillance, state-backed ID systems, and AI impersonation, experts warn that digital infrastructure must not compromise individual freedom.

Privacy-by-default, censorship resistance, and decentralised self-custody are no longer idealistic features — they are essential for any credible Web3 system. Critics argue that many current tools merely replicate traditional power structures, offering centralisation disguised as innovation.

The collapse of platforms like FTX has only strengthened calls for human-centric solutions.

New approaches are needed to ensure people can prove their personhood online without relying on governments or corporations. Digital inclusion depends on verification systems that are censorship-resistant, privacy-preserving and accessible.

Likewise, self-custody must evolve beyond fragile key backups and complex interfaces to empower everyday users.

While embedding values in code brings ethical and political risks, avoiding the issue could lead to greater harm. For the promise of Web3 to be realised, rights must be a design priority — not an afterthought.

ChatGPT stuns users by guessing object in viral video using smart questions

A video featuring ChatGPT Live has gone viral after it correctly guessed an object hidden in a user’s hand using only a series of questions.

The clip, shared on the social media platform X, shows the chatbot narrowing down its guesses until it lands on the correct answer, a pen, in under a minute. The video has fascinated viewers by showing how far generative AI has come since its initial launch.

Multimodal AI like ChatGPT can now process audio, video and text together, making interactions more intuitive and lifelike.

Another user attempted the same challenge with Gemini AI by holding an AC remote. Gemini described it as a ‘control panel for controlling temperature’, which was close but not entirely accurate.

The fun experiment also highlights the growing real-world utility of generative AI. At Google’s I/O conference earlier this year, the company demonstrated how Gemini Live can help users troubleshoot and repair appliances at home by understanding both spoken instructions and visual input.

Beyond casual use, these AI tools are proving helpful in serious scenarios. A UPSC aspirant recently explained how uploading her Detailed Application Form to a chatbot allowed it to generate practice questions.

She used those prompts to prepare for her interview and credited the AI with helping her boost her confidence.
