UK proposes mandatory ransomware reporting and seeks to ban payments by public sector

The UK government has unveiled a new proposal to strengthen its response to ransomware threats by requiring victims to report breaches, enabling law enforcement to disrupt cybercriminal operations more effectively.

Published by the Home Office as part of an ongoing policy consultation, the proposal outlines key measures:

  • Mandatory breach reporting to equip law enforcement with actionable intelligence for identifying and disrupting ransomware groups.
  • A ban on ransom payments by public sector and critical infrastructure entities.
  • A notification requirement for other organisations intending to pay a ransom, allowing the government to assess and respond accordingly.

According to the proposal, these steps would help the UK government carry out ‘targeted disruptions’ in response to evolving ransomware threats, while also improving support for victims.

Cybersecurity experts have largely welcomed the initiative. Allan Liska of Recorded Future noted the plan reflects a growing recognition that many ransomware actors are within reach of law enforcement. Arda Büyükkaya of EclecticIQ praised the effort to formalise response protocols, viewing the proposed payment ban and proactive enforcement as meaningful deterrents.

This announcement follows a consultation process that began in January 2025. While the proposals signal a significant policy shift, they have not yet been enacted into law. The potential ban on ransom payments remains particularly contentious, with critics warning that, in some cases—such as hospital systems—paying a ransom may be the only option to restore essential services quickly.

The UK’s proposal follows similar international efforts, including Australia’s recent mandate for victims to disclose ransom payments, though Australia has stopped short of banning them outright.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump AI strategy targets China and cuts red tape

The Trump administration has revealed a sweeping new AI strategy to cement US dominance in the global AI race, particularly against China.

The 25-page ‘America’s AI Action Plan’ proposes 90 policy initiatives, including building new data centres nationwide, easing regulations, and expanding exports of AI tools to international allies.

White House officials stated the plan will boost AI development by scrapping federal rules seen as restrictive and speeding up construction permits for data infrastructure.

A key element involves monitoring Chinese AI models for alignment with Communist Party narratives, while promoting ‘ideologically neutral’ systems within the US. Critics argue the approach undermines efforts to reduce bias and favours politically motivated AI regulation.

The action plan also supports increased access to federal land for AI-related construction and seeks to reverse key environmental protections. Analysts have raised concerns over energy consumption and rising emissions linked to AI data centres.

While the White House claims AI will complement jobs rather than replace them, recent mass layoffs at Indeed and Salesforce suggest otherwise.

Despite the controversy, investors reacted with cautious optimism. AI stocks saw mixed trading, with NVIDIA, Palantir and Oracle gaining, while Alphabet slipped slightly. Analysts described the move as a ‘watershed moment’ for US tech, signalling an aggressive stance in the global AI arms race.

Autonomous vehicles fuel surge in 5G adoption

The global 5G automotive market is expected to grow sharply from $2.58 billion in 2024 to $31.18 billion by 2034, fuelled by the rapid adoption of connected and self-driving vehicles.

A compound annual growth rate of over 28% reflects the strong momentum behind the transition to smarter mobility and safer road networks.

Vehicle-to-everything (V2X) communication is predicted to lead adoption, as it allows vehicles to exchange real-time data with other cars, infrastructure and even pedestrians.

In-car entertainment systems are also growing fast, with consumers demanding smoother connectivity and on-the-go access to apps and media.

Autonomous driving, advanced driver-assistance features and real-time navigation all benefit from 5G’s low latency and high-speed capabilities. Automakers such as BMW have already begun integrating 5G into electric models to support automated functions.

Meanwhile, the US government has pledged $1.5 billion to build smart transport networks that rely on 5G-powered communication.

North America remains ahead due to early 5G rollouts and strong manufacturing bases, but Asia Pacific is catching up fast through smart city investment and infrastructure development.

Regulatory barriers and patchy rural coverage continue to pose challenges, particularly in regions with strict data privacy laws or limited 5G networks.

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The same company, Syntax Error, was linked to another AI-generated song titled ‘Happened To You’, uploaded last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

New GLOBAL GROUP ransomware targets all major operating systems

A sophisticated new ransomware threat, dubbed GLOBAL GROUP, has emerged on cybercrime forums, designed to target Windows, Linux, and macOS systems from a single codebase.

In June 2025, a threat actor operating under the alias ‘Dollar Dollar Dollar’ launched the GLOBAL GROUP Ransomware-as-a-Service (RaaS) platform on the Ramp4u forum. The campaign offers affiliates scalable tools, automated negotiations, and generous profit-sharing, creating an appealing setup for monetising cybercrime at scale.

GLOBAL GROUP leverages the Golang language to build monolithic binaries, enabling seamless execution across varied operating environments in a single campaign. The strategy expands attackers’ reach, allowing them to exploit hybrid infrastructures while improving operational efficiency and scalability.

Golang’s concurrency model and static linking make it an attractive option for rapid, large-scale encryption without relying on external dependencies. However, forensic analysis by Picus Security Labs suggests GLOBAL GROUP is not an entirely original threat but rather a rebrand of previous ransomware operations.

Researchers linked its code and infrastructure to the now-defunct Mamona RIP and BlackLock families, revealing continuity in tactics and tooling. Evidence includes a reused mutex string, ‘Global\Fxo16jmdgujs437’, which was also found in earlier Mamona RIP samples, confirming code inheritance.

The re-use of such technical markers highlights how threat actors often evolve existing malware rather than building from scratch, streamlining development and deployment.

Beyond its cross-platform flexibility, GLOBAL GROUP also integrates modern cryptographic features to boost effectiveness and resistance to detection. It employs the ChaCha20-Poly1305 encryption algorithm, offering both confidentiality and message integrity with high processing performance.

The malware leverages Golang’s goroutines to encrypt all system drives simultaneously, reducing execution time and limiting defenders’ reaction window. Encrypted files receive customised extensions like ‘.lockbitloch’, with filenames also obscured to hinder recovery efforts without the correct decryption key.

Ransom note logic is embedded directly within the binary, generating tailored communication instructions and linking to Tor-based leak sites. The approach simplifies extortion for affiliates while preserving operational security and ensuring anonymous negotiations with victims.

Louis Vuitton Australia confirms customer data breach after cyberattack

Louis Vuitton has admitted to a significant data breach in Australia, revealing that an unauthorised third party accessed its internal systems and stole sensitive client details.

The breach, first detected on 2 July, included names, contact information, birthdates, and shopping preferences — though no passwords or financial data were taken.

The luxury retailer emailed affected customers nearly three weeks later, urging them to stay alert for phishing, scam calls, or suspicious texts.

While Louis Vuitton claims it acted quickly to contain the breach and block further access, questions remain about the delay in informing customers and the number of individuals affected.

Authorities have been notified, and cybersecurity specialists are now investigating. The incident adds to a growing list of cyberattacks on major Australian companies, prompting experts to call for stronger data protection laws and the right to demand deletion of personal information from corporate databases.

Eric Schmidt warns that AI growth is limited by electricity

Former Google chief executive Eric Schmidt has warned that electricity, rather than semiconductors, will limit the future growth of AI.

Speaking on the Moonshots podcast, Schmidt said the push towards artificial superintelligence—AI that exceeds human cognitive ability in almost all domains—will depend on securing sufficient power instead of just developing more advanced chips.

Schmidt noted the US alone may require an extra 92 gigawatts of electricity to support AI growth, equivalent to dozens of nuclear power stations.

Instead of waiting for new plants, companies such as Microsoft are seeking to retrofit closed facilities, including the Three Mile Island plant targeted for relaunch in 2028.

Schmidt highlighted growing environmental pressures, citing Microsoft’s 34% increase in water use within a year, a trend experts link directly to rising AI workloads.

Major AI developers like OpenAI’s Sam Altman also acknowledge energy as a key constraint. Altman has invested in nuclear fusion through Helion, while firms such as Microsoft and AMD are pressing US policymakers to fast-track energy permits.

Environmental groups, including Greenpeace, warn that unchecked AI expansion risks undermining climate goals instead of supporting them.

Schmidt believes superintelligence is inevitable and approaching rapidly, predicting specialised AI tools across all fields within five years. Rather than focusing solely on AI’s capabilities, he stressed the urgent need for planning energy infrastructure today to match tomorrow’s AI demands.

How cultural heritage can go green through digital preservation

A Europe-wide survey has exposed how cultural heritage institutions (CHIs) protect humanity’s legacy digitally while unintentionally harming the environment.

Instead of focusing only on efficiency, the Europeana Climate Action Community recommends a shift towards environmentally sustainable and regenerative digital preservation.

Led by the Environmental Sustainability Practice Task Force, the survey collected input from 108 organisations across 24 EU countries. While 80% of CHIs recognise environmental responsibility, just 42% follow formal environmental strategies and a mere 14% measure carbon footprints.

Many maintain redundant data backups without assessing the ecological cost, and most lack policies for retiring digital assets responsibly.

The report suggests CHIs develop community-powered archives, adopt hardware recycling and repair, and prioritise sufficiency instead of maximising digital volume. Interviews with institutions such as the National Library of Finland and the POLIN Museum revealed good practices alongside common challenges.

With digital preservation increasingly essential, the Europeana Initiative calls for immediate action. By moving from isolated efficiency efforts to collective regeneration strategies, CHIs can protect cultural memory while reducing environmental impact for future generations.

EU confirms AI Act rollout and releases GPAI Code of Practice

The European Commission has confirmed it will move forward with the EU AI Act exactly as scheduled, instead of granting delays requested by tech giants and businesses.

On 10 July 2025, it published the final General-Purpose AI (GPAI) Code of Practice alongside FAQs to guide organisations aiming to comply with the new law.

Rather than opting for a more flexible timetable, the Commission is standing firm on its regulatory goals. The GPAI Code of Practice, now in its final form, sets out voluntary but strongly recommended steps for companies that want reduced administrative burdens and clearer legal certainty under the AI Act.

The document covers transparency, copyright, and safety standards for advanced AI models, including a model documentation form for providers.

Key dates have already been set. From 2 August 2025, rules covering notifications, governance, and penalties will come into force. By February 2026, official guidelines on classifying high-risk AI systems are expected.

The remaining parts of the legislation will take effect by August 2026, instead of being postponed further.

With the publication of the GPAI Code of Practice, the EU takes another step towards building a unified ethical framework for AI development and deployment across Europe, focusing on transparency, accountability, and respect for fundamental rights.

Thailand to advance policies to boost space industry and satellite services

Thailand’s Digital Economy Ministry is advancing several key policies to boost the country’s space industry and satellite services. The draft Landing Rights policy, approved by the National Space Policy Committee and pending Cabinet approval, will allow foreign satellite operators to provide services within Thailand while ensuring fair competition with domestic providers.

Alongside this, the committee approved the updated National Space Master Plan, reflecting recommendations from national development councils. These policies form part of the broader National Satellite Policy Framework, which addresses policy, legal and regulatory development, international collaboration, and space infrastructure to support the growing space sector.

The government is focused on using space technology and satellite data to improve areas like disaster management, agriculture, and resource governance. Thailand is also enhancing international cooperation by planning to sign the Artemis Accords, working with China on lunar research, and developing space traffic management policies to strengthen its global role.

To support these goals, Thailand is investing in digital infrastructure to expand space activities, ensure inclusive access, reduce inequality, and boost economic and social security.