Hackers infiltrate Southeast Asian telecom networks

A cyber group breached telecoms across Southeast Asia, deploying advanced tracking tools instead of stealing data. Palo Alto Networks’ Unit 42 assesses the activity as ‘associated with a nation-state nexus’.

A hacking group gained covert access to telecom networks across Southeast Asia, most likely to track users’ locations, according to cybersecurity analysts at Palo Alto Networks’ Unit 42.

The campaign lasted from February to November 2024.

Instead of stealing data or directly communicating with mobile devices, the hackers deployed custom tools such as CordScan, a network-scanning and packet-capture tool built to target mobile network components such as the Serving GPRS Support Node (SGSN). These methods suggest the attackers focused on location tracking rather than data theft.

Unit 42 assessed 'with high confidence' that the activity is 'associated with a nation-state nexus'. The unit notes that 'this cluster heavily overlaps with activity attributed to Liminal Panda, a nation-state adversary tracked by CrowdStrike'; according to CrowdStrike, Liminal Panda is considered a 'likely China-nexus adversary'. Unit 42 further states that 'while this cluster significantly overlaps with Liminal Panda, we have also observed overlaps in attacker tooling with other reported groups and activity clusters, including LightBasin, UNC3886, UNC2891 and UNC1945.'

The attackers initially gained access by brute-forcing SSH credentials using login details specific to telecom equipment.

Once inside, they installed new malware, including a backdoor named NoDepDNS, which tunnels malicious data through port 53 — typically used for DNS traffic — in order to avoid detection.
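Tunnelling data through port 53 typically means hiding payloads inside DNS query names, which tends to produce unusually long or random-looking labels. As a rough illustration of how defenders spot this (not drawn from the Unit 42 report; the function name and thresholds below are arbitrary assumptions), a monitoring script might flag suspect queries like so:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    total = len(s)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, max_label_len: int = 40,
                      entropy_bits: float = 4.0) -> bool:
    """Flag DNS query names whose longest label is very long or has a
    near-random character distribution, two common signs of data being
    smuggled inside queries. Thresholds are illustrative, not tuned."""
    labels = qname.rstrip(".").split(".")
    longest = max(labels, key=len)
    return len(longest) > max_label_len or shannon_entropy(longest) > entropy_bits

# A benign lookup vs. a query carrying hex-encoded payload data
print(looks_like_tunnel("www.example.com"))  # False
print(looks_like_tunnel(
    "4a6f8c1b2d9e0f3a5c7b8d2e4f6a1c3b5d7e9f0a2c4e6b8d.evil.example"))  # True
```

Real detection systems combine such heuristics with query-volume analysis and allow-lists; a single long label is not, by itself, proof of tunnelling.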

To maintain stealth, the group disguised malware, altered file timestamps, disabled system security features and wiped authentication logs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises access to information in the UK by filtering what users see according to algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may offer convenience, it lacks accountability. Regulated journalism operates under legal frameworks, whereas AI faces no such scrutiny, even when its errors have real consequences.

Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and on the handling of trade secrets.

AI breaches push data leak costs to new heights despite global decline

IBM’s 2025 Cost of a Data Breach Report revealed a sharp gap between rapid AI adoption and the oversight needed to secure it.

Although the global average data breach cost fell slightly to $4.44 million, security incidents involving AI systems remain more severe and disruptive.

Around 13% of organisations reported breaches involving AI models or applications, while 8% were unsure whether they had been compromised.

Alarmingly, nearly all AI-related breaches occurred without access controls, leading to data leaks in 60% of cases and operational disruption in almost one-third. Shadow AI (unsanctioned or unmanaged systems) played a central role, with one in five breaches traced back to it.

Organisations without AI governance policies or detection systems faced significantly higher costs, especially when personally identifiable information or intellectual property was exposed.

Attackers increasingly used AI tools such as deepfakes and phishing, with 16% of studied breaches involving AI-assisted threats.

Healthcare remained the costliest sector, with an average breach cost of $7.42 million and the longest recovery timeline at 279 days.

Despite the risks, fewer organisations plan to invest in post-breach security. Only 49% intend to strengthen defences, down from 63% last year.

Even fewer will prioritise AI-driven security tools. With many organisations also passing costs on to consumers, recovery now often includes long-term financial and reputational fallout, not just restoring systems.

EU AI Act begins as tech firms push back

Europe’s AI crackdown officially begins soon, as the EU enforces the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU also distinguishes itself from the US, where a re-elected Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should not rely on future political deals instead of taking immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.

AI bands rise as real musicians struggle to compete

AI is quickly transforming the music industry, with AI-generated bands now drawing millions of plays on platforms like Spotify.

While these acts may sound like traditional musicians, they are entirely digital creations. Streaming services rarely label AI music clearly, and the producers behind these tracks often remain anonymous and unreachable. Human artists, meanwhile, are quietly watching their workload dry up.

Music professionals are beginning to express concern. Composer Leo Sidran believes AI is already taking work away from creators like him, noting that many former clients now rely on AI-generated solutions instead of original compositions.

Unlike previous tech innovations, which empowered musicians, AI risks erasing job opportunities entirely, according to Berklee College of Music professor George Howard, who warns it could become a zero-sum game.

AI music is especially popular for passive listening—background tracks for everyday life. In contrast, real musicians still hold value among fans who engage more actively with music.

However, AI is cheap, fast, and royalty-free, making it attractive to publishers and advertisers. From film soundtracks to playlists filled with faceless artists, synthetic sound is rapidly replacing human creativity in many commercial spaces.

Experts urge musicians to double down on what makes them unique instead of mimicking trends that AI can easily replicate. Live performance remains one of the few areas where AI has yet to gain traction. Until synthetic bands take the stage, artists may still find refuge in concerts and personal connection with fans.

Robot artist Ai-Da explores human self-perception

The world’s first ultra-realistic robot artist, Ai-Da, has been prompting profound questions about human-robot interactions, according to her creator.

Designed in Oxford by Aidan Meller, a modern and contemporary art specialist, and built in the UK by Engineered Arts, Ai-Da is a humanoid robot specifically engineered for artistic creation. She recently unveiled a portrait of King Charles III, adding to her notable portfolio.

Mr Meller said that working with the robot has evoked ‘lots of questions about our relationship with ourselves.’ He highlighted how Ai-Da’s artwork ‘drills into some of our time’s biggest concerns and thoughts.’

Ai-Da uses cameras in her eyes to capture images, which are then processed by AI algorithms and converted into real-time coordinates for her robotic arm, enabling her to paint and draw.
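The capture-and-paint pipeline described above can be loosely sketched in code: an image becomes an ordered path of coordinates for the arm to follow. The function below is a toy illustration under assumed inputs (a grayscale pixel grid), not Ai-Da's actual software:

```python
def image_to_strokes(pixels, threshold=128):
    """Convert a grayscale image (list of rows of 0-255 values) into an
    ordered list of (x, y) coordinates for a plotter-style arm to visit.
    Rows alternate direction (boustrophedon) to minimise arm travel."""
    path = []
    for y, row in enumerate(pixels):
        xs = [x for x, v in enumerate(row) if v < threshold]  # dark pixels
        if y % 2:           # reverse every other row
            xs.reverse()
        path.extend((x, y) for x in xs)
    return path

# A 3x3 image with a dark diagonal stroke
img = [
    [0, 255, 255],
    [255, 0, 255],
    [255, 255, 0],
]
print(image_to_strokes(img))  # [(0, 0), (1, 1), (2, 2)]
```

Ai-Da's real system layers AI models on top of this kind of conversion, but the core idea her creator describes, turning captured pixels into real-time arm coordinates, is what this sketch mimics.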

Mr Meller explained, ‘You can meet her, talk to her using her language model, and she can then paint and draw you from sight.’

He also observed that people’s preconceptions about robots are often outdated: ‘It’s not until you look a robot in the eye and they say your name that the reality of this new sci-fi world that we are now in takes hold.’

Ai-Da’s contributions to the art world continue to grow. She produced and showcased work at the AI for Good Global Summit 2024 in Geneva, Switzerland, an event held under the auspices of the UN. That same year, her triptych of Enigma code-breaker Alan Turing sold for over £1 million at auction.

Her focus this year shifted to King Charles III, chosen because, as Mr Meller noted, ‘With extraordinary strides that are taking place in technology and again, always questioning our relationship to the environment, we felt that King Charles was an excellent subject.’

Buckingham Palace authorised the display of Ai-Da’s portrait of the King, even though the robot has never met him. Ai-Da, connected to the internet, draws on extensive data to inform her choice of subjects, with Mr Meller revealing, ‘Uncannily, and rather nerve-rackingly, we just ask her.’

The conversations generated inform the artwork. Ai-Da also painted a portrait of King Charles’s mother, Queen Elizabeth II, in 2023. Mr Meller shared that the most significant realisation from six years of working with Ai-Da was ‘not so much about how human she is but actually how robotic we are.’

He concluded, ‘We hope Ai-Da’s artwork can be a provocation for that discussion.’

Spotify under fire for AI-generated songs on memorial artist pages

Spotify is facing criticism after AI-generated songs were uploaded to the pages of deceased artists without consent from estates or rights holders.

The latest case involves country singer-songwriter Blaze Foley, who died in 1989. A track titled ‘Together’ was posted to his official Spotify page over the weekend. The song sounded vaguely like a slow country ballad and was paired with AI-generated cover art showing a man who bore no resemblance to Foley.

Craig McDonald, whose label manages Foley’s catalogue, confirmed the track had nothing to do with the artist and described it as inauthentic and harmful. ‘I can clearly tell you that this song is not Blaze, not anywhere near Blaze’s style, at all,’ McDonald told 404 Media. ‘It has the authenticity of an algorithm.’

He criticised Spotify for failing to prevent such uploads and said the company had a duty to stop AI-generated music from appearing under real artists’ names.

‘It’s kind of surprising that Spotify doesn’t have a security fix for this type of action,’ he said. ‘They could fix this problem if they had the will to do so.’ Spotify said it had flagged the track to distributor SoundOn and removed it for violating its deceptive content policy.

However, other similar uploads have already emerged. The company behind the upload, known as Syntax Error, has also been linked to another AI-generated song titled ‘Happened To You’, posted last week under the name of Grammy-winning artist Guy Clark, who died in 2016.

Both tracks have since been removed, but Spotify has not explained how Syntax Error was able to post them using the names and likenesses of late musicians. The controversy is the latest in a wave of AI music incidents slipping through streaming platforms’ content checks.

Earlier this year, an AI-generated band called The Velvet Sundown amassed over a million Spotify streams before disclosing that all their vocals and instrumentals were made by AI.

Another high-profile case involved a fake Drake and The Weeknd collaboration, ‘Heart on My Sleeve’, which gained viral traction before being taken down by Universal Music Group.

Rights groups and artists have repeatedly warned about AI-generated content misrepresenting performers and undermining creative authenticity. As AI tools become more accessible, streaming platforms face mounting pressure to improve detection and approval processes to prevent further misuse.

Stay True To The Act campaign defends music rights

More than 30 European musicians have launched a united video campaign urging the European Commission to preserve the integrity of the EU AI Act.

The Stay True To The Act campaign calls on policymakers to enforce transparency and uphold copyright protections.

Artists, including Spanish singer-songwriter Álex Ubago and Poland’s Eurovision 2025 entrant Justyna Steczkowska, have voiced concern over the unauthorised use of their work to train AI models. They demand the right to be informed and the power to refuse such usage.

The EU AI Act, passed in 2024, includes provisions requiring developers to disclose the content used in AI training. However, as implementation plans develop, artists fear the law may be diluted, weakening protections for creators.

The campaign appeals for vigorous enforcement of the Act’s original principles: transparency, copyright control and fair innovation. Artists say AI and music can coexist in Europe only if ethical boundaries are upheld.

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.
