Hackers infiltrate Southeast Asian telecom networks

A cyber group breached telecoms across Southeast Asia, deploying advanced tracking tools instead of stealing data. Palo Alto Networks’ Unit 42 assesses the activity as ‘associated with a nation-state nexus’.

A hacking group gained covert access to telecom networks across Southeast Asia, most likely to track users’ locations, according to cybersecurity analysts at Palo Alto Networks’ Unit 42.

The campaign lasted from February to November 2024.

Instead of stealing data or directly communicating with mobile devices, the hackers deployed custom tools such as CordScan, a network scanning and packet-capture tool built to fingerprint mobile network components such as the SGSN (Serving GPRS Support Node). These methods suggest the attackers focused on tracking rather than data theft.

Unit 42 assessed the activity ‘with high confidence’ as ‘associated with a nation-state nexus’. The unit notes that ‘this cluster heavily overlaps with activity attributed to Liminal Panda, a nation state adversary tracked by CrowdStrike’; according to CrowdStrike, Liminal Panda is considered a ‘likely China-nexus adversary’. It further states that ‘while this cluster significantly overlaps with Liminal Panda, we have also observed overlaps in attacker tooling with other reported groups and activity clusters, including LightBasin, UNC3886, UNC2891 and UNC1945.’

The attackers initially gained access by brute-forcing SSH logins, using credential combinations specific to telecommunications equipment.

Once inside, they installed new malware, including a backdoor named NoDepDNS, which tunnels malicious data through port 53 — typically used for DNS traffic — in order to avoid detection.
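
DNS tunnelling of this kind is typically spotted by looking for anomalous query patterns rather than by inspecting payloads. The sketch below is a minimal, hypothetical illustration of that idea in Python; the thresholds, domain names and heuristics are assumptions for demonstration only, not Unit 42's detection logic.

```python
import math
from collections import Counter, defaultdict

def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def flag_possible_tunnels(qnames, min_queries=100, min_avg_len=30, min_avg_entropy=3.0):
    """Group queries by registered domain and flag domains whose leftmost
    labels look encoded (long, high-entropy) and arrive in unusual volume.
    Thresholds are illustrative, not tuned values."""
    per_domain = defaultdict(list)
    for qname in qnames:
        labels = qname.rstrip(".").split(".")
        if len(labels) < 3:
            continue                              # nothing below the registered domain
        registered = ".".join(labels[-2:])        # crude eTLD+1 approximation
        per_domain[registered].append(labels[0])  # tunnelled data usually rides in the leftmost label

    suspicious = []
    for domain, labels in per_domain.items():
        avg_len = sum(len(l) for l in labels) / len(labels)
        avg_entropy = sum(label_entropy(l) for l in labels) / len(labels)
        if len(labels) >= min_queries and avg_len >= min_avg_len and avg_entropy >= min_avg_entropy:
            suspicious.append(domain)
    return suspicious

# Example: a burst of long, random-looking subdomains under a single (hypothetical) domain
sample = [f"d4ta{i:04d}beef1337cafe4242deadc0de.tunnel.example.net" for i in range(150)]
print(flag_possible_tunnels(sample))   # ['example.net']
```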

To maintain stealth, the group disguised malware, altered file timestamps, disabled system security features and wiped authentication logs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology gives Google outsized control over what UK users see, filtering information according to algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may make accessing information more convenient, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI faces no such scrutiny, even when errors have real consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use steganography to evade Windows defences

North Korea-linked hacking group APT37 is using malicious JPEG image files to deploy advanced malware on Windows systems, according to Genians Security Centre. The new campaign showcases a more evasive version of RoKRAT malware, which hides payloads in image files through steganography.

These attacks rely on large Windows shortcut files embedded in email attachments or cloud storage links, enticing users with decoy documents while executing hidden code. Once activated, the malware launches scripts to decrypt shellcode and inject it into trusted apps like MS Paint and Notepad.

This fileless strategy makes detection difficult, avoiding traditional antivirus tools by leaving minimal traces. The malware also exfiltrates data through legitimate cloud services, complicating efforts to trace and block the threat.
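
Hiding a payload inside an image can be as simple as embedding or appending bytes that image viewers ignore. The sketch below illustrates one common, generic steganography technique (recovering data appended after a JPEG's end-of-image marker); it is an illustration of the general idea only, not a reconstruction of RoKRAT's actual encoding, and the file name is hypothetical.

```python
def extract_appended_payload(jpeg_path: str) -> bytes:
    """Return any bytes that follow the JPEG end-of-image (EOI) marker.

    Viewers stop rendering at FF D9, so trailing data stays invisible to a
    casual look. Simplified: assumes the first EOI marker ends the image
    (embedded thumbnails can complicate this in real files).
    """
    with open(jpeg_path, "rb") as f:
        data = f.read()
    eoi = data.find(b"\xff\xd9")
    if eoi == -1:
        raise ValueError("No JPEG end-of-image marker found")
    return data[eoi + 2:]

if __name__ == "__main__":
    trailing = extract_appended_payload("decoy.jpg")   # hypothetical decoy image
    print(f"{len(trailing)} bytes appended after the image data")
```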

Researchers stress the urgency for organisations to adopt layered cybersecurity measures, including behavioural monitoring, robust endpoint management and ongoing user education. Defenders must prioritise proactive strategies to protect critical systems as threat actors evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moflin, Japan’s AI-powered robot pet with a personality

A fluffy, AI-powered robot pet named Moflin is capturing the imagination of consumers in Japan with its unique ability to develop distinct personalities based on how it is ‘raised.’ Developed by Casio, Moflin recognises its owner and learns their preferences through interactions such as cuddling and stroking, boasting over four million possible personality variations.

Priced at ¥59,400, Moflin has become more than just a companion at home, with some owners even taking it along on day trips. To complement the experience, Casio offers additional services, including a specialised salon to clean and maintain the robot’s fur, further enhancing its pet-like feel.

Erina Ichikawa, the lead developer, says the aim was to create a supportive sidekick capable of providing comfort during challenging moments, blending technology with emotional connection in a new way.

A similar trend is emerging in China, where AI-powered ‘smart pets’ like BooBoo are gaining popularity, especially among young people, by offering emotional support and companionship. Valued for easing anxiety and isolation, these devices form part of a market projected to reach $42.5 billion by 2033, reflecting shifting social and family dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Separately, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and on the handling of trade secrets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Science removes concern from Microsoft quantum paper

The journal Science will replace an editorial expression of concern (EEoC) on a 2020 Microsoft quantum computing paper with a correction. The update notes incomplete explanations of device tuning and partial data disclosure, but no misconduct.

Co-author Charles Marcus welcomed the decision but lamented the four-year dispute.

Sergey Frolov, who raised concerns about data selection, disagrees with the correction and believes the paper should be retracted. The debate centres on Microsoft’s claims about topological superconductors using Majorana particles, a critical step for quantum computing.

Several Microsoft-backed papers on Majoranas have faced scrutiny, including retractions. Critics accuse Microsoft of cherry-picking data, while supporters stress the research’s complexity and pioneering nature.

The controversy reveals challenges in peer review and verifying claims in a competitive field.

Microsoft defends the integrity of its research and values open scientific debate. Critics warn that selective reporting risks misleading the community. The dispute highlights the difficulty of confirming breakthrough quantum computing claims in an emerging industry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI breaches push data leak costs to new heights despite global decline

IBM’s 2025 Cost of a Data Breach Report revealed a sharp gap between rapid AI adoption and the oversight needed to secure it.

Although the global average data breach cost fell slightly to $4.44 million, security incidents involving AI systems remain more severe and disruptive.

Around 13% of organisations reported breaches involving AI models or applications, while 8% were unsure whether they had been compromised.

Alarmingly, nearly all AI-related breaches occurred in organisations lacking proper AI access controls, leading to data leaks in 60% of cases and operational disruption in almost one-third. Shadow AI (unsanctioned or unmanaged systems) played a central role, with one in five breaches traced back to it.

Organisations without AI governance policies or detection systems faced significantly higher costs, especially when personally identifiable information or intellectual property was exposed.

Attackers increasingly used AI tools such as deepfakes and phishing, with 16% of studied breaches involving AI-assisted threats.

Healthcare remained the costliest sector, with an average breach cost of $7.42 million and the longest recovery timeline, at 279 days.

Despite the risks, fewer organisations plan to invest in post-breach security. Only 49% intend to strengthen defences, down from 63% last year.

Even fewer will prioritise AI-driven security tools. With many organisations also passing costs on to consumers, recovery now often includes long-term financial and reputational fallout, not just restoring systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

VPN use surges in UK as age checks go live

The way UK internet users access adult content has undergone a significant change, with new age-verification rules now in force. Under Ofcom’s directive, anyone attempting to visit adult websites must now prove they are over 18, typically by providing credit card or personal ID details.

The move aims to prevent children from encountering harmful content online, but it has raised serious privacy and cybersecurity concerns.

Experts have warned that entering personal and financial information could expose users to cyber threats. Jake Moore from cybersecurity firm ESET pointed out that the lack of clear implementation standards leaves users vulnerable to data misuse and fraud.

There’s growing unease that ID verification systems might inadvertently offer a goldmine to scammers.

In response, many have started using VPNs to bypass the restrictions, with providers reporting a surge in UK downloads.

VPNs mask user locations, allowing access to blocked content, but free versions often lack the security features of paid services. As demand rises, cybersecurity specialists are urging users to be cautious.

Free VPNs can compromise user data through weak encryption or by selling browsing histories to advertisers. Mozilla and EC-Council have stressed the importance of avoiding no-cost VPNs unless users understand the risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns public to avoid scanning QR codes on unsolicited packages

The FBI has issued a public warning about a rising scam involving QR codes placed on packages delivered to people who never ordered them.

According to the agency, these codes can lead recipients to malicious websites or prompt them to install harmful software, potentially exposing sensitive personal and financial data.

The scheme is a variation of the so-called brushing scam, in which online sellers send unordered items and use recipients’ names to post fake product reviews. In the new version, QR codes are added to the packaging, increasing the risk of fraud by directing users to deceptive websites.

While not as widespread as other fraud attempts, the FBI urges caution. The agency recommends avoiding QR codes from unknown sources, especially those attached to unrequested deliveries.

It also advises consumers to check the web address that appears before tapping on any QR code link.
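
One low-tech way to follow that advice for a package in hand is to photograph the code and decode it offline, so the destination can be inspected before anyone opens it. A minimal sketch using OpenCV's QR detector (one of several libraries that can do this; the file name is a placeholder):

```python
import cv2  # pip install opencv-python

def inspect_qr(image_path: str) -> None:
    """Decode a QR code from a photo and print what it contains,
    so the destination can be checked before anyone opens it."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
    if not data:
        print("No QR code detected in the image")
    elif data.startswith(("http://", "https://")):
        print(f"QR code points to: {data} -- check the domain before visiting")
    else:
        print(f"QR code contains non-URL data: {data!r}")

inspect_qr("package_sticker.jpg")  # hypothetical photo of the label
```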

Authorities have noted broader misuse of QR codes, including cases where criminals place fake codes over legitimate ones in public spaces.

In one recent incident, scammers used QR stickers on parking meters in New York to redirect people to third-party payment pages requesting card details.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prisons trial AI to forecast conflict and self‑harm risk

UK Justice Secretary Shabana Mahmood has rolled out an AI-driven violence prediction tool across prisons and probation services. One system evaluates inmates’ profiles, factoring in age, past behaviour and gang ties, to flag those likely to become violent. Flagged prisoners can then be placed under tighter supervision or relocated, with the aim of reducing attacks on staff and fellow inmates.

Another feature actively scans content from seized mobile phones. AI algorithms sift through over 33,000 devices and 8.6 million messages, detecting coded language tied to contraband, violence, or escape plans. When suspicious content is flagged, staff receive alerts for preventive action.

Rising prison violence and self-harm underscore the urgency of such interventions. Assaults on staff recently reached over 10,500 a year, the highest on record, while self-harm incidents reached nearly 78,000. Overcrowding and drug infiltration have intensified operational challenges.

Analysts compare the approach to the ‘pre-crime’ models of science fiction, raising concerns around civil liberties. Without robust governance, predictive tools may replicate biases or punish potential rather than actual behaviour. Transparency, independent audits and appeals processes are essential to uphold inmate rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!