Hackers infiltrate Southeast Asian telecom networks

A cyber group breached telecoms across Southeast Asia, deploying advanced tracking tools instead of stealing data. Palo Alto Networks’ Unit 42 assesses the activity as ‘associated with a nation-state nexus’.

A hacking group gained covert access to telecom networks across Southeast Asia, most likely to track users’ locations, according to cybersecurity analysts at Palo Alto Networks’ Unit 42.

The campaign lasted from February to November 2024.

Instead of stealing data or directly communicating with mobile devices, the hackers deployed custom tools such as CordScan, designed to capture traffic from mobile network protocols, including those associated with the Serving GPRS Support Node (SGSN). These methods suggest the attackers focused on location tracking rather than data theft.
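
Unit 42 has not published CordScan itself, so the snippet below is only an illustrative sketch of what capturing SGSN-associated traffic involves: the SGSN communicates over the GPRS Tunnelling Protocol (GTP), with GTP-C signalling on UDP port 2123 and GTP-U user data on UDP port 2152, and a simple capture filter on those ports isolates that traffic.

```python
# Illustrative only: a minimal packet-capture sketch (not CordScan itself) showing
# how SGSN-associated traffic could be filtered. GTP-C runs on UDP 2123 and
# GTP-U on UDP 2152. Requires scapy and capture privileges.
from scapy.all import sniff

GTP_FILTER = "udp port 2123 or udp port 2152"  # GTP-C / GTP-U

def log_gtp(pkt):
    # Record source, destination and length of each GTP datagram seen.
    print(pkt.sprintf("%IP.src% -> %IP.dst% len=%IP.len%"))

if __name__ == "__main__":
    # Capture on the default interface; stop after 100 matching packets.
    sniff(filter=GTP_FILTER, prn=log_gtp, count=100)
```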

Unit 42 assessed the activity ‘with high confidence’ as ‘associated with a nation-state nexus’. The Unit notes that ‘this cluster heavily overlaps with activity attributed to Liminal Panda, a nation-state adversary tracked by CrowdStrike’; according to CrowdStrike, Liminal Panda is considered to be a ‘likely China-nexus adversary’. It further states that ‘while this cluster significantly overlaps with Liminal Panda, we have also observed overlaps in attacker tooling with other reported groups and activity clusters, including LightBasin, UNC3886, UNC2891 and UNC1945.’

The attackers initially gained access by brute-forcing SSH credentials using login details specific to telecom equipment.
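
The report does not include defensive code, but the kind of brute-force activity described leaves a recognisable trail in authentication logs (which, as noted below, the attackers later wiped). As a minimal illustrative sketch, assuming a Debian-style /var/log/auth.log and an arbitrary alert threshold, failed SSH logins can be counted per source address like this:

```python
# Illustrative defender-side sketch (not from the Unit 42 report): count failed SSH
# password attempts per source IP in a Linux auth log. Path and threshold are assumptions.
import re
from collections import Counter

AUTH_LOG = "/var/log/auth.log"   # Debian/Ubuntu default; adjust per distribution
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 50                   # flag sources with this many failures or more

def brute_force_candidates(path=AUTH_LOG, threshold=THRESHOLD):
    counts = Counter()
    with open(path, errors="ignore") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

if __name__ == "__main__":
    for ip, attempts in brute_force_candidates().items():
        print(f"{ip}: {attempts} failed SSH logins")
```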

Once inside, they installed new malware, including a backdoor named NoDepDNS, which tunnels malicious data through port 53 — typically used for DNS traffic — in order to avoid detection.
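
NoDepDNS has not been publicly documented in detail, so the following is only a conceptual sketch of how defenders commonly spot DNS tunnelling: data smuggled inside query names tends to produce unusually long, high-entropy subdomains, which can be scored with a simple heuristic. The sample query names are made up for illustration.

```python
# Illustrative heuristic only (NoDepDNS itself is not public): DNS tunnelling tends to
# produce long, high-entropy query names, so observed queries can be scored.
import math
from collections import Counter

def shannon_entropy(label: str) -> float:
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_like_tunnelling(qname: str, min_len: int = 30, min_entropy: float = 3.5) -> bool:
    # Strip the registered domain and inspect the encoded-looking left-most labels.
    subdomain = ".".join(qname.split(".")[:-2])
    return len(subdomain) >= min_len and shannon_entropy(subdomain) >= min_entropy

# Hypothetical queries: a normal lookup vs. data smuggled inside a subdomain.
for q in ["mail.example.com",
          "mzxw6ytboi2gk43ufzzg63zanzsxg5dsmvsa.x0.example.com"]:
    print(q, "->", looks_like_tunnelling(q))
```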

To maintain stealth, the group disguised malware, altered file timestamps, disabled system security features and wiped authentication logs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use steganography to evade Windows defences

North Korea-linked hacking group APT37 is using malicious JPEG image files to deploy advanced malware on Windows systems, according to Genians Security Centre. The new campaign showcases a more evasive version of RoKRAT malware, which hides payloads in image files through steganography.

These attacks rely on large Windows shortcut files embedded in email attachments or cloud storage links, enticing users with decoy documents while executing hidden code. Once activated, the malware launches scripts to decrypt shellcode and inject it into trusted apps like MS Paint and Notepad.
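
Genians has not released the exact embedding technique, so the snippet below illustrates just one simple form of image-borne payload hiding that defenders routinely check for: extra bytes appended after a JPEG’s End-of-Image marker (FF D9). It is a triage heuristic, not a RoKRAT detector, and legitimate files occasionally carry trailing data too.

```python
# Illustrative triage check (not RoKRAT-specific): flag JPEGs that carry extra bytes
# after the End-of-Image marker (FF D9), one simple way payloads get smuggled in images.
import sys

def trailing_bytes_after_eoi(path: str) -> int:
    with open(path, "rb") as f:
        data = f.read()
    eoi = data.rfind(b"\xff\xd9")        # last End-of-Image marker
    if eoi == -1:
        raise ValueError("not a complete JPEG (no EOI marker found)")
    return len(data) - (eoi + 2)         # bytes appended beyond the marker

if __name__ == "__main__":
    extra = trailing_bytes_after_eoi(sys.argv[1])
    print(f"{extra} bytes found after the JPEG EOI marker"
          + (" - worth a closer look" if extra else ""))
```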

This fileless strategy makes detection difficult, avoiding traditional antivirus tools by leaving minimal traces. The malware also exfiltrates data through legitimate cloud services, complicating efforts to trace and block the threat.

Researchers stress the urgency for organisations to adopt layered cybersecurity measures, including behavioural monitoring, robust endpoint management and ongoing user education. Defenders must prioritise proactive strategies to protect critical systems as threat actors evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI breaches push data leak costs to new heights despite global decline

IBM’s 2025 Cost of a Data Breach Report revealed a sharp gap between rapid AI adoption and the oversight needed to secure it.

Although the global average data breach cost fell slightly to $4.44 million, security incidents involving AI systems remain more severe and disruptive.

Around 13% of organisations reported breaches involving AI models or applications, while 8% were unsure whether they had been compromised.

Alarmingly, nearly all AI-related breaches occurred without access controls, leading to data leaks in 60% of cases and operational disruption in almost one-third. Shadow AI (unsanctioned or unmanaged systems) played a central role, with one in five breaches traced back to it.

Organisations without AI governance policies or detection systems faced significantly higher costs, especially when personally identifiable information or intellectual property was exposed.

Attackers increasingly turned to AI themselves, deploying deepfakes and AI-generated phishing, with 16% of studied breaches involving AI-assisted threats.

Healthcare remained the costliest sector, with an average breach cost of $7.42 million and the longest recovery timeline at 279 days.

Despite the risks, fewer organisations plan to invest in post-breach security. Only 49% intend to strengthen defences, down from 63% last year.

Even fewer will prioritise AI-driven security tools. With many organisations also passing costs on to consumers, recovery now often includes long-term financial and reputational fallout, not just restoring systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

FBI warns public to avoid scanning QR codes on unsolicited packages

The FBI has issued a public warning about a rising scam involving QR codes placed on packages delivered to people who never ordered them.

According to the agency, these codes can lead recipients to malicious websites or prompt them to install harmful software, potentially exposing sensitive personal and financial data.

The scheme is a variation of the so-called brushing scam, in which online sellers send unordered items and use recipients’ names to post fake product reviews. In the new version, QR codes are added to the packaging, increasing the risk of fraud by directing users to deceptive websites.

While not as widespread as other fraud attempts, the FBI urges caution. The agency recommends avoiding QR codes from unknown sources, especially those attached to unrequested deliveries.

It also advises consumers to pay close attention to the web address that appears before tapping on any QR code link.

Authorities have noted broader misuse of QR codes, including cases where criminals place fake codes over legitimate ones in public spaces.

In one recent incident, scammers used QR stickers on parking meters in New York to redirect people to third-party payment pages requesting card details.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond this significant deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China says the US used a Microsoft server vulnerability to launch cyberattacks

China has accused the US of exploiting long-known vulnerabilities in Microsoft Exchange servers to launch cyberattacks on its defence sector, escalating tensions in the ongoing digital arms race between the two superpowers.

In a statement released on Friday, the Cyber Security Association of China claimed that US hackers compromised servers belonging to a significant Chinese military contractor, allegedly maintaining access for nearly a year.

The group did not disclose the name of the affected company.

The accusation is a sharp counterpunch to long-standing US claims that Beijing has orchestrated repeated cyber intrusions using the same Microsoft software. In 2021, Microsoft attributed a wide-scale hack affecting tens of thousands of Exchange servers to Chinese threat actors.

Two years later, another incident compromised the email accounts of senior US officials, prompting a federal review that criticised Microsoft for what it called a ‘cascade of security failures.’

Microsoft, based in Redmond, Washington, has recently disclosed additional intrusions by China-backed groups, including attacks exploiting flaws in its SharePoint platform.

Jon Clay of Trend Micro commented on the tit-for-tat cyber blame game: ‘Every nation carries out offensive cybersecurity operations. Given the latest SharePoint disclosure, this may be China’s way of retaliating publicly.’

Cybersecurity researchers note that Beijing has recently increased its use of public attribution as a geopolitical tactic. Ben Read of Wiz.io pointed out that China now uses cyber accusations to pressure Taiwan and shape global narratives around cybersecurity.

In April, China accused US National Security Agency (NSA) employees of hacking into the Asian Winter Games in Harbin, targeting personal data of athletes and organisers.

While the US frequently names alleged Chinese hackers and pursues legal action against them, China has historically avoided levelling public allegations against American intelligence agencies, until now.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China’s Silk Typhoon hackers filed patents for advanced spyware tools

A Chinese state-backed hacking group known as Silk Typhoon has filed more than ten patents for intrusive cyberespionage tools, shedding light on its operations’ vast scope and sophistication.

These patents, registered by firms linked to China’s Ministry of State Security, detail covert data collection software far exceeding the group’s previously known attack methods.

The revelations surfaced following a July 2025 US Department of Justice indictment against two alleged members of Silk Typhoon, Xu Zewei and Zhang Yu.

Both are associated with companies tied to the Shanghai State Security Bureau and connected to the Hafnium group, which Microsoft renamed Silk Typhoon in 2023.

Instead of targeting only Windows environments, the patent filings reveal a sweeping set of surveillance tools designed for Apple devices, routers, mobile phones, and even smart home appliances.

Submissions include software for bypassing FileVault encryption, extracting remote cellphone data, decrypting hard drives, and analysing smart devices. Analysts from SentinelLabs suggest these filings offer an unprecedented glimpse into the architecture of China’s cyberwarfare ecosystem.

Silk Typhoon gained global attention in 2021 with its Microsoft Exchange ProxyLogon campaign, which prompted a rare coordinated condemnation by the US, UK, and EU. The newly revealed capabilities show the group’s operations are far more advanced and diversified than previously believed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI cloaking helps hackers dodge browser defences

Cybercriminals increasingly use AI-powered cloaking tools to bypass browser security systems and trick users into visiting scam websites.

These tools conceal malicious content from automated scanners, showing it only to human visitors, making it harder to detect phishing attacks and malware delivery.

Platforms such as Hoax Tech and JS Click Cloaker are being used to filter web traffic and serve fake pages to victims while hiding them from security systems.

The AI behind these services analyses a visitor’s browser, location, and behaviour before deciding which version of a site to display.

Known as white page and black page cloaking, the technique shows harmless content to detection tools and harmful pages to real users. As a result, fraudulent sites survive detection for longer, boosting the reach and lifespan of cyberattacks.
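
The cloaking services themselves are closed, but the basic check defenders use against this tactic is straightforward to sketch: fetch the same URL with a scanner-like and a browser-like client profile and compare what comes back. The user-agent strings and URL below are placeholders, and differing responses are only a signal, since dynamic pages can vary between requests anyway.

```python
# Illustrative detection idea (not the cloaking services' code): fetch a URL with two
# different client profiles and compare the responses; divergence can indicate
# white-page/black-page cloaking. URL and headers here are placeholders.
import hashlib
import requests

SCANNER_UA = "SecurityScanner/1.0"
BROWSER_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0 Safari/537.36")

def fingerprint(url: str, user_agent: str) -> str:
    resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    return hashlib.sha256(resp.content).hexdigest()

def possibly_cloaked(url: str) -> bool:
    # Identical hashes do not prove a clean page, but differing ones deserve review.
    return fingerprint(url, SCANNER_UA) != fingerprint(url, BROWSER_UA)

if __name__ == "__main__":
    print(possibly_cloaked("https://example.com"))  # placeholder URL
```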

Experts warn that cloaking is no longer a fringe method but a core part of cybercrime, now available as a commercial service. As these tactics grow more sophisticated, the pressure increases on browser developers to improve detection and protect users more effectively.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scattered Spider cyberattacks set to intensify, warn FBI and CISA

The cybercriminal group known as Scattered Spider is expected to intensify its attacks in the coming weeks, according to a joint warning issued by the FBI, CISA, and cybersecurity agencies in Canada, the UK and Australia.

These warnings highlight the group’s increasingly sophisticated methods, including impersonating employees to deceive IT help desks and hijack multi-factor authentication processes.

Instead of relying on old techniques, the hackers now deploy stealthy tools like RattyRAT and DragonForce ransomware, particularly targeting VMware ESXi servers.

Their attacks combine social engineering with SIM swapping and phishing, enabling them to exfiltrate sensitive data before locking systems and demanding payment — a tactic known as double extortion.

Scattered Spider, also referred to as Octo Tempest, is reportedly creating fake online identities and infiltrating internal communication channels like Slack and Microsoft Teams. In some cases, they have even joined incident response calls to gain insight into how companies are reacting.

Security agencies urge organisations to adopt phishing-resistant multi-factor authentication, audit remote access software, monitor unusual logins and behaviours, and ensure offline encrypted backups are maintained.

More incidents are expected, as the group continues refining its strategies instead of slowing down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out AI age detection to protect teen users

In a move aimed at enhancing online protections for minors, Google has started rolling out a machine learning-based age estimation system for signed-in users in the United States.

The new system uses AI to identify users who are likely under the age of 18, with the goal of providing age-appropriate digital experiences and strengthening privacy safeguards.

Initially deployed to a small number of users, the system is part of Google’s broader initiative to align its platforms with the evolving needs of children and teenagers growing up in a digitally saturated world.

‘Children today are growing up with technology, not growing into it like previous generations. So we’re working directly with experts and educators to help you set boundaries and use technology in a way that’s right for your family,’ the company explained in a statement.

The system builds on changes first previewed earlier this year and reflects Google’s ongoing efforts to comply with regulatory expectations and public demand for better youth safety online.

Once a user is flagged by the AI as likely underage, Google will introduce a range of restrictions — most notably in advertising, content recommendation, and data usage.

According to the company, users identified as minors will have personalised advertising disabled and will be shielded from ad categories deemed sensitive. These protections will be enforced across Google’s entire advertising ecosystem, including AdSense, AdMob, and Ad Manager.

The company’s publishing partners were informed via email this week that no action will be required on their part, as the changes will be implemented automatically.

Google’s blog post titled ‘Ensuring a safer online experience for US kids and teens’ explains that its machine learning model estimates age based on behavioural signals, such as search history and video viewing patterns.
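
Google has not disclosed how the model works, so the toy sketch below is purely hypothetical: it only illustrates the general idea of turning behavioural signals into an ‘under 18’ probability, using made-up features and a basic logistic regression rather than anything resembling Google’s system.

```python
# Hypothetical toy sketch only - Google's model is proprietary. It merely illustrates
# estimating an "under 18" probability from behavioural signals.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up features per account: [share of gaming-related queries, share of study-help
# queries, average session length in minutes]; label 1 = known under-18 account.
X = np.array([[0.70, 0.40, 35], [0.10, 0.05, 90], [0.55, 0.60, 25],
              [0.05, 0.02, 120], [0.65, 0.35, 40], [0.15, 0.10, 75]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability that a new, unseen account belongs to a minor.
new_account = np.array([[0.60, 0.50, 30]])
print(f"estimated probability under 18: {model.predict_proba(new_account)[0, 1]:.2f}")
```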

If a user is mistakenly flagged or wishes to confirm their age, Google will offer verification tools, including the option to upload a government-issued ID or submit a selfie.

The company stressed that the system is designed to respect user privacy and does not involve collecting new types of data. Instead, it aims to build a privacy-preserving infrastructure that supports responsible content delivery while minimising third-party data sharing.

Beyond advertising, the new protections extend into other parts of the user experience. For those flagged as minors, Google will disable Timeline location tracking in Google Maps and also add digital well-being features on YouTube, such as break reminders and bedtime prompts.

Google will also tweak recommendation algorithms to avoid promoting repetitive content on YouTube, and restrict access to adult-rated applications in the Play Store for flagged minors.

The initiative is not Google’s first foray into child safety technology. The company already offers Family Link for parental controls and YouTube Kids as a tailored platform for younger audiences.

However, the deployment of automated age estimation reflects a more systemic approach, using AI to enforce real-time, scalable safety measures. Google maintains that these updates are part of a long-term investment in user safety, digital literacy, and curating age-appropriate content.

Similar initiatives have already been tested in international markets, and the company says it will closely monitor the US rollout before considering broader implementation.

‘This is just one part of our broader commitment to online safety for young users and families,’ the blog post reads. ‘We’ve continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms.’

Nonetheless, the programme is likely to attract scrutiny. Critics may question the accuracy of AI-powered age detection and whether the measures strike the right balance between safety, privacy, and personal autonomy — or risk overstepping.

Some parents and privacy advocates may also raise concerns about the level of visibility and control families will have over how children are identified and managed by the system.

As public pressure grows for tech firms to take greater responsibility in protecting vulnerable users, Google’s rollout may signal the beginning of a new industry standard.

The shift towards AI-based age assurance reflects a growing consensus that digital platforms must proactively mitigate risks for young users through smarter, more adaptive technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!