New malware steals 200,000 passwords and credit card details through fake software

Hackers are now using fake versions of familiar software and documents to spread a new info-stealer known as PXA Stealer.

First discovered by Cisco Talos, the malware campaign is believed to be operated by Vietnamese-speaking cybercriminals and has already compromised more than 4,000 unique IP addresses across 62 countries.

Instead of targeting businesses alone, the attackers are now focusing on ordinary users in countries including the US, South Korea, and the Netherlands.

PXA Stealer is written in Python and designed to collect passwords, credit card data, cookies, autofill information, and even crypto wallet details from infected systems.

It spreads through ZIP archives and executables disguised as familiar files such as Microsoft Word documents; these bundles also contain legitimate-looking programs, such as Haihaisoft PDF Reader, which are abused to sideload the malicious code.

The malware uses malicious DLL files to gain persistence through the Windows Registry and downloads additional harmful files via Dropbox. After infection, it uses Telegram to exfiltrate stolen data, which is then sold on the dark web.
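
For readers curious how Registry-based persistence looks from the defender's side, the short Python sketch below (an illustrative audit script, not PXA Stealer's own code) lists the programs registered to auto-start under the current user's Run key, one of the locations where this kind of persistence typically appears.

```python
# Defensive illustration only: list programs set to auto-start via the
# HKCU Run key, the kind of Registry persistence described above.
# Windows-only; uses the standard-library winreg module.
import winreg

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    """Yield (name, command) pairs registered under the current user's Run key."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:  # no more values under this key
                break
            yield name, value
            index += 1

if __name__ == "__main__":
    for name, command in list_run_entries():
        print(f"{name}: {command}")
```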

Once activated, the malware even attempts to open a decoy PDF in Microsoft Edge; the file fails to launch and shows an error message, but by that point it has already done the damage.

To reduce the risk of infection, users should not click unknown email links or open attachments from unfamiliar senders. Rather than saving passwords and card details in browsers, a trusted password manager is a safer choice.

Although antivirus software remains helpful, hackers in the campaign have used sophisticated methods to bypass detection, making careful online behaviour more important than ever.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity has allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
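
For context, the sketch below shows how a well-behaved crawler is expected to consult robots.txt before fetching a page, using Python's standard urllib.robotparser; the site URL and user-agent string are hypothetical placeholders.

```python
# Minimal sketch of how a compliant crawler honours robots.txt, using
# Python's standard urllib.robotparser. The site and user agent below
# are hypothetical placeholders.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

user_agent = "ExampleBot"
url = "https://example.com/private/report.html"

if parser.can_fetch(user_agent, url):
    print("Allowed to crawl", url)
else:
    print("robots.txt disallows", url)  # a compliant bot stops here
```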

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI to improve its ability to detect mental or emotional distress

People in search of emotional support during a mental health crisis have reportedly turned to ChatGPT as their ‘therapist’. While this may seem like a convenient option, reports have shown that ChatGPT’s responses have sometimes amplified people’s delusions rather than helping them find coping mechanisms. As a result, OpenAI stated that it plans to improve the chatbot’s ability to detect mental distress in the new GPT-5 AI model, which is expected to launch later this week.

OpenAI admits that GPT-4 sometimes failed to recognise signs of delusion or emotional dependency, especially in vulnerable users. To encourage healthier use of ChatGPT, which now serves nearly 700 million weekly users, OpenAI is introducing break reminders during long sessions, prompting users to pause or continue chatting.

Additionally, it plans to refine how and when ChatGPT displays break reminders, following a trend seen on platforms like YouTube and TikTok.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. One prompt might trigger empathetic advice and best-friend energy, a murky interaction without safeguards.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US launches $100 million cybersecurity grant for states

The US government has unveiled more than $100 million in funding to help local and tribal communities strengthen their cybersecurity defences.

The announcement came jointly from the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA), both part of the Department of Homeland Security.

Instead of a single pool, the funding is split into two distinct grants. The State and Local Cybersecurity Grant Program (SLCGP) will provide $91.7 million to 56 states and territories, while the Tribal Cybersecurity Grant Program (TCGP) allocates $12.1 million specifically for tribal governments.

These funds aim to support cybersecurity planning, exercises and service improvements.

CISA’s acting director, Madhu Gottumukkala, said the grants ensure communities have the tools needed to defend digital infrastructure and reduce cyber risks. The effort follows a significant cyberattack on St. Paul, Minnesota, which prompted a state of emergency and deployment of the National Guard.

Officials say the funding reflects a national commitment to proactive digital resilience instead of reactive crisis management. Homeland Security leaders describe the grant as both a strategic investment in critical infrastructure and a responsible use of taxpayer funds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Weak cyber hygiene in smart devices risks national infrastructure

The UK’s designation of data centres as Critical National Infrastructure highlights their growing strategic importance, yet a pressing concern remains over vulnerabilities in their operational technology (OT) and Internet of Things (IoT) systems. While IT security often receives significant investment, the same cannot be said for these operational and connected-device layers.

Attackers increasingly target these overlooked systems, gaining access through insecure devices such as IP cameras and biometric scanners. Many of these operate on outdated firmware and lack even basic protections, making them ideal footholds for malicious actors.

There have already been known breaches, with OT systems used in botnet activity and crypto mining, often without detection. These attacks not only compromise security in the UK but can destabilise infrastructure by overloading resources or bypassing safeguards.

Addressing these threats requires full visibility across all connected systems, with real-time monitoring, wireless traffic analysis, and network segmentation. Experts urge data centre operators to act now, not in response to a breach, but to prevent one entirely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s transformation of work habits, mindset and lifestyle

At Mindvalley’s AI Summit, former Google Chief Decision Scientist Cassie Kozyrkov described AI as not a substitute for human thought but a magnifier of what the human mind can produce. Rather than replacing us, AI lets us offload mundane tasks and focus on deeper cognitive and creative work.

Work structures are being transformed, not just in factories, but behind computer screens. AI now handles administrative ‘work about work,’ multitasking, scheduling, and research summarisation, lowering friction in knowledge work and enabling people to supervise agents rather than execute tasks manually.

Personal life is being reshaped, too. AI tools for finance or health, such as budgeting apps or personalised diagnostics, move decisions into data-augmented systems with faster insight and fewer human biases.

Meanwhile, creativity is co-authored via AI-generated design, music or writing, requiring humans to filter, refine and ideate beyond the algorithm.

Recognising cognitive change, AI thought leaders envision a new era where ‘blended work’ prevails: humans manage AI agents, call the shots, and wield ethical oversight, while the AI executes pipelines of repetitive or semi-intelligent tasks.

Scholars warn that this model demands new fairness, transparency, and collaboration skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers infiltrate Southeast Asian telecom networks

A cyber group breached telecoms across Southeast Asia, deploying advanced tracking tools instead of stealing data. Palo Alto Networks’ Unit 42 assesses the activity as ‘associated with a nation-state nexus’.

A hacking group gained covert access to telecom networks across Southeast Asia, most likely to track users’ locations, according to cybersecurity analysts at Palo Alto Networks’ Unit 42.

The campaign lasted from February to November 2024.

Instead of stealing data or directly communicating with mobile devices, the hackers deployed custom tools such as CordScan, designed to scan and capture data from mobile network infrastructure, including SGSN (Serving GPRS Support Node) systems. These methods suggest the attackers focused on tracking rather than data theft.

Unit 42 assessed the activity ‘with high confidence’ as ‘associated with a nation state nexus’. The Unit notes that ‘this cluster heavily overlaps with activity attributed to Liminal Panda, a nation state adversary tracked by CrowdStrike’; according to CrowdStrike, Liminal Panda is considered to be a ‘likely China-nexus adversary’. It further states that ‘while this cluster significantly overlaps with Liminal Panda, we have also observed overlaps in attacker tooling with other reported groups and activity clusters, including Light Basin, UNC3886, UNC2891 and UNC1945.’

The attackers initially gained access by brute-forcing SSH credentials using login details specific to telecom equipment.

Once inside, they installed new malware, including a backdoor named NoDepDNS, which tunnels malicious data through port 53 — typically used for DNS traffic — in order to avoid detection.
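
As a rough illustration of why defenders scrutinise port 53, the Python sketch below (a simple heuristic, not Unit 42's actual detection method) flags DNS query names whose labels are unusually long and high-entropy, a common tell of data being tunnelled through DNS; the sample query names are made up.

```python
# Illustrative heuristic only: data smuggled over port 53 often appears as
# unusually long, high-entropy DNS query names. This sketch scores hostnames
# pulled from DNS logs; the sample names are invented.
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnel(qname: str, max_label=40, entropy_threshold=3.5) -> bool:
    labels = qname.rstrip(".").split(".")
    longest = max(labels, key=len)
    return len(longest) > max_label and shannon_entropy(longest) > entropy_threshold

queries = [
    "www.example.com.",                                          # ordinary lookup
    "a9f3c1e77b42d0aa5ec91f04b7d63e218c55f0a1b9d2.evil.test.",   # hypothetical tunnel
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunnel(q) else "ok")
```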

To maintain stealth, the group disguised malware, altered file timestamps, disabled system security features and wiped authentication logs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises access to information in the UK, filtering what users see according to algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may streamline convenience, it lacks accountability. Regulated journalism must operate under legal frameworks, whereas AI faces no such scrutiny even when errors have real consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use steganography to evade Windows defences

North Korea-linked hacking group APT37 is using malicious JPEG image files to deploy advanced malware on Windows systems, according to Genians Security Centre. The new campaign showcases a more evasive version of RoKRAT malware, which hides payloads in image files through steganography.
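
To illustrate the general principle of steganography (a toy example, not RoKRAT's actual scheme), the Python sketch below hides a short message in the least-significant bits of an image's pixels using the Pillow library; the file names are hypothetical, and a lossless PNG is used so the hidden bits survive saving.

```python
# Toy illustration of the steganography principle described above (not the
# RoKRAT technique): hide a short message in the least-significant bit of
# each pixel channel. Requires Pillow; file names are hypothetical.
from PIL import Image

def embed(cover_path: str, out_path: str, message: bytes) -> None:
    img = Image.open(cover_path).convert("RGB")
    payload = len(message).to_bytes(4, "big") + message           # length prefix
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    flat = [channel for pixel in img.getdata() for channel in pixel]
    if len(bits) > len(flat):
        raise ValueError("cover image too small for payload")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit                            # overwrite LSB
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")                                     # lossless format

embed("cover.png", "stego.png", b"hello")
```

A decoder would simply read the same least-significant bits back in order, which is why visually the cover image appears unchanged while still carrying the payload.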

These attacks rely on large Windows shortcut files embedded in email attachments or cloud storage links, enticing users with decoy documents while executing hidden code. Once activated, the malware launches scripts to decrypt shellcode and inject it into trusted apps like MS Paint and Notepad.

This fileless strategy makes detection difficult, avoiding traditional antivirus tools by leaving minimal traces. The malware also exfiltrates data through legitimate cloud services, complicating efforts to trace and block the threat.

Researchers stress the urgency for organisations to adopt layered cybersecurity measures: behavioural monitoring, robust endpoint management, and ongoing user education. Defenders must prioritise proactive strategies to protect critical systems as threat actors evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!