BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection through memory-based decryption and by blocking analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, providing full control over devices. Capabilities include automatic permission granting, background persistence, keylogging, GPS tracking, camera access, and capture of screen-lock credentials.

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively trained using adversarial machine learning methods, such as Generative Adversarial Networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.


AI agents face growing prompt injection risks

AI developers are working on new defences against prompt-injection attacks that aim to manipulate AI agents. Security specialists warn that attackers are increasingly using social engineering techniques to influence AI systems that interact with online content.

Researchers say AI agents that browse the web or handle user tasks face growing risks from hidden instructions embedded in emails or websites. Experts in the US note that attackers often attempt to trick AI into revealing sensitive information.

Engineers are responding by designing systems that limit the impact of manipulation attempts. Developers in the US say AI tools must include safeguards preventing sensitive data from being transmitted without user approval.

Security teams are also introducing technologies that detect risky actions and prompt users for confirmation. Specialists argue that strong system design and user oversight will remain essential as AI agents gain more autonomy.
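The safeguards described above — blocking sensitive data transfers without approval and prompting users to confirm risky actions — can be sketched as a simple confirmation gate. This is a minimal illustration, not any specific vendor's implementation; the action names, argument heuristics, and `user_approves` callback are all hypothetical.

```python
# Hypothetical confirmation gate for agent actions: sensitive operations
# are executed only after explicit user approval.

SENSITIVE_ACTIONS = {"send_email", "transfer_funds", "share_file"}

def requires_confirmation(action: str, args: dict) -> bool:
    """Flag actions that transmit data or money for explicit user approval."""
    if action in SENSITIVE_ACTIONS:
        return True
    # Heuristic: treat arguments that look like external addresses as risky.
    return any("@" in str(v) or str(v).startswith("http") for v in args.values())

def execute(action: str, args: dict, user_approves) -> str:
    """Run an action, pausing for user confirmation when it looks risky."""
    if requires_confirmation(action, args) and not user_approves(action, args):
        return "blocked: user approval required"
    return f"executed: {action}"
```

Real agent frameworks add far richer policies, but the core design choice is the same: the system, not the model, decides which actions need a human in the loop.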


Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.
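The classifier-based gating described above can be illustrated with a toy stand-in. The keyword check below is purely illustrative (Google's actual classifiers are learned models, and the topic list and refusal message here are invented); the point is where such checks sit in the flow: once on the user's input and again on the model's output.

```python
# Toy stand-in for input- and output-side safety classifiers.
# BLOCKED_TOPICS and the refusal text are illustrative assumptions.

BLOCKED_TOPICS = ("self-harm", "violent extremism")

def classify_query(text: str) -> str:
    """Crude keyword classifier: 'block' or 'allow'."""
    lowered = text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "block"
    return "allow"

REFUSAL = "I can't help with that, but support resources are available."

def respond(query: str, model) -> str:
    """Gate both the incoming query and the model's response."""
    if classify_query(query) == "block":
        return REFUSAL
    answer = model(query)
    # Output-side check: the response is filtered too, not just the input.
    if classify_query(answer) == "block":
        return REFUSAL
    return answer
```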

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.


AI-driven adaptive malware highlights new cyber threat landscape

Google’s cybersecurity division, Mandiant, has warned about the growing threat of AI-driven adaptive malware, highlighting how AI is reshaping the cyber threat landscape.

According to a recent report, adaptive malware can modify its behaviour and code in response to the environment it encounters, thereby evading traditional security tools. By analysing the security systems protecting a target, the malware can rewrite parts of its code to bypass detection.

Unlike traditional malware, which typically follows fixed instructions, adaptive malware can adjust its behaviour during an attack. This capability makes it more difficult for conventional cybersecurity tools to detect and block malicious activity.

Mandiant noted that such malware is increasingly associated with advanced persistent threat (APT) groups that conduct long-term, targeted cyber operations. These groups often pursue espionage objectives or financial gain while maintaining prolonged access to compromised systems.

AI is also being used to automate elements of cyberattacks. Machine learning algorithms allow malicious software to anticipate defensive measures and adjust its behaviour in real time. In some cases, attackers are integrating AI into broader automated attack chains. AI-driven malware can gather information, adapt its strategy, and continue operating with minimal human intervention.

Security researchers say autonomous AI agents may be capable of managing multiple stages of an attack, including reconnaissance, exploitation, and persistence, while remaining undetected.

To address these evolving threats, Mandiant recommends that organisations strengthen their cybersecurity strategies by deploying advanced detection and response tools, including AI-based systems that can identify anomalous behaviour. As AI capabilities continue to develop, cybersecurity experts say understanding adaptive malware and automated attack techniques will be essential for organisations seeking to protect their systems and data.
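The "identify anomalous behaviour" idea behind such detection tools can be sketched with a simple statistical baseline. Real AI-based systems use far richer behavioural features and learned models; the z-score check below (with an assumed threshold of three standard deviations) only illustrates the principle of flagging activity that deviates sharply from a historical baseline.

```python
# Minimal behavioural-anomaly sketch: flag a measurement that deviates
# sharply from its historical baseline. The 3-sigma threshold is an
# illustrative assumption, not a recommendation.
import statistics

def is_anomalous(history: list, current: float, threshold: float = 3.0) -> bool:
    """Return True if `current` is more than `threshold` standard
    deviations away from the mean of `history`."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold
```

A production system would track many signals per process (network calls, file writes, child processes) rather than a single rate, but the detection logic follows the same baseline-and-deviation pattern.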

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, a growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.


EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.


Malicious npm package targets developers with Openclaw impersonation

Security researchers uncovered a malicious npm package impersonating an Openclaw AI installer, designed to infect developer machines with credential-stealing malware.

JFrog Security Research identified the attack in early March 2026 after the package appeared on the npm registry and was downloaded roughly 178 times.

The deceptive package mimics legitimate Openclaw tools and contains ordinary-looking JavaScript files and documentation. Hidden scripts run during installation, displaying a fake command-line interface and a fabricated system prompt that requests the user’s password.

Entering the password grants the malware elevated access and allows it to download an encrypted payload from a remote command server. Once installed, the payload deploys Ghostloader, a remote access trojan that persists on the system and communicates with attacker servers.

Researchers say the malware targets sensitive information, including saved passwords, browser cookies, SSH keys, and cryptocurrency wallet files. Developers are advised to remove the package immediately, rotate credentials, and install software only from verified sources.
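The attack above abuses npm's lifecycle scripts, which run automatically during installation. As a defensive check, a package manifest can be scanned for these hooks before installing; the sketch below uses npm's documented lifecycle script names (`preinstall`, `install`, `postinstall`, `prepare`).

```python
# Defensive sketch: list any scripts in a package.json that npm would
# execute automatically during `npm install` -- the mechanism abused by
# install-time malware.
import json

INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_scripts(package_json: str) -> dict:
    """Return the lifecycle scripts that run at install time."""
    manifest = json.loads(package_json)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}
```

Running `npm install --ignore-scripts` also prevents lifecycle scripts from executing, at the cost of breaking packages that legitimately rely on them.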


Tycoon 2FA phishing service disrupted in global cybercrime crackdown

Authorities have disrupted the Tycoon 2FA phishing-as-a-service (PhaaS) platform, which sent millions of phishing emails to organisations worldwide.

The operation, led by Microsoft, Europol, and several industry partners, targeted the infrastructure behind Tycoon 2FA, which enabled large-scale phishing campaigns against more than 500,000 organisations each month.

By mid-2025, Tycoon 2FA accounted for 62% of the phishing attempts blocked by Microsoft, with over 30 million malicious emails blocked in a single month. Experts link the platform to around 96,000 global victims since 2023, including 55,000 Microsoft customers.

Researchers from Resecurity found cybercriminals widely used the platform to impersonate legitimate users and gain unauthorised access to accounts such as Microsoft 365, Outlook and Gmail. The service relied on techniques such as URL rotation using open redirect vulnerabilities and the misuse of Cloudflare Workers to hide malicious infrastructure.

‘The author of Tycoon 2FA is actively updating the tool with regular kit updates,’ reads the report published by Resecurity. ‘What makes Tycoon 2FA so special is that the kit effectively combines multiple methods to deliver phishing at scale—from PDF attachments to QR codes.’

Authorities say taking the infrastructure offline disrupts a key pathway for account takeover attacks and prevents additional threats, such as data theft, ransomware, business email compromise, and financial fraud.


GitHub malware campaign uses SEO tricks to steal browser data

Cybersecurity researchers have uncovered a malware campaign spreading through over 100 GitHub repositories disguised as free software tools. Hackers used SEO-heavy descriptions to make their fake repositories appear high in search results, close to legitimate software.

Users searching for popular programs were directed to counterfeit download pages. These pages offered ZIP files containing BoryptGrab, a malware designed to steal data from infected Windows systems. The files were disguised as cracked software, gaming cheats, or utility tools.

The malware collects sensitive information, including browser passwords, cookies, and cryptocurrency wallet details. It can access nine major browsers, including Chrome, Edge, Firefox, Opera, Brave, and Vivaldi, and bypass some security protections.

Certain variants also install additional tools that allow remote access and persistent control over infected machines, enabling hackers to run commands, maintain ongoing access, and steal more information without the user’s knowledge.

Trend Micro, the cybersecurity firm that reported the campaign, noted some code and logs suggest a possible Russian origin, though attribution is not confirmed. Experts warn that GitHub and search engine manipulation make this attack method especially dangerous.

Users are advised to download software only from trusted sources and to verify the authenticity of the repository. Organisations should follow security best practices such as software allowlisting, maintaining inventory, and removing unauthorised applications to prevent similar attacks.
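One concrete way to verify a download before running it is to compare its SHA-256 hash against a checksum published by the trusted source. The sketch below shows the comparison; the checksum value used in practice must come from the official project, not from the download page itself.

```python
# Sketch: verify a downloaded file against a published SHA-256 checksum
# before executing it. The expected checksum must be obtained from a
# trusted channel (e.g. the project's official site).
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, expected_hex: str) -> bool:
    """Return True only if the file matches the published checksum."""
    return sha256_of(path) == expected_hex.lower()
```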
