AI-driven adaptive malware highlights new cyber threat landscape

Google’s cybersecurity division, Mandiant, has warned about the growing threat of AI-driven adaptive malware, highlighting how AI is reshaping the cyber threat landscape.

According to a recent report, adaptive malware can modify its behaviour and code in response to the environment it encounters, thereby evading traditional security tools. By analysing the security systems protecting a target, the malware can rewrite parts of its code to bypass detection.

Unlike traditional malware, which typically follows fixed instructions, adaptive malware can adjust its behaviour during an attack. This capability makes it more difficult for conventional cybersecurity tools to detect and block malicious activity.

Mandiant noted that such malware is increasingly associated with advanced persistent threat (APT) groups that conduct long-term, targeted cyber operations. These groups often pursue espionage objectives or financial gain while maintaining prolonged access to compromised systems.

AI is also being used to automate elements of cyberattacks. Machine learning algorithms allow malicious software to anticipate defensive measures and adjust its behaviour in real time. In some cases, attackers are integrating AI into broader automated attack chains. AI-driven malware can gather information, adapt its strategy, and continue operating with minimal human intervention.

Security researchers say autonomous AI agents may be capable of managing multiple stages of an attack, including reconnaissance, exploitation, and persistence, while remaining undetected.

To address these evolving threats, Mandiant recommends that organisations strengthen their cybersecurity strategies by deploying advanced detection and response tools, including AI-based systems that can identify anomalous behaviour. As AI capabilities continue to develop, cybersecurity experts say understanding adaptive malware and automated attack techniques will be essential for organisations seeking to protect their systems and data.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and quantum computing reshape the global cybersecurity landscape

Cybersecurity risks are increasing as digital connectivity expands across governments, businesses and households.

According to Thales Group, the growing number of connected devices and digital services has significantly expanded the potential entry points for cyberattacks.

AI is reshaping the cybersecurity landscape by enabling attackers to identify vulnerabilities at unprecedented speed.

Security specialists increasingly describe the environment as a contest in which defensive systems must deploy AI to counter adversaries using similar technologies to exploit weaknesses in digital infrastructure.

Security concerns also extend beyond large institutions. Connected devices in homes, including smart cameras and speakers, often lack robust security protections, increasing exposure for individuals and networks.

Policymakers in Europe are responding through measures such as the Cyber Resilience Act, which will introduce mandatory security requirements for connected products sold in the EU.

Long-term risks are also emerging from advances in quantum computing.

Experts warn that powerful future machines could eventually break widely used encryption systems that currently protect communications, financial data and government networks, prompting organisations to adopt quantum-resistant security methods.

EU lawmakers call for stronger copyright safeguards in AI training

The European Parliament has adopted a report urging policymakers to establish a long-term framework protecting copyrighted works used in AI training.

These recommendations aim to ensure that creative industries retain transparency and fair treatment as generative AI technologies expand.

Among the central proposals is the creation of a European register managed by the European Union Intellectual Property Office. The database would list copyrighted works used to train AI systems and identify creators who have chosen to exclude their content from such use.

Lawmakers in the EU are also calling for greater transparency from AI developers, including disclosure of the websites from which training data has been collected. According to the report, failing to meet transparency requirements could raise questions about compliance with existing copyright rules.

The recommendations have received mixed reactions from industry stakeholders.

Organisations representing creators argue that stronger safeguards are necessary to ensure fair remuneration and legal clarity, while technology sector groups caution that additional requirements could create complexity for companies developing AI systems.

The report is not legally binding but signals the political direction of ongoing European discussions on copyright and AI governance.

Amazon launches Health AI to assist with medical queries

Amazon has launched a new AI-powered assistant, Health AI, on its website and mobile app. The tool is designed to answer health questions, explain medical records, manage prescriptions, and connect users with healthcare providers.

Health AI can also book appointments and guide users based on their health information if they grant access to their records. The feature is currently limited to the US, with a wider rollout planned in the coming weeks.

The assistant is linked with One Medical, Amazon’s healthcare service, allowing users to communicate with licensed professionals through messages, video consultations, or in-person visits. It can also send prescription renewal requests and suggest relevant health products.

Users can create an Amazon Health Profile and enable two-step authentication to start using Health AI. By allowing the AI to access their medical records, including medications, lab results, and diagnoses, users can receive more personalised responses.

Amazon emphasises that Health AI is a support tool rather than a replacement for doctors. It helps users understand health information and prepare for discussions with healthcare providers, but it does not provide independent diagnoses or treatment.

As part of an introductory offer, eligible US Prime members can receive up to five free message consultations with One Medical providers. The system runs on Amazon Bedrock and uses multiple AI agents to manage tasks, monitor interactions, and escalate to human professionals when necessary.

EU explores AI image generation safeguards

The Council of the European Union is examining a compromise proposal that could introduce restrictions on certain AI systems capable of generating sensitive synthetic images.

The discussions form part of ongoing adjustments to the EU AI Act.

The proposed measure would primarily address AI tools that generate illegal material, particularly content involving the exploitation of minors.

Policymakers are considering ways to prevent the development or deployment of systems that could produce such material while maintaining proportionate rules for legitimate AI applications.

Early indications suggest the proposal may not apply to images depicting people in standard clothing contexts, such as swimwear. The distinction reflects policymakers’ effort to define the scope of restrictions without imposing unnecessary limits on common image-generation uses.

The debate highlights broader regulatory challenges linked to generative AI technologies. European institutions are seeking to strengthen protections against harmful uses of AI while preserving space for innovation and lawful digital services.

Further negotiations among the EU institutions are expected as lawmakers continue refining how these provisions could fit within the broader European framework governing AI.

Moltbook founders join Meta’s AI research lab

Meta Platforms has acquired Moltbook, a social networking platform designed for AI agents. The deal brings co-founders Matt Schlicht and Ben Parr into Meta’s AI research division, the Superintelligence Labs, led by Alexandr Wang.

Financial terms of the acquisition were not disclosed, and the founders are expected to start on 16 March.

Moltbook, launched in January, allows AI-powered bots to exchange code and interact socially in a Reddit-like environment. The platform has sparked debate on AI autonomy and real-world capabilities, highlighting growing competition among tech giants for AI talent and technology.

Industry figures have offered differing views on the platform’s significance. OpenAI CEO Sam Altman called Moltbook a potential fad but acknowledged its underlying technology hints at the future of AI agents.

Meanwhile, Anthropic’s chief product officer, Mike Krieger, noted that most users are not ready to grant AI full autonomy over their systems.

The platform’s growth also highlighted security risks. Cybersecurity firm Wiz reported a vulnerability that exposed private messages, email addresses, and credentials, which was resolved after the owners were notified.

Japan expands strategic investment in AI, quantum computing, and drones

Japan has identified dozens of advanced technologies as priority investment targets as part of an economic strategy led by Sanae Takaichi.

The plan aims to channel public and private capital into industries expected to drive long-term economic growth.

Government officials selected 61 technologies and products for support across 17 strategic sectors. The list includes emerging fields such as AI, quantum computing, regenerative medicine and marine drones.

Many of these technologies are still in early development but are considered important for economic security and global competitiveness.

The strategy forms a central pillar of Takaichi’s broader economic agenda to strengthen Japan’s industrial base and encourage investment in high-growth sectors. Authorities plan to release spending estimates and implementation timelines by summer as part of a detailed investment roadmap.

Japan has also set ambitious market goals in several sectors. Officials aim to secure more than 30% of the global AI robotics market by 2040 while increasing annual sales of domestically produced semiconductors to ¥40 trillion.

Several Japanese technology companies could benefit from the policy direction. Firms such as Fanuc, Yaskawa Electric and Mitsubishi Electric are integrating AI into industrial robots, while Sony Group produces sensors used in robotic systems.

Chipmakers, including Rohm, Kioxia and Renesas Electronics, may also benefit from increased investment in semiconductor manufacturing and related supply chains.

Despite strong investor interest, analysts note uncertainty about how the programme will be financed, particularly as Japan faces rising spending pressures from social security, defence and public debt.

Malicious npm package targets developers with Openclaw impersonation

Security researchers uncovered a malicious npm package impersonating an Openclaw AI installer, designed to infect developer machines with credential-stealing malware.

JFrog Security Research identified the attack in early March 2026 after the package appeared on the npm registry and was downloaded roughly 178 times.

The deceptive package mimics legitimate Openclaw tools and contains ordinary-looking JavaScript files and documentation. Hidden scripts run during installation, displaying a fake command-line interface and a fabricated system prompt that requests the user’s password.

Entering the password grants the malware elevated access and allows it to download an encrypted payload from a remote command server. Once installed, the payload deploys Ghostloader, a remote access trojan that persists on the system and communicates with attacker servers.

Researchers say the malware targets sensitive information, including saved passwords, browser cookies, SSH keys, and cryptocurrency wallet files. Developers are advised to remove the package immediately, rotate credentials, and install software only from verified sources.
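For readers unfamiliar with the mechanism the researchers describe, npm automatically executes lifecycle scripts declared in a package's manifest at install time. The `preinstall` entry below is a hypothetical illustration of the kind of hook install-time malware typically abuses; the package and script names are invented, not taken from the incident itself.

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node hidden-setup.js"
  }
}
```

Installing with `npm install --ignore-scripts` (or setting `npm config set ignore-scripts true`) prevents such hooks from running automatically, which is one practical way to reduce exposure when evaluating unfamiliar packages.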

Dutch court increases pressure on Meta over non-profiling social media feeds

A court in the Netherlands has increased potential penalties against Meta after ruling that changes to social media timelines must be implemented urgently.

The decision raises the potential fine for non-compliance from €5 million to €10 million if required adjustments are not applied to Facebook and Instagram feeds.

Judges at the Amsterdam Court of Appeal said users must be able to select a timeline that does not rely on profiling-based recommendations.

The ruling follows a legal challenge from the digital rights organisation Bits of Freedom, which argued that users who switched away from algorithmic feeds were automatically returned to them after navigating the platform or reopening the application.

The court concluded that the automatic resetting mechanism represents a deceptive design practice known as a ‘dark pattern’.

Such practices are prohibited under the EU’s Digital Services Act, which requires large online platforms to provide greater transparency and user control over recommendation systems.

Judges acknowledged that Meta had already introduced several technical changes, although not all required measures were fully implemented. The company must ensure that the non-profiling timeline option remains active once selected, rather than reverting to algorithmic recommendations.

The dispute also highlights regulatory tensions within the European framework. Before turning to the courts, Bits of Freedom submitted a complaint to Coimisiún na Meán, the national authority responsible for overseeing Meta’s compliance with the EU rules.

According to the organisation, the lack of progress from regulators encouraged legal action in Dutch courts.

Meta indicated that the company intends to challenge the decision and pursue further legal proceedings. The case could become an important test of how the Digital Services Act is enforced against major online platforms across Europe.
