Hackers steal $500K via malicious Cursor AI extension

A cyberattack targeting the Cursor AI development environment has resulted in the theft of $500,000 in cryptocurrency from a Russian developer. Despite following good security practices and running a freshly installed operating system, the victim installed a malicious extension named ‘Solidity Language’ in June 2025.

Masquerading as a syntax highlighting tool, the fake extension gamed search rankings to appear above legitimate alternatives. Once installed, it offered no development features at all; instead, it acted as a dropper for malware.

It contacted a command-and-control server and began deploying scripts designed to check for remote desktop software and install backdoors. The malware used PowerShell scripts to install ScreenConnect, granting persistent access to the victim’s system through a relay server.

Securelist analysts found that the extension gamed the Open VSX registry’s ranking algorithm, which favours recently updated listings, by publishing with a more recent update date than the legitimate package. Further investigation revealed the same attack methods in other packages, including npm’s ‘solsafe’ and three VS Code extensions.
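
For readers who want to sanity-check an extension before installing it, the minimal Python sketch below fetches registry metadata and prints the signals an attacker can game. It assumes Open VSX’s public REST endpoint (GET /api/{namespace}/{name}) and typical response field names; both should be verified against the registry’s current API documentation.

```python
import requests  # third-party: pip install requests

# Assumed Open VSX endpoint; verify against the current API docs.
OPEN_VSX = "https://open-vsx.org/api"

def extension_metadata(namespace: str, name: str) -> dict:
    """Fetch an extension's registry metadata as JSON."""
    resp = requests.get(f"{OPEN_VSX}/{namespace}/{name}", timeout=10)
    resp.raise_for_status()
    return resp.json()

def summarise(meta: dict) -> None:
    # Field names are assumptions based on typical registry responses.
    print(meta.get("namespace"), "/", meta.get("name"))
    print("  last updated:", meta.get("timestamp"))
    print("  downloads:   ", meta.get("downloadCount"))
    print("  published by:", (meta.get("publishedBy") or {}).get("loginName"))

# A very recent update date combined with few downloads and an unknown
# publisher is a red flag when an older, widely used alternative exists.
```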

The campaign reflects a growing trend of supply chain attacks exploiting AI coding tools to distribute persistent, stealthy malware.

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biased or incomplete data, the consequences are predictable: the system may overpay some claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.
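
As one concrete illustration of the kind of audit this implies, the minimal Python sketch below compares a claims model’s approval rates across portfolio segments. The records and the ‘region’ segment key are invented for the example, not drawn from any insurer’s data.

```python
from collections import defaultdict

# Illustrative claim outcomes; in practice these would come from a
# holdout set scored by the deployed model.
claims = [
    {"region": "urban", "approved": True},
    {"region": "urban", "approved": True},
    {"region": "urban", "approved": False},
    {"region": "rural", "approved": True},
    {"region": "rural", "approved": False},
    {"region": "rural", "approved": False},
]

def approval_rates(records: list[dict], key: str) -> dict[str, float]:
    """Approval rate per segment (region, vehicle age band, channel...)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[key]] += 1
        approved[record[key]] += record["approved"]
    return {seg: approved[seg] / totals[seg] for seg in totals}

print(approval_rates(claims, "region"))
# {'urban': 0.666..., 'rural': 0.333...}

# A large gap between segments is not proof of bias, but it is exactly
# the prompt for human review that a black-box deployment never raises.
```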

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise rather than replace it, striking a balance between innovation and responsibility.

Hackers use fake Termius app to infect macOS devices

Hackers are bundling legitimate Mac apps with ZuRu malware and poisoning search results to lure users into downloading trojanised versions. Security firm SentinelOne reported that the Termius SSH client was recently compromised and distributed through malicious domains and fake downloads.

The ZuRu backdoor, first detected in 2021, lets attackers access infected machines and execute remote commands undetected. Attackers continue to target developers and IT professionals by trojanising trusted tools such as SecureCRT, Navicat, and Microsoft Remote Desktop.

Infected disk image files are slightly larger than legitimate ones because of the embedded malicious binaries, and victims who open them unknowingly launch the malware alongside the real app.
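
A basic defence that follows from this is checking a download against the size and checksum the vendor publishes before opening it. A minimal Python sketch, with placeholder values rather than real Termius release data:

```python
import hashlib
import os

# Placeholders, not real Termius release data.
DMG_PATH = "Termius.dmg"
EXPECTED_SHA256 = "<checksum from the vendor's official site>"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large disk images don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print("size on disk: ", os.path.getsize(DMG_PATH), "bytes")
print("sha256 match: ", sha256_of(DMG_PATH) == EXPECTED_SHA256)
```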

The malware bypasses macOS code-signing protections by replacing the application bundle’s original developer signature with a temporary one of its own. The updated ZuRu variant requires macOS Sonoma 14.1 or newer and supports advanced command-and-control functions built on the open-source Khepri beacon.
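
Because a re-signed bundle can still carry an internally valid signature, inspecting who signed the app is more telling than a simple pass/fail verification. The sketch below shells out to macOS’s built-in codesign tool; the application path is a placeholder.

```python
import subprocess

APP_PATH = "/Applications/Termius.app"  # placeholder path

def signing_details(app_path: str) -> str:
    """codesign prints signing details (Authority, TeamIdentifier) to stderr."""
    result = subprocess.run(
        ["codesign", "-d", "--verbose=2", app_path],
        capture_output=True, text=True,
    )
    return result.stderr

for line in signing_details(APP_PATH).splitlines():
    if line.startswith(("Authority=", "TeamIdentifier=", "Signature=")):
        print(line)

# Compare the Team ID against the one the vendor publishes. On a
# commercial app, 'Signature=adhoc' or 'TeamIdentifier=not set' is a
# strong warning sign that the bundle has been re-signed.
```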

Those functions include file transfers, command execution, system reconnaissance and process control, with captured output sent back to attacker-controlled domains. The latest campaign used termius.fun and termius.info to host the trojanised packages. Affected users often lack proper endpoint security.

Bank of Korea sounds alarm over unregulated stablecoins

Bank of Korea Governor Lee Chang-yong warned that letting non-banks issue won-based stablecoins could spark economic confusion similar to the 19th-century US Free Banking Era. His remarks follow President Lee Jae Myung’s push to launch domestic stablecoins under his economic agenda.

Governor Lee noted that handing over payment and settlement services to non-banks might disrupt the profit models of traditional banks and conflict with foreign exchange policies. He stressed that stablecoin policy requires coordination across government, as the central bank lacks sole authority.

Meanwhile, President Lee’s support for stablecoins has sparked a flurry of activity among fintech and banking firms, with many filing trademark applications linked to KRW stablecoin symbols. KakaoPay, one of South Korea’s largest payment platforms, has seen its stock surge by more than 120% since Lee’s election.

The BOK recently announced it will pause its central bank digital currency (CBDC) pilot, citing legal uncertainty around the coexistence of CBDCs, stablecoins, and deposit tokens. Governor Lee stated that the trial had considered stablecoin interaction from the beginning and that further action will depend on legislative developments.

AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

Such material is no longer crude or glitch-filled; it now appears so lifelike that, under UK law, it must be treated in the same way as authentic recordings.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly: what once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Space operators face strict cybersecurity obligations under EU plan

The European Commission has unveiled a new draft law introducing cybersecurity requirements for space infrastructure, aiming to protect ground and orbital systems.

Operators must implement rigorous cyber risk management measures, including supply chain oversight, encryption, access control and incident response systems. A notable provision places direct accountability on company boards, which could be held personally liable for failures to comply.

The proposed law builds on existing EU regulations such as NIS 2 and DORA, with additional tailored obligations for the space domain. Non-EU firms will also fall within scope unless their home jurisdictions are recognised as offering equivalent regulatory protections.

Fines of up to 2% of global revenue are foreseen, with member states and the EU’s space agency EUSPA granted inspection and enforcement powers. Industry stakeholders are encouraged to engage with the legislative process and align existing cybersecurity frameworks with the Act’s provisions.

Qantas hacked as airline cyber threats escalate

Qantas Airways has confirmed that personal data belonging to 5.7 million customers was stolen in a recent cyberattack, including names, contact details and meal preferences. The airline stated that no financial data or login credentials were accessed, and frequent flyer accounts remain secure.

An internal investigation found the breach involved varying levels of personal information, with 2.8 million passengers most severely affected. Meal preferences were the least common data stolen, while more than a million customers had addresses or dates of birth exposed.

Qantas has contacted affected passengers and says it is offering support while monitoring the situation with cybersecurity experts. Under pressure to manage the crisis effectively, CEO Vanessa Hudson assured the public that extra security measures had been put in place.

The breach is the latest in a wave of attacks targeting airlines, with the FBI warning that the hacking group Scattered Spider may be responsible. Similar incidents have recently affected carriers in the US and Canada.

M&S still rebuilding after April cyber incident

Marks & Spencer has revealed that the major cyberattack it suffered in April stemmed from a sophisticated impersonation of a third-party user.

The breach began on 17 April and was detected two days later, sparking weeks of disruption and a crisis response effort described as ‘traumatic’ by Chairman Archie Norman.

The retailer estimates the incident will cost it £300 million in operating profit and says it remains in rebuild mode, although customer services are expected to normalise by month-end.

Norman confirmed M&S is working with UK and US authorities, including the National Crime Agency, the National Cyber Security Centre, and the FBI.

While the ransomware group DragonForce has claimed responsibility, Norman declined to comment on whether any ransom was paid. He said such matters were better left to law enforcement and not in the public interest to discuss further.

The company expects to recover some of its losses through insurance, although the process may take up to 18 months. Other UK retailers, including Co-op and Harrods, were also targeted in similar attacks around the same time, reportedly using impersonation tactics to bypass internal security systems.

EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

AI scam targets donors with fake orphan images

Cambodian authorities have warned the public about increasing online scams using AI-generated images to deceive donors. The scams often show fabricated scenes of orphaned children or grieving families, with QR codes attached to collect money.

One Facebook account, ‘Khmer Khmer’, was named in an investigation by the Anti-Cyber Crime Department for spreading false stories and deepfake images to solicit charity donations. These included claims of a wife unable to afford a coffin and false fundraising efforts near the Thai border.

The department confirmed that the realistic AI-generated visuals are designed to manipulate emotions and lure donations. Cambodian officials are continuing their investigations and have promised legal action if evidence of criminal activity is confirmed.

Authorities reminded the public to remain cautious and to only contribute to verified and officially recognised campaigns. While AI’s ability to create realistic content has many uses, it also opens the door to dangerous forms of fraud and misinformation when abused.
