TSMC faces curbs on shipping US tech to China

The United States has revoked Taiwan Semiconductor Manufacturing Company’s licence to ship advanced technology from America to China. The decision follows similar restrictions on South Korean firms Samsung and SK Hynix, increasing uncertainty for chipmakers operating Chinese facilities.

TSMC confirmed that Washington has notified it that its authorisation will expire at the end of the year. The company said it would discuss the matter with the US government and stressed its commitment to keeping its operations in China running without disruption.

The curbs are part of broader US measures to limit China’s access to advanced semiconductors. While they could complicate shipments and force suppliers to seek individual approvals, analysts suggest the direct impact on TSMC will be limited, as its sole Chinese plant in Nanjing makes older-generation chips that contribute only a small share of revenue.

Chinese customers may increasingly turn to domestic chipmakers, even if their technology lags. Such a shift could spur innovation in less performance-critical areas, while global suppliers grapple with higher costs and regulatory hurdles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI framework Hexstrike-AI repurposed by cybercriminals for rapid attacks

Within hours of its public release, the offensive security framework Hexstrike-AI was weaponised by threat actors to exploit zero-day vulnerabilities, most recently in Citrix NetScaler ADC and Gateway, with exploitation achieved within just ten minutes.

Automated agents execute actions such as scanning, exploiting CVEs and deploying webshells, all orchestrated through high-level commands like ‘exploit NetScaler’.

Researchers from Check Point note that attackers are now using Hexstrike-AI to achieve unauthenticated remote code execution automatically.

The AI framework’s design, complete with retry logic and resilience, makes chaining reconnaissance, exploitation and persistence seamless and more effective.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon espionage campaign revealed through global cybersecurity advisory

Intelligence and cybersecurity agencies from 13 countries, including the NSA, CISA, the UK’s NCSC and Canada’s CSIS, have jointly issued an advisory on Salt Typhoon, a Chinese state-sponsored advanced persistent threat group.

The alert highlights global intrusions into telecommunications, military, government, transport and lodging sectors.

Salt Typhoon has exploited known, unpatched vulnerabilities in network-edge appliances, such as routers and firewalls, to gain initial access. Once inside, it covertly embeds malware and employs living-off-the-land tools for persistence and data exfiltration.

The advisory also warns that stolen data from compromised ISPs can help intelligence services track global communications and movements.

It pinpoints three Chinese companies with links to the Ministry of State Security and the People’s Liberation Army as central to Salt Typhoon’s operations.

Defensive guidelines accompany the advisory, urging organisations to apply urgent firmware patches, monitor for abnormal network activity, verify firmware integrity and tighten device configurations, especially for telecom infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts warn of sexual and drug risks to kids from AI chatbots

A new report highlights alarming dangers from AI chatbots on platforms such as Character AI. Researchers acting as 12–15-year-olds logged 669 harmful interactions, from sexual grooming to drug offers and secrecy instructions.

Bots frequently claimed to be real humans, increasing their credibility with vulnerable users.

Sexual exploitation dominated the findings, with nearly 300 cases of adult bots pursuing romantic relationships and simulating sexual activity. Some bots suggested violent acts, staged kidnappings, or drug use.

Experts say the immersive and role-playing nature of these apps amplifies risks, as children struggle to distinguish between fantasy and reality.

Advocacy groups, including ParentsTogether Action and Heat Initiative, are calling for age restrictions, urging platforms to limit access to verified adults. The scrutiny follows a teen suicide linked to Character AI and mounting pressure on tech firms to implement effective safeguards.

OpenAI has announced parental controls for ChatGPT, allowing parents to monitor teen accounts and set age-appropriate rules.

Researchers warn that without stricter safety measures, interactive AI apps may continue exposing children to dangerous content. Calls for adult-only verification, improved filters, and public accountability are growing as the debate over AI’s impact on minors intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers exploit Ethereum smart contracts to spread malware

Cybersecurity researchers have uncovered a new malware-delivery method that hides malicious commands inside Ethereum smart contracts. ReversingLabs identified two compromised packages on NPM, the popular Node Package Manager repository.

The packages, named ‘colortoolsv2’ and ‘mimelib2’, were uploaded in July and used blockchain queries to fetch URLs that delivered downloader malware. The contracts hid command-and-control addresses, letting attackers evade scans by making the malicious traffic look like legitimate blockchain activity.
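The hiding technique can be illustrated with a small, benign sketch. A contract exposes a view function returning a string; the downloader issues an eth_call and ABI-decodes the returned bytes to recover the URL. The payload and URL below are invented for illustration and are not taken from the real packages; a real downloader would fetch the hex data from an RPC node rather than build it locally.

```python
# Hypothetical sketch of recovering a URL stored as a `string` in a smart
# contract. An eth_call for a `function url() returns (string)` yields
# ABI-encoded bytes: a 32-byte offset, a 32-byte length, then the data.

def decode_abi_string(hexdata: str) -> str:
    """ABI-decode a single `string` return value from an eth_call response."""
    raw = bytes.fromhex(hexdata.removeprefix("0x"))
    offset = int.from_bytes(raw[0:32], "big")                 # start of string head
    length = int.from_bytes(raw[offset:offset + 32], "big")   # string byte length
    return raw[offset + 32:offset + 32 + length].decode("utf-8")

# Illustrative payload, encoded the way an RPC node would return it.
payload = (
    "0x"
    + (32).to_bytes(32, "big").hex()                          # offset = 32
    + (30).to_bytes(32, "big").hex()                          # length = 30 bytes
    + b"https://malicious.example/load".ljust(32, b"\x00").hex()
)

print(decode_abi_string(payload))  # https://malicious.example/load
```

Because the lookup is an ordinary read-only contract call, it blends in with routine blockchain traffic, which is what makes the technique hard for scanners to flag.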

Researchers say the approach marks a shift in tactics. While the Lazarus Group has previously leveraged Ethereum smart contracts, the novel element here is using them as hosts for malicious URLs. Analysts warn that open-source repositories face increasingly sophisticated evasion techniques.

The malicious packages formed part of a broader deception campaign involving fake GitHub repositories posing as cryptocurrency trading bots. With fabricated commits, fake user accounts, and professional-looking documentation, attackers built convincing projects to trick developers.

Experts note that similar campaigns have also targeted Solana and Bitcoin-related libraries, signalling a broader trend in evolving threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Free GPU access offered to AI startups in Taiwan

Taiwan’s new Digital Minister Lin Yi-ching has unveiled his policy agenda, putting AI development, cybersecurity and anti-fraud at the forefront.

He pledged to build on the work of his predecessor while accelerating digital government projects.

Lin said the government will support the AI industry through five key tools: computing power, data, talent, marketing and funding.

Taiwanese startups will gain free GPU access, revised regulations will release non-sensitive public data, and a sovereign AI corpus will be developed.

Cybersecurity and fraud prevention are also central. Measures include DNS blocking, government SMS codes, and partnerships with platforms like Google and Line to curb scams. Lin reaffirmed the government’s commitment to the digital certificate wallet.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jaguar Land Rover production disrupted by cyber incident

Jaguar Land Rover (JLR) has confirmed its production and retail operations were ‘severely disrupted’ due to a cyber incident, prompting a precautionary system shutdown.

The company stated there is currently ‘no evidence’ that any customer data has been compromised and assured it is working at pace to restore systems in a controlled manner.

The incident disrupted output at key UK plants, including Halewood and Solihull, led to operational bottlenecks such as halted vehicle registrations, and impacted a peak retail period following the release of ’75’ number plates.

A Telegram group named Scattered Lapsus$ Hunters, a name conflating several known hacking collectives, claimed responsibility, posting what appeared to be internal logs. Cybersecurity experts caution that such claims should be viewed sceptically, as attribution via Telegram may be misleading.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Privacy concerns arise as Google reportedly expands gaming data sharing

Google may roll out a Play Games update on 23 September, adding public profiles, stat tracking, and community features. Reports suggest users may customise profiles, follow others, and import gaming history, while Google could collect gameplay and developer data.

The update is said to track installed games, session lengths, and in-game achievements, with some participating developers potentially accessing additional data. Players can reportedly manage visibility settings, delete profiles, or keep accounts private, with default settings applied unless changed.

The EU and UK are expected to receive the update on 1 October.

Privacy concerns have been highlighted in Europe. Austrian group NOYB filed a complaint against Ubisoft over alleged excessive data collection in games like Far Cry Primal, suggesting that session tracking and frequent online connections may conflict with GDPR.

Ubisoft could face fines of up to four percent of global turnover, based on last year’s revenues.

Observers suggest the update reflects a social and data-driven gaming trend, though European players may seek more explicit consent and transparency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CJEU dismisses bid to annul EU-US data privacy framework

The General Court of the Court of Justice of the European Union (CJEU) has dismissed an action seeking the annulment of the EU–US Data Privacy Framework (DPF). Essentially, the DPF is an agreement between the EU and the USA allowing personal data to be transferred from the EU to US companies without additional data protection safeguards.

Following the agreement, the European Commission conducted further investigations to assess whether it offered adequate safeguards. On 10 July 2023, the Commission adopted an adequacy decision concluding that the USA ensures a sufficient level of protection comparable to that of the EU when transferring data from the EU to the USA, and that there is no need for supplementary data protection measures.

However, on 6 September 2023, Philippe Latombe, a member of the French Parliament, brought an action seeking annulment of the EU–US DPF.

He argued that the framework fails to ensure adequate protection of personal data transferred from the EU to the USA. Latombe also claimed that the Data Protection Review Court (DPRC), which is responsible for reviewing safeguards during such data transfers, lacks impartiality and independence and depends on the executive branch.

Finally, Latombe asserted that ‘the practice of the intelligence agencies of that country of collecting bulk personal data in transit from the European Union, without the prior authorisation of a court or an independent administrative authority, is not circumscribed in a sufficiently clear and precise manner and is, therefore, illegal.’

Nevertheless, the General Court of the EU dismissed the action for annulment, finding that:

  • The DPRC has sufficient safeguards to ensure judicial independence,
  • US intelligence agencies’ bulk data collection practices are compatible with EU fundamental rights, and
  • The adequacy decision preserves the European Commission’s ability to suspend or amend the framework if US legal safeguards change.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Australia diverge on paths to AI regulation

The regulatory approaches to AI in the EU and Australia are diverging significantly, creating a complex challenge for the global tech sector.

Instead of a unified global standard, companies must now navigate the EU’s stringent, risk-based AI Act and Australia’s more tentative, phased-in approach. The disparity underscores the necessity for sophisticated cross-border legal expertise to ensure compliance in different markets.

In the EU, the landmark AI Act is now in force, implementing a strict risk-based framework with severe financial penalties for non-compliance.

Conversely, Australia has yet to pass binding AI-specific laws, opting instead for a proposal paper outlining voluntary safety standards and ten mandatory guardrails for high-risk applications that are currently under consultation.

This creates a markedly different compliance environment for businesses operating in both regions.

For tech companies, the evolving patchwork of international regulations turns AI governance into a strategic differentiator instead of a mere compliance obligation.

Understanding jurisdictional differences, particularly in areas like data governance, human oversight, and transparency, is becoming essential for successful and lawful global operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!