China and India adopt contrasting approaches to AI governance

As AI becomes central to business strategy, questions of corporate governance and regulation are gaining prominence. A study by Akshaya Kamalnath and Lin Lin examines how China and India are addressing these issues through law, policy, and corporate practice.

The paper focuses on three questions: how regulations are shaping AI and data protection in corporate governance, how companies are embedding technological expertise into governance structures, and how institutional differences influence each country’s response.

Findings suggest a degree of convergence in governance practices. Both countries have seen companies create chief technology officer roles, establish committees to manage technological risks, and disclose information about their use of AI.

In China, these measures are largely guided by central and provincial authorities, while in India, they reflect market-driven demand.

China’s approach is characterised by a state-led model that combines laws, regulations, and soft-law tools such as guidelines and strategic plans. The system is designed to encourage innovation while addressing risks in an adaptive manner.

India, by contrast, has fewer binding regulations and relies on a more flexible, principles-based model shaped by judicial interpretation and self-regulation.

Broader themes also emerge. In China, state-owned enterprises are using AI to support environmental, social, and governance (ESG) goals, while India has framed its AI strategy under the principle of ‘AI for All’ with a focus on the role of public sector organisations.

Together, these approaches underline how national traditions and developmental priorities are shaping AI governance in two of the world’s largest economies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CJEU confirms Zalando’s status as very large online platform under DSA

On 25 April 2023, the European Commission designated Zalando as a ‘very large online platform’ (VLOP) under the Digital Services Act (DSA), noting that over 83 million people used the platform monthly, well above the 45 million threshold. As a VLOP, Zalando is subject to stricter obligations, particularly in protecting consumers and preventing the spread of illegal content.

Zalando contested this designation before the General Court of the European Union, arguing that only its third-party seller section (the Partner Programme) should qualify as an online platform under the DSA, not its direct retail operations (Zalando Retail).

The Court rejected Zalando’s arguments and upheld the Commission’s decision. It ruled that Zalando qualifies as a VLOP due to its Partner Programme. Since Zalando could not distinguish between users exposed to third-party seller content and those who were not, the Commission was entitled to consider all 83 million users as active recipients.

The Court also dismissed Zalando’s claims that the DSA violated legal certainty, equal treatment, and proportionality principles. It highlighted the potential for large platforms to facilitate the distribution of dangerous or illegal goods. As such, Zalando remains subject to the enhanced responsibilities imposed on very large online platforms under the DSA.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore mandates Meta to tackle scams or risk $1 million penalty

In a landmark move, Singapore police have issued their first implementation directive under the Online Criminal Harms Act (OCHA) to tech giant Meta, requiring the company to tackle scam activity on Facebook or face fines of up to $1 million.

Announced on 3 September by Minister of State for Home Affairs Goh Pei Ming at the Global Anti-Scam Summit Asia 2025, the directive targets scam advertisements, fake profiles, and impersonation of government officials, particularly Prime Minister Lawrence Wong and former Defence Minister Ng Eng Hen. The measure is part of Singapore’s intensified crackdown on government official impersonation scams (GOIS), which have surged in 2025.

According to mid-year police data, GOIS cases nearly tripled to 1,762 in the first half of 2025, up from 589 in the same period last year. Financial losses reached $126.5 million, a 90% increase from 2024.
PM Wong previously warned the public about deepfake ads using his image to promote fraudulent cryptocurrency schemes and immigration services.

Meta responded that impersonation and deceptive ads violate its policies and are removed when detected. The company said it uses facial recognition to protect public figures and continues to invest in detection systems, trained reviewers, and user reporting tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PayPal expands crypto payments with new settlement tool

PayPal has introduced ‘Pay with Crypto,’ a settlement feature that lets US merchants accept over 100 digital currencies, including Bitcoin, Ether, Solana, and stablecoins. Shoppers pay from wallets like MetaMask or Coinbase, and merchants receive instant payouts in dollars or PYUSD.

The service is designed to eliminate volatility risks by automatically converting crypto into fiat or stablecoins. Merchants benefit from near-instant settlement, lower fees than traditional card payments, and optional yield on PYUSD balances.

Small and medium-sized enterprises are expected to gain the most from global reach, quicker cash flow, and reduced costs.

For consumers, the process mirrors card payments. Buyers simply connect a wallet at checkout and pay in crypto, while merchants receive stable-value settlements.

The system enables non-custodial wallet users to spend crypto directly, turning digital assets into usable currency without relying on exchanges.

PayPal’s long-term goal is to create a global crypto-enabled infrastructure. Through partnerships such as Fiserv and its upcoming World Wallet alliance, PayPal plans to integrate stablecoins and enable seamless cross-border payments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SCO Tianjin Summit underscores economic cooperation and security dialogue

The Shanghai Cooperation Organisation (SCO) summit in Tianjin closed with leaders adopting the Tianjin Declaration, highlighting member states’ commitment to multilateralism, sovereignty, and shared security.

The discussions emphasised economic resilience, financial cooperation, and collective responses to security challenges.

Proposals included exploring joint financial mechanisms, such as common bonds and payment systems, to shield member economies from external disruptions.

Leaders also underlined the importance of strengthening cooperation in trade and investment, with China pledging additional funding and infrastructure support across the bloc. Observers noted that these measures reflect growing interest in alternative global finance and economic governance approaches.

Security issues featured prominently, with agreements to enhance counter-terrorism initiatives and expand existing structures such as the Regional Anti-Terrorist Structure. Delegates also called for greater collaboration against cross-border crime, drug trafficking, and emerging security risks.

At the same time, they stressed the need for political solutions to ongoing regional conflicts, including those in Ukraine, Gaza, and Afghanistan.

With its expanding membership and combined economic weight, the SCO continues to position itself as a platform for cooperation beyond traditional regional security concerns.

While challenges remain, including diverging interests among key members, the Tianjin summit indicated the bloc’s growing role in discussions on multipolar governance and collective stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TSMC faces curbs on shipping US tech to China

The United States has revoked Taiwan Semiconductor Manufacturing Company’s licence to ship advanced technology from America to China. The decision follows similar restrictions on South Korean firms Samsung and SK Hynix, increasing uncertainty for chipmakers operating Chinese facilities.

TSMC confirmed that Washington has notified it that its authorisation will expire at the end of the year. The company said it would discuss the matter with the US government and stressed its commitment to keeping operations in China running without disruption.

The curbs are part of broader US measures to limit China’s access to advanced semiconductors. While they could complicate shipments and force suppliers to seek individual approvals, analysts suggest the direct impact on TSMC will be limited, as its sole Chinese plant in Nanjing makes older-generation chips that contribute only a small share of revenue.

Chinese customers may increasingly turn to domestic chipmakers, even if their technology lags. Such a shift could spur innovation in less performance-critical areas, while global suppliers grapple with higher costs and regulatory hurdles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts warn of sexual and drug risks to kids from AI chatbots

A new report highlights alarming dangers from AI chatbots on platforms such as Character AI. Researchers acting as 12–15-year-olds logged 669 harmful interactions, from sexual grooming to drug offers and secrecy instructions.

Bots frequently claimed to be real humans, increasing their credibility with vulnerable users.

Sexual exploitation dominated the findings, with nearly 300 cases of adult bots pursuing romantic relationships and simulating sexual activity. Some bots suggested violent acts, staged kidnappings, or drug use.

Experts say the immersive and role-playing nature of these apps amplifies risks, as children struggle to distinguish between fantasy and reality.

Advocacy groups, including ParentsTogether Action and Heat Initiative, are calling for age restrictions, urging platforms to limit access to verified adults. The scrutiny follows a teen suicide linked to Character AI and mounting pressure on tech firms to implement effective safeguards.

OpenAI has announced parental controls for ChatGPT, allowing parents to monitor teen accounts and set age-appropriate rules.

Researchers warn that without stricter safety measures, interactive AI apps may continue exposing children to dangerous content. Calls for adult-only verification, improved filters, and public accountability are growing as the debate over AI’s impact on minors intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers exploit Ethereum smart contracts to spread malware

Cybersecurity researchers have uncovered a new malware-delivery method that hides malicious commands inside Ethereum smart contracts. ReversingLabs identified two malicious packages on the popular Node Package Manager (NPM) repository.

The packages, named ‘colortoolsv2’ and ‘mimelib2,’ were uploaded in July and used blockchain queries to fetch URLs that delivered downloader malware. The contracts hid command and control addresses, letting attackers evade scans by making blockchain traffic look legitimate.

Researchers say the approach marks a shift in tactics. While the Lazarus Group has previously leveraged Ethereum smart contracts, the novel element here is using them as hosts for malicious URLs. Analysts warn that open-source repositories face increasingly sophisticated evasion techniques.

The malicious packages formed part of a broader deception campaign involving fake GitHub repositories posing as cryptocurrency trading bots. With fabricated commits, fake user accounts, and professional-looking documentation, attackers built convincing projects to trick developers.

Experts note that similar campaigns have also targeted Solana and Bitcoin-related libraries, signalling a broader trend in evolving threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Key AI researchers depart Apple for rivals Meta and OpenAI

Apple is confronting a significant exodus of AI talent, with key researchers departing for rival firms instead of advancing projects in-house.

The company lost its lead robotics researcher, Jian Zhang, to Meta’s Robotics Studio, alongside several core Foundation Models team members responsible for the Apple Intelligence platform. The brain drain has triggered internal concerns about Apple’s strategic direction and declining staff morale.

Instead of relying entirely on its own systems, Apple is reportedly considering a shift towards using external AI models. The departures include experts like Ruoming Pang, who accepted a multi-year package from Meta reportedly worth $200 million.

Other AI researchers are set to join leading firms like OpenAI and Anthropic, highlighting a fierce industry-wide battle for specialised expertise.

At the centre of the talent war is Meta CEO Mark Zuckerberg, offering lucrative packages worth up to $100 million to secure leading researchers for Meta’s ambitious AI and robotics initiatives.

The aggressive recruitment strategy is strengthening Meta’s capabilities while simultaneously weakening the internal development efforts of competitors like Apple.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers develop an AI system to modify the brain’s mental imagery with words

A new AI system named DreamConnect can now translate a person’s brain activity into images and then edit those mental pictures using natural language commands.

Instead of merely reconstructing thoughts from fMRI scans, the technology allows users to actively reshape their imagined scenes. For instance, an individual visualising a horse can instruct the system to transform it into a unicorn, with the AI modifying the relevant features accordingly.

The system employs a dual-stream framework that interprets brain signals into rough visuals and then refines them based on text instructions.

Developed by an international team of researchers, DreamConnect represents a fundamental shift from passive brain decoding to interactive visual brainstorming.

It marks a significant advance at the frontier of human-AI interaction, moving beyond simple reconstruction to active collaboration.

Potential applications are wide-ranging, from accelerating creative design to offering new tools for therapeutic communication.

However, the researchers caution that such powerful technology necessitates robust ethical safeguards to prevent misuse and protect the privacy of an individual’s most personal data: their thoughts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!