China warns over biometric data risks linked to crypto schemes

China’s Ministry of State Security has warned of foreign attempts to collect sensitive biometric data through crypto schemes, saying foreign agents are illegally harvesting iris scans and facial data and putting personal privacy and national security at risk.

The advisory noted recent cases in which foreign intelligence services exploited biometric technologies to spy on individuals within China. Cryptocurrency rewards were used to incentivise people worldwide to submit iris scans, which were then sent overseas.

Although no specific companies were named, the description resembled the approach of the crypto firm World, formerly known as Worldcoin.

Biometric identification methods have proliferated across many sectors due to their accuracy and convenience. However, the ministry stressed the vulnerability of such systems to data breaches and misuse.

Iris patterns are unique and difficult to replicate, which makes them especially valuable to malicious actors.

Citizens are urged to remain cautious, carefully review privacy policies, and question how their biometric information is handled.

EU proposal to scan private messages gains support

The European Union’s ‘Chat Control’ proposal is gaining traction, with 19 member states now supporting a plan to scan all private messages on encrypted apps. If adopted, the plan would require apps such as WhatsApp, Signal, and Telegram to scan all messages, photos, and videos on users’ devices before encryption, potentially starting as early as October.

France, Denmark, Belgium, Hungary, Sweden, Italy, and Spain back the measure, while Germany has yet to decide. The proposal could pass by mid-October under the EU’s qualified majority voting system if Germany joins.

The initiative aims to prevent child sexual abuse material (CSAM) but has sparked concerns over mass surveillance and the erosion of digital privacy.

In addition to scanning, the proposal would introduce mandatory age verification, which could remove anonymity on messaging platforms. Critics argue the plan amounts to real-time surveillance of private conversations and threatens fundamental freedoms.

Telegram founder Pavel Durov recently warned that censorship and regulatory pressure risk pushing France towards societal collapse. He disclosed requests from French officials to censor political content on his platform, which he refused to comply with.

WhatsApp shuts down 6.8 million scam accounts

As part of its anti-scam efforts, WhatsApp has removed 6.8 million accounts linked to fraudulent activity, according to its parent company, Meta.

The crackdown follows the discovery that organised criminal groups are operating scam centres across Southeast Asia, hacking WhatsApp accounts or adding users to group chats to lure victims into fake investment schemes and other types of fraud.

In one case, WhatsApp, Meta, and OpenAI collaborated to disrupt a Cambodian cybercrime group that used ChatGPT to generate fake instructions for a rent-a-scooter pyramid scheme.

Victims were enticed with offers of cash for social media engagement before being moved to private chats and pressured to make upfront payments via cryptocurrency platforms.

Meta warned that these scams often stem from well-organised networks in Southeast Asia, some exploiting forced labour. Authorities continue to urge the public to remain vigilant, enable features such as WhatsApp’s two-step verification, and be wary of suspicious or unsolicited messages.

These scams have also drawn political attention in the United States, where Senator Maggie Hassan has urged SpaceX CEO Elon Musk to act against transnational criminal groups in Southeast Asia that use Starlink satellite internet to run large-scale online fraud schemes targeting Americans.

Despite SpaceX’s policies allowing service termination for fraud, Starlink remains active in regions where these scams, often linked to forced labour and human trafficking, operate.

Chinese nationals accused of bypassing US export controls on AI chips

Two Chinese nationals have been charged in the US with illegally exporting millions of dollars’ worth of advanced Nvidia AI chips to China, in violation of US export controls.

The Department of Justice (DOJ) said Chuan Geng and Shiwei Yang operated California-based ALX Solutions, which allegedly shipped restricted hardware without the required licences over the past three years.

The DOJ claims that the company exported Nvidia’s H100 and GeForce RTX 4090 graphics processing units to China via transit hubs in Singapore and Malaysia, concealing their ultimate destination.

Payments for the shipments allegedly came from firms in Hong Kong and mainland China, including a $1 million transfer in January 2024.

Court documents state that ALX falsely declared the shipments as destined for Singapore-based customers, but US export control officers could not confirm the deliveries.

One 2023 invoice for over $28 million reportedly misrepresented the buyer’s identity. Neither Geng nor Yang had sought export licences from the US Commerce Department.

Yang was arrested on Saturday, and Geng surrendered soon after. Both appeared in a Los Angeles federal court on Monday and could face up to 20 years in prison if convicted.

Nvidia and Super Micro, a supplier, said they comply with all export regulations and will cooperate with authorities.

Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

In one widely cited case, an AI-powered chatbot suggested drug use to a fictional recovering addict, which experts say is a warning of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.

X challenges India’s expanded social media censorship in court

Tensions have escalated between Elon Musk’s social media platform, X, and the Indian government over extensive online content censorship measures.

The dispute was triggered by a seemingly harmless post describing a senior politician as ‘useless’ and quickly spiralled into a significant legal confrontation.

X has accused Prime Minister Narendra Modi’s administration of overstepping constitutional bounds by empowering numerous government bodies to issue content-removal orders, significantly expanding the scope of India’s digital censorship.

At the heart of the dispute lies India’s increased social media content regulation since 2023, including launching the Sahyog platform, a centralised portal facilitating direct content-removal orders from officials to tech firms.

X refused to participate in Sahyog, labelling it a ‘censorship portal’, and earlier this year filed a lawsuit in the Karnataka High Court contesting the legality of India’s directives and of the portal itself, which it claims undermine free speech.

Indian authorities justify their intensified oversight by pointing to the need to control misinformation, safeguard national security, and prevent societal discord. They argue that the measures have broad support within the tech community. Indeed, major players like Google and Meta have reportedly complied without public protest, though both companies have declined to comment on their stance.

However, the court documents reveal that the scope of India’s censorship requests extends far beyond misinformation.

Authorities have reportedly targeted satirical cartoons depicting politicians unfavourably, criticism of government preparedness for natural disasters, and even media coverage of serious public incidents such as a deadly stampede at a railway station.

While Musk and Prime Minister Modi maintain an outwardly amicable relationship, the conflict presents significant implications for X’s operations in India, one of its largest user bases.

Musk, a self-proclaimed free speech advocate, finds himself at a critical juncture, navigating between principles and the imperative to expand his business ventures within India’s substantial market.

Source: Reuters

Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell automated crawlers which pages they may or may not access.
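For readers unfamiliar with the mechanism, robots.txt is a plain-text file in which a site lists the crawlers and paths it permits. The sketch below is a minimal illustration using Python’s standard urllib.robotparser, with a hypothetical bot name and example URLs; it shows the check a rules-abiding crawler performs before fetching a page, which is the kind of restriction Cloudflare alleges Perplexity’s undeclared crawlers sidestepped.

```python
# Minimal sketch of a compliant crawler's robots.txt check.
# "ExampleBot" and the URLs are illustrative, not Perplexity's actual identifiers.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt rules

user_agent = "ExampleBot"
url = "https://example.com/articles/latest"

if parser.can_fetch(user_agent, url):
    print(f"{user_agent} may fetch {url}")
else:
    # A compliant bot stops here; Cloudflare alleges Perplexity instead
    # changed its user agent and network identifiers to evade such rules.
    print(f"{user_agent} is disallowed from fetching {url}")
```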

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Musk’s robotaxi ambitions threatened as Tesla faces a $243 million Autopilot verdict

A recent court verdict has required Tesla to pay approximately $243 million in damages following a 2019 fatal crash involving an Autopilot-equipped Model S.

The Florida jury found Tesla’s driver-assistance software defective, a claim the company intends to appeal, asserting that the driver was solely responsible for the incident.

The ruling may significantly impact Tesla’s ambitions to expand its emerging robotaxi network in the US, fuelling heightened scrutiny over the safety of the company’s autonomous technology from both regulators and the public.

The timing of the legal setback is critical: Tesla is seeking regulatory approval for its robotaxi services, which are crucial to its market valuation and to its efforts to withstand global competition, all while the company faces backlash over CEO Elon Musk’s political views.

Additionally, Tesla recently awarded Musk a new compensation package worth approximately $29 billion in stock options, signalling its continued reliance on his leadership at a critical juncture as it plans a transition from a struggling auto business towards futuristic ventures such as robotaxis and humanoid robots.

Tesla’s approach to autonomous driving relies on cameras and AI instead of more expensive sensors such as lidar and radar used by competitors, and the company has begun a limited robotaxi trial in Texas. However, its aggressive expansion plans for the service contrast starkly with the cautious rollouts of companies such as Waymo, which runs the only commercial driverless robotaxi system in the US.

The jury’s decision also complicates Tesla’s interactions with state regulators, as the company awaits approvals in multiple states, including California and Florida. While Nevada has engaged with Tesla on its robotaxi programme, Arizona remains undecided.

The ruling challenges Tesla’s safety narrative, especially since the case involved a distracted driver whose vehicle ran a stop sign and collided with a parked car, yet the Autopilot system was still found partly to blame.

Source: Reuters

The US launches $100 million cybersecurity grant for states

The US government has unveiled more than $100 million in funding to help state, local, and tribal governments strengthen their cybersecurity defences.

The announcement came jointly from the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA), both part of the Department of Homeland Security.

Instead of a single pool, the funding is split into two distinct grants. The State and Local Cybersecurity Grant Program (SLCGP) will provide $91.7 million to 56 states and territories, while the Tribal Cybersecurity Grant Program (TCGP) allocates $12.1 million specifically for tribal governments.

These funds aim to support cybersecurity planning, exercises and service improvements.

CISA’s acting director, Madhu Gottumukkala, said the grants ensure communities have the tools needed to defend digital infrastructure and reduce cyber risks. The effort follows a significant cyberattack on St. Paul, Minnesota, which prompted a state of emergency and deployment of the National Guard.

Officials say the funding reflects a national commitment to proactive digital resilience instead of reactive crisis management. Homeland Security leaders describe the grant as both a strategic investment in critical infrastructure and a responsible use of taxpayer funds.

The US considers chip tracking to prevent smuggling to China

The US is exploring how to build better location-tracking into advanced chips, as part of an effort to prevent American semiconductors from ending up in China.

Michael Kratsios, a senior official behind Donald Trump’s AI strategy, confirmed that software or physical updates to chips are being considered to support traceability.

Instead of relying on external enforcement, Washington aims to work directly with the tech industry to improve monitoring of chip movements. The strategy forms part of a broader national plan to counter smuggling and maintain US dominance in cutting-edge technologies.

Beijing recently summoned Nvidia representatives to address concerns over American proposals linked to tracking features and perceived security risks in the company’s H20 chips.

Although US officials have not held direct talks with Nvidia or AMD on the matter, Kratsios clarified that chip tracking is now a formal objective.

The move comes even as Trump’s team signals readiness to lift certain restrictions on exports to China in return for trade benefits, such as rare-earth magnet sales to the US.

Kratsios criticised China’s push to lead global AI regulation, saying countries should define their paths instead of following a centralised model. He argued that the US innovation-first approach offers a more attractive alternative.
