Chinese nationals accused of bypassing US export controls on AI chips

Two Chinese nationals have been charged in the US with illegally exporting millions of dollars’ worth of advanced Nvidia AI chips to China, in violation of US export controls.

The Department of Justice (DOJ) said Chuan Geng and Shiwei Yang operated California-based ALX Solutions, which allegedly shipped restricted hardware without the required licences over the past three years.

The DOJ claims that the company exported Nvidia’s H100 and GeForce RTX 4090 graphics processing units to China via transit hubs in Singapore and Malaysia, concealing their ultimate destination.

Payments for the shipments allegedly came from firms in Hong Kong and mainland China, including a $1 million transfer in January 2024.

Court documents state that ALX declared its shipments as destined for Singapore-based customers, but US export control officers could not confirm the deliveries.

One 2023 invoice for over $28 million reportedly misrepresented the buyer’s identity. Neither Geng nor Yang had sought export licences from the US Commerce Department.

Yang was arrested on Saturday, and Geng surrendered soon after. Both appeared in a Los Angeles federal court on Monday and could face up to 20 years in prison if convicted.

Nvidia and Super Micro, a supplier, said they comply with all export regulations and will cooperate with authorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Korea’s LG CNS wins first overseas AI data centre deal in Indonesia

LG CNS has secured a 100 billion won ($72 million) contract to build an AI data centre in Jakarta, the first overseas deal of its kind for a Korean firm. The centre is expected to be completed by 2026 and will house over 100,000 servers.

The deal was signed through LG Sinar Mas Technology Solutions, a joint venture between Sinar Mas Group of Indonesia and LG of South Korea. Local partner KMG, backed by Korea Investment Real Asset Management, is leading the project to create Indonesia’s largest hyperscale AI data centre.

The 11-storey facility will launch with a power capacity of 30 megawatts, with plans to expand to 220 megawatts in future phases. LG CNS will manage key infrastructure, including electricity, cooling, and telecoms systems, using technologies across the LG Group.

Safety has been a key selling point. The centre will utilise seismic isolation systems to safeguard equipment in earthquake-prone Southeast Asia. Redundant power systems will also ensure continuous operation even during outages.

Southeast Asia is emerging as a cost-effective hub for AI among global technology giants. LG CNS plans to leverage the Jakarta project as a launchpad for expanding into Singapore, Malaysia, and other international markets.


UK GP surgery praised for using AI to boost efficiency and patient care

UK Health Minister Karin Smyth praised St George’s Surgery in Weston-super-Mare for utilising AI to enhance efficiency. Serving nearly 14,000 patients, the surgery uses AI to automate note-taking and letter drafting, reducing administrative burdens on staff.

It has been reported that in June 2025, St George’s Surgery handled over 9,000 appointments, more than half of them booked and held on the same day. As part of the UK’s 10-Year Health Plan, the government aims to expand AI adoption in healthcare, potentially freeing up capacity equivalent to over 2,000 full-time GPs.

Andy Carpenter, Digital Director at Mendip Vale Medical Group, highlighted that AI is helping to manage growing patient demand, increase face-to-face time with GPs, and maintain strong data protection standards. Smyth also stressed the need for safe, well-regulated AI in healthcare, noting practical uses such as remote monitoring of vaccine fridge temperatures.


OpenAI targets $500 billion valuation ahead of potential IPO

OpenAI is in early discussions over a share sale that could value the company at around $500 billion, according to a source familiar with the talks.

The transaction would occur before a possible IPO and let current and former employees sell several billion dollars’ worth of shares.

The valuation marks a steep rise from the $300 billion figure attached to its most recent funding round earlier in the year. Backed by Microsoft, OpenAI has seen rapid growth in users and revenue, with ChatGPT attracting about 700 million weekly active users, up from 400 million in February.

Revenue doubled in the first seven months of the year, reaching an annualised run rate of $12 billion, and is on track for $20 billion by year-end.

The potential sale comes as competition for AI talent intensifies.

Meta has invested billions in Scale AI to lure its chief executive, Alexandr Wang, to head its superintelligence unit. At the same time, firms such as ByteDance and Databricks have used private share sales to update valuations and reward staff.

Thrive Capital and other existing OpenAI investors are discussing joining the deal.

OpenAI is also preparing a major corporate restructuring that could replace its capped-profit model and clear the way for an eventual public listing.

However, Chief Financial Officer Sarah Friar said any IPO would only happen when the company and the markets are ready.


Google DeepMind launches Genie 3 to create interactive 3D worlds from text

Google DeepMind has introduced Genie 3, an AI world model capable of generating explorable 3D environments in real time from a simple text prompt.

Unlike earlier versions, it supports several minutes of continuous interaction, basic visual memory, and real-time changes such as altering weather or adding characters.

The system allows users to navigate these spaces at 24 frames per second in 720p resolution, retaining object placement for about a minute.

Users can trigger events within the virtual world by typing new instructions, making Genie 3 suitable for applications ranging from education and training to video games and robotics.

Genie 3’s improvements over Genie 2 include frame-by-frame generation with memory tracking and dynamic scene creation without relying on pre-built 3D assets.

However, the AI model still has limits, including the inability to replicate real-world locations with geographic accuracy and restricted interaction capabilities. Multi-agent features are still in development.

Currently offered as a limited research preview to select academics and creators, Genie 3 will be made more widely available over time.

Google DeepMind has noted that safety and responsibility remain central concerns during the gradual rollout.


Malaysia tackles online scams with AI and new cyber guidelines

Cybercrime involving financial scams continues to rise in Malaysia, with 35,368 cases reported in 2024, a 2.53 per cent increase from the previous year, resulting in losses of RM1.58 billion.

The situation remains severe in 2025, with over 12,000 online scam cases recorded in the first quarter alone, involving fake e-commerce offers, bogus loans, and non-existent investment platforms. Losses during this period reached RM573.7 million.

Instead of waiting for the situation to worsen, the Digital Ministry is rolling out proactive safeguards. These include new AI-related guidelines under development by the Department of Personal Data Protection, scheduled for release by March 2026.

The guidelines will cover data protection impact assessments, automated decision-making, and privacy-by-design principles.

The ministry has also introduced an official framework for responsible AI use in the public sector, called GPAISA, to ensure ethical compliance and support across government agencies.

Additionally, training initiatives such as AI Untuk Rakyat and MD Workforce aim to equip civil servants and enforcement teams with skills to handle AI and cyber threats.

In partnership with CyberSecurity Malaysia and Universiti Kebangsaan Malaysia, the ministry is also creating an AI-powered application to verify digital images and videos.

Instead of relying solely on manual analysis, the tool will help investigators detect online fraud, identity forgery, and synthetic media more effectively.


Law curbs AI use in mental health services across US state

A new law in a US state has banned the use of AI for delivering mental health care, drawing a firm line between digital tools and licensed professionals. The legislation limits AI systems to administrative tasks such as note-taking and scheduling, explicitly prohibiting them from offering therapy or clinical advice.

The move comes as concerns grow over the use of AI chatbots in sensitive care roles. Lawmakers in the midwestern state of Illinois approved the measure, citing the need to protect residents from potentially harmful or misleading AI-generated responses.

Fines of up to $10,000 may be imposed on companies or individuals who violate the ban. Officials stressed that AI lacks the empathy, accountability and clinical oversight necessary to ensure safe and ethical mental health treatment.

In one infamous case, an AI-powered chatbot suggested drug use to a fictional recovering addict, which experts say is a warning sign of what can go wrong without strict safeguards. The law is named the Wellness and Oversight for Psychological Resources Act.

Other parts of the United States are considering similar steps. Florida’s governor recently described AI as ‘the biggest issue’ facing modern society and pledged new state-level regulations within months.


Microsoft offers $5 million for cloud and AI vulnerabilities

Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.

Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365, and other significant services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.

Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus, with the promise of collaboration with product teams and the Microsoft Security Response Center (MSRC).

Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.

The competition aligns with Microsoft’s Secure Future Initiative, which aims to make cloud and AI systems secure by design, secure by default, and secure in operations. Vulnerabilities will be disclosed transparently, even if no customer action is needed.

Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.


Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, the plain-text files that tell crawlers which pages they may or may not access.
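Robots.txt is an honour system: a site publishes the file, and well-behaved crawlers fetch it and obey its rules voluntarily, with nothing technically enforcing compliance. A minimal sketch of how such a check works, using Python’s standard library (the bot names and rules below are illustrative, not taken from Cloudflare’s report):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: block one named crawler, allow everything else.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant crawler performs this check before fetching a page;
# ignoring or never running it is exactly the behaviour at issue here.
print(parser.can_fetch("ExampleAIBot", "/article/123"))  # False
print(parser.can_fetch("SomeOtherBot", "/article/123"))  # True
```

Because the check happens entirely on the crawler’s side, a bot that changes its user-agent string to mimic a browser simply matches the permissive `*` rule instead of its own block, which is the kind of evasion Cloudflare describes.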

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.


The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. A single prompt can elicit both therapeutic-sounding advice and best-friend familiarity, a murky blend with no professional safeguards.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.
