EU and Australia diverge on paths to AI regulation

The regulatory approaches to AI in the EU and Australia are diverging significantly, creating a complex challenge for the global tech sector.

Instead of a unified global standard, companies must now navigate the EU’s stringent, risk-based AI Act and Australia’s more tentative, phased-in approach. The disparity underscores the necessity for sophisticated cross-border legal expertise to ensure compliance in different markets.

In the EU, the landmark AI Act is now in force, implementing a strict risk-based framework with severe financial penalties for non-compliance.

Conversely, Australia has yet to pass binding AI-specific laws, opting instead for a proposal paper that outlines voluntary safety standards and 10 mandatory guardrails for high-risk applications, both currently under consultation.

This creates markedly different compliance environments for businesses operating in both regions.

For tech companies, the evolving patchwork of international regulations turns AI governance into a strategic differentiator instead of a mere compliance obligation.

Understanding jurisdictional differences, particularly in areas like data governance, human oversight, and transparency, is becoming essential for successful and lawful global operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google avoids forced breakup in search monopoly trial

A United States federal judge has ruled against a forced breakup of Google’s search business, instead opting for a series of behavioural remedies to curb anticompetitive conduct.

The ruling, from US District Court Judge Amit P. Mehta, bars Google from entering or maintaining exclusive deals that tie the distribution of its search products, such as Search, Chrome, and Gemini, to other apps or revenue agreements.

The tech giant will also have to share specific search data with rivals and offer search and search ad syndication services to competitors at standard rates.

The ruling comes a year after Judge Mehta found that Google had illegally maintained its monopoly in online search. The Department of Justice brought the case and pushed for stronger measures, including forcing Google to sell off its Chrome browser and Android operating system.

It also sought to end Google’s lucrative agreements with companies like Apple and Samsung, in which it pays billions to be the default search engine on their devices. The judge acknowledged during the trial that these default placements were ‘extremely valuable real estate’ that effectively locked out rivals.

A final judgement has not yet been issued, as Judge Mehta has given Google and the Department of Justice until 10 September to submit a revised plan. A technical committee will be established to help enforce the judgement, which will go into effect 60 days after entry and last for six years.

Experts say the ruling may influence a separate antitrust trial against Google’s advertising technology business, and that the search case itself is likely to face a lengthy appeals process, stretching into 2028.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers exploited flaws in WhatsApp and Apple devices, company says

WhatsApp has disclosed a hacking attempt that combined flaws in its app with a vulnerability in Apple’s operating system. The company has since fixed the issues.

The exploit chain, tracked as CVE-2025-55177 in WhatsApp and CVE-2025-43300 in iOS, allowed attackers to hijack devices via malicious links. Fewer than 200 users worldwide are believed to have been affected.

Amnesty International reported that some victims appeared to be members of civic organisations. Its Security Lab is collecting forensic data and warned that both iPhone and Android users were affected.

WhatsApp credited its security team for identifying the flaws, describing the operation as highly advanced but narrowly targeted. The company also suggested that other apps could have been hit in the same campaign.

The disclosure highlights ongoing risks to secure messaging platforms, even those with end-to-end encryption. Experts stress that keeping apps and operating systems up to date remains essential to reducing exposure to sophisticated exploits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US regulators offer clarity on spot crypto products

The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify the rules for spot cryptocurrency trading. Regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.

The guidance follows recommendations from the President’s Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.

Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.

Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.

The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.

In July, the House of Representatives passed the CLARITY Act, a bill on crypto market structure now before the Senate. Together with the regulators’ statement, it marks a key step in aligning US digital assets with established financial rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google dismisses false breach rumours as Gmail security concerns grow

Google has dismissed reports that Gmail suffered a massive breach, saying rumours that it had warned 2.5 billion users were false.

In a Monday blog post, Google rejected claims that it had issued global notifications about a serious Gmail security issue. It stressed that its protections remain effective against phishing and malware.

Confusion stems from a June incident involving a Salesforce server, during which attackers briefly accessed public business information, including names and contact details. Google said all affected parties were notified by early August.

The company acknowledged that phishing attempts are increasing, but clarified that Gmail’s defences block more than 99.9% of such attempts. A July blog post on phishing risks may have been misinterpreted as evidence of a breach.

Google urged users to remain vigilant, recommending password alternatives such as passkeys, along with regular account reviews. While the false alarm spurred unnecessary panic, security experts noted that updating credentials remains good practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
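As a rough illustration of how explicit and implicit markings differ in practice, the sketch below adds a visible caption and an embedded metadata tag to an image in Python using the Pillow library. The label text and metadata keys are placeholder assumptions for illustration, not the fields defined by the Chinese standard.

```python
# Illustrative sketch only: the label text and metadata keys below are
# placeholders, not the fields mandated by the Chinese regulation.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def mark_ai_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")

    # Explicit marking: a visible label drawn onto the image itself.
    ImageDraw.Draw(img).text((10, img.height - 20), "AI-generated", fill="white")

    # Implicit marking: machine-readable metadata embedded in the file.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")            # hypothetical key
    meta.add_text("generator", "example-model-v1")   # hypothetical value

    img.save(dst_path, "PNG", pnginfo=meta)

# Example usage (assumes picture.png exists):
# mark_ai_image("picture.png", "picture_marked.png")
```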

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ESMA highlights risks of tokenised equity products

A top European regulator has warned that tokenised stocks could mislead investors and undermine confidence in financial markets. Natasha Cazenave of ESMA said many tokenised stocks lack shareholder rights such as voting or dividends.

Unlike traditional equities, tokenised stocks are typically issued through intermediaries and merely track share prices. Cazenave cautioned that retail investors may wrongly believe they own the underlying company shares.

Her warning follows the expansion of tokenised stock services on platforms like Robinhood and Kraken.

The World Federation of Exchanges recently echoed these concerns, urging regulators to strengthen oversight. The group warned that, without intervention, tokenised products could threaten market integrity and heighten risks for investors.

Although advocates say tokenisation could cut costs and widen access, Cazenave noted most projects remain small, illiquid, and far from delivering promised efficiency. Regulators, she added, remain focused on balancing innovation with investor protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
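This two-track policy can be pictured as simple routing logic. The Python sketch below is a minimal illustration of the flow described in the post; the category names, return values, and moderator check are assumptions for illustration, not OpenAI’s actual implementation or API.

```python
# Illustrative sketch of the escalation flow described above. All names
# and categories are assumptions; this is not OpenAI's real system.

def route_conversation(classification: str,
                       moderator_confirms_imminent_risk: bool = False) -> str:
    """Route a flagged conversation according to the policy the post outlines."""
    if classification == "self_harm":
        # Suicidal intent is met with professional resources,
        # not a referral to law enforcement.
        return "show_crisis_resources"
    if classification == "harm_to_others":
        # Threats to others go to trained human moderators first.
        if moderator_confirms_imminent_risk:
            # Only an imminent risk may trigger an alert to authorities
            # and account suspension.
            return "alert_authorities_and_suspend_account"
        return "escalate_to_human_moderators"
    return "no_action"

print(route_conversation("self_harm"))                  # show_crisis_resources
print(route_conversation("harm_to_others", True))       # alert_authorities_and_suspend_account
```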

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI is working to strengthen consistency across interactions and developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba shares soar on AI and cloud growth

Alibaba’s Hong Kong shares rose more than 15%, their biggest single-day gain since early 2023, following strong AI revenue growth. AI-related sales surged by triple digits, and the cloud division grew 26% to 33.4 billion yuan ($4.7 billion), exceeding expectations.

The results underline Alibaba’s transformation from a retail-heavy company into a diversified technology player. Analysts say AI is now a central growth driver, with cloud and AI offerings boosting investor confidence despite price war pressures from JD.com and Meituan.

Alibaba is investing in AI hardware and developing proprietary chips to reduce reliance on foreign semiconductors. The strategy aims to build faster, cheaper, and more secure AI systems for domestic and international markets, including Lazada and AliExpress.

Experts view this calculated self-reliance, combined with strong cloud and AI services, as a long-term growth driver.

While retail rivals continue to struggle with profit pressure, Alibaba’s leadership has emphasised AI as a core strategic focus.

CEO Eddie Wu highlighted the company’s ambitions in artificial general intelligence, with analysts noting that AI could shield Alibaba from price wars and support growth across multiple business areas.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!