Retailers face new pressure under California privacy law

California has entered a new era of privacy and AI enforcement after the state’s privacy regulator fined Tractor Supply USD 1.35 million for failing to honour opt-outs and ignoring Global Privacy Control signals. The case marks the largest penalty yet from the California Privacy Protection Agency.

The case signals a widening focus on how companies manage consumer data, verification processes and third-party vendors. Regulators are now demanding that privacy signals be enforced at the technology layer, not just displayed through website banners or webforms.
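
For context, the Global Privacy Control signal at the centre of the case is transmitted as a plain HTTP header (Sec-GPC: 1), so honouring it ‘at the technology layer’ means reading that header and suppressing tracking before any data is sold or shared. The sketch below is a minimal, hypothetical illustration using a Node.js/Express-style middleware; it is not drawn from the ruling or from any retailer’s actual stack.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// The GPC specification transmits the opt-out signal as the "Sec-GPC: 1" header.
function honourGpc(req: Request, res: Response, next: NextFunction) {
  const gpcEnabled = req.header("Sec-GPC") === "1";
  // Record the opt-out so downstream handlers can suppress third-party
  // cookies, pixels and any sale or sharing of this visitor's data.
  res.locals.optedOutOfSale = gpcEnabled;
  next();
}

app.use(honourGpc);

app.get("/", (_req, res) => {
  // Render tracking tags only when the visitor has not signalled an opt-out.
  const trackingTags = res.locals.optedOutOfSale ? "" : "<!-- analytics/ad tags -->";
  res.send(`<html><body>Storefront${trackingTags}</body></html>`);
});

app.listen(3000);
```

The same signal is also exposed to client-side scripts as navigator.globalPrivacyControl, which is how consent-management tools typically read it in the browser.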

Retailers must now show active, auditable compliance, with clear privacy notices, automated data controls and stronger vendor agreements. Regulators have also warned that businesses will be held responsible for partner failures and poor oversight of cookies and tracking tools.

At the same time, California’s new AI law, SB 53, extends governance obligations to frontier AI developers, requiring transparency around safety benchmarks and misuse prevention. The measure connects AI accountability to broader data governance, reinforcing that privacy and AI oversight are now inseparable.

Executives across retail and technology are being urged to embed compliance and governance into daily operations. California’s regulators are shifting from punishing visible lapses to demanding continuous, verifiable proof of compliance across both data and AI systems.

PayPay and Binance Japan unite to advance digital finance

PayPay, Japan’s top cashless payment firm and a SoftBank company, has acquired 40% of Binance Japan to unite traditional finance with blockchain innovation. The partnership merges PayPay’s 70 million users and trusted network with Binance’s digital asset expertise and global Web3 leadership.

Under the new alliance, Binance Japan users will soon be able to purchase cryptocurrencies using PayPay Money and withdraw funds directly into their PayPay wallets. The integration seeks to simplify digital trading and connect cashless payments with decentralised finance.

Executives from both companies highlighted the significance of the collaboration. PayPay’s Masayoshi Yanase said the deal supports Japan’s financial growth, while Binance Japan’s Takeshi Chino called it a milestone for everyday Web3 adoption.

The alliance is expected to accelerate the development of Japan’s digital finance landscape, strengthening the country’s role as one of the world’s most advanced economies in financial technology. By combining secure payments with blockchain innovation, PayPay and Binance Japan aim to build a seamless digital economy.

Ant Group launches trillion-parameter AI model Ling-1T

Ant Group has unveiled its Ling AI model family, introducing Ling-1T, a trillion-parameter large language model that has been open-sourced for public use.

The Ling family now includes three main series: the Ling non-thinking models, the Ring thinking models, and the multimodal Ming models.

Ling-1T delivers state-of-the-art performance in code generation, mathematical reasoning, and logical problem-solving, achieving 70.42% accuracy on the 2025 AIME benchmark.

The model combines efficient inference with strong reasoning capabilities, marking a major advance in AI development for complex cognitive tasks.

The company’s Chief Technology Officer, He Zhengyu, said that Ant Group views AGI as a public good that should benefit society.

The release of Ling-1T and the earlier Ring-1T-preview underscores Ant Group’s commitment to open, collaborative AI innovation and the development of inclusive AGI technologies.

Beijing tightens grip on rare earth exports

China has announced new restrictions on rare earth and permanent magnet exports, significantly escalating its control over critical materials essential for advanced technologies and defence production. The move, revealed ahead of President Donald Trump’s expected meeting with President Xi Jinping, introduces Beijing’s strictest export controls yet.

For the first time, Beijing will require foreign companies to obtain approval to export magnets that contain even minimal Chinese-sourced materials or were made with Chinese technology, effectively extending its influence across the global supply chain.

The restrictions could have profound implications for the US defence and semiconductor industries. Rare earth elements are indispensable for producing fighter jets, submarines, missiles, and other advanced systems.

Beginning 1 December 2025, any company tied to foreign militaries, particularly the US military, will likely be denied export licences, while applications for high-tech uses, such as next-generation semiconductors, will face case-by-case reviews. These measures grant Chinese authorities broad discretion to delay or deny exports, tightening their strategic control at a time when Washington already struggles to boost domestic production.

Beijing’s announcement also limits Chinese nationals from participating in overseas rare earth projects without government authorisation, aiming to block the transfer of technical know-how abroad. Analysts suggest the move serves both as a negotiation tactic ahead of renewed trade talks and as a continuation of China’s long-term strategy to weaponise its dominance in the rare earth sector, which supplies over 90% of the world’s magnet manufacturing.

Meanwhile, the US is racing to build resilience. Noveon Magnetics and Lynas Rare Earths are partnering to establish a domestic magnet supply chain, while the Department of War has invested heavily in MP Materials to expand rare earth mining and processing capacity.

Yet experts warn that developing these capabilities will take years, leaving China with significant leverage over global supply chains critical to US national security.

Microsoft boosts AI leadership with NVIDIA GB300 NVL72 supercomputer

Microsoft Azure has launched the world’s first NVIDIA GB300 NVL72 supercomputing cluster, purpose-built for OpenAI’s large-scale AI workloads.

The new NDv6 GB300 VM series integrates over 4,600 NVIDIA Blackwell Ultra GPUs, representing a significant step forward in US AI infrastructure and innovation leadership.

Each rack-scale system combines 72 GPUs and 36 Grace CPUs, offering 37 terabytes of fast memory and 1.44 exaflops of FP4 performance.

The configuration supports complex reasoning and multimodal AI systems, achieving up to five times the throughput of the previous NVIDIA Hopper architecture in MLPerf benchmarks.
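
To put those published figures in perspective, the short back-of-envelope check below relates the per-rack numbers to the cluster as a whole; the exact GPU count of 4,608 is an assumption made for illustration, since the announcement states only ‘over 4,600’.

```typescript
// Back-of-envelope check of the published GB300 NVL72 figures.
// Assumption: the cluster is an even multiple of 72-GPU racks (4,608 GPUs in total),
// since the announcement states only "over 4,600" Blackwell Ultra GPUs.
const totalGpus = 4608;
const gpusPerRack = 72;
const fp4ExaflopsPerRack = 1.44;

const racks = totalGpus / gpusPerRack;                                // 64 racks
const fp4PetaflopsPerGpu = (fp4ExaflopsPerRack * 1000) / gpusPerRack; // 20 petaflops of FP4 per GPU
const clusterFp4Exaflops = racks * fp4ExaflopsPerRack;                // roughly 92 exaflops cluster-wide

console.log({ racks, fp4PetaflopsPerGpu, clusterFp4Exaflops });
```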

The cluster is built on NVIDIA’s Quantum-X800 InfiniBand network, delivering 800 Gb/s of bandwidth per GPU for unified, high-speed performance.

Microsoft and NVIDIA’s long-standing collaboration has enabled a system capable of powering trillion-parameter models, positioning Azure at the forefront of the next generation of AI training and deployment.

Quantum innovations promise faster, cleaner, more efficient technologies

The Nobel Prize in Physics has spotlighted quantum mechanics’ growing role in shaping a smarter, more sustainable future. Such advances are reshaping technology across communications and energy.

Researchers are finding new ways to use quantum effects to boost efficiency. Quantum computing could ease AI’s power demands, while novel production methods may transform energy systems.

An Institute of Science Tokyo team has built a quantum energy harvester that captures waste heat and converts it into power, bypassing traditional thermodynamic limits.

MIT researchers have observed frictionless electron movement, and new quantum batteries promise faster charging by storing energy in photons. These breakthroughs could enable cleaner and more efficient technologies.

Quantum advances offer huge opportunities but also risks, including threats to encryption. Responsible governance will be crucial to ensure these technologies serve the public good.

OSCE warns AI threatens freedom of thought

The OSCE has launched a new publication warning that rapid progress in AI threatens the fundamental human right to freedom of thought. The report, Think Again: Freedom of Thought in the Age of AI, calls on governments to create human rights-based safeguards for emerging technologies.

Speaking during the Warsaw Human Dimension Conference, Professor Ahmed Shaheed of the University of Essex said that freedom of thought underpins most other rights and must be actively protected. He urged states to work with ODIHR to ensure AI development respects personal autonomy and dignity.

Experts at the event said AI’s growing influence on daily life risks eroding individuals’ ability to form independent opinions. They warned that manipulation of online information, targeted advertising, and algorithmic bias could undermine free thought and democratic participation.

ODIHR recommends that states prevent coercion, discrimination, and digital manipulation, ensuring societies remain open to diverse ideas. Protecting freedom of thought, the report concludes, is essential to preserving human dignity and democratic resilience in an age shaped by AI.

ID data from 70,000 Discord users exposed in third-party breach

Discord has confirmed that official ID images belonging to around 70,000 users may have been exposed in a cyberattack targeting a third-party service provider. The platform itself was not breached, but hackers targeted a company involved in age verification processes.

The leaked data may include personal information, partial credit card details, and conversations with Discord’s customer service agents. No full credit card numbers, passwords, or activity beyond support interactions were affected. Impacted users have been contacted, and law enforcement is investigating.

The platform has revoked the support provider’s access to its systems and has not named the third party involved. Zendesk, a customer service software supplier to Discord, said its own systems were not compromised and denied being the source of the breach.

Discord has rejected claims circulating online that the breach was larger than reported, calling them part of an attempted extortion. The company stated it would not comply with demands from the attackers. Cybercriminals often sell personal information on illicit markets for use in scams.

ID numbers and official documents are especially valuable because, unlike credit card details, they rarely change. Discord previously tightened its age-verification measures following concerns over the misuse of some servers to distribute illegal material.

European Commission launches Apply AI and AI in Science strategies

Countries are racing to harness AI, and the European Commission has unveiled two strategies to maintain Europe’s competitiveness. Apply AI targets faster adoption across industries and the public sector, while AI in Science focuses on boosting Europe’s research leadership.

Commission President Ursula von der Leyen stated that Europe must shape AI’s future by balancing innovation and safety. The European Commission is mobilising €1 billion to boost adoption in healthcare, manufacturing, energy, defence, and culture, while supporting SMEs.

Measures include creating AI-powered screening centres for healthcare, backing frontier models, and upgrading testing infrastructure. An Apply AI Alliance will unite industry, academia, civil society, and public bodies to coordinate action, while an AI Observatory will monitor sector trends and impacts.

The AI in Science Strategy centres on RAISE, a new virtual institute to pool and coordinate resources for applying AI in research. Investments include €600 million in compute power through Horizon Europe and €58 million for talent networks, alongside plans to double annual AI research funding to over €3 billion.

The EU aims to position itself as a global hub for trustworthy and innovative AI by linking infrastructure, data, skills, and investment. Upcoming events, such as the AI in Science Summit in Copenhagen, will showcase new initiatives as Europe pushes to translate its AI ambitions into tangible outcomes.

California enacts landmark AI whistleblower law

California has enacted SB 53, offering legal protection to employees reporting AI risks or safety concerns. The law covers companies using large-scale computing for AI model training, focusing on leading developers and exempting smaller firms.

It also mandates transparency, requiring risk mitigation plans, safety test results, and reporting of critical safety incidents to the California Office of Emergency Services (OES).

The legislation responds to calls from industry insiders, including former OpenAI and DeepMind employees, who highlighted restrictive offboarding agreements that silenced criticism and limited public discussion of AI risks.

The new law protects employees who have ‘reasonable cause’ to believe a catastrophic risk exists, defined as endangering 50 lives or causing $1 billion in damages. It allows them to report concerns to regulators, the Attorney General, or management without fear of retaliation.

While experts praise the law as a crucial step, they note its limitations. The protections focus on catastrophic risks, leaving smaller but significant harms unaddressed.

Harvard law professor Lawrence Lessig emphasises that a lower ‘good faith’ standard for reporting would simplify protections for employees, though that standard currently applies only to internal anonymous reporting channels.

The law reflects growing recognition of the stakes in frontier AI, balancing the need for innovation with safeguards that encourage transparency. Advocates stress that protecting whistleblowers is essential for employees to raise AI concerns safely, even at personal or financial risk.
