OpenAI joins dialogue with the EU on fair and transparent AI development

US AI company OpenAI has met with the European Commission to discuss competition in the rapidly expanding AI sector.

The meeting focused on how large technology firms such as Apple, Microsoft and Google shape access to digital markets through their operating systems, app stores and search engines.

During the discussion, OpenAI highlighted that such platforms significantly influence how users and developers engage with AI services.

The company encouraged regulators to ensure that innovation and consumer choice remain priorities as the industry grows, noting that collaboration between larger and smaller players can help maintain a balanced ecosystem.

The issue is complicated by the fact that OpenAI itself partners with several of these leading technology companies. Microsoft, a key investor, has integrated ChatGPT into Windows 11’s Copilot, while Apple recently added ChatGPT support to Siri as part of its Apple Intelligence features.

Therefore, OpenAI’s engagement with regulators is part of a broader dialogue about maintaining open and competitive markets while fostering cooperation across the industry.

Although the European Commission has not announced any new investigations, the meeting reflects ongoing efforts to understand how AI platforms interact within the broader digital economy.

OpenAI and other stakeholders are expected to continue contributing to discussions to ensure transparency, fairness and sustainable growth in the AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft attracts tech pioneers to build the next era of AI

Some of the world’s most influential technologists (the creators of Python, Kubernetes, Google Docs, Google Lens, RSS feeds and ONNX) are now helping Microsoft shape the next era of AI.

Drawn by the company’s scale, openness to collaboration, and long-term investment in AI, they are leading projects that span infrastructure, productivity, responsible innovation and reasoning systems.

R.V. Guha, who invented RSS feeds, is developing NLWeb, a project that lets users converse directly with websites.

Brendan Burns, co-creator of Kubernetes, focuses on improving AI tools that simplify developers’ work. At the same time, Aparna Chennapragada, the mind behind Google Lens, now leads efforts to build intelligent AI agents and enhance productivity through Microsoft 365 Copilot.

Sarah Bird, who helped create the ONNX framework, leads Microsoft’s responsible AI division, ensuring that emerging systems are safe, secure and reliable.

Meanwhile, Sam Schillace, co-creator of Google Docs, explores ways AI can collaborate with people more naturally. Python’s creator, Guido van Rossum, works on systems to strengthen AI’s long-term memory across conversations.

Together, these innovators illustrate how Microsoft has become a magnet for the pioneers who defined modern computing, now united in advancing the next stage of AI’s evolution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Retailers face new pressure under California privacy law

California has entered a new era of privacy and AI enforcement after the state’s privacy regulator fined Tractor Supply USD 1.35 million for failing to honour opt-outs and ignoring Global Privacy Control signals. The case marks the largest penalty yet from the California Privacy Protection Agency.

The case signals a widening focus on how companies manage consumer data, verification processes and third-party vendors. Regulators are now demanding that privacy signals be honoured at the technology layer, not merely acknowledged through website banners or webforms.
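For a concrete sense of what honouring a signal "at the technology layer" can mean: browsers with Global Privacy Control enabled send a Sec-GPC: 1 request header (and expose navigator.globalPrivacyControl to page scripts), which a server can detect and act on directly. The sketch below is purely illustrative and assumes an Express-style Node.js server; the middleware and variable names are hypothetical, not a description of any retailer's actual setup.

```typescript
// Minimal sketch: honouring the Global Privacy Control (GPC) signal server-side.
// Assumes an Express-style Node.js application (illustrative only).
import express, { Request, Response, NextFunction } from "express";

const app = express();

// Browsers with GPC enabled send the "Sec-GPC: 1" request header.
function gpcOptOut(req: Request, res: Response, next: NextFunction) {
  if (req.get("Sec-GPC") === "1") {
    // Treat the signal as an opt-out of sale/sharing for this request:
    // downstream code can suppress third-party trackers and log the
    // preference so compliance is auditable.
    res.locals.doNotSellOrShare = true;
  }
  next();
}

app.use(gpcOptOut);

app.get("/", (req, res) => {
  // Example branch: skip advertising or analytics tags when an opt-out was signalled.
  res.send(res.locals.doNotSellOrShare ? "Opt-out honoured" : "No GPC signal");
});

app.listen(3000);
```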

Retailers must now show active, auditable compliance, with clear privacy notices, automated data controls and stronger vendor agreements. Regulators have also warned that businesses will be held responsible for partner failures and poor oversight of cookies and tracking tools.

At the same time, California’s new AI law, SB 53, extends governance obligations to frontier AI developers, requiring transparency around safety benchmarks and misuse prevention. The measure connects AI accountability to broader data governance, reinforcing that privacy and AI oversight are now inseparable.

Executives across retail and technology are being urged to embed compliance and governance into daily operations. California’s regulators are shifting from punishing visible lapses to demanding continuous, verifiable proof of compliance across both data and AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

PayPay and Binance Japan unite to advance digital finance

PayPay, Japan’s top cashless payment firm and a SoftBank company, has acquired 40% of Binance Japan to unite traditional finance with blockchain innovation. The partnership merges PayPay’s 70 million users and trusted network with Binance’s digital asset expertise and global Web3 leadership.

Under the new alliance, Binance Japan users will soon be able to purchase cryptocurrencies using PayPay Money and withdraw funds directly into their PayPay wallets. The integration seeks to simplify digital trading and connect cashless payments with decentralised finance.

Executives from both companies highlighted the significance of this collaboration. PayPay’s Masayoshi Yanase said the deal supports Japan’s financial growth, while Binance Japan’s Takeshi Chino called it a milestone for everyday Web3 adoption.

The alliance is expected to accelerate Japan’s digital finance landscape, strengthening its role as one of the world’s most advanced economies in financial technology. By combining secure payments with blockchain innovation, PayPay and Binance Japan aim to build a seamless digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google fights to bundle Gemini AI with Maps and YouTube

Google is seeking permission to bundle its Gemini AI application with long-standing services such as YouTube and Maps, even as US regulators press for restrictions to curb its dominance in search.

At a recent court hearing, Google’s lawyer John Schmidtlein told Judge Amit Mehta that tying Gemini to its core apps is vital to delivering a consistent AI experience across its ecosystem.

He insisted the courts should not treat the AI market as a settled domain subject to old rules, and claimed that neither Maps nor YouTube is a monopoly product justifying special constraints.

The government’s position is more cautious. During the hearing, Judge Mehta questioned whether allowing Google to require its AI app to be installed to access Maps or YouTube would give it unfair leverage over competitors, mirroring past practices that regulators found harmful in search and browser markets.

This moment frames a broader tension: how antitrust frameworks will adapt (or not) when dominant platforms seek to integrate generative AI across many services. The outcome could shape the future of bundling practices and interoperability in AI ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft boosts AI leadership with NVIDIA GB300 NVL72 supercomputer

Microsoft Azure has launched the world’s first NVIDIA GB300 NVL72 supercomputing cluster, explicitly designed for OpenAI’s large-scale AI workloads.

The new NDv6 GB300 VM series integrates over 4,600 NVIDIA Blackwell Ultra GPUs, representing a significant step forward in US AI infrastructure and innovation leadership.

Each rack-scale system combines 72 GPUs and 36 Grace CPUs, offering 37 terabytes of fast memory and 1.44 exaflops of FP4 performance.

The configuration supports complex reasoning and multimodal AI systems, achieving up to five times the throughput of the previous NVIDIA Hopper architecture in MLPerf benchmarks.
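For a rough sense of scale, the figures quoted above can be combined in a back-of-the-envelope calculation, assuming the per-rack numbers simply scale linearly across the cluster (an approximation, not an official specification):

```typescript
// Illustrative arithmetic only, using the figures quoted in this article:
// over 4,600 Blackwell Ultra GPUs in total; 72 GPUs, 37 TB of fast memory
// and 1.44 exaflops of FP4 compute per GB300 NVL72 rack.
const totalGpus = 4_600;
const gpusPerRack = 72;

const racks = Math.ceil(totalGpus / gpusPerRack); // ≈ 64 racks
const fastMemoryTB = racks * 37;                  // ≈ 2,368 TB of fast memory
const fp4Exaflops = racks * 1.44;                 // ≈ 92 exaflops of FP4 compute

console.log({ racks, fastMemoryTB, fp4Exaflops });
```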

The cluster is built on NVIDIA’s Quantum-X800 InfiniBand network, delivering 800 Gb/s of bandwidth per GPU for unified, high-speed performance.

Microsoft and NVIDIA’s long-standing collaboration has enabled a system capable of powering trillion-parameter models, positioning Azure at the forefront of the next generation of AI training and deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Harvard Medical School licenses health content to Microsoft’s Copilot

Harvard Medical School has signed a licensing agreement with Microsoft, granting the company access to its consumer health content on diseases and wellness.

Under the deal, Microsoft will pay a licensing fee to Harvard. The partnership aims to enhance Microsoft’s Copilot AI assistant by enabling it to provide medical advice that more closely aligns with what a user might receive from a healthcare professional.

So far, Copilot’s underlying models have been powered primarily by OpenAI. But this agreement is part of Microsoft’s broader push to diversify and reduce its dependence on OpenAI’s technology stack.

Dominic King, Microsoft’s vice president of health, has said the goal is for Copilot’s responses to health queries to more accurately reflect what a medical practitioner would say, rather than generic or superficial answers.

Microsoft declined to comment in detail, but the move strengthens Copilot’s differentiation in health, a high-stakes vertical where accuracy and trustworthiness matter greatly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia to invest $2bn in Elon Musk’s xAI

Nvidia is reportedly investing up to $2bn in Elon Musk’s AI company, xAI, as part of a $20bn funding round aimed at scaling its Colossus 2 data centre in Memphis. The capital will be used to buy Nvidia GPUs, essential for powering xAI’s next generation of AI models.

The funding package combines about $7.5bn in equity and up to $12.5bn in debt, structured through a special purpose vehicle that will lease the hardware to xAI over five years. The debt is secured by the GPUs themselves, allowing investors to recover their costs through chip rentals.

xAI faces mounting financial pressure, with reports indicating a cash burn of around $1bn per month. The firm raised $10bn earlier in the year and continues to draw on capital from Musk’s other ventures, including SpaceX.

The move comes amid an intense funding surge across the AI sector, as OpenAI, Meta and Oracle also announce multi-billion-dollar investments in infrastructure. Nvidia’s latest deal with xAI further cements its position at the centre of the global AI hardware ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US greenlights Nvidia chip exports to UAE under new AI pact

The US has approved its first export licences for Nvidia’s advanced AI chips destined for the United Arab Emirates, marking a concrete step in the bilateral AI partnership announced earlier in 2025.

The licences were issued under the oversight of the US Commerce Department’s Bureau of Industry and Security, in line with a formal agreement the two nations signed in May.

In return, the UAE has committed to investing in the United States, making this a two-way deal. The licences do not cover every project yet: some entities, such as the AI firm G42, are currently excluded from the approved shipments.

The UAE sees the move as crucial to its AI push under Vision 2031, particularly for expanding data centres and advancing research in robotics and intelligent systems. Nvidia already collaborates with Abu Dhabi’s Technology Innovation Institute (TII) in a joint AI and robotics lab.

Challenges remain. Some US officials cite national security risks, pointing to the UAE’s ties to third countries and the potential for advanced technology to flow onward to them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OSCE warns AI threatens freedom of thought

The OSCE has launched a new publication warning that rapid progress in AI threatens the fundamental human right to freedom of thought. The report, Think Again: Freedom of Thought in the Age of AI, calls on governments to create human rights-based safeguards for emerging technologies.

Speaking during the Warsaw Human Dimension Conference, Professor Ahmed Shaheed of the University of Essex said that freedom of thought underpins most other rights and must be actively protected. He urged states to work with ODIHR to ensure AI development respects personal autonomy and dignity.

Experts at the event said AI’s growing influence on daily life risks eroding individuals’ ability to form independent opinions. They warned that manipulation of online information, targeted advertising, and algorithmic bias could undermine free thought and democratic participation.

ODIHR recommends that states prevent coercion, discrimination and digital manipulation, ensuring societies remain open to diverse ideas. Protecting freedom of thought, the report concludes, is essential to preserving human dignity and democratic resilience in an age shaped by AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!