Microsoft and OpenAI revisit investment deal

OpenAI chief executive Sam Altman revealed that he had a conversation with Microsoft CEO Satya Nadella on Monday to discuss the future of their partnership.

Speaking on a New York Times podcast, Altman described the dialogue as part of ongoing efforts to align on the evolving nature of their collaboration.

Earlier this month, the Wall Street Journal reported that Microsoft — OpenAI’s primary backer — and the AI firm are in discussions to revise the terms of their investment. Topics under negotiation reportedly include Microsoft’s future equity stake in OpenAI.

According to the Financial Times, Microsoft is weighing whether to pause the talks if the two parties cannot resolve key issues. Neither Microsoft nor OpenAI responded to media requests for comment outside regular business hours.

‘Obviously, in any deep partnership, there are points of tension, and we certainly have those,’ Altman said. ‘But on the whole, it’s been wonderfully good for both companies.’

Altman also commented on his recent discussions with United States President Donald Trump regarding AI. He noted that Trump appeared to grasp the technology’s broader geopolitical and economic significance.

In January, Trump announced Stargate — a proposed private sector initiative to invest up to $500 billion in AI infrastructure — with potential backing from SoftBank, OpenAI, and Oracle.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI governance efforts centre on human rights

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a key session spotlighted the launch of the Freedom Online Coalition’s (FOC) updated Joint Statement on Artificial Intelligence and Human Rights. Backed by 21 countries and counting, the statement outlines a vision for human-centric AI governance rooted in international human rights law.

Representatives from governments, civil society, and the tech industry—most notably the Netherlands, Germany, Ghana, Estonia, and Microsoft—gathered to emphasise the urgent need for a collective, multistakeholder approach to tackle the real and present risks AI poses to rights such as privacy, freedom of expression, and democratic participation.

Ambassador Ernst Noorman of the Netherlands warned that human rights and security must be viewed as interconnected, stressing that unregulated AI use can destabilise societies rather than protect them. His remarks echoed the Netherlands’ own hard lessons from biased welfare algorithms.

Other panellists, including Germany’s Cyber Ambassador Maria Adebahr, underlined how AI is being weaponised for transnational repression and pointed to Germany’s decision to double its funding for the FOC as a mark of its commitment. Ghana’s cybersecurity chief, Divine Salese Agbeti, added that AI misuse is not exclusive to governments: citizens, too, have exploited the technology for manipulation and deception.

From the private sector, Microsoft’s Dr Erika Moret showcased the company’s multi-layered approach to embedding human rights in AI, from ethical design and impact assessments to rejecting high-risk applications like facial recognition in authoritarian contexts. She stressed the company’s alignment with the UN Guiding Principles and the need for transparency, fairness, and inclusivity.

The discussion also highlighted binding global frameworks like the EU AI Act and the Council of Europe’s Framework Convention, calling for their widespread adoption as vital tools in managing AI’s global impact. The session concluded with a shared call to action: governments must use regulatory tools and procurement power to enforce human rights standards in AI, while the private sector and civil society must push for accountability and inclusion.

The FOC’s statement remains open for new endorsements, standing as a foundational text in the ongoing effort to align the future of AI with the fundamental rights of all people.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Global consensus grows on inclusive and cooperative AI governance at IGF 2025

At the Internet Governance Forum 2025 in Lillestrøm, Norway, the ‘Building an International AI Cooperation Ecosystem’ session spotlighted the urgent need for international collaboration to manage AI’s transformative impact. Hosted by China’s Cyberspace Administration, the session featured a global roster of experts who emphasised that AI is no longer a niche or elite technology, but a powerful and widely accessible force reshaping economies, societies, and governance frameworks.

China’s Cyberspace Administration Director-General Qi Xiaoxia opened the session by highlighting her country’s leadership in AI innovation, noting that over 60% of global AI patents originate from China. She proposed a cooperative agenda focused on sustainable development, managing AI risks, and building international consensus through multilateral collaboration.

Echoing her call, speakers stressed that AI’s rapid evolution requires both national regulation and coordinated global governance, ideally under the auspices of the UN.

Speakers such as Jovan Kurbalija, executive director of Diplo, and Wolfgang Kleinwächter, emeritus professor for Internet Policy and Regulation at the University of Aarhus, warned against the pitfalls of siloed regulation and technological protectionism. Instead, they advocated for open-source standards, inclusive policymaking, and leveraging existing internet governance models to shape AI rules.

Regional case studies from Shanghai and Mexico illustrated diverse governance approaches—ranging from rights-based regulation to industrial ecosystem building—while initiatives like China Mobile’s AI+ Global Solutions showcased the role of major industry actors. A recurring theme throughout the forum was that no single stakeholder can monopolise effective AI governance.

Instead, a multistakeholder approach involving governments, civil society, academia, and the private sector is essential. Participants agreed that the goal is not just to manage risks, but to ensure AI is developed and deployed in a way that is ethical, inclusive, and beneficial to all humanity.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

EuroDIG outcomes shared at IGF 2025 session in Norway

At the Internet Governance Forum (IGF) 2025 in Norway, a high-level networking session was held to share key outcomes from the 18th edition of the European Dialogue on Internet Governance (EuroDIG), which took place earlier this year from 12–14 May in Strasbourg, France. Hosted by the Council of Europe and supported by the Luxembourg Presidency of the Committee of Ministers, the Strasbourg conference centred on balancing innovation and regulation, with a strong focus on safeguarding human rights in digital policy.

Sandra Hoferichter, who moderated the session in Norway, opened by noting the symbolic significance of EuroDIG’s return to Strasbourg—the city where the forum began in 2008. She emphasised EuroDIG’s unique tradition of issuing “messages” as policy input, which IGF and other regional dialogues later adopted.

Swiss Ambassador Thomas Schneider, President of the EuroDIG Support Association, presented the community’s consolidated contributions to the WSIS+20 review process. “The multistakeholder model isn’t optional—it’s essential,” he said, adding that Europe strongly supports making the Internet Governance Forum a permanent institution rather than one renewed every decade. He called for a transparent and inclusive WSIS+20 process, warning against decisions being shaped behind closed diplomatic doors.

YouthDIG representative Frances Douglas Thomson shared insights from the youth-led sessions at EuroDIG. She described strong debates on digital literacy, particularly around the role of generative AI in schools. ‘Some see AI as a helpful assistant; others fear it diminishes critical thinking,’ she said. Content moderation also sparked division, with some young participants calling for vigorous enforcement against harmful content and others raising concerns about censorship. Common ground emerged around the need for greater algorithmic transparency so users understand how content is curated.

Hans Seeuws, business operations manager at EURid, emphasised the need for infrastructure providers to be heard in policy spaces. He supported calls for concrete action on AI governance and digital rights, stressing the importance of translating dialogue into implementation.

Chetan Sharma from the Data Mission Foundation Trust India questioned the practical impact of governance forums in humanitarian crises. Frances highlighted several EuroDIG sessions that tackled the use of autonomous weapons, internet shutdowns, and misinformation during conflicts. ‘Dialogue across stakeholders can shift how we understand digital conflict. That’s meaningful change,’ she noted.

A representative from Geneva Macro Labs challenged the panel to explain how internet policy can be effective when many governments lack technical literacy. Schneider replied that civil society, business, and academia must step in when public institutions fall short. ‘Democracy is not self-sustaining—it requires daily effort. The price of neglect is high,’ he cautioned.

Janice Richardson, an expert at the Council of Europe, asked how to widen youth participation. Frances praised YouthDIG’s accessible, bottom-up format and called for increased funding to help young people from underrepresented regions join discussions. ‘The more youth feel heard, the more they stay engaged,’ she said.

As the session closed, Hoferichter reminded attendees of the more than 400 applications received for YouthDIG this year. She urged donors to help cover travel costs, which weigh most heavily on applicants from Eastern Europe and the Caucasus. ‘Supporting youth in internet governance isn’t charity—it’s a long-term investment in inclusive, global policy,’ she concluded.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

AI data risks prompt new global cybersecurity guidance

A coalition of cybersecurity agencies, including the NSA, FBI, and CISA, has issued joint guidance to help organisations protect AI systems from emerging data security threats. The guidance explains how AI systems can be compromised by data supply chain flaws, poisoning, and drift.

Organisations are urged to adopt security measures throughout all four phases of the AI life cycle: planning, data collection, model building, and operational monitoring.

The recommendations include verifying third-party datasets, using secure ingestion protocols, and regularly auditing AI system behaviour. Particular emphasis is placed on preventing model poisoning and tracking data lineage to ensure integrity.
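
The agencies’ recommendations are prose-only; as a minimal sketch of what verifying a third-party dataset before ingestion could look like, the Python snippet below checks a downloaded file against a pinned checksum. The file path and expected hash are hypothetical placeholders, not values from the guidance.

```python
import hashlib
from pathlib import Path

# Hypothetical placeholders: in practice, the checksum would come from the
# dataset provider through a separate, trusted channel.
DATASET_PATH = Path("third_party/reviews.csv")
EXPECTED_SHA256 = "replace-with-provider-published-checksum"


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


actual = sha256_of(DATASET_PATH)
if actual != EXPECTED_SHA256:
    # A mismatch may mean tampering or corruption: refuse to ingest and log
    # the event for the audit trail the guidance recommends.
    raise ValueError(f"Checksum mismatch for {DATASET_PATH}: got {actual}")
print(f"{DATASET_PATH} verified; recording it in the data lineage log.")
```

Keeping such pinned hashes under version control also provides a simple, auditable form of the data lineage tracking the guidance emphasises.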

The guidance encourages firms to update their incident response plans to address AI-specific risks, conduct audits of ongoing projects, and establish cross-functional teams involving legal, cybersecurity, and data science experts.

With AI models increasingly central to critical infrastructure, treating data security as a core governance issue is essential.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools at work pose hidden dangers

AI tools are increasingly used in workplaces to enhance productivity but come with significant security risks. Workers may unknowingly breach privacy laws like GDPR or HIPAA by sharing sensitive data with AI platforms, risking legal penalties and job loss.

Experts warn of AI hallucinations, in which chatbots generate false information, highlighting the need for thorough human review. Bias in AI outputs, stemming from flawed training data or system prompts, can lead to discriminatory decisions and potential lawsuits.

Cyber threats like prompt injection and data poisoning can manipulate AI behaviour, while user error and IP infringement pose further challenges. As AI technology evolves, unknown risks remain a concern, making caution essential when integrating AI into business processes.
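
To make the prompt injection threat concrete, here is a deliberately naive sketch, not a production defence: it screens untrusted text for instruction-like phrases before that text is placed inside a model prompt. The patterns and function names are illustrative assumptions; real mitigations layer input filtering with privilege separation, output checks, and human review.

```python
import re

# Illustrative patterns only; attackers can phrase injections in endless
# ways, so a regex screen is a first-pass signal, not a guarantee.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"disregard .{0,40}system prompt",
    r"you are now",
]


def looks_like_injection(untrusted_text: str) -> bool:
    """Return True if the text contains instruction-like phrasing."""
    lowered = untrusted_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)


def build_prompt(task: str, untrusted_text: str) -> str:
    if looks_like_injection(untrusted_text):
        # Route suspicious input to human review instead of the model.
        raise ValueError("Possible prompt injection detected.")
    # Delimit untrusted content so the model treats it as data, not commands.
    return f"{task}\n\n<untrusted>\n{untrusted_text}\n</untrusted>"


print(build_prompt("Summarise the document below.", "Quarterly results were strong."))
```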

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity tests AI browser for Windows

Perplexity has begun testing its AI-powered Comet browser for Windows, expanding beyond its earlier launch on Macs with Apple Silicon.

The browser integrates AI at its core, offering features such as natural language interactions, email reminders, and a tool for trying on AI-generated outfits.

The Comet browser aims to stand out in a market where major players such as Microsoft, Google, and OpenAI dominate the AI space. Perplexity’s plans for the browser’s wider release and final feature set remain unclear, as testing is limited to a small group.

Perplexity’s push into the browser market comes amid controversy over its plans to collect extensive user data for personalised advertising. The company also faces legal threats from the BBC over alleged content scraping practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI Mode Search in India

Google has launched its advanced AI Mode search experience in India, allowing users to explore information through more natural and complex interactions.

The feature, previously available as an experiment in the US, can now be enabled in English via Search Labs, Google’s platform where users test experimental tools and share feedback on early Search features.

Once activated, AI Mode introduces a new tab in the Search interface and Google app. It offers expanded reasoning capabilities powered by Gemini 2.5, enabling queries through text, voice, or images.

The shift supports deeper exploration by allowing follow-up questions and offering diverse web links, helping users understand topics from multiple viewpoints.

India plays a key role in the rollout due to the country’s widespread use of visual and voice search.

According to Hema Budaraju, Vice President of Product Management for Search, more users in India engage with Google Lens each month than anywhere else. AI Mode reflects Google’s broader goal of making information accessible across different formats.

Google also highlighted that over 1.5 billion people globally use AI Overviews monthly. These AI-generated summaries, which appear at the top of search results, have driven a 10% rise in user engagement for specific types of queries in both India and the US.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba Cloud launches new AI tools and education partnerships in Europe

Alibaba Cloud has announced a new suite of AI services as part of its expansion across Europe.

Announcing the services at the Alibaba European Summit in Paris, the company said the new offerings reinforce its long-term commitment to the region by providing AI-driven tools and cloud solutions for the fashion, healthcare, and automotive industries.

A key development is a significant upgrade to the Platform for AI (PAI), Alibaba’s AI computing platform hosted in the Frankfurt cloud region. The company stated that the enhancements will increase efficiency and scalability to meet the rising demand for compute-intensive workloads.

The platform’s improvements are powered by Alibaba’s proprietary AI Scheduler, which optimises the allocation of diverse cloud computing resources.

Alibaba Cloud also aims to support European companies entering Asian markets. The firm cited its strategic partnership with SAP to provide enterprise resource planning (ERP) solutions in China, Southeast Asia, and the Middle East.

In the automotive sector, Alibaba recently extended its partnership with BMW in China to integrate its Qwen AI models into vehicles.

Alibaba Cloud has signed an agreement with France’s Brest Business School to strengthen AI skills and collaboration. The partnership will include academic programmes, training in AI and cloud technologies, and support for digital transformation initiatives.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and io face lawsuit over branding conflict

OpenAI and hardware startup io, founded by former Apple designer Jony Ive, are now embroiled in a trademark infringement lawsuit filed by iyO, a Google-backed company specialising in custom headphones.

The legal case prompted OpenAI to withdraw promotional material linked to its $6.005 billion acquisition of io, raising questions about the branding of its future AI device.

Court documents reveal that OpenAI and io had previously met with iyO representatives and tested their custom earbud product, although the tests were unsuccessful.

Despite initial contact and discussions about potential collaboration, OpenAI rejected iyO’s proposals to invest in, license, or acquire the company for $200 million. According to io’s co-founders, however, the device at the centre of the dispute is neither an earbud nor a wearable.

Io executives clarified in court that their prototype does not resemble iyO’s product and remains unfinished. It is neither wearable nor intended for sale within the next year.

OpenAI CEO Sam Altman described the joint project as an attempt to reimagine hardware interfaces, while Jony Ive expressed enthusiasm for the device’s early design, which he said captured his imagination.

Court testimony and emails suggest io explored various technologies, including desktop, mobile, and portable designs. Internal communications also reference possible ergonomic research using 3D ear scan data.

Although the lawsuit has exposed some development details, the main product of the collaboration between OpenAI and io remains undisclosed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!