OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternatives, including developing its own AI tools and working with rivals such as Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IGF 2025 opens in Norway with focus on inclusive digital governance

Norway will host the 20th annual Internet Governance Forum (IGF) from 23 to 27 June 2025 in a hybrid format, with the main venue set at Nova Spektrum in Lillestrøm, just outside Oslo.

This milestone event marks two decades of the UN-backed forum that brings together diverse stakeholders to discuss how the internet should be governed for the benefit of all.

The overarching theme, Building Governance Together, strongly emphasises inclusivity, democratic values, and sustainable digital cooperation.

With participation expected from governments, the private sector, civil society, academia, and international organisations, IGF 2025 will continue to promote multistakeholder dialogue on critical topics, including digital trust, cybersecurity, AI, and internet access.

A key feature will be the IGF Village, where companies and organisations will showcase technologies and products aligned with global internet development and governance.

Norway’s Minister of Digitalisation and Public Governance, Karianne Oldernes Tung, underlined the significance of this gathering in light of current geopolitical tensions and the forthcoming WSIS+20 review later in 2025.

Reaffirming Norway’s support for the renewal of the IGF mandate at the UN General Assembly, Minister Tung called for unity and collaborative action to uphold an open, secure, and inclusive internet. The forum aims to assess progress and help shape the next era of digital policy.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

SoftBank plans $1 trillion AI and robotics park in Arizona

SoftBank founder Masayoshi Son is planning what could become his most audacious venture yet: a $1 trillion AI and robotics industrial park in Arizona.

Dubbed ‘Project Crystal Land’, the initiative aims to create a high-tech manufacturing hub reminiscent of China’s Shenzhen, focused on AI-powered robots and next-generation automation.

Son is courting global tech giants — including Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung — to join the vision, though none have formally committed.

The plan hinges on support from federal and state governments, with SoftBank already discussing possible tax breaks with US officials, including Commerce Secretary Howard Lutnick.

While TSMC is already investing $165 billion in Arizona facilities, sources suggest Son’s project has not altered the chipmaker’s current roadmap. SoftBank hopes to attract semiconductor and AI hardware leaders to power the park’s infrastructure.

Son has also approached SoftBank Vision Fund portfolio companies to participate, including robotics startup Agile Robots.

The park may serve as a production hub for emerging tech firms, complementing SoftBank’s broader investments, such as a potential $30 billion stake in OpenAI, a $6.5 billion acquisition of Ampere Computing, and funding for Stargate, a global data centre venture with OpenAI, Oracle, and MGX.

While the vision is still early, Project Crystal Land could radically shift US high-tech manufacturing. Son’s strategy relies heavily on project-based financing, allowing extensive infrastructure builds with minimal upfront capital.

As SoftBank eyes long-term AI growth and increased investor confidence, whether this futuristic park becomes a reality or another of Son’s high-stakes dreams remains to be seen.

EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI in line with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.

AI diplomacy enters the spotlight with Gulf region partnerships

In a groundbreaking shift in global diplomacy, recent US-brokered AI partnerships in the Gulf region have propelled AI to the centre of international strategy. As highlighted by Slobodan Kovrlija, this development transforms the Gulf into a key AI hub, alongside the US and China.

Countries like Saudi Arabia, the UAE, and Qatar are investing heavily in AI infrastructure—from quantum computing to sprawling data centres—as part of a calculated effort to integrate more deeply into a US-led technological sphere and counter China’s Digital Silk Road ambitions. That movement is already reshaping global dynamics.

China is racing to deepen its AI alliances with developing nations, while Russia is leveraging the expanded BRICS bloc to build alternative AI systems and promote its AI Code of Ethics. On the other hand, Europe is stepping up efforts to internationalise its ‘human-centric AI’ regulatory approach under the EU AI Act.

These divergent paths underscore how AI capabilities are now as essential to diplomacy as traditional military or economic tools, forming emerging ‘AI blocs’ that may redefine geopolitics for decades. Kovrlija emphasises that AI diplomacy is no longer a theoretical concept but a practical necessity.

Being a technological front-runner now means possessing enhanced diplomatic influence, with partnerships based on AI potentially replacing older alliance models. However, this new terrain also presents serious challenges, such as ensuring ethical standards, data privacy, and equitable access. The Gulf deals, while strategic, also open a space for joint efforts in responsible AI governance.

Why does it matter?

As the era of AI diplomacy dawns, institutions like Diplo are stepping in to prepare diplomats for this rapidly evolving landscape. Kovrlija concludes that understanding and engaging with AI diplomacy is now essential for any nation wishing to maintain its relevance and influence in global affairs.

New Meta smart glasses target sports enthusiasts

Meta is set to launch a new pair of AI-powered smart glasses under the Oakley brand, targeting sports users. Scheduled for release on 20 June, the glasses mark an expansion of Meta’s partnership with eyewear giant EssilorLuxottica.

Oakley’s sporty design and outdoor functionality make the brand a natural fit for active users, a market Meta aims to capture with this launch. The glasses will feature a central camera and are expected to retail for around $360.

This follows the success of Meta’s Ray-Ban smart glasses, which include AI assistant integration and hands-free visual capture. Over two million pairs have been sold since 2023, according to EssilorLuxottica’s CEO.

Meta CEO Mark Zuckerberg continues to push smart eyewear as a long-term replacement for smartphones. With high-fashion Prada smart glasses also in development, Meta is betting on wearable tech becoming the next frontier in computing.

AI helps Google curb scams and deepfakes in India

Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.

AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘Digikavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.

Google Pay alone averted financial fraud worth ₹13,000 crore in 2024, while Google Play Protect stopped nearly 6 crore high-risk app installations. These achievements reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.

The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.

Hexagon unveils AEON humanoid robot powered by NVIDIA to build industrial digital twins

As industries struggle to fill 50 million job vacancies globally, Hexagon has unveiled AEON — a humanoid robot developed in collaboration with NVIDIA — to tackle labour shortages in manufacturing, logistics and beyond.

AEON can perform complex tasks like reality capture, asset inspection and machine operation, thanks to its integration with NVIDIA’s full-stack robotics platform.

By simulating skills using NVIDIA Isaac Sim and training in Isaac Lab, AEON drastically reduced its development time, mastering locomotion in weeks instead of months.

The robot is built using NVIDIA’s trio of AI systems, combining simulation with onboard intelligence powered by Jetson Orin and IGX Thor for real-time navigation and safe collaboration.

AEON will be deployed in factories and warehouses, scanning environments to build high-fidelity digital twins through Hexagon’s cloud-based Reality Cloud Studio and NVIDIA Omniverse.

Hexagon believes AEON can bring digital twins into mainstream use, streamlining industrial workflows through advanced sensor fusion and simulation-first AI. The company is also leveraging synthetic motion data to accelerate robot learning, pushing the boundaries of physical AI for real-world applications.

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities and presents security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
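For readers curious what such a connector involves under the hood: MCP is built on JSON-RPC 2.0 messages, with the server advertising tools and executing calls against internal data. The sketch below is a simplified, hypothetical illustration of that request/response shape, not OpenAI's or Anthropic's reference code; the `company_search` tool and the in-memory document store are invented for the example.

```python
import json

# Hypothetical internal document store; a real MCP server would query
# the company's actual siloed systems (wikis, drives, databases).
DOCS = {"q3-report": "Q3 revenue summary for the finance team.",
        "hr-policy": "Remote work policy and leave guidelines."}

def handle_request(raw: str) -> str:
    """Dispatch one MCP-style JSON-RPC 2.0 request (simplified sketch)."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        # Advertise the tools this server exposes to the client.
        result = {"tools": [{
            "name": "company_search",  # hypothetical tool name
            "description": "Search internal documents by keyword.",
            "inputSchema": {"type": "object",
                            "properties": {"query": {"type": "string"}},
                            "required": ["query"]},
        }]}
    elif req["method"] == "tools/call":
        # Execute the named tool with the supplied arguments.
        query = req["params"]["arguments"]["query"].lower()
        hits = [key for key, text in DOCS.items() if query in text.lower()]
        result = {"content": [{"type": "text", "text": json.dumps(hits)}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                       "result": result})
```

A production server would also handle the MCP initialisation handshake and run over a real transport such as stdio or HTTP, and an Enterprise admin would then register its endpoint as a custom connector in ChatGPT.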

Deepfake technology fuels new harassment risks

AI-generated media is reshaping workplace harassment in the US, with deepfakes used to impersonate colleagues and circulate fabricated explicit content. Recent studies found that, by 2023, the overwhelming majority of deepfakes online were sexually explicit, most often targeting women.

Organisations risk liability under existing laws if deepfake incidents create hostile work environments. New legislation like the TAKE IT DOWN Act and Florida’s Brooke’s Law now mandates rapid removal of non-consensual intimate imagery.

Employers are also bracing for proposed rules requiring strict authentication of AI-generated evidence in legal proceedings. Industry experts advise an urgent review of harassment and acceptable use policies, clear incident response plans and targeted training for HR, legal and IT teams.

Protective measures include auditing insurance coverage for synthetic media claims and staying abreast of evolving state and federal regulations. Forward-looking employers already embed deepfake awareness into their harassment prevention and cybersecurity training to safeguard workplace dignity.
