EU scrutiny intensifies over Broadcom VMware licensing dispute

Broadcom is facing increased regulatory pressure in the EU following a formal antitrust complaint concerning changes to VMware licensing practices.

The complaint highlights growing tensions between large technology providers and European cloud infrastructure firms.

The filing, submitted by Cloud Infrastructure Services Providers in Europe (CISPE), raises concerns that revised licensing models could significantly alter market dynamics.

European providers argue that the changes may limit flexibility, increase costs, and affect their ability to compete effectively in the cloud services sector.

At the centre of the dispute lies the broader issue of market concentration and control over critical digital infrastructure.

Industry stakeholders suggest that restrictive licensing conditions could reshape access to essential virtualisation technologies, which underpin a wide range of cloud and enterprise services across the EU.

Regulatory attention is expected to focus on whether such practices align with EU competition rules, particularly regarding fair access and market neutrality.

The case emerges at a time when European policymakers are intensifying oversight of dominant technology firms and seeking to strengthen digital sovereignty across strategic sectors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI simplification effort ahead of further negotiations

A committee within the European Parliament has approved a proposal to simplify aspects of AI regulation, marking a step forward in efforts to refine the implementation of the AI Act.

The initiative seeks to adjust certain requirements to support clearer compliance, particularly for industry stakeholders.

The proposal focuses on technical and procedural elements linked to how AI rules are applied in practice.

Lawmakers aim to ensure that regulatory obligations remain proportionate while maintaining existing safeguards. Part of the discussion includes how specific categories of AI systems should be addressed within the broader framework.

Some elements of the proposal may require further discussion in upcoming negotiations with the Council of the European Union. Areas under consideration include the treatment of sensitive AI applications and the balance between regulatory clarity and enforcement effectiveness.

The development reflects ongoing efforts within the EU to refine its approach to AI governance. As discussions continue, policymakers are expected to assess how adjustments can support innovation while maintaining consistency with existing legal principles.


Meta’s metaverse collapses as Horizon Worlds shuts down on Quest

Meta will shut down Horizon Worlds on its Quest headsets, ending its flagship virtual reality (VR) platform and marking a clear retreat from its metaverse ambitions. The app will be removed from the Quest store on 31 March and discontinued in VR by 15 June, continuing only as a mobile service.

Horizon Worlds, launched in 2021, was central to Meta’s rebranding from Facebook and its vision of a fully immersive virtual environment. Despite billions in investment and high-profile partnerships, the platform failed to attract a large user base and struggled with design limitations and weak engagement.

Reality Labs, the division behind the metaverse push, has accumulated nearly $80 billion in losses since 2020, including more than $6 billion in a single quarter. Recent layoffs affecting around 10 percent of the VR workforce, along with the shutdown of related projects, underscore a broader pullback.

Competition and shifting priorities have accelerated the decline. Rival platforms such as VRChat maintained stronger communities, while Meta increasingly redirected resources toward AI and hardware, including its Ray-Ban smart glasses.

Although Meta says it remains committed to VR, the closure of Horizon Worlds signals a strategic reset. The company is repositioning its future around AI-driven products, marking a decisive shift away from its earlier metaverse vision.


Google responds to UK digital market rules and CMA proposals

Debate over proposed UK digital market rules is intensifying, with Google outlining its position and emphasising the need to balance competition with user experience and platform integrity. The company said it supports the objectives of the Competition and Markets Authority but warned that some proposals could introduce risks for users.

Google argued that maintaining fair and relevant search results remains a priority, stating that its ranking systems are designed to prioritise quality rather than favour its own services. It cautioned that certain third-party proposals could expose its systems to manipulation, potentially weakening protections against spam and reducing the pace of product improvements.

The company also addressed user choice on Android devices, noting that existing options already allow users to select preferred services. It suggested that adding frequent mandatory choice screens could disrupt user experience, proposing instead a permanent settings-based option to change defaults without repeated prompts.

Regarding publisher relations, Google highlighted efforts to increase control over how content is used, particularly with generative AI features such as AI Overviews. It said new tools are being developed to allow publishers to opt out of specific AI functionalities while maintaining visibility in search results.

Google said it would continue engaging with UK regulators to shape rules that support users, publishers, and businesses, while ensuring that innovation and service quality are not compromised.


White-collar jobs hold steady as automation concerns grow

Mass layoffs across major tech firms, including Amazon’s 16,000 job cuts, have intensified concerns that AI will replace white-collar workers. Headlines suggest a rapid shift, yet broader labour data tells a more measured story.

US employment has grown by 1.1% since the launch of ChatGPT in November 2022, reaching over 157 million workers. Service industries expanded significantly, adding more than two million jobs, while goods-producing sectors declined modestly.

Overall trends indicate no major disruption to the labour market so far.

Sector-level data reveals uneven shifts. The information industry recorded the steepest losses, particularly in media, telecoms, and content production, where automation and long-term structural changes continue to reduce headcounts.

Meanwhile, highly automatable roles such as telemarketing and call centres saw the sharpest declines.

Professional services present a more complex picture. Legal, engineering, and consulting roles have grown or remained stable, defying expectations of widespread displacement.

Hiring continues to exceed layoffs in several sectors, though younger workers appear increasingly vulnerable as competition intensifies in AI-exposed roles.


Joint SEC and CFTC framework reshapes crypto oversight

The US Securities and Exchange Commission and the Commodity Futures Trading Commission issued joint guidance confirming that most crypto assets are not securities. The move marks a coordinated effort to clarify how digital assets are classified and regulated across the US.

The new interpretation establishes a clearer framework, distinguishing between securities and commodities. While tokens linked to investment contracts may fall under securities laws, many assets can transition out of that category over time, reducing long-standing legal uncertainty.

Earlier approaches relied on enforcement and court rulings, leading to inconsistent treatment of similar assets. The updated guidance introduces defined categories, including utility tokens, stablecoins, collectables, and commodities, and aligns oversight between the two agencies.

Clearer rules are expected to support innovation and reduce compliance risks for firms. The guidance supports broader efforts to build a unified digital asset framework, advancing more predictable and structured crypto regulation in the US.


Advancing global digital cooperation and AI innovation across the UN system

Digital technologies and AI are increasingly shaping economic development, governance and international cooperation. As these technologies expand rapidly, international organisations are working to ensure that innovation is accompanied by responsible governance, inclusive access and coordinated global policies.

Within the United Nations system, a range of initiatives aim to strengthen cooperation on digital transformation and the development of AI. These efforts address issues such as digital infrastructure, data governance, technological innovation and equitable participation in emerging digital ecosystems. International collaboration plays an essential role in ensuring that the benefits of digital technologies support sustainable development while reducing global inequalities in access to digital resources.

Several programmes across the United Nations system reflect these priorities, combining global governance initiatives with practical AI applications in areas such as development, humanitarian response and digital inclusion. The following sections examine selected initiatives that illustrate how AI and digital cooperation are being advanced across different areas of the UN system.

Global Digital Compact


The Global Digital Compact is a comprehensive international framework adopted by United Nations member states to guide global digital cooperation and enhance the governance of AI. It was negotiated by the 193 member states and reflects broad consultations aimed at shaping a shared vision for a digital future that is open, inclusive, safe, and secure for all. The Compact is part of the Pact for the Future, adopted at the 2024 Summit of the Future in New York.

At its core, the Compact seeks to address persistent digital divides by promoting universal connectivity, affordable access and inclusive participation in the digital economy. Governments and stakeholders have committed to connecting all individuals, schools, and hospitals to the internet, increasing investment in digital public infrastructure, and ensuring that technologies are accessible in diverse languages and formats.

The Compact also emphasises human rights and the protection of fundamental freedoms in the digital space, calling for strengthened legal and policy frameworks that uphold international law and protect users from harms such as misinformation and discrimination. It promotes an open, global, stable, and secure internet while supporting access to independent, fact-based information.

The key objective of the Compact is to enhance international cooperation on data governance and AI for the benefit of humanity. It includes commitments to develop interoperable national data governance frameworks, advance responsible and equitable approaches to AI governance, and establish mechanisms for global dialogue and scientific guidance on AI. These elements reflect the need for collaborative, multistakeholder governance that balances innovation with transparency, accountability, and respect for human rights.

Independent International Scientific Panel on AI


The Independent International Scientific Panel on AI is a mechanism called for within the Global Digital Compact to support evidence‑based policymaking in AI governance. Member states requested the establishment of a multi‑disciplinary panel under the United Nations to assess the opportunities, risks and societal impacts of AI, and to promote scientific understanding across geographic and sectoral divides.

The panel is intended to contribute robust, independent scientific analysis to global AI discussions, ensuring that policy decisions are grounded in research rather than short‑term market pressures or fragmented national approaches. Its mandate includes conducting comprehensive risk and impact assessments, developing common methodologies for evaluating AI systems, and advising on interoperable governance frameworks that respect human rights and international law.

By bringing together experts from diverse disciplines and regions, the panel aims to bridge the gap between scientific developments and policymaking. It is a key institutional mechanism for fostering inclusive AI governance, with balanced geographic representation to ensure that insights reflect global needs rather than narrow technological interests.

The panel also complements the broader Global Dialogue on AI Governance, which seeks to engage governments, international organisations, civil society and technical communities in ongoing discussions about normative approaches, standards, and principles for global AI governance.

The UN Digital Cooperation Portal

The UN Digital Cooperation Portal is a central platform designed to support the implementation of the Global Digital Compact by mapping global digital cooperation activities and facilitating coordination among diverse stakeholders. The portal invites governments, UN entities, civil society organisations, researchers, and private sector actors to voluntarily submit information on initiatives related to the Compact’s objectives.

Launched in December 2025, the portal aggregates initiatives across thematic areas, including digital inclusion, AI governance, data governance, digital infrastructure, and the protection of human rights online. By visualising how activities align with agreed international frameworks, the platform supports strategic collaboration, strengthens transparency and highlights opportunities for joint action across regions and sectors.

The portal generates interactive data visualisations that illustrate how digital cooperation initiatives are evolving at the national, regional and global levels. These tools help identify gaps and overlaps in current efforts, enabling stakeholders to coordinate more effectively in pursuit of shared objectives such as closing digital divides and advancing equitable digital development.

As a resource for governments, UN agencies and external partners, the portal also contributes to the preparatory process for the high‑level review of the Global Digital Compact scheduled for 2027, providing an evidence‑based foundation for assessing progress and identifying emerging policy priorities.

Closing the language gap in AI through local language accelerators


Language diversity remains one of the major challenges in global AI development. The world’s population speaks more than seven thousand languages, yet most AI systems currently support only a small number of widely used global languages.

Around 1.2 billion people rely on low-resource languages that remain poorly represented in digital technologies. Limited language representation can restrict access to AI-powered services in sectors such as agriculture, healthcare, education and civic participation.

The Local Language Accelerators programme, developed by the United Nations Development Programme, addresses this challenge by supporting the creation of digital language resources and AI models for underrepresented languages.

The initiative combines technological development with partnerships involving universities, research institutions and local language communities. The technologies involved include optical character recognition systems that digitise written texts, automatic speech recognition tools capable of processing spoken language and text-to-speech technologies that generate digital audio.

Ten projects are currently underway across four continents, with initiatives in Serbia, the Democratic Republic of the Congo, the Republic of the Congo, Namibia, Lesotho, Ghana, Mexico, Peru, Nepal and Iraq. These projects support the creation of new datasets and language resources that can be reused for future AI systems.

Using satellite imagery and AI to improve disaster response


Rapid damage assessment plays a critical role in humanitarian response following natural disasters. Traditional assessment methods often require manual analysis of satellite images and field inspections conducted by experts, a process that can take weeks.

Emergency response operations, however, require reliable information within the first seventy-two hours after a disaster to prioritise rescue operations and humanitarian assistance.

The SKAI platform, developed by the World Food Programme Innovation Accelerator, uses AI-based computer vision to analyse satellite imagery and identify damaged buildings automatically. The system enables humanitarian organisations to assess destruction at the level of individual structures across large geographic areas.

Developed as an open-source project in collaboration with Google Research, the platform can generate prioritised damage assessments within approximately twenty-four hours. Since 2022, the system has analysed more than 3.9 million buildings and identified around 450,000 severely damaged or destroyed structures.

Expanding inclusive participation through the UN Women AI School


Increasing participation in AI development is another priority across the United Nations system. Women remain underrepresented in many AI-related fields, including machine learning engineering and data science.

The UN Women AI School addresses this challenge by providing training programmes designed for policymakers, civil society organisations, UN staff, and young innovators. The initiative aims to strengthen AI literacy and encourage broader participation in shaping the future of digital technologies.

Participants follow structured training tracks combining technical education with discussions on AI governance, ethics, and social impact. Collaborative learning environments encourage participants to develop solutions tailored to the needs of their communities.

More than three thousand participants have taken part in the programme since its launch. A train-the-trainer (ToT) model enables graduates to support future training programmes and expand the initiative to additional regions.

Responsible AI in satellite technologies and earth observation


AI technologies are increasingly integrated into satellite systems and Earth observation platforms. These systems analyse large volumes of geospatial data and generate near-real-time insights about environmental conditions.

Applications include monitoring climate change, analysing natural disasters, and supporting environmental policy planning. Rapid technological progress in this field also raises governance challenges related to transparency and accountability.

Many AI models used in satellite analysis operate as black box systems whose internal decision-making processes are difficult to interpret. Limited transparency can create risks when such systems are used to inform critical policy decisions.

Data bias represents another concern. Training datasets often originate primarily from the Global North, which may lead to inaccurate interpretations of environmental conditions in other regions of the world.

Experts from the United Nations Office for Outer Space Affairs have therefore proposed a framework promoting the responsible use of AI in space technologies. The framework emphasises transparency, accountability, and continued human oversight.

Assessing national readiness for AI governance


UNESCO’s AI Readiness Assessment Methodology helps governments evaluate their capacity to adopt and regulate AI technologies responsibly.

The methodology examines multiple dimensions of national AI ecosystems, including infrastructure, research capacity, institutional readiness and regulatory frameworks. Rather than ranking countries, the assessment identifies strengths and areas requiring further development.

Since its introduction in 2022, the methodology has been implemented in more than seventy countries. More than seventeen thousand stakeholders have participated in consultations associated with the initiative.

Assessment results have contributed to the development of national AI strategies and policy frameworks in several regions. An updated version of the methodology is expected to be released in 2026.

Additionally, UNESCO promotes the ethical development and use of AI through its Recommendation on the Ethics of Artificial Intelligence. The global framework sets out principles on transparency, accountability, fairness, and respect for human rights to guide national policies and international cooperation.

AI for Good and global capacity building


The International Telecommunication Union coordinates the AI for Good initiative, which focuses on applying AI technologies to global challenges while strengthening international cooperation in governance and standards.

The programme operates across multiple areas, including multistakeholder dialogue, technical standard development, governance support and capacity development activities.

More than four hundred AI-related standards have already been developed in areas such as multimedia technologies, energy efficiency and cybersecurity. Governance dialogues organised through the initiative have involved more than one hundred ministers and regulators.

Educational programmes linked to the initiative aim to expand digital skills among young people worldwide through robotics competitions, machine learning challenges and educational partnerships.

The AI for Good Global Summit 2026, taking place 7–10 July in Geneva, will convene governments, industry leaders and civil society to advance AI governance, promote responsible innovation, and highlight initiatives that foster inclusive and equitable digital development.

AI tools supporting refugee entrepreneurship


AI technologies are also being used to support economic opportunities for displaced populations. The United Nations Refugee Agency has developed an AI-powered virtual assistant designed to help refugees and asylum seekers transform business ideas into structured business plans.

The platform guides users through financial planning, market analysis and the preparation of investment proposals. The development of the system involved collaboration with NGOs, governments, and entrepreneurial networks across Latin America.

The tool was initially implemented in Paraguay and was designed with input from refugee communities. Remote access allows users to engage with the platform regardless of geographical or institutional constraints.

More than 340 refugee entrepreneurs have used the platform since its launch, with women representing approximately sixty percent of participants. The model is designed to be scalable and could be implemented in additional regions.

Promoting responsible innovation in civilian AI for peace and security


The rapid expansion of AI technologies brings increasing security challenges, particularly due to the potential misuse of civilian AI systems in military, conflict-related, or high-risk contexts. Dual-use applications mean that tools designed for civilian purposes, such as data analysis or autonomous systems, could also be repurposed in ways that threaten international peace, stability or human safety.

The United Nations Office for Disarmament Affairs works to foster responsible innovation practices, ensuring that the development and deployment of AI technologies consider their broader implications for global peace and security. Addressing these risks requires ongoing collaboration and dialogue among policymakers, researchers, industry stakeholders, and civil society, creating a shared framework for understanding and mitigating potential threats.

To support this, the programme organises a comprehensive set of initiatives, including thematic multistakeholder dialogues, academic workshops, public panels, private sector roundtables and in-person training sessions for graduate students. These activities aim not only to raise awareness of emerging security risks, but also to provide practical guidance and tools that promote safe, transparent and accountable AI practices in civilian applications worldwide.

UN 2.0 Communities of Practice


Knowledge sharing and collaboration are strengthened through UN 2.0 Communities of Practice, connecting partners across the United Nations system and beyond. The networks facilitate the exchange of expertise and approaches on digital transformation, data strategy, innovation, and strategic foresight.

Over 18,000 practitioners from more than 160 countries participate, enhancing the collective capacity to address complex AI and digital challenges. Thematic groups, including those focused on digital and data initiatives, support peer-to-peer engagement, professional development, and collaborative problem-solving. Participation allows stakeholders to contribute to a wider ecosystem of expertise and innovation, promoting inclusive digital governance and supporting the Sustainable Development Goals.


EU delays tech sovereignty package with AI and Chips Act 2

The European Commission has delayed a flagship tech sovereignty package for the second time, according to its latest College agenda. The measures are now scheduled for adoption on 27 May, after previously being postponed from March to April.

The package includes several major initiatives aimed at strengthening the EU’s technological autonomy, such as the Cloud and AI Development Act, the Chips Act 2, an open-source strategy, and a roadmap for digitalisation and AI in energy. European Commission officials have not provided a reason for the latest delay.

The Cloud and AI Development Act is expected to define what constitutes a ‘sovereign’ cloud and simplify rules for building data centres. The proposal is designed to accelerate infrastructure development as Europe seeks to compete in the global AI race.

Chips Act 2 will follow up on the EU’s earlier semiconductor strategy, which struggled to boost domestic chip production significantly. The new proposal is expected to refine industrial policy efforts to reduce reliance on foreign suppliers.

Meanwhile, the planned open source strategy aims to support European software ecosystems and reduce dependence on large US technology firms. By encouraging commercially viable open source projects, the EU hopes to strengthen its long-term digital autonomy.


Publishers challenge OpenAI over alleged copyright infringement

Legal pressure is increasing on OpenAI as Encyclopaedia Britannica and Merriam-Webster file a lawsuit accusing the company of large-scale copyright violations.

According to the complaint, nearly 100,000 copyrighted articles were allegedly used without authorisation to train large language models. Publishers also argue that AI-generated outputs can reproduce parts of their content, raising concerns about unauthorised distribution.

Additional claims focus on how AI systems retrieve and present information. The lawsuit argues that retrieval-augmented generation tools may rely on proprietary databases, potentially undermining publishers’ business models by reducing traffic to original sources.

Concerns are also raised about inaccurate outputs attributed to publishers, which could affect trust in established information providers. The case highlights ongoing tensions between AI development and intellectual property protections.

Growing legal disputes involving media organisations, including The New York Times, suggest that courts will play a key role in defining how copyrighted material can be used in AI training.


New licensing rules for crypto platforms in Australia

Australia is advancing plans to regulate digital asset platforms under its financial services framework. A Senate committee recommended passing the Digital Assets Framework Bill 2025, bringing Australia closer to licensing crypto exchanges and tokenisation platforms.

Industry groups have raised concerns about definitions such as ‘digital token’ and ‘factual control.’ Broad wording could inadvertently cover infrastructure providers, including multi-party wallet systems, potentially classifying them as financial service operators.

Ripple Labs emphasised the need for precise language to avoid unintended regulation.

The committee supported the Treasury’s approach while planning to refine technical details through future regulations. Coinbase welcomed the progress but noted ongoing banking challenges for crypto firms.

The bill now proceeds to the Senate for debate and a final vote, which could reshape digital asset operations in Australia.
