
24 – 30 April 2026
Note to our readers: This issue arrives in your inbox on a Thursday, rather than on the customary Friday, as tomorrow, 1 May, is Labour Day. Expect the next issue next Friday, as usual.
HIGHLIGHT OF THE WEEK
Mission, money, and the future of OpenAI
For the second week in a row, technocrats take centre stage in our Weekly newsletter. This time, we’re spotlighting a billionaire row: a courtroom battle between Elon Musk and Sam Altman that is putting the origins and future of OpenAI under scrutiny.
At the centre of the dispute is a fundamental question: was OpenAI meant to remain a nonprofit serving humanity, or was a shift toward a profit-driven model always part of the plan?
Musk, a cofounder, argues he was misled. He claims that OpenAI’s leadership abandoned its original mission and pivoted toward commercialisation, particularly through partnerships and products like ChatGPT. His lawsuit seeks sweeping remedies: removing Altman and president Greg Brockman, forcing structural changes to OpenAI’s governance, and potentially awarding up to $150 billion in damages to its nonprofit arm.
OpenAI, backed by Microsoft, rejects this narrative. Its legal team frames the case as a competitive dispute—arguing that Musk raised objections only after OpenAI’s success and the emergence of rival efforts, such as his own AI venture. In court, both sides are leaning heavily on early emails, funding discussions, and conflicting interpretations of what ‘open’ and ‘nonprofit’ were supposed to mean in practice.

The big (business) picture. This case could redefine the nonprofit–for-profit hybrid model that underpins much of today’s AI ecosystem. OpenAI’s structure—a nonprofit overseeing a capped-profit entity—has been widely copied or studied. If the court rules that such a transition violated founding principles, it could force a rethink across the industry, especially for organisations balancing public-interest missions with the massive capital demands of AI development.
Second, the trial may set a precedent for AI governance and accountability. Musk’s argument hinges on the idea that AI labs developing potentially transformative—or risky—technologies should be bound by enforceable commitments to the public good. If courts start treating these commitments as legally binding rather than aspirational, companies could face stricter scrutiny over how they deploy and monetise AI.
Third, there are implications for competition in AI markets. OpenAI’s partnerships, particularly with major tech players, have already raised questions about the concentration of power. A ruling that forces structural separation could reshape the competitive landscape.
It bears noting that Musk’s xAI has filed for an initial public offering. OpenAI is rumoured to be considering an IPO of its own, sometime between Q4 2026 and mid-to-late 2027.
The user’s POV. If OpenAI is forced to prioritise its nonprofit mission more strictly, users might see greater transparency—for example, more openness about how models are trained, how decisions are made, or how risks are managed. On the other hand, limiting commercial incentives could slow down development or reduce the scale of investment, potentially affecting how quickly tools improve.
If the current model is upheld, it will underline that market logic and commercial interest will drive AI development. In practical terms, users could face more tiered access, stronger platform lock-in, and less visibility into how systems operate.
Beyond that, if the spat amuses you, The Verge has reporters in the courtroom offering coverage and witty commentary.
IN OTHER NEWS LAST WEEK
This week in AI governance
The USA. Washington is quietly reversing course on its standoff with Anthropic. The White House is drafting executive guidance that would allow federal agencies to work with Anthropic again, despite the company previously being labelled a supply-chain risk by the Pentagon. The shift reflects internal fractures: while parts of the defence establishment remain wary, others see excluding frontier models like Mythos as strategically costly.
The UK. The government is planning to back British strengths in the parts of the AI stack where the UK can build real leverage, Liz Kendall, the UK’s Secretary of State for Science, Innovation and Technology, stated. Kendall rejected technological isolationism, instead championing AI sovereignty for Britain: reducing over-dependencies, backing domestic firms with a £500 million Sovereign AI fund, and launching a new AI Hardware Plan in June 2026 to capture chip market share. Kendall also advocated collaboration with other middle powers, including on setting the standards for how AI is deployed.
The EU. EU member states and European Parliament lawmakers have failed to reach an agreement on revisions to the EU Artificial Intelligence Act, after 12 hours of negotiations over proposed changes under the Commission’s Digital Omnibus package. Disagreements centred on whether sectors already covered by existing product and safety regulations should be exempt from certain parts of the AI framework. Lawmakers warned that the latest deadlock risks creating legal uncertainty for companies already preparing for compliance, while privacy and civil society groups cautioned that proposed relaxations could weaken core safeguards. Talks will, however, resume next month.
South Africa. South Africa has withdrawn its draft national AI policy after it was discovered that the document contained fake, AI-generated citations, undermining the credibility of the proposed framework. The government said the lapse occurred due to a failure to verify references and stressed that stronger human oversight is required in policy processes involving AI tools. The withdrawal delays plans to establish new AI governance institutions and incentives, and the policy will now be redrafted.
China. The Cyberspace Administration of China has warned several ByteDance-owned platforms, including CapCut, Catbox and the Dreamina AI system, over failures to properly label AI-generated and synthetic content. The regulator said inspections found violations of cybersecurity and generative AI regulations, prompting enforcement measures such as mandatory rectification, warnings and disciplinary action against responsible personnel.
The EU-USA critical minerals alliance for the technological future
The EU and the USA have launched a coordinated framework to strengthen resilience in critical minerals supply chains, combining a strategic Memorandum of Understanding (MoU) with an Action Plan.
The MoU establishes a broad strategic partnership covering the entire critical minerals value chain—from exploration and extraction to processing, recycling, and recovery. It frames critical minerals as strategic assets underpinning defence readiness, technological development, and economic resilience. The partnership aims to secure diversified and sustainable supply chains through joint project development in the EU, US, and third countries, supported by coordinated investment tools, risk reduction mechanisms, and improved business linkages.
Beyond supply security, the MoU introduces cooperation on market governance and resilience tools. This includes addressing non-market practices and export restrictions, promoting standards-based and transparent markets, improving permitting processes, coordinating on stockpiling and crisis response, and strengthening oversight of strategic asset sales. It also expands cooperation on innovation, recycling, geological mapping, and investment coordination. The agreement is explicitly non-binding, relying on domestic implementation and voluntary coordination.
The Action Plan operationalises these commitments by outlining steps toward a potential plurilateral trade initiative with like-minded partners. It explores coordinated trade instruments such as border-adjusted price floors, standards-based markets, price gap subsidies, and offtake agreements, initially focused on selected minerals. It also proposes harmonised standards, investment screening coordination, joint R&D, stockpiling cooperation, and rapid response mechanisms to supply disruptions. Implementation is led by USTR and DG TRADE, with links to broader multilateral efforts such as the G7.
Why does it matter? This initiative reflects ever-intensifying geopolitical competition over control of critical minerals, which are essential inputs for semiconductors, batteries, defence systems, and clean energy technologies. Supply chains are currently highly concentrated, particularly in processing and refining stages, creating strategic vulnerabilities for both the EU and the USA. The countries say it themselves: By aligning trade tools, standards, and investment screening, the EU and USA aim to safeguard their technological future (including energy, automotive, and electronics sectors), defence readiness, and economic resilience against external disruptions.
Europe’s growing age verification push for platform use
The European Commission has urged member states to rapidly roll out an EU age-verification app that allows users to prove they meet minimum age requirements without revealing personal data such as identity or exact date of birth. The system is designed to integrate with national digital identity wallets and can either operate as a standalone application or be embedded into existing e-ID infrastructure.
This initiative is part of a broader EU enforcement effort under the Digital Services Act (DSA), which requires platforms to take stronger measures to protect children online. The Commission has also recently taken preliminary action against Meta, finding that Facebook and Instagram have not effectively prevented users under 13 from accessing their services, largely because age checks can be bypassed with false birthdates and weak verification systems.
At the same time, several European countries are moving toward stricter national rules that go beyond platform compliance. Norway has announced plans to introduce a ban on social media use for children under 16, placing responsibility for age verification on technology companies. Greece is considering measures that would restrict anonymity online and strengthen digital identity requirements. Under the Greek plan, social media platforms would, from 2027, be required to block access for users under 15 using age verification systems rather than self-declared age data.
Australia reshapes news bargaining rules
Australia’s government has proposed a new Media Bargaining Incentive designed to force large digital platforms to financially support local journalism—or pay a levy.
Under the plan, tech companies with significant Australian revenue (over $250 million annually) would face a charge of up to 2.25% of their Australian revenue if they do not reach commercial agreements with at least four news organisations. The revenue collected would be redistributed to media outlets, with allocations linked partly to newsroom staffing levels.
These agreements would be ‘super-deductible’, meaning firms could offset up to 150% of their value (or 170% for smaller publishers) against the levy. In practice, this makes negotiating with media outlets cheaper than paying the tax itself.
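The offset arithmetic above can be sketched in a few lines. The revenue and deal figures below are hypothetical, chosen only to illustrate why negotiating works out cheaper than paying the charge in full; the 2.25% rate and 150% offset are from the proposal as reported.

```python
LEVY_RATE = 0.0225    # up to 2.25% of Australian revenue
OFFSET_RATE = 1.50    # 150% 'super-deduction' for qualifying news deals

def levy_payable(revenue: float, deal_spend: float,
                 offset_rate: float = OFFSET_RATE) -> float:
    """Levy owed after offsetting qualifying news deals against the charge."""
    gross_levy = revenue * LEVY_RATE
    offset = deal_spend * offset_rate
    return max(gross_levy - offset, 0.0)

# Hypothetical platform with $400m Australian revenue:
# the maximum charge would be $9m (400m x 2.25%).
# Spending $6m on news deals offsets $9m (150% of the deals' value),
# clearing the levy entirely, i.e. deals cost a third less than the tax.
print(levy_payable(400_000_000, 6_000_000))  # 0.0
```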
The government proposes the measure as a correction to an imbalance in the digital economy. Communications Minister Anika Wells argued that large platforms benefit directly from journalism flowing through their feeds and should therefore contribute to its production, especially as news consumption shifts overwhelmingly to social media.
The reaction from Big Tech has been sharp. Meta dismissed the measure as a ‘government-mandated transfer of wealth’, arguing that news organisations voluntarily publish content on its platforms because they derive value from it. It also warned that the scheme resembles a digital services tax. Google also rejected the policy, pointing to its existing commercial deals with more than 90 Australian news businesses and arguing that the proposal misunderstands how the advertising market and news consumption have evolved. Both companies also criticised the policy’s selective scope, which excludes major platforms such as Microsoft, Snapchat, and OpenAI.
Australian media organisations, by contrast, strongly support the move. In a joint statement, outlets including the ABC, News Corp Australasia, Nine, SBS, and others described the proposal as a critical step to ensuring the sustainability of journalism.
What’s next? The draft legislation now enters consultation, open until 18 May 2026, with lobbying from both tech firms and media organisations expected to intensify as the details are finalised.
LOOKING AHEAD
It will be a busy week in Geneva as Geneva Cyber Week 2026 unfolds, organised by the UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA) under the overarching theme ‘Advancing Global Cooperation in Cyberspace’. Discussions will cover topics such as cyber norms and international cooperation, AI governance and regulation, critical infrastructure protection, cyber capacity building, incident response, and the security implications of emerging technologies, including artificial intelligence and quantum computing. Today (30 April) is the last day to register for the event.
As part of Geneva Cyber Week, UNIDIR will organise the Cyber Stability Conference 2026, on 4–5 May in Geneva and online, bringing together governments, international organisations, industry, academia, and civil society to discuss ICT security and cyber governance. Under the theme ‘Cyber governance in an era of technological revolution: Past lessons, present realities and future frontiers’, discussions will explore how international cyber stability frameworks are adapting to rapid technological change, including AI and quantum computing, while reflecting on lessons from past cyber diplomacy processes and current security challenges.
Meanwhile, RightsCon 2026, which was scheduled to kick off in Lusaka, Zambia, on 5 May, will not proceed either in Lusaka or online. The conference has been deferred to a later date, the Zambian government has stated.
READING CORNER
AI systems are increasingly capable of producing legal language and rules that look authoritative, including cases where outputs have echoed or fabricated legal references, as highlighted in South Africa. The real question, writes Jovan Kurbalija, is how societies can distinguish between useful AI assistance and ‘fake laws’ and why human institutions must remain the final gatekeepers of legitimacy and enforcement.
In this blog, Slobodan Kovrlija examines how open-weight AI is empowering emerging economies to build sovereign agricultural and health tools, from Kenya’s crop diagnostics to Zambia’s maternal care.


