
13 March – 20 March 2026
HIGHLIGHT OF THE WEEK
Pay to think: Intelligence on a meter
This week’s highlight is, in fact, more of a lowlight. Last Friday, Sam Altman, CEO of OpenAI, remarked:
‘We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter and use it for whatever they want to use it for.’
Our knowledge is already becoming commodified by tech companies and the advertising industry. But what OpenAI’s CEO suggests is a world in which intelligence itself is outsourced to a handful of platforms.

The AI monopolisation of intelligence challenges one of the pillars of civilisation built over millennia: that knowledge defines what it means to be human.
Altman is therefore not just describing a business model; he is also outlining a new social order, one in which intelligence is centralised, privatised, and sold back to humanity by major AI companies.
Not an inevitable future. The battle for human intelligence and knowledge – for who owns the capacity to think, to know, to decide – is not yet over.
The real alternative to monopolising and metering our knowledge back to us isn’t no AI; the real alternative is AI as an extension of our personal knowledge, shared with communities, countries, and humanity, as per our preferences.
Communities, universities, companies, and countries can build bottom-up AI rooted in their own languages, values, and knowledge systems. Open-source models have made human-centred AI technically possible and financially affordable. This would lead to a distributed ecosystem in which AI strengthens human communities rather than subordinates them.
IN OTHER NEWS LAST WEEK
This week in AI governance
The USA. The Trump administration is defending the Pentagon’s decision to cut ties with Anthropic. The Department of Justice (DoJ) urged a federal judge to reject Anthropic’s request to block its designation as a ‘supply chain risk,’ pointing to the company’s insistence on restricting the use of its AI for autonomous weapons and domestic surveillance. The DoJ is arguing that the move is lawful, reasonable, and grounded in national security, not a violation of free speech.
Egypt. On 14 March 2026, Egypt published the National Guidelines for Trustworthy and Responsible AI. The Guidelines provide a national reference for the responsible development, deployment, and oversight of AI across public and private sectors, ensuring AI use is safe, ethical, and transparent while supporting innovation aligned with Egypt’s Vision 2030 and the National AI Strategy. Complementing the National AI Governance Framework, which defines what should be governed, these Guidelines specify how to comply, offering methodologies, metrics, and checklists to operationalise ethical principles. Targeted at data scientists, compliance officers, and developers, they provide actionable directions to protect individual rights, promote societal well-being, enhance accountability and transparency, and foster innovation grounded in safety. The Guidelines also align Egypt with international standards and engage government entities, private enterprises, and community actors in responsible AI governance.
The EU. Tensions are emerging in the EU over AI infrastructure investment, with France, Poland, Austria, and Lithuania pushing to reserve part of the €20 billion AI Gigafactory project for European technologies, while Germany is sceptical about linking the project to digital sovereignty goals. Meanwhile, Germany is pursuing a major expansion of domestic data centres and AI processing power, supported by regulatory reforms, tax incentives, and land allocation to attract investment, aiming to reduce reliance on foreign providers. In parallel, the EU is tightening AI regulations: the Council has endorsed proposals to ban the use of AI to generate child sexual abuse material (CSAM), adjust high-risk AI compliance timelines, and streamline the AI Act, including exemptions for some SMEs, registration requirements, and clarified oversight responsibilities. These moves reflect Europe’s broader effort to secure sovereign AI infrastructure and ensure safe, accountable AI deployment.
Fighting fraud: The Global Summit in Vienna
INTERPOL has launched a new global task force at the Global Fraud Summit 2026 as part of a more coordinated, data-driven response to the rapid global expansion of financial fraud.
The task force is jointly developed by the UK’s Home Office and INTERPOL and is codenamed Operation Shadow Storm. The task force will target scam centres and their links to cybercrime and human trafficking, using tools such as stop-payment mechanisms and international intelligence-sharing networks. The initial focus of the task force will be dismantling criminal operations across Southeast Asia.
Simultaneously, major technology and consumer-facing companies, including Google, Amazon, Meta, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ at the Global Fraud Summit 2026.
The companies pledged to focus on deploying proactive security measures and AI-driven detection systems; strengthening information sharing between industry and law enforcement to better identify and respond to fraud; enhancing resilience through advanced defensive technologies and rapid response mechanisms; and improving public education to help individuals recognise and avoid scams.
Zooming out. Online scams are growing in scale and sophistication, aided by AI-generated content and cross-platform operations. Data shows that consumers lost over $16 billion to online scams in 2024.
EU rules on CSAM detection lapse, leaving a regulatory gap
The EU has been unable to reach an agreement on extending temporary rules that allow online platforms to detect child sexual abuse material, leaving the current framework set to expire in April. The existing rules, in place since 2021, permit technology companies to voluntarily scan their services for harmful content, supporting efforts to identify and remove illegal material.
But negotiations between the European Parliament and member states stalled over key issues — especially whether such measures should apply to encrypted services.
What’s next? Attention now shifts to the long-delayed permanent framework (the Child Sexual Abuse Regulation).
Cybersecurity: On the defensive
As cyber capabilities are now deeply integrated into broader conflict dynamics, countries are increasingly deploying the full range of tools at their disposal.
The EU has imposed sanctions over cyber attacks targeting its member states and partners, listing China-based Integrity Technology Group and Anxun Information Technology, as well as Iran-based Emennet Pasargad, along with Anxun’s co-founders. Integrity Technology is assessed to have facilitated the compromise of over 65,000 devices across six member states. Anxun is assessed to have provided offensive cyber capabilities targeting critical infrastructure, and two of the company’s co-founders have been individually designated for their roles in these operations. Emennet is assessed to have compromised digital advertising infrastructure to disseminate disinformation during the 2024 Paris Olympics. The sanctions entail an asset freeze and a travel ban for the listed individuals. EU citizens and entities are additionally prohibited from making funds available to the designated companies.
Long constrained by a defensive security doctrine, Japan will introduce ‘hack-back’ powers from October. The change comes as part of Japan’s ‘Active Cyber Defense’ law, which was passed in 2025 and is rolling out in incremental stages through 2027. The framework enables authorities to pre-emptively identify and neutralise hostile infrastructure, while also mandating incident reporting by critical infrastructure operators and strengthening coordination between the National Police Agency, intelligence services, and the Self-Defense Forces (SDF).
After last week’s cyberattack on US medical giant Stryker, the FBI seized four websites tied to Handala, the pro-Iranian hacking group that claimed responsibility for the attack, and to Iran’s Ministry of Intelligence and Security (MOIS). The sites were used to claim responsibility for cyberattacks, leak stolen data, and incite violence against journalists, dissidents, and Israeli individuals, investigators claim. Investigators found the domains were interconnected through shared infrastructure and a coordinated operational playbook involving disruptive cyberattacks and ‘faketivist’ propaganda. Handala posted a message stating: ‘This act of digital aggression only serves to highlight the fear and anxiety our actions have instilled in the hearts of those who oppress and deceive. They may have taken down our website, but they will never take down our spirit, our resolve, or the power of truth.’
CISA published an alert urging organisations to harden endpoint management system configurations to defend against similar malicious activity.
A reminder that last week, Iran’s semi-official Tasnim News Agency, which is linked to the country’s Islamic Revolutionary Guard Corps, listed a number of major US tech companies as potential targets: Google, Microsoft, Palantir, IBM, Nvidia and Oracle.
China’s five-year plan to lead in tech
A new five-year development plan approved by lawmakers in Beijing places innovation and advanced technology at the centre of future economic growth, with the explicit aim of strengthening technological capabilities and positioning China as a leading global tech power.
The strategy sets out ambitions to upgrade the industrial sector, expand domestic research capacity, and reduce reliance on foreign technologies. Priority areas include AI, robotics, aerospace, biotechnology, and quantum computing.
China plans to expand AI-related industries, invest in large-scale computing infrastructure, and support the development of advanced systems capable of performing complex tasks beyond traditional chatbot applications.
At the same time, China is set to scale up spending on science and technology, with government research budgets projected to grow by around 10% annually.
The plan also targets an increase in overall R&D investment of at least 7% per year.
The big picture. This strategy is both a response to external pressures and a long-term shift toward higher-value, tech-driven economic development. China aims to end its reliance on foreign innovation and directly challenge Western dominance in critical future industries, such as AI and quantum computing, a goal clearly shaped by continued tensions with the USA over trade and technology restrictions.
Brazil’s ECA Digital enters into force
Brazil has started enforcing a new law aimed at strengthening protections for children online, marking a significant shift in how digital platforms are regulated in the country. The legislation, known as ECA Digital, introduces stricter rules for technology companies and will test whether stronger oversight can translate into real-world impact.
The law, which takes effect this week, allows authorities to impose warnings and fines of up to $10 million for violations. In severe cases, courts may order the suspension or banning of platforms operating in Brazil. The measure was passed rapidly following public outrage over online content involving the sexualisation of minors.
ECA Digital builds on Brazil’s existing child protection framework and adapts it to the digital environment. It introduces obligations such as age verification, stricter content moderation, and mechanisms to remove harmful material involving minors without requiring a court order.
The law also targets platform design, requiring companies to limit features that may encourage compulsive use among children. This includes restrictions on excessive notifications, profiling for targeted advertising, and design elements that prolong user engagement.
What’s next? Enforcement of ECA Digital will be led by Brazil’s data protection authority, ANPD, alongside a new screening centre within the Federal Police. However, implementation challenges remain, including limited regulatory capacity and the short timeline between the law’s approval and enforcement.
LOOKING AHEAD

The Geneva Graduate Institute is organising a briefing lunch on 23 March to examine evolving transatlantic dynamics at the intersection of US politics and the global influence of major technology platforms. The discussion will explore how recent political developments in the USA and the concentration of technological power shape Europe’s position, including questions of dependency, regulation, and strategic autonomy.
The International Labour Organization (ILO) is hosting a session on the macroeconomic impacts of AI on 25 March, showcasing a new World Bank Group model that treats AI as a structural transformation of production. The tool simulates how AI adoption affects sectors, occupations, and prices, helping policymakers assess implications for growth, equity, and structural change. A first case study in Poland will explore its application, with potential use in other emerging and middle-income economies.
The 14th Ministerial Conference of the World Trade Organization (MC14) is scheduled to take place from 26 to 29 March 2026 at the Palais des Congrès in Yaoundé, Cameroon. The Ministerial Conference, convening biennially, holds the highest authority within the WTO. It brings together all WTO members, comprising countries or customs unions, enabling decisions on various issues covered by the multilateral trade agreements. Members are expected to seek ministerial endorsement of a structured work plan, and preparatory breakout sessions on reform are included in the conference roadmap. These efforts reflect the central role that reform is anticipated to play in Yaoundé and the intention to initiate high-level political exchanges among ministers on this topic during the conference.
READING CORNER
Trump’s Cyber Strategy for America: Implications for international cyber norms and diplomacy – Diplo
The Trump Administration’s 2026 Cyber Strategy signals a shift from rule-based cyber governance to a power-driven approach centred on offensive capabilities, private-sector mobilisation, and transactional diplomacy. Diplo experts examine what this posture means for international cyber norms, multilateral processes, and the future of cyber diplomacy.
How are the EU AI Act and California’s new laws actually being enforced in 2026? We look at the “AI traffic cops” and what happens when rules are broken.
Policymakers worldwide are caught between awe and apprehension over AI. They recognise its potential to accelerate productivity and scientific progress while worrying about threats to jobs, human rights, and social cohesion. Yet they’re missing a critical risk: AI is becoming a source of opacity within government. Without adequate oversight, AI systems can facilitate corruption—eroding public trust in both the technology and the institutions deploying it.



