April 2026 in retrospect
Dear readers,
Visionary or outlandish statements about the future are a feature of tech industry discourse. But the rapid acceleration of generative AI seems to have shortened the timeline for many of these claims. April brought another wave of high-profile predictions. While some might be tempted to dismiss them as mere hype, there’s a strong reason to assess them: the quiet danger of designing the future without meaningful public input.
South Africa unveiled its first draft national AI policy and was quickly forced to withdraw it after reviewers found a critical flaw: it was riddled with fake sources and non-existent citations, likely generated by AI. The incident illustrates a broader problem with AI-generated laws. In this issue of the newsletter, we examine how to prevent fake laws from governing real life.
In early April, Anthropic announced Claude Mythos Preview, its most capable AI model to date, alongside the explicit decision not to make it publicly available. We look at the model’s capabilities, the reasons behind restricting access, and the governance questions the model has raised.
We invite interested readers to join our team of Knowledge Fellows. Knowledge Fellows are central to the observatory’s ability to provide comprehensive, accurate, and up-to-date coverage of specific areas of digital governance. More details on what we are looking for and what we offer in return are available in the newsletter.
Plus: April’s top digital policy developments and a Geneva wrap-up.
Snapshot: The developments that made waves in April
Technologies
The EU and the USA have launched a coordinated framework to strengthen resilience in critical minerals supply chains, combining a strategic Memorandum of Understanding (MoU) with an Action Plan. The partnership aims to secure diversified and sustainable supply chains through joint project development in the EU, US, and third countries, supported by coordinated investment tools, risk reduction mechanisms, and improved business linkages.
Canada and Finland have set out a new agenda for cooperation on sovereign technology and AI, positioning advanced digital capabilities as central to economic resilience, security, and strategic autonomy in a contested global environment. Announced after talks in Ottawa, the agenda spans AI adoption across government and industry, high-performance computing, telecommunications, AI gigafactories (including support for Nokia’s AI gigafactory), quantum research, critical minerals, and trusted supply chains. Both countries plan to deepen coordination on sovereign AI infrastructure, reduce technological dependencies, support small and medium-sized enterprises, and expand telecom opportunities through initiatives such as the Global Coalition on Telecommunications.
Canada is increasing support for its quantum research ecosystem through new funding announced by the Natural Sciences and Engineering Research Council of Canada, aiming to strengthen the country’s scientific capacity, innovation base, and long-term leadership in a strategically important field. The initiative will back researchers, projects, and cross-institutional collaboration, advancing both fundamental science and applied development while helping translate quantum research into practical technological progress.
The UK government has identified six frontier technologies – AI, cybersecurity, advanced connectivity, engineering biology, quantum technologies, and semiconductors – as the pillars of its 2025 Modern Industrial Strategy and Digital and Technologies Sector Plan, aiming to strengthen digital capability, economic growth, national resilience, and long-term competitiveness. The agenda prioritises investment in next-generation telecoms, including 5G and future 6G, alongside expanded compute capacity, supercomputing infrastructure, and workforce development to reinforce the UK’s position as a leading European AI hub.
Australian researchers have used a Wikipedia-based AI model to identify 100 emerging technologies gaining momentum ahead of 2026, offering a data-driven alternative to traditional forecasting methods often shaped by expert judgement. Drawing on thousands of Wikipedia entries, the analysis mapped more than 23,000 technologies to produce the ‘Momentum 100’ list, led by reinforcement learning and followed by blockchain, 3D printing, soft robotics, augmented reality, and other fast-developing fields.
Infrastructure
European technology providers Cubbit, SUSE, Elemento, and StorPool Storage have launched a joint Disaster Recovery Pack to help organisations maintain data access and operational continuity during disruptions caused by external technology dependencies. Presented at the European Data Summit in Berlin, the solution combines storage, compute, orchestration, networking, identity, observability, and management into a single deployable cloud software stack designed to reduce fragmentation and simplify recovery planning. By enabling critical workloads to be transferred to European-based infrastructure with limited disruption, the initiative seeks to meet practical disaster recovery needs while supporting wider efforts to reduce reliance on non-European cloud providers.
A new report, citing research by the Brussels-based Future of Technology Institute, warns that most EU defence agencies remain heavily dependent on US cloud and technology providers, raising concerns over exposure to a potential ‘kill switch’ scenario in which critical services could be restricted or disabled during political or strategic tensions. Open contracting data reviewed by the institute suggests that 23 of the 28 countries examined (EU member states and the UK) rely on US firms, either directly or through EU suppliers using American cloud infrastructure, with 16 countries classified as high risk, including Germany, Finland, Poland, Denmark, Estonia, and the UK. Google Cloud, Microsoft, and Oracle are described as dominant providers in sensitive defence systems, while Austria is presented as a lower-risk case due to apparent reliance on sovereign alternatives.
Panthalassa has raised $140 million in Series B funding, led by Peter Thiel, to develop offshore systems that harness ocean wave energy to power AI computing as demand for data centre capacity accelerates. The company plans to build wave-powered nodes that generate electricity at sea, run AI computing on board, and transmit data through low-Earth-orbit satellites, offering a potential response to land-based data centres’ growing constraints on power supply, cooling, and infrastructure.
Security
Project Glasswing brings together major technology, cybersecurity, finance, and open-source actors, including AWS, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, JPMorgan Chase, and the Linux Foundation, in a coordinated effort to use advanced AI to defend critical software infrastructure, pairing AI-driven discovery with industry coordination, $100 million in usage credits, and funding for open-source security. Built around Anthropic’s Claude Mythos Preview model, the initiative aims to detect complex vulnerabilities at scale, with early findings uncovering thousands of previously unknown flaws across operating systems, browsers, and core digital infrastructure, some of which had remained hidden for decades.
A joint CISA advisory warns that Iranian-affiliated cyber actors are targeting internet-facing programmable logic controllers across US critical infrastructure, including Rockwell Automation and Allen-Bradley CompactLogix and Micro850 devices used in government, water, energy, and industrial systems. Active since at least March 2026, the campaign has disrupted PLC functions, manipulated project files, and altered HMI and SCADA displays, causing operational and financial damage.
Canada has introduced Level 1 of the Canadian Program for Cyber Security Certification, setting a baseline of cyber requirements for suppliers working on defence contracts as cyber threats increasingly target contractors, sensitive data, and critical supply chains. Phased implementation will begin in summer 2026, with certification required at the contract award stage, giving industry time to adapt while strengthening procurement trust and operational readiness.
Europol’s 2026 Internet Organised Crime Threat Assessment warns that the EU’s cybercrime landscape is becoming more complex, industrialised, and difficult to disrupt as criminal networks exploit encryption, proxies, fragmented online spaces, and AI-enabled tools. The report identifies cybercrime enablers, online fraud, cyber-attacks, and online child sexual exploitation as major areas of concern, with AI making scams, deception, and abuse more scalable and convincing.
Norway has announced plans to introduce a ban on social media use for children under 16, placing responsibility for age verification on technology companies.
Greece is moving to tighten restrictions on minors’ use of social media, with legislation expected later this year that would introduce a ban for children under 15. The measure is set to take effect on 1 January 2027 and is intended to be a framework that changes how platforms operate. Platforms would be required to implement robust age verification mechanisms, including the re-verification of existing accounts, with oversight provided by national regulators such as the Hellenic Telecommunications and Post Commission (EETT).
French President Emmanuel Macron is convening EU leaders, including Spanish Prime Minister Pedro Sanchez and representatives of Italy, the Netherlands and Ireland, to align national approaches to restricting minors’ access to social media and to press for faster EU-level action.
The UK’s Children’s Wellbeing and Schools Bill is set to expand ministers’ powers to shape how online services protect children, including by restricting access to risky platforms, features, or functions and by targeting design elements such as contact settings, live communication, location visibility, and time spent online. The draft would also bring Ofcom into a stronger advisory role, introduce a six-month timeline for regulations or a progress update, and give ministers new authority over children’s data consent, age assurance, and enforcement. The regulatory package remains unsettled for now, with Parliament still negotiating key provisions and no final law yet in place.
The European Commission has developed a standardised age-verification app intended to work across member states. The app allows users to confirm they meet age requirements to access social media platforms by providing their passport or ID number. It is designed to integrate into national digital wallets or operate as a standalone app, with a coordinated EU framework to ensure interoperability and avoid fragmented national systems. The app is open source and available for both public and private implementation, but is subject to common technical and privacy requirements. The Commission plans to establish an EU-level coordination mechanism to oversee rollout, accreditation, and cross-border usability. The rollout has faced scrutiny from security researchers. Reported weaknesses include locally stored authentication data that can be reset or modified, allowing users to bypass PIN protections, disable biometric checks, and reset rate-limiting mechanisms by editing configuration files. This effectively enables the reuse of verified identity data under altered access controls. The criticism has triggered broader concerns among developers about the app’s architecture, including why secure hardware features were not used and whether elements like expiring age credentials are logically necessary.
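To see why the reported weaknesses matter, consider the general anti-pattern the researchers describe: security-relevant state kept in a file the device owner can edit. The Python sketch below is a minimal illustration of that pattern only; the file name and fields are hypothetical and do not reflect the app’s actual storage format.

```python
# Minimal sketch of the reported anti-pattern: protections enforced via
# locally stored, user-editable state. All names here are illustrative.
import json
from pathlib import Path

CONFIG = Path("wallet_config.json")  # hypothetical local config file

def load_state() -> dict:
    """Read the app's locally stored verification state."""
    if not CONFIG.exists():
        return {"pin_attempts_remaining": 3, "biometric_check_enabled": True}
    return json.loads(CONFIG.read_text())

def save_state(state: dict) -> None:
    """Write it back; nothing stops edits made outside the app."""
    CONFIG.write_text(json.dumps(state, indent=2))

# The device owner simply rewrites the protections:
state = load_state()
state["pin_attempts_remaining"] = 9999    # rate limiting reset
state["biometric_check_enabled"] = False  # biometric gate disabled
save_state(state)
```

Binding such state to secure hardware, or keeping it server-side, is the usual remedy, which is why developers questioned the absence of secure hardware features in the app’s design.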
The European Commission has also recently taken preliminary action against Meta, finding that Facebook and Instagram have not effectively prevented users under 13 from accessing their services, largely because age checks can be bypassed with false birthdates and weak verification systems.
Australia’s child-safety push is widening from social media to gaming, as regulators intensify scrutiny of how platforms protect minors from harm. On 21 April, the eSafety Commissioner issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam, demanding details on how they handle risks, including child sexual exploitation, cyberbullying, hate and extremist material on services widely used by children.
The UK Information Commissioner’s Office has launched a campaign to help parents and carers speak with primary school-aged children about online privacy, after research found that many children are sharing personal details online, while families often feel unsure how to respond. The ICO says 24% of children have shared their real name or address online, 22% have disclosed information such as health details to AI tools, and 21% of parents have never discussed online privacy with their children.
Economic
The European Commission has issued a supplementary charge sheet (formally, a supplementary Statement of Objections) to Meta, outlining concerns over potential restrictions on third-party AI assistants’ access to WhatsApp. Meta had previously decided to reinstate third-party AI assistants’ access to WhatsApp in exchange for a fee. However, the Commission has preliminarily found that these measures remain anticompetitive and has now imposed interim measures to prevent the policy changes from causing serious harm to the market. The interim measures will stay in effect until the Commission concludes its investigation and issues a final decision on Meta’s conduct.
UNCTAD reports that global trade grew by $2.5 trillion in 2025 to reach $35 trillion, reflecting continued expansion in goods and services but also a more fragile and uneven economic landscape. Rising geopolitical tensions, disrupted shipping routes, conflicts in the Middle East, and instability in key maritime corridors are driving up energy, transport, and import costs, placing heavier pressure on developing economies with limited fiscal space. Services growth has slowed, while much of the recent trade increase stems from higher prices rather than stronger volumes. East Asia and Africa remain important drivers through South–South trade and shifting supply chains, yet fragmentation, US–China decoupling, inflation, debt, and protectionism are expected to weigh on 2026 prospects.
The International Labour Organisation warns that social protection systems are failing to keep pace with fast-changing labour markets shaped by climate change, technological disruption, demographic shifts, and evolving forms of work. Its new report highlights major gaps in coverage, adequacy, and financing, leaving many workers exposed during unemployment, illness, retirement, or job transitions.
Russia is moving to criminalise large-scale unauthorised cryptocurrency activity, after a government legislative commission approved amendments that create prison sentences for organising the circulation of digital currency without central bank authorisation. The proposed Article 171.7 of the Criminal Code would punish cases involving significant harm, major illicit income, or damage to individuals, organisations, or the state with a sentence of 4 to 7 years in prison. Expected to take effect on 1 July 2027, the measure marks a sharper enforcement turn in Russia’s digital asset sector.
The European Commission has updated its technology transfer competition rules to better reflect data-driven innovation, digital markets, and modern licensing practices across the EU. The revised framework clarifies how companies can license patents, software, know-how, and data-related technologies while staying within competition law, aiming to protect collaboration and legal certainty without allowing agreements that restrict market access or innovation. Greater attention is given to digital ecosystems, standard-essential technologies, and licensing arrangements that may shape control over data, interoperability, and downstream competition.
Canada has announced C$23.8 million for the Digital Skills for Youth programme, aiming to help young people gain practical experience as AI, cybersecurity, big data, automation, and broader digital transformation reshape the labour market. Led by Industry Minister Mélanie Joly, the two-year initiative will fund training and work placements for post-secondary graduates by linking them with employers across emerging technology sectors. Eligible recipients include businesses, non-profits, public institutions, Indigenous organisations, and provincial or territorial bodies, with flexible access for participants in Yukon, the Northwest Territories, and Nunavut.
Human rights
Brazil has inaugurated its first Center for Access, Research and Innovation in Assistive Technology (Capta) at the Benjamin Constant Institute in Rio de Janeiro. Run by the Ministry of Science, Technology and Innovation (MCTI) under the National Plan for the Rights of People with Disabilities, the centre aims to foster the development, experimentation, and dissemination of assistive technologies that enhance autonomy, inclusion, and quality of life for people with disabilities. The launch marks the first of several planned centres nationwide to expand access to these technologies.
UNESCO warns that students with disabilities continue to face deep barriers in education, including inaccessible infrastructure, limited assistive technologies, insufficient teacher training, stigma, and weak data systems that leave many learners invisible in policy planning. Its findings show that exclusion often begins early and is reinforced by poverty, gender inequality, displacement, and other overlapping disadvantages, limiting access to quality learning and future opportunities. UNESCO urges governments to move beyond narrow inclusion measures by investing in accessible schools, inclusive curricula, trained educators, reliable data, and meaningful participation by persons with disabilities.
The Philippines and Bermuda have signed a memorandum of understanding to strengthen cross-border cooperation on personal data protection, linking the Philippines’ National Privacy Commission with Bermuda’s Office of the Privacy Commissioner. The agreement enables information sharing, mutual assistance in investigations, and closer coordination on data breach cases that cross jurisdictions. Beyond enforcement, the partnership supports compatible data protection mechanisms, certification frameworks, trusted data flows, training, and knowledge exchange on emerging privacy challenges.
Legal
A unanimous US Supreme Court ruling has narrowed the circumstances under which an internet service provider (ISP) can be held liable for users’ copyright infringement. Writing for the Court, Justice Clarence Thomas said an ISP is liable only if its service was designed for unlawful activity or if it actively induced infringement.
French authorities have summoned Elon Musk and former X chief Linda Yaccarino to give voluntary interviews in relation to a criminal investigation into whether X enabled the spread of child sexual abuse material, AI-generated deepfakes, Holocaust denial content, and other harmful or unlawful material. Musk, however, appears to have refused, failing to appear. The confrontation widened when reports emerged that the US Justice Department had declined to assist the French inquiry, arguing that the case risked crossing into the regulation of protected speech and that it would unfairly target a US company. French authorities, for their part, have framed the matter as a legitimate enforcement action under national law.
In the federal multidistrict litigation (MDL) pending in the Northern District of California involving Meta, Google (YouTube), ByteDance (TikTok), and Snap Inc., the court denied motions to dismiss claims filed by several school districts. That moves the case out of the pleading stage and into bellwether proceedings, where selected cases will test core liability and damages theories. The plaintiffs’ main argument is product design-based: they claim the platforms were engineered to maximise engagement among minors despite internal awareness of mental health risks, and they link this to reported increases in anxiety, depression, and behavioural disruption in school environments. The causal chain is disputed, but that is the core theory being advanced. The MDL is large, with over 2,300 related actions across six states, making it one of the more significant litigations in this area. The upcoming June bellwether trial is expected to be the first real test of these claims and will likely influence both settlement pressure and the broader direction of the MDL.
Raine v. OpenAI is proceeding as a standalone case in California, not part of any MDL. The complaint alleges that Adam Raine’s use of ChatGPT shifted from academic purposes to emotional reliance, with escalating mental health disclosures allegedly met by responses that reinforced dependence rather than directing him outward. The plaintiffs argue this was a foreseeable result of engagement-oriented design. They bring claims including wrongful death and seek injunctive relief for stronger safeguards. While no trial date has been formally set, the case remains in its early procedural stage in California and may proceed toward trial in late 2026 or 2027, depending on pretrial developments.
Sociocultural
The European Commission has launched a Mediterranean digital transformation programme for North African and Middle Eastern countries, marking the first digital initiative under the Pact for the Mediterranean. The programme aims to support inclusive and sustainable growth by improving access to digital services, aligning telecommunications regulation with EU standards, and strengthening national regulatory authorities. Cybersecurity is a core priority, with support for stronger national frameworks, institutional capacity, and coordinated responses to digital threats.
The European Commission’s first monitoring results under the revised Code of Conduct on Countering Illegal Hate Speech Online+ show that major platforms are making progress in reviewing reported illegal hate speech within 24 hours, while gaps remain in accuracy, consistency, and reporting practices. Based on independent monitoring and company data, the assessment found that many notifications were handled within agreed timelines, but a notable share of cases were disputed or wrongly classified. Linked to the Digital Services Act’s co-regulatory model, the exercise acts as a practical test of platform accountability, transparency, and compliance with EU and national law.
The UK government is planning measures that could make senior technology executives face criminal charges, including prison sentences, if their companies fail to remove non-consensual intimate images when required by regulators. The move builds on existing obligations that already require platforms to take down such material within strict timeframes or face significant penalties, including fines of up to 10% of global turnover or even service blocking. The latest step goes further: instead of relying solely on corporate sanctions, it introduces personal criminal accountability at the executive level. This type of liability is likely to accelerate compliance in ways that financial penalties alone have not, and may serve as an example to other jurisdictions. The policy is part of a broader tightening of the UK’s online safety framework, driven by persistent concerns over revenge porn and the rapid proliferation of AI-generated intimate imagery.
AI governance in April
National plans and initiatives
India. India has set up a Technology and Policy Expert Committee under the Ministry of Electronics and Information Technology to help shape the country’s AI governance framework and advise the new AI Governance and Economic Group. Bringing together government, academia, industry, and policy expertise, the body is meant to translate fast-moving technical and regulatory issues into practical guidance, bringing a more structured and adaptive approach to AI governance aligned with India’s economic and social priorities.
South Africa. South Africa has withdrawn its draft national AI policy after it was discovered that the document contained fake, AI-generated citations, undermining the credibility of the proposed framework. The government said the lapse occurred due to a failure to verify references and stressed that stronger human oversight is required in policy processes involving AI tools. The withdrawal delays plans to establish new AI governance institutions and incentives, and the policy will now be redrafted.
Sovereignty
The UK. The government is planning to back British strengths in the parts of the AI stack where the UK can build real leverage, Liz Kendall, the UK’s Secretary of State for Science, Innovation and Technology, stated. Kendall rejected technological isolationism, instead championing AI sovereignty for Britain: reducing over-dependencies, backing domestic firms with a £500 million Sovereign AI fund, and launching a new AI Hardware Plan in June 2026 to capture chip market share. Kendall also advocated collaboration with other middle powers, including on setting the standards for how AI is deployed.
The government has also launched a £500 million Sovereign AI Fund to accelerate domestic AI startups and strengthen national technological autonomy. The initiative combines direct equity investment with access to national compute infrastructure, fast-tracked visas for global talent, and procurement pathways into public services. It targets early-stage to growth companies in areas such as AI infrastructure, life sciences and advanced computing, with the explicit goal of ensuring that high-potential firms scale and remain anchored in the UK rather than relocating abroad.
Papua New Guinea. The government has issued new guidance on AI and data sovereignty, setting out principles for ensuring that national data assets remain under domestic control. The framework emphasises governance over data storage, processing and cross-border transfers, particularly where public-sector or sensitive datasets are involved.
Russia. Russia is advancing a draft AI regulatory framework that would formalise oversight of AI development and deployment, aligning with broader efforts to strengthen digital sovereignty and state control over emerging technologies. The proposals focus on risk management, national standards and reducing dependence on foreign AI systems, while supporting domestic innovation. The move fits into Moscow’s broader strategy to tighten control over digital infrastructure and cross-border data flows.
Partnerships
South Korea–France. South Korea and France are deepening cooperation through a new strategic AI and technology partnership, aimed at strengthening joint research, industrial collaboration and standard-setting across emerging technologies. The initiative reflects a broader effort to align capabilities in semiconductors, data infrastructure and advanced computing, while positioning both countries more competitively in the global AI landscape.
EU–Morocco. The European Commission and Morocco have launched a digital dialogue to deepen strategic cooperation on emerging technologies, digital transformation, and innovation-led development. Focused on AI, digital infrastructure, start-up support, research collaboration, and stronger AI ecosystems, the initiative aims to turn digital technologies into drivers of economic and social progress. Greater interoperability of digital public services and expanded knowledge exchange are also central to the partnership, reflecting a shared interest in more connected, efficient, and inclusive digital governance.
Legal
The USA. A federal appeals court in Washington, D.C. has declined to block the Pentagon’s national-security blacklisting of Anthropic, allowing the designation to remain in force while litigation continues. The ruling contrasts with a separate decision by a California judge who had earlier blocked part of the government’s action, highlighting a growing judicial split over the unprecedented move.
Paraguay. Paraguay has adopted new rules for the use of AI in its courts, with UNESCO support, marking a notable step in judicial AI governance. The framework, approved by the Supreme Court of Justice, limits AI to a supporting role in data processing, information management, and assisted decision-making, while requiring human oversight, transparency, accountability, and disclosure when AI tools influence judicial processes. The rules align Paraguay’s approach with UNESCO’s guidance on AI in courts and underscore a wider trend toward rights-based, trust-focused AI deployment in public institutions.
Belgium. Belgium’s data protection authority has released a new information brochure titled ‘The Impact of Artificial Intelligence (AI) on Privacy’, providing guidance on risks such as bias, privacy violations and misuse of generative AI systems. The document is intended to raise awareness among organisations and the public, and to support compliance with EU data protection and AI governance frameworks.
Safety and security
The EU. EU member states and European Parliament lawmakers have failed to reach an agreement on revisions to the EU Artificial Intelligence Act, after 12 hours of negotiations over proposed changes under the Commission’s Digital Omnibus package. Disagreements centred on whether sectors already covered by existing product and safety regulations should be exempt from certain parts of the AI framework. Lawmakers warned that the latest deadlock risks creating legal uncertainty for companies already preparing for compliance, while privacy and civil society groups cautioned that proposed relaxations could weaken core safeguards. Talks will, however, resume in May.
Kazakhstan. Kazakhstan has introduced mandatory audits for high-risk AI systems, requiring developers to obtain a positive audit assessment before their systems can be listed as ‘trusted’ by sectoral authorities. The government will publish and regularly update official lists of approved systems, based on applications that include documentation on ownership, functionality and use conditions, reviewed within strict timelines. The move aims to build trust and standardise best practices in AI deployment, signalling a more structured and compliance-driven approach to high-risk AI governance.
New Zealand, the UK, Singapore. New Zealand’s National Cyber Security Centre, the UK National Cyber Security Centre, and Singapore’s Cyber Security Agency have issued coordinated warnings that frontier AI is reshaping the cyber threat landscape by lowering barriers to sophisticated attacks, accelerating vulnerability discovery, and compressing the window between disclosure and exploitation. All three stress the dual-use nature of AI, urging organisations to reassess outdated risk models and prioritise rapid patching, continuous monitoring, stronger identity and access controls, and reduced attack surfaces to counter increasingly automated and faster-moving cyber threats across both public and private sectors.
The USA. US cybersecurity officials are considering reducing the patching deadline for actively exploited flaws to just three days, citing the accelerating speed at which AI systems can identify and weaponise vulnerabilities. The proposed shift would initially apply to federal civilian agencies but could redefine baseline incident response expectations across government and critical infrastructure. Agencies argue that traditional patch cycles are no longer compatible with current exploit timelines, while industry warns that such compressed deadlines may exceed the capacity of complex and legacy IT environments.
Meanwhile, Washington is quietly reversing course on its standoff with Anthropic. The White House is drafting executive guidance that would allow federal agencies to work with Anthropic again, despite the company previously being labelled a supply-chain risk by the Pentagon. The shift reflects internal fractures: while parts of the defence establishment remain wary, others see excluding frontier models like Mythos as strategically costly.
Mythos. Anthropic has launched an investigation after a small group of users gained unauthorised access to its powerful Mythos AI model via a third-party contractor environment. The access reportedly occurred just as the company began rolling out a limited preview of the model to selected organisations under Project Glasswing. The unauthorised users are believed to have operated through a private Discord group, using a mix of tactics, including contractor access and open-source intelligence tools, to gain access to the system. Mythos was intentionally restricted due to its ability to accelerate cyberattacks and was provided to a limited number of partners, yet it appears to have leaked almost immediately through the partner ecosystem rather than through a direct breach. The window during which Mythos’ capabilities remain contained may prove far shorter than anticipated.
Content governance
China. The Cyberspace Administration of China has warned several ByteDance-owned platforms, including CapCut, Catbox and the Dreamina AI system, over failures to properly label AI-generated and synthetic content. The regulator said inspections found violations of cybersecurity and generative AI regulations, prompting enforcement measures such as mandatory rectification, warnings and disciplinary action against responsible personnel.
Development
Ghana. The Ghanaian Ministry of Communication, Digital Technology and Innovations has launched a public-sector AI capacity development programme in collaboration with the Government of Japan and the UN Development Programme. The programme is designed to equip public officials with knowledge of AI and its applications in governance. It focuses on improving decision-making and service delivery, drawing on experience from the UN and Japan.
UNESCO-Latin America & Caribbean. UNESCO has launched a regional AI in Education Observatory for Latin America and the Caribbean, designed to support evidence-based policymaking and track the impact of AI on education systems. The initiative aims to build capacity, share best practices and guide responsible integration of AI tools in schools and learning environments.
UNESCO–Oxford. UNESCO and the University of Oxford have launched a global AI course for courts. The programme trains judges and legal professionals to assess algorithmic tools, identify bias, and ensure compliance with human rights standards in increasingly digitalised judicial processes. It introduces practical frameworks for evaluating AI outputs in legal contexts, with a strong focus on maintaining judicial independence, transparency and accountability as AI becomes embedded in evidence handling and decision-support systems.
Commonwealth. The Commonwealth Secretariat has launched a capacity-building programme on the use of AI in election management, training electoral officials from member states on how AI tools can support voter education, administrative efficiency and data analysis while safeguarding electoral integrity. The initiative focuses on practical applications of AI in electoral processes, including risks such as misinformation, bias and automation of sensitive decision-support functions. It emphasises that AI should remain assistive rather than substitutive in democratic processes, with human oversight positioned as central to maintaining trust, legitimacy and accountability in elections.
Australia. Under its national AI workforce strategy, Australia is expanding targeted upskilling programmes for learners and workers to address structural skill gaps created by AI-driven labour market shifts. The approach prioritises integration of AI literacy into education and vocational pathways, alongside employer-linked training to support adaptation in high-exposure sectors. It frames AI as a general-purpose technology requiring continuous reskilling rather than one-off training, with policy attention on inclusion, transition support and alignment between education systems and emerging digital economy demands.
Pakistan. Pakistan has approved the establishment of an AI Education Authority alongside plans for virtual schools. The reforms aim to scale AI-driven learning systems, support personalised education delivery and standardise digital curricula across regions. The initiative is framed within broader efforts to modernise the education sector, strengthen digital access, and build national capacity for AI adoption in public education, while addressing disparities in learning outcomes through technology-enabled delivery models.
When AI writes the rules: How to avoid fake laws governing real life
Last month, South Africa unveiled its first draft national AI policy, aiming to position the country as a continental leader in innovation. The plan included ambitious new institutions: a National AI Commission, an Ethics Board, and tax breaks for private sector collaboration.
But just days later, the celebration turned sour.
According to Reuters, South Africa’s government was forced to withdraw the draft after reviewers discovered a fatal flaw: the policy was riddled with fake sources and citations that didn’t exist. The research supporting the country’s AI strategy had likely been generated by an AI.
This isn’t a minor typo: the AI hallucinated policy content and the sources supporting it. Nor is it surprising, as LLMs are advanced guessing machines, not providers of verified facts. Even when fabricated, their output can look perfectly correct and legitimate.
South Africa’s Minister of Communications and Digital Technologies, Solly Malatsi, acknowledged the failure with refreshing honesty:
‘The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened.’
He noted that this lapse ‘has compromised the integrity and credibility of the draft policy.’
Why does it matter? We are not highlighting South Africa to single it out or cause embarrassment. We are shining a spotlight on the problem with AI-generated laws. South Africa’s incident is not an exception. As policymakers rush to keep up with technology, we are seeing more examples of AI-drafted regulations being submitted for review. For instance, in the USA, a federal judge in California sanctioned two law firms for submitting a legal brief containing a fake citation generated by AI.
The problem isn’t that AI is used. The danger lies in how it is being used.
Legal documents and policies require precision, grounding, and contextualisation. Generic AI models often fail at all three:
- Lack of precision: AI frequently provides vague, generic answers to specific legal questions. Laws need pointed, solid definitions; AI prefers probabilistic guesswork.
- No grounding: Most AI models cannot provide a verifiable link to the exact sentence of a law or regulation. Often, they mix up laws across countries and jurisdictions.
- Zero context: AI frequently lacks the specific political, social, or historical context of a policy or regulation. Temporal context is also missing: how legal issues evolved over the course of drafting and negotiations.
How to fix the problem (without banning AI). The solution lies in a two-pronged approach: developing institutional AI and increasing AI literacy.
If South Africa had had institutional AI anchored in local knowledge and context, such hallucinations could have been avoided. Moreover, AI would then be a genuinely useful tool, reflecting the topical and temporal context of policy development and law drafting.
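Even a rudimentary verification step would have caught the fabricated references. The Python sketch below is a minimal illustration of the idea, not a description of any real system: every citation in an AI-drafted text is checked against a trusted index of known sources, and anything unmatched is flagged for human review. The index entries are real South African statutes used here only as examples; the failing citation is a hypothetical hallucination.

```python
# Minimal sketch of citation verification for AI-drafted policy text.
# TRUSTED_INDEX stands in for an institutional knowledge base of real,
# verifiable sources; its entries here are only illustrative.
TRUSTED_INDEX = {
    "Electronic Communications and Transactions Act, 2002",
    "Protection of Personal Information Act, 2013",
}

def unverified_citations(citations: list[str]) -> list[str]:
    """Return every citation that cannot be matched to a known source."""
    return [c for c in citations if c not in TRUSTED_INDEX]

draft_citations = [
    "Protection of Personal Information Act, 2013",
    "National AI Ethics Act, 2021",  # hypothetical hallucinated source
]

for citation in unverified_citations(draft_citations):
    print(f"Flag for human review: {citation}")
```

Grounded, institutional AI would go further, linking each claim to the exact provision it relies on, but even this level of checking restores the human verification whose absence sank the draft.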
But more importantly, we need to build AI competencies among policymakers. This requires a shift in pedagogy. We cannot teach policymakers to simply use AI; they must understand how it works.
As Minister Malatsi stated:
‘This unacceptable lapse proves why vigilant human oversight over the use of artificial intelligence is critical. It’s a lesson we take with humility.’
If we fail to build precise, grounded AI tools and train policymakers to use them properly, we won’t just have fake citations in a draft. We will have fake laws governing real people.
Read the original ‘When AI writes the rules: How to avoid fake laws governing real life’ blog post by Dr Jovan Kurbalija.
Designing our future
Visionary or outlandish statements about the future are a feature of tech industry discourse. But the rapid acceleration of generative AI seems to have shortened the timeline for many of these claims.
April brought another wave of high-profile predictions. While some might be tempted to dismiss them as mere hype, there’s a strong reason not to. These ideas come from people who are not only building the platforms and technologies we rely on so much, but are also spending capital to turn their visions of what the future should look like into reality. When ‘tech leaders’ float their ideas, they begin to steer real-world resources and regulatory conversations. And there we come to the quiet danger: designing the future without meaningful public input.
The co-founder of Palantir, Alex Karp, and Palantir’s Head of Corporate Affairs, Nicholas W. Zamiska, published a set of 22 propositions drawn from their upcoming book, Technological Republic. It did not arrive quietly. Critics called it ‘technofascism’ and ‘what evil would tweet.’
Their vision is organised around duty, hard power, and scepticism toward modern democratic culture. They argue that Silicon Valley owes a moral debt to the country that made its rise possible, and that the engineering elite has an affirmative obligation to participate in national defence. They question the all‑volunteer force, suggesting that national service should be a universal duty so that the next war involves shared risk. Soft power and soaring rhetoric, they write, have been exposed as insufficient. Free societies need hard power, and in this century, hard power will be built on software.
When it comes to AI weapons, Karp and Zamiska are blunt: they will be built regardless of Western debates. The only question is by whom and for what purpose. The authors also defend Elon Musk against what they see as cultural snickering, arguing that we should applaud those who attempt to build where the market has failed to act. At the same time, they reject what they call vacant pluralism, insisting that not all cultures are equally productive and that elite intolerance of religious belief is a sign of intellectual closure.
What Karp and Zamiska do not offer is much economic policy. Their technological republic is organised around security and technological power, not redistribution. The state exists to be defended. The individual exists to serve.

Around the same time, OpenAI released its own policy document, Industrial Policy for the Intelligence Age. It is longer, somewhat softer, full of phrases like ‘public wealth fund’ and ‘right to AI.’ It asks for a democratic conversation about AI industrial policy, regulation, ethics and economy.
OpenAI’s document starts from a different problem. Superintelligence—AI systems capable of outperforming the smartest humans, even when those humans are assisted by AI—is coming. Market forces alone cannot manage the transition, OpenAI argues. Drawing parallels to the Progressive Era and the New Deal, the company proposes ambitious public‑private collaboration.
On the economic side, this includes giving workers a formal voice in how AI is deployed in workplaces, microgrants to help workers become AI‑first entrepreneurs, a right to AI as foundational access comparable to literacy or electricity, shifting taxation from payroll to capital gains and automated labour, creating a Public Wealth Fund to give citizens a direct stake in AI‑driven growth, and converting efficiency gains into shorter workweeks or better benefits.
On the resilience side, OpenAI proposes safety systems for cyber and biological risks, an AI trust stack for verification, auditing regimes for frontier models, model‑containment playbooks for dangerous AI, and guardrails for government use. The company acknowledges it does not have all the answers and invites feedback.
Similarities and differences. Where Karp and Zamiska talk about duty and war, OpenAI talks about transitions and safety nets. Yet both reject the current political order as inadequate. Both see technology as the primary vector of power. And both propose new forms of obligation—national service in one case, a right to AI and portable benefits in the other.
Taken together, these two documents are not opposing manifestos. They are different dialects of the same emerging language: tech leaders no longer see themselves as toolmakers. They see themselves as institutional designers. And a courtroom battle between Elon Musk and Sam Altman is about to decide how enforceable their original promises really are.
Promises, promises. Elon Musk is suing Sam Altman over whether OpenAI was fraudulently diverted from its original nonprofit mission. Musk argues that he was misled and that OpenAI’s leadership abandoned its promise to serve humanity, pivoting instead toward commercialisation through partnerships and products like ChatGPT. He seeks to remove Altman and President Greg Brockman, force structural changes to OpenAI’s governance, and secure up to $150 billion in damages for OpenAI’s nonprofit arm. OpenAI rejects this narrative, framing the case as a competitive dispute—Musk raised objections, they say, only after OpenAI’s success and the emergence of his own AI venture, xAI, which has filed for an IPO. OpenAI itself is rumoured to be considering an IPO in late 2026 or 2027. The court will have to weigh early emails, funding discussions, and conflicting interpretations of what “open” and “nonprofit” were supposed to mean.
If the court rules that shifting toward profit violated founding principles, many similar hybrid organisations may need to restructure. If the current model is upheld, it will solidify the reality that market logic and commercial interest drive AI development. Because advanced AI is expensive to build and operate, companies need pricing tiers to cover costs and make a profit. And because the underlying models and infrastructure are valuable competitive assets, firms have incentives to lock users in and limit disclosure to maintain their advantage. That means that users could be facing more tiered access, stronger platform lock‑in, and less visibility into how systems operate.
So what could societies do? Karp and Zamiska on one side, and OpenAI on the other, share a premise that is rarely stated outright: that the existing legal and political order is too slow or too confused to manage the technologies now emerging.
If we assume they are even only partially right, the solution cannot be handing design authority to the same firms that profit from those technologies. Three measured steps are worth considering.
First, separate policy design from corporate strategy. Any company that holds major public contracts in areas such as defence, health, or border control should not be the source of the policies used to regulate that company’s activities.
Second, codify accountability. If AI developers claim public-interest missions, those claims need legal and regulatory grounding, not just branding. The Musk-OpenAI case may accelerate this, but policymakers cannot outsource the task to courts.
Third, broaden participation. OpenAI’s call for public input points in the right direction, but mechanisms matter. Without meaningful inclusion across labour, civil society, and smaller economies, participation risks becoming procedural rather than substantive.
We are not about to wake up in a technological republic overnight. But it is already clear that tech oligarchs are no longer just building products; they are articulating political and social orders. Modern societies will have to get right what type of legal and policy order is needed and how to deal with the growing power of tech companies and their leaders.
Claude Mythos Preview sets new benchmark for AI capability and raises governance questions
On 7 April 2026, Anthropic announced Claude Mythos Preview, its most capable AI model to date, alongside the explicit decision not to make it publicly available. Claude Mythos Preview is a general-purpose, unreleased frontier model that, in Anthropic’s own words, reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans in finding and exploiting software vulnerabilities.
Anthropic’s published benchmarks show Mythos Preview scored 93.9% on the SWE-bench Verified test, 97.6% on the USAMO 2026 mathematics evaluation, and significantly outperformed all previously released models in cybersecurity-specific assessments. The SWE-bench Verified score is roughly double the 2024 state of the art and was achieved in an agentic context, where the model autonomously resolved real software engineering issues from production codebases.
On the USAMO 2026 evaluation, Mythos Preview scored 55.3 percentage points higher than Opus 4.6, which scored 42.3%. On GPQA Diamond, a graduate-level scientific reasoning benchmark, Mythos Preview scored 94.6%. On Terminal-Bench 2.0, which evaluates system administration and command-line proficiency, it scored 82.0%, a 16.6-point lead over Opus 4.6. On the cybersecurity benchmark Cybench, the model scored 100% on the first attempt, making it no longer useful as a discriminating evaluation.
Cybersecurity capabilities
The decision not to release Mythos Preview publicly is linked to concerns about its advanced capabilities, particularly in high-risk domains such as cybersecurity, as well as broader considerations related to safety and potential misuse.
Notably, these capabilities are not the result of targeted training: Anthropic did not explicitly train Mythos Preview for them. They emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them.
During internal testing, Mythos Preview identified thousands of zero-day vulnerabilities across every major operating system and every major web browser, as well as other critical software, many of them high severity and previously undetected for years. Anthropic engineers with no formal security training could ask Mythos to find remote code execution vulnerabilities overnight and have a complete, working exploit the following morning. This accessibility dimension poses a distinct governance concern. Traditionally, sophisticated cyberattacks have required highly skilled teams, extensive planning, and deep technical expertise. Models with these capabilities may lower those barriers substantially, putting such attacks within reach of smaller state actors and non-state actors.
Anthropic has disclosed only a fraction of what it says it has found during internal testing. Over 99% of the vulnerabilities discovered by Mythos remained unpatched at the time of the 7 April announcement.
Project Glasswing
Anthropic launched Project Glasswing as a structured access mechanism to use Claude Mythos Preview for defensive cybersecurity purposes. The initiative brings together Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks as launch partners, with access also extended to over 40 additional organisations that build or maintain critical software infrastructure.
Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities in their foundational systems, with work expected to focus on local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing. Anthropic is committing up to $100 million in usage credits for Mythos Preview across these efforts. Following the initial research preview period, access to the model will be available to participants at $25 per million input tokens and $125 per million output tokens across the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
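For a sense of scale, the arithmetic below works through what a single large review might cost at those rates; the token counts are illustrative assumptions, not Anthropic’s figures.

```python
# Back-of-the-envelope cost at the announced Mythos Preview rates.
# The token counts are illustrative assumptions only.
INPUT_RATE = 25 / 1_000_000    # USD per input token ($25 per million)
OUTPUT_RATE = 125 / 1_000_000  # USD per output token ($125 per million)

input_tokens = 2_000_000   # e.g. a large codebase submitted for review
output_tokens = 150_000    # findings and patch suggestions returned

cost = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
print(f"Estimated run cost: ${cost:,.2f}")  # -> $68.75
```

At these rates, sustained scanning of large codebases adds up quickly, which is part of what the $100 million in usage credits is meant to absorb.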

Anthropic has also donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable open-source software maintainers to respond to the changing cybersecurity landscape.
Within 90 days, Anthropic has committed to reporting publicly on what it has learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. The company also intends to collaborate with leading security organisations to produce practical recommendations covering vulnerability disclosure processes, software update processes, open-source and supply-chain security, and patching automation, among other areas.
Anthropic has stated that Project Glasswing is a starting point, and that in the medium term, an independent, third-party body bringing together private and public sector organisations might be the ideal home for continued work on large-scale cybersecurity projects.
Project Glasswing raises a governance question for the industry, as cyber-capable AI systems may become useful security tools and a source of misuse risk at the same time. The project’s structure also reveals tensions, as it concentrates several roles, including discovery, disclosure coordination, and capability gatekeeping, in a single organisation. Entities such as Anthropic and major cloud providers control critical components of the Glasswing ecosystem, raising questions about power and governance that, for financial institutions in particular, translate into systemic risk.
We also wrote about the Glasswing project and its implications in our Weekly newsletter in early April.
Geopolitical dimensions
Claude Mythos has sharpened attention on the competitive and geopolitical dimensions of frontier AI development. Project Glasswing’s launch partners exclude Anthropic’s rival OpenAI, which is reported to be approximately six months behind Anthropic in developing a model with comparable offensive cyber capabilities.
Senior policy voices have positioned Mythos within the broader competition between Western AI companies and China’s rapidly evolving AI ecosystem, with implications for national security, enterprise adoption, and technological leadership. A security researcher assessed a concurrent source code leak from Anthropic as a geopolitical accelerant, noting that such exposures compress the timeline for adversaries to replicate technological advantages currently held by Western laboratories.
Many defence organisations still rely on legacy software and infrastructure not designed with AI-driven threats in mind. Models capable of autonomously identifying hidden flaws in older code may expose weaknesses in critical defence networks around the world. The difficulty of containment at the geopolitical level is reflected in usage patterns. Access restriction at the laboratory level does not translate reliably into containment across jurisdictions when the same underlying models are accessible via cloud infrastructure spanning multiple countries and regulatory environments.
The limits of voluntary AI governance
The Claude Mythos case has clarified, with considerable precision, what voluntary AI governance can and cannot achieve. A responsible laboratory can make a unilateral decision not to release a dangerous system. It can support coordinated vulnerability disclosure, engage governments proactively, and produce detailed public documentation of a model’s capabilities and risks. All of these have occurred with Mythos, and represent meaningful progress relative to the governance environment of a few years ago.
What voluntary frameworks cannot do is bind competitors who operate under different assumptions. Anthropic’s RSP version 3.0 acknowledges this directly by removing the commitment to withhold unsafe models if another laboratory releases a comparable model first. The competitive structure of the AI industry means that restraint by one actor does not prevent the underlying capability from eventually proliferating. Voluntary governance frameworks work best when they generate shared norms across an industry. When the industry is structured around intense competition among a small number of organisations, voluntary restraint by a single actor does not resolve the broader question of access.
Analysts note that what Mythos does today in a restricted environment, publicly available models are likely to replicate within one to two model generations. The next phase of the EU AI Act takes effect in August 2026, introducing automated audit trails, cybersecurity requirements for AI systems classified as high risk, incident reporting obligations, and penalties of up to 3% of global revenue. The EU framework represents a shift toward binding governance, but whether its scope can keep pace with the speed and international distribution of frontier AI development remains to be seen.
The way forward
Anthropic acknowledges that capabilities like those demonstrated by Mythos will proliferate beyond actors committed to deploying them safely, with potential fallout for economies, public safety, and national security. The company’s response, taken in aggregate, reflects a serious attempt to manage that risk within the constraints of voluntary frameworks and private decision-making. The Responsible Scaling Policy, Project Glasswing, proactive government briefings, and the detailed system card are each substantive contributions. They are also all products of a single private entity’s judgement, operating without binding external accountability.
The Mythos case does not so much call for a different assessment of Anthropic's conduct as for a clear-eyed view of what voluntary governance can realistically sustain at the frontier of AI development. Governments on both sides of the Atlantic were briefed informally about a model whose capabilities are consequential for critical infrastructure and national security. No binding notification requirement existed. No independent technical authority had prior access. No international coordination mechanism was in place.
No single organisation can solve these challenges alone. Frontier AI developers, software companies, security researchers, open-source maintainers, and governments all have essential roles to play. The Mythos case has made that observation not merely a statement of aspiration but a policy problem that requires concrete institutional responses. Whether those responses will take shape before the next capability threshold is reached is the question now facing policymakers.
This text is an adaptation of Reyhan Damalan's article ‘Claude Mythos Preview sets new benchmark for AI capability and raises governance questions’.
Last month in Geneva

29th session of the CSTD
The 29th session of the Commission on Science and Technology for Development (CSTD) took place from 20 to 24 April 2026 at the Palais des Nations in Geneva, Switzerland.
For its 29th session, the programme addressed the priority theme of ‘Science, Technology and Innovation in the Age of Artificial Intelligence’ and included a presentation of the report on technical cooperation activities in science, technology and innovation.
CSTD members also reviewed progress in implementing and following up on the outcomes of the World Summit on the Information Society (WSIS) at the regional and international levels.
The session also included a briefing on the plans of the International Scientific Panel on AI, a report on Global Digital Compact (GDC) implementation milestones, as well as an opportunity to engage with the Draft Joint Implementation Road Map for WSIS-GDC Coherence.
Ultimately, CSTD members adopted two resolutions: one on WSIS and one on science, technology, and innovation for development.

Image credit: UNCTAD Innovation X post
Shaping Switzerland’s AI Summit Strategy
A report intended to inform strategic planning for the AI Summit Geneva 2027 has been made public. It synthesises inputs from a multistakeholder roundtable and more than 50 written submissions to shape Switzerland's strategy for hosting the summit.
The core finding of ‘Shaping Switzerland’s AI Summit Strategy’ is that Switzerland’s comparative advantage lies not in technological scale, but in trusted convening, pragmatic governance, and institutional credibility. Its neutrality, strong institutions, research base (e.g. ETH/EPFL), and Geneva’s multilateral ecosystem position it as a facilitator of practical, cross-sector cooperation. However, gaps remain in investment and in scaling innovations to market.
Two priority issue clusters dominate. The first is trusted and sovereign AI infrastructure, including open models, interoperability, and reduced dependence on dominant providers, alongside a noted gap in Switzerland's access to production-grade AI compute. The second is AI's impact on human rights, security, and humanitarian law, particularly in relation to military use, surveillance, and the preservation of human agency. Cross-cutting concerns include AI literacy, SME adoption, public-sector readiness, and equitable access for developing countries.
Strategically, contributors highlighted that Geneva 2027 should be framed as a platform for implementation, delivering a limited set of practical, internationally reusable tools, backed by an inclusive preparatory process and follow-up mechanisms.
Geneva Cyber Week 2026
The UN Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs (FDFA) are co-hosting Geneva Cyber Week from 4 to 8 May 2026, bringing together policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives at venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.
Returning after its inaugural edition, the event is being positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change. The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance.
Opportunity: Become a Knowledge Fellow
Diplo is pleased to launch a new call for applications for Digital Watch Knowledge Fellows (2026).
What is the Digital Watch Observatory?
The Digital Watch Observatory (DW) is a comprehensive observatory and one-stop source of information on digital governance. It tracks the latest developments, provides policy overviews and analysis, and curates information on key topics, technologies, processes, policy players, events, and resources.
DW is designed for diplomats, policymakers, researchers, civil society actors, business representatives, and other stakeholders who need reliable, structured, impartial, and up-to-date information on digital governance issues.
Its content is organised around:
- Topics, from cybercrime and freedom of expression to data governance and critical infrastructure.
- Technologies such as artificial intelligence, quantum computing, and semiconductors.
- Processes including the UN Global Mechanism on ICT Security, the Internet Governance Forum, the Global Digital Compact process, and more.
- Policy players such as countries, technical entities, business associations, UN entities, and other international and regional organisations.
- Resources, including conventions, resolutions, laws and regulations, reports, and more.
- Events, such as meetings, negotiations, conferences, and consultations.
This structure is complemented by daily updates, regular analyses, and weekly and monthly newsletters that track and explain the most relevant developments across the digital governance landscape.
What is the role of a Knowledge Fellow?
Knowledge Fellows (KF) are central to the observatory’s ability to provide comprehensive, accurate, and up-to-date coverage of specific areas of digital governance.
Each KF is expected to cover one or more areas of expertise and help ensure that DW remains accurate, relevant, complete, and impartial. This means:
- Monitoring and analysing developments related to the assigned area(s) of expertise and ensuring these are reflected in daily updates and regular analyses.
- Keeping assigned DW pages accurate, up-to-date, and substantively strong.
- Tracking events relevant to their area(s) of expertise and helping ensure that important meetings, negotiations, and discussions are reflected in DW.
- Identifying key resources relevant to their area(s) of expertise, such as UN resolutions and other intergovernmentally agreed documents, laws, regulations, reports, and policy papers.
- Supporting stronger coverage of organisations, countries, and other key actors in digital governance.
- Contributing, when relevant, to newsletters, policy and research papers, and other knowledge products.
Knowledge Fellows may also have opportunities to contribute to Diplo’s wider knowledge ecology, including courses, discussions, and thematic initiatives.
Who should apply?
At a time when the public space is awash with AI-generated content, we are looking for more than just someone who can use AI to summarise news or rewrite online resources.
KFs will have access to custom-made AI tools to support them in their work, but the role requires subject expertise, critical judgement, and the ability to identify what is important, what is missing, and what deserves deeper analysis.
Specifically, we are looking for applicants who:
- Have strong expertise in digital governance, grounded in professional experience, academic research, policy engagement, or a combination of these.
- Are interested in continuing to develop this expertise.
- Know where to look and what to look for in order to ensure comprehensive coverage of assigned topics, technologies, processes, etc.
- Can identify major developments, policy controversies, key debates, and emerging trends in the digital governance landscape, and cover them accurately and impartially.
This means combining subject expertise with editorial judgement, policy awareness, and a strong sense of knowledge curation.
Applicants must also have:
- Availability to contribute on a regular basis. The fellowship is conducted online, with an expected commitment of at least 8 hours per week.
- Strong analytical and writing skills in English.
- Basic skills in using the web and social media, as well as familiarity with generative AI tools.
What we offer
Digital Watch Knowledge Fellows will benefit from:
- Onboarding and guidance on Digital Watch’s editorial and curation approach.
- Training on observatory workflows and digital/AI tools.
- Remuneration.
- Visibility for their work among DW users (diplomatic communities in Geneva and other diplomatic centres, professionals from across all stakeholder groups dealing with digital topics, etc.).
- Opportunities to promote their digital governance-related research through DW and Diplo networks.
- Membership in a global community of experts and professionals working on digital governance.
Fellows are engaged on a consultancy/fee basis; the role does not constitute employment with DiploFoundation.
How to apply
Interested applicants are invited to complete the application form.
Application deadline: 31 May 2026

