
17 – 24 April 2026
HIGHLIGHT OF THE WEEK
The ‘Technological Republic’ tech oligarchs imagine
Last week, the Technological Republic Manifesto by Palantir’s CEO, Alex Karp, triggered an avalanche of comments and criticism, as it challenged many pillars of our society, from equality and inclusion to security and democracy.
In 22 points extracted from his book, ‘Technological Republic’, Karp mixes national security, techno-optimism, and scepticism about democracy.
Palantir has already been at the centre of controversies over the use of its security products – Gotham, Foundry, and Maven – in the Gaza war and by the US security apparatus in anti-migration enforcement and criminal investigations.
The new Manifesto added to the controversy as the company moved from the business realm to the ideological and political realms.
Cas Mudde labelled the Manifesto ‘Technofascism pure!’, while Yanis Varoufakis said that if Evil could tweet, this is what it would tweet.

The backlash has been especially strong in the UK, where Palantir already holds major public contracts. Critics, including MPs and campaign groups, argue that the company’s ideology sits uneasily with its presence in sensitive parts of the state, from health data to policing and defence. Palantir, by contrast, says its systems improve efficiency, resilience, and public services.
Why does it matter? Palantir’s Manifesto puts in sharper and blunter terms the growing power of tech companies in shaping society, both nationally and internationally. Modern societies will have to work out what type of legal and policy order is needed to deal with the growing power of tech companies and their leaders.
IN OTHER NEWS LAST WEEK
This week in AI governance
Paraguay. Paraguay has adopted new rules for the use of AI in its courts, with UNESCO support, marking a notable step in judicial AI governance. The framework, approved by the Supreme Court of Justice, limits AI to a supporting role in data processing, information management, and assisted decision-making, while requiring human oversight, transparency, accountability, and disclosure when AI tools influence judicial processes. The rules align Paraguay’s approach with UNESCO’s guidance on AI in courts and underscore a wider trend toward rights-based, trust-focused AI deployment in public institutions.
India. India has set up a Technology and Policy Expert Committee under the Ministry of Electronics and Information Technology to help shape the country’s AI governance framework and advise the new AI Governance and Economic Group. Bringing together government, academia, industry, and policy expertise, the body is meant to translate fast-moving technical and regulatory issues into practical guidance, bringing a more structured and adaptive approach to AI governance aligned with India’s economic and social priorities.
Mythos. Anthropic has launched an investigation after a small group of users gained unauthorised access to its powerful Mythos AI model via a third-party contractor environment. The access reportedly occurred just as the company began rolling out a limited preview of the model to selected organisations under Project Glasswing. The unauthorised users are believed to have operated through a private Discord group, using a mix of tactics, including contractor access and open-source intelligence tools, to gain access to the system. Mythos was intentionally restricted due to its ability to accelerate cyberattacks and was provided to a limited number of partners, yet it appears to have leaked almost immediately through the partner ecosystem rather than through a direct breach. The window during which Mythos’ capabilities remain contained may prove far shorter than anticipated.
EU’s defence cloud reliance raises ‘kill switch’ fears
A new report says most of the EU defence agencies remain heavily dependent on US cloud providers, exposing critical systems to the risk of a foreign ‘kill switch’ and sharpening concerns over Europe’s digital sovereignty.
According to the findings, 23 of 28 countries studied rely on US tech for defence functions, with 16 assessed as high risk, prompting renewed debate over whether sensitive public infrastructure, including security and defence systems, should move faster toward sovereign or air-gapped alternatives.
France vs X, a transatlantic showdown
France’s criminal investigation into X has evolved into a transatlantic dispute over platform governance and state authority.
How did it all begin? The case began with a French probe into whether the platform enabled the spread of child sexual abuse material, AI-generated deepfakes, Holocaust denial content, and other harmful or unlawful material, and later intensified with a search of X’s Paris offices and summonses for Elon Musk and former X chief Linda Yaccarino to give voluntary interviews – a request Musk appears to have refused by not showing up.
And then. The confrontation widened when reports emerged that the US Justice Department had declined to assist the French inquiry, arguing that the case risked crossing into the regulation of protected speech and that it would unfairly target a US company. French authorities, however, have framed the matter as a legitimate enforcement action under national law.
Australia also targets games in child safety crackdown
Australia’s child-safety push is widening from social media to gaming, as regulators intensify scrutiny of how platforms protect minors from harm. On 21 April, the eSafety Commissioner issued legally enforceable transparency notices to Roblox, Minecraft, Fortnite and Steam, demanding details on how they handle risks, including child sexual exploitation, cyberbullying, hate and extremist material on services widely used by children.
Seen in context. This is part of a broader tightening of enforcement around Australia’s under-16 social media rules, which took effect on 10 December 2025 and require age-restricted platforms to take reasonable steps to prevent underage children from creating and holding accounts. Yet regulators say compliance remains uneven: in March, eSafety flagged big concerns about Facebook, Instagram, Snapchat, TikTok and YouTube, warning that many children could still access platforms by simply self-declaring they were older than 16.
Microsoft bets A$25 billion on Australia’s AI future
Microsoft has announced a A$25 billion investment in Australia by 2029, its largest in the country, to expand local AI and cloud infrastructure, strengthen cybersecurity, and train three million Australians in workforce-ready AI skills.
The plan will increase Azure AI supercomputing capacity, expand Microsoft’s Australian cloud footprint by more than 140%, and deepen cooperation with the Australian Government, including the Australian AI Safety Institute and the Microsoft–Australian Signals Directorate Cyber Shield.
Framed as support for Australia’s National AI Plan, the package links AI growth with cyber resilience, digital sovereignty, responsible deployment, and broader access to skills across schools, nonprofits, workers, government, and industry.
UK fortifying child safety online with new powers
The UK’s Children’s Wellbeing and Schools Bill would reportedly expand ministers’ powers to shape how online services protect children, including by restricting access to risky platforms, features, or functions and by targeting design elements such as contact settings, live communication, location visibility, and time spent online.
The draft would also bring Ofcom into a stronger advisory role, introduce a six-month timeline for regulations or a progress update, and give ministers new authority over children’s data consent, age assurance, and enforcement.
Why does it matter? Taken together, the amendments point to a more interventionist and fine-grained model of child online safety, focused not only on harmful content but also on the design and governance of children’s digital environments. The regulatory package remains unsettled for now, with Parliament still negotiating key provisions and no final law yet in place.
LAST WEEK IN GENEVA
Shaping Switzerland’s AI Summit Strategy
A report intended to inform strategic planning for the AI Summit Geneva 2027 has been made public. It synthesises inputs from a multistakeholder roundtable and more than 50 written submissions to shape Switzerland’s strategy for hosting the summit.
The core finding of ‘Shaping Switzerland’s AI Summit Strategy’ is that Switzerland’s comparative advantage lies not in technological scale, but in trusted convening, pragmatic governance, and institutional credibility. Its neutrality, strong institutions, research base (e.g. ETH/EPFL), and Geneva’s multilateral ecosystem position it as a facilitator of practical, cross-sector cooperation. However, gaps remain in investment and in scaling innovations to market.
Two priority issue clusters dominate. First, trusted and sovereign AI infrastructure, including open models, interoperability, and reducing dependence on dominant providers – alongside a noted gap in Switzerland’s access to production-grade AI compute. Second, AI’s impact on human rights, security, and humanitarian law, particularly in relation to military use, surveillance, and the preservation of human agency. Cross-cutting concerns include AI literacy, SME adoption, public-sector readiness, and equitable access for developing countries.
Strategically, contributors highlighted, Geneva 2027 should be framed as a platform for implementation, delivering a limited set of practical, internationally reusable tools, backed by an inclusive preparatory process and follow-up mechanisms.
29th session of the CSTD
The 29th session of the Commission on Science and Technology for Development (CSTD) is ending today (Friday). The programme addressed the priority theme of ‘Science, Technology and Innovation in the Age of Artificial Intelligence’ and also reviewed progress in implementing and following up on the outcomes of the World Summit on the Information Society at the regional and international levels. We’ll have more on the outcomes next week.
READING CORNER
Anthropic’s Mythos model is a cyber-offensive AI built to probe critical infrastructure. Why does this reality expose the flaws in current AI governance?
Anthropic’s Claude Mythos Preview is its most capable model to date, withheld from public release and made available only to a closed partner network amid concerns about its cybersecurity capabilities and governance implications.


