A group of European technology companies (Cubbit, SUSE, Elemento, and StorPool Storage) has launched a joint ‘Disaster Recovery Pack’ to support the continuity of organisations’ data and operations in the event of disruptions to external dependencies.
The solution was presented on 15 April 2026 at the European Data Summit organised by the Konrad-Adenauer-Foundation in Berlin. It is described as a system intended to maintain critical workloads even in scenarios involving disruptions associated with foreign technology providers.
The Disaster Recovery Pack integrates multiple components of the cloud software stack into a single deployable system. These components include storage, compute, orchestration, networking, identity, observability, and management. By combining these elements, the solution aims to reduce fragmentation and facilitate the deployment of a unified technology stack.
According to the providers, the system is designed to allow organisations to transfer critical workloads to a European-based infrastructure without major disruption. It can be used to identify essential services, establish and test recovery setups, and extend these configurations to additional workloads over time.
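As a rough illustration of that phased approach, the workflow can be sketched as a prioritised runbook: rank workloads by criticality, migrate and test the most essential first, then extend coverage over time. The Python below is a minimal sketch under that assumption; the workload names, fields, and test stub are hypothetical and not taken from the product.

```python
from dataclasses import dataclass

# Minimal sketch of a phased disaster-recovery rollout. All workload names
# and fields are hypothetical illustrations, not details of the product.
@dataclass
class Workload:
    name: str
    criticality: int          # 1 = most critical
    recovery_tested: bool = False

def plan_rollout(workloads: list[Workload]) -> list[Workload]:
    """Order workloads so the most critical are migrated and tested first."""
    return sorted(workloads, key=lambda w: w.criticality)

def run_recovery_test(workload: Workload) -> None:
    """Stand-in for a real failover drill against the standby site."""
    workload.recovery_tested = True
    print(f"recovery verified for {workload.name}")

if __name__ == "__main__":
    inventory = [
        Workload("billing-db", criticality=1),
        Workload("customer-portal", criticality=2),
        Workload("internal-wiki", criticality=3),
    ]
    # Identify essential services first, then extend to the rest over time.
    for workload in plan_rollout(inventory):
        run_recovery_test(workload)
```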
The solution is positioned to address operational requirements for disaster recovery while also supporting a broader transition to infrastructure based on European providers. It has already been deployed by an IT service provider in Italy and is expected to be adopted by additional partners.
Why does it matter?
The initiative is linked to efforts to reduce reliance on non-European cloud infrastructure and to strengthen the resilience of digital operations. In a statement, Sebastiano Toffaletti, Secretary General of the European DIGITAL SME Alliance, said that European companies are capable of developing and integrating such solutions, and highlighted the need for policy measures that support their adoption, including considerations related to public procurement and definitions of sovereign cloud within future policy frameworks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Kazakhstan has introduced new rules requiring audits of high-risk AI systems before they are included in official government lists. The framework sets out procedures for identifying and publishing trusted AI systems across sectors.
Sectoral authorities will compile and update lists of high-risk AI systems based on applications submitted by system owners. These lists will be published on official government websites to promote transparency and trust.
Applicants must submit formal requests, documents confirming intellectual property rights and a positive audit conclusion. Authorities will review submissions within ten working days, assessing system purpose, functionality and required documentation.
Systems that meet all criteria will be added to the list and published within five working days. If inconsistencies are identified, applicants will be notified and may resubmit documents for review within a shortened timeframe.
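For illustration, the stated deadlines amount to simple working-day arithmetic. The Python sketch below computes hypothetical due dates by skipping weekends; it is a generic illustration that ignores public holidays and is not an implementation of Kazakhstan’s actual procedure.

```python
from datetime import date, timedelta

def add_working_days(start: date, days: int) -> date:
    """Advance a date by the given number of Monday-Friday working days."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:   # 0-4 are Monday-Friday
            remaining -= 1
    return current

if __name__ == "__main__":
    submitted = date(2026, 3, 2)                    # hypothetical filing date
    review_due = add_working_days(submitted, 10)    # 10-working-day review
    publish_due = add_working_days(review_due, 5)   # 5-working-day publication
    print(f"review decision due by {review_due}")
    print(f"listing published by   {publish_due}")
```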
Updated versions of the lists will be released as revisions occur, ensuring ongoing oversight of AI systems. The measures aim to support structured monitoring and responsible use of AI technologies.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The National Constitution Center reports that Minnesota lawmakers are considering a constitutional amendment to exclude AI systems from free speech protections. The proposal would clarify that such rights apply to people, not machines.
According to the National Constitution Center, the amendment would add language stating that AI does not have the right to speak, write or publish sentiments freely. Human free speech protections would remain unchanged under the proposal.
The article highlights ongoing debate around the measure, with supporters arguing it distinguishes human rights from technological tools, while critics warn it could affect how AI-generated content is treated under the law.
The National Constitution Center notes that the proposal reflects broader tensions over how legal systems should address AI and free expression as the issue develops in Minnesota.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
More than 70 global leaders and experts gathered in Ankara on 14–15 April to address gaps in legal identity for migrants, a key barrier to access to services and protection.
The conference was convened by the International Organization for Migration (IOM) and brought together governments, international organisations, academia, and the private sector to discuss practical solutions.
Legal identity was highlighted as a fundamental human right and a critical enabler of safe and regular migration, yet millions of migrants still lack recognised documentation. Participants examined how digital identity systems, including biometrics and mobile tools, could improve access while ensuring security, inclusion, and the protection of rights.
Discussions focused on strengthening migration governance through scalable and context-specific digital identity solutions. Attention also turned to implementation challenges, including keeping systems inclusive and secure for displaced populations affected by conflict or administrative barriers.
The COMPASS conference also showcased private sector technologies and enabled countries from Africa, the Middle East, and Europe to share experiences. Outcomes are expected to inform best practices and support the development of more resilient and inclusive identity systems for migrants.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Medicines and Healthcare products Regulatory Agency in the UK has outlined priorities for regulating AI in healthcare, focusing on safety, effectiveness and public trust.
The agency’s approach includes strengthening pre-market evaluation and post-market surveillance, particularly for adaptive systems operating in real-world settings. Its priorities extend beyond technical validation to include implementation challenges, system-wide impacts and the role of human oversight in clinical environments.
The analysis emphasises that AI in healthcare operates as a socio-technical system, requiring assessment of usability, fairness and real-world outcomes. It also identifies gaps in current evaluation practices, particularly in local service assessments, which may lack consistency and reliability.
Strengthening evaluation standards, improving coordination and addressing risks such as bias and inequity are presented as central to enabling safe and scalable adoption.
The framework aims to balance innovation with accountability while ensuring equitable access to healthcare technologies across the UK.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Brazil’s Ministry of Development, Industry and Foreign Trade said the integration of AI and technical standardisation should be treated as a strategic issue for the country’s competitiveness.
The position was presented during a meeting organised by the Ministry of Science, Technology, and Innovation, which brought together public bodies and specialists to discuss AI governance and its effects on the productive sector and on the state.
Pedro Ivo, secretary for Competitiveness and Regulatory Policy at the Ministry of Development, Industry and Foreign Trade, said technical standards can help reduce costs, facilitate trade, and improve competitiveness. He also said linking that process to AI could support a more predictable regulatory environment.
According to the ministry, the discussion also highlighted the international dimension of the issue and Brazil’s efforts to expand its role in shaping AI-related standards and guidelines. The programme included discussions of global AI impacts, regulatory challenges, and the role of international organisations in technical regulation for information and communication technologies.
Tiago Munk, the ministry’s coordinator-general for quality infrastructure, said technical standards can play a central role in AI governance by defining criteria, requirements, and good practices for systems, products, and services. He added that Brazil should take an active role in developing international standards.
The meeting was presented as part of a broader government effort to strengthen coordination on AI, with attention to policy direction, institutional coordination, and the country’s position in the digital economy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UN’s Independent International Scientific Panel on AI has begun work on a global study examining how AI is reshaping economies and societies. The 40-member panel aims to assess AI’s risks and opportunities, with a focus on maintaining human oversight in decision-making.
Human-centred design stands at the core of the panel’s approach. Members are exploring how AI can complement rather than replace human capabilities, an idea often described as ‘augmented intelligence’.
Research will examine impacts across key sectors, including labour markets and healthcare, while also addressing inclusivity challenges such as language diversity and access to digital infrastructure.
Concerns over trust, ethics and accountability are driving the initiative. Warnings from UN leadership have highlighted the dangers of unregulated AI, reinforcing the need for governance frameworks that reflect social and human rights principles.
Proposals under consideration include tools such as AI watermarking to improve transparency and distinguish between human and machine-generated content.
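To make the watermarking idea concrete, the toy Python sketch below follows the spirit of published ‘green-list’ schemes: a hash of the preceding token deterministically marks roughly half of possible next tokens as ‘green’, watermarked generation favours green tokens, and detection checks whether the green fraction is improbably high. Real systems operate on model logits over large vocabularies; every name and value here is illustrative only.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically mark about half of (prev, next) pairs as 'green'."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Share of tokens that land in the green list given their predecessor."""
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in pairs)
    return hits / len(pairs)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    # Unwatermarked text should hover near 0.5; watermarked generation,
    # which favours green tokens, pushes this fraction well above chance.
    print(f"green fraction: {green_fraction(sample):.2f}")
```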
Findings from the study are expected to inform global policy discussions, with a first report scheduled for presentation at an international dialogue on AI governance in Geneva. Long-term outcomes will depend on aligning technological innovation with ethical safeguards and inclusive development goals.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.
A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.
At its core, the document is not regulation but influence: an attempt to shape how policymakers think about industrial policy for AI, productivity gains, and the redistribution of technological power.
AI industrial policy and the next economic transformation
The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.
However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.
Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.
The return of industrial policy in the AI economy
A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.
A broader global shift in economic thinking is reflected in this approach. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.
Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.
In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.
Redistribution, taxation, and the question of AI wealth
The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.
As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.
Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.
However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.
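As a stylised illustration of that measurement problem (our notation, not the document’s): suppose total output value splits into a human-attributable share and a machine-attributable share, and tax is levied only on the latter. In hybrid workflows only the total is directly observable, so the split must be estimated or fixed by convention.

```latex
\[
  V = V_h + V_m, \qquad T = \tau\, V_m
\]
% V    : total output value (observable)
% V_h  : human-attributable share (not directly observable)
% V_m  : machine-attributable share (not directly observable)
% \tau : hypothetical tax rate on automated value
```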
Workforce disruption and the future of work
The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.
To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.
However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.
While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.
AI safety, governance, and systemic control
Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.
These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.
However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.
A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.
Strategic influence and the future of AI governance
The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks (inequality, labour disruption, and systemic instability) while offering a roadmap for managing them through structured intervention.
At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations exist at the intersection of public interest and commercial strategy. The dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.
The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.
AI industrial policy will define economic power
AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.
The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.
Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy.
Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The United Nations Institute for Disarmament Research (UNIDIR) and the Swiss Federal Department of Foreign Affairs will co-host Geneva Cyber Week from 4 to 8 May 2026. The event will bring policymakers, diplomats, technical experts, industry leaders, academics, and civil society representatives to venues across Geneva and online for a week of discussions on cyber stability, resilience, governance, digitalisation, and the security implications of emerging technologies, including AI.
Returning after its inaugural edition, the event is positioned as a response to a more fragile cyber and geopolitical environment. Held under the theme ‘Advancing Global Cooperation in Cyberspace’, Geneva Cyber Week 2026 comes at a moment of mounting cyber insecurity, intensifying geopolitical tension, and rapid technological change, with organisers framing the gathering as a space for practical cooperation across diplomatic, technical, operational, and policy communities.
“Cybersecurity is no longer a niche technical issue; it is a strategic policy challenge with implications for international peace, economic stability and public trust. At a moment of growing fragmentation and accelerating technological change, Geneva Cyber Week brings together the communities that need to be in the room — diplomatic, technical, operational and policy — to move from shared concern to practical cooperation,” said Dr Giacomo Persi Paoli, Head of Security and Technology Programme at UNIDIR.
The programme will feature nearly 90 events and reinforce Geneva’s role as a centre for cyber diplomacy, international cooperation, and digital governance. Scheduled sessions include UNIDIR’s Cyber Stability Conference, Peak Incident Response organised by the Swiss CSIRT Forum, Digital International Geneva, the World Economic Forum Annual Meeting on Cybersecurity, and a Council of Europe session titled ‘Artificial Intelligence, Cybercrime and Electronic Evidence: Risks, Opportunities, and Global Cooperation’.
The week will also include partner-led panels, workshops, simulations, exhibitions, and networking events to connect specialist communities that do not always work in the same room. That broader structure reflects an effort to treat cyber issues not only as a technical or security matter but also as a governance, trust-building, and international-coordination challenge.
“At a time when digital threats know no borders, fostering inclusive discussions is essential to building trust, advancing common norms, and promoting a secure and open cyberspace for all. International Geneva provides an unparalleled multilateral environment to address these cybersecurity challenges collectively. Geneva Cyber Week’s diverse programme embodies this collaborative spirit,” said Marina Wyss Ross, Deputy Head of International Security Division and Chief of Section for Arms Control, Disarmament and Cybersecurity at the Swiss FDFA.
Across the city, Geneva will also mark the week visually, including flags on the Mont Blanc Bridge and special illumination of the Jet d’Eau on Monday evening. But beyond the symbolism, the event’s significance lies in how it seeks to bring cyber diplomacy, incident response, governance debates, and emerging technology risks into the same international conversation.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The UK government has said it will update and streamline its proposed code of practice for enterprise connected device security and assess further policy options, including regulation, certification, and other assurance mechanisms, following its call for views on the subject.
The response, published by the Department for Science, Innovation and Technology, says enterprise-connected devices are often critical to business operations but can lack adequate security measures. It also states that the UK government’s call for views showed strong support for intervention to improve the cybersecurity of such devices, with 95% of respondents agreeing that the government should do more.
According to the response, 76% of respondents agreed or strongly agreed that the risks posed by enterprise-connected devices are sufficiently distinct from those of other connected devices to warrant an independent code of practice.
The UK government also reports that 78% agreed or strongly agreed with creating new legislation imposing obligations on manufacturers, while 71% agreed or strongly agreed with creating a new global standard based on the code of practice.
The UK government says it will ask manufacturers to use the National Cyber Security Centre’s existing device security principles while this work continues. It also says it will finalise the security principles, make them modular within the broader set of secure-by-design codes of practice, and explore the feasibility of a certification scheme for manufacturers.
The response also states that the UK government will assess options for regulatory measures, following feedback that it needs to go beyond voluntary adoption and include some form of assurance or enforcement mechanism. It adds that the government will review whether the scope of this work should be expanded beyond enterprise-connected devices as part of its broader analysis of technology security.
The document says the UK government will seek to align this work, where possible and necessary, with international developments, including European Union standards processes under the Cyber Resilience Act. It also notes repeated calls from respondents for implementation guides and clearer alignment with existing legislation and standards.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!