The European Investment Fund (EIF) will manage a €210 million financing initiative to support high-tech businesses in Bulgaria, focusing on sectors such as AI, microelectronics and advanced technologies.
The programme operates within the JEREMIE Bulgaria framework, which aims to improve access to capital for small and medium-sized enterprises.
The initiative reflects a broader EU strategy to strengthen innovation capacity and support sustainable economic growth through targeted investment mechanisms.
The EIF, a subsidiary of the EIB Group, will prioritise equity financing and scale-up support to address structural gaps that often limit the expansion of high-growth companies within national markets.
The programme also aligns with wider efforts to retain technological talent and reduce reliance on external capital by reinforcing domestic innovation ecosystems.
By supporting dual-use technologies and strategic sectors, the measure contributes to both economic competitiveness and technological resilience.
Under the programme's revolving funding model, reinvested capital is expected to sustain long-term financing capacity, reinforcing Bulgaria's position within regional venture capital networks and supporting the development of a more mature innovation economy.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A global AI governance initiative jointly drafted by 16 organisations, including the Chinese Association for AI, has been released under the auspices of the China Science and Technology Policy Research Association.
According to the text, the initiative calls for an open, fair, inclusive, and effective global AI governance system. Its main elements include ensuring benefits and improving livelihoods, maintaining security and preventing risks, upholding fairness, promoting balanced development, encouraging exchange and mutual learning, and building consensus.
Speakers cited in the release said rapid advances in AI are creating governance pressures that existing frameworks struggle to address. Liang Zheng, deputy secretary-general of the China Institute for Science and Technology Policy and director of the Institute for AI International Governance at Tsinghua University, said governance is not keeping pace with technological development and pointed to widening capability gaps between countries, as well as difficulties in building broader governance consensus.
The text also highlights risks linked to newer AI systems and agents. Cui Yong, a full professor at Tsinghua University, deputy director of the Network Technology Institute, council member of the China Communications Standards Association, and co-chair of the Internet Engineering Task Force Softwire Working Group on IPv6 transition, said AI agents are raising new governance concerns.
Cui said those concerns include responsibility for autonomous machine decision-making, the use of agents in crimes such as telecom fraud, and cross-border data leakage and privacy infringements linked to multi-agent interconnection.
The initiative is presented as drawing on the professional, neutral, and cross-border role of science and technology associations. The release says such bodies can help support evidence-based rulemaking, international exchange, participation in standard-setting, and talent development across both technical and governance fields.
Every technological leap forces society to renegotiate its relationship with power. Intelligence, once a uniquely human advantage, is now being abstracted, scaled, and embedded into machines. As AI evolves from a tool into an autonomous force shaping economies and institutions, the question is no longer what AI can do, but who it will ultimately serve.
A new framework published by OpenAI sets out a vision for managing the transition towards advanced AI systems, often described as superintelligence. Framed as a policy agenda for governments and institutions, it attempts to define how societies should respond to rapid advances in AI governance, economic transformation, and workforce disruption.
At its core, the document is not regulation but influence: an attempt to shape how policymakers think about AI industrial policy, productivity gains, and the redistribution of technological power.
AI industrial policy and the next economic transformation
The central argument is that AI will act as a general-purpose technology comparable to electricity or the combustion engine. It promises higher productivity, lower costs, and accelerated innovation across industries. In policy terms, this aligns with broader discussions around AI-driven productivity growth and economic restructuring.
However, historical precedent suggests that such transitions are rarely evenly distributed. Industrial revolutions typically begin with labour displacement, rising inequality, and capital concentration, before broader gains are realised. AI may intensify this dynamic due to its dependence on compute infrastructure, proprietary models, and large-scale data ecosystems.
Economic power may become increasingly concentrated among a small number of AI developers and infrastructure providers, posing a structural risk of reinforcing existing inequalities rather than reducing them.
The return of industrial policy in the AI economy
A key feature of the document is its explicit endorsement of AI industrial policy as a necessary response to market limitations. Governments, it argues, must play a more active role in shaping outcomes through regulation, investment, and public-private coordination.
This approach reflects a broader global shift in economic thinking. Strategic sectors such as semiconductors, energy, and digital infrastructure are already experiencing increased state intervention. AI now joins that category as a critical technology.
Yet this approach introduces a significant tension. When leading AI firms contribute directly to the design of AI regulation and governance frameworks, the risk of regulatory capture increases. Policies intended to ensure fairness and safety may inadvertently reinforce the dominance of incumbent companies by raising compliance costs and technical barriers for smaller competitors.
In this sense, AI industrial policy may not only guide innovation but also determine market entry, competition, and the long-term economic structure.
Redistribution, taxation, and the question of AI wealth
The document places strong emphasis on economic inclusion in the AI economy, proposing mechanisms such as a public wealth fund, AI taxation, and expanded access to capital markets. These ideas are designed to address one of the central challenges of AI-driven growth: the potential for extreme wealth concentration.
As AI systems increase productivity while reducing reliance on human labour, traditional tax bases such as wages and payroll contributions may weaken. The proposal to tax AI-generated profits or automated labour reflects an attempt to stabilise public finances in an increasingly automated economy.
Equally significant is the idea of a ‘right to AI’, which frames access to AI as a foundational requirement for participation in modern economic life. This positions AI not merely as a tool, but as a form of digital infrastructure essential to economic agency and inclusion.
However, these proposals face major implementation challenges. Measuring AI-generated value is complex, particularly in hybrid systems where human and machine inputs are deeply integrated. Without clear definitions, AI taxation frameworks and redistribution mechanisms could prove difficult to enforce at scale.
Workforce disruption and the future of work
The document recognises that AI will significantly reshape labour markets. Many tasks that currently require hours of human effort are already being automated, with future systems expected to handle more complex, multi-step workflows.
To manage this transition, the proposal highlights reskilling programmes, portable benefits systems, and adaptive social safety nets, alongside experimental ideas such as a reduced working week. These measures aim to mitigate the impact of automation and workforce disruption while maintaining economic stability.
However, the pace of change introduces uncertainty. Historically, labour markets have adjusted over decades, allowing new roles to emerge gradually. AI-driven disruption may occur much faster, compressing adjustment periods and increasing transitional risk.
While the document highlights expansion in sectors such as healthcare, education, and care services, these ‘human-centred jobs’ require substantial investment in training, wages, and institutional support to absorb displaced workers effectively.
AI safety, governance, and systemic control
Beyond economic considerations, the proposal places a strong emphasis on AI safety, auditing frameworks, and risk mitigation systems. The proposed measures include model evaluation standards, incident reporting mechanisms, and international coordination structures.
These safeguards respond to growing concerns around cybersecurity risks, biosecurity threats, and systemic model misalignment. As AI systems become more autonomous and embedded in critical infrastructure, governance mechanisms must evolve accordingly.
However, safety frameworks also introduce questions of control. Determining which systems are classified as high-risk inevitably centralises authority within regulatory and institutional bodies. In practice, this may restrict access to advanced AI systems to organisations capable of meeting stringent compliance requirements.
A structural trade-off between security and openness is emerging in the AI economy, raising questions about how innovation and oversight can coexist without reinforcing centralisation.
Strategic influence and the future of AI governance
The proposal from OpenAI is both policy-oriented and strategically positioned. It acknowledges legitimate risks, including inequality, labour disruption, and systemic instability, while offering a roadmap for managing them through structured intervention.
At the same time, it reflects the perspective of a leading actor in the AI industry. As a result, its recommendations exist at the intersection of public interest and commercial strategy. The dual role raises important questions about who defines AI governance frameworks and how economic power is distributed in the intelligence age.
The broader challenge is not only technological but also institutional: ensuring that AI industrial policy, regulation, ethics and economic design are shaped through transparent and democratic processes, rather than through concentrated private influence.
AI industrial policy will define economic power
AI is no longer solely a technological development; it is a structural force reshaping global economic systems. The emergence of AI industrial policy frameworks reflects an attempt to manage this transformation proactively rather than reactively.
The success or failure of these approaches will determine whether AI-driven growth leads to broader prosperity or deeper concentration of wealth and power. Without effective governance, the risks of inequality and centralisation are significant. With carefully designed policies, there is real potential to expand access, improve productivity, and distribute benefits more widely.
Digital diplomacy may increasingly come to the fore as a mechanism for arbitrating competing approaches to AI policy and governance across jurisdictions. As regulatory frameworks diverge, diplomatic channels could serve to bridge gaps, negotiate standards, and balance strategic interests, positioning digital diplomacy as a practical tool for managing fragmentation in the evolving AI economy.
Ultimately, the intelligence age will not be defined by technology alone, but by the AI governance systems, economic frameworks, and industrial policy decisions that guide its development. The outcome will depend on the extent to which global stakeholders succeed in building a shared and coordinated vision for its future.
The Chief Executive of Hong Kong, John Lee, met the Director of the Cyberspace Administration of China (CAC), Zhuang Rongwen, in Hong Kong to discuss cooperation in innovation and technology.
During the meeting, officials from the Innovation, Technology and Industry Bureau and the CAC signed a Memorandum of Understanding (MOU) on innovation and technology development. The agreement covers areas including AI, cross-border data flow and blockchain.
The MOU aims to support the development of Hong Kong as an international innovation and technology centre. It also focuses on strengthening cybersecurity cooperation and promoting the digital economy through technological development.
Officials said the agreement aligns with China’s national development plans and supports Hong Kong’s integration into broader economic strategies. It also highlights plans to enhance international exchanges and technology-driven economic growth.
The Chief Executive said Hong Kong will continue to expand its role as a technology and investment hub under the ‘one country, two systems’ framework. The CAC said the partnership will support long-term innovation and development goals.
Environment and Climate Change Canada has announced the launch of a hybrid AI weather forecasting model aimed at improving predictions of severe weather. The system combines AI with traditional physics-based forecasting methods.
According to Environment and Climate Change Canada, the model uses AI to analyse large datasets while relying on established models to account for local weather factors such as temperature, wind and precipitation. This combination is expected to improve forecast accuracy.
The department states the system will enhance performance across all forecast timeframes and provide earlier warnings of major weather events. In some cases, forecasts could identify large systems more than 24 hours earlier than current capabilities.
Environment and Climate Change Canada said the model has been extensively tested alongside existing systems and will support better preparedness and public safety as extreme weather events increase in Canada.
The US Department of Education has introduced a new supplemental priority focused on advancing AI in education, published in the Federal Register. The measure is intended for use in discretionary grant programmes.
According to the US Department of Education, the priority and related definitions may be applied across current and future funding competitions. The Secretary can adopt all or part of the priority depending on programme needs.
The initiative builds on earlier supplemental priorities covering areas such as literacy, educational choice, meaningful learning and workforce readiness. It forms part of a broader framework guiding federal education funding in the US.
Why does it matter?
The new priority will take effect in May 2026, expanding the role of AI in US education policy and grant allocation. It reflects a wider global shift in which AI plays a more prominent role in education.
Microsoft has launched the MPowerHer initiative in Singapore to support women in building AI and digital skills through training, mentorship, and career pathways. The programme is delivered with partners including SG Women in Tech, Mums@Work, and Code; Without Barriers.
The initiative was officially launched by Minister of State for the Ministry of Digital Development and Information, Rahayu Mahzam, at Microsoft Public Sector Solutions Day. It aims to support women across different life and career stages, including those returning to work after a career break.
MPowerHer combines foundational AI training with practical, team-based projects and career support. It also provides access to mentorship networks and community programmes designed to help participants move into employment or entrepreneurship.
The programme includes training in AI fundamentals, Microsoft Copilot, AI agents, and low-code and no-code tools. It is open to members of national communities such as SG Women in Tech, Mums@Work, and Code; Without Barriers, as well as other women across Singapore.
Microsoft Singapore Managing Director Wee Luen Chia said the initiative focuses on ensuring women are included in the AI-driven workforce. He added that it supports inclusive skills development and prepares participants for opportunities in the digital economy.
The 2026 World Internet Conference Asia-Pacific Summit has opened in Hong Kong, hosted by the World Internet Conference, organised by the Hong Kong Special Administrative Region Government, and co-organised by the Innovation, Technology and Industry Bureau.
The Hong Kong government says the two-day summit is expected to bring together around 1,000 participants from more than 50 countries and regions, including government and business leaders, representatives of international organisations, and experts and scholars.
The programme includes remarks by Hong Kong Chief Executive John Lee and World Internet Conference Chairman and Director of the Cyberspace Administration of China Zhuang Rongwen, alongside other invited speakers from government, industry, and international organisations.
A ministerial meeting was convened during the summit, with officials and representatives of international organisations discussing topics including how AI can support high-quality economic growth. The programme also includes a government-enterprise dialogue and a main forum focused on the digital economy, innovation, and technology development.
Six sub-forums are scheduled as part of the summit, covering innovation and application of AI agents, digital finance, AI security and governance, AI for a better life, digital and intelligent health, and digital transformation and dissemination of classical texts.
The Belgian Data Protection Authority has outlined the impact of AI on privacy in a new publication, highlighting growing concerns around data use and protection. The analysis forms part of its ongoing work on emerging technologies.
According to the Belgian Data Protection Authority, AI systems rely on large volumes of data, which can increase risks related to the processing of personal data and compliance with existing regulations. This raises questions about transparency and accountability.
The authority notes that AI can make it more difficult for individuals to understand how their data is used, particularly in complex or automated decision-making systems. This may challenge established data protection principles.
The authority emphasises the need to adapt regulatory approaches and safeguards to ensure privacy rights remain protected as AI adoption expands in Belgium.
UNESCO will launch the Observatory on AI in Education for Latin America and the Caribbean at a high-level event during the 2026 Forum of the Countries of Latin America and the Caribbean on Sustainable Development, organised by the Economic Commission for Latin America and the Caribbean.
The observatory is intended to support states in integrating AI into education systems across the region. UNESCO says the initiative is being developed with regional and international partners, including the Development Bank of Latin America and the Caribbean, the National Centre for AI of Chile, the Regional Center for Studies on the Development of the Information Society of Brazil, and the Economic Commission for Latin America and the Caribbean.
UNESCO describes the observatory as a regional cooperation platform bringing together knowledge production, institutional strengthening, and technical assistance in response to the growing use of AI in teaching, learning, and educational management. Its work covers research and policy, capacity development, innovation, and regional collaboration.
The organisation says the observatory will support comparative analysis, identify opportunities and risks, and assist in the design of regulatory frameworks, national strategies, and pilot initiatives. It also presents the launch as a coordination space for ministries of education, universities, research centres, the technology sector, civil society, and multilateral organisations.