US agencies launch national AI workforce initiative

The US Department of Labor and the National Science Foundation have formalised a partnership to prepare the American workforce for the rapid expansion of AI.

The agreement supports the launch of the TechAccess: AI-Ready America initiative, designed to broaden access to AI education, tools, and training across industries.

Central to the programme is a proposed funding package of up to $224 million to support the creation of up to 56 state and territory coordination hubs. These hubs are expected to strengthen regional AI readiness and connect workforce systems with education and training providers.

The initiative brings together multiple federal partners, including the Department of Agriculture and the Small Business Administration, to coordinate national efforts. Existing workforce structures, including American Job Centers and apprenticeship programmes, will be integrated to support skills development and career transitions.

Alongside training efforts, the agreement includes joint research into how AI is reshaping labour markets, job requirements, and wider economic outcomes. The collaboration is positioned as a coordinated federal strategy to ensure workers and businesses can adapt to an AI-driven economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft commits $10 billion to Japan’s AI future

Microsoft Corporation announced a $10 billion investment in Japan over four years to expand AI infrastructure and strengthen cybersecurity partnerships with the government. The investment aligns with Prime Minister Sanae Takaichi’s strategy for economic growth through advanced technologies.

The company will collaborate with Japanese firms SoftBank and Sakura Internet to develop domestically based AI computing capacity, allowing Japanese businesses and government agencies to store sensitive data locally whilst accessing Microsoft Azure services.

Why does it matter?

Microsoft plans to train 1 million engineers and developers by 2030 as part of the initiative to build Japan’s digital workforce in AI and emerging technologies. The investment addresses Japan’s growing demand for cloud and AI services as part of the company’s Asia-wide expansion strategy.

The announcement, made on 3 April, reflects Microsoft’s commitment to supporting Japanese technological advancement whilst maintaining data security. Sakura Internet’s share price jumped 20 percent following the news, signalling strong market confidence in the partnership.

Nova Scotia launches five-person AI team to support government operations

Nova Scotia will recruit a five-person team to help integrate AI into provincial government operations, marking a more structured push to introduce AI tools into public service work across Canada. Jennifer LaPlante, deputy minister of cybersecurity and digital solutions, said the group will develop protocols for staff across departments as the province expands its use of AI.

The team is expected to identify tools that could improve productivity and efficiency in government work, including systems such as Microsoft Copilot for tasks like drafting documents and summarising information. The move suggests that Nova Scotia is shifting from limited experimentation towards a more organised approach to AI adoption in public administration.

Officials say existing rules already govern the use of some AI meeting tools and virtual assistants, while a broader responsible-use policy is still being developed. That places the province’s AI push within a wider effort to balance innovation with security, oversight, and system protection.

Funding will come from a C$4.4 million investment to establish AI capabilities during the current fiscal year. Part of that budget will go towards licences and software, with room for the team to grow over time.

The department has also launched an AI chatbot, Scottie, to answer public questions about government services. According to officials, the tool retrieves information from existing government sources rather than generating new content, suggesting an effort to limit risk while expanding AI use in public-facing services.

Taken together, the measures point to a broader effort to embed AI more formally into provincial government operations, not only through tools and staffing but also through internal rules governing its use.

Global cyber stability conference set for May 2026 in Geneva

The Cyber Stability Conference 2026 will take place on 4–5 May at the Centre International de Conférences Genève in Geneva, bringing together global stakeholders to discuss the future of ICT security and cyber governance.

Organised by the United Nations Institute for Disarmament Research, the event will run in a hybrid format during Geneva Cyber Week.

The conference comes amid growing international efforts to strengthen frameworks for responsible state behaviour in cyberspace and improve coordination on digital security challenges. It is positioned within a broader push to adapt governance systems to rapid technological change.

Discussions will focus on how cyber governance can respond to emerging technologies such as AI and quantum computing. Emphasis will be placed on aligning regulatory and security approaches with technological development to reinforce international stability.

Participants from government, academia, industry, and civil society will review past lessons, assess current risks, and explore future pathways for global ICT security governance.

Cyber stability is becoming a core pillar of global security as digital infrastructure underpins economies, governance systems, and critical services. Stronger coordination on cyber governance is essential to reducing systemic risks and ensuring technological progress does not outpace security frameworks.

World Economic Forum signals new phase for frontier technologies

Frontier technologies are entering a more explicitly geopolitical phase, according to discussions highlighted at the World Economic Forum Annual Meeting in Davos. Competition is increasingly defined by infrastructure, energy systems, supply chains and standards, rather than pure technological capability.

AI sits at the centre of this shift, with the main constraint moving from model performance to physical capacity. Rising electricity demand, grid limits and resource pressures are shaping large-scale data centre deployment, making energy infrastructure key to digital competitiveness.

New approaches are emerging to address these bottlenecks. Start-ups such as Emerald AI are developing software that enables data centres to adjust power consumption dynamically, shifting workloads, using stored energy and responding to grid conditions in real time.

Early demonstrations suggest potential reductions in peak demand, supporting more flexible integration with electricity systems.

Broader frontier technology trends reflect the same pattern, from robotics capital inflows in China to satellite infrastructure debates in Europe and accelerating post-quantum security standards.

Across sectors, infrastructure resilience and strategic coordination are becoming central to technological development. The shift matters because it reframes frontier technology as an infrastructure and governance issue rather than a purely innovation-driven race.

It reinforces the need to track how digital systems are increasingly constrained and enabled by energy, standards and cross-border coordination. Such a perspective helps explain where real power is concentrating in the global tech stack and where future regulatory and market tensions are likely to emerge.

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights.

At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.

Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.

Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.

Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.

Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.

The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.

For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.

UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment. Trust in news remains uneven: mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams, or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

IBM and ETH Zurich announce partnership on AI and quantum algorithms

International Business Machines Corporation and the Swiss Federal Institute of Technology Zurich have announced a decade-long partnership to develop algorithms that bridge classical computing, machine learning, and quantum systems.

The collaboration will focus on creating foundational algorithms to address complex business and scientific challenges as quantum computing becomes increasingly practical. IBM will support the establishment of new professorships and research initiatives at the institution.

The partnership will concentrate on four key areas: optimisation, differential equations, linear algebra and complex system modelling, strengthening the mathematical foundations required for AI and quantum progress.

This represents a significant commitment to shaping the algorithmic future of computing. Both institutions believe that algorithms, rather than hardware or software alone, will define the next computing revolution as quantum and AI technologies converge in Zurich.

Responsible AI gaps highlighted in UNESCO and Thomson Reuters Foundation report

A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework.

The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms.

Some of the sharpest weaknesses lie in areas central to public-interest AI governance. Only 11% of companies said they assess environmental impact, while just 7% evaluate the human rights impact of the AI they use. Human oversight also remains limited, with only 12% reporting a policy that ensures human supervision of AI systems.

The report also points to weak accountability and data governance structures. Only a small minority of companies could identify who is responsible for ethical risks across the AI lifecycle, while three-quarters showed no evidence of policies to verify the quality of AI training data.

Fewer than one in five reported conducting privacy or data protection impact assessments specific to AI, and only one in five had policies governing data sharing with third-party AI vendors.

Workforce preparedness appears similarly underdeveloped. While 30% of companies said they offer AI training programmes, only 12% provide structured training with comprehensive coverage. The report argues that many businesses now acknowledge the importance of skills development and workforce transition, but rarely explain how workers are supported in practice or how concerns can be raised and addressed.

Taken together, the findings suggest that the main test for responsible AI is shifting from principle to proof. The issue is no longer whether companies say the right things about ethical AI, but whether they can demonstrate that accountability, oversight, and remedies actually work when AI systems are deployed.

Serbia launches LORYA to turn cultural heritage into AI-ready language data

Serbia has launched LORYA, a new platform that uses AI-supported document processing to convert books, newspapers, manuscripts, and other written heritage materials into clean, structured, machine-readable data for research, education, and language technologies.

Developed by the UN Development Programme, the Mathematical Institute of the Serbian Academy of Sciences and Arts, and the National Library of Serbia, with support from France and Japan, the project is aimed not only at preserving written cultural heritage, but also at addressing a broader AI problem: the weak representation of underrepresented languages, scripts, and historical texts in digital training data.

The distinction matters. While many digitisation initiatives focus mainly on preservation and access, LORYA is also designed to prepare historical material for computational use. In practice, that means converting complex printed and handwritten documents into reusable data that can better support language technologies and future AI systems.

The platform focuses on books, newspapers, manuscripts, and other archival sources, including materials that traditional OCR systems often struggle to process. Its ability to work with handwritten, multi-script, and visually complex documents makes it especially relevant for collections that have remained difficult to digitise in a meaningful way.

That gives the project a wider significance beyond Serbia. As AI systems continue to depend on large volumes of digital text, many smaller or historically under-digitised languages remain poorly represented in training datasets. By transforming cultural heritage into structured digital resources, LORYA frames preservation not only as an archival task but also as part of a broader effort to make AI development more linguistically inclusive.

The project has also been released as open-source software and recognised as a Digital Public Good, suggesting that it is meant to serve as more than a national pilot. Interest from UNDP teams in Iraq and Nepal indicates that the model could be adapted in other contexts where cultural heritage, language diversity, and digital capacity intersect.

Seen in that light, LORYA is not simply a heritage digitisation tool. It is also an attempt to connect cultural preservation with public-interest AI development, while arguing that historical texts, minority languages, and local knowledge systems should not remain on the margins of the AI era.