Nova Scotia launches five-person AI team to support government operations

Nova Scotia will recruit a five-person team to help integrate AI into provincial government operations, marking a more structured push to introduce AI tools into public service work across Canada. Jennifer LaPlante, deputy minister of cybersecurity and digital solutions, said the group will develop protocols for staff across departments as the province expands its use of AI.

The team is expected to identify tools that could improve productivity and efficiency in government work, including systems such as Microsoft Copilot for tasks like drafting documents and summarising information. The move suggests that Nova Scotia is shifting from limited experimentation towards a more organised approach to AI adoption in public administration.

Officials say existing rules already govern the use of some AI meeting tools and virtual assistants, while a broader responsible-use policy is still being developed. That places the province’s AI push within a wider effort to balance innovation with security, oversight, and system protection.

Funding will come from a C$4.4 million investment to establish AI capabilities during the current fiscal year. Part of that budget will go towards licences and software, with room for the team to grow over time.

The department has also launched an AI chatbot, Scottie, to answer public questions about government services. According to officials, the tool retrieves information from existing government sources rather than generating new content, suggesting an effort to limit risk while expanding AI use in public-facing services.
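
The article does not describe how Scottie is built, but the retrieval-grounded pattern it mentions (answering only from indexed official content and declining when nothing relevant is found) can be sketched in a few lines. The corpus, scoring, and threshold below are illustrative assumptions, not details of the actual system.

```python
# Hypothetical sketch of a retrieval-grounded service chatbot:
# answers come only from indexed official pages, never from free generation.

from dataclasses import dataclass

@dataclass
class Passage:
    source_url: str
    text: str

# Illustrative mini-corpus standing in for indexed government content.
CORPUS = [
    Passage("https://example.gov/health-card", "To renew a health card, visit an Access Centre with photo ID."),
    Passage("https://example.gov/road-tests", "Road test appointments can be booked online or by phone."),
]

def score(query: str, passage: Passage) -> int:
    """Count overlapping words between the query and the passage (toy relevance measure)."""
    return len(set(query.lower().split()) & set(passage.text.lower().split()))

def answer(query: str, min_score: int = 2) -> str:
    """Return the best-matching passage with its source, or decline if nothing is relevant enough."""
    best = max(CORPUS, key=lambda p: score(query, p))
    if score(query, best) < min_score:
        return "No matching information found in official sources."
    return f"{best.text} (source: {best.source_url})"

print(answer("How do I renew my health card?"))
```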

Taken together, the measures point to a broader effort to embed AI more formally into provincial government operations, not only through tools and staffing but also through internal rules governing its use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot 

Global cyber stability conference set for May 2026 in Geneva

The Cyber Stability Conference 2026 will take place on 4–5 May at the Centre International de Conférences Genève in Geneva, bringing together global stakeholders to discuss the future of ICT security and cyber governance.

Organised by the United Nations Institute for Disarmament Research, the event will run in a hybrid format during Geneva Cyber Week.

The conference comes amid growing international efforts to strengthen frameworks for responsible state behaviour in cyberspace and improve coordination on digital security challenges. It is positioned within a broader push to adapt governance systems to rapid technological change.

Discussions will focus on how cyber governance can respond to emerging technologies such as AI and quantum computing. Emphasis will be placed on aligning regulatory and security approaches with technological development to reinforce international stability.

Participants from government, academia, industry, and civil society will review past lessons, assess current risks, and explore future pathways for global ICT security governance.

Cyber stability is becoming a core pillar of global security as digital infrastructure underpins economies, governance systems, and critical services. Stronger coordination on cyber governance is essential to reducing systemic risks and ensuring technological progress does not outpace security frameworks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

World Economic Forum signals new phase for frontier technologies

Frontier technologies are entering a more explicitly geopolitical phase, according to discussions highlighted at the World Economic Forum Annual Meeting in Davos. Competition is increasingly defined by infrastructure, energy systems, supply chains and standards, rather than pure technological capability.

AI sits at the centre of this shift, with the main constraint moving from model performance to physical capacity. Rising electricity demand, grid limits and resource pressures are shaping large-scale data centre deployment, making energy infrastructure key to digital competitiveness.

New approaches are emerging to address these bottlenecks. Start-ups such as Emerald AI are developing software that enables data centres to adjust power consumption dynamically, shifting workloads, using stored energy and responding to grid conditions in real time.

Early demonstrations suggest potential reductions in peak demand, supporting more flexible integration with electricity systems.
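
Emerald AI's software is not detailed in the article, but the general demand-response idea it points to can be sketched as a simple scheduler: deferrable jobs wait out grid stress, latency-critical ones keep running, and on-site storage covers part of the remaining load. All thresholds, job types, and figures below are assumptions for illustration, not the company's actual approach.

```python
# Illustrative sketch (not Emerald AI's actual software): a data-centre scheduler
# that defers flexible jobs and draws on stored energy when the grid is stressed.

GRID_STRESS_THRESHOLD = 0.8   # assumed normalised grid-stress signal, 0..1
BATTERY_KWH = 500.0           # assumed on-site storage capacity

def plan_step(grid_stress, jobs, battery_kwh):
    """Decide which jobs run now, which are deferred, and how much stored energy to use."""
    run_now, deferred = [], []
    for job in jobs:
        # Deferrable jobs (e.g. batch training) wait out peak periods; latency-critical ones run.
        if grid_stress > GRID_STRESS_THRESHOLD and job["deferrable"]:
            deferred.append(job)
        else:
            run_now.append(job)
    demand_kwh = sum(j["kwh"] for j in run_now)
    # During stress, serve as much of the remaining demand as possible from the battery.
    from_battery = min(demand_kwh, battery_kwh) if grid_stress > GRID_STRESS_THRESHOLD else 0.0
    return run_now, deferred, demand_kwh - from_battery, battery_kwh - from_battery

jobs = [
    {"name": "model-training", "kwh": 120.0, "deferrable": True},
    {"name": "public-api", "kwh": 40.0, "deferrable": False},
]
print(plan_step(grid_stress=0.9, jobs=jobs, battery_kwh=BATTERY_KWH))
```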

Broader frontier technology trends reflect the same pattern, from robotics capital inflows in China to satellite infrastructure debates in Europe and accelerating post-quantum security standards.

Across sectors, infrastructure resilience and strategic coordination are becoming central to technological development. The shift matters because it reframes frontier technology as an infrastructure and governance issue rather than a purely innovation-driven race.

It reinforces the need to track how digital systems are increasingly constrained and enabled by energy, standards and cross-border coordination. Such a perspective helps explain where real power is concentrating in the global tech stack and where future regulatory and market tensions are likely to emerge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights.

At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.

Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.

Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.

Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.

Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.

The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.

For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment. Trust in news remains uneven: mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams, or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM and ETH Zurich announce partnership on AI and quantum algorithms

International Business Machines Corporation and the Swiss Federal Institute of Technology Zurich have announced a decade-long partnership to develop algorithms that bridge classical computing, machine learning, and quantum systems.

The collaboration will focus on creating foundational algorithms to address complex business and scientific challenges as quantum computing becomes increasingly practical. IBM will support the establishment of new professorships and research initiatives at the institution.

The partnership will concentrate on four key areas: optimisation, differential equations, linear algebra and complex system modelling, strengthening the mathematical foundations required for AI and quantum progress.

This represents a significant commitment to shaping the algorithmic future of computing. Both institutions believe that algorithms, rather than hardware or software alone, will define the next computing revolution as quantum and AI technologies converge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Responsible AI gaps highlighted in UNESCO and Thomson Reuters Foundation report

A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework.

The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms.

Some of the sharpest weaknesses lie in areas central to public-interest AI governance. Only 11% of companies said they assess environmental impact, while just 7% evaluate the human rights impact of the AI they use. Human oversight also remains limited, with only 12% reporting a policy that ensures human supervision of AI systems.

The report also points to weak accountability and data governance structures. Only a small minority of companies could identify who is responsible for ethical risks across the AI lifecycle, while three-quarters showed no evidence of policies to verify the quality of AI training data.

Fewer than one in five reported conducting privacy or data protection impact assessments specific to AI, and only one in five had policies governing data sharing with third-party AI vendors.

Workforce preparedness appears similarly underdeveloped. While 30% of companies said they offer AI training programmes, only 12% provide structured training with comprehensive coverage. The report argues that many businesses now acknowledge the importance of skills development and workforce transition, but rarely explain how workers are supported in practice or how concerns can be raised and addressed.

Taken together, the findings suggest that the main test for responsible AI is shifting from principle to proof. The issue is no longer whether companies say the right things about ethical AI, but whether they can demonstrate that accountability, oversight, and remedies actually work when AI systems are deployed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Serbia launches LORYA to turn cultural heritage into AI-ready language data

Serbia has launched LORYA, a new platform that uses AI-supported document processing to convert books, newspapers, manuscripts, and other written heritage materials into clean, structured, machine-readable data for research, education, and language technologies.

Developed by the UN Development Programme, the Mathematical Institute of the Serbian Academy of Sciences and Arts, and the National Library of Serbia, with support from France and Japan, the project is aimed not only at preserving written cultural heritage, but also at addressing a broader AI problem: the weak representation of underrepresented languages, scripts, and historical texts in digital training data.

The distinction matters. While many digitisation initiatives focus mainly on preservation and access, LORYA is also designed to prepare historical material for computational use. In practice, that means converting complex printed and handwritten documents into reusable data that can better support language technologies and future AI systems.

The platform focuses on books, newspapers, manuscripts, and other archival sources, including materials that traditional OCR systems often struggle to process. Its ability to work with handwritten, multi-script, and visually complex documents makes it especially relevant for collections that have remained difficult to digitise in a meaningful way.
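
LORYA's actual data model is not described in the article, but the kind of clean, structured, machine-readable output such a pipeline produces can be illustrated with a hypothetical per-page record; every field name and value below is an assumption for illustration, not LORYA's schema.

```python
# Illustrative sketch of the kind of structured record a heritage digitisation
# pipeline like LORYA might emit for each processed page (hypothetical schema).

import json

page_record = {
    "source": {"collection": "National Library of Serbia", "item_id": "example-0001", "page": 12},
    "script": "Cyrillic",                  # e.g. Cyrillic or Latin for Serbian material
    "language": "sr",
    "text": "[recognised page text]",      # output of OCR / handwritten-text recognition
    "recognition_confidence": 0.93,
    "layout": [{"type": "paragraph", "bbox": [120, 340, 980, 610]}],
}

# Structured records like this can feed search, research corpora, and language-technology training data.
print(json.dumps(page_record, ensure_ascii=False, indent=2))
```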

That gives the project a wider significance beyond Serbia. As AI systems continue to depend on large volumes of digital text, many smaller or historically under-digitised languages remain poorly represented in training datasets. By transforming cultural heritage into structured digital resources, LORYA frames preservation not only as an archival task but also as part of a broader effort to make AI development more linguistically inclusive.

The project has also been released as open-source software and recognised as a Digital Public Good, suggesting that it is meant to serve as more than a national pilot. Interest from UNDP teams in Iraq and Nepal indicates that the model could be adapted in other contexts where cultural heritage, language diversity, and digital capacity intersect.

Seen in that light, LORYA is not simply a heritage digitisation tool. It is also an attempt to connect cultural preservation with public-interest AI development, while arguing that historical texts, minority languages, and local knowledge systems should not remain on the margins of the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.
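
Implementation details are not given in the article, but the core idea of separating an objective metric from simulated stakeholder preferences and flagging where they diverge can be sketched roughly as follows. The scenarios, scores, and hard-coded preference values are illustrative assumptions; in the framework itself, preferences are simulated by a large language model.

```python
# Illustrative sketch (not MIT's actual SEED-SET code): compare a cost-optimal choice
# against a stakeholder-preferred choice and flag any divergence between the two.

scenarios = {
    # Hypothetical load-shedding plans: objective cost vs. a simulated preference score.
    "cut_industrial_zone": {"cost": 1.0, "preference": 0.9},
    "cut_low_income_area": {"cost": 0.7, "preference": 0.2},
    "rolling_blackouts":   {"cost": 0.9, "preference": 0.7},
}

cheapest = min(scenarios, key=lambda s: scenarios[s]["cost"])
preferred = max(scenarios, key=lambda s: scenarios[s]["preference"])

if cheapest != preferred:
    print(f"Divergence: cost-optimal plan '{cheapest}' is not the stakeholder-preferred plan '{preferred}'.")
```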

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EIB highlights AI as key driver of Croatia’s economic growth

The European Investment Bank and the Croatian National Bank have emphasised the strategic importance of AI in strengthening Croatia’s economic competitiveness. Discussions at a joint conference focused on accelerating AI adoption through coordinated investment, policy development and skills enhancement.

Despite strong investment activity among firms in Croatia, the uptake of advanced technologies remains limited. Only a small share of companies systematically use generative AI, with applications largely confined to internal processes, highlighting significant untapped potential for productivity gains.

Participants identified key structural barriers, including limited access to finance, shortages of skilled workers and regulatory uncertainty.

Addressing these challenges requires a combined approach that mobilises private capital, improves access to funding for smaller firms and supports the development of a more robust innovation ecosystem.

The EIB continues to play a central role in Europe’s digital transformation, with major funding initiatives aimed at scaling AI technologies and strengthening strategic infrastructure.

By aligning financial instruments with policy priorities, the initiative seeks to enhance long-term growth, resilience and integration into global value chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!