World Economic Forum signals new phase for frontier technologies

Frontier technologies are entering a more explicitly geopolitical phase, according to discussions highlighted at the World Economic Forum Annual Meeting in Davos. Competition is increasingly defined by infrastructure, energy systems, supply chains and standards, rather than pure technological capability.

AI sits at the centre of this shift, with the main constraint moving from model performance to physical capacity. Rising electricity demand, grid limits and resource pressures are shaping large-scale data centre deployment, making energy infrastructure key to digital competitiveness.

New approaches are emerging to address these bottlenecks. Start-ups such as Emerald AI are developing software that enables data centres to adjust power consumption dynamically, shifting workloads, using stored energy and responding to grid conditions in real time.

Early demonstrations suggest potential reductions in peak demand, supporting more flexible integration with electricity systems.
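The core idea behind such demand-flexible scheduling can be sketched in a few lines: run urgent workloads regardless, and defer flexible ones when grid headroom is tight. The job names, power figures and greedy policy below are purely illustrative assumptions, not Emerald AI's actual system.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float
    deferrable: bool  # can this workload be shifted in time?

def schedule(jobs, grid_headroom_kw):
    """Greedily run jobs within the grid's available headroom,
    deferring flexible workloads when capacity is tight."""
    running, deferred = [], []
    used = 0.0
    # Non-deferrable jobs sort first (False < True), so they are placed before flexible ones.
    for job in sorted(jobs, key=lambda j: j.deferrable):
        if not job.deferrable or used + job.power_kw <= grid_headroom_kw:
            running.append(job.name)
            used += job.power_kw
        else:
            deferred.append(job.name)  # shift to an off-peak window

    return running, deferred

jobs = [
    Job("inference-serving", 400, deferrable=False),
    Job("model-training", 800, deferrable=True),
    Job("batch-analytics", 300, deferrable=True),
]
running, deferred = schedule(jobs, grid_headroom_kw=1000)
print(running, deferred)  # → ['inference-serving', 'batch-analytics'] ['model-training']
```

A production system would add real-time grid signals and stored-energy dispatch, but the same principle applies: treat some compute as a flexible load rather than a fixed one.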

Broader frontier technology trends reflect the same pattern, from robotics capital inflows in China to satellite infrastructure debates in Europe and accelerating post-quantum security standards.

Across sectors, infrastructure resilience and strategic coordination are becoming central to technological development. The shift matters because it reframes frontier technology as an infrastructure and governance issue rather than a purely innovation-driven race.

It reinforces the need to track how digital systems are increasingly constrained and enabled by energy, standards and cross-border coordination. Such a perspective helps explain where real power is concentrating in the global tech stack and where future regulatory and market tensions are likely to emerge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Amnesty International warns EU tech law reforms could weaken GDPR and AI Act protections

Amnesty International has warned that proposed EU reforms presented as a way to simplify digital regulation and boost competitiveness could weaken core safeguards for privacy and fundamental rights.
At the centre of the concern is the European Commission’s ‘Digital Omnibus’ initiative, which would affect major pieces of legislation, including the General Data Protection Regulation and the AI Act.

Amnesty and other civil society groups argue that the package risks reopening key protections in the EU’s digital rulebook under the banner of regulatory simplification.

Among the most controversial proposals are changes to how personal data is defined, along with exceptions that could make it easier for companies to retain or reuse data for AI systems. Critics say that such changes would weaken safeguards intended to limit excessive data collection and to preserve accountability in how personal information is processed.

Concerns also extend to the AI Act, where proposed adjustments could reduce obligations for high-risk systems. According to Amnesty, companies may be given greater discretion in how they assess and disclose risks, potentially lowering transparency and limiting external scrutiny.

Delays in implementation, the organisation argues, could also allow harmful systems to remain in use without full regulatory oversight.

The broader reform agenda may reach beyond privacy and AI rules. Future ‘fitness checks’ could also affect frameworks such as the Digital Services Act and the Digital Markets Act, raising wider concerns about whether the EU’s digital regulatory model is being softened in the name of competitiveness.

For critics, the cumulative risk is that the balance of the EU digital framework could begin to shift away from rights protection and public accountability, and towards greater corporate flexibility in areas linked to surveillance, discrimination, and market power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!


UK’s Ofcom report reveals evolving online habits and growing AI reliance

New Ofcom research suggests that UK adults are becoming more cautious and passive in their use of social media, even as interest in AI tools grows, pointing to a wider shift in how people experience digital life.

While social media remains widely used, the report indicates that users are participating less actively and becoming more selective about what they share and how visible they are online.

That shift is tied in part to growing unease about digital well-being. Concerns about screen time and the wider effects of online platforms are rising, with fewer adults convinced that the benefits of being online outweigh the risks. Many say they are actively trying to limit their usage, reflecting broader anxieties about the impact of digital media on mental health and everyday life.

At the same time, AI adoption is accelerating, especially among younger users. Ofcom’s findings suggest that people are using AI not only for productivity and creative tasks, but also, in some cases, for conversational and emotional support, pointing to a changing relationship between users and digital tools.

Other findings reinforce the sense of a more fragmented digital environment. Trust in news remains uneven: mainstream sources still hold a central place but face growing scepticism, and confidence in digital skills does not always translate into an ability to identify misinformation, scams, or other online risks.

Taken together, the findings suggest that the UK’s digital habits are not simply expanding but changing in character. Users appear to be growing more wary of social platforms, more alert to digital harms, and more open to new forms of interaction through AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM and ETH Zurich announce partnership on AI and quantum algorithms

International Business Machines Corporation and the Swiss Federal Institute of Technology Zurich have announced a decade-long partnership to develop algorithms that bridge classical computing, machine learning, and quantum systems.

The collaboration will focus on creating foundational algorithms to address complex business and scientific challenges as quantum computing becomes increasingly practical. IBM will support the establishment of new professorships and research initiatives at the institution.

The partnership will concentrate on four key areas: optimisation, differential equations, linear algebra and complex system modelling, strengthening the mathematical foundations required for AI and quantum progress.

This represents a significant commitment to shaping the algorithmic future of computing. Both institutions believe that algorithms, rather than hardware or software alone, will define the next computing revolution as quantum and AI technologies converge in Zurich.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Responsible AI gaps highlighted in UNESCO and Thomson Reuters Foundation report

A new global report from UNESCO and the Thomson Reuters Foundation suggests that companies are adopting AI faster than they are building the internal systems needed to govern it responsibly, exposing significant gaps in oversight, accountability, and risk management. Based on data from 3,000 companies, the report found that 44% have an AI strategy, but only 10% are publicly committed to following an AI governance framework.

The gap, according to the report, is no longer one of awareness but of implementation. Many companies now present responsible AI as a principle or ambition, yet provide far less detail on where AI is used, how risks are managed in practice, who is responsible when systems fail, or how concerns are escalated internally. Governance is often described at a conceptual level, but much less often backed by visible operational mechanisms.

Some of the sharpest weaknesses lie in areas central to public-interest AI governance. Only 11% of companies said they assess environmental impact, while just 7% evaluate the human rights impact of the AI they use. Human oversight also remains limited, with only 12% reporting a policy that ensures human supervision of AI systems.

The report also points to weak accountability and data governance structures. Only a small minority of companies could identify who is responsible for ethical risks across the AI lifecycle, while three-quarters showed no evidence of policies to verify the quality of AI training data.

Fewer than one in five reported conducting privacy or data protection impact assessments specific to AI, and only one in five had policies governing data sharing with third-party AI vendors.

Workforce preparedness appears similarly underdeveloped. While 30% of companies said they offer AI training programmes, only 12% provide structured training with comprehensive coverage. The report argues that many businesses now acknowledge the importance of skills development and workforce transition, but rarely explain how workers are supported in practice or how concerns can be raised and addressed.

Taken together, the findings suggest that the main test for responsible AI is shifting from principle to proof. The issue is no longer whether companies say the right things about ethical AI, but whether they can demonstrate that accountability, oversight, and remedies actually work when AI systems are deployed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Serbia launches LORYA to turn cultural heritage into AI-ready language data

Serbia has launched LORYA, a new platform that uses AI-supported document processing to convert books, newspapers, manuscripts, and other written heritage materials into clean, structured, machine-readable data for research, education, and language technologies.

Developed by the UN Development Programme, the Mathematical Institute of the Serbian Academy of Sciences and Arts, and the National Library of Serbia, with support from France and Japan, the project is aimed not only at preserving written cultural heritage, but also at addressing a broader AI problem: the weak representation of underrepresented languages, scripts, and historical texts in digital training data.

The distinction matters. While many digitisation initiatives focus mainly on preservation and access, LORYA is also designed to prepare historical material for computational use. In practice, that means converting complex printed and handwritten documents into reusable data that can better support language technologies and future AI systems.

The platform focuses on books, newspapers, manuscripts, and other archival sources, including materials that traditional OCR systems often struggle to process. Its ability to work with handwritten, multi-script, and visually complex documents makes it especially relevant for collections that have remained difficult to digitise in a meaningful way.
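One small but essential step in turning scanned heritage text into machine-readable data is normalising raw OCR output, which typically contains line-break hyphenation and irregular spacing. The sketch below illustrates that kind of post-processing in general terms; the cleaning rules and Serbian sample text are illustrative assumptions, not LORYA's actual pipeline.

```python
import re
import json

def normalize_ocr_text(raw: str) -> str:
    """Clean raw OCR output into continuous, machine-readable text:
    join words hyphenated across line breaks and collapse whitespace."""
    text = re.sub(r"-\s*\n\s*", "", raw)         # re-join hyphenated line breaks
    text = re.sub(r"\s*\n\s*", " ", text)        # fold remaining line breaks into spaces
    return re.sub(r"\s{2,}", " ", text).strip()  # collapse repeated spaces

# Hypothetical OCR output from a scanned newspaper page (Cyrillic script).
page = "Кул-\nтурно наслеђе се  дигитализује\nза будуће генерације."
record = {"source": "newspaper-scan-001", "text": normalize_ocr_text(page)}
print(json.dumps(record, ensure_ascii=False))
```

Real heritage pipelines layer much more on top, such as handwriting recognition, multi-script handling and layout analysis, but structured records like this are the form in which historical text becomes usable for language technologies.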

That gives the project a wider significance beyond Serbia. As AI systems continue to depend on large volumes of digital text, many smaller or historically under-digitised languages remain poorly represented in training datasets. By transforming cultural heritage into structured digital resources, LORYA frames preservation not only as an archival task but also as part of a broader effort to make AI development more linguistically inclusive.

The project has also been released as open-source software and recognised as a Digital Public Good, suggesting that it is meant to serve as more than a national pilot. Interest from UNDP teams in Iraq and Nepal indicates that the model could be adapted in other contexts where cultural heritage, language diversity, and digital capacity intersect.

Seen in that light, LORYA is not simply a heritage digitisation tool. It is also an attempt to connect cultural preservation with public-interest AI development, while arguing that historical texts, minority languages, and local knowledge systems should not remain on the margins of the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

MIT develops AI framework to test ethics in autonomous systems

Researchers at MIT have introduced a new framework designed to evaluate the ethical impact of autonomous systems used in high-stakes environments. The approach aims to identify cases where AI-driven decisions may be technically efficient but fail to meet fairness expectations.

Growing reliance on AI in areas such as energy distribution and traffic management has raised concerns about unintended bias. Cost-optimised systems can still disadvantage communities, especially when ethical factors are hard to measure.

The framework, known as SEED-SET, separates objective performance metrics from subjective human values. A large language model is used to simulate stakeholder preferences, enabling the system to compare scenarios and detect where outcomes diverge from ethical expectations.

Testing shows the method generates more relevant scenarios while reducing manual analysis. Findings highlight its potential to improve transparency and support more balanced decision-making before AI systems are deployed.
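The underlying comparison can be illustrated with a toy example: score each scenario on objective efficiency and on simulated stakeholder preference, and flag cases where the two diverge. This is a simplified sketch of the general idea, not SEED-SET itself; in the actual framework a large language model would generate the stakeholder scores, which are hardcoded here, and all names and thresholds are assumptions.

```python
def flag_divergent_scenarios(scenarios, gap_threshold=0.3):
    """Flag scenarios where objective efficiency and (simulated)
    stakeholder-value scores diverge beyond a threshold.
    Both scores are assumed normalised to [0, 1]."""
    flagged = []
    for name, efficiency, stakeholder_score in scenarios:
        if efficiency - stakeholder_score > gap_threshold:
            flagged.append(name)  # efficient on paper, but ethically contested
    return flagged

# Toy traffic-management scenarios: (name, objective efficiency, simulated stakeholder preference)
scenarios = [
    ("reroute-through-suburb", 0.95, 0.40),  # fast overall, but burdens one community
    ("balanced-routing", 0.80, 0.75),
]
print(flag_divergent_scenarios(scenarios))  # → ['reroute-through-suburb']
```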

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EIB highlights AI as key driver of Croatia’s economic growth

The European Investment Bank and the Croatian National Bank have emphasised the strategic importance of AI in strengthening Croatia’s economic competitiveness. Discussions at a joint conference focused on accelerating AI adoption through coordinated investment, policy development and skills enhancement.

Despite strong investment activity among firms in Croatia, the uptake of advanced technologies remains limited. Only a small share of companies systematically use generative AI, with applications largely confined to internal processes, highlighting significant untapped potential for productivity gains.

Participants identified key structural barriers, including limited access to finance, shortages of skilled workers and regulatory uncertainty.

Addressing these challenges requires a combined approach that mobilises private capital, improves access to funding for smaller firms and supports the development of a more robust innovation ecosystem.

The EIB continues to play a central role in Europe’s digital transformation, with major funding initiatives aimed at scaling AI technologies and strengthening strategic infrastructure.

By aligning financial instruments with policy priorities, the initiative seeks to enhance long-term growth, resilience and integration into global value chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EPO accelerates digital patent shift with paperless system by 2027

The European Patent Office (EPO) is accelerating its transition towards a fully digital patent system, with plans to implement a paperless patent-granting process by 2027.

Discussions at the latest eSACEPO meeting highlighted steady progress and broad stakeholder support for modernising patent workflows.

Electronic filing and communication are set to become the default, with paper-based processes limited to exceptional cases. The shift aims to improve efficiency and accessibility, supported by legal adjustments and the gradual introduction of structured data formats to enhance processing accuracy.

Digital tools continue to evolve, with the MyEPO platform expanding its functionality through interface upgrades, self-service features and new capabilities such as colour drawing submissions.

The rollout of DOCX filing, alongside optional PDF backups, reflects a cautious approach designed to balance innovation with reliability.

AI is increasingly integrated into patent examination processes, supporting tasks such as search and documentation.

However, the EPO maintains a human-centric model, ensuring that decision-making authority remains with patent examiners while AI enhances productivity and consistency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Oracle agentic AI tool streamlines CAD to procurement workflows

Oracle has launched a new agentic AI application designed to connect engineering and procurement into a single workflow. The Design-to-Source Workspace for product lifecycle management aims to reduce delays, improve traceability, and minimise compliance risks across sourcing processes.

Traditional design-to-source models often operate sequentially, with engineering and procurement working in separate stages. Oracle’s approach replaces that structure with a continuous, coordinated loop, where AI evaluates cost, supply, and risk in real time as designs evolve.

The platform translates CAD data directly into sourcing actions, eliminating manual input and reducing errors. Automated workflows handle supplier identification, risk assessment, and request-for-quote execution, while maintaining compliance and auditability throughout the process.

Expected gains include up to 60% less manual work, significantly faster RFQ cycles, and a 20% to 30% reduction in overall sourcing timelines. Greater accuracy and improved decision-making allow teams to focus on higher-value tasks rather than repetitive coordination.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot