World Economic Forum report calls for shift towards cyber resilience amid global threats

A World Economic Forum report states that the growing complexity of global cyber threats requires a shift from traditional cybersecurity approaches towards a broader model of cyber resilience.

The report notes that with nearly 70% of the global population online, digital infrastructure underpins critical sectors including healthcare, finance and public services. While interconnected systems deliver significant benefits, they also create cascading risks that can spread rapidly across borders and industries.

Recent cyber incidents have demonstrated how local breaches can escalate into global disruptions, exposing vulnerabilities in highly interconnected systems, the report notes. At the same time, the rise of state-linked cyber activity and large-scale cybercrime adds further complexity to the threat landscape.

The report by the WEF highlights fragmentation as a major barrier to effective response. Differences in political priorities, regulatory frameworks and technical capabilities create gaps that attackers can exploit, while limiting the ability of governments and organisations to coordinate effectively.

Emerging technologies such as AI and quantum computing are expanding both capabilities and risks, the report states.

The WEF report calls for a more coordinated global approach, including implementation of international norms, stronger capacity-building efforts and enhanced cooperation between governments, industry and civil society.

Why does it matter?

The WEF report is important because it reframes cyber threats as systemic, cross-border risks instead of isolated incidents, showing that fragmented regulation, uneven capabilities and weak cooperation can allow a single breach to cascade across critical infrastructure, economies and public services. Emerging technologies like AI are accelerating both the scale and sophistication of attacks, making coordinated international resilience a necessary condition for maintaining stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Peacebuilding and AI in focus at UNSSC webinar series

The United Nations System Staff College has highlighted growing interest across the UN and the wider peacebuilding community in how artificial intelligence is shaping conflict prevention, arguing that the technology can support peace efforts but cannot replace human judgement, diplomacy, and oversight.

The reflection draws on a three-part webinar series launched by UNSSC to examine AI governance, field use, and ethical risks in peacebuilding. According to the text, one message ran across all three discussions: AI may offer real value for conflict prevention, but its role should remain supportive rather than substitutive.

The piece argues that AI is already being used across the UN peace and security pillar and should be introduced only where it improves effectiveness, such as by handling repetitive tasks and allowing staff to focus on analysis, leadership, and political judgement. It also stresses that principles long associated with peacebuilding, including trust and ‘do no harm’, should apply across the full AI stack, from data and infrastructure to model design and deployment.

Examples cited from the webinar series include the use of augmented intelligence in early warning systems, where machine learning is combined with human contextual knowledge, and an AI-enabled WhatsApp chatbot used in Yemen to broaden participation in mediation, particularly among women and young people. The text presents these cases as evidence that AI can extend the reach of peacebuilding tools without replacing practitioners.

The final part of the reflection focuses on governance and ethics. It argues that while ethical AI principles are widely discussed, they need to be translated into practical, context-specific safeguards, especially in conflict settings. It also notes that risks differ across use cases such as early warning, social media monitoring, and mediation support, and says meaningful governance requires input from diplomats, researchers, mediators, and the private sector.

UNSSC says the webinar series drew between 300 and 500 registrants per session, which it presents as evidence of strong demand for more targeted learning on AI and peacebuilding. The college argues that its role should extend beyond convening discussion to turning those debates into practical knowledge for UN practitioners working at the intersection of AI and conflict prevention.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Department of Labor launches AI training portal for apprenticeship programmes

The US Department of Labor has launched an AI in Registered Apprenticeship Innovation Portal to support organisations integrating AI training into federally recognised apprenticeship programmes.

The Department said the platform brings together resources to support AI literacy and structured AI-focused training pathways across sectors.

The portal is organised around three main areas: AI skills integration in apprenticeships, industry-specific training modules, and pathways for embedding AI into both new and existing programmes.

The Department said training content spans sectors including healthcare, finance, education, construction, advanced manufacturing and technology.

Alongside the portal, the Department has introduced an AI Literacy Framework to guide employers, educators and training providers. The framework outlines core competencies, including understanding AI capabilities and limits, using tools in daily tasks, and assessing output accuracy.

A separate initiative, the Make America AI-Ready programme, delivers a free text-message-based AI course aimed at workers without reliable internet access.

Officials said organisations can join existing apprenticeships, create new AI-focused schemes, or update current programmes to include AI skills. The project aligns with wider federal strategies to accelerate AI education and workforce readiness across the United States.

Why does it matter? 

The initiative signals a structural shift in how governments are preparing the workforce for AI integration, embedding practical skills into formal apprenticeship systems rather than treating them as optional add-ons.

It also broadens access to AI literacy by targeting both high-growth industries and digitally excluded workers, helping reduce future gaps in productivity and employability.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UN-organised event to address challenges in government AI capacity-building

A side event during the 11th Multi-Stakeholder Forum on Science, Technology and Innovation for the SDGs will examine how governments can strengthen internal AI capacity as AI becomes more central to public administration, regulation, and digital development.

The event is being organised by UNU-CPR, UNU-CRIS, UNDP, and the UN Department of Economic and Social Affairs, with support from Japan’s Permanent Mission to the United Nations. Organisers said governments are facing a dual challenge of regulating AI systems while building internal expertise to understand, manage, and deploy them in the public interest.

The concept note says countries are increasingly creating dedicated AI units, appointing Chief AI Officers, and embedding technical experts in ministries and regulatory bodies, while disparities in access to resources and expertise continue to shape how capacity-building develops across regions.

The event will also address concerns about AI security and misuse of technology. Organisers highlighted risks including misinformation, cyber-enabled manipulation, and automated disinformation campaigns, and said that countries with more limited institutional and technical capacity may face disproportionate exposure.

The discussion is intended to contribute to wider debates on responsible and inclusive AI governance under the Global Digital Compact and the Sustainable Development Goals by identifying institutional models, lessons learned, and opportunities for cross-regional cooperation on building government AI capacity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

World Economic Forum report highlights growing role of AI in cybersecurity operations

A World Economic Forum white paper (Empowering Defenders: AI for Cybersecurity), developed with KPMG, states that AI is becoming a core capability for modern cybersecurity. The report notes that attackers are using AI to increase speed, scale and sophistication, while defenders are also adopting AI to improve detection, response and resilience.

The report describes how AI is being used across the cybersecurity lifecycle, from cyber governance and risk identification to threat detection, incident response and recovery. Case studies from major organisations highlight applications in phishing detection, vulnerability management, malware analysis, threat intelligence and automated security reviews.

The WEF report also states that effective adoption depends on more than technology investment. Organisations need executive support, reliable data, skilled teams, mature infrastructure and clear governance before deploying AI in critical security operations.

The report also highlights the rise of agentic AI, where autonomous systems can detect, coordinate and respond to threats with limited human intervention. It adds that while these systems could help defenders act faster, they may also introduce risks related to accountability, unintended behaviour and over-reliance on automation.

Why does it matter?

The central message of the report is that AI can strengthen cyber defence only when paired with human judgement, structured pilots, continuous monitoring and clear safeguards. Without these foundations, organisations risk creating fragile systems instead of resilient ones.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Panthalassa raises $140m to develop wave-powered AI computing

Panthalassa has raised $140 million in a Series B funding round led by investor Peter Thiel to advance technology that uses ocean wave energy to power AI computing systems.

According to the company, the funding will support the development of offshore nodes that generate electricity from wave energy and run AI computing onboard. Data from these systems is transmitted via low-Earth-orbit satellites.

Panthalassa said the initiative responds to increasing demand for computing capacity and constraints faced by terrestrial data centres, including electricity supply, cooling requirements, and infrastructure limitations.

The company stated that its systems operate in offshore environments and use locally generated energy to power computing equipment, with ocean conditions providing cooling.

Panthalassa has previously deployed prototype systems and said the new funding will support completion of a pilot manufacturing facility and deployment of additional nodes, with commercial operations targeted for 2027.

Investor Peter Thiel said the approach expands computing infrastructure beyond traditional locations, while company representatives described the technology as a potential source of clean energy for AI systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Greece and the EU discuss space strategy and digital infrastructure cooperation

The Minister of Digital Governance and Artificial Intelligence, Dimitris Papastergiou, met European Union Commissioner for Defence and Space, Andrius Kubilius, to discuss Greece’s expanding role in space technologies, digital infrastructure and European defence cooperation.

Talks focused on national space policy, including satellite programmes, telecommunications systems and quantum communications, alongside projects funded through the Recovery and Resilience Facility.

The meeting followed Greece’s recent launch of thermal satellites, which Greek authorities said support civil protection and climate monitoring capabilities.

Greek authorities said investments in satellite applications and digital infrastructure are intended to support public services, economic growth and technological development. They added that the country’s role as a connectivity hub, particularly through submarine fibre optic cables, is a strategic advantage for Europe.

Both sides said space technologies are important for advancing AI, Earth observation and secure communications. They also underlined the need for stronger European cooperation to enhance resilience, innovation and strategic autonomy.

The meeting also aligned with preparations for Greece's upcoming Presidency of the Council of the EU, where space policy is expected to be among the priorities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercrime communities face skills gap despite rise of AI tools

A major study by researchers from the universities of Cambridge, Edinburgh, and Strathclyde, published by the Centre for Emerging Technology and Security at the Alan Turing Institute, suggests cybercriminals are still struggling to use AI effectively in their operations despite widespread attention around tools such as ChatGPT.

Researchers analysed more than 100 million posts from underground and dark web forums to assess how AI is being adopted within cybercrime communities.

The research, based on the CrimeBB database, found that most offenders lack the technical skills and resources needed to integrate AI into criminal activity. Rather than lowering barriers to entry, AI tools benefit already skilled actors far more than inexperienced ones.

The analysis shows AI is used most successfully in already highly automated areas, such as social media bots linked to harassment and fraud, as well as in efforts to mask patterns that cybersecurity systems might otherwise detect. While experimentation is increasing, the researchers found little sign that AI is delivering a broad or transformative boost to overall cybercriminal capability. Mainstream chatbot guardrails were also found to be limiting harmful use in practice.

The researchers argue that the more immediate concern for industry is not dramatic AI-enabled innovation among cybercriminals, but insecure adoption of AI within legitimate organisations. They point to risks from poorly secured agentic AI systems and from AI-generated ‘vibecoded’ software being deployed without adequate safeguards.

Why does it matter?

The findings challenge a common assumption that generative AI is already giving cybercriminals a major operational advantage. Instead, the more immediate and scalable risk may come from companies deploying insecure AI systems faster than they can secure them. That shifts attention away from worst-case speculation about criminal innovation and towards a more practical cyber policy question: whether organisations are introducing new AI-enabled vulnerabilities into mainstream digital infrastructure.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UNDP supports AI training for Tajikistan parliament members

The United Nations Development Programme has supported training sessions for members of the Parliament of Tajikistan, focusing on AI and modern digital tools. The initiative aims to strengthen legislative processes and institutional capacity.

Discussions covered AI use in policymaking, legislative analysis and public engagement, alongside topics such as strategic planning and anti-corruption measures. The UNDP sessions brought together parliamentarians and staff to share international and national experience.

Officials highlighted that AI can support evidence-based decision-making and improve efficiency, while requiring attention to transparency, ethics and accountability. Cooperation with UNDP was described as key to adapting global best practices.

The programme includes an ongoing needs assessment, carried out with UNDP support, to identify priorities for further development and institutional strengthening.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kazakhstan government reviews plans to expand AI across sectors under digital strategy

The Government of the Republic of Kazakhstan has reviewed plans to expand AI across all sectors under the proposed Digital Qazaqstan strategy. The initiative aims to drive long-term economic modernisation through digital technologies.

Officials highlighted AI as a key tool for improving productivity, industrial safety and economic planning. The strategy also focuses on strengthening infrastructure, including computing capacity and data systems.

The government stressed the need for better data access, investment incentives and stronger private sector involvement. Measures will also target skills development and support for smaller businesses adopting AI.

Authorities said AI could enhance forecasting and policy effectiveness, but stressed that safeguards for personal data and intellectual property are required.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!