Frontier AI cybersecurity risks highlighted by the World Economic Forum

A shift is emerging in cybersecurity as frontier AI systems become more capable and harder to control.

Anthropic’s decision to restrict access to the Claude Mythos Preview reflects growing concern about how such models can be used in real-world cybersecurity operations, as highlighted in an article published by the World Economic Forum.

Reported capabilities include identifying unknown vulnerabilities and generating working exploits. Tasks that once required specialised teams working over long periods can now be completed far more quickly.

Defensive benefits exist, particularly in faster vulnerability detection, but the same capabilities can also lower barriers for attackers.

The main challenge is no longer finding weaknesses but managing them. AI can surface large numbers of vulnerabilities in a short time, while many organisations still rely on slower response cycles.

That gap increases exposure, especially for critical systems and infrastructure.

Cybersecurity is therefore moving away from static protection toward continuous monitoring and rapid response. At the same time, the lack of clear global rules on access to advanced AI systems raises broader concerns about governance and long-term stability.

Such an evolving imbalance between capability and control is likely to define the next phase of cyber risk.

The World Economic Forum report also stresses that AI-driven cyber risk is becoming a strategic issue, requiring board-level attention, stronger public–private coordination, and faster response timelines, as vulnerability discovery and exploitation compress from weeks to hours.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK NCSC calls for stronger cyber readiness

The UK National Cyber Security Centre has warned that organisations must urgently prepare for severe cyber threats, describing them as a growing risk to operations and national resilience. The guidance calls for immediate action from leadership.

Cyber attacks are becoming more capable and disruptive, with new technologies such as AI increasing their speed and scale. These threats can lead to major operational, financial and security impacts.

The agency emphasises that resilience, rather than prevention alone, is critical. Organisations must be able to continue operating and recover during cyber attacks, with preparation and planning carried out in advance.

The Centre states that responsibility lies with organisational leaders, urging investment, coordination and early planning to ensure that essential services in the UK can continue under pressure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Report on Geneva 2027 AI Summit preparations available

A report outlining initial consultations for the Geneva 2027 AI Summit has been submitted to the Swiss government following a preparatory event held during GenAI Zürich 2026.

The report consolidates inputs from an invite-only roundtable held on 1 April 2026 and written submissions collected through an open call. It was prepared by ICT4Peace and GenAI Zürich to support Switzerland’s planning for the summit.

According to the organisers, the roundtable brought together participants from government, academia, industry, civil society, and international organisations. It was co-moderated by Daniel Stauffacher, founder of ICT4Peace, Ambassador Thomas Schneider, Vice-Director of the Swiss Federal Office of Communications (BAKOM), and Ambassador Markus Reubi, project lead for the Geneva 2027 AI Summit at the Swiss Federal Department of Foreign Affairs.

In addition to the roundtable, the report includes written contributions submitted through an online consultation process, with organisers noting that 55 submissions were received, including 52 with substantive responses.

The report presents a synthesis of themes and proposals related to the objectives and potential outcomes of the Geneva 2027 AI Summit. According to the organisers, the analysis is based on recurring themes and areas of convergence identified during the consultation process, rather than a statistically representative survey.

Discussions were conducted under the Chatham House Rule, and the report does not attribute comments to individual participants.

The findings were submitted to the Swiss government’s Platform Tripartite on 13 April 2026 to inform further preparations for the summit.

Switzerland is scheduled to host the next global AI Summit in 2027 in Geneva, following previous summits held in the United Kingdom, the Republic of Korea, France, and India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DIFC unveils plan to build ‘AI-native’ financial centre in Dubai

Dubai International Financial Centre has announced plans to become what it describes as the world’s first ‘AI-native’ financial centre, embedding AI into regulation, business operations, and physical infrastructure rather than treating it as a stand-alone tool.

The initiative is being presented as a broader redesign of how a financial centre functions. Instead of limiting AI to back-office support or isolated digital services, DIFC says it wants AI to shape legal frameworks, compliance processes, client management, and the wider operation of the financial ecosystem.

The plan builds on DIFC’s longer-term AI strategy, launched in 2023 and already tied to changes in data governance and the centre’s wider innovation agenda.

According to DIFC, AI is already being used in areas such as compliance and client services, with further expansion planned across financial workflows, supervisory processes, and institutional decision-making.

DIFC also says the initiative will be supported by a broader ecosystem designed to attract investment, talent, and experimentation. That includes training programmes, venture support, accelerators, and the continued development of its AI-focused innovation infrastructure. The aim is not only to encourage firms to use AI, but to make Dubai a base for building and scaling AI-driven financial services.

The project also extends beyond software and regulation. DIFC says physical infrastructure will evolve alongside digital systems, with plans linked to smart buildings, robotics, autonomous mobility, and digital twins by the end of the decade.

That gives the announcement a broader urban and economic dimension, positioning AI as part of the district’s future design rather than simply a tool used by firms within it.

The broader significance of the move lies in how Dubai is trying to position itself in the global race to shape AI in finance. Rather than focusing only on innovation-friendly rhetoric, DIFC is presenting regulation, infrastructure, skills, and ecosystem-building as part of a single strategy.

If realised in practice, that could strengthen Dubai’s role as a hub for AI-driven financial services and as a testing ground for new governance models.

At the same time, the claim to be the world’s first ‘AI-native’ financial centre should be understood as DIFC’s own description of the project, rather than an independently established category.

The more solid story is that Dubai is trying to make AI part of the operating logic of a financial centre itself, using policy, infrastructure, and investment to support that ambition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Utropolis partnership strengthens AI-driven child online safety

The Internet Watch Foundation (IWF) has announced a new partnership with Utropolis, marking a step forward in efforts to strengthen online child protection. The collaboration brings together established detection tools and emerging AI-driven safeguarding technologies.

Utropolis specialises in cloud-based filtering systems designed to identify risks in real time, particularly in school environments.

By integrating IWF datasets, including verified lists of harmful content, the platform aims to improve prevention and detection capabilities while helping educators maintain safer digital spaces.

The initiative reflects a broader trend towards combining AI with established regulatory and safeguarding frameworks. As harmful material continues to spread online, organisations are increasingly focusing on scalable, automated solutions that can adapt to evolving threats.

The partnership also aligns with UK online safety standards in education, reinforcing compliance requirements and strengthening institutional responses.

As digital environments continue to expand, collaborations of this kind highlight the growing role of AI in supporting child protection strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU turns digital strategy into infrastructure diplomacy with partner countries

The European Commission, together with the governments of France and Finland, has hosted a high-level study visit in Brussels on secure, resilient and trusted connectivity and digital infrastructure, bringing policymakers and regulators from Egypt, Indonesia, Jordan, Kenya, the Philippines and Vietnam into direct talks with the EU institutions and industry actors. The visit forms part of the EU’s effort to turn its international digital strategy into practical cooperation with partner countries.

The programme focused on policy frameworks for secure and trusted telecommunications infrastructure, including subsea cable deployment and wider digital infrastructure development. In Brussels, delegates met with the European Commission and the European External Action Service and were briefed on EU policy tools, including the proposed Digital Networks Act, cybersecurity measures, and the EU’s Submarine Cable Security Toolbox.

The study visit then continued in Aachen, Antwerp, Paris and Helsinki, where participants met major European technology firms and providers of trusted connectivity and digital infrastructure solutions. That industry-facing element matters because the visit was not only about sharing regulatory ideas but also about showcasing European technical and commercial capacity in secure digital infrastructure.

Seen in that context, the initiative is best understood not as a major standalone policy announcement, but as a practical piece of digital diplomacy. The EU’s International Digital Strategy, launched in June 2025, explicitly aims to expand digital partnerships, promote a high level of security for the EU and its partners, and shape global digital governance and standards through cooperation on areas such as secure connectivity, cybersecurity, digital public infrastructure, and emerging technologies.

That wider strategy also includes an ‘EU Tech Business Offer’, combining public and private investment to support the digital transition of partner countries through areas such as AI factories, secure and trusted connectivity, digital public infrastructure and cybersecurity. The Brussels study visit appears to fit squarely within that model, linking diplomacy, regulatory outreach and industrial promotion.

The significance of the visit, therefore, lies less in any immediate policy outcome than in what it says about the EU’s external digital posture. Brussels is trying to position itself not only as a regulator of digital markets at home, but also as a provider of standards, expertise and infrastructure models abroad. At a time of rising geopolitical competition over connectivity, network security and critical infrastructure, such exchanges allow the EU to present European approaches to trusted digital development as an alternative to more fragmented or politically dependent models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tax Practitioners Board of Australia ends submissions on AI draft for tax agents

Australia’s Tax Practitioners Board has closed submissions on its exposure draft on the use of AI and the Code of Professional Conduct. The draft information sheet, TPB(I) D62/2026, was issued on 23 March 2026 and invited comments within 28 days.

According to the exposure draft, the guidance is intended to help registered tax agents and BAS agents understand their obligations under the Tax Agent Services Act 2009 of Australia when using AI in the provision of tax agent services. The document says it focuses in particular on obligations under the Code of Professional Conduct and the Tax Agent Services (Code of Professional Conduct) Determination 2024.

The draft says tax practitioners remain ultimately responsible for the services they provide and must understand the capabilities and limitations of AI tools, assess outputs, and supplement them with professional judgement. It adds that AI outputs should inform, not replace, tax knowledge, experience, or expertise.

On competency, the draft says tax practitioners must ensure services are provided competently, maintain relevant knowledge and skills, take reasonable care in ascertaining a client’s state of affairs, and take reasonable care to ensure taxation laws are applied correctly. It also says practitioners should verify AI-generated content for accuracy and establish processes to understand and contest AI decisions or outputs.

The exposure draft also addresses confidentiality. It says tax practitioners must not disclose information relating to a client’s affairs to a third party without the client’s permission, and notes that this may include entering client information into AI chatbots or copilots, depending on how those tools are configured and used. It also says practitioners should review commercial AI tools to ensure client information will be kept secure and that Privacy Act 1988 requirements are met.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WHO/Europe warns safeguards lag as AI use grows in health care

AI is becoming more deeply embedded in health systems across the WHO European Region, according to a new WHO/Europe report that maps adoption, governance, and readiness across 50 of the region’s 53 member states. Rather than presenting a purely positive picture of rapid innovation, the report warns that legal and ethical safeguards are not keeping pace with deployment.

The report shows that AI is already being used in a wide range of medical and administrative functions. Thirty-two countries, or 64%, said they are using AI-assisted diagnostics, particularly in imaging and detection, while half reported deploying AI chatbots for patient engagement and support. Countries most often said they were adopting AI to improve patient care, reduce pressure on health workers, and increase efficiency across health services.

WHO/Europe’s findings suggest that health systems are beginning to adapt institutionally, but unevenly. Only four countries have adopted a dedicated national strategy on AI in health, while seven more are developing one. That leaves much of the region in a transitional phase, where AI tools are entering clinical and administrative settings faster than governments are building the structures needed to govern them properly.

The report places particular emphasis on accountability, regulation, and public trust. Legal uncertainty was identified by 43 countries, or 86%, as the main barrier to wider AI adoption in health. At the same time, fewer than one in ten countries reported having liability standards in place for AI in health care, raising difficult questions about responsibility when systems fail or cause harm.

That warning gives the report its real policy weight. The main issue is not simply that AI use is growing in diagnostics, administration, and patient interaction, but that many health systems still lack the legal clarity and governance capacity needed to use it safely. In that sense, WHO/Europe is framing AI less as a breakthrough story than as a test of whether public institutions can build trustworthy safeguards around fast-moving digital tools.

The broader significance is that the debate over AI in health care is shifting. Early attention focused on what the technology might do for diagnosis, triage, and efficiency. WHO/Europe is now pointing to a harder question: whether health systems can make AI useful without weakening patient safety, privacy, accountability, and public confidence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK invests £500 million in Sovereign AI fund to boost startups

The UK government has launched a £500 million Sovereign AI initiative to support domestic startups, aiming to strengthen national capabilities and reduce reliance on foreign technology providers.

The programme is designed to help companies start, scale and compete globally while remaining rooted in Britain.

The initiative combines direct investment with broader support, including fast-track visas, access to high-performance computing, and assistance in navigating regulation and procurement.

Early investments target firms working on advanced AI infrastructure, life sciences and next-generation computing, reflecting a strategic focus on sectors with long-term economic and security implications.

A central feature is access to national supercomputing resources, addressing one of the most significant barriers to AI development.

By providing large-scale compute capacity and linking it to potential future investment, the programme aims to accelerate research, testing and deployment within the UK ecosystem.

Essentially, the policy signals a shift toward a more interventionist approach, positioning the state as an active investor rather than a passive regulator.

The objective is to anchor innovation domestically, ensuring that intellectual property, talent and economic value remain within the UK as global competition in AI intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI accelerates life sciences research with a new specialised model

OpenAI has launched GPT-Rosalind, a purpose-built model designed to support complex workflows in biology, drug discovery and translational medicine.

The system focuses on improving reasoning across scientific domains, enabling researchers to process large volumes of data, literature and experimental inputs more efficiently.

The model is engineered to assist with early-stage discovery, where improvements can significantly influence downstream outcomes.

By supporting hypothesis generation, evidence synthesis and experimental design, GPT-Rosalind aims to streamline fragmented research processes that often slow scientific progress.

Integration with specialised tools and access to more than 50 scientific databases enable the new OpenAI model to operate across multi-step workflows.

Why does it matter?

Early evaluations indicate stronger performance in areas such as protein analysis, genomics and chemical reasoning, alongside improved capability in selecting and using domain-specific tools.

Access is currently limited through a controlled deployment framework, ensuring use within governed research environments.

Partnerships with organisations including Amgen and Moderna reflect a broader effort to apply AI to real-world scientific challenges while maintaining safeguards and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!