Report on Geneva 2027 AI Summit preparations available

A report outlining initial consultations for the Geneva 2027 AI Summit has been submitted to the Swiss government following a preparatory event held during GenAI Zürich 2026.

The report consolidates inputs from an invite-only roundtable held on 1 April 2026 and written submissions collected through an open call. It was prepared by ICT4Peace and GenAI Zürich to support Switzerland’s planning for the summit.

According to the organisers, the roundtable brought together participants from government, academia, industry, civil society, and international organisations. It was co-moderated by Daniel Stauffacher, founder of ICT4Peace; Ambassador Thomas Schneider, Vice-Director of the Swiss Federal Office of Communications (BAKOM); and Ambassador Markus Reubi, project lead for the Geneva 2027 AI Summit at the Swiss Federal Department of Foreign Affairs.

In addition to the roundtable, the report includes written contributions submitted through an online consultation process, with organisers noting that 55 submissions were received, including 52 with substantive responses.

The report presents a synthesis of themes and proposals related to the objectives and potential outcomes of the Geneva 2027 AI Summit. According to the organisers, the analysis is based on recurring themes and areas of convergence identified during the consultation process, rather than a statistically representative survey.

Discussions were conducted under the Chatham House Rule, and the report does not attribute comments to individual participants.

The findings were submitted to the Swiss government’s Platform Tripartite on 13 April 2026 to inform further preparations for the summit.

Switzerland is scheduled to host the next global AI Summit in 2027 in Geneva, following previous summits held in the United Kingdom, the Republic of Korea, France, and India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DIFC unveils plan to build ‘AI-native’ financial centre in Dubai

Dubai International Financial Centre has announced plans to become what it describes as the world’s first ‘AI-native’ financial centre, embedding AI into regulation, business operations, and physical infrastructure rather than treating it as a stand-alone tool.

The initiative is being presented as a broader redesign of how a financial centre functions. Instead of limiting AI to back-office support or isolated digital services, DIFC says it wants AI to shape legal frameworks, compliance processes, client management, and the wider operation of the financial ecosystem.

The plan builds on DIFC’s longer-term AI strategy, launched in 2023 and already tied to changes in data governance and the centre’s wider innovation agenda.

According to DIFC, AI is already being used in areas such as compliance and client services, with further expansion planned across financial workflows, supervisory processes, and institutional decision-making.

DIFC also says the initiative will be supported by a broader ecosystem designed to attract investment, talent, and experimentation. That includes training programmes, venture support, accelerators, and the continued development of its AI-focused innovation infrastructure. The aim is not only to encourage firms to use AI, but to make Dubai a base for building and scaling AI-driven financial services.

The project also extends beyond software and regulation. DIFC says physical infrastructure will evolve alongside digital systems, with plans linked to smart buildings, robotics, autonomous mobility, and digital twins by the end of the decade.

That gives the announcement a broader urban and economic dimension, positioning AI as part of the district’s future design rather than simply a tool used by firms within it.

The broader significance of the move lies in how Dubai is trying to position itself in the global race to shape AI in finance. Rather than focusing only on innovation-friendly rhetoric, DIFC is presenting regulation, infrastructure, skills, and ecosystem-building as part of a single strategy.

If realised in practice, that could strengthen Dubai’s role as a hub for AI-driven financial services and as a testing ground for new governance models.

At the same time, the claim to be the world’s first ‘AI-native’ financial centre should be understood as DIFC’s own description of the project, rather than an independently established category.

The more solid story is that Dubai is trying to make AI part of the operating logic of a financial centre itself, using policy, infrastructure, and investment to support that ambition.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

IWF and Utropolis partnership strengthens AI-driven child online safety

The Internet Watch Foundation (IWF) has announced a new partnership with Utropolis, marking a step forward in efforts to strengthen online child protection. The collaboration brings together established detection tools and emerging AI-driven safeguarding technologies.

Utropolis specialises in cloud-based filtering systems designed to identify risks in real time, particularly in school environments.

By integrating IWF datasets, including verified lists of harmful content, the platform aims to improve prevention and detection capabilities while helping educators maintain safer digital spaces.

The initiative reflects a broader trend towards combining AI with established regulatory and safeguarding frameworks. As harmful material continues to spread online, organisations are increasingly focusing on scalable, automated solutions that can adapt to evolving threats.

The partnership also aligns with UK online safety standards in education, reinforcing compliance requirements and strengthening institutional responses.

As digital environments continue to expand, collaborations of this kind highlight the growing role of AI in supporting child protection strategies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU turns digital strategy into infrastructure diplomacy with partner countries

The European Commission, together with the governments of France and Finland, has hosted a high-level study visit in Brussels on secure, resilient and trusted connectivity and digital infrastructure, bringing policymakers and regulators from Egypt, Indonesia, Jordan, Kenya, the Philippines and Vietnam into direct talks with the EU institutions and industry actors. The visit forms part of the EU’s effort to turn its international digital strategy into practical cooperation with partner countries.

The programme focused on policy frameworks for secure and trusted telecommunications infrastructure, including subsea cable deployment and wider digital infrastructure development. In Brussels, delegates met with the European Commission and the European External Action Service. They were briefed on the EU policy tools, including the proposed Digital Networks Act, cybersecurity measures, and the EU’s Submarine Cable Security Toolbox.

The study visit then continued in Aachen, Antwerp, Paris and Helsinki, where participants met major European technology firms and providers of trusted connectivity and digital infrastructure solutions. That industry-facing element matters because the visit was not only about sharing regulatory ideas but also about showcasing European technical and commercial capacity in secure digital infrastructure.

Seen in that context, the initiative is best understood not as a major standalone policy announcement, but as a practical piece of digital diplomacy. The EU’s International Digital Strategy, launched in June 2025, explicitly aims to expand digital partnerships, promote a high level of security for the EU and its partners, and shape global digital governance and standards through cooperation on areas such as secure connectivity, cybersecurity, digital public infrastructure, and emerging technologies.

That wider strategy also includes an ‘EU Tech Business Offer’, combining public and private investment to support the digital transition of partner countries through areas such as AI factories, secure and trusted connectivity, digital public infrastructure and cybersecurity. The Brussels study visit appears to fit squarely within that model, linking diplomacy, regulatory outreach and industrial promotion.

The significance of the visit, therefore, lies less in any immediate policy outcome than in what it says about the EU’s external digital posture. Brussels is trying to position itself not only as a regulator of digital markets at home, but also as a provider of standards, expertise and infrastructure models abroad. At a time of rising geopolitical competition over connectivity, network security and critical infrastructure, such exchanges allow the EU to present European approaches to trusted digital development as an alternative to more fragmented or politically dependent models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tax Practitioners Board of Australia ends submissions on AI draft for tax agents

Australia’s Tax Practitioners Board has closed submissions on its exposure draft on the use of AI and the Code of Professional Conduct. The draft information sheet, TPB(I) D62/2026, was issued on 23 March 2026 and invited comments within 28 days.

According to the exposure draft, the guidance is intended to help registered tax agents and BAS agents understand their obligations under the Tax Agent Services Act 2009 of Australia when using AI in the provision of tax agent services. The document says it focuses in particular on obligations under the Code of Professional Conduct and the Tax Agent Services (Code of Professional Conduct) Determination 2024.

The draft says tax practitioners remain ultimately responsible for the services they provide and must understand the capabilities and limitations of AI tools, assess outputs, and supplement them with professional judgement. It adds that AI outputs should inform, not replace, tax knowledge, experience, or expertise.

On competency, the draft says tax practitioners must ensure services are provided competently, maintain relevant knowledge and skills, take reasonable care in ascertaining a client’s state of affairs, and take reasonable care to ensure taxation laws are applied correctly. It also says practitioners should verify AI-generated content for accuracy and establish processes to understand and contest AI decisions or outputs.

The exposure draft also addresses confidentiality. It says tax practitioners must not disclose information relating to a client’s affairs to a third party without the client’s permission, and notes that this may include entering client information into AI chatbots or copilots, depending on how those tools are configured and used. It also says practitioners should review commercial AI tools to ensure client information will be kept secure and that Privacy Act 1988 requirements are met.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

WHO/Europe warns safeguards lag as AI use grows in health care

AI is becoming more deeply embedded in health systems across the WHO European Region, according to a new WHO/Europe report that maps adoption, governance, and readiness across 50 of the region’s 53 member states. Rather than presenting a purely positive picture of rapid innovation, the report warns that legal and ethical safeguards are not keeping pace with deployment.

The report shows that AI is already being used in a wide range of medical and administrative functions. Thirty-two countries, or 64%, said they are using AI-assisted diagnostics, particularly in imaging and detection, while half reported deploying AI chatbots for patient engagement and support. Countries most often said they were adopting AI to improve patient care, reduce pressure on health workers, and increase efficiency across health services.

WHO/Europe’s findings suggest that health systems are beginning to adapt institutionally, but unevenly. Only four countries have adopted a dedicated national strategy on AI in health, while seven more are developing one. That leaves much of the region in a transitional phase, where AI tools are entering clinical and administrative settings faster than governments are building the structures needed to govern them properly.

The report places particular emphasis on accountability, regulation, and public trust. Legal uncertainty was identified by 43 countries, or 86%, as the main barrier to wider AI adoption in health. At the same time, fewer than one in ten countries reported having liability standards in place for AI in health care, raising difficult questions about responsibility when systems fail or cause harm.

That warning gives the report its real policy weight. The main issue is not simply that AI use is growing in diagnostics, administration, and patient interaction, but that many health systems still lack the legal clarity and governance capacity needed to use it safely. In that sense, WHO/Europe is framing AI less as a breakthrough story than as a test of whether public institutions can build trustworthy safeguards around fast-moving digital tools.

The broader significance is that the debate over AI in health care is shifting. Early attention focused on what the technology might do for diagnosis, triage, and efficiency. WHO/Europe is now pointing to a harder question: whether health systems can make AI useful without weakening patient safety, privacy, accountability, and public confidence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

UK invests £500 million in Sovereign AI fund to boost startups

The UK government has launched a £500 million Sovereign AI initiative to support domestic startups, aiming to strengthen national capabilities and reduce reliance on foreign technology providers.

The programme is designed to help companies start, scale and compete globally while remaining rooted in Britain.

The initiative combines direct investment with broader support, including fast-track visas, access to high-performance computing, and assistance in navigating regulation and procurement.

Early backing targets firms working on advanced AI infrastructure, life sciences, and next-generation computing, reflecting a strategic focus on sectors with long-term economic and security implications.

A central feature is access to national supercomputing resources, addressing one of the most significant barriers to AI development.

By providing large-scale compute capacity and linking it to potential future investment, the programme aims to accelerate research, testing and deployment within the UK ecosystem.

Essentially, the policy signals a shift toward a more interventionist approach, positioning the state as an active investor rather than a passive regulator.

The objective is to anchor innovation domestically, ensuring that intellectual property, talent and economic value remain within the UK as global competition in AI intensifies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI accelerates life sciences research with a new specialised model

OpenAI has launched GPT-Rosalind, a purpose-built model designed to support complex workflows in biology, drug discovery, and translational medicine.

The system focuses on improving reasoning across scientific domains, enabling researchers to process large volumes of data, literature, and experimental inputs more efficiently.

The model is engineered to assist with early-stage discovery, where improvements can significantly influence downstream outcomes.

By supporting hypothesis generation, evidence synthesis and experimental design, GPT-Rosalind aims to streamline fragmented research processes that often slow scientific progress.

Integration with specialised tools and access to more than 50 scientific databases enable the new OpenAI model to operate across multi-step workflows.

Why does it matter?

Early evaluations indicate stronger performance in areas such as protein analysis, genomics and chemical reasoning, alongside improved capability in selecting and using domain-specific tools.

Access is currently limited through a controlled deployment framework, ensuring use within governed research environments.

Partnerships with organisations including Amgen and Moderna reflect a broader effort to apply AI to real-world scientific challenges while maintaining safeguards and oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New India partnership targets AI innovation and digital transformation

Broadcast Engineering Consultants India Limited (BECIL) and the Centre for Development of Advanced Computing (C-DAC) have signed a Memorandum of Understanding to collaborate on advanced technologies and digital transformation. The agreement focuses on joint projects, consultancy, and technical support across sectors.

The partnership covers AI, machine learning, Internet of Things, cybersecurity, 5G, and cloud computing. It also includes the development of turnkey solutions, technology transfer, and the commercialisation of innovative products.

Capacity development is a key component of the collaboration. Both organisations will support workforce upskilling and skill development to strengthen technical capabilities.

Officials stated that the partnership aims to leverage complementary strengths to deliver technology solutions. It is also expected to support innovation and contribute to India’s broader digital development objectives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Health queries dominate AI chatbot use, study finds

A large-scale study analysing more than 500,000 health-related conversations with Microsoft Copilot offers a detailed look at how people are using general-purpose AI chatbots for medical information, symptom questions, and healthcare navigation.

Published in Nature Health, the study suggests that conversational AI is increasingly being used as an early point of contact for health concerns outside formal clinical settings.

The largest share of conversations fell into the health information and education category, accounting for 40.7% of the sample. Users frequently asked about symptoms, conditions, nutrition, treatments, and medicines, often in ways that reflected personal concerns rather than detached information-seeking.

The study found that 18.8% of conversations involved users discussing their own health conditions, while roughly one in seven personal health queries concerned someone else, such as a child, partner, or parent.

Patterns of use also varied by device and time of day. Mobile users were more likely to ask personal and emotionally sensitive questions, particularly about symptoms and well-being, with activity rising in the evening and overnight.

Desktop use, by contrast, was more closely associated with work, study, and administrative tasks, including research, documentation, and medical paperwork during office hours.

The study also points to growing use of AI for practical healthcare navigation. Beyond questions about symptoms or conditions, users turned to Copilot for help with appointments, provider access, paperwork, and understanding parts of the healthcare system that can be difficult to navigate. That suggests people are not using chatbots only for medical curiosity, but also to manage the bureaucratic and logistical side of care.

The broader significance of the findings lies in what they reveal about the changing role of conversational AI in everyday health decision-making. General-purpose chatbots are not replacing clinicians, but they are increasingly occupying the space before, between, and around formal care, where people seek quick explanations, reassurance, and guidance.

That makes questions of accuracy, safety, and health literacy more important, especially when users may act on AI-generated responses without professional context or oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!