Geneva 2027 Summit and Switzerland’s vision for AI

In 2027, Geneva will host the AI Summit at a pivotal moment in the global race to shape AI. Previous summits reflected the character of their hosts. Bletchley Park focused on existential risk, Seoul on innovation and security, Paris on economic and societal impact, and New Delhi on development and inclusion.

Switzerland now has the opportunity to define the next chapter by promoting a practical, balanced, and human-centred approach to AI.

At the heart of Switzerland’s potential contribution is a model built on innovation, governance, and subsidiarity. The country’s strong innovation culture favours grounded, low-hype solutions that address real needs, as illustrated by open-source initiatives such as the multilingual Apertus language model.

But Swiss thinking goes beyond technology alone, recognising that meaningful AI progress also requires advances in education, management, and disciplines such as law, philosophy, linguistics, and the arts.

On governance, Switzerland is well placed to encourage a pragmatic approach. Rather than creating entirely new rules, much of AI’s impact can be addressed through existing frameworks on trade, human rights, intellectual property, and security, provided they are effectively implemented.

As home to numerous international organisations, Geneva offers a natural venue for aligning AI with established global institutions. At the same time, Switzerland’s tradition of bottom-up policymaking ensures that citizens remain part of the conversation.

The principle of subsidiarity, which holds that decisions be made as close as possible to the people affected, adds another dimension. In an era when AI power is concentrated in a handful of global platforms, Switzerland can champion more distributed models that anchor AI development in local communities.

By linking technology to local knowledge, culture, and economic life, AI can become a tool that empowers citizens rather than centralising control.

Trust, institutions, and multilateral cooperation will also be central themes on the road to 2027. Public confidence in AI has been shaken by alarmist narratives and fears of job loss, disinformation, and monopolisation.

Switzerland’s high-trust political culture and lean but effective institutions provide a model for rebuilding confidence through transparency and accountability. Strengthening, rather than sidelining, international organisations and equipping them with AI tools to enhance participation and legitimacy could help ensure that global governance keeps pace with technological change.

Ultimately, the Geneva AI Summit has the potential to mark a shift from polarised debates about doom or blind acceleration towards a mature conversation about how AI can serve humanity in concrete ways. By combining innovation with ethical reflection, sovereignty with interdependence, and global cooperation with local empowerment, Switzerland could help set a steady and credible course for the next phase of AI transformation.

Diplo’s role

Diplo is positioning itself as an active contributor to the road to the 2027 Geneva AI Summit by combining research, training, and practical policy engagement. Drawing on decades of experience in internet governance and digital diplomacy, Diplo approaches AI not as an abstract technological race, but as a policy and societal challenge that requires informed, inclusive, and realistic responses.

Through its humAInism methodology, Diplo situates AI within a broader human context, linking technology with philosophy, sociology, law, and diplomacy to ensure that innovation remains anchored in human values.

Beyond analysis, Diplo focuses on capacity development. Its AI Apprenticeship model promotes learning-by-doing, enabling diplomats, civil society representatives, and professionals to build AI skills through hands-on engagement.

At the same time, Diplo monitors global AI policy developments through the Digital Watch Observatory and develops practical tools, such as AI-supported reporting and knowledge preservation systems, to strengthen institutional memory and multilateral processes.

In this way, Diplo aims not only to observe the AI transformation but to help shape it in a way that is informed, inclusive, and fit for the realities of global governance.

First AI Tuesday of the Month

As preparations for the 2027 Geneva AI Summit gather pace, engagement will be key. One practical way to join the conversation is through the ‘First AI Tuesday of the Month’ luncheon series. These informal networking and briefing sessions bring together diplomats, experts, and practitioners to explore three core AI vectors shaping Geneva today: the road to the AI Summit, evolving governance dynamics, and the latest technological developments.

The next session takes place on Tuesday at 13:00, offering participants an opportunity to exchange ideas, build connections, and contribute to a more informed and inclusive AI debate. By marking the first Tuesday of each month in their calendars, stakeholders can take an active step on the Road to Geneva 2027 and help shape a balanced and forward-looking AI agenda.

You can register for the session here.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New MIT system turns creative AI models into durable objects

Researchers at MIT have introduced a system designed to close the gap between imaginative AI designs and everyday-use objects.

The tool, PhysiOpt, developed at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), combines generative AI with physics simulations to produce 3D models that are both visually appealing and structurally reliable.

Generative models often produce complex shapes that fail in real-world use due to instability or material limitations. PhysiOpt uses finite element analysis to stress-test designs and identify weak points, while preserving their intended look and function.

Users specify an item, the load it must bear, and its material, and the system optimises designs such as cups or hooks in seconds. Researchers say it works faster than other methods while creating more realistic, 3D-print-ready designs.
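The stress-test-and-reinforce idea can be illustrated with a toy sketch. This is not PhysiOpt's actual code: the one-dimensional axial-stress model, the function names, and the numbers are all invented for illustration, whereas the real system uses full finite element analysis on 3D geometry.

```python
# Toy illustration of a stress-test-and-reinforce loop (NOT PhysiOpt itself).
# A "design" is a list of segment cross-sectional areas (mm^2) for a part
# carrying an axial load. Segments whose stress (load / area) exceeds the
# material's allowable stress are thickened just enough to pass, leaving
# the rest untouched -- mirroring the idea of fixing weak points while
# preserving the intended look.

def reinforce(areas_mm2, load_n, allowable_mpa):
    """Return new areas where every segment satisfies
    stress = load / area <= allowable stress (1 MPa = 1 N/mm^2)."""
    min_area = load_n / allowable_mpa          # smallest area that passes
    return [max(a, min_area) for a in areas_mm2]

# A generated design: mostly sturdy, with one thin decorative segment.
design = [40.0, 12.0, 35.0]                    # mm^2
fixed = reinforce(design, load_n=500.0, allowable_mpa=25.0)
print(fixed)                                   # only the 12.0 segment grows
```

The sturdy segments pass unchanged; only the under-sized segment is enlarged to the minimum area that survives the load, which is the least disruptive repair.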

Development continues with plans to automate constraint prediction and improve manufacturing compatibility. The project, supported by the MIT-IBM Watson AI Lab, was presented at SIGGRAPH Asia, highlighting its potential to streamline the path from concept to physical product.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

ESMA sets guidance for crypto perpetuals and CFDs

The European Securities and Markets Authority (ESMA) has clarified that many crypto-perpetual contracts, including those for Bitcoin and Ether, are likely to be classified as contracts for difference (CFDs).

Due to their leverage, complexity, and risk, these products should target a narrow audience, with distribution strategies aligned accordingly.

The announcement came as Kraken launched perpetual futures for ten tokenised assets, including major indices, gold, and top tech and crypto stocks. ESMA warned that mass marketing or promotions targeting inexperienced investors are inappropriate under its guidance.

Firms must ensure that derivatives falling within the CFD category comply with product intervention requirements, including leverage limits, risk warnings, margin close-outs, negative balance protection, and a ban on incentives or benefits.

Non-advised services must include an appropriateness assessment to protect investors from unsuitable offerings.

ESMA also emphasised the importance of identifying and managing conflicts of interest arising from these products. The statement seeks to ensure firms market and distribute leveraged crypto products responsibly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft backs Australia’s next phase of digital government with new AI and cloud agreement

Australia’s rise to second place in the OECD Digital Government Index signals renewed momentum for national digital transformation.

The shift comes as Microsoft signs a new five-year Volume Sourcing Arrangement with the Federal Government, designed to underpin modernisation across public services and create a secure, future-ready foundation for responsible AI adoption.

The agreement, led by the Digital Transformation Agency, gives agencies access to Microsoft Copilot, Azure, Microsoft 365, Dynamics 365 and a strengthened security and compliance framework, replacing continued reliance on ageing systems.

The arrangement sets clearer strategic pathways for innovation, procurement and skills development through an enhanced governance structure.

It recommits both sides to national security requirements, including the Security of Critical Infrastructure legislation, the Cloud Hosting Certification Framework and IRAP.

These measures allow agencies to expand AI use while retaining control of data and meeting the expectations placed on government institutions.

A successful Copilot trial in 2024 already demonstrated personal productivity gains of around one hour per day for participating staff.

Microsoft is also establishing a $1.55 million training fund for the Australian Public Service to support capability building in ethical AI use and modern cloud operations.

The company emphasises that Australia’s partner ecosystem will gain new opportunities because the agreement simplifies how local firms engage with government agencies. Such an approach forms an important part of the wider public sector reform agenda announced last year.

The new deal aligns with national priorities set out in the Whole-of-Government Cloud Computing Policy and the National AI Plan.

Australia now enters a pivotal period in which digital transformation is guided not only by technological capacity but by the frameworks of trust, resilience and public benefit that shape how government services evolve.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Conduent breach exposes data of 25 million people across US

More than 25 million people across the United States have had personal information exposed following a ransomware attack on government contractor Conduent. Updated state breach notifications indicate the incident is larger than initially understood.

Conduent provides printing, payment processing, and benefit administration services for state agencies and large corporations. Its systems support food assistance, unemployment benefits, and workplace programmes, reaching more than 100 million individuals, according to the company.

State-level disclosures show Oregon and Texas account for most of the affected records, with additional cases reported in Massachusetts, New Hampshire, and Washington. Compromised data includes names, dates of birth, addresses, Social Security numbers, health insurance information, and medical details.

Public information from Conduent has been limited since the January 2025 attack. An incident notice published in October carried a ‘noindex’ tag in its source code, preventing search engines from listing the page, which critics say reduced visibility for affected individuals.

The breach ranks among the largest recent ransomware incidents, though it is smaller than the 2024 Change Healthcare attack that affected 190 million people. Regulators and affected users continue seeking clarity on the Conduent case and its security failures.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic faces data theft claims from Musk

Elon Musk, CEO of Tesla and xAI, has publicly accused Anthropic of stealing large volumes of data to train its AI models. The allegation was made on X in response to posts referencing Community Notes attached to Anthropic-related content.

Musk claimed the company had engaged in large-scale data theft and suggested that it had paid multi-billion-dollar settlements. Those financial claims remain contested, and no official confirmation has been provided to substantiate the figures.

Anthropic, known for developing the Claude AI model, was founded by former OpenAI employees and promotes an approach centred on AI safety and responsible development. The company has not publicly responded to Musk’s latest accusations.

The dispute reflects a broader conflict across the AI industry over how companies collect the text, images and other materials required to train large language models. Much of this data is scraped from the internet, often without explicit permission from rights holders.

Multiple lawsuits filed by authors, media organisations and software developers are testing whether large-scale scraping qualifies as fair use under copyright law. Court rulings in these cases could reshape licensing practices, impose financial penalties, and alter the economics of AI development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-enhanced electronic nose shows promise for early ovarian cancer detection

Scientists are combining AI with advanced sensor technology, commonly known as an electronic nose, to detect subtle patterns in volatile organic compounds (VOCs) associated with ovarian cancer.

The AI component improves the system’s ability to differentiate disease-specific chemical fingerprints from benign or background VOC profiles, increasing sensitivity and specificity compared with earlier sensor-only approaches.

Ovarian cancer is notoriously difficult to diagnose in early stages due to vague symptoms and a lack of reliable screening tools. The AI-boosted electronic nose aims to fill this gap by analysing breath, urine, or blood headspace samples in a non-invasive manner, with the potential to be deployed in clinical or even point-of-care settings.

Early experimental results suggest that analysing VOC patterns with machine learning models can distinguish ovarian cancer cases with greater accuracy than traditional methods alone. However, larger clinical validation studies are still underway.
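The core idea of separating disease-specific chemical fingerprints from benign profiles can be sketched with a minimal nearest-centroid classifier. This is purely illustrative, not the researchers' pipeline: the sensor readings are invented, and real electronic-nose systems use far richer features and clinically validated models.

```python
# Toy nearest-centroid classifier over VOC sensor readings (illustrative only;
# the feature values below are invented, not real sensor data).

def centroid(samples):
    """Element-wise mean of equal-length feature vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def classify(sample, centroids):
    """Return the label whose centroid is closest in squared distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Invented training readings: each vector is one set of sensor responses.
cancer = [[0.9, 0.2, 0.7], [0.8, 0.3, 0.6]]
benign = [[0.2, 0.8, 0.1], [0.3, 0.7, 0.2]]
centroids = {"cancer": centroid(cancer), "benign": centroid(benign)}

print(classify([0.85, 0.25, 0.65], centroids))  # -> cancer
```

Each class is summarised by the average of its training readings, and a new sample is assigned to whichever average it sits nearest, which is the simplest form of the pattern-matching the article describes.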

Researchers emphasise that this technology is intended as a screening and triage tool to flag individuals for more definitive diagnostics, not as a standalone diagnostic test at present.

If successfully scaled and validated, AI-enhanced VOC detection could lead to earlier interventions and improved survival outcomes for patients with ovarian cancer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI automation quietly reshapes core insurance operations

A Business Reporter analysis notes that AI in the insurance sector has progressed from pilots and back-office experiments to core operational automation, spanning underwriting, claims processing, customer servicing, document interpretation and financial workflows.

This shift is driven by the need to reduce high operating costs, estimated at roughly 22% of global premiums, which have long limited the industry’s growth and agility.

Modern AI systems are increasingly deployed as intelligent processing layers that interpret applications, policy documents and financial records, route work, reconcile data and assist human judgement without requiring wholesale replacement of legacy systems.

Insurers see potential for real-time underwriting support, dramatically faster claims intake and near-instant reconciliation of finance tasks, enabling staff to shift focus from repetitive administration to higher-value activities such as risk assessment, customer relationships and portfolio insights.

The commentary suggests that resistance to broader AI adoption in insurance is cultural rather than technical, as the industry’s traditionally cautious stance can slow integration even when automation delivers measurable value.

The core message is that AI’s role in insurance is not to replace humans but to remove friction and elevate human work by automating routine functions efficiently and at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps researchers see the bigger picture in cell biology

Scientists at Massachusetts Institute of Technology (MIT) report progress in applying AI to integrate and interpret diverse biological datasets, helping overcome key challenges in cell biology research.

Traditional experimental approaches often generate fragmented data, such as gene expression profiles, imaging, and molecular interactions, that are difficult to combine into a coherent view of cellular systems.

By contrast, AI models can learn patterns across multiple data types, reveal connections between disparate datasets, and generate holistic representations of cell behaviour that would otherwise require extensive manual synthesis.

The new AI techniques allow researchers to uncover relationships between genes, proteins and cellular processes with greater clarity, enabling improved hypothesis generation, experimental design and understanding of complex biological phenomena such as development, disease progression and response to therapies.

Because these AI tools can help prioritise experimental directions and reduce reliance on trial-and-error studies, they may accelerate breakthroughs in areas ranging from immunology to cancer biology.

Researchers emphasise that AI complements, rather than replaces, traditional biological expertise, acting as a data-driven partner that expands scientists’ ability to see the ‘bigger picture’ across scales and contexts.

Ethical and methodological considerations also underscore the importance of validating AI-generated insights with rigorous experiments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!