Geneva 2027 Summit and Switzerland’s vision for AI

In 2027, Geneva will host the AI Summit at a pivotal moment in the global race to shape AI. Previous summits reflected the character of their hosts. Bletchley Park focused on existential risk, Seoul on innovation and security, Paris on economic and societal impact, and New Delhi on development and inclusion.

Switzerland now has the opportunity to define the next chapter by promoting a practical, balanced, and human-centred approach to AI.

At the heart of Switzerland’s potential contribution is a model built on innovation, governance, and subsidiarity. The country’s strong innovation culture favours grounded, low-hype solutions that address real needs, as illustrated by open-source initiatives such as the multilingual Apertus language model.

But Swiss thinking goes beyond technology alone, recognising that meaningful AI progress also requires advances in education, management, and disciplines such as law, philosophy, linguistics, and the arts.

On governance, Switzerland is well placed to encourage a pragmatic approach. Rather than creating entirely new rules, much of AI’s impact can be addressed through existing frameworks on trade, human rights, intellectual property, and security, provided they are effectively implemented.

As home to numerous international organisations, Geneva offers a natural venue for aligning AI with established global institutions. At the same time, Switzerland’s tradition of bottom-up policymaking ensures that citizens remain part of the conversation.

The principle of subsidiarity, which holds that decisions be made as close as possible to the people affected, adds another dimension. In an era when AI power is concentrated in a handful of global platforms, Switzerland can champion more distributed models that anchor AI development in local communities.

By linking technology to local knowledge, culture, and economic life, AI can become a tool that empowers citizens rather than centralising control.

Trust, institutions, and multilateral cooperation will also be central themes on the road to 2027. Public confidence in AI has been shaken by alarmist narratives and fears of job loss, disinformation, and monopolisation.

Switzerland’s high-trust political culture and lean but effective institutions provide a model for rebuilding confidence through transparency and accountability. Strengthening, rather than sidelining, international organisations and equipping them with AI tools to enhance participation and legitimacy could help ensure that global governance keeps pace with technological change.

Ultimately, the Geneva AI Summit has the potential to mark a shift from polarised debates about doom or blind acceleration towards a mature conversation about how AI can serve humanity in concrete ways. By combining innovation with ethical reflection, sovereignty with interdependence, and global cooperation with local empowerment, Switzerland could help set a steady and credible course for the next phase of AI transformation.

Diplo’s role

Diplo is positioning itself as an active contributor to the road to the 2027 Geneva AI Summit by combining research, training, and practical policy engagement. Drawing on decades of experience in internet governance and digital diplomacy, Diplo approaches AI not as an abstract technological race, but as a policy and societal challenge that requires informed, inclusive, and realistic responses.

Through its humAInism methodology, Diplo situates AI within a broader human context, linking technology with philosophy, sociology, law, and diplomacy to ensure that innovation remains anchored in human values.

Beyond analysis, Diplo focuses on capacity development. Its AI Apprenticeship model promotes learning-by-doing, enabling diplomats, civil society representatives, and professionals to build AI skills through hands-on engagement.

At the same time, Diplo monitors global AI policy developments through the Digital Watch Observatory and develops practical tools, such as AI-supported reporting and knowledge preservation systems, to strengthen institutional memory and multilateral processes.

In this way, Diplo aims not only to observe the AI transformation but to help shape it in a way that is informed, inclusive, and fit for the realities of global governance.

First AI Tuesday of the Month

As preparations for the 2027 Geneva AI Summit gather pace, engagement will be key. One practical way to join the conversation is through the ‘First AI Tuesday of the Month’ luncheon series. These informal networking and briefing sessions bring together diplomats, experts, and practitioners to explore three core AI vectors shaping Geneva today. Those vectors are the road to the AI Summit, evolving governance dynamics, and the latest technological developments.

The next session takes place on Tuesday at 13:00, offering participants an opportunity to exchange ideas, build connections, and contribute to a more informed and inclusive AI debate. By marking the first Tuesday of each month in their calendars, stakeholders can take an active step on the Road to Geneva 2027 and help shape a balanced and forward-looking AI agenda.

You can register for the session here.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy orders Amazon to stop processing sensitive employee data after privacy ruling

The Italian data protection authority has ordered Amazon Italia Logistics to halt processing of sensitive employee data after investigators found that the company gathered details ranging from health conditions to union involvement.

Information about workers’ private lives and family members had also been collected, often retained for a decade through internal tracking systems rather than being limited to what labour rules in Italy allow.

Regulators discovered that some data originated from cameras positioned near restrooms and staff break areas, a practice that breached EU privacy standards.

The watchdog concluded that the company’s monitoring went far beyond what employers are permitted to compile when assessing staff performance or workplace needs.

Amazon responded by stressing that protecting employee information remains a priority and said that internal rules and training programmes are designed to ensure compliance. The company added that any findings from the Italian authority would prompt a review of its procedures instead of being dismissed.

The order arrives as Amazon attempts to regain its lobby badges at the European Parliament.

Access was suspended in 2024 after senior representatives declined to attend hearings on warehouse working conditions, and opposition from MEPs continues to place pressure on Parliament President Roberta Metsola to reject reinstatement.

EU moves to enforce digital fairness rules with stronger consumer oversight

Regulatory scrutiny of the EU’s digital fairness framework is set to begin on 1 July as the European Commission moves to tighten its supervision of online platforms.

The initiative forms part of a broader effort to ensure stronger consumer protection across digital markets, with officials signalling stricter oversight of commercial practices that disadvantage users.

The Commission is preparing a major upgrade of its consumer protection framework, expected by December 2026.

The reforms aim to reinforce enforcement tools under the Unfair Commercial Practices Directive and the Consumer Protection Cooperation Regulation, allowing regulators to intervene more effectively when platforms breach fairness standards.

Michael McGrath, Commissioner for Democracy, Justice and Rule of Law, has highlighted the need for greater transparency and accountability as digital markets expand rapidly.

The forthcoming scrutiny focuses on ensuring that platforms respect transparency obligations, avoid manipulating users and provide fair conditions in online transactions.

Regulators seek to replace fragmented enforcement with a more coordinated model that reflects the increasingly cross-border nature of digital commerce.

Stronger consumer safeguards are becoming central to the digital agenda of the EU.

The next phase of reforms is expected to streamline investigations across member states and deliver more predictable outcomes for affected consumers, offering steadier enforcement instead of reactive measures taken after violations escalate.

AI misuse exposed as OpenAI details global disinformation and scam networks

OpenAI said criminal and state-linked groups misused ChatGPT for disinformation, scams and covert influence. Its latest threat report details coordinated account bans and highlights how AI tools are embedded within broader operational workflows rather than used in isolation.

One investigation linked accounts to Chinese law enforcement engaged in what were described as ‘cyber special operations’. Activities included planning influence campaigns, mass-reporting dissidents and drafting forged materials, with related efforts continuing through other tools despite model refusals.

The report also outlined a Cambodia-based romance scam targeting young men in Indonesia through a fake dating agency. Operators combined manual prompting with automated chatbots to sustain conversations and facilitate financial fraud, leading to account removals.

Separately, accounts tied to Russia’s ‘Rybar’ network used ChatGPT to draft and translate posts distributed across multiple platforms. OpenAI noted that campaign impact depended more on account reach and coordination than on AI-generated content alone.

Across China, Russia and parts of Southeast Asia, actors treated AI as one tool among many, alongside fake profiles, paid advertising and forged documents. OpenAI called for cross-industry vigilance, stressing the need to analyse behavioural patterns across platforms.

Meta AI flood of unusable abuse tips overwhelms US investigators

Investigators in the US say that AI used by Meta is flooding child protection units with large volumes of unhelpful reports, thereby draining resources rather than assisting ongoing cases.

Officers in the Internet Crimes Against Children network told a New Mexico court that most alerts generated by the company’s platforms lack essential evidence or contain material that is not criminal, leaving teams unable to progress investigations.

Meta rejects the claim that it prioritises profit, stressing its cooperation with law enforcement and highlighting rapid response times to emergency requests.

Its position is challenged by officers who say the volume of AI-generated alerts has doubled since 2024, particularly after the Report Act broadened reporting obligations.

They argue that adolescent conversations and incomplete data now form a sizeable portion of the alerts, while genuine cases of child sexual abuse material are becoming harder to detect.

Internal company documents disclosed at trial show Meta executives raising concerns as early as 2019 about the impact of end-to-end encryption on the firm’s ability to identify child exploitation and support investigators.

Child safety groups have long warned that encryption could limit early detection, even though Meta says it has introduced new tools designed to operate safely within encrypted environments.

The growing influx of unusable tips is taking a heavy toll on investigative teams. Officers in the US say each report must still be reviewed manually, despite the low likelihood of actionable evidence, and this backlog is diminishing morale at a time when they say resources have not kept pace with demand.

They warn that meaningful cases risk being delayed as units struggle with a workload swollen by AI systems tuned to avoid regulatory penalties rather than investigative value.

UK enforces mandatory ETA as digital border era begins

Non-visa nationals can no longer enter the UK without digital permission, as the country has begun enforcing the mandatory Electronic Travel Authorisation (ETA).

Travellers from 85 nations, including the US, Canada and France, must obtain an ETA before departure; otherwise, airlines will prevent them from boarding rather than allow last-minute checks at the border. The authorisation costs £16 and remains valid for two years or until a passport expires.

British and Irish citizens remain exempt but must present valid proof of status when travelling. Authorities say the scheme brings the UK into line with similar systems used by the US and the EU.

The Home Office emphasises that the measure strengthens border security and supports a modern, efficient entry process designed to benefit both visitors and the wider public.

The requirement also applies to travellers passing through the UK on connecting flights, reinforcing the shift toward a fully digital immigration system.

Over 19 million people have already used the ETA since its launch in 2023, generating significant revenue that is being reinvested in broader border improvements. Officials argue that the momentum paves the way for a future contactless border, supported by the steady transition from physical documents to eVisas.

From 26 February, Certificates of Entitlement will also be issued digitally, creating a single record that no longer expires with a passport.

Most ETA applications are processed automatically within minutes, allowing short-notice trips to remain possible. However, authorities still recommend applying at least three working days in advance to avoid delays for the small number of cases that require additional review.

Colorado targets AI chatbot safety

AI chatbots operating in Colorado would face new child safety and suicide prevention requirements under a bipartisan bill introduced in the Colorado legislature. Lawmakers say the measure responds to parents' concerns about harmful chatbot interactions.

House Bill 1263 would require companies to clearly inform children in Colorado that they are interacting with AI rather than a real person. Platforms would also be barred from offering engagement rewards to child users.

The proposal mandates reasonable safeguards to prevent sexually explicit content and to stop chatbots from encouraging emotional dependence, including romantic role-playing. Parental control options would also be required where services are accessible to children in Colorado.

Companies would need to provide suicide prevention resources when users express self-harm thoughts and report such incidents to the Colorado attorney general. Violations would be treated as consumer protection infractions, carrying fines of up to $1,000 per occurrence in Colorado.

UAE builds sovereign financial cloud

The Central Bank of the UAE has partnered with Abu Dhabi-based AI company Core42 to develop a sovereign financial cloud infrastructure in the UAE. The system is designed to ensure data sovereignty and strengthen protection against cyber threats.

According to the Central Bank of the UAE, the platform will operate on a centralised, highly secure and isolated infrastructure. It aims to support continuous financial services while boosting operational agility across the UAE.

The infrastructure will be powered by AI and provide automation and real-time data analysis for licensed institutions in the UAE. It will also enable unified management of multi-cloud services within a single regulatory framework.

Core42, established by G42 in 2023, said finance must remain sovereign as it relies on digital infrastructure. The Central Bank of the UAE described the project as a key pillar of its financial infrastructure transformation programme.

Conduent breach exposes data of 25 million people across US

More than 25 million people across the United States have had personal information exposed following a ransomware attack on government contractor Conduent. Updated state breach notifications indicate the incident is larger than initially understood.

Conduent provides printing, payment processing, and benefit administration services for state agencies and large corporations. Its systems support food assistance, unemployment benefits, and workplace programmes, reaching more than 100 million individuals, according to the company.

US state disclosures show Oregon and Texas account for most of the affected records, with additional cases reported in Massachusetts, New Hampshire, and Washington. Compromised data includes names, dates of birth, addresses, Social Security numbers, health insurance information, and medical details.

Public information from Conduent has been limited since the January 2025 attack. An incident notice published in October carried a ‘noindex’ tag in its source code, preventing search engines from listing the page, which critics say reduced visibility for affected individuals.

The breach ranks among the largest recent ransomware incidents, though it is smaller than the 2024 Change Healthcare attack that affected 190 million people. Regulators and affected users continue seeking clarity on the Conduent case and its security failures.

Anthropic faces data theft claims from Musk

Elon Musk, CEO of Tesla and xAI, has publicly accused Anthropic of stealing large volumes of data to train its AI models. The allegation was made on X in response to posts referencing Community Notes attached to Anthropic-related content.

Musk claimed the company had engaged in large-scale data theft and suggested that it had paid multi-billion-dollar settlements. Those financial claims remain contested, and no official confirmation has been provided to substantiate the figures.

Anthropic, known for developing the Claude AI model, was founded by former OpenAI employees and promotes an approach centred on AI safety and responsible development. The company has not publicly responded to Musk’s latest accusations.

The dispute reflects a broader conflict across the AI industry over how companies collect the text, images and other materials required to train large language models. Much of this data is scraped from the internet, often without explicit permission from rights holders.

Multiple lawsuits filed by authors, media organisations and software developers are testing whether large-scale scraping qualifies as fair use under copyright law. Court rulings in these cases could reshape licensing practices, impose financial penalties, and alter the economics of AI development.
