How Switzerland can shape AI in 2026

Switzerland is heading into 2026 facing an AI transition marked by uncertainty, and it may not win a raw ‘compute race’ dominated by the biggest hardware buyers. In his blog ‘10 Swiss values and practices for AI & digitalisation in 2026,’ Jovan Kurbalija argues that Switzerland’s best response is to build resilience around an ‘AI Trinity’ of Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, using long-standing Swiss practices as a practical compass rather than a slogan.

A central idea is subsidiarity. When top-down approaches hit limits, Switzerland can push ‘bottom-up AI’ grounded in local knowledge and real community needs. Kurbalija points to practical steps such as turning libraries, post offices, and community centres into AI knowledge hubs, creating apprenticeship-style AI programmes, and offering small grants that help communities develop local AI tools. He also cites a proposal for a ‘Geneva stack’ of sovereign digital tools adopted across public institutions, alongside the notion of a decentralised ‘cyber militia’ capacity for defence.

The blog also leans heavily on entrepreneurship and innovation, especially Switzerland’s SME culture and Zurich’s tech ecosystem. The message for 2026 is to strengthen partnerships between Swiss startups and major global tech firms present in the region, while also connecting more actively with fast-growing digital economy actors from places like India and Singapore.

Instead of chasing moonshots alone, Kurbalija says Switzerland can double down on ‘precision AI’ in areas such as medtech, fintech, and cleantech, and expand its move toward open-source AI tools across the full lifecycle, from models to localised agents.

Another theme is trust and quality, and the challenge of translating Switzerland’s high-trust reputation into the AI era. Beyond cybersecurity, the question is whether Switzerland can help define ‘trustworthy AI,’ potentially even as an international verifier certifying systems.

At the same time, Kurbalija frames quality as a Swiss competitive edge in a world frustrated with low-grade ‘AI slop,’ arguing that better outcomes often depend less on new algorithms and more on well-curated knowledge and data.

He also flags neutrality and sovereignty as issues that will move from abstract debates to urgent policy questions, such as what neutrality means when cyber weapons and AI systems are involved, and how much control a country can realistically keep over data and infrastructure in an interdependent world. He notes that digital sovereignty is a key priority in Switzerland’s 2026 digital strategy, with a likely focus on mapping where critical digital assets are stored and on protecting sensitive domains, such as health, elections, and security, while running local systems when feasible.

Finally, the blog stresses solidarity and resilience as the social and infrastructural foundations of the transition. As AI-driven centralisation risks widening divides, Kurbalija calls for reskilling, support for regions and industries in transition, and digital tools that strengthen social safety nets rather than weaken them.

His bottom line is that Switzerland can’t, and shouldn’t, try to outspend others on hardware. Still, it can choose whether to ‘import the future as a dependency’ or build it as a durable capability, carefully and inclusively, on unmistakably Swiss strengths.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada turns to AI to parse feedback on federal AI strategy consultation

Canada’s Innovation, Science and Economic Development (ISED) department saw an overwhelming volume of comments on its national AI strategy consultation, prompting officials to use AI tools to analyse and organise responses from citizens, organisations and stakeholders.

The consultation was part of a broader effort to shape Canada’s approach to AI governance, regulation and adoption, with the government seeking input on how to balance innovation, competitiveness and responsible AI development.

Analysts and advocates have highlighted Canadians’ demand for strong oversight, transparency, and protections covering privacy and data, misinformation, and the ethical use of AI.

Public interest groups have urged that rights, privacy and sustainability be central pillars of the AI strategy rather than secondary considerations, and recommended risk-based, people-centred regulations similar to frameworks adopted in other jurisdictions.

The use of AI to process feedback illustrates both the scale of engagement and the government’s willingness to employ the very technology it seeks to govern in drafting its policy.
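
The article does not specify which tools ISED used. As a purely illustrative sketch of the general technique (grouping free-text responses by semantic similarity so analysts can review themes rather than thousands of raw comments), the snippet below embeds sample comments with the sentence-transformers library and clusters them with scikit-learn; the model name and cluster count are arbitrary choices for the example.

```python
# Illustrative sketch only: grouping free-text consultation responses by
# semantic similarity. This is a generic technique, not the tooling ISED used.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

comments = [
    "We need stronger privacy protections before AI is deployed in government.",
    "Canada should prioritise AI adoption to stay competitive.",
    "Transparency about training data should be mandatory.",
    "Support innovation, but require risk assessments for high-impact systems.",
]

# Encode each comment into a dense vector that captures its meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(comments)

# Cluster similar comments so reviewers can work theme by theme.
labels = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(embeddings)

for label, comment in sorted(zip(labels, comments)):
    print(label, comment)
```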

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU lawmakers push for limits on AI nudification apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification applications are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving any legitimate creative purpose.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok faces investigation over deepfake abuse claims

California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.

Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.

Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.

‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.

The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia marks 25 years with new global tech partnerships

Wikipedia marked its 25th anniversary by showcasing the rapid expansion of Wikimedia Enterprise and its growing tech partnerships. The milestone reflects Wikipedia’s evolution into one of the most trusted and widely used knowledge sources in the digital economy.

Amazon, Meta, Microsoft, Mistral AI, and Perplexity have joined the partner roster for the first time, alongside Google, Ecosia, and several other companies already working with Wikimedia Enterprise.

These organisations integrate human-curated Wikipedia content into search engines, AI models, voice assistants, and data platforms, helping deliver verified knowledge to billions of users worldwide.

Wikipedia remains one of the top ten most visited websites globally and the only one in that group operated by a non-profit organisation. With over 65 million articles in 300+ languages, the platform is a key dataset for training large language models.

Wikimedia Enterprise provides structured, high-speed access to this content through on-demand, snapshot, and real-time APIs, allowing companies to use Wikipedia data at scale while supporting its long-term sustainability.
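
For readers curious what that API access looks like in practice, here is a minimal sketch of a client requesting a single structured article over the On-demand API. The base URL, HTTP method, request body, and response field names are assumptions based on Wikimedia Enterprise’s public documentation rather than verified values, so treat this as an illustration of the integration pattern, not a working client.

```python
# Minimal sketch of an On-demand API call to Wikimedia Enterprise.
# Endpoint path, request shape, and field names are assumptions; consult the
# official documentation at enterprise.wikimedia.com for the real contract.
import requests

API_BASE = "https://api.enterprise.wikimedia.com/v2"  # assumed base URL
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"                    # issued after signing up


def fetch_article(title: str, language: str = "en") -> list[dict]:
    """Request the structured record(s) for one article title."""
    response = requests.post(
        f"{API_BASE}/articles/{title}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"filters": [{"field": "in_language.identifier", "value": language}]},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for record in fetch_article("Artificial_intelligence"):
        # Field names below are illustrative; the actual schema may differ.
        print(record.get("name"), record.get("date_modified"))
```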

As Wikipedia continues to expand into new languages and subject areas, its value for AI development, search, and specialised knowledge applications is expected to grow further.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cerebras to supply large-scale AI compute for OpenAI

OpenAI has agreed to purchase up to 750 megawatts of computing power from AI chipmaker Cerebras over the next three years. The deal, announced on 14 January, is expected to be worth more than US$10 billion and will support ChatGPT and other AI services.

Cerebras will provide cloud services powered by its wafer-scale chips, which are designed to run large AI models more efficiently than traditional GPUs. OpenAI plans to use the capacity primarily for inference and reasoning models that require high compute.

Cerebras will build or lease data centres filled with its custom hardware, with computing capacity coming online in stages through 2028. OpenAI said the partnership would help improve the speed and responsiveness of its AI systems as user demand continues to grow.

The deal is also essential for Cerebras as it prepares for a second attempt at a public listing, following a 2025 IPO that was postponed. Diversifying its customer base beyond major backers such as UAE-based G42 could strengthen its financial position ahead of a potential 2026 flotation.

The agreement highlights the wider race among AI firms to secure vast computing resources, as investment in AI infrastructure accelerates. However, some analysts have warned that soaring valuations and heavy spending could resemble past technology bubbles.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gemini gains new features through Personal Intelligence

A new beta feature has been launched in the United States that lets users personalise the Gemini assistant by connecting Google apps such as Gmail, Photos, YouTube and Search. The tool, called Personal Intelligence, is designed to make the service more proactive and context-aware.

When enabled, Personal Intelligence allows Gemini to reason across a user’s emails, photos, and search history to answer questions or retrieve specific details. Google says users remain in control of which apps are connected and can turn the feature off at any time.

The company showed how Gemini can use connected data to offer tailored suggestions, such as identifying vehicle details from Photos or recommending trips based on past travel.

Google said the system includes privacy safeguards. Personal Intelligence is turned off by default, and Gemini does not train on users’ Gmail inboxes or photo libraries.

The beta is rolling out to Google AI Pro and AI Ultra subscribers in the US and will work across web, Android, and iOS. Google plans to expand access over time and bring the feature to more countries and users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia H200 chip sales to China cleared by US administration

The US administration has approved the export of Nvidia’s H200 AI chips to China, reversing years of tight US restrictions on advanced AI hardware. The Nvidia H200 chips represent the company’s second-most-powerful chip series and were previously barred from sale due to national security concerns.

The US president announced the move last month, linking approval to a 25 per cent fee payable to the US government. The administration said the policy balances economic competitiveness with security interests, while critics warned it could strengthen China’s military and surveillance capabilities.

Under the new rules, Nvidia H200 chips may be shipped to China only after third-party testing verifies their performance. Chinese buyers are limited to 50 per cent of the volume sold to US customers and must provide assurances that the chips will not be used for military purposes.

Nvidia welcomed the decision, saying it would support US jobs and global competitiveness. However, analysts questioned whether the safeguards can be effectively enforced, noting that Chinese firms have previously accessed restricted technologies through intermediaries.

Chinese companies have reportedly ordered more than two million Nvidia H200 chips, far exceeding the chipmaker’s current inventory. The scale of demand has intensified debate over whether the policy will limit China’s AI ambitions or accelerate its access to advanced computing power.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New TranslateGemma models support 55 languages efficiently

A new suite of open translation models, TranslateGemma, has been launched, bringing advanced multilingual capabilities to users worldwide. Built on the Gemma 3 architecture, the models support 55 languages and come in 4B, 12B, and 27B parameter sizes.

The release aims to make high-quality translation accessible across devices without compromising efficiency.

TranslateGemma delivers impressive performance gains, with the 12B model surpassing the 27B Gemma 3 baseline on WMT24++ benchmarks. The models achieve higher accuracy while requiring fewer parameters, enabling faster translations with lower latency.

The 4B model also performs on par with larger models, making it ideal for mobile deployment.

The development combines supervised fine-tuning on diverse parallel datasets with reinforcement learning guided by advanced metrics. TranslateGemma performs well in high- and low-resource languages and supports accurate text translation within images.

Designed for flexible deployment, the models cater to mobile devices, consumer laptops, and cloud environments. Researchers and developers can use TranslateGemma to build customised translation solutions and improve coverage for low-resource languages.
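
As a rough illustration of how open checkpoints like these are usually consumed, the sketch below loads a model with the Hugging Face transformers library and prompts it for a single translation. The model identifier and prompt format are assumptions (the release details are not given here); the official model card would specify the exact repository names and chat template.

```python
# Minimal sketch: running a TranslateGemma-style checkpoint locally with the
# Hugging Face transformers library. The model ID below is an assumption.
from transformers import pipeline

MODEL_ID = "google/translategemma-4b-it"  # assumed Hugging Face repository name

# device_map="auto" places the model on a GPU if one is available.
translator = pipeline("text-generation", model=MODEL_ID, device_map="auto")

prompt = (
    "Translate the following text from English to German:\n"
    "High-quality translation should be accessible on everyday devices."
)
result = translator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```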

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qalb brings Urdu-language AI to Pakistan

Pakistan has launched its own Urdu-focused generative AI model, Qalb, trained on 1.97 billion tokens and evaluated across more than seven international benchmarking frameworks. The developers say the model outperforms existing Urdu-language systems on key real-world performance indicators.

With Urdu spoken by over 230 million people worldwide, Qalb aims to expand access to advanced AI tools in Pakistan’s national language. The model is designed to support local businesses, startups, education platforms, digital services, and voice-based AI agents.

Qalb was developed by a small team led by Taimoor Hassan, a serial entrepreneur who has launched and exited multiple startups and previously won the Microsoft Cup. He completed his undergraduate studies in computer science in Pakistan and is currently pursuing postgraduate education in the United States.

‘I had the opportunity to contribute in a small way to a much bigger mission for the country,’ Hassan said, noting that the project was built with his former university teammates Jawad Ahmed and Muhammad Awais. The group plans to continue refining localised AI models for specific industries.

The launch of Qalb highlights how smaller teams can develop advanced AI tools outside major technology hubs. Supporters say Urdu-first models could help drive innovation across Pakistan’s digital economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!