Switzerland’s unique AI path: Blending innovation, governance, and local empowerment

In his recent blog post ‘Advancing Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and Communal subsidiarity,’ Jovan Kurbalija proposes a distinctive roadmap for Switzerland to navigate the rapidly evolving landscape of AI. Rather than mimicking the AI power plays of the United States or China, Kurbalija argues that Switzerland can lead by integrating three national strengths: Zurich’s thriving innovation ecosystem, Geneva’s global leadership in governance, and the country’s foundational principle of subsidiarity rooted in local decision-making.

Zurich, already a global tech hub, is positioned to drive cutting-edge development through its academic excellence and robust entrepreneurial culture. Institutions like ETH Zurich and the presence of major tech firms provide a fertile ground for collaborations that turn research into practical solutions.

With AI tools becoming increasingly accessible, Kurbalija emphasises that success now depends on how societies harness the interplay of human and machine intelligence—a field where Switzerland’s education and apprenticeship systems give it a competitive edge. Meanwhile, Geneva is called upon to spearhead balanced international governance and standard-setting for AI.

Kurbalija stresses that AI policy must go beyond abstract discussions and address real-world issues—health, education, the environment—by embedding AI tools in global institutions and negotiations. He notes that Geneva’s experience in multilateral diplomacy and technical standardisation offers a strong foundation for shaping ethical, inclusive AI frameworks.

The third pillar—subsidiarity—empowers Swiss cantons and communities to develop AI that reflects local values and needs. By supporting grassroots innovation through mini-grants, reimagining libraries as AI learning hubs, and embedding AI literacy from primary school to professional training, Switzerland can build an AI model that is democratic and inclusive.

Why does it matter?

Kurbalija’s call to action is clear: with its tools, talent, and traditions aligned, Switzerland must act now to chart a future where AI serves society, not the other way around.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Gemini now summarizes PDFs with actionable prompts in Drive

Google is expanding Gemini’s capabilities by allowing the AI assistant to summarize PDF documents directly in Google Drive—and it’s doing more than just generating summaries.

Users will now see clickable suggestions like drafting proposals or creating interview questions based on resume content, making Gemini a more proactive productivity tool.

The update builds on earlier integrations of Gemini in Drive, which already surface pop-up summaries and action prompts when a PDF is opened.

Users with smart features and personalization turned on will notice a new preview window interface, eliminating the need to open a separate tab.

Gemini’s PDF summaries are now available in over 20 languages and will gradually roll out over the next two weeks.

The feature supports personal and business accounts, including Business Standard/Plus users, Enterprise tiers, Gemini Education, and Google AI Pro and Ultra plans.


Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.


Nvidia’s Huang: ‘The new programming language is human’

Speaking at London Tech Week, Nvidia CEO Jensen Huang called AI ‘the great equaliser,’ explaining how AI has transformed who can access and control computing power.

In the past, computing was limited to a select few with technical skills in languages like C++ or Python. ‘We had to learn programming languages. We had to architect it. We had to design these computers that are very complicated,’ Huang said.

That’s no longer necessary, he explained. ‘Now, all of a sudden, there’s a new programming language. This new programming language is called ‘human’,’ Huang said, highlighting how AI now understands natural language commands. ‘Most people don’t know C++, very few people know Python, and everybody, as you know, knows human.’

He illustrated his point with an example: asking an AI to write a poem in the style of Shakespeare. The AI delivers, he said—and if you ask it to improve, it will reflect and try again, just like a human collaborator.

For Huang, this shift is not just technical but transformational. It makes the power of advanced computing accessible to billions, not just a trained few.


UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, the share of IT teams treating it as a priority has fallen from 33% in 2024 to 24% in 2025, even as reported data breaches rose sharply, from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.


AI companions are becoming emotional lifelines

Researchers at Waseda University found that three in four users turn to AI for emotional advice, reflecting growing psychological attachment to chatbot companions. Their new tool, the Experiences in Human-AI Relationships Scale, reveals that many users see AI as a steady presence in their lives.

Two patterns of attachment emerged: anxiety, where users fear being emotionally let down by AI, and avoidance, marked by discomfort with emotional closeness. These patterns closely resemble human relationship styles, despite AI’s inability to reciprocate or abandon its users.

Lead researcher Fan Yang warned that emotionally vulnerable individuals could be exploited by platforms encouraging overuse or financial spending. Sudden disruptions in service, he noted, might even trigger feelings akin to grief or separation anxiety.

The study, based on Chinese participants, suggests AI systems might shape user behaviour depending on design and cultural context. Further research is planned to explore links between AI use and long-term well-being, social function, and emotional regulation.


Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down, but the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require hands-on setup, typically involving several configuration steps, and are not distributed as a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.
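One tell in this campaign was that the fake site embedded the brand name inside an unrelated registrable domain (deepseek-platform[.]com rather than the vendor's own domain). As a rough illustration only, and not part of any product mentioned here, a simple check against an assumed table of official domains can flag that kind of lookalike:

```python
# Hypothetical example: flag domains that embed a brand name but are not
# the brand's official registrable domain or one of its subdomains.
OFFICIAL = {"deepseek": "deepseek.com"}  # assumed official domain for illustration

def is_lookalike(domain: str, brand: str) -> bool:
    """Return True if `domain` contains the brand name but does not
    belong to the brand's official domain."""
    domain = domain.lower().rstrip(".")
    official = OFFICIAL[brand]
    if domain == official or domain.endswith("." + official):
        return False  # the official site or a genuine subdomain
    return brand in domain  # brand name embedded in an unrelated domain

print(is_lookalike("deepseek-platform.com", "deepseek"))  # True: lookalike
print(is_lookalike("www.deepseek.com", "deepseek"))       # False: official
```

Real-world checks are more involved (punycode homoglyphs, typosquats without the exact brand string), but the core habit is the same: compare the registrable domain against a known-good list before downloading.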


CoreWeave expands AI infrastructure with Google tie‑up

CoreWeave has secured a pivotal role in Google Cloud’s new infrastructure partnership with OpenAI. The specialist GPU cloud provider will supply Nvidia‑based compute resources to Google, which will allocate them to OpenAI to support the rising demand for services like ChatGPT.

Already under an $11.9 billion, five‑year contract with OpenAI and backed by a $350 million equity investment, CoreWeave recently expanded the deal further.

Adding Google Cloud as a customer helps the company diversify beyond Microsoft, its top client in 2024.

The arrangement positions Google as a neutral provider of AI computing power amid fierce competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270 percent since its March IPO, illustrating investor confidence in its expanding role in the AI infrastructure boom.


Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising their ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.


AI video tool Veo 3 Fast rolls out for Gemini and Flow users

Google has introduced Veo 3 Fast, a speedier version of its AI video-generation tool that promises to cut production time in half.

Now available to Gemini Pro and Flow Pro users, the updated model creates 720p videos more than twice as fast as its predecessor—marking a step forward in scaling Google’s video AI infrastructure.

Gemini Pro subscribers can now generate three Veo 3 Fast videos daily as part of their plan. Meanwhile, Flow Pro users can create videos using 20 credits per clip, significantly reducing costs compared to previous models. Gemini Ultra subscribers enjoy even more generous limits under their premium tier.

The upgrade is more than a performance boost. According to Google’s Josh Woodward, the improved infrastructure also paves the way for smoother playback and better subtitles—enhancements that aim to make video creation more seamless and accessible.

Google is also testing voice prompt capabilities, allowing users to speak their video ideas and watch them materialise on-screen.

Although Veo 3 Fast is currently limited to 720p resolution, it encourages creativity through rapid iteration. Users can experiment with prompts and edits without perfecting their first try.

While the results won’t rival Hollywood, the model opens new possibilities for businesses, creators, and filmmakers looking to quickly prototype video ideas or produce content without traditional filming.
