Armenia plans major AI hub with NVIDIA and Firebird

Armenia has unveiled plans to develop a $500mn AI supercomputing hub in partnership with US tech leader NVIDIA, AI cloud firm Firebird, and local telecoms group Team.

Announced at the Viva Technology conference in Paris, the initiative marks the largest tech investment ever seen in the South Caucasus.

Due to open in 2026, the facility will house thousands of NVIDIA’s Blackwell GPUs and offer more than 100 megawatts of scalable computing power. Designed to advance AI research, training and entrepreneurship, the hub aims to position Armenia as a leading player in global AI development.

Prime Minister Nikol Pashinyan described the project as the ‘Stargate of Armenia’, underscoring its potential to transform the national tech sector.

Firebird CEO Razmig Hovaghimian said the hub would help develop local talent and attract international attention, while the Afeyan Foundation, led by Noubar Afeyan, is set to come on board as a founding investor.

Beyond funding, the Armenian government will also provide land, tax breaks and simplified regulation to support the project, strengthening its push toward a competitive digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI turns to Google Cloud in shift from solo AI race

OpenAI has entered into an unexpected partnership with Google, using Google Cloud to support its growing AI infrastructure needs.

Despite being fierce competitors in AI, the two tech giants recognise that long-term success may require collaboration instead of isolation.

As demand for high-performance hardware soars, traditional rivals are joining forces to keep pace. OpenAI, previously backed heavily by Microsoft, now draws on Google’s vast cloud resources, hinting at a changing attitude in the AI race.

Rather than going it alone, firms may benefit more by leveraging each other’s strengths to accelerate development.

Google CEO Sundar Pichai, speaking on a podcast, suggested there is room for multiple winners in the AI sector. He even noted that a major competitor had ‘invited me to a dance’, underscoring a new phase of pragmatic cooperation.

While Google still faces threats to its search dominance from tools like ChatGPT, business incentives may override rivalry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Switzerland’s unique AI path: Blending innovation, governance, and local empowerment

In his recent blog post ‘Advancing Swiss AI Trinity: Zurich’s entrepreneurship, Geneva’s governance, and Communal subsidiarity,’ Jovan Kurbalija proposes a distinctive roadmap for Switzerland to navigate the rapidly evolving landscape of AI. Rather than mimicking the AI power plays of the United States or China, Kurbalija argues that Switzerland can lead by integrating three national strengths: Zurich’s thriving innovation ecosystem, Geneva’s global leadership in governance, and the country’s foundational principle of subsidiarity rooted in local decision-making.

Zurich, already a global tech hub, is positioned to drive cutting-edge development through its academic excellence and robust entrepreneurial culture. Institutions like ETH Zurich and the presence of major tech firms provide a fertile ground for collaborations that turn research into practical solutions.

With AI tools becoming increasingly accessible, Kurbalija emphasises that success now depends on how societies harness the interplay of human and machine intelligence—a field where Switzerland’s education and apprenticeship systems give it a competitive edge. Meanwhile, Geneva is called upon to spearhead balanced international governance and standard-setting for AI.

Kurbalija stresses that AI policy must go beyond abstract discussions and address real-world issues—health, education, the environment—by embedding AI tools in global institutions and negotiations. He notes that Geneva’s experience in multilateral diplomacy and technical standardisation offers a strong foundation for shaping ethical, inclusive AI frameworks.

The third pillar—subsidiarity—empowers Swiss cantons and communities to develop AI that reflects local values and needs. By supporting grassroots innovation through mini-grants, reimagining libraries as AI learning hubs, and embedding AI literacy from primary school to professional training, Switzerland can build an AI model that is democratic and inclusive.

Why does it matter?

Kurbalija’s call to action is clear: with its tools, talent, and traditions aligned, Switzerland must act now to chart a future where AI serves society, not the other way around.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Gemini now summarises PDFs with actionable prompts in Drive

Google is expanding Gemini’s capabilities by allowing the AI assistant to summarise PDF documents directly in Google Drive—and it’s doing more than just generating summaries.

Users will now see clickable suggestions like drafting proposals or creating interview questions based on resume content, making Gemini a more proactive productivity tool.

The update builds on earlier integrations of Gemini in Drive, which now surface pop-up summaries and action prompts when a PDF is opened.

Users with smart features and personalization turned on will notice a new preview window interface, eliminating the need to open a separate tab.

Gemini’s PDF summaries support more than 20 languages and will roll out gradually over the next two weeks.

The feature supports personal and business accounts, including Business Standard/Plus users, Enterprise tiers, Gemini Education, and Google AI Pro and Ultra plans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s Huang: ‘The new programming language is human’

Speaking at London Tech Week, Nvidia CEO Jensen Huang called AI ‘the great equaliser,’ explaining how AI has transformed who can access and control computing power.

In the past, computing was limited to a select few with technical skills in languages like C++ or Python. ‘We had to learn programming languages. We had to architect it. We had to design these computers that are very complicated,’ Huang said.

That’s no longer necessary, he explained. ‘Now, all of a sudden, there’s a new programming language. This new programming language is called “human”,’ Huang said, highlighting how AI now understands natural language commands. ‘Most people don’t know C++, very few people know Python, and everybody, as you know, knows human.’

He illustrated his point with an example: asking an AI to write a poem in the style of Shakespeare. The AI delivers, he said—and if you ask it to improve, it will reflect and try again, just like a human collaborator.

For Huang, this shift is not just technical but transformational. It makes the power of advanced computing accessible to billions, not just a trained few.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% of UK healthcare organisations rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams see it as a priority, falling from 33% in 2024 to 24% in 2025, even though the share reporting data breaches rose sharply from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions are becoming emotional lifelines

Researchers at Waseda University found that three in four users turn to AI for emotional advice, reflecting growing psychological attachment to chatbot companions. Their new tool, the Experiences in Human-AI Relationships Scale, reveals that many users see AI as a steady presence in their lives.

Two patterns of attachment emerged: anxiety, where users fear being emotionally let down by AI, and avoidance, marked by discomfort with emotional closeness. These patterns closely resemble human relationship styles, despite AI’s inability to reciprocate or abandon its users.

Lead researcher Fan Yang warned that emotionally vulnerable individuals could be exploited by platforms encouraging overuse or financial spending. Sudden disruptions in service, he noted, might even trigger feelings akin to grief or separation anxiety.

The study, based on Chinese participants, suggests AI systems might shape user behaviour depending on design and cultural context. Further research is planned to explore links between AI use and long-term well-being, social function, and emotional regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom’, malware designed not merely to infect the device but to intercept and manipulate a user’s internet traffic.

The attackers purchased ads appearing in Google search results for ‘deep seek r1’. Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.
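
To illustrate the contrast, here is a minimal sketch of what a legitimate local setup typically looks like, assuming Python with the Hugging Face transformers library; the model checkpoint named below is illustrative, not an official installation procedure. The point is that running an open-weight model is a multi-step software workflow, not a single downloadable executable.

```python
# Minimal sketch of a typical local LLM setup (assumes Python, the
# `transformers` library, and an illustrative distilled R1 checkpoint;
# this is not DeepSeek's official installation procedure).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # illustrative checkpoint

# Each step downloads gigabytes of weights and requires a working Python
# environment and, realistically, a capable GPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the risks of downloading software from unofficial sites."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

None of this resembles clicking through an installer wizard, which is precisely why a polished AI_Launcher_1.21.exe should have raised suspicion.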

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!