Santa Clara offers AI training with Silicon Valley focus

Santa Clara University has launched a new master’s programme in AI designed to equip students with technical expertise and ethical insight.

The interdisciplinary degree, offered through the School of Engineering, blends software and hardware tracks to address the growing need for professionals who can manage AI systems responsibly.

The course offers two concentrations: one focusing on algorithms and computation for computer science students and another tailored to engineering students interested in robotics, devices, and AI chip design. Students will also engage in real-world practicums with Silicon Valley companies.

Faculty say the programme integrates ethical training into its core, aiming to produce graduates who can develop intelligent technologies with social awareness. As AI tools increasingly shape society and education, the university hopes to prepare students for both innovation and accountability.

Professor Yi Fang, director of the Responsible AI initiative, said students will leave with a deeper understanding of AI’s societal impact. The initiative reflects a broader trend in higher education, where demand for AI-related skills continues to rise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia’s Huang: ‘The new programming language is human’

Speaking at London Tech Week, Nvidia CEO Jensen Huang called AI ‘the great equaliser,’ explaining how AI has transformed who can access and control computing power.

In the past, computing was limited to a select few with technical skills in languages like C++ or Python. ‘We had to learn programming languages. We had to architect it. We had to design these computers that are very complicated,’ Huang said.

That’s no longer necessary, he explained. ‘Now, all of a sudden, there’s a new programming language. This new programming language is called ‘human’,’ Huang said, highlighting how AI now understands natural language commands. ‘Most people don’t know C++, very few people know Python, and everybody, as you know, knows human.’

He illustrated his point with an example: asking an AI to write a poem in the style of Shakespeare. The AI delivers, he said—and if you ask it to improve, it will reflect and try again, just like a human collaborator.

For Huang, this shift is not just technical but transformational. It makes the power of advanced computing accessible to billions, not just a trained few.

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams now rank it as a priority, falling from 33% in 2024 to 24% in 2025. The decline comes despite a sharp rise in reported data breaches, up from 71% to 84% over the same period.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

AI companions are becoming emotional lifelines

Researchers at Waseda University found that three in four users turn to AI for emotional advice, reflecting growing psychological attachment to chatbot companions. Their new tool, the Experiences in Human-AI Relationships Scale, reveals that many users see AI as a steady presence in their lives.

Two patterns of attachment emerged: anxiety, where users fear being emotionally let down by AI, and avoidance, marked by discomfort with emotional closeness. These patterns closely resemble human relationship styles, despite AI’s inability to reciprocate or abandon its users.

Lead researcher Fan Yang warned that emotionally vulnerable individuals could be exploited by platforms encouraging overuse or financial spending. Sudden disruptions in service, he noted, might even trigger feelings akin to grief or separation anxiety.

The study, based on Chinese participants, suggests AI systems might shape user behaviour depending on design and cultural context. Further research is planned to explore links between AI use and long-term well-being, social function, and emotional regulation.

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

CoreWeave expands AI infrastructure with Google tie‑up

CoreWeave has secured a pivotal role in Google Cloud’s new infrastructure partnership with OpenAI. The specialist GPU cloud provider will supply Nvidia‑based compute resources to Google, which will allocate them to OpenAI to support the rising demand for services like ChatGPT.

Already under an $11.9 billion, five‑year contract with OpenAI and backed by a $350 million equity investment, CoreWeave recently expanded that deal further.

Adding Google Cloud as a customer helps the company diversify beyond Microsoft, its top client in 2024.

The arrangement positions Google as a neutral provider of AI computing power amid fierce competition with Amazon and Microsoft.

CoreWeave’s stock has surged over 270 percent since its March IPO, illustrating investor confidence in its expanding role in the AI infrastructure boom.

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

AI video tool Veo 3 Fast rolls out for Gemini and Flow users

Google has introduced Veo 3 Fast, a speedier version of its AI video-generation tool that promises to cut production time in half.

Now available to Gemini Pro and Flow Pro users, the updated model creates 720p videos more than twice as fast as its predecessor—marking a step forward in scaling Google’s video AI infrastructure.

Gemini Pro subscribers can now generate three Veo 3 Fast videos daily as part of their plan. Meanwhile, Flow Pro users can create videos using 20 credits per clip, significantly reducing costs compared to previous models. Gemini Ultra subscribers enjoy even more generous limits under their premium tier.

The upgrade is more than a performance boost. According to Google’s Josh Woodward, the improved infrastructure also paves the way for smoother playback and better subtitles—enhancements that aim to make video creation more seamless and accessible.

Google is also testing voice prompt capabilities, allowing users to speak their video ideas and watch them materialise on-screen.

Although Veo 3 Fast is currently limited to 720p resolution, it encourages creativity through rapid iteration. Users can experiment with prompts and edits without perfecting their first try.

While the results won’t rival Hollywood, the model opens up new possibilities for businesses, creators, and filmmakers looking to quickly prototype video ideas or produce content without traditional filming.

Czechia bids to host major EU AI computing centre

Czechia is positioning itself to host one of the European Union’s planned AI ‘gigafactories’—large-scale computing centres designed to strengthen Europe’s AI capabilities and reduce dependence on global powers like the United States.

Jan Kavalírek, the Czech government’s AI envoy, confirmed to the Czech News Agency that talks with a private investor are progressing well and potential locations have already been identified.

While the application for EU funding is not yet final, Kavalírek said, ‘We are very close.’ The EU has allocated around €20 billion for these AI infrastructure projects, with significant contributions also expected from private sources.

Germany and Denmark are also vying to host similar facilities. If successful, Czechia’s bid could transform the country into a key AI infrastructure hub for Europe, offering powerful computational resources for sectors such as public administration, healthcare, and finance.

Lukáš Benzl, director of the Czech Association of Artificial Intelligence, described the initiative as a potential ‘motor for the AI economy’ across the continent.

Canva makes AI use mandatory in coding interviews

Australian design giant Canva has revamped its technical interview process to reflect modern software development, requiring job candidates to demonstrate their ability to use AI coding assistants.

The shift aims to better assess how candidates would perform on the job, where tools like Copilot and Claude are already part of engineers’ daily workflows.

Previously, interviews focused on coding fundamentals without assistance. Now, candidates must solve engineering problems using AI tools in ways that reflect real-world scenarios, demanding effective prompting and judgement rather than simply getting correct outputs.

The change follows internal experiments where Canva found that AI could easily handle traditional interview questions. Company leaders argue that the old approach no longer measured actual job readiness, given that many engineers rely on AI to navigate codebases and accelerate prototyping.

By integrating AI into hiring, Canva joins many firms that are adapting to a tech workforce increasingly shaped by intelligent automation. The company says the goal is not to test if candidates know how to use AI but how well they use it to build solutions.
