Nvidia’s Huang: ‘The new programming language is human’

Speaking at London Tech Week, Nvidia CEO Jensen Huang called AI ‘the great equaliser,’ explaining how AI has transformed who can access and control computing power.

In the past, computing was limited to a select few with technical skills in languages like C++ or Python. ‘We had to learn programming languages. We had to architect it. We had to design these computers that are very complicated,’ Huang said.

That’s no longer necessary, he explained. ‘Now, all of a sudden, there’s a new programming language. This new programming language is called ‘human’,’ Huang said, highlighting how AI now understands natural language commands. ‘Most people don’t know C++, very few people know Python, and everybody, as you know, knows human.’

He illustrated his point with an example: asking an AI to write a poem in the style of Shakespeare. The AI delivers, he said—and if you ask it to improve, it will reflect and try again, just like a human collaborator.

For Huang, this shift is not just technical but transformational. It makes the power of advanced computing accessible to billions, not just a trained few.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams now rank it as a priority: the share fell from 33% in 2024 to 24% in 2025, even as reported data breaches rose sharply from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.


Ghana to bridge the digital divide with fairer data pricing

Ghana will boost mobile data bundle values from July 2025 to improve affordability and bridge the digital divide. The Minister for Communication, Digital Technology and Innovations announced that all major mobile network operators in Ghana (AirtelTigo, Telecel and MTN) will implement a minimum 10% increase in data bundle volumes.

MTN will go further, increasing bundles by 15% and reinstating its popular GHC399 Social Media bundle. The changes aim to address consumer concerns about data pricing and improve value for money.

To support this initiative, telecom providers have pledged significant investments. AirtelTigo, Telecel, and MTN will collectively invest around $150 million in network upgrades by the end of 2025. The National Communications Authority (NCA) will step up its oversight, conducting a nationwide quality of service assessment in the final quarter of 2025.

Additionally, quarterly billing integrity tests will be introduced to ensure that users are charged fairly and accurately. Operators failing to meet service standards will face sanctions. Furthermore, the Minister noted that tax rationalisation could lead to future reductions in data prices. A new telecom tariff framework is under development, which may result in additional cost savings for consumers.

The reforms target steep, uneven data prices that still block many Ghanaians from online services, especially in rural areas. By raising bundle values and tightening oversight, authorities aim to make internet access fairer and more affordable nationwide.


India urges preference for state telecom providers

The Department of Telecommunications (DoT) in India has introduced a policy urging all state governments and Union Territories to prioritise state-run telecom operators Bharat Sanchar Nigam Ltd (BSNL) and Mahanagar Telephone Nigam Ltd (MTNL) for their communication needs. Although not legally binding, the directive emphasises data security as a key reason for favouring these public sector providers.

The DoT Secretary underscored the increasing competitiveness of BSNL and MTNL, noting that BSNL now manages MTNL’s operations and will set up a dedicated nodal point to cater to state governments efficiently. The move marks a significant strategic shift toward promoting state-owned telecom companies in government communications.

The policy has raised concerns among private telecom companies, who fear losing valuable government contracts to BSNL and MTNL. Private providers currently hold over 92% of the market’s revenue, and government contracts are especially important for smaller ISPs with tight margins. Diverting these contracts could significantly hurt their financial stability.

BSNL and MTNL were initially created to operate independently and compete fairly with private firms. The new policy favouring them risks undermining that independence and disrupting the competitive balance of India’s telecom sector.


AI companions are becoming emotional lifelines

Researchers at Waseda University found that three in four users turn to AI for emotional advice, reflecting growing psychological attachment to chatbot companions. Their new tool, the Experiences in Human-AI Relationships Scale, reveals that many users see AI as a steady presence in their lives.

Two patterns of attachment emerged: anxiety, where users fear being emotionally let down by AI, and avoidance, marked by discomfort with emotional closeness. These patterns closely resemble human relationship styles, despite AI’s inability to reciprocate or abandon its users.

Lead researcher Fan Yang warned that emotionally vulnerable individuals could be exploited by platforms encouraging overuse or financial spending. Sudden disruptions in service, he noted, might even trigger feelings akin to grief or separation anxiety.

The study, based on Chinese participants, suggests AI systems might shape user behaviour depending on design and cultural context. Further research is planned to explore links between AI use and long-term well-being, social function, and emotional regulation.


Brazilian telcos to push back on network fee ban

Brazilian telecom operators strongly oppose a bill that would ban charging network fees to big tech companies, arguing that these companies consume most of the network traffic, about 80% of mobile and 55% of fixed usage. The telcos propose a compromise where big techs either pay for usage above a set threshold or contribute a portion of their revenues to help fund network infrastructure expansion.

While internet companies claim they already invest heavily in infrastructure such as submarine cables and content delivery networks, telcos view the bill as unconstitutional economic intervention but prefer to reach a negotiated agreement rather than pursue legal battles. In addition, telcos are advocating for the renewal of existing tax exemptions on Internet of Things (IoT) devices and connectivity fees, which are set to expire in 2025.

These exemptions have supported significant growth in IoT applications across sectors like banking and agribusiness, with non-human connections such as sensors and payment machines now driving mobile network growth more than traditional phone lines. Although the federal government aims to reduce broad tax breaks, Congress’s outlook favours maintaining these IoT incentives to sustain connectivity expansion.

Discussions are also underway about expanding the regulatory scope of Brazil’s telecom watchdog, Anatel, to cover additional digital infrastructure elements such as DNS services, internet exchange points, content delivery networks, and cloud platforms. That potential expansion would require amendments to Brazil’s internet civil rights and telecommunications frameworks, reflecting evolving priorities in managing the country’s digital infrastructure and services.


NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled ‘AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems’, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.
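
The third risk, data drift, is often caught with routine statistical monitoring rather than security tooling. A minimal sketch of the idea (the mean-shift test and the threshold are illustrative assumptions, not drawn from the CSI):

```python
# Minimal data-drift check (illustrative assumptions, not from the CSI):
# flag a feature whose mean has moved more than `threshold` reference
# standard deviations away from the trusted baseline.
from statistics import mean, stdev

def drifted(reference: list[float], current: list[float],
            threshold: float = 3.0) -> bool:
    """Return True if `current` has drifted from `reference`."""
    sigma = stdev(reference)
    if sigma == 0:
        # Constant baseline: any change in the mean counts as drift.
        return mean(current) != mean(reference)
    return abs(mean(current) - mean(reference)) / sigma > threshold
```

Production monitoring typically uses richer tests (population stability index, Kolmogorov–Smirnov), but the principle is the same: compare incoming data against a trusted baseline and alert when the distribution moves.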

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.
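
The integrity-verification step of that layered approach can be sketched in a few lines. The function names and workflow below are assumptions for the example, not taken from the CSI itself: record a dataset’s SHA-256 digest when it enters the pipeline, then re-verify it before each training run.

```python
# Illustrative sketch of dataset integrity checking (hypothetical workflow):
# hash the dataset at ingestion, store the digest, and re-check it later.
import hashlib
import hmac

def dataset_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: str, expected_digest: str) -> bool:
    """Constant-time comparison against the digest recorded at ingestion."""
    return hmac.compare_digest(dataset_digest(path), expected_digest)
```

A digital signature over the digest, which the CSI recommends for dataset updates, additionally proves who produced the data, not merely that it is unchanged.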

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.


Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.


Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudity’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.


Meta hires top AI talent from Google and Sesame

Meta is assembling a new elite AI research team aimed at developing artificial general intelligence (AGI), luring top talent from rivals including Google and AI voice startup Sesame.

Among the high-profile recruits is Jack Rae, a principal researcher from Google DeepMind, and Johan Schalkwyk, a machine learning lead from Sesame.

Meta is also close to finalising a multibillion-dollar investment in Scale AI, a data-labelling startup led by CEO Alexandr Wang, who is also expected to join the new initiative.

The new group, referred to internally as the ‘superintelligence’ team, is central to CEO Mark Zuckerberg’s plan to close the gap with competitors like Google and OpenAI.

Following disappointment over Meta’s recent AI model, Llama 4, Zuckerberg hopes the newly acquired expertise will help improve future models and expand AI capabilities in areas like voice and personalisation.

Zuckerberg has taken a hands-on approach, personally recruiting engineers and researchers, sometimes meeting with them at his homes in California. Meta is reportedly offering compensation packages worth tens of millions of dollars, including equity, to attract leading AI talent.

The company aims to hire around 50 people for the team and is also seeking a chief scientist to help lead the effort.

The broader strategy involves investing heavily in data, chips, and human expertise — three pillars of advanced AI development. By partnering with Scale AI and recruiting high-profile researchers, Meta is trying to strengthen its position in the AI race.

Meanwhile, rivals like Google are reinforcing their defences, with Koray Kavukcuoglu named as chief AI architect in a new senior leadership role to ensure DeepMind’s technologies are more tightly integrated into Google’s products.
