AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan tightens rules on chip shipments to China

Taiwan has officially banned the export of chips and chiplets to China’s Huawei and SMIC, joining the US in tightening restrictions on advanced semiconductor transfers.

The decision follows reports that TSMC, the world’s largest contract chipmaker, was misled into unknowingly supplying chiplets used in Huawei’s Ascend 910B AI accelerator. The US Commerce Department had reportedly considered a fine of over $1 billion against TSMC for that incident.

Taiwan’s new rules aim to prevent further breaches by requiring export permits for any transactions with Huawei or SMIC.

The distinction between chips and chiplets is key to the case. Traditional chips are built as single-die monoliths using the same process node, while chiplets are modular and can combine various specialised components, such as CPU or AI cores.

Huawei allegedly used shell companies to acquire chiplets from TSMC, bypassing existing US restrictions. If TSMC had known the true customer, it likely would have withheld the order. Taiwan’s new export controls are designed to ensure stricter oversight of future transactions and prevent repeat deceptions.

The broader geopolitical stakes are clear. Taiwan views the transfer of advanced chips to China as a national security threat, given Beijing’s ambitions to reunify with Taiwan and the potential militarisation of high-end semiconductors.

With Huawei claiming its processors are nearly on par with Western chips—though analysts argue they lag two to three generations behind—the export ban could further isolate China’s chipmakers.

Speculation persists that Taiwan’s move was partly influenced by negotiations with the US to avoid the proposed fine on TSMC, bringing both countries into closer alignment on chip sanctions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% of those surveyed rely on outdated, unintegrated systems, significantly above the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams now rank it as a priority, falling from 33% in 2024 to 24% in 2025, even as reported data breaches rose sharply from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled ‘AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems’, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.
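To make the integrity recommendation concrete, the sketch below shows one common way to verify a training file against a digest published by the data provider before it enters a pipeline. The file name and expected digest are hypothetical; the CSI does not prescribe a specific tool.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest published by the data provider alongside the dataset (hypothetical).
EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of("train_data.parquet") != EXPECTED:
    raise RuntimeError("Integrity check failed; refusing to use this dataset")
```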

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
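The digital-signature measure could look like the following sketch, using Ed25519 from the widely used Python cryptography package. Key management is simplified for illustration; in practice the publisher’s private key would be long-lived and protected, with only the public key distributed to consumers.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: sign the new dataset revision.
private_key = Ed25519PrivateKey.generate()  # generated inline for brevity only
public_key = private_key.public_key()

dataset_update = b"serialized dataset revision bytes (placeholder)"
signature = private_key.sign(dataset_update)

# Consumer side: verify the update against the publisher's public key
# before it reaches any training or inference system.
try:
    public_key.verify(signature, dataset_update)
    print("Signature valid: update accepted")
except InvalidSignature:
    print("Signature invalid: update rejected")
```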

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.
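For a rough sense of what that tampering looks like from the victim’s side, the Windows-only sketch below reads the system proxy settings this class of malware rewrites, so an unexpected proxy can be spotted. It is an illustration, not a detection tool, and the registry path is the standard Internet Settings location rather than anything specific to BrowserVenom.

```python
import winreg  # Windows-only standard library module

# System-wide proxy settings commonly rewritten by browser-hijacking malware.
SETTINGS_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, SETTINGS_KEY) as key:
    try:
        proxy_enabled, _ = winreg.QueryValueEx(key, "ProxyEnable")
        proxy_server, _ = winreg.QueryValueEx(key, "ProxyServer")
    except FileNotFoundError:
        proxy_enabled, proxy_server = 0, ""

if proxy_enabled and proxy_server:
    print(f"System proxy is set to {proxy_server!r}. "
          "If you did not configure this yourself, investigate before browsing.")
else:
    print("No system-wide proxy is configured.")
```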

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of abusing Meta’s platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after takedowns.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta hires top AI talent from Google and Sesame

Meta is assembling a new elite AI research team aimed at developing artificial general intelligence (AGI), luring top talent from rivals including Google and AI voice startup Sesame.

Among the high-profile recruits are Jack Rae, a principal researcher from Google DeepMind, and Johan Schalkwyk, a machine learning lead from Sesame.

Meta is also close to finalising a multibillion-dollar investment in Scale AI, a data-labelling startup led by CEO Alexandr Wang, who is also expected to join the new initiative.

The new group, referred to internally as the ‘superintelligence’ team, is central to CEO Mark Zuckerberg’s plan to close the gap with competitors like Google and OpenAI.

Following disappointment over Meta’s recent AI model, Llama 4, Zuckerberg hopes the newly acquired expertise will help improve future models and expand AI capabilities in areas like voice and personalisation.

Zuckerberg has taken a hands-on approach, personally recruiting engineers and researchers, sometimes meeting with them at his homes in California. Meta is reportedly offering compensation packages worth tens of millions of dollars, including equity, to attract leading AI talent.

The company aims to hire around 50 people for the team and is also seeking a chief scientist to help lead the effort.

The broader strategy involves investing heavily in data, chips, and human expertise — three pillars of advanced AI development. By partnering with Scale AI and recruiting high-profile researchers, Meta is trying to strengthen its position in the AI race.

Meanwhile, rivals like Google are reinforcing their defences, with Koray Kavukcuoglu named as chief AI architect in a new senior leadership role to ensure DeepMind’s technologies are more tightly integrated into Google’s products.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Turing Institute urges stronger AI research security

The Alan Turing Institute has warned that urgent action is needed to protect the UK’s AI research from espionage, intellectual property theft and risky international collaborations.

Its Centre for Emerging Technology and Security (CETaS) has published a report calling for a culture shift across academia to better recognise and mitigate these risks.

The report highlights inconsistencies in how security risks are understood within universities and a lack of incentives for researchers to follow government guidelines. Sensitive data, the dual-use potential of AI, and the risk of reverse engineering make the field particularly vulnerable to foreign interference.

Lead author Megan Hughes stressed the need for a coordinated response, urging government and academia to find the right balance between academic freedom and security.

The report outlines 13 recommendations, including expanding support for academic due diligence and issuing clearer guidance on high-risk international partnerships.

Further proposals call for compulsory research security training, better threat communication from national agencies, and standardised risk assessments before publishing AI research.

The aim is to build a more resilient research ecosystem as global interest in UK-led AI innovation continues to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta launches AI to teach machines physical reasoning

Meta Platforms has unveiled V-JEPA 2, an open-source AI model designed to help machines understand and interact with the physical world more like humans do.

The technology allows AI agents, including delivery robots and autonomous vehicles, to observe object movement and predict how those objects may behave in response to actions.

The company explained that just as people intuitively understand that a ball tossed into the air will fall due to gravity, AI systems using V-JEPA 2 gain a similar ability to reason about cause and effect in the real world.

Trained using video data, the model recognises patterns in how humans and objects move and interact, helping machines learn to reach, grasp, and reposition items more naturally.
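Meta has not detailed V-JEPA 2’s architecture here, but the underlying JEPA idea, predicting compact representations of future video frames rather than raw pixels, can be sketched in a few lines of PyTorch. Every module name and size below is a hypothetical simplification, not Meta’s code.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps a video frame to a compact embedding (toy architecture)."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, frame):
        return self.net(frame)

class LatentPredictor(nn.Module):
    """Predicts the embedding of a future frame from the current one."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, z):
        return self.net(z)

encoder, predictor = FrameEncoder(), LatentPredictor()
frame_now = torch.randn(8, 3, 64, 64)   # batch of current frames (dummy data)
frame_next = torch.randn(8, 3, 64, 64)  # the frames that actually followed

z_now = encoder(frame_now)
with torch.no_grad():                   # hold target embeddings fixed
    z_next = encoder(frame_next)

# Training pushes the predictor to anticipate what happens next in latent
# space, the 'cause and effect' intuition described above.
loss = nn.functional.mse_loss(predictor(z_now), z_next)
```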

Meta described the tool as a step forward in building AI that can think ahead, plan actions and respond intelligently to dynamic environments. In lab tests, robots powered by V-JEPA 2 performed simple tasks that relied on spatial awareness and object handling.

The company, led by CEO Mark Zuckerberg, is ramping up its AI initiatives to compete with rivals like Microsoft, Google, and OpenAI. By improving machine reasoning through world models such as V-JEPA 2, Meta aims to accelerate its progress toward more advanced AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and TikTok contest the EU’s compliance charges

Meta and TikTok have taken their fight against an EU supervisory fee to Europe’s second-highest court, arguing that the charges are disproportionate and based on flawed calculations.

The fee, introduced under the Digital Services Act (DSA), requires major online platforms to pay 0.05% of their annual global net income to cover the European Commission’s oversight costs.
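As a back-of-the-envelope illustration of the levy as described (the income figure below is invented, not Meta’s or TikTok’s):

```python
# DSA supervisory fee as described above: 0.05% of annual worldwide net income.
annual_net_income_eur = 10_000_000_000         # hypothetical EUR 10 billion
fee_eur = annual_net_income_eur * 0.0005       # 0.05% expressed as a fraction
print(f"Supervisory fee: EUR {fee_eur:,.0f}")  # EUR 5,000,000
```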

Meta questioned the Commission’s methodology, claiming the levy was based on the revenue of the entire group instead of that of the specific EU-based subsidiary.

The company’s lawyer told judges it still lacked clarity on how the fee was calculated, describing the process as opaque and inconsistent with the spirit of the law.

TikTok also criticised the charge, alleging inaccurate and discriminatory data inflated its payment.

Its legal team argued that user numbers were double-counted when people switched between devices, and that the Commission had wrongly calculated fees based on group profits rather than platform-specific earnings.

The Commission defended its approach, saying group resources should bear the cost when consolidated accounts are used. A ruling is expected from the General Court sometime next year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!