Stethoscope with AI identifies heart issues in seconds

A new stethoscope powered by AI could enable doctors to identify three serious heart conditions in just seconds, according to UK researchers.

The device replaces the traditional chest piece with a small sensor that records both electrical signals from the heart and the sound of blood flow, which are then analysed in the cloud by AI trained on large datasets.

The AI tool has shown strong results in trials spanning more than 200 GP practices: patients examined with the stethoscope were more than twice as likely to be diagnosed with heart failure within 12 months as those assessed through usual care.

Patients examined with the device were also 3.45 times more likely to be diagnosed with atrial fibrillation and almost twice as likely to be found to have heart valve disease.

Researchers from Imperial College London and Imperial College Healthcare NHS Trust said the technology could help doctors provide treatment at an earlier stage instead of waiting until patients present in hospital with advanced symptoms.

The findings of the trial, known as Tricorder, will be presented at the European Society of Cardiology Congress in Madrid.

The project, supported by the National Institute for Health and Care Research, is now preparing for further rollouts in Wales, south London and Sussex. Experts described the innovation as a significant step in updating a medical tool that has remained largely unchanged for over 200 years.

How local LLMs are changing AI access

As AI adoption rises, more users explore running large language models (LLMs) locally instead of relying on cloud providers.

Local deployment gives individuals control over data, reduces costs, and avoids limits imposed by AI-as-a-service companies. Advances in both software tooling and consumer hardware now make it practical to experiment with AI on one’s own machine.

Concerns over privacy and data sovereignty are driving interest. Many cloud AI services retain user data for years, even when privacy assurances are offered.

By running models locally, companies and hobbyists can ensure compliance with GDPR and maintain control over sensitive information while leveraging high-performance AI tools.

Hardware considerations like GPU memory and processing power are central to local LLM performance. Quantisation techniques allow models to run efficiently with reduced precision, enabling use on consumer-grade machines or enterprise hardware.
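
As a rough illustration of why reduced precision matters, the back-of-the-envelope sketch below (an illustration, not figures from the article) estimates the weight-memory footprint of a 7-billion-parameter model at different precisions:

```python
def approx_model_size_gb(params_billion: float, bits_per_weight: float) -> float:
    # bytes = parameters * bits / 8; ignores the KV cache and runtime overhead
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{approx_model_size_gb(7, bits):.1f} GB")
# 16-bit: ~14 GB, 8-bit: ~7 GB, 4-bit: ~3.5 GB -- 4-bit quantisation is
# what brings a 7B model within reach of a consumer GPU or laptop.
```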

Software frameworks like llama.cpp, Jan, and LM Studio simplify deployment, making local AI accessible to non-engineers and professionals across industries.
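
For example, with llama.cpp’s Python bindings a quantised model can be loaded and queried in a few lines; the model path below is a hypothetical example, and any locally downloaded GGUF file would do:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load a locally stored, 4-bit-quantised GGUF model (hypothetical path)
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=4096)

# The completion runs entirely on local hardware; no data leaves the machine
out = llm("Explain data sovereignty in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```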

Local models are suitable for personalised tasks, learning, coding assistance, and experimentation, although cloud models remain stronger for large-scale enterprise applications.

As tools and model quality improve, running AI on personal devices may become a standard alternative, giving users more control over cost, privacy, and performance.

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Estonia’s Vocal Image uses AI to boost communication skills

Estonia-based startup Vocal Image is deploying AI to help people improve their vocal and communication skills. Its app features an interactive library of tongue twisters, breathing exercises and suggestions for gestures, all enhanced with automated feedback and personalised coaching tips.

Led by CEO Nick Lahoika, the company has scaled rapidly, achieving upwards of 4 million downloads and serving approximately 160,000 active users.

Vocal Image positions itself as an affordable, mobile-first alternative to traditional one-on-one voice training, rooted in Lahoika’s own journey overcoming speaking anxiety.

The app’s design enables users to practise at home with privacy and convenience, offering daily, bite-sized, AI-informed lessons that assess strengths, suggest improvements and build confidence without the need for human instructors.

Fragmenting digital identities with aliases offers added security

People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.

Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.
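
A minimal sketch of the idea, assuming a mail provider that supports sub-addressing (the ‘+tag’ form many services offer; dedicated alias services work similarly but hide the base address entirely):

```python
import hashlib

def alias_for(service: str, user: str = "me", domain: str = "example.com") -> str:
    # Derive a short, stable tag from the service name so every signup
    # gets its own address while mail still reaches a single inbox.
    tag = hashlib.sha256(service.lower().encode()).hexdigest()[:8]
    return f"{user}+{service.lower()}-{tag}@{domain}"

print(alias_for("newsletter"))  # -> me+newsletter-<8-hex-tag>@example.com
print(alias_for("webshop"))     # a different address for every service
```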

Each alias also acts as a tracer: if one is compromised or starts receiving spam, it reveals which service leaked it and can simply be disabled, cutting off the problem at its source.

Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.

Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.

China sets 10-year targets for mass AI adoption

China has set its most ambitious AI adoption targets yet, aiming to embed the technology across industries, governance, and daily life within the next decade.

According to a new State Council directive, AI use should reach 70% of the population by 2027 and 90% by 2030, with a complete shift to what it calls an ‘intelligent society’ by 2035.

The plan would mean nearly one billion Chinese citizens regularly using AI-powered services or devices within two years, a timeline observers have compared to the rapid rise of smartphones.
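
A quick sanity check of that figure, assuming a population of roughly 1.4 billion (an assumed number for illustration, not one from the directive):

```python
population = 1.4e9  # assumed population figure, for illustration only
print(f"70% by 2027: ~{0.70 * population / 1e9:.2f} billion people")  # ~0.98
print(f"90% by 2030: ~{0.90 * population / 1e9:.2f} billion people")  # ~1.26
```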

Officials acknowledge risks such as opaque models, hallucinations and algorithmic discrimination, and the policy calls for frameworks to govern ‘natural persons, digital persons, and intelligent robots’.

Anthropic updates Claude’s policy with new data training choices

US AI startup Anthropic has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Attackers bypass email security by abusing Microsoft Teams defaults

A phishing campaign exploits Microsoft Teams’ external communication features, with attackers posing as IT helpdesk staff to gain access to screen sharing and remote control. The method sidesteps traditional email security controls by using Teams’ default settings.

The attacks exploit Microsoft 365’s default external collaboration feature, which allows unauthenticated users to contact organisations. Axon Team reports that attackers create malicious Entra ID tenants with .onmicrosoft.com domains or use compromised accounts to initiate chats.

Although Microsoft issues warnings for suspicious messages, attackers bypass these by initiating external voice calls, which generate no alerts. Once trust is established, they request screen sharing, enabling them to monitor victims’ activity and guide them toward malicious actions.

The highest risk arises where organisations enable external remote-control options, giving attackers potential full access to workstations directly through Teams. This removes any need for traditional remote-access tools like Quick Assist or AnyDesk, creating a severe security exposure.

Defenders are advised to monitor Microsoft 365 audit logs for markers such as ChatCreated, MessageSent, and UserAccepted events, as well as TeamsImpersonationDetected alerts. Restricting external communication and strengthening user awareness remain key to mitigating this threat.
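
A minimal sketch of that kind of monitoring, assuming a line-delimited JSON export of audit records (the export format and field names below are assumptions, not a documented schema):

```python
import json

# Event names the article flags as worth watching in Microsoft 365 audit logs
WATCHED_OPS = {"ChatCreated", "MessageSent", "UserAccepted", "TeamsImpersonationDetected"}

def flag_teams_events(path: str) -> None:
    """Print audit records that match the watched operations. A crude
    heuristic: a real deployment would also check whether the initiating
    tenant is external to the organisation."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("Operation") in WATCHED_OPS:
                print(rec.get("CreationTime"), rec.get("Operation"), rec.get("UserId"))

# flag_teams_events("teams_audit_export.jsonl")
```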

FBI says China’s Salt Typhoon breached millions of Americans’ data

China’s Salt Typhoon cyberspies have stolen data from millions of Americans through a years-long intrusion into telecommunications networks, according to senior FBI officials. The campaign represents one of the most significant espionage breaches uncovered in the United States.

The Beijing-backed operation began in 2019 and remained hidden until last year. Authorities say at least 80 countries were affected, a scope far beyond the nine American telcos initially identified, with around 200 US organisations compromised.

Targets included Verizon, AT&T, and over 100 current and former administration officials. Officials say the intrusions enabled Chinese operatives to geolocate mobile users, monitor internet traffic, and sometimes record phone calls.

Three Chinese firms, Sichuan Juxinhe, Beijing Huanyu Tianqiong, and Sichuan Zhixin Ruijie, have been tied to Salt Typhoon. US officials say the firms support China’s security services and military.

The FBI warns that the scale of indiscriminate targeting falls outside traditional espionage norms. Officials stress the need for stronger cybersecurity measures as China, Russia, Iran, and North Korea continue to advance their cyber operations against critical infrastructure and private networks.

Anthropic reports misuse of its AI tools in cyber incidents

AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.

The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks, but that it was able to disrupt the activity and notify the authorities. Anthropic said it is continuing to improve its monitoring and detection systems.

In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.

Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.

Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.

Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.

Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.
