Beijing seeks to curb excess AI investment while sustaining growth

China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.

The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to prevent duplication and overlap. Officials in China emphasised the importance of orderly flows of talent, capital, and resources.

The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to prevent overcapacity problems, such as those seen in electric vehicles, which have fueled deflationary pressures in other industries.

While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.

At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Stethoscope with AI identifies heart issues in seconds

A new stethoscope powered by AI could enable doctors to identify three serious heart conditions in just seconds, according to UK researchers.

The device replaces the traditional chest piece with a small sensor that records both electrical signals from the heart and the sound of blood flow, which are then analysed in the cloud by AI trained on large datasets.

The AI tool has shown strong results in trials across more than 200 GP practices: patients examined with the stethoscope were more than twice as likely to be diagnosed with heart failure within 12 months as those assessed through usual care.

The tool was also 3.45 times more likely to detect atrial fibrillation and almost twice as likely to identify heart valve disease.

Researchers from Imperial College London and Imperial College Healthcare NHS Trust said the technology could help doctors provide treatment at an earlier stage instead of waiting until patients present in hospital with advanced symptoms.

The findings, from the study known as Tricorder, will be presented at the European Society of Cardiology Congress in Madrid.

The project, supported by the National Institute for Health and Care Research, is now preparing for further rollouts in Wales, south London and Sussex. Experts described the innovation as a significant step in updating a medical tool that has remained largely unchanged for over 200 years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How local LLMs are changing AI access

As AI adoption rises, more users explore running large language models (LLMs) locally instead of relying on cloud providers.

Local deployment gives individuals control over data, reduces costs, and avoids limits imposed by AI-as-a-service companies. Advances in software tooling and consumer hardware now make it practical to experiment with AI on one’s own machine.

Concerns over privacy and data sovereignty are driving interest. Many cloud AI services retain user data for years, even when privacy assurances are offered.

By running models locally, companies and hobbyists can ensure compliance with GDPR and maintain control over sensitive information while leveraging high-performance AI tools.

Hardware considerations like GPU memory and processing power are central to local LLM performance. Quantisation techniques allow models to run efficiently with reduced precision, enabling use on consumer-grade machines or enterprise hardware.
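The core idea behind quantisation can be illustrated in a few lines. The sketch below (an illustrative example, not the scheme any particular framework uses) applies symmetric int8 quantisation to a float32 weight matrix, cutting its memory footprint fourfold at the cost of a small reconstruction error:

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric int8 quantisation: map float32 weights into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one scale factor for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)
q, scale = quantise_int8(w)

print(w.nbytes // q.nbytes)  # 4x smaller in memory
print(float(np.abs(w - dequantise(q, scale)).max()) <= scale)  # error bounded by the scale
```

Production systems typically quantise per-channel or per-block and to even lower bit widths (4-bit and below), but the trade-off is the same: less memory per weight in exchange for bounded rounding error.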

Software frameworks like llama.cpp, Jan, and LM Studio simplify deployment, making local AI accessible to non-engineers and professionals across industries.

Local models are suitable for personalised tasks, learning, coding assistance, and experimentation, although cloud models remain stronger for large-scale enterprise applications.

As tools and model quality improve, running AI on personal devices may become a standard alternative, giving users more control over cost, privacy, and performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Schneider joins SK Telecom on new AI data centre project in Ulsan

SK Telecom has expanded its partnership with Schneider Electric to develop an AI Data Centre (AIDC) in Ulsan.

Under the deal, Schneider Electric will supply mechanical, electrical and plumbing equipment, such as switchgear, transformers, automated control systems and Uninterruptible Power Supply units.

The agreement builds on a partnership announced at Mobile World Congress 2025 and includes using Schneider’s Electrical Transient Analyser Program within SK Telecom’s data centre management system.

It will allow operations to be optimised through a digital twin model instead of relying only on traditional monitoring tools.

Both companies have also agreed on prefabricated solutions to shorten construction times, reference designs for new facilities, and joint efforts to grow the Energy-as-a-Service business.

A Memorandum of Understanding extends the partnership to other SK Group affiliates, combining battery technologies with Uninterruptible Power Supply and Energy Storage Systems.

Executives said the collaboration would help set new standards for AI data centres and create synergies across the SK Group. It is also expected to support SK Telecom’s broader AI strategy while contributing to sustainable and efficient infrastructure development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce customers hit by OAuth token breach

Security researchers have warned Salesforce customers after hackers stole data by exploiting OAuth access tokens linked to the Salesloft Drift integration, highlighting critical cybersecurity flaws.

Google’s Threat Intelligence Group (GTIG) reported that the threat actor UNC6395 used the tokens to infiltrate hundreds of Salesforce environments, exporting large volumes of sensitive information. Stolen data included AWS keys, passwords, and Snowflake tokens.

Experts warn that compromised SaaS integrations pose a central blind spot, since attackers inherit the same permissions as trusted apps and can often bypass multifactor authentication. Investigations are ongoing to determine whether connected systems, such as AWS or VPNs, were also breached.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fragmenting digital identities with aliases offers added security

People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.

Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.
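One simple way to implement this idea is plus-addressing, a form of aliasing many mail providers support out of the box. The sketch below is purely illustrative (the address, service names, and secret are invented): it derives a distinct, hard-to-guess alias per service, all forwarding to the same inbox.

```python
import hashlib

def make_alias(base: str, service: str, secret: str = "example-secret") -> str:
    """Derive a per-service plus-address alias.

    The short keyed hash makes the tag unguessable, so two aliases for the
    same person cannot be trivially linked back together by a third party.
    """
    local, domain = base.split("@")
    tag = hashlib.sha256(f"{secret}:{service}".encode()).hexdigest()[:8]
    return f"{local}+{service}-{tag}@{domain}"

# Each service sees a different address, but mail lands in one inbox.
print(make_alias("alice@example.com", "shop"))
print(make_alias("alice@example.com", "newsletter"))
```

Dedicated alias services go further than plus-addressing, since the base mailbox is not visible in the alias itself, but the tracking-disruption principle is the same.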

Each alias acts as a tripwire: if one is compromised or starts receiving spam, it can simply be disabled, cutting off the problem at its source.

Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.

Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China sets 10-year targets for mass AI adoption

China has set its most ambitious AI adoption targets yet, aiming to embed the technology across industries, governance, and daily life within the next decade.

According to a new State Council directive, AI use should reach 70% of the population by 2027 and 90% by 2030, with a complete shift to what it calls an ‘intelligent society’ by 2035.

The plan would mean nearly one billion Chinese citizens regularly using AI-powered services or devices within two years, a timeline officials compare to the rapid rise of smartphones.

Although officials acknowledge risks such as opaque models, hallucinations and algorithmic discrimination, the policy calls for frameworks to govern ‘natural persons, digital persons, and intelligent robots’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic updates Claude’s policy with new data training choices

The US AI startup has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Espionage fears rise as TAG-144 evolves techniques

A threat group known as TAG-144 has stepped up cyberattacks on South American government agencies, researchers have warned.

The group, also called Blind Eagle and APT-C-36, has been active since 2018 and is linked to espionage and extortion campaigns. Recent activity shows a sharp rise in spear-phishing, often using spoofed government email accounts to deliver remote access trojans.

Analysts say the group has shifted towards more advanced methods, embedding malware inside image files through steganography. Payloads are then extracted in memory, allowing attackers to evade antivirus software and maintain access to compromised systems.

Colombian government institutions have been hit hardest, with stolen credentials and sensitive data raising concerns over both financial and national security risks. Security experts warn that TAG-144’s evolving tactics blur the line between organised crime and state-backed espionage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Attackers bypass email security by abusing Microsoft Teams defaults

A phishing campaign exploits Microsoft Teams’ external communication features, with attackers posing as IT helpdesk staff to gain access to screen sharing and remote control. The method sidesteps traditional email security controls by using Teams’ default settings.

The attacks exploit Microsoft 365’s default external collaboration feature, which allows unauthenticated users to contact organisations. Axon Team reports that attackers create malicious Entra ID tenants with .onmicrosoft.com domains or use compromised accounts to initiate chats.

Although Microsoft issues warnings for suspicious messages, attackers bypass these by initiating external voice calls, which generate no alerts. Once trust is established, they request screen sharing, enabling them to monitor victims’ activity and guide them toward malicious actions.

The highest risk arises where organisations enable external remote-control options, giving attackers potential full access to workstations directly through Teams. That removes the attackers’ need for traditional remote tools like QuickAssist or AnyDesk, creating a severe security exposure.

Defenders are advised to monitor Microsoft 365 audit logs for markers such as ChatCreated, MessageSent, and UserAccepted events, as well as TeamsImpersonationDetected alerts. Restricting external communication and strengthening user awareness remain key to mitigating this threat.
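As a rough illustration of that advice, the sketch below filters a batch of audit records for chat operations initiated from outside the organisation’s domain. The sample data is invented and the field names (`Operation`, `UserId`) only loosely follow the Microsoft 365 unified audit log schema; a real detection would query the audit API and use tenant-specific logic.

```python
# Hypothetical sample of unified audit log records (data is invented).
SAMPLE_LOG = [
    {"Operation": "ChatCreated",
     "UserId": "helpdesk@malicious-tenant.onmicrosoft.com"},
    {"Operation": "MessageSent",
     "UserId": "colleague@contoso.com"},
]

# Chat-related operations worth reviewing when initiated externally.
SUSPICIOUS_OPS = {"ChatCreated", "MessageSent", "UserAccepted"}

def flag_external_chat_events(records, internal_domain: str):
    """Return chat-operation records whose initiating user is outside
    the organisation's own domain."""
    return [
        rec for rec in records
        if rec.get("Operation") in SUSPICIOUS_OPS
        and not rec.get("UserId", "").endswith("@" + internal_domain)
    ]

for hit in flag_external_chat_events(SAMPLE_LOG, "contoso.com"):
    print(hit["Operation"], hit["UserId"])
```

Flagged events would then be triaged alongside any TeamsImpersonationDetected alerts rather than blocked automatically, since some external chats are legitimate.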

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!