AI data centres are under growing pressure as computing demands exceed the capacity of single facilities. Traditional Ethernet networks suffer from high latency and inconsistent transfer speeds, forcing companies either to build ever-larger centres or to accept degraded performance.
NVIDIA aims to tackle these challenges with its new Spectrum-XGS Ethernet technology, introducing ‘scale-across’ capabilities. The system links multiple AI data centres using distance-adaptive algorithms, congestion control, latency management, and end-to-end telemetry.
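NVIDIA has not published how its distance-adaptive algorithms work, but the underlying problem they address is easy to illustrate: the further apart two sites are, the more data must be kept in flight to keep a link busy, so congestion-control parameters must scale with round-trip time. A toy sketch of that bandwidth-delay-product calculation (all figures are illustrative, not NVIDIA's):

```python
def pacing_window(bandwidth_bps: float, rtt_s: float) -> int:
    """Bytes that must be in flight to keep a link fully utilised:
    the bandwidth-delay product. Higher RTT (longer distance) means
    a larger window, which is one reason inter-site links need
    distance-aware congestion tuning."""
    return int(bandwidth_bps / 8 * rtt_s)

# A 400 Gb/s link: ~0.1 ms RTT inside a hall vs ~10 ms between cities.
print(pacing_window(400e9, 0.0001))  # → 5000000 (5 MB in flight)
print(pacing_window(400e9, 0.01))    # → 500000000 (500 MB in flight)
```

The hundred-fold jump in buffering requirements hints at why naive Ethernet tuning struggles across metro distances.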
NVIDIA claims the improvements can nearly double GPU communication performance, supporting what it calls ‘giga-scale AI super-factories.’
CoreWeave plans to be among the first adopters, connecting its facilities into a single distributed supercomputer. The deployment will test if Spectrum-XGS can deliver fast, reliable AI across multiple sites without needing massive single-location centres.
While the technology promises greater efficiency and distributed computing power, its effectiveness depends on real-world infrastructure, regulatory compliance, and data synchronisation.
If successful, it could reshape AI data centre design, enabling faster services and potentially lower operational costs across industries.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Indonesia will deploy an AI-driven maritime surveillance network to combat piracy and other illegal activities across its vast waters.
The Indonesian Sea and Coast Guard Unit has signed a 10-year agreement with UK-based SRT Marine Systems for its SRT-MDA platform. The system, to be known locally as the National Maritime Security System, will integrate terrestrial, mobile and satellite surveillance with AI-powered analytics.
Fifty command posts will be digitised under the plan, enabling authorities to detect, track and predict activities from piracy to environmental violations. The deal, valued at €157.9m and backed by UK Export Finance, has been strongly supported by both governments.
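The "detect, track and predict" pipeline described above typically starts from AIS transponder data, and one well-known piracy indicator is a vessel going "dark" (a long gap between position reports). A minimal sketch of that check, assuming a sorted list of ping timestamps per vessel (the threshold and data shape are hypothetical, not SRT's design):

```python
def dark_gaps(pings: list[int], max_gap: int = 3600) -> list[tuple[int, int]]:
    """Return (start, end) timestamp pairs where a vessel's AIS
    transmissions went silent for longer than max_gap seconds.

    pings: sorted unix timestamps of position reports for one vessel.
    """
    return [(a, b) for a, b in zip(pings, pings[1:]) if b - a > max_gap]

# Pings at t=0 and t=1000, then silence until t=9000: one dark gap.
print(dark_gaps([0, 1000, 9000]))  # → [(1000, 9000)]
```

A production system would correlate such gaps with location, vessel type and nearby traffic before raising an alert.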
Piracy remains a pressing issue in Indonesian waters, particularly in the Singapore Strait, where opportunistic thefts against slow-moving ships quadrupled in the first half of 2025 compared with last year. Analysts warn that weak deterrence and economic hardship are fuelling the rise in incidents.
China’s Salt Typhoon cyberspies have stolen data from millions of Americans through a years-long intrusion into telecommunications networks, according to senior FBI officials. The campaign represents one of the most significant espionage breaches uncovered in the United States.
The Beijing-backed operation began in 2019 and remained hidden until last year. Authorities say at least 80 countries were affected, far beyond the nine American telcos initially identified, with around 200 US organisations compromised.
Targets included Verizon, AT&T, and over 100 current and former administration officials. Officials say the intrusions enabled Chinese operatives to geolocate mobile users, monitor internet traffic, and sometimes record phone calls.
Three Chinese firms, Sichuan Juxinhe, Beijing Huanyu Tianqiong, and Sichuan Zhixin Ruijie, have been tied to Salt Typhoon. US officials say they support China’s security services and military.
The FBI warns that the scale of indiscriminate targeting falls outside traditional espionage norms. Officials stress the need for stronger cybersecurity measures as China, Russia, Iran, and North Korea continue to advance their cyber operations against critical infrastructure and private networks.
AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.
The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks, but added that it was able to disrupt the activity and notify authorities. Anthropic said it is continuing to improve its monitoring and detection systems.
In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.
Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.
Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.
Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.
Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.
Law enforcement agencies increasingly leverage AI across critical functions, from predictive policing, surveillance and facial recognition to automated report writing and forensic analysis, to expand their capacity and improve case outcomes.
In predictive policing, AI models analyse historical crime patterns, demographics and environmental factors to forecast crime hotspots. This enables pre-emptive deployment of officers and more efficient resource allocation.
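At its simplest, hotspot forecasting ranks geographic grid cells by historical incident density. A toy sketch of that idea (grid resolution, coordinates and function names are hypothetical; real systems add time decay, demographic and environmental features):

```python
from collections import Counter

def forecast_hotspots(incidents, cell_size=0.1, top_n=3):
    """Rank grid cells by historical incident count.

    incidents: list of (lat, lon) tuples from past reports.
    cell_size: grid resolution in degrees (an illustrative choice).
    Returns the top_n busiest cells as (lat_index, lon_index) keys.
    """
    counts = Counter(
        (int(lat / cell_size), int(lon / cell_size)) for lat, lon in incidents
    )
    return [cell for cell, _ in counts.most_common(top_n)]

# Toy data: three reports cluster in one cell, one lies elsewhere.
history = [(57.63, 18.29), (57.636, 18.293), (57.639, 18.296), (40.71, -74.0)]
print(forecast_hotspots(history, top_n=1))  # → [(576, 182)]
```

Note that this count-based approach inherits whatever bias exists in the historical reports, which is precisely the algorithmic-bias risk discussed below.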
Facial recognition technology matches images from CCTV, body cameras or telescopic data against criminal databases. Meanwhile, NLP supports faster incident reporting, body-cam transcriptions and keyword scanning of digital evidence.
Despite clear benefits, risks persist. Algorithmic bias may unfairly target specific groups. Privacy concerns grow where systems flag individuals without oversight.
Automated decisions also raise questions on accountability, the integrity of evidence, and the preservation of human judgement in justice.
Russia has been pushing for its state-backed messenger Max to be pre-installed on all smartphones sold in the country since September 2025. Chinese and South Korean manufacturers, including Samsung and Xiaomi, are reportedly preparing to comply, though official confirmation is still pending.
The Max platform, developed by VK (formerly Vkontakte), offers messaging, audio and video calls, file transfers, and payments. It is set to replace VK Messenger on the mandatory app list, signalling a shift away from foreign apps like Telegram and WhatsApp.
Integration may occur via software updates or prompts when inserting a Russian SIM card.
Concerns have arisen over potential surveillance, as Max, which is backed by the Russian government, collects sensitive personal data. Critics fear the platform may be used to monitor users, reflecting Moscow’s push to control encrypted communications.
The rollout reflects Russia’s broader push for digital sovereignty. While companies navigate compliance, the move highlights the increasing tension between state-backed applications and widely used foreign messaging services in Russia.
A ransomware group has destroyed data and backups in a Microsoft Azure environment after exfiltrating sensitive information, in what experts describe as a significant escalation in cloud-based attacks.
The threat actor, tracked as Storm-0501, gained complete control over a victim’s Azure domain by exploiting privileged accounts.
Microsoft researchers said the group used native Azure tools to copy data before systematically deleting resources to block recovery efforts.
Specifically, Storm-0501 used AzCopy to exfiltrate storage account contents before erasing cloud assets; immutable resources that could not be deleted were encrypted instead.
The group later contacted the victim via Microsoft Teams using a compromised account to issue ransom demands.
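One widely documented mitigation against this deletion pattern is enabling soft delete and management locks on Azure storage, so that erased blobs can be recovered and privileged API calls cannot drop the account outright. A configuration sketch using the Azure CLI (account, group and lock names are placeholders, and the 14-day retention is illustrative):

```shell
# Enable blob and container soft delete so deleted data can be
# undeleted within the retention window.
az storage account blob-service-properties update \
  --account-name mystorageacct \
  --resource-group my-rg \
  --enable-delete-retention true \
  --delete-retention-days 14 \
  --enable-container-delete-retention true \
  --container-delete-retention-days 14

# Add a management lock so even privileged accounts must first
# remove the lock before deleting the storage account.
az lock create --name no-delete --lock-type CanNotDelete \
  --resource-group my-rg \
  --resource mystorageacct \
  --resource-type Microsoft.Storage/storageAccounts
```

Soft delete would not have stopped the encryption of immutable resources seen here, but it raises the cost of the "delete everything to block recovery" step.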
The Region of Gotland in Sweden was notified that Miljödata, a Swedish software provider used for managing sick leave and other HR-related records, had been hit by a cyberattack. Later that day, it was confirmed that sensitive personal data may have been leaked, although it remains unclear whether Region Gotland’s data was affected.
Miljödata, which provides systems handling medical certificates, rehabilitation plans, and work-related injuries, immediately isolated its systems and reported the incident to the police. The Region of Gotland is one of several regions affected. Investigations are ongoing, and the region is closely monitoring the situation while following standard data protection procedures, according to HR Director Lotta Israelsson.
Swedish Minister for Civil Defence, Carl-Oskar Bohlin, confirmed that the full scope and consequences of the cyberattack remain unclear. Around 200 of Sweden’s 290 municipalities and 21 regions were reportedly affected, many of which use Miljödata systems to manage employee data such as medical certificates and rehabilitation plans.
Miljödata is working with external experts to investigate the breach and restore services. The government is closely monitoring the situation, with CERT-SE and the National Cybersecurity Centre providing support. A police investigation is underway. Bohlin emphasised the need for stronger cybersecurity and announced a forthcoming bill to tighten national cyber regulations.
Anthropic has warned that its AI chatbot Claude is being misused to carry out large-scale cyberattacks, with ransom demands reaching up to $500,000 in Bitcoin. Attackers used a technique dubbed ‘vibe hacking’, which lets low-skill individuals automate ransomware and generate customised extortion notes.
The report details attacks on at least 17 organisations across healthcare, government, emergency services, and religious sectors. Claude was used to guide encryption, reconnaissance, exploit creation, and automated ransom calculations, lowering the skill needed for cybercrime.
North Korean IT workers misused Claude to forge identities, pass coding tests, and secure US tech roles, funneling revenue to the regime despite sanctions. Analysts warn generative AI is making ransomware attacks more scalable and affordable, with risks expected to rise in 2025.
Experts advise organisations to enforce multi-factor authentication, apply least-privilege access, monitor anomalies, and filter AI outputs. Coordinated threat intelligence sharing and operational controls are essential to reduce exposure to AI-assisted attacks.
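Of the controls listed above, anomaly monitoring is the easiest to illustrate: even a crude scan for repeated authentication failures per account catches the noisiest abuse. A toy sketch (the log format, account names and threshold are hypothetical; production systems baseline per user and per source):

```python
from collections import defaultdict

def flag_anomalies(events, threshold=3):
    """Flag accounts whose failed-login count exceeds a threshold.

    events: list of (account, outcome) pairs, outcome "ok" or "fail".
    threshold: illustrative cut-off; real systems tune it per baseline.
    """
    fails = defaultdict(int)
    for account, outcome in events:
        if outcome == "fail":
            fails[account] += 1
    return sorted(a for a, n in fails.items() if n > threshold)

log = [("alice", "ok"), ("bob", "fail"), ("bob", "fail"),
       ("bob", "fail"), ("bob", "fail"), ("carol", "fail")]
print(flag_anomalies(log))  # → ['bob']
```

Combined with multi-factor authentication and least-privilege access, such detections give defenders time to respond before an AI-accelerated attack escalates.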
The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.
Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid self-harm instructions and redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise reliability, underscoring the need for stronger safeguards.
The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.