FBI says China’s Salt Typhoon breached millions of Americans’ data

China’s Salt Typhoon cyberspies have stolen data from millions of Americans through a years-long intrusion into telecommunications networks, according to senior FBI officials. The campaign represents one of the most significant espionage breaches uncovered in the United States.

The Beijing-backed operation began in 2019 and remained hidden until last year. Authorities say around 200 US organisations were compromised, far beyond the nine American telcos initially identified, and that the campaign's reach extends to at least 80 countries.

Targets included Verizon, AT&T, and over 100 current and former administration officials. Officials say the intrusions enabled Chinese operatives to geolocate mobile users, monitor internet traffic, and sometimes record phone calls.

Three Chinese firms have been tied to Salt Typhoon: Sichuan Juxinhe, Beijing Huanyu Tianqiong, and Sichuan Zhixin Ruijie. US officials say they support China’s security services and military.

The FBI warns that the scale of indiscriminate targeting falls outside traditional espionage norms. Officials stress the need for stronger cybersecurity measures as China, Russia, Iran, and North Korea continue to advance their cyber operations against critical infrastructure and private networks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Samsung and Chinese brands prepare Max rollout

Russia has been pushing for its state-backed messenger Max to be pre-installed on all smartphones sold in the country from September 2025. Chinese and South Korean manufacturers, including Samsung and Xiaomi, are reportedly preparing to comply, though official confirmation is still pending.

The Max platform, developed by VK (formerly VKontakte), offers messaging, audio and video calls, file transfers, and payments. It is set to replace VK Messenger on the mandatory app list, signalling a shift away from foreign apps like Telegram and WhatsApp.

Integration may occur via software updates or prompts when inserting a Russian SIM card.

Concerns have arisen over potential surveillance, as Max, backed by the Russian government, collects sensitive personal data. Critics fear the platform may be used to monitor users, reflecting Moscow’s push to control encrypted communications.

The rollout reflects Russia’s broader push for digital sovereignty. While companies navigate compliance, the move highlights the increasing tension between state-backed applications and widely used foreign messaging services in Russia.

Countries join stablecoin race to counter US dollar power

The GENIUS Act in the United States has given stablecoin issuers a clear legal framework, boosting the role of dollar-pegged tokens in the global economy. Their widespread use has strengthened demand for US dollars and Treasury bills, solidifying American financial dominance.

Other nations are now working on stablecoin projects to protect local currencies. China is developing a yuan-pegged stablecoin aimed at international trade, following the recent adoption of Hong Kong’s Stablecoins Bill.

Japan is also preparing to launch a yen-pegged token backed by government bills later this year, with Monex Group leading the initiative.

The European Union has accelerated its plans for a digital euro in response to the rise of USD-backed stablecoins. Reports suggest the project could be launched on Ethereum or Solana, a move that has sparked criticism from the crypto community over privacy and data control.

Despite several euro-pegged tokens already in circulation, their market share remains negligible compared to dollar-backed stablecoins.

Stablecoins are increasingly seen as tools for remittances and savings and for strategic influence in the global financial system. Other countries may struggle to rival USD-pegged coins, but the race to launch national stablecoins is underway.

Global agencies and the FBI issue a warning on Salt Typhoon operations

The FBI, US agencies, and international partners have issued a joint advisory on a cyber campaign called ‘Salt Typhoon.’

The operation is said to have affected more than 200 US companies, with related activity detected in 80 countries.

The advisory, co-released by the FBI, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the Department of Defense Cyber Crime Center, was also supported by agencies in the UK, Canada, Australia, Germany, Italy and Japan.

According to the statement, Salt Typhoon has focused on exploiting network infrastructure such as routers, virtual private networks and other edge devices.

The group has previously been linked to campaigns targeting US telecommunications networks in 2024, as well as activity involving a US National Guard network. The advisory names three Chinese companies allegedly providing products and services used in its operations.

Telecommunications, defence, transportation and hospitality organisations are advised to strengthen cybersecurity measures. Recommended actions include patching vulnerabilities, adopting zero-trust approaches and using the technical details included in the advisory.

Salt Typhoon, also known as Earth Estries and Ghost Emperor, has been observed since at least 2019 and is reported to maintain long-term access to compromised devices.

NVIDIA’s sales grow as the market questions AI momentum

Sales of AI chips by Nvidia rose strongly in its latest quarter, though growth was slower than in previous periods, raising questions about the sustainability of demand.

The company’s data centre division reported revenue of $41.1 billion between May and July, a 56% rise from a year earlier but slightly below analyst forecasts.

Overall revenue reached $46.7 billion, while profit climbed to $26.4 billion, both higher than expected.

Nvidia forecasts sales of $54 billion for the current quarter.

CEO Jensen Huang said the company remains at the ‘beginning of the buildout’, with trillions expected to be spent on AI by the decade’s end.

However, investors pushed shares down 3% in extended trading, reflecting concerns that rapid growth is becoming harder to maintain as annual sales expand.

Nvidia’s performance was also affected by earlier restrictions on chip sales to China, although the removal of limits in exchange for a sales levy is expected to support future revenue.

Analysts noted that while AI continues to fuel stock market optimism, the pace of growth is slowing compared with the company’s earlier surge.

Samsung enhances TV and monitor range with Copilot AI

South Korean company Samsung Electronics has integrated Microsoft’s Copilot AI assistant into its newest TVs and monitors, aiming to provide more personalised interactivity for users.

The technology will be available across newly released models each year, including the premium Micro RGB TV. With Copilot built directly into displays, Samsung explained that viewers can use voice commands or a remote control to search, learn and engage with content more easily.

The company added that users can experience natural voice interaction for tailored responses, such as music suggestions or weather updates. Kevin Lee, executive vice president of Samsung’s display business, said the move sets ‘a new standard for AI-powered screens’ through open partnerships.

Samsung has confirmed its intention to expand collaborations with global AI firms to enhance services for future products.

Google boosts Virginia with $9 billion AI and cloud projects

Alphabet’s Google has confirmed plans to invest $9 billion in Virginia by 2026, strengthening the state’s role as a hub for data infrastructure in the US.

The focus will be on AI and cloud computing, positioning Virginia at the forefront of global technological competition.

The plan includes a new Chesterfield County facility and expansion at existing campuses in Loudoun and Prince William counties. These centres are part of the digital backbone that supports cloud services and AI workloads.

Dominion Energy will supply power for the new Chesterfield project, which may take up to seven years before it is fully operational.

The rapid growth of data centres in Virginia has increased concerns about energy demand. Google said it is working with partners on efficiency and power management solutions and funding community development.

Earlier in August, the company announced a $1 billion initiative to provide every college student in Virginia with one year of free access to its AI Pro plan and training opportunities.

Google’s move follows a broader trend in the technology sector. Microsoft, Amazon, Alphabet, and Meta are expected to spend hundreds of billions of dollars on AI-related projects, with much dedicated to new data centres.

Northern Virginia remains the boom’s epicentre, with Loudoun County earning the name ‘Data Centre Alley’ for its concentration of facilities.

Google alerts users after detecting malware spread through captive portals

Google has warned some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co., Ltd. Google said it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.

Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

Green AI and the battle between progress and sustainability

AI is increasingly recognised both for its transformative potential and for its growing environmental footprint across industries. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.

For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.

That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.

On the other hand, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.

Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?

Sustainability challenges

In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.

Energy consumption

The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.

According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model emitted roughly 284 tonnes of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.
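Estimates like these are typically derived from hardware power draw, runtime, and the carbon intensity of the local grid. A minimal sketch of that arithmetic, using purely illustrative input figures (the GPU count, wattage, and grid intensity below are assumptions, not values from the study):

```python
def training_co2_tonnes(num_gpus, gpu_watts, hours, pue, kg_co2_per_kwh):
    """Rough back-of-envelope estimate of training emissions in tonnes of CO2.

    pue: power usage effectiveness of the data centre
         (total facility power / IT power, commonly around 1.1-1.6).
    kg_co2_per_kwh: carbon intensity of the electricity grid.
    """
    energy_kwh = num_gpus * gpu_watts * hours * pue / 1000  # Wh -> kWh
    return energy_kwh * kg_co2_per_kwh / 1000               # kg -> tonnes

# Illustrative run: 1,000 GPUs at 300 W for 30 days, PUE 1.2,
# on a grid emitting 0.4 kg CO2 per kWh.
est = training_co2_tonnes(1000, 300, 30 * 24, 1.2, 0.4)
print(f"~{est:.0f} tonnes CO2")  # ~104 tonnes CO2
```

Varying the grid intensity term shows why siting matters: the same training run on a low-carbon grid (e.g. 0.05 kg CO₂/kWh) would emit roughly an eighth as much.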

Carbon emissions

Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is several orders of magnitude higher than that of earlier generations. That research highlights the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.

Water usage and cooling needs

Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.

Resource extraction and hardware demands

The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden, but significant footprint to AI development, extending beyond data centres to global supply chains.

Inequality in resource distribution

Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.

At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.

Progress & solutions

While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.

AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.

Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.

There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.
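To make the efficiency argument concrete, the core idea of LoRA can be sketched in a few lines: rather than updating a full weight matrix, two small low-rank matrices are trained and added to the frozen weights. The dimensions and initialisation below are illustrative assumptions, not taken from any specific model:

```python
import numpy as np

d, r = 4096, 8                       # hidden size, adapter rank (r << d)
W = np.random.randn(d, d)            # frozen pretrained weight matrix
A = np.random.randn(d, r) * 0.01     # trainable down-projection
B = np.zeros((r, d))                 # trainable up-projection (zero init)

# Only A and B are trained; the adapted layer computes W @ x + A @ (B @ x).
full_params = d * d
lora_params = d * r + r * d
print(f"trainable params: {lora_params:,} vs {full_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")

x = np.ones(d)
y = W @ x + A @ (B @ x)              # adapted forward pass
```

With these sizes, the adapter trains 65,536 parameters instead of roughly 16.8 million, under 0.4% of the full matrix, which is why such methods can cut fine-tuning energy costs so sharply.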

Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.

Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.

Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.

Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.

In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.

Balancing the two sides

The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. On the other, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.

Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development while another prioritises innovation at any cost, the former risks falling behind in technological leadership. This dilemma raises a geopolitical question: sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.

At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.

Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. This leaves governments, companies, and researchers facing a complex but urgent question: should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?

Conclusion

The discussion on Green AI highlights one of the central dilemmas of our digital age: how to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. On the other, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.

Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.

In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.
