Nvidia’s sales grow as the market questions AI momentum

Nvidia’s AI chip sales rose strongly in its latest quarter, though growth was slower than in previous periods, raising questions about the sustainability of demand.

The company’s data centre division reported revenue of $41.1 billion between May and July, a 56% rise from a year earlier but slightly below analyst forecasts.

Overall revenue reached $46.7 billion, while profit climbed to $26.4 billion, both higher than expected.

Nvidia forecasts sales of $54 billion for the current quarter.

CEO Jensen Huang said the company remains at the ‘beginning of the buildout’, with trillions expected to be spent on AI by the decade’s end.

However, investors pushed shares down 3% in extended trading, reflecting concerns that rapid growth is becoming harder to maintain as annual sales expand.

Nvidia’s performance was also affected by earlier restrictions on chip sales to China, although the removal of limits in exchange for a sales levy is expected to support future revenue.

Analysts noted that while AI continues to fuel stock market optimism, the pace of growth is slowing compared with the company’s earlier surge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google boosts Virginia with $9 billion AI and cloud projects

Alphabet’s Google has confirmed plans to invest $9 billion in Virginia by 2026, strengthening the state’s role as a hub for data infrastructure in the US.

The focus will be on AI and cloud computing, positioning Virginia at the forefront of global technological competition.

The plan includes a new Chesterfield County facility and expansion at existing campuses in Loudoun and Prince William counties. These centres are part of the digital backbone that supports cloud services and AI workloads.

Dominion Energy will supply power for the new Chesterfield project, which may take up to seven years before it is fully operational.

The rapid growth of data centres in Virginia has increased concerns about energy demand. Google said it is working with partners on efficiency and power management solutions and funding community development.

Earlier in August, the company announced a $1 billion initiative to provide every college student in Virginia with one year of free access to its AI Pro plan and training opportunities.

Google’s move follows a broader trend in the technology sector. Microsoft, Amazon, Alphabet, and Meta are expected to spend hundreds of billions of dollars on AI-related projects, with much dedicated to new data centres.

Northern Virginia remains the boom’s epicentre, with Loudoun County earning the name ‘Data Centre Alley’ due to its concentration of facilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI tools underpin a new wave of ransomware

Avast researchers have found that the FunkSec ransomware group used generative AI tools to accelerate attack development.

While the malware was not fully AI-generated, AI aided in writing code, crafting phishing templates and enhancing internal tooling.

A subtle encryption flaw in FunkSec’s code provided the breakthrough needed for decryption. Avast quietly developed a free decryption tool and, in cooperation with law enforcement, rescued dozens of affected users without the need for ransom payments.

This marks one of the earliest recorded instances of AI being used in ransomware development, aimed at boosting productivity and stealth. It shows how cybercriminals are adopting AI to lower entry barriers, and why forensic investigation and technical agility remain crucial defences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts highlight escalating scale and complexity of global DDoS activity in 2025

Netscout has released new research examining the current state of distributed denial-of-service (DDoS) attacks, noting both their growing volume and increasing technical sophistication.

The company recorded more than eight million DDoS attacks worldwide in the first half of 2025, including over 3.2 million in the EMEA region. Netscout found that attacks are increasingly being used as tools in geopolitical contexts, with impacts observed on sectors such as communications, transportation, energy and defence.

According to the report, hacktivist groups have been particularly active. For example, NoName057(16) claimed responsibility for more than 475 incidents in March 2025—over three times the number of the next most active group—focusing on government websites in Spain, Taiwan and Ukraine. Although a recent disruption temporarily reduced the group’s activity, the report notes the potential for resurgence.

Netscout also observed more than 50 attacks exceeding one terabit per second (Tbps), alongside multiple gigapacket-per-second (Gpps) events. Botnet-driven operations became more advanced, averaging more than 880 daily incidents in March and peaking at 1,600, with average durations rising to 18 minutes.

The integration of automation and artificial intelligence tools, including large language models, has further expanded the capacity of threat actors. Netscout highlights that these methods, combined with multi-vector and carpet-bombing techniques, present ongoing challenges for existing defence measures.

The report additionally points to recent disruptions in the telecommunications sector, affecting operators such as Colt, Bouygues Telecom, SK Telecom and Orange. Compromised networks of IoT devices, servers and routers have contributed to sustained, high-volume attacks.

Netscout concludes that the combination of increased automation, diverse attack methods and the geopolitical environment is shaping a DDoS threat landscape that demands continuous adaptation by organisations and service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI redefines how cybersecurity teams detect and respond

AI, especially generative models, has become a staple in cybersecurity operations, extending its role from traditional machine learning tools to core functions within CyberOps.

Generative AI now supports forensics, incident investigation, log parsing, orchestration, vulnerability prioritisation and report writing. It accelerates workflows, enabling teams to ramp up detection and response and to concentrate human efforts on strategic tasks.

Experts highlight that AI is changing not what CyberOps teams do, but how they do it. AI scales routine tasks, such as SOC level-1 and level-2 operations, allowing analysts to shift their focus from triage to investigation and threat modelling.

Junior staff benefit particularly from AI, which boosts accuracy and consistency. Senior analysts and CISOs also gain from AI’s capacity to amplify productivity while safeguarding oversight, a true force multiplier.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google alerts users after detecting malware spread through captive portals

Google has issued warnings to some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co, Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud develops blockchain network for financial institutions

Google Cloud is creating its own blockchain platform, the Google Cloud Universal Ledger (GCUL), targeting the financial sector. The network provides a neutral, compliant infrastructure for payment automation and digital asset management through a single API.

GCUL allows financial institutions to build Python-based smart contracts, with support for various use cases such as wholesale payments and asset tokenisation. Although called a Layer 1 network, its private, permissioned design raises debate over its status as a decentralised blockchain.

The company also revealed a series of AI-driven security enhancements at its Security Summit 2025.

These include an ‘agentic security operations centre’ for proactive threat detection, the Alert Investigation agent for automated analysis, and Model Armour to prevent prompt injection, jailbreaking, and data leaks.

Currently in a private testnet, GCUL was first announced in March in collaboration with the CME Group, which is piloting solutions on the platform. Google Cloud plans to reveal more details in the future as the project develops.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI firms under scrutiny for exposing children to harmful content

The National Association of Attorneys General has called on 13 AI firms, including OpenAI and Meta, to strengthen child protection measures. Authorities warned that AI chatbots have been exposing minors to sexually suggestive material, raising urgent safety concerns.

Growing use of AI tools among children has amplified these worries. In the US, surveys show that over three-quarters of teenagers regularly interact with AI companions, while UK data indicates that half of online 8-15-year-olds have used generative AI in the past year.

Parents, schools, and children’s rights organisations are increasingly alarmed by potential risks such as grooming, bullying, and privacy breaches.

Meta faced scrutiny after leaked documents revealed its AI Assistants engaged in ‘flirty’ interactions with children, some as young as eight. The NAAG described the revelations as shocking and warned that other AI firms could pose similar threats.

Lawsuits against Google and Character.ai underscore the potential real-world consequences of sexualised AI interactions.

Officials insist that companies cannot justify policies that normalise sexualised behaviour with minors. Tennessee Attorney General Jonathan Skrmetti warned that such practices are a ‘plague’ and urged innovation to avoid harming children.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.
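Patterns of this kind are easy to screen for automatically. The minimal sketch below flags passwords built from an organisation’s name plus a year; the function name, year range, and suffix rules are illustrative assumptions, not details from the Cybernews report.

```python
import re

def is_predictable(password: str, org_name: str, years=range(2015, 2031)) -> bool:
    """Flag passwords derived from an organisation's name plus a year,
    the kind of predictable pattern described above."""
    p = password.lower()
    org = org_name.lower()
    # Organisation name combined with any plausible year is predictable.
    for year in years:
        if org in p and str(year) in p:
            return True
    # Also flag the bare organisation name with trivial suffixes or digits.
    return bool(re.fullmatch(rf"{re.escape(org)}[!@#$%]*\d{{0,4}}", p))

print(is_predictable("Tencent2025!", "Tencent"))  # True
print(is_predictable("x9#Vq!t7Lp$2", "Tencent"))  # False
```

A check like this can run in a pre-deployment credential audit, rejecting any hardcoded secret that matches an organisation-derived pattern.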

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!