Global agencies and the FBI issue a warning on Salt Typhoon operations

The FBI, US agencies, and international partners have issued a joint advisory on a cyber campaign called ‘Salt Typhoon.’

The operation is said to have affected more than 200 US companies across 80 countries.

The advisory, co-released by the FBI, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the Department of Defence Cyber Crime Centre, was also supported by agencies in the UK, Canada, Australia, Germany, Italy and Japan.

According to the statement, Salt Typhoon has focused on exploiting network infrastructure such as routers, virtual private networks and other edge devices.

The group has been previously linked to campaigns targeting US telecommunications networks in 2024. It has also been connected with activity involving a US National Guard network, the advisory names three Chinese companies allegedly providing products and services used in their operations.

Telecommunications, defence, transportation and hospitality organisations are advised to strengthen cybersecurity measures. Recommended actions include patching vulnerabilities, adopting zero-trust approaches and using the technical details included in the advisory.

Salt Typhoon, also known as Earth Estrie and Ghost Emperor, has been observed since at least 2019 and is reported to maintain long-term access to compromised devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, which ensures that messages remain encrypted and private instead of being visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts highlight escalating scale and complexity of global DDoS activity in 2025

Netscout has released new research examining the current state of distributed denial-of-service (DDoS) attacks, noting both their growing volume and increasing technical sophistication.

The company recorded more than eight million DDoS attacks worldwide in the first half of 2025, including over 3.2 million in the EMEA region. Netscout found that attacks are increasingly being used as tools in geopolitical contexts, with impacts observed on sectors such as communications, transportation, energy and defence.

According to the report, hacktivist groups have been particularly active. For example, NoName057(16) claimed responsibility for more than 475 incidents in March 2025—over three times the number of the next most active group—focusing on government websites in Spain, Taiwan and Ukraine. Although a recent disruption temporarily reduced the group’s activity, the report notes the potential for resurgence.

Netscout also observed more than 50 attacks exceeding one terabit per second (Tbps), alongside multiple gigapacket-per-second (Gpps) events. Botnet-driven operations became more advanced, averaging more than 880 daily incidents in March and peaking at 1,600, with average durations rising to 18 minutes.

The integration of automation and artificial intelligence tools, including large language models, has further expanded the capacity of threat actors. Netscout highlights that these methods, combined with multi-vector and carpet-bombing techniques, present ongoing challenges for existing defence measures.

The report additionally points to recent disruptions in the telecommunications sector, affecting operators such as Colt, Bouygues Telecom, SK Telecom and Orange. Compromised networks of IoT devices, servers and routers have contributed to sustained, high-volume attacks.

Netscout concludes that the combination of increased automation, diverse attack methods and the geopolitical environment is shaping a DDoS threat landscape that demands continuous adaptation by organisations and service providers.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Google alerts users after detecting malware spread through captive portals

Warnings have been issued by Google to some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co, Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts Nevada government systems

The State of Nevada reported a cyberattack affecting several state government systems, with recovery efforts underway. Some websites and phone lines may be slow or offline while officials restore operations.

Governor Joe Lombardo’s office stated there is no evidence that personal information has been compromised, emphasising that the issue is limited to state systems. The incident is under investigation by both state and federal authorities, although technical details have not been released.

Several agencies, including the Department of Motor Vehicles, have been affected, prompting temporary office closures until normal operations can resume. Emergency services, including 911, continue to operate without disruption.

Officials prioritise system validation and safe restoration to prevent further disruption to state services.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Insecure code blamed for 74 percent of company breaches

Nearly three-quarters of companies have experienced a security breach in the past year due to flaws in their software code.

According to a new SecureFlag study, 74% of organisations admitted to at least one incident caused by insecure code, with almost half suffering multiple breaches.

The report has renewed scrutiny of AI-generated code, which is growing in popularity across the industry. While some experts claim AI can outperform humans, concerns remain that these tools are reproducing insecure coding patterns at scale.

On the upside, companies are increasing developer security training. Around 44% provide quarterly updates, while 29% do so monthly.

Most use video tutorials and eLearning platforms, with a third hosting interactive events like capture-the-flag hacking games.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube under fire for AI video edits without creator consent

Anger grows as YouTube secretly alters some uploaded videos using machine learning. The company admitted that it had been experimenting with automated edits, which sharpen images, smooth skin, and enhance clarity, without notifying creators.

Although tools like ChatGPT or Gemini did not generate these changes, they still relied on AI.

The issue has sparked concern among creators, who argue that the lack of consent undermines trust.

YouTuber Rhett Shull publicly criticised the platform, prompting YouTube liaison Rene Ritchie to clarify that the edits were simply efforts to ‘unblur and denoise’ footage, similar to smartphone processing.

However, creators emphasise that the difference lies in transparency, since phone users know when enhancements are applied, whereas YouTube users were unaware.

Consent remains central to debates around AI adoption, especially as regulation lags and governments push companies to expand their use of the technology.

Critics warn that even minor, automatic edits can treat user videos as training material without permission, raising broader concerns about control and ownership on digital platforms.

YouTube has not confirmed whether the experiment will expand or when it might end.

For now, viewers noticing oddly upscaled Shorts may be seeing the outcome of these hidden edits, which have only fuelled anger about how AI is being introduced into creative spaces.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!