Espionage fears rise as TAG-144 evolves techniques

A threat group known as TAG-144 has stepped up cyberattacks on South American government agencies, researchers have warned.

The group, also called Blind Eagle and APT-C-36, has been active since 2018 and is linked to espionage and extortion campaigns. Recent activity shows a sharp rise in spear-phishing, often using spoofed government email accounts to deliver remote access trojans.

Analysts say the group has shifted towards more advanced methods, embedding malware inside image files through steganography. Payloads are then extracted in memory, allowing attackers to evade antivirus software and maintain access to compromised systems.
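As an illustration of the general technique, not TAG-144's actual tooling, the sketch below shows one of the simplest forms of image-based payload hiding: data appended after a PNG's closing IEND chunk, which image viewers ignore but malware loaders can read back. The sample bytes and payload are invented for demonstration.

```python
# Illustrative sketch: recovering data hidden after a PNG's IEND chunk,
# a simple form of image-based payload smuggling. Real campaigns use far
# more elaborate encodings; this only demonstrates the general idea.

def appended_payload(png_bytes: bytes) -> bytes:
    """Return any bytes trailing the IEND chunk, else b''."""
    iend = png_bytes.rfind(b"IEND")
    if iend == -1:
        return b""
    # The 4-byte IEND marker is followed by a 4-byte CRC;
    # anything after that is appended data.
    return png_bytes[iend + 8:]

# Minimal PNG-like sample with a fake payload appended (invented data)
sample = b"\x89PNG\r\n\x1a\n...IEND\xaeB`\x82" + b"MALICIOUS"
print(appended_payload(sample))  # b'MALICIOUS'
```

A defender can run the same check in reverse: any image whose bytes continue past the IEND CRC deserves closer inspection.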

Colombian government institutions have been hit hardest, with stolen credentials and sensitive data raising concerns over both financial and national security risks. Security experts warn that TAG-144’s evolving tactics blur the line between organised crime and state-backed espionage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Attackers bypass email security by abusing Microsoft Teams defaults

A phishing campaign exploits Microsoft Teams’ external communication features, with attackers posing as IT helpdesk staff to gain access to screen sharing and remote control. The method sidesteps traditional email security controls by using Teams’ default settings.

The attacks exploit Microsoft 365’s default external collaboration feature, which allows unauthenticated users to contact organisations. Axon Team reports attackers create malicious Entra ID tenants with .onmicrosoft.com domains or use compromised accounts to initiate chats.

Although Microsoft issues warnings for suspicious messages, attackers bypass these by initiating external voice calls, which generate no alerts. Once trust is established, they request screen sharing, enabling them to monitor victims’ activity and guide them toward malicious actions.

The highest risk arises where organisations enable external remote-control options, giving attackers potential full access to workstations directly through Teams. This removes the need for traditional remote tools such as QuickAssist or AnyDesk, creating a severe security exposure.

Defenders are advised to monitor Microsoft 365 audit logs for markers such as ChatCreated, MessageSent, and UserAccepted events, as well as TeamsImpersonationDetected alerts. Restricting external communication and strengthening user awareness remain key to mitigating this threat.
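As a hedged sketch of that advice, the snippet below filters a hypothetical JSON-lines export of the Microsoft 365 unified audit log for the event types listed above. The `Operation` field follows the audit schema, but exact export formats vary, so treat the field names and sample records as assumptions.

```python
import json

# Teams-related event types worth reviewing, per the advice above.
SUSPECT_OPS = {"ChatCreated", "MessageSent", "UserAccepted",
               "TeamsImpersonationDetected"}

def flag_events(lines):
    """Yield audit records whose Operation matches a suspect event type."""
    for line in lines:
        record = json.loads(line)
        if record.get("Operation") in SUSPECT_OPS:
            yield record

# Invented sample records for demonstration only
log = [
    '{"Operation": "ChatCreated", "UserId": "helpdesk@evil.onmicrosoft.com"}',
    '{"Operation": "FileAccessed", "UserId": "alice@example.com"}',
]
print([r["Operation"] for r in flag_events(log)])  # ['ChatCreated']
```

In practice, flagged records would then be correlated with the sender's tenant domain to spot unauthenticated .onmicrosoft.com contacts.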

NVIDIA launches Spectrum-XGS to connect AI data centres

AI data centres face growing pressure as computing demands exceed the capacity of single facilities. Traditional Ethernet networks suffer from high latency and inconsistent transfers, forcing companies to build ever-larger centres or risk performance issues.

NVIDIA aims to tackle these challenges with its new Spectrum-XGS Ethernet technology, introducing ‘scale-across’ capabilities. The system links multiple AI data centres using distance-adaptive algorithms, congestion control, latency management, and end-to-end telemetry.

NVIDIA claims the improvements can nearly double GPU communication performance, supporting what it calls ‘giga-scale AI super-factories.’

CoreWeave plans to be among the first adopters, connecting its facilities into a single distributed supercomputer. The deployment will test if Spectrum-XGS can deliver fast, reliable AI across multiple sites without needing massive single-location centres.

While the technology promises greater efficiency and distributed computing power, its effectiveness depends on real-world infrastructure, regulatory compliance, and data synchronisation.

If successful, it could reshape AI data centre design, enabling faster services and potentially lower operational costs across industries.

Nigeria drafts framework for AI use in governance and services

Nigeria is preparing a national framework to guide responsible use of AI in governance, healthcare, education and agriculture, according to the country's IT regulator.

NITDA Director General Kashifu Abdullahi told a policy lecture in Abuja that AI could accelerate economic transformation if properly harnessed. He emphasised that Nigeria’s youthful population should move from being consumers to becoming innovators and creators.

He urged stakeholders to view automation as an opportunity to generate jobs, highlighting that over 60% of Nigerians are under 25. Abdullahi described this demographic as a key asset in positioning the nation for global competitiveness.

Meanwhile, a joint report from the Digital Education Council and the Global Finance & Technology Network found that AI boosts productivity, though adoption remains uneven. It warned of a growing divide between organisations that use AI effectively and those falling behind.

Anthropic reports misuse of its AI tools in cyber incidents

AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.

The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks. However, it also stated that it was able to disrupt the activity and notify the authorities. Anthropic said it is continuing to improve its monitoring and detection systems.

In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.

Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.

Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.

Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.

Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.

Global agencies and the FBI issue a warning on Salt Typhoon operations

The FBI, US agencies, and international partners have issued a joint advisory on a cyber campaign called ‘Salt Typhoon.’

The operation is said to have affected more than 200 US companies, with victims identified across 80 countries.

The advisory, co-released by the FBI, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the Department of Defense Cyber Crime Center, was also supported by agencies in the UK, Canada, Australia, Germany, Italy and Japan.

According to the statement, Salt Typhoon has focused on exploiting network infrastructure such as routers, virtual private networks and other edge devices.

The group has previously been linked to campaigns targeting US telecommunications networks in 2024, as well as activity involving a US National Guard network. The advisory also names three Chinese companies that allegedly provide products and services used in the group's operations.

Telecommunications, defence, transportation and hospitality organisations are advised to strengthen cybersecurity measures. Recommended actions include patching vulnerabilities, adopting zero-trust approaches and using the technical details included in the advisory.

Salt Typhoon, also known as Earth Estries and Ghost Emperor, has been observed since at least 2019 and is reported to maintain long-term access to compromised devices.

NVIDIA’s sales grow as the market questions AI momentum

Sales of AI chips by Nvidia rose strongly in its latest quarter, though growth was slower than in previous periods, raising questions about the sustainability of demand.

The company’s data centre division reported revenue of 41.1 billion USD between May and July, a 56% rise from last year but slightly below analyst forecasts.
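A quick back-of-envelope check of the stated 56% figure (illustrative arithmetic only, using the numbers reported above):

```python
# Implied prior-year quarter from the stated 56% year-on-year rise.
data_centre_revenue = 41.1      # USD billions, May-July quarter
growth = 0.56                   # stated year-on-year rise

implied_prior_year = data_centre_revenue / (1 + growth)
print(round(implied_prior_year, 1))  # 26.3 USD billions a year earlier
```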

Overall revenue reached 46.7 billion USD, while profit climbed to 26.4 billion USD, both higher than expected.

Nvidia forecasts sales of 54 billion USD for the current quarter.

CEO Jensen Huang said the company remains at the ‘beginning of the buildout’, with trillions expected to be spent on AI by the decade’s end.

However, investors pushed shares down 3% in extended trading, reflecting concerns that rapid growth is becoming harder to maintain as annual sales expand.

Nvidia’s performance was also affected by earlier restrictions on chip sales to China, although the removal of limits in exchange for a sales levy is expected to support future revenue.

Analysts noted that while AI continues to fuel stock market optimism, the pace of growth is slowing compared with the company’s earlier surge.

WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, which the company says keeps messages encrypted and private, so they are not visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.

Google alerts users after detecting malware spread through captive portals

Google has issued warnings to some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co, Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Silk Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.

AI firms under scrutiny for exposing children to harmful content

The National Association of Attorneys General has called on 13 AI firms, including OpenAI and Meta, to strengthen child protection measures. Authorities warned that AI chatbots have been exposing minors to sexually suggestive material, raising urgent safety concerns.

Growing use of AI tools among children has amplified worries. In the US, surveys show that over three-quarters of teenagers regularly interact with AI companions, while UK data indicates that half of online 8-15-year-olds have used generative AI in the past year.

Parents, schools, and children’s rights organisations are increasingly alarmed by potential risks such as grooming, bullying, and privacy breaches.

Meta faced scrutiny after leaked documents revealed its AI Assistants engaged in ‘flirty’ interactions with children, some as young as eight. The NAAG described the revelations as shocking and warned that other AI firms could pose similar threats.

Lawsuits against Google and Character.ai underscore the potential real-world consequences of sexualised AI interactions.

Officials insist that companies cannot justify policies that normalise sexualised behaviour with minors. Tennessee Attorney General Jonathan Skrmetti warned that such practices are a ‘plague’ and urged innovation to avoid harming children.