AI scams target seniors’ savings

Cybersecurity experts have warned that AI is being used to target senior citizens in sophisticated financial scams. In the ‘Phantom Hacker’ scam, criminals impersonate tech support, bank and government workers to steal seniors’ life savings.

In the first stage, a fake tech support worker gains access to the victim’s computer under the pretence of checking accounts for signs of fraud. A ‘fraud department’ impersonator then tells the victim that their funds are at risk from foreign hackers and must be moved to a ‘safe’ account.

A fake government worker then directs the victim to transfer money to an alias account controlled by the scammers. Check Point CIO Pete Nicoletti says AI helps scammers identify targets by analysing social media and online activity.

Experts stress that reporting the theft immediately is crucial. Delays significantly reduce the chance of recovering stolen funds, leaving many victims permanently defrauded.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Azure Active Directory flaw exposes sensitive credentials

A critical security weakness has exposed Azure Active Directory application credentials stored in publicly accessible appsettings.json configuration files, giving attackers a direct route into Microsoft 365 tenants.

By exploiting these credentials, threat actors can masquerade as trusted applications and gain unauthorised entry to sensitive organisational data.

The attack abuses the OAuth 2.0 client credentials flow: armed with a leaked client ID and secret, attackers can request valid access tokens directly from Microsoft’s identity platform.

Once authenticated, they can access Microsoft Graph APIs to enumerate users, groups, and directory roles, especially when applications have been granted excessive permissions such as Directory.Read.All or Mail.Read. Such access permits data harvesting across SharePoint, OneDrive, and Exchange Online.
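
The chain is short in practice. Below is a minimal sketch of it in Python, using hypothetical stand-ins for credentials leaked via an exposed appsettings.json; the token endpoint and the Graph call are the standard Microsoft identity platform and Microsoft Graph APIs.

```python
import requests

# Hypothetical stand-ins for credentials leaked via appsettings.json.
TENANT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_ID = "11111111-1111-1111-1111-111111111111"
CLIENT_SECRET = "leaked-secret-value"

# Step 1: OAuth 2.0 client credentials flow. A valid client ID and secret
# are all the Microsoft identity platform requires to mint an app token.
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",
    },
).json()["access_token"]

# Step 2: call Microsoft Graph as the app. With Directory.Read.All this
# enumerates every user in the tenant.
users = requests.get(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {token}"},
).json()

for user in users.get("value", []):
    print(user.get("displayName"), user.get("userPrincipalName"))
```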

Attackers can also deploy malicious applications under compromised tenants, escalating privileges from limited read access to complete administrative control.

Additional exposed secrets like storage account keys or database connection strings enable lateral movement, modification of critical data, and the creation of persistent backdoors within cloud infrastructure.

Organisations face profound compliance implications under GDPR, HIPAA, or SOX. The incident underscores the importance of auditing configuration files, storing credentials in a dedicated secret store such as Azure Key Vault, and monitoring authentication patterns to catch long-running, sophisticated attacks.
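
To illustrate the Key Vault recommendation, here is a minimal sketch using the Azure SDK for Python; the vault URL and secret name are hypothetical placeholders.

```python
# Remediation sketch: resolve secrets at runtime from Azure Key Vault
# rather than committing them to appsettings.json. The vault URL and
# secret name below are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # e.g. a managed identity in production
client = SecretClient(
    vault_url="https://contoso-vault.vault.azure.net",
    credential=credential,
)

# The app fetches the secret on demand, so nothing sensitive ships in
# deployed configuration files.
db_connection_string = client.get_secret("db-connection-string").value
```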

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated media must now carry labels in China

China has introduced a sweeping new law that requires all AI-generated content online to carry labels. The measure, which came into effect on 1 September, aims to tackle misinformation, fraud and copyright infringement by ensuring greater transparency in digital media.

The law, first announced in March by the Cyberspace Administration of China, mandates that all AI-created text, images, video and audio must carry explicit and implicit markings.

These include visible labels and embedded metadata such as watermarks in files. Authorities argue that the rules will help safeguard users while reinforcing Beijing’s tightening grip over online spaces.
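
Setting the regulation’s exact technical requirements aside, the idea of an implicit marking can be sketched as a naive metadata label embedded in a PNG file. The field name and format below are hypothetical; production watermarking schemes are designed to survive cropping and re-encoding, which plain metadata does not.

```python
# Illustrative only: a naive 'implicit label' stored as PNG metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), "white")  # stand-in for generated output

metadata = PngInfo()
metadata.add_text("AIGC-Label", "generated-by=example-model; date=2025-09-01")
image.save("labelled.png", pnginfo=metadata)

# Reading the label back from the saved file:
print(Image.open("labelled.png").text)  # {'AIGC-Label': 'generated-by=...'}
```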

Major platforms such as WeChat, Douyin, Weibo and RedNote moved quickly to comply, rolling out new features and notifications for their users. The regulations also form part of the Qinglang campaign, a broader effort by Chinese authorities to clean up online activity with a strong focus on AI oversight.

While Google and other US companies are experimenting with content authentication tools, China has enacted legally binding rules nationwide.

Observers suggest that other governments may soon follow, as global concern about the risks of unlabelled AI-generated material grows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT safety checks may trigger police action

OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.

The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.

The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.

By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
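
OpenAI has not published implementation details, but the two-tier policy it describes can be pictured as a simple routing step. The sketch below is purely hypothetical.

```python
# Hypothetical sketch of the two-tier routing described in the post.
from enum import Enum, auto

class Risk(Enum):
    NONE = auto()
    SELF_HARM = auto()
    HARM_TO_OTHERS = auto()

def route(risk: Risk) -> str:
    if risk is Risk.SELF_HARM:
        # Self-harm signals surface professional resources, not police.
        return "show crisis resources"
    if risk is Risk.HARM_TO_OTHERS:
        # Threats to others go to trained human moderators, who may
        # escalate imminent risks to law enforcement.
        return "escalate to human review"
    return "continue conversation"
```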

The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.

OpenAI says it is working to strengthen consistency across interactions and is developing parental controls, new interventions for risky behaviour, and ways to connect users with professional help before crises worsen.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce customers hit by OAuth token breach

Security researchers have warned Salesforce customers after hackers stole data by exploiting OAuth access tokens linked to the Salesloft Drift integration, exposing a critical weakness in third-party SaaS connections.

Google’s Threat Intelligence Group (GTIG) reported that the threat actor UNC6395 used the tokens to infiltrate hundreds of Salesforce environments, exporting large volumes of sensitive information. Stolen data included AWS keys, passwords, and Snowflake tokens.

Experts warn that compromised SaaS integrations are a critical blind spot: attackers inherit the same permissions as the trusted app and can often bypass multi-factor authentication entirely. Investigations are ongoing to determine whether connected systems, such as AWS or VPNs, were also breached.
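
Stolen tokens are potent because OAuth bearer tokens are exactly that: whoever bears one inherits the connected app’s permissions, with no password or MFA challenge along the way. A minimal sketch against Salesforce’s standard REST query endpoint, with a hypothetical instance URL and token:

```python
import requests

# Hypothetical stand-ins for a victim's instance URL and a stolen token.
INSTANCE_URL = "https://victim.my.salesforce.com"
STOLEN_TOKEN = "stolen-bearer-token"

# Presenting the token is the entire authentication step; the response
# reflects whatever the compromised integration was allowed to read.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v60.0/query",
    headers={"Authorization": f"Bearer {STOLEN_TOKEN}"},
    params={"q": "SELECT Id, Name FROM Account LIMIT 100"},
)
print(resp.json())
```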

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude chatbot misused in unprecedented cyber extortion case

A hacker exploited Anthropic’s Claude chatbot to automate one of the most extensive AI-driven cybercrime operations yet recorded, targeting at least 17 companies across multiple sectors, the firm revealed.

According to Anthropic’s report, the attacker used Claude Code to identify vulnerable organisations, generate malicious software, and extract sensitive files, including defence data, financial records, and patients’ medical information.

The chatbot then sorted the stolen material, identified leverage for extortion, calculated realistic bitcoin demands, and even drafted ransom notes and extortion emails on behalf of the hacker.

Victims included a defence contractor, a financial institution, and healthcare providers. Extortion demands reportedly ranged from $75,000 to over $500,000, although it remains unclear how much was actually paid.

Anthropic declined to disclose the companies affected but confirmed new safeguards are in place. The firm warned that AI lowers the barrier to entry for sophisticated cybercrime, making such misuse increasingly likely.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fragmenting digital identities with aliases offers added security

People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.

Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.

Each alias also acts as a tracer. If one is compromised or starts receiving spam, it reveals exactly which service leaked it and can simply be disabled, cutting off the problem at its source.

Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.

Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.
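
As a simple illustration, plus-addressing (supported by providers such as Gmail and Fastmail) gives every service its own disposable tag; dedicated alias services go further and hide the base address entirely. A minimal sketch:

```python
# Minimal sketch: derive a per-service alias via plus-addressing.
# Providers like Gmail deliver user+tag@domain to user@domain while
# keeping the tag visible for filtering and leak tracing.
def make_alias(user: str, domain: str, service: str) -> str:
    tag = "".join(ch for ch in service.lower() if ch.isalnum())
    return f"{user}+{tag}@{domain}"

print(make_alias("alice", "example.com", "News Shop"))
# alice+newsshop@example.com -- if spam arrives here, you know who leaked it.
```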

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!