AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta under fire over AI deepfake celebrity chatbots

Meta faces scrutiny after a Reuters investigation found its AI tools created deepfake chatbots and images of celebrities without consent. Some bots made flirtatious advances, encouraged meet-ups, and generated photorealistic sexualised images.

The affected celebrities include Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez.

The probe also uncovered a chatbot of 16-year-old actor Walker Scobell producing inappropriate images, raising serious child safety concerns. Meta admitted policy enforcement failures and deleted around a dozen bots shortly before Reuters published its report.

A spokesperson acknowledged that intimate depictions of adult celebrities and any sexualised content involving minors should not have been generated.

Following the revelations, Meta announced new safeguards to protect teenagers, including restricting access to certain AI characters and retraining models to reduce inappropriate content.

California Attorney General Rob Bonta called exposing children to sexualised content ‘indefensible,’ and experts warned Meta could face legal challenges over intellectual property and publicity laws.

The case highlights broader concerns about AI safety and ethical boundaries. It also raises questions about regulatory oversight as social media platforms deploy tools that can create realistic deepfake content without proper guardrails.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How local LLMs are changing AI access

As AI adoption rises, more users explore running large language models (LLMs) locally instead of relying on cloud providers.

Local deployment gives individuals control over data, reduces costs, and avoids limits imposed by AI-as-a-service companies. Thanks to advances in both software tooling and consumer hardware, users can now experiment with AI on their own machines.

Concerns over privacy and data sovereignty are driving interest. Many cloud AI services retain user data for years, even when privacy assurances are offered.

By running models locally, companies and hobbyists keep sensitive information under their own control, which makes obligations such as GDPR easier to meet while still leveraging high-performance AI tools.

Hardware considerations like GPU memory and processing power are central to local LLM performance. Quantisation techniques allow models to run efficiently with reduced precision, enabling use on consumer-grade machines or enterprise hardware.

Software frameworks like llama.cpp, Jan, and LM Studio simplify deployment, making local AI accessible to non-engineers and professionals across industries.
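
As an illustration, the sketch below uses the llama-cpp-python bindings (one common wrapper around llama.cpp) to load a quantised GGUF model and run a prompt entirely on local hardware. The model path and generation parameters are placeholders, not a recommendation of any particular model.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path is a placeholder: any quantised GGUF file downloaded to disk will do.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window kept modest for consumer hardware
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# Everything below runs on the local machine; no data leaves the device.
output = llm(
    "Summarise the GDPR principle of data minimisation in two sentences.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```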

Local models are suitable for personalised tasks, learning, coding assistance, and experimentation, although cloud models remain stronger for large-scale enterprise applications.

As tools and model quality improve, running AI on personal devices may become a standard alternative, giving users more control over cost, privacy, and performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce customers hit by OAuth token breach

Security researchers have warned Salesforce customers after hackers stole data by exploiting OAuth access tokens linked to the Salesloft Drift integration, a breach that exposes critical weaknesses in how SaaS tools connect.

Google’s Threat Intelligence Group (GTIG) reported that the threat actor UNC6395 used the tokens to infiltrate hundreds of Salesforce environments, exporting large volumes of sensitive information. Stolen data included AWS keys, passwords, and Snowflake tokens.

Experts warn that compromised SaaS integrations represent a critical blind spot, since attackers inherit the same permissions as the trusted app and can often bypass multifactor authentication. Investigations are ongoing to determine whether connected systems, such as AWS or VPNs, were also breached.
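
To illustrate why a stolen token is so dangerous, the hedged sketch below shows how an OAuth bearer token authorises a Salesforce-style REST call: the server sees only a valid token, so no password or second factor is ever requested. The instance URL, API version, query, and token value are placeholders for illustration, not details from the breach.

```python
# Illustrative only: how an OAuth bearer token authorises an API call.
# Whoever holds the token inherits the connected app's permissions;
# the request never triggers a password prompt or an MFA challenge.
import requests

INSTANCE = "https://example.my.salesforce.com"  # hypothetical tenant
TOKEN = "placeholder-oauth-access-token"        # placeholder value

resp = requests.get(
    f"{INSTANCE}/services/data/v58.0/query",
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": "SELECT Id, Name FROM Account LIMIT 10"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["records"])
```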

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude chatbot misused in unprecedented cyber extortion case

A hacker exploited Anthropic’s Claude chatbot to automate one of the most extensive AI-driven cybercrime operations yet recorded, targeting at least 17 companies across multiple sectors, the firm revealed.

According to Anthropic’s report, the attacker used Claude Code to identify vulnerable organisations, generate malicious software, and extract sensitive files, including defence data, financial records, and patients’ medical information.

The chatbot then sorted the stolen material, identified leverage for extortion, calculated realistic bitcoin demands, and even drafted ransom notes and extortion emails on behalf of the hacker.

Victims included a defence contractor, a financial institution, and healthcare providers. Extortion demands reportedly ranged from $75,000 to over $500,000, although it remains unclear how much was actually paid.

Anthropic declined to disclose the companies affected but confirmed new safeguards are in place. The firm warned that AI lowers the barrier to entry for sophisticated cybercrime, making such misuse increasingly likely.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Fragmenting digital identities with aliases offers added security

People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.

Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.

Each alias also acts as a tripwire: if one is compromised or starts receiving spam, it reveals exactly which service leaked it and can simply be disabled, cutting off the problem at its source.

Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.

Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.
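
One way to add that structure is to derive a distinct alias per service, so a leaked address immediately identifies the breached sender. The sketch below is a hypothetical illustration using an HMAC over the service name; the secret and forwarding domain are placeholders, and real alias services (or simple plus-addressing) work differently under the hood.

```python
# Hypothetical illustration: derive one alias per service from a private secret.
# If "netflix.k3v9q2@alias.example" starts receiving spam, the leak source is obvious,
# and that single alias can be disabled without touching any other account.
import hashlib
import hmac

SECRET = b"keep-this-private"    # placeholder secret
ALIAS_DOMAIN = "alias.example"   # placeholder forwarding domain

def alias_for(service: str) -> str:
    tag = hmac.new(SECRET, service.lower().encode(), hashlib.sha256).hexdigest()[:6]
    return f"{service.lower()}.{tag}@{ALIAS_DOMAIN}"

for service in ("netflix", "airline-loyalty", "newsletter"):
    print(alias_for(service))
```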

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic updates Claude’s policy with new data training choices

The US AI startup has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.

Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.

According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.

Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.

The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.

Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.

The company emphasised that users remain in control of their data and that manually deleted conversations will not be used for training.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Espionage fears rise as TAG-144 evolves techniques

A threat group known as TAG-144 has stepped up cyberattacks on South American government agencies, researchers have warned.

The group, also called Blind Eagle and APT-C-36, has been active since 2018 and is linked to espionage and extortion campaigns. Recent activity shows a sharp rise in spear-phishing, which often uses spoofed government email accounts to deliver remote access trojans.

Analysts say the group has shifted towards more advanced methods, embedding malware inside image files through steganography. Payloads are then extracted in memory, allowing attackers to evade antivirus software and maintain access to compromised systems.
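
As a benign illustration of the technique described, the sketch below hides and then recovers a short text payload in the least-significant bits of an image's pixels using Pillow, keeping the recovered bytes in memory only. It is a simplified teaching example of steganography, not a reconstruction of TAG-144's actual tooling.

```python
# Benign LSB steganography demo with Pillow (pip install pillow), for illustration only.
# A short payload is written into the least-significant bit of each pixel channel,
# then read back in memory, mirroring how image-borne payloads avoid touching disk.
from PIL import Image

def hide(img: Image.Image, payload: bytes) -> Image.Image:
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]  # LSB-first bit stream
    flat = [c for px in img.convert("RGB").getdata() for c in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit  # overwrite the lowest bit of each channel
    out = Image.new("RGB", img.size)
    out.putdata([tuple(flat[j:j + 3]) for j in range(0, len(flat), 3)])
    return out

def extract(img: Image.Image, length: int) -> bytes:
    flat = [c for px in img.convert("RGB").getdata() for c in px]
    return bytes(
        sum((flat[i * 8 + j] & 1) << j for j in range(8)) for i in range(length)
    )

cover = Image.new("RGB", (64, 64), "white")   # stand-in cover image
stego = hide(cover, b"demo payload")
print(extract(stego, len(b"demo payload")))    # -> b'demo payload'
```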

Colombian government institutions have been hit hardest, with stolen credentials and sensitive data raising concerns over both financial and national security risks. Security experts warn that TAG-144’s evolving tactics blur the line between organised crime and state-backed espionage.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!