Privacy concerns lead India to withdraw cyber safety app mandate

India has scrapped its order requiring smartphone manufacturers to pre-install the state-run Sanchar Saathi cyber safety app. The directive, which faced widespread criticism, had raised concerns over privacy and potential government surveillance.

Smartphone makers, including Apple and Samsung, reportedly resisted the order, highlighting that it was issued without prior consultation and ran counter to user privacy norms. The government argued the app was necessary to verify handset authenticity.

So far, the Sanchar Saathi app has attracted 14 million users, who report around 2,000 frauds daily, with a sharp spike of 600,000 new registrations in a single day. Despite these figures, the mandatory pre-installation rule provoked intense backlash from cybersecurity experts and digital rights advocates.

India’s Minister of Communications, Jyotiraditya Scindia, dismissed concerns about surveillance, insisting that the app does not enable snooping. Digital advocacy groups welcomed the withdrawal but called for complete legal clarity on the revised Cyber Security Rules, 2024.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by phone calls from callers posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Safran and UAE institute join forces on AI geospatial intelligence

Safran.AI, the AI division of Safran Electronics & Defence, and the UAE’s Technology Innovation Institute have formed a strategic partnership to develop a next-generation agentic AI geospatial intelligence platform.

The collaboration aims to transform high-resolution satellite imagery into actionable intelligence for defence operations.

The platform will combine human oversight with advanced geospatial reasoning, enabling operators to interpret and respond to emerging situations faster and with greater precision.

Key initiatives include agentic reasoning systems powered by large language models, a mission-specific AI detector factory, and an autonomous multimodal fusion engine for persistent, all-weather monitoring.

Under the agreement, a joint team operating across France and the UAE will accelerate innovation within a unified operational structure.

Leaders from both organisations emphasise that the alliance strengthens sovereign geospatial intelligence capabilities and lays the foundations for decision intelligence in national security.

Europe boosts defence with Leonardo’s Michelangelo Dome

Italian defence company Leonardo has revealed plans for the ‘Michelangelo Dome’, an AI-powered shield designed to protect cities and critical infrastructure from missile attacks and drone swarms. The system will integrate multiple defence platforms and is expected to be fully operational by the end of the decade.

The project follows a surge in European defence spending amid geopolitical tensions and uncertainty over US support.

Leonardo’s CEO, Roberto Cingolani, highlighted the system’s open architecture, which allows compatibility with other nations’ defence networks, and emphasised the need for innovation and international cooperation.

European defence companies are increasingly investing in integrated command systems rather than standalone hardware.

Private investors are also backing startups developing autonomous and AI-driven defence technologies, creating competition for traditional primes such as Leonardo, BAE Systems, Rheinmetall, and Thales.

Asahi faces major disruption after cyberattack

Growing concern surrounds Asahi Group after the company acknowledged a possible leak of nearly two million personal records linked to a cyberattack that began in late September.

Company president Atsushi Katsuki apologised publicly and confirmed that operations remain heavily disrupted as logistics teams work towards full recovery by February.

Investigators found that attackers infiltrated network equipment at one of Asahi’s facilities, obtained administrator credentials and accessed servers repeatedly.

Katsuki noted that the breach exposed significant vulnerabilities, although he stressed that improvements had already been implemented and no ransom had been paid.

Production and shipments across most domestic factories were halted, forcing employees to handle orders manually and slowing the resumption of supply lines.

Competitors Kirin, Suntory and Sapporo have struggled to meet unexpected demand, triggering shipping limits and suspensions on some products across the wider beer industry.

Underground AI tools marketed for hacking raise alarms among cybersecurity experts

Cybersecurity researchers say cybercriminals are turning to a growing underground market of customised large language models designed to support low-level hacking tasks.

A new report from Palo Alto Networks’ Unit 42 describes how dark web forums promote jailbroken, open-source and bespoke AI models as hacking assistants or dual-use penetration testing tools, often sold via monthly or annual subscriptions.

Some appear to be repurposed commercial models trained on malware datasets and maintained by active online communities.

These models help users scan for vulnerabilities, write scripts, encrypt or exfiltrate data and generate exploit or phishing code, tasks that can support both attackers and defenders.

Unit 42’s Andy Piazza compared them to earlier dual-use tools, such as Metasploit and Cobalt Strike, which were developed for security testing but are now widely abused by criminal groups. He warned that AI now plays a similar role, lowering the expertise needed to launch attacks.

One example is a new version of WormGPT, a jailbroken LLM that resurfaced on underground forums in September after first appearing in 2023.

The updated ‘WormGPT 4’ is marketed as an unrestricted hacking assistant, with lifetime access reportedly starting at around $220 and an option to buy the complete source code. Researchers say it signals a shift from simple jailbreaks to commercialised, specialised tools that train AI for cybercrime.

Another model, KawaiiGPT, is available for free on GitHub and brands itself as a playful ‘cyber pentesting’ companion while generating malicious content.

Unit 42 calls it an entry-level but effective malicious LLM, with a casual, friendly style that masks its purpose. Around 500 contributors support and update the project, making it easier for non-experts to use.

Piazza noted that internal tests suggest much of the malware generated by these tools remains detectable and less advanced than code seen in some recent AI-assisted campaigns. The wider concern, he said, is that such models make hacking more accessible by translating technical knowledge into simple prompts.

Users no longer need to know jargon like ‘lateral movement’ and can instead ask everyday questions, such as how to find other systems on a network, and receive ready-made scripts.

VPN credential theft emerges as top ransomware entry point

Cyber Express reports that compromised VPN credentials are now the most common method for ransomware attackers to gain entry. In Q3 2025, nearly half of all ransomware incidents began with valid, stolen VPN logins.

The analysis, based on data from Beazley Security (the cyber security services arm of insurer Beazley), reveals that threat actors are increasingly exploiting remote access tools rather than relying solely on software exploits or phishing.

Notably, VPN misuse accounted for more initial access than social engineering, supply chain attacks or remote desktop credential compromises.

One contributing factor is that many organisations do not enforce multi-factor authentication (MFA) or maintain strict access controls for VPN accounts. Cyber Express highlights that this situation underscores the ‘critical need’ for MFA and for firms to monitor for credential leaks on the dark web.

The report also mentions specific ransomware groups such as Akira, Qilin and INC, which are known to exploit compromised VPN credentials, often via brute-force attacks or credential stuffing.

From a digital-security policy standpoint, the trend has worrying implications. It shows how traditional perimeter security (like VPNs) is under pressure, and reinforces calls for zero-trust architectures, tighter access governance and proactive credential monitoring.

Kremlin launches new push for domestic AI development

Russian President Vladimir Putin has ordered the creation of a national task force to accelerate the development of domestic generative AI systems, arguing that homegrown models are essential to safeguarding the country’s technological and strategic sovereignty. Speaking at AI Journey, Russia’s flagship AI forum, he warned that foreign large language models shape global information flows and can influence entire populations, making reliance on external technologies unacceptable.

Putin said the new task force will prioritise expanding data-centre infrastructure and securing reliable energy supplies, including through small-scale nuclear power stations. Russia still trails global leaders like the United States and China, but local companies have produced notable systems such as Sberbank’s GigaChat and Yandex’s YandexGPT.

Sberbank unveiled a new version of GigaChat and showcased AI-powered tools, ranging from humanoid robots to medical-scanning ATMs. However, recent public demonstrations have drawn unwanted attention, including an incident in which a Russian AI robot toppled over on stage.

The Kremlin aims for AI technologies to contribute more than 11 trillion roubles ($136 billion) to Russia’s economy by 2030. Putin urged state bodies and major companies to adopt AI more aggressively while cautioning against overly strict regulation.

However, he stressed that only Russian-made AI systems should be used for national security to prevent sensitive data from flowing abroad. Western sanctions, which restrict access to advanced hardware, particularly microchips, continue to hinder Russia’s ambitions.

The push for domestic AI comes as Ukraine warns that Russia is developing a new generation of autonomous, AI-driven drones capable of operating in coordinated swarms and striking targets up to 62 miles away, underscoring the growing military stakes of the AI race.

Azure weathers record 15.7 Tbps cloud DDoS attack

According to Microsoft, Azure was hit on 24 October 2025 by a massive multi-vector DDoS attack that peaked at 15.72 terabits per second and unleashed 3.64 billion packets per second on a single endpoint.

The attack was traced to the Aisuru botnet, a Mirai-derived IoT botnet. More than 500,000 unique IP addresses, mostly compromised residential devices, participated in the assault. UDP floods launched from random source ports made the attack particularly potent, while the minimal use of source-address spoofing made the traffic easier to trace back to the originating networks.

Azure’s automated DDoS Protection infrastructure handled the traffic surge, filtering out malicious packets in real time and keeping customer workloads online.

From a security-policy viewpoint, this incident underscores how IoT devices continue to fuel some of the biggest cyber threats, and how major cloud platforms must scale defences rapidly to cope.

Anthropic uncovers a major AI-led cyberattack

The US AI company Anthropic has revealed details of the first known cyber espionage operation largely executed by an autonomous AI system.

Suspicious activity detected in September 2025 led to an investigation that uncovered an attack framework, which used Claude Code as an automated agent to infiltrate about thirty high-value organisations across technology, finance, chemicals and government.

The attackers relied on recent advances in model intelligence, agency and tool access.

By breaking tasks into small prompts and presenting Claude as a defensive security assistant instead of an offensive tool, they bypassed safeguards and pushed the model to analyse systems, identify weaknesses, write exploit code and harvest credentials.

The AI completed most of the work, requiring human direction at only a few decision points, and operated at a scale and speed that human hackers would struggle to match.

Anthropic responded by banning accounts, informing affected entities and working with authorities as evidence was gathered. The company argues that the case shows how easily sophisticated operations can now be carried out by less-resourced actors who use agentic AI instead of traditional human teams.

Errors such as hallucinated credentials remain a limitation, yet the attack marks a clear escalation in capability and ambition.

The firm maintains that the same model abilities exploited by the attackers are needed for cyber defence. Greater automation in threat detection, vulnerability analysis and incident response is seen as vital.

Safeguards, stronger monitoring and wider information sharing are presented as essential steps for an environment where adversaries are increasingly empowered by autonomous AI.