AI helps Google curb scams and deepfakes in India

Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.

AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘Digikavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.

Google Pay alone averted financial fraud worth ₹13,000 crore (about $1.5 billion) in 2024, while Google Play Protect stopped nearly 6 crore (60 million) high-risk app installations. These achievements reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.

The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP — an open industry protocol recently embraced by OpenAI, Google and Microsoft — opens new possibilities and presents security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
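
By way of illustration only, the following sketch shows what a minimal MCP server of this kind could look like, assuming the official MCP Python SDK (installed with pip install mcp) and its FastMCP helper; the connector name, the toy document store and the search/fetch tools are hypothetical stand-ins for a company’s real internal systems, not OpenAI’s reference implementation.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK: pip install mcp).
# The server name, tools and data below are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("acme-docs")  # hypothetical connector name

# Toy in-memory "document store" standing in for an internal, siloed system.
DOCS = {
    "onboarding": "Steps for onboarding new employees...",
    "expenses": "How to file an expense report...",
}

@mcp.tool()
def search(query: str) -> list[str]:
    """Return the IDs of documents whose text mentions the query."""
    q = query.lower()
    return [doc_id for doc_id, text in DOCS.items() if q in text.lower()]

@mcp.tool()
def fetch(doc_id: str) -> str:
    """Return the full text of a single document by ID."""
    return DOCS.get(doc_id, "Document not found.")

if __name__ == "__main__":
    # Runs over stdio by default; a remote ChatGPT connector would typically
    # sit behind a company-controlled HTTP endpoint instead.
    mcp.run()
```

In practice, a workspace admin would then register the server’s HTTPS endpoint as a custom connector in ChatGPT, keeping it in-house in line with OpenAI’s warning against unverified third-party servers.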

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Salt Typhoon hackers breached Viasat during 2024 presidential campaign

According to Bloomberg News, satellite communications firm Viasat Inc. was among the targets of the China-linked cyberespionage operation known as Salt Typhoon, which coincided with the 2024 US presidential campaign.

The breach, believed to have occurred last year, was discovered in 2025. Viasat confirmed it had investigated the incident in cooperation with an independent cybersecurity partner and relevant government authorities.

According to the company, the unauthorised access stemmed from a compromised device, though no evidence of customer impact has been found. ‘Viasat believes that the incident has been remediated and has not detected any recent activity related to this event,’ the firm stated, reaffirming its collaboration with United States officials.

Salt Typhoon, attributed to China by US intelligence, has previously been accused of breaching major telecom networks, including Verizon, AT&T and Lumen. Hackers allegedly gained full access to internal systems, enabling the geolocation of millions of users and the interception of phone calls.

In December 2024, US officials disclosed that a ninth telecom company had been compromised and confirmed that individuals connected to both Kamala Harris’s and Donald Trump’s presidential campaigns were targeted.

Chinese authorities have consistently rejected the claims, labelling them disinformation. Beijing maintains it ‘firmly opposes and combats cyberattacks and cybertheft in all forms’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK cyber agency warns AI will accelerate cyber threats by 2027

The UK’s National Cyber Security Centre has warned that integrating AI into national infrastructure creates a broader attack surface, raising concerns about an increased risk of cyber threats.

Its latest report outlines how AI may amplify the capabilities of threat actors, especially when it comes to exploiting known vulnerabilities more rapidly than ever before.

By 2027, AI-enabled tools are expected to significantly shorten the time between vulnerability disclosure and exploitation. This shift could pose a serious challenge for defenders, particularly within critical systems.

The NCSC notes that the risk of advanced cyber attacks will likely escalate unless organisations can keep pace with so-called ‘frontier AI’.

The centre also predicts a growing ‘digital divide’ between organisations that adapt to AI-driven threats and those left behind, a gap that could further weaken the UK’s overall cyber resilience. It is therefore urging decisive action to close that gap and reduce future risks.

NCSC operations director Paul Chichester said AI is expanding attack surfaces, increasing the volume of threats, and speeding up malicious activity. He emphasised that while these dangers are real, AI can strengthen the UK’s cyber defences.

Organisations are encouraged to adopt robust security practices using resources like the Cyber Assessment Framework, the 10 Steps to Cyber Security, and the new AI Cyber Security Code of Practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google warns against weak passwords amid £12bn in scam losses

Gmail users are being urged to upgrade their security as online scams continue to rise sharply, with cyber criminals stealing over £12 billion in the past year alone. Google is warning that simple passwords leave people vulnerable to phishing and account takeovers.

To combat the threat, users are encouraged to switch to passkeys or use ‘Sign in with Google’, both of which offer stronger protections through fingerprint, face ID or PIN verification. Over 60% of Baby Boomers and Gen X users still rely on weak passwords, increasing their exposure to attacks.

Despite the availability of secure alternatives, only 30% of users reportedly use them daily. Gen Z is leading the shift by adopting newer tools, bypassing outdated security habits altogether.

Google recommends adding 2-Step Verification for those unwilling to leave passwords behind. With scams growing more sophisticated, extra security measures are no longer optional; they are essential.
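
To make the passkey advice concrete, the toy sketch below models the public-key idea behind passkeys using the widely used cryptography package; it is not the WebAuthn API or Google’s implementation, and the origin strings and challenge flow are simplified assumptions.

```python
# Toy model of passkey-style (public-key) login; NOT the WebAuthn API.
# Requires the `cryptography` package: pip install cryptography
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device creates a key pair bound to one site origin,
# and only the public half is sent to the server.
ORIGIN = "https://accounts.example.com"  # hypothetical site
device_keys = {ORIGIN: Ed25519PrivateKey.generate()}
server_public_key = device_keys[ORIGIN].public_key()

# Login: the server issues a fresh random challenge for the device to sign.
challenge = os.urandom(32)

def sign_challenge(origin: str, challenge: bytes) -> bytes:
    """The 'device': sign only if a key exists for this exact origin."""
    key = device_keys.get(origin)
    if key is None:
        raise KeyError(f"no passkey registered for {origin}")
    return key.sign(origin.encode() + challenge)

# The genuine site verifies the signature against the stored public key.
signature = sign_challenge(ORIGIN, challenge)
try:
    server_public_key.verify(signature, ORIGIN.encode() + challenge)
    print("Login succeeds: signature matches the registered passkey.")
except InvalidSignature:
    print("Login rejected.")

# A look-alike phishing domain has no matching key, so there is no secret to steal.
try:
    sign_challenge("https://accounts.examp1e.com", challenge)
except KeyError as err:
    print("Phishing attempt fails:", err)
```

Unlike a password, there is no shared secret to type into a fake page, which is why passkeys and ‘Sign in with Google’ hold up better against phishing than even a strong password.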

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISOs warn AI-driven cyberattacks are rising, with DNS infrastructure at risk

A new report warns that chief information security officers (CISOs) are bracing for a sharp increase in cyberattacks as AI continues to reshape the global threat landscape. According to CSC’s report, 98% of CISOs expect rising attacks over the next three years, with domain infrastructure a key concern.

AI-powered domain generation algorithms (DGAs) have been flagged as a major threat by 87% of security leaders. Cybersquatting, DNS hijacking and DDoS attacks remain top risks, with nearly all CISOs expressing concern over bad actors’ increasing use of AI.
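
Since domain generation algorithms may be unfamiliar to some readers, the deliberately simplified sketch below illustrates the classic (pre-AI) idea: malware and its operator derive the same daily list of rendezvous domains from a shared seed, which is why blocklisting individual domains is so ineffective. The seed, label length and .com suffix are arbitrary choices for illustration, not drawn from any real malware family.

```python
# Toy illustration of a domain generation algorithm (DGA); not from any real malware.
# A shared seed lets an operator pre-register a few of the domains that
# compromised machines will look up on a given day.
import hashlib
from datetime import date

def generate_domains(seed: str, day: date, count: int = 10) -> list[str]:
    """Deterministically derive `count` pseudo-random .com domains for one day."""
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        # Map the first hex characters onto a plausible-looking 12-letter label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + ".com")
    return domains

if __name__ == "__main__":
    for domain in generate_domains(seed="example-campaign", day=date.today()):
        print(domain)
```

The AI-powered variants CISOs are worried about extend this idea by producing names that mimic legitimate branding, making them harder to spot with simple randomness checks.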

However, only 7% said they feel confident in defending against domain-based threats.

Concerns have also been raised about identity verification. Around 99% of companies worry their domain registrars fail to apply adequate Know Your Customer (KYC) policies, leaving them vulnerable to infiltration.

Meanwhile, half of organisations have not implemented or tested a formal incident response plan or adopted AI-driven monitoring tools.

Budget constraints continue to limit cybersecurity readiness. Despite the growing risks, only 7% of CISOs reported a significant increase in security budgets between 2024 and 2025. CSC’s Ihab Shraim warned that DNS infrastructure is a prime target and urged firms to act before facing technical and reputational fallout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Workplace deepfake abuse: What employers must know

Deepfake technology—AI-generated videos, images, and audio—has entered the workplace in alarming ways.

Once difficult to produce, deepfakes are now widely accessible and are being used to harass, impersonate, or intimidate employees. These synthetic media attacks can cause deep psychological harm, damage reputations, and expose employers to serious legal risks.

While US federal law hasn’t yet caught up, new laws such as the Take It Down Act and Florida’s Brooke’s Law require platforms to remove non-consensual deepfake content within 48 hours.

Meanwhile, employers could face claims under existing workplace laws if they fail to act on deepfake harassment. Inaction may lead to lawsuits for creating a hostile environment or for negligent oversight.

Most workplace policies still don’t mention synthetic media, a gap that creates blind spots, especially during investigations, where fake images or audio could wrongly influence decisions.

Employers need to shift how they assess evidence and protect both accused and accuser fairly. It’s time to update handbooks, train staff, and build clear response plans that include digital impersonation and deepfake abuse.

By treating deepfakes as a modern form of harassment instead of just a tech issue, organisations can respond faster, protect staff, and maintain trust. Proactive training, updated policies, and legal awareness will be crucial to workplace safety in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anubis ransomware threatens permanent data loss

A new ransomware threat known as Anubis is making waves in the cybersecurity world, combining file encryption with aggressive monetisation tactics and a rare file-wiping feature that prevents data recovery.

Victims discover their files renamed with the .anubis extension and are presented with a ransom note warning that stolen data will be leaked unless payment is made.

What sets Anubis apart is its ability to permanently erase file contents using a wipe command that reduces files to zero-byte shells. Although the filenames remain, the data inside is lost forever, rendering recovery impossible.
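
As a purely defensive illustration, and not a tool from any researcher cited here, the sketch below shows how a responder might triage a directory tree for the two artefacts described above: the .anubis extension and files whose contents have been reduced to zero bytes. The paths and output format are assumptions.

```python
# Defensive triage sketch: flag files showing the two artefacts described above --
# the .anubis extension, or contents reduced to zero bytes while the name survives.
import os
import sys

def scan(root: str) -> None:
    renamed, emptied = [], []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # unreadable file; skip rather than abort the scan
            if name.lower().endswith(".anubis"):
                renamed.append(path)
            elif size == 0:
                emptied.append(path)
    print(f"{len(renamed)} files carrying the .anubis extension")
    print(f"{len(emptied)} zero-byte files (possible wipe victims)")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Zero-byte files also occur legitimately, so this is only a triage signal; actual recovery still depends on offline backups taken before the attack.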

Researchers have flagged the destructive feature as highly unusual for ransomware; such wiping behaviour is typically seen in cyberespionage rather than in financially motivated attacks.

The malware also attempts to change the victim’s desktop wallpaper to reinforce the impact, although in current samples, the image file was missing. Anubis spreads through phishing emails and uses tactics like command-line scripting and stolen tokens to escalate privileges and evade defences.

It operates on a ransomware-as-a-service model, meaning less-skilled cybercriminals can rent and use it easily.

Security experts urge organisations to treat Anubis as more than a typical ransomware threat. Besides strong backup practices, firms are advised to improve email security, limit user privileges, and train staff to spot phishing attempts.

As attackers look to profit from stolen access and unrecoverable destruction, prevention becomes the only true line of defence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto scandal sparks no-confidence vote in Czech Parliament

The Czech government faced pressure as Parliament debated a no-confidence motion over a scandal tied to a Bitcoin donation. The Justice Ministry sold a crypto donation worth nearly 1 billion koruna ($47 million), triggering outrage over the donor’s criminal record for drug offences.

Justice Minister Pavel Blazek resigned in May, stating he wanted to avoid further damage to the coalition. Prime Minister Petr Fiala accepted his resignation and praised him for acting responsibly.

Eva Decroix, a fellow member of Blazek’s Civic Democratic Party, took over the post and announced an independent investigation into the ministry’s handling of the donation.

Opposition leaders, including Andrej Babis of the ANO movement, accused the government of possible money laundering. The organised crime unit is investigating, but the origin of the Bitcoin donation and the reason behind it remain unclear.

The four-party coalition still holds a majority, making it unlikely that the no-confidence motion will succeed. However, the affair arrives just months before crucial elections in October, with polls predicting a win for Babis.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hacktivists target Iran’s Bank Sepah in major cyberattack

Iran’s Bank Sepah has reportedly been hit by a cyberattack carried out by the hacktivist group Predatory Sparrow. The group announced on Tuesday that it had ‘destroyed all data’ at the bank, which is closely linked to the Islamic Revolutionary Guard Corps (IRGC) and Iran’s military.

Several Bank Sepah branches were closed, and customers reported being unable to access their accounts.

The attack coincided with broader banking disruptions in Iran, affecting services at Kosar and Ansar banks, both associated with military entities and subject to US sanctions.

Authorities in Iran have yet to publicly acknowledge the attack, though the IRGC-linked Fars news agency claimed the issues would be resolved in a few hours.

Predatory Sparrow said it targeted Bank Sepah for its alleged role in financing Iran’s missile and nuclear programmes and in helping the country circumvent international sanctions.

The group has previously claimed responsibility for attacks on Iranian steel plants and fuel stations and is widely believed by Tehran to receive foreign support, particularly from Israel.

Bank Sepah, one of the country’s oldest financial institutions, operates around 1,800 branches within Iran and maintains offices across Europe. The United States sanctioned the bank in 2019 following Washington’s withdrawal from the 2015 nuclear deal.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!