Chatham House analyst targeted in phishing attack

Chatham House expert Keir Giles has been targeted by a sophisticated spear-phishing campaign with suspected ties to Russian intelligence.

The cyber operation impersonated a senior official at the US State Department and attempted to extract sensitive credentials under the guise of a legitimate diplomatic consultation.

The incident, which took place in May 2025, was investigated by Google’s Threat Intelligence Group (GTIG) and Citizen Lab. It has been linked to a threat actor tracked as UNC6293, possibly associated with APT29—an espionage group believed to be backed by Russia’s Foreign Intelligence Service (SVR).

Giles received an email from an individual claiming to be ‘Claudie S. Weber’, a non-existent official at the US Department of State. The message invited him to a meeting to discuss ‘recent developments’, a type of request not uncommon in his line of work.

Although the attacker wrote from a Gmail address, they CC’d several fake @state.gov addresses to lend the message authenticity. According to Citizen Lab, the US State Department’s email servers do not bounce messages sent to invalid addresses, so the fabricated recipients went unnoticed.

The tone of the message, coupled with evasive language, led investigators to suspect that the attackers may have employed a large language model to generate the email content.

While the first message contained no direct malware, a later email included a PDF instructing Giles to create an app-specific password (ASP) for accessing a supposed government platform. In reality, this would have handed the attackers full access to his Gmail account.

Although Giles followed the instructions, he used a different Gmail account from the one targeted, which likely limited the damage. After ten further email exchanges, he shared details of the attempted attack publicly, warning that the stolen material could be altered and leaked as part of a disinformation campaign.

He noted that the attackers’ patient approach made the scam appear more plausible. Citizen Lab confirmed the threat actor’s ability to adapt based on Giles’ replies, avoiding pressure tactics and instead suggesting future collaboration.

Google ultimately blocked the offending Gmail account and secured the affected inbox. GTIG later disclosed a broader campaign, including another incident themed around Ukraine and Microsoft, beginning in April 2025.

In response, GTIG advised high-risk users to avoid app-specific passwords altogether, particularly when enrolled in the Advanced Protection Program (APP). Other recommendations included promptly revoking unused ASPs, monitoring account activity, and enabling advanced security measures.
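
For Google Workspace administrators, part of that advice can be automated: the Admin SDK Directory API exposes an asps resource for listing and revoking a user’s app-specific passwords. The sketch below uses the Node googleapis client; the credentials file, admin address and target user are placeholders, and individual (non-Workspace) Gmail users still need to revoke ASPs manually from their account security settings.

```typescript
import { google } from "googleapis";

// Service account with domain-wide delegation for the Directory API
// security scope, impersonating a Workspace admin (all placeholders).
const auth = new google.auth.GoogleAuth({
  keyFile: "service-account.json",
  scopes: ["https://www.googleapis.com/auth/admin.directory.user.security"],
  clientOptions: { subject: "admin@example.com" },
});

const directory = google.admin({ version: "directory_v1", auth });

async function revokeAllAsps(userKey: string): Promise<void> {
  // List every app-specific password issued for the account.
  const { data } = await directory.asps.list({ userKey });
  for (const asp of data.items ?? []) {
    console.log(`Revoking ASP ${asp.codeId} (${asp.name})`);
    // Revoke the password by its numeric code ID.
    await directory.asps.delete({ userKey, codeId: asp.codeId! });
  }
}

revokeAllAsps("user@example.com").catch(console.error);
```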

The case underscores the evolving tactics of state-aligned cyber actors, who now combine social engineering with AI and deep reconnaissance to breach high-value targets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there is no clear indication during setup that chats will be made public unless the relevant setting is manually changed.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous, often read as referring to a user’s private chat history rather than to publicly visible content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing to Facebook and Instagram, and delete previous interactions.

Critics argue that the app’s lack of clarity shifts the burden onto users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.
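
Huntress’s specific indicators are not reproduced here, but a common first triage step on a Mac suspected of this kind of compromise is reviewing the launch agent and daemon folders such implants typically use for persistence. The generic sketch below flags recently modified entries; the 30-day window is an arbitrary assumption, not a detail from the report.

```typescript
import { readdirSync, statSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Standard macOS persistence locations for per-user and system-wide
// launch agents and daemons.
const locations = [
  join(homedir(), "Library/LaunchAgents"),
  "/Library/LaunchAgents",
  "/Library/LaunchDaemons",
];

const cutoff = Date.now() - 30 * 24 * 60 * 60 * 1000; // last 30 days

for (const dir of locations) {
  let entries: string[];
  try {
    entries = readdirSync(dir);
  } catch {
    continue; // location may not exist or may be unreadable
  }
  for (const name of entries) {
    const path = join(dir, name);
    const { mtimeMs } = statSync(path);
    if (mtimeMs > cutoff) {
      // Recently changed plists deserve manual review.
      console.log(`${new Date(mtimeMs).toISOString()}  ${path}`);
    }
  }
}
```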

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-generated photo falsely claims to show a downed Israeli jet

After Iranian state media claimed that Iranian forces had shot down two Israeli fighter jets, an image circulated online falsely purporting to show the wreckage of an F-35.

The photo, which shows a large jet that has crash-landed in a desert, quickly spread across platforms like Threads and South Korean forums, including Aagag and Ruliweb. An Israeli official dismissed the shootdown claim as ‘fake news’.

The image’s caption in Korean read: ‘The F-35 shot down by Iran. Much bigger than I thought.’ However, a detailed AFP analysis found the photo contained several hallmarks of AI generation.

People near the aircraft appear the same size as buses, and one vehicle appears to merge with the road — visual anomalies common in synthetic images.

In addition to size distortions, the aircraft’s markings did not match those used on actual Israeli F-35s. Lockheed Martin specifications confirm the F-35 is just under 16 metres long, unlike the oversized version shown in the image.

Furthermore, the wing insignia in the image differed from the Israeli Air Force’s authentic emblem.

Amid escalating tensions between Iran and Israel, such misinformation continues to spread rapidly. Although AI-generated content is becoming more sophisticated, inconsistencies in scale, symbols, and composition remain key indicators of digital fabrication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Massive data leak exposes 16 billion login credentials from Google, Facebook, and more

One of the largest-ever leaks of stolen login data has come to light, exposing more than 16 billion records across widely used services, including Facebook, Google, Telegram, and GitHub. The breach, uncovered by researchers at Cybernews, highlights a growing threat to individuals and organisations.

The exposed data reportedly originated from infostealer malware, previous leaks, and credential-stuffing tools. A total of 30 separate datasets were identified, some containing over 3.5 billion entries.

The datasets were briefly accessible online through unsecured cloud storage before being removed. Despite the swift takedown, the data had already been collected and analysed.

Experts have warned that the breach could lead to identity theft, phishing, and account takeovers. Smaller websites and users with poor cybersecurity practices are especially vulnerable. Many users continue to reuse passwords or minor variations of them, increasing the risk of exploitation.
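
One practical defence against reuse is screening passwords against the Have I Been Pwned ‘Pwned Passwords’ corpus, whose k-anonymity range API means only the first five characters of the password’s SHA-1 hash ever leave the machine. A minimal sketch, assuming Node 18+ for the global fetch:

```typescript
import { createHash } from "node:crypto";

// Returns how many times a password appears in known breach corpora.
// Only the first 5 hex characters of the SHA-1 hash are sent to the API.
async function pwnedCount(password: string): Promise<number> {
  const sha1 = createHash("sha1").update(password).digest("hex").toUpperCase();
  const prefix = sha1.slice(0, 5);
  const suffix = sha1.slice(5);

  const res = await fetch(`https://api.pwnedpasswords.com/range/${prefix}`);
  if (!res.ok) throw new Error(`HIBP request failed: ${res.status}`);

  // The response is plain text: one "SUFFIX:COUNT" pair per line.
  for (const line of (await res.text()).split("\n")) {
    const [candidate, count] = line.trim().split(":");
    if (candidate === suffix) return Number(count);
  }
  return 0; // not found in any known breach
}

pwnedCount("correct horse battery staple").then((n) =>
  console.log(n > 0 ? `Seen ${n} times in breaches` : "Not found"),
);
```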

While the leak is severe, users employing two-factor authentication (2FA), password managers, or passkeys are less likely to be affected.

Passkeys, increasingly adopted by companies like Google and Apple, offer a phishing-resistant login method that bypasses the need for passwords altogether.
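
That phishing resistance comes from WebAuthn, the standard underlying passkeys, which scopes each key pair to the origin that registered it; a look-alike domain simply cannot request a valid signature. A simplified browser-side registration sketch follows, with placeholder values where a real flow would use server-supplied challenge and user identifiers.

```typescript
// Simplified passkey registration. In practice the challenge and user ID
// come from the server, and the response is sent back for verification.
async function registerPasskey(): Promise<void> {
  const credential = (await navigator.credentials.create({
    publicKey: {
      // Server-generated random challenge (placeholder bytes here).
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: "Example Service", id: "example.com" },
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // server-assigned ID
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },   // ES256
        { type: "public-key", alg: -257 }, // RS256
      ],
      authenticatorSelection: {
        residentKey: "required",      // discoverable credential, i.e. a passkey
        userVerification: "required", // biometric or device PIN
      },
    },
  })) as PublicKeyCredential;

  // The attestation response goes to the server, which stores the public
  // key; the private key never leaves the authenticator.
  console.log("New credential ID:", credential.id);
}
```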

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Episource data breach impacts patients at Sharp Healthcare

Episource, a UnitedHealth Group-owned health analytics firm, has confirmed that patient data was compromised during a ransomware attack earlier this year.

The breach affected customers, including Sharp Healthcare and Sharp Community Medical Group, who have started notifying impacted patients. Although electronic health records and patient portals remained untouched, sensitive data such as health plan details, diagnoses and test results were exposed.

The cyberattack, which occurred between 27 January and 6 February, involved unauthorised access to Episource’s internal systems.

A forensic investigation verified that cybercriminals viewed and copied files containing personal information, including insurance plan data, treatment plans, and medical imaging. Financial details and payment card data, however, were mostly unaffected.

Sharp Healthcare confirmed that it was informed of the breach on 24 April and has since worked closely with Episource to identify which patients were impacted.

Compromised information may include names, addresses, insurance ID numbers, doctors’ names, prescribed medications, and other protected health data.

The breach follows a troubling trend of ransomware attacks targeting healthcare-related businesses, including Change Healthcare in 2024, which disrupted services for months. Comparitech reports at least three confirmed ransomware attacks on healthcare firms already in 2025, with 24 more suspected.

Given the scale of patient data involved, experts warn of growing risks tied to third-party healthcare service providers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UBS employee data leaked after Chain IQ ransomware attack

UBS Group AG has confirmed a serious data breach affecting around 130,000 of its employees, following a cyberattack on its third-party supplier, Chain IQ Group AG.

The exposed information included employee names, emails, phone numbers, roles, office locations, and preferred languages. No client data has been impacted, according to UBS.

Chain IQ, a procurement services firm spun off from UBS in 2013, was reportedly targeted by the cybercrime group World Leaks, previously known as Hunters International.

Unlike traditional ransomware operators, World Leaks avoids encryption and instead steals data, threatening public release if ransoms are not paid.

While Chain IQ has acknowledged the breach, it has not disclosed the extent of the stolen data or named all affected clients. Its client list includes Swiss Life, AXA, FedEx, IBM, KPMG, Swisscom, and Pictet; so far, only Pictet has confirmed it was affected.

Cybersecurity experts warn that the breach may have long-term implications for the Swiss banking sector. Leaked employee data could be exploited for impersonation, fraud, phishing scams, or even blackmail.

The increasing availability of generative AI may further amplify the risks through voice and video impersonation, potentially aiding in money laundering and social engineering attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ryuk ransomware hacker extradited to US after arrest in Ukraine

A key member of the infamous Ryuk ransomware gang has been extradited to the US after his arrest in Kyiv, Ukraine.

The 33-year-old man was detained in April 2025 at the request of the FBI and arrived in the US on 18 June to face multiple charges.

The suspect played a critical role within Ryuk by gaining initial access to corporate networks, which he then passed on to accomplices who stole data and launched ransomware attacks.

Ukrainian authorities identified him during a larger investigation into ransomware groups like LockerGoga, Dharma, Hive, and MegaCortex that targeted companies across Europe and North America.

According to Ukraine’s National Police, forensic analysis established that the man was responsible for locating security flaws in enterprise networks.

Information gathered by the hacker allowed others in the gang to infiltrate systems, steal data, and deploy ransomware payloads that disrupted various industries, including healthcare, during the COVID pandemic.

Ryuk operated from 2018 until mid-2020 before rebranding as the notorious Conti gang, which later fractured into several smaller but still active groups. Researchers estimate that Ryuk alone collected over $150 million in ransom payments before shutting down.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps Google curb scams and deepfakes in India

Google has introduced its Safety Charter for India to combat rising online fraud, deepfakes and cybersecurity threats. The charter outlines a collaborative plan focused on user safety, responsible AI development and protection of digital infrastructure.

AI-powered measures have already helped Google detect 20 times more scam-related pages, block over 500 million scam messages monthly, and issue 2.5 billion suspicious link warnings. Its ‘Digikavach’ programme has reached over 177 million Indians with fraud prevention tools and awareness campaigns.

Google Pay alone averted financial fraud worth ₹13,000 crore (roughly US$1.5 billion) in 2024, while Google Play Protect stopped nearly 6 crore (60 million) high-risk app installations. These achievements reflect the company’s ‘AI-first, secure-by-design’ strategy for early threat detection and response.

The tech giant is also collaborating with IIT-Madras on post-quantum cryptography and privacy-first technologies. Through language models like Gemini and watermarking initiatives such as SynthID, Google aims to build trust and inclusion across India’s digital ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT now supports MCP for business data access, but safety risks remain

OpenAI has officially enabled support for Anthropic’s Model Context Protocol (MCP) in ChatGPT, allowing businesses to connect their internal tools directly to the chatbot through Deep Research.

The development enables employees to retrieve company data from previously siloed systems, offering real-time access to documents and search results via custom-built MCP servers.

Adopting MCP, an open industry protocol recently embraced by OpenAI, Google and Microsoft, opens new possibilities but also introduces security risks.

OpenAI advises users to avoid third-party MCP servers unless hosted by the official service provider, warning that unverified connections may carry prompt injections or hidden malicious directives. Users are urged to report suspicious activity and avoid exposing sensitive data during integration.

To connect tools, developers must set up an MCP server and create a tailored connector within ChatGPT, complete with detailed instructions. The feature is now live for ChatGPT Enterprise, Team and Edu users, who can share the connector across their workspace as a trusted data source.
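
For a sense of what setting up an MCP server involves, the sketch below uses Anthropic’s TypeScript SDK (@modelcontextprotocol/sdk) to expose a single search tool over stdio; the tool name, schema and the searchInternalDocs stub are illustrative stand-ins rather than OpenAI’s required connector interface.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Stand-in for a real internal data source (hypothetical helper).
async function searchInternalDocs(query: string): Promise<string[]> {
  return [`No real backend here; query was: ${query}`];
}

const server = new McpServer({ name: "internal-docs", version: "0.1.0" });

// Register one tool; the client calls it by name with validated arguments.
server.tool(
  "search",
  { query: z.string().describe("Free-text search query") },
  async ({ query }) => {
    const hits = await searchInternalDocs(query);
    // MCP tool results are returned as typed content blocks.
    return { content: hits.map((text) => ({ type: "text" as const, text })) };
  },
);

// stdio suits local testing; a remote connector for ChatGPT would sit
// behind an HTTP transport instead.
await server.connect(new StdioServerTransport());
```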

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!