Google Gemini flaw lets hackers trick email summaries

Security researchers have identified a serious flaw in Google Gemini for Workspace that allows cybercriminals to hide malicious commands inside email content.

The attack involves embedding hidden HTML and CSS instructions, which Gemini processes when summarising emails instead of showing the genuine content.

Attackers use invisible text styling such as white-on-white fonts or zero font size to embed fake warnings that appear to originate from Google.

When users click Gemini’s ‘Summarise this email’ feature, these hidden instructions trigger deceptive alerts urging users to call fake numbers or visit phishing sites, potentially stealing sensitive information.

Unlike traditional scams, there is no need for links, attachments, or scripts—only crafted HTML within the email body. The vulnerability extends beyond Gmail, affecting Docs, Slides, and Drive, raising fears of AI-powered phishing beacons and self-replicating ‘AI worms’ across Google Workspace services.

Experts advise businesses to implement inbound HTML checks, LLM firewalls, and user training to treat AI summaries as informational only. Google is urged to sanitise incoming HTML, improve context attribution, and add visibility for hidden prompts processed by Gemini.

Security teams are reminded that AI tools now form part of the attack surface and must be monitored accordingly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia opens AI centre with global tech partners

Indonesia has inaugurated a National AI Centre of Excellence in Jakarta in partnership with Indosat Ooredoo Hutchison, NVIDIA and Cisco. The centre is designed to fast-track the adoption of AI and build digital talent to support Indonesia’s ambitions for its 2045 digital vision.

Deputy Minister Nezar Patria said the initiative will help train one million Indonesians in AI, networking and cybersecurity by 2027. Officials and industry leaders stressed the importance of human capability in maximising AI’s potential.

The centre will also serve as a hub for research and developing practical solutions through collaborations with universities and local communities. Indosat launched a related AI security initiative on the same day, highlighting national ambitions for digital resilience.

Executives at the launch said they hope the centre becomes a national movement that helps position Indonesia as a regional and global AI leader.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enhancing email security through multi-factor authentication

Many users overlook one critical security setting that can stop hackers in their tracks: multi-factor authentication (MFA). Passwords alone are no longer enough. Easy-to-remember passwords are insecure, and strong passwords are rarely memorable or widely reused.

Brute-force attacks and credential leaks are common, especially since many users repeat passwords across different platforms. MFA solves this by requiring a second verification form, usually from your phone or an authenticator app, to confirm your identity.

The extra step can block attackers, even if they have your password, because they still need access to your second device. Two-factor authentication (2FA) is the most common form of MFA. It combines something you know (your password) with something you have.

Many email providers, including Gmail, Outlook, and Proton Mail, now offer built-in 2FA options under account security settings. On Gmail, visit your Google Account, select Security, and enable 2-Step Verification. Use Google Authenticator instead of SMS for better safety.

Outlook.com users can turn on 2FA through their Microsoft account’s Security settings, using an authenticator app for code generation. Proton Mail allows you to scan a QR code with Google Authenticator after enabling 2FA under Account and Password settings.

Authenticator apps are preferred over SMS, as they are vulnerable to SIM-swapping and phishing-based interception. Adding MFA is a fast, simple way to strengthen your email security and avoid becoming a victim of password-related breaches.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vatican urges ethical AI development

At the AI for Good Summit in Geneva, the Vatican urged global leaders to adopt ethical principles when designing and using AI.

The message, delivered by Cardinal Pietro Parolin on behalf of Pope Leo XIV, warned against letting technology outpace moral responsibility.

Framing the digital age as a defining moment, the Vatican cautioned that AI cannot replace human judgement or relationships, no matter how advanced. It highlighted the risk of injustice if AI is developed without a commitment to human dignity and ethical governance.

The statement called for inclusive innovation that addresses the digital divide, stressing the need to reach underserved communities worldwide. It also reaffirmed Catholic teaching that human flourishing must guide technological progress.

Pope Leo XIV supported a unified global approach to AI oversight, grounded in shared values and respect for freedom. His message underscored the belief that wisdom, not just innovation, must shape the digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA 2015 expiry threatens private sector threat sharing

Congress has under 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 and avoid a regulatory setback. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated due to antitrust and data privacy concerns. CISA removed ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers steal $500K via malicious Cursor AI extension

A cyberattack targeting the Cursor AI development environment has resulted in the theft of $500,000 in cryptocurrency from a Russian developer. Despite strong security practices and a fresh operating system, the victim downloaded a malicious extension named ‘Solidity Language’ in June 2025.

Masquerading as a syntax highlighting tool, the fake extension exploited search rankings to appear more legitimate than actual alternatives. Once installed, the extension served as a dropper for malware rather than offering any development features.

It contacted a command-and-control server and began deploying scripts designed to check for remote desktop software and install backdoors. The malware used PowerShell scripts to install ScreenConnect, granting persistent access to the victim’s system through a relay server.

Securelist analysts found that the extension exploited Open VSX registry algorithms by publishing with a more recent update date. Further investigation revealed the same attack methods were used in other packages, including npm’s ‘solsafe’ and three VS Code extensions.

The campaign reflects a growing trend of supply chain attacks exploiting AI coding tools to distribute persistent, stealthy malware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biassed or incomplete data, two outcomes are likely: the system may overpay specific claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use fake Termius app to infect macOS devices

Hackers are bundling legitimate Mac apps with ZuRu malware and poisoning search results to lure users into downloading trojanized versions. Security firm SentinelOne reported that the Termius SSH client was recently compromised and distributed through malicious domains and fake downloads.

The ZuRu backdoor, originally detected in 2021, allows attackers to silently access infected machines and execute remote commands undetected. Attackers continue to target developers and IT professionals by trojanising trusted tools such as SecureCRT, Navicat, and Microsoft Remote Desktop.

Infected disk image files are slightly larger than legitimate ones due to embedded malicious binaries. Victims unknowingly launch malware alongside the real app.

The malware bypasses macOS code-signing protections by injecting a temporary developer signature into the compromised application bundle. The updated variant of ZuRu requires macOS Sonoma 14.1 or newer and supports advanced command-and-control functions using the open-source Khepri beacon.

The functions include file transfers, command execution, system reconnaissance and process control, with captured outputs sent back to attacker-controlled domains. The latest campaign used termius.fun and termius.info to host the trojanized packages. Affected users often lack proper endpoint security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WSIS+20: Inclusive ICT policies urged to close global digital divide

At the WSIS+20 High-Level Event in Geneva, Dr Hakikur Rahman and Dr Ranojit Kumar Dutta presented a sobering picture of global digital inequality, revealing that more than 2.6 billion people remain offline. Their session, marking two decades of the World Summit on the Information Society (WSIS), emphasised that affordability, poor infrastructure, and a lack of digital literacy continue to block access, especially for marginalised communities.

The speakers proposed a structured three-pillar framework — inclusion, ethics, and sustainability- to ensure that no one is left behind in the digital age.

The inclusion pillar advocated for universal connectivity through affordable broadband, multilingual content, and skills-building programs, citing India’s Digital India and Kenya’s Community Networks as examples of success. On ethics, they called for policies grounded in human rights, data privacy, and transparent AI governance, pointing to the EU’s AI Act and UNESCO guidelines as benchmarks.

The sustainability pillar highlighted the importance of energy-efficient infrastructure, proper e-waste management, and fair public-private collaboration, showcasing Rwanda’s green ICT strategy and Estonia’s e-residency program.

Dr Dutta presented detailed data from Bangladesh, showing stark urban-rural and gender-based gaps in internet access and digital literacy. While urban broadband penetration has soared, rural and female participation lags behind.

Encouraging trends, such as rising female enrollment in ICT education and the doubling of ICT sector employment since 2022, were tempered by low data protection awareness and a dire e-waste recycling rate of only 3%.

The session concluded with a call for coordinated global and regional action, embedding ethics and inclusion in every digital policy. The speakers urged stakeholders to bridge divides in connectivity, opportunity, access, and environmental responsibility, ensuring digital progress uplifts all communities.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Grok chatbot relies on Musk’s views instead of staying neutral

Grok, the AI chatbot owned by Elon Musk’s company xAI, appears to search for Musk’s personal views before answering sensitive or divisive questions.

Rather than relying solely on a balanced range of sources, Grok has been seen citing Musk’s opinions when responding to topics like Israel and Palestine, abortion, and US immigration.

Evidence gathered from a screen recording by data scientist Jeremy Howard shows Grok actively ‘considering Elon Musk’s views’ in its reasoning process. Out of 64 citations Grok provided about Israel and Palestine, 54 were linked to Musk.

Others confirmed similar results when asking about abortion and immigration laws, suggesting a pattern.

While the behaviour might seem deliberate, some experts believe it happens naturally instead of through intentional programming. Programmer Simon Willison noted that Grok’s system prompt tells it to avoid media bias and search for opinions from all sides.

Yet, Grok may prioritise Musk’s stance because it ‘knows’ its owner, especially when addressing controversial matters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!