Google Gemini flaw lets hackers trick email summaries

Security researchers have identified a serious flaw in Google Gemini for Workspace that allows cybercriminals to hide malicious commands inside email content.

The attack involves embedding hidden HTML and CSS instructions, which Gemini processes when summarising emails instead of showing the genuine content.

Attackers use invisible text styling such as white-on-white fonts or zero font size to embed fake warnings that appear to originate from Google.

When users click Gemini’s ‘Summarise this email’ feature, these hidden instructions trigger deceptive alerts urging users to call fake numbers or visit phishing sites, potentially stealing sensitive information.

Unlike traditional scams, there is no need for links, attachments, or scripts—only crafted HTML within the email body. The vulnerability extends beyond Gmail, affecting Docs, Slides, and Drive, raising fears of AI-powered phishing beacons and self-replicating ‘AI worms’ across Google Workspace services.

Experts advise businesses to implement inbound HTML checks, LLM firewalls, and user training to treat AI summaries as informational only. Google is urged to sanitise incoming HTML, improve context attribution, and add visibility for hidden prompts processed by Gemini.

Security teams are reminded that AI tools now form part of the attack surface and must be monitored accordingly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could save billions but healthcare adoption is slow

AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramatically. Estimates suggest widespread AI integration could save up to 360 billion dollars annually by accelerating diagnosis and reducing inefficiencies across the system.

Although tools like AI scribes, triage assistants, and scheduling systems are gaining ground, clinical adoption remains slow. Only a small percentage of doctors, roughly 12%, currently rely on AI for diagnostic decisions. This cautious rollout reflects deeper concerns about the risks associated with medical AI.

Challenges include algorithmic drift when systems are exposed to real-world conditions, persistent racial and ethnic biases in training data, and the opaque ‘black box’ nature of many AI models. Privacy issues also loom, as healthcare data remains among the most sensitive and tightly regulated.

Experts argue that meaningful AI adoption in clinical care must be incremental. It requires rigorous validation, clinician training, transparent algorithms, and clear regulatory guidance. While the potential to save lives and money is significant, the transformation will be slow and deliberate, not overnight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latin America struggling to join the global AI race

Currently, Latin America is lagging in AI innovation. It contributes only 0.3% of global startup activity and attracts a mere 1% of worldwide investment, despite housing around 8% of the global population.

Experts point to a significant brain drain, a lack of local funding options, weak policy frameworks, and dependency on foreign technology as major obstacles. Many high‑skilled professionals emigrate in search of better opportunities elsewhere.

To bridge the gap, regional governments are urged to develop coherent national AI strategies, foster regional collaboration, invest in digital education, and strengthen ties between the public and private sectors.

Strategic regulation and talent retention initiatives could help Latin America build its capacity and compete globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vatican urges ethical AI development

At the AI for Good Summit in Geneva, the Vatican urged global leaders to adopt ethical principles when designing and using AI.

The message, delivered by Cardinal Pietro Parolin on behalf of Pope Leo XIV, warned against letting technology outpace moral responsibility.

Framing the digital age as a defining moment, the Vatican cautioned that AI cannot replace human judgement or relationships, no matter how advanced. It highlighted the risk of injustice if AI is developed without a commitment to human dignity and ethical governance.

The statement called for inclusive innovation that addresses the digital divide, stressing the need to reach underserved communities worldwide. It also reaffirmed Catholic teaching that human flourishing must guide technological progress.

Pope Leo XIV supported a unified global approach to AI oversight, grounded in shared values and respect for freedom. His message underscored the belief that wisdom, not just innovation, must shape the digital future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

CISA 2015 expiry threatens private sector threat sharing

Congress has under 90 days to renew the Cybersecurity Information Sharing Act (CISA) of 2015 and avoid a regulatory setback. The law protects companies from liability when they share cyber threat indicators with the government or other firms, fostering collaboration.

Before CISA, companies hesitated due to antitrust and data privacy concerns. CISA removed ambiguity by offering explicit legal protections. Without reauthorisation, fear of lawsuits could silence private sector warnings, slowing responses to significant cyber incidents across critical infrastructure sectors.

Debates over reauthorisation include possible expansions of CISA’s scope. However, many lawmakers and industry groups in the United States now support a simple renewal. Health care, finance, and energy groups say the law is crucial for collective defence and rapid cyber threat mitigation.

Security experts warn that a lapse would reverse years of progress in information sharing, leaving networks more vulnerable to large-scale attacks. With only 35 working days left for Congress before the 30 September deadline, the pressure to act is mounting.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI can reshape the insurance industry, but carries real-world risks

AI is creating new opportunities for the insurance sector, from faster claims processing to enhanced fraud detection.

According to Jeremy Stevens, head of EMEA business at Charles Taylor InsureTech, AI allows insurers to handle repetitive tasks in seconds instead of hours, offering efficiency gains and better customer service. Yet these opportunities come with risks, especially if AI is introduced without thorough oversight.

Poorly deployed AI systems can easily cause more harm than good. For instance, if an insurer uses AI to automate motor claims but trains the model on biassed or incomplete data, two outcomes are likely: the system may overpay specific claims while wrongly rejecting genuine ones.

The result would not simply be financial losses, but reputational damage, regulatory investigations and customer attrition. Instead of reducing costs, the company would find itself managing complaints and legal challenges.

To avoid such pitfalls, AI in insurance must be grounded in trust and rigorous testing. Systems should never operate as black boxes. Models must be explainable, auditable and stress-tested against real-world scenarios.

It is essential to involve human experts across claims, underwriting and fraud teams, ensuring AI decisions reflect technical accuracy and regulatory compliance.

For sensitive functions like fraud detection, blending AI insights with human oversight prevents mistakes that could unfairly affect policyholders.

While flawed AI poses dangers, ignoring AI entirely risks even greater setbacks. Insurers that fail to modernise may be outpaced by more agile competitors already using AI to deliver faster, cheaper and more personalised services.

Instead of rushing or delaying adoption, insurers should pursue carefully controlled pilot projects, working with partners who understand both AI systems and insurance regulation.

In Stevens’s view, AI should enhance professional expertise—not replace it—striking a balance between innovation and responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung confirms core Galaxy AI tools remain free

Samsung has confirmed that core Galaxy AI features will continue to be available free of charge for all users.

Speaking during the recent Galaxy Unpacked event, a company representative clarified that any AI tools installed on a device by default—such as Live Translate, Note Assist, Zoom Nightography and Audio Eraser—will not require a paid subscription.

Instead of leaving users uncertain, Samsung has publicly addressed speculation around possible Galaxy AI subscription plans.

While there are no additional paid AI features on offer at present, the company has not ruled out future developments. Samsung has already hinted that upcoming subscription services linked to Samsung Health could eventually include extra AI capabilities.

Alongside Samsung’s announcement, attention has also turned towards Google’s freemium model for its Gemini AI assistant, which appears on many Android devices. Users can access basic features without charge, but upgrading to Google AI Pro or Ultra unlocks advanced tools and increased storage.

New Galaxy Z Fold 7 and Z Flip 7 handsets even come bundled with six months of free access to premium Google AI services.

Although Samsung is keeping its pre-installed Galaxy AI features free, industry observers expect further changes as AI continues to evolve.

Whether Samsung will follow Google’s path with a broader subscription model remains to be seen, but for now, essential Galaxy AI functions stay open to all users without extra cost.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hackers use fake Termius app to infect macOS devices

Hackers are bundling legitimate Mac apps with ZuRu malware and poisoning search results to lure users into downloading trojanized versions. Security firm SentinelOne reported that the Termius SSH client was recently compromised and distributed through malicious domains and fake downloads.

The ZuRu backdoor, originally detected in 2021, allows attackers to silently access infected machines and execute remote commands undetected. Attackers continue to target developers and IT professionals by trojanising trusted tools such as SecureCRT, Navicat, and Microsoft Remote Desktop.

Infected disk image files are slightly larger than legitimate ones due to embedded malicious binaries. Victims unknowingly launch malware alongside the real app.

The malware bypasses macOS code-signing protections by injecting a temporary developer signature into the compromised application bundle. The updated variant of ZuRu requires macOS Sonoma 14.1 or newer and supports advanced command-and-control functions using the open-source Khepri beacon.

The functions include file transfers, command execution, system reconnaissance and process control, with captured outputs sent back to attacker-controlled domains. The latest campaign used termius.fun and termius.info to host the trojanized packages. Affected users often lack proper endpoint security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Huawei challenges Nvidia in global AI chip market

Huawei Technologies is exploring AI chip exports to the Middle East and Southeast Asia in a bid to compete with Nvidia, according to a Bloomberg News report published Thursday.

The Chinese telecom firm has contacted potential buyers in the United Arab Emirates, Saudi Arabia, and Thailand to promote its Ascend 910B chips, an earlier-generation AI processor.

The offer involves a limited number of chips, reportedly in the low thousands, although specific quantities remain undisclosed. No deals have been finalised so far. Sources cited in the report said there is limited interest in the UAE, and the status of talks in Thailand remains uncertain.

Government representatives in Thailand and Saudi Arabia did not immediately respond to Reuters’ requests for comment. Huawei also declined to comment. The initiative is part of a broader strategy to expand into markets where US chipmakers have long held dominance.

Huawei also promotes remote access to CloudMatrix 384, a China-based AI system built using its more advanced chipsets. However, due to supply limitations, the company cannot export these high-end models outside China.

The Middle East has quickly become a high-demand region for AI infrastructure, attracting interest from leading technology companies. Nvidia has already struck several regional deals, positioning itself as a major player in AI development across Saudi Arabia and neighbouring countries.

Huawei is simultaneously focusing on domestic sales of its newer 910C chips, offering them to Chinese firms that cannot purchase US AI chips due to ongoing export restrictions imposed by Washington.

US administrations have long cited national security concerns in limiting China’s access to cutting-edge chip technologies, fearing their potential use in military applications.

‘With the current export controls, we are effectively out of the China datacenter market, which is now served only by competitors such as Huawei,’ an Nvidia spokesperson told Reuters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Gemini AI tool animates photos into short video clips

Google has rolled out a new feature for Gemini AI that transforms still photos into short, animated eight-second videos with sound. The capability is powered by Veo 3, Google’s latest video generation model, and is currently available to Gemini Advanced Ultra and Pro subscribers.

The tool supports background noise, ambient audio, and even spoken dialogue, with support gradually expanding to users in select countries, including India. At launch, access to the web interface is limited, though Google has announced that mobile support will follow later in the week.

To use the tool, users upload a photo, describe the intended motion, and optionally add prompts for sound effects or narration. Gemini then generates a 720p MP4 video in a 16:9 landscape format, automatically synchronising visuals and audio.

Josh Woodward, Vice President of the Gemini app and Google Labs, showcased the feature on X (formerly Twitter), animating a child’s drawing. ‘Still experimental, but we wanted our Pro and Ultra members to try it first,’ he said, calling the result fun and expressive.

To maintain authenticity, each video includes a visible ‘Veo’ watermark in the bottom-right corner and an invisible SynthID watermark. This hidden digital signature, developed by Google DeepMind, helps identify AI-generated content and preserve transparency around synthetic media.

The company has emphasised its commitment to responsible AI deployment by embedding traceable markers in all output from this tool. These safeguards come amid increasing scrutiny of generative video tools and deepfakes across digital platforms.

To animate a photo using Gemini AI’s new tool, users should follow these steps: Click on the ‘tools’ icon in the prompt bar, then choose the ‘video’ option from the menu. Upload the still image, describe the desired motion, and provide sound or narration instructions, optionally.

The underlying Veo 3 model was first introduced at Google I/O as the company’s most advanced video generation engine. It can produce high-quality visuals, simulate real-world physics, and even lip-sync dialogue from text and image-based prompts.

A Google blog post explains: ‘Veo 3 excels from text and image prompting to real-world physics and accurate lip syncing.’ The company says users can craft short story prompts and expect realistic, cinematic responses from the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!