Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.
The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.
Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.
Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Hackers linked to the ShinyHunters group have compromised Google’s Salesforce systems, leading to a data leak that puts Gmail and Google Cloud users at risk of phishing attacks.
Google confirmed that customer and company names were exposed, though no passwords were stolen. Attackers are now exploiting the breach with phishing schemes, including fake account resets and malware injection attempts through outdated access points.
With Gmail and Google Cloud serving around 2.5 billion users worldwide, both companies and individuals could be targeted. Early reports on Reddit describe callers posing as Google staff warning of supposed account breaches.
Google urges users to strengthen protections by running its Security Checkup, enabling Advanced Protection, and switching to passkeys instead of passwords. The company emphasised that its staff never initiates unsolicited password resets by phone or email.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Poland has become the leading global target for politically and socially motivated cyberattacks, recording over 450 incidents in the second quarter of 2025, according to Spain’s Industrial Cybersecurity Center.
The report ranked Poland ahead of Ukraine, the UK, France, Germany, and other European states in hacktivist activity. Government institutions and the energy sector were among the most targeted, with organisations supporting Ukraine described as especially vulnerable.
ZIUR’s earlier first-quarter analysis had warned of a sharp rise in attacks against state bodies across Europe. Pro-Russian groups were identified as among the most active, increasingly turning to denial-of-service campaigns to disrupt critical operations.
Europe accounted for the largest share of global hacktivism in the second quarter, with more than 2,500 successful denial-of-service attacks recorded between April and June, underlining the region’s heightened exposure.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Cyberattacks are intensifying worldwide, with Australia now ranked fourth globally for threats against operational technology and industrial sectors. Rising AI-powered incursions have exposed serious vulnerabilities in the country’s national defence and critical infrastructure.
The 2023–2030 Cyber Security Strategy designed by the Government of Australia aims to strengthen resilience through six ‘cyber shields’, including legislation and intelligence sharing. But a skills shortage leaves organisations vulnerable as ransomware attacks on mining and manufacturing continue to rise.
One proposal gaining traction is the creation of a volunteer ‘cyber militia’. Inspired by the cyber defence unit in Estonia, this network would mobilise unconventional talent, retirees, hobbyist hackers, and students, to bolster monitoring, threat hunting, and incident response.
Supporters argue that such a force could fill gaps left by formal recruitment, particularly in smaller firms and rural networks. Critics, however, warn of vetting risks, insider threats, and the need for new legal frameworks to govern liability and training.
Pilot schemes in high-risk sectors, such as energy and finance, have been proposed, with public-private funding viewed as crucial. Advocates argue that a cyber militia could democratise security and foster collective responsibility, aligning with the country’s long-term cybersecurity strategy.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The discovery of hundreds of non-consensual deepfake images on a student’s laptop at the University of Hong Kong has reignited debate about privacy, technology, and accountability. The scandal echoes the 2008 Edison Chen photo leak, which exposed gaps in law and gender double standards.
Unlike stolen private images, today’s fabrications are AI-generated composites that can tarnish reputations with a single photo scraped from social media. The dismissal that such content is ‘not real’ fails to address the damage caused by its existence.
The legal system of Hong Kong struggles to keep pace with this shift. Its privacy ordinance, drafted in the 1990s, was not designed for machine-learning fabrications, while traditional harassment and defamation laws predate the advent of AI. Victims risk harm before distribution is even proven.
The city’s privacy watchdog has launched a criminal investigation, but questions remain over whether creation or possession of deepfakes is covered by existing statutes. Critics warn that overreach could suppress legitimate uses, yet inaction leaves space for abuse.
Observers argue that just as the snapshot camera spurred the development of modern privacy law, deepfakes must drive a new legal boundary to safeguard dignity. Without reform, victims may continue facing harm without recourse.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
South Korea’s new administration has unveiled a five-year economic plan to build what it calls a ‘super-innovation economy’ by integrating AI across all sectors of society.
The strategy, led by President Lee Jae-myung, commits 100 trillion won (approximately US$71.5 billion) to position the country among the world’s top three AI powerhouses. Private firms will drive development, with government support for nationwide adoption.
Plans include a sovereign Korean-language AI model, humanoid robots for logistics and industry, and commercialising autonomous vehicles by 2027. Unmanned ships are targeted for completion by 2030, alongside widespread use of drones in firefighting and aviation.
AI will also be introduced into drug approvals, smart factories, welfare services, and tax administration, with AI-based tax consultations expected by 2026. Education initiatives and a national AI training data cluster will nurture talent and accelerate innovation.
Five domestic firms, including Naver Cloud, SK Telecom, and LG AI Research, will receive state support to build homegrown AI foundation models. Industry reports currently rank South Korea between sixth and 10th in global AI competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has expanded its AI Mode in Search to 180 additional countries and territories, introducing new agentic features to help users make restaurant reservations. The service remains limited to English and is not yet available in the European Union.
The update enables users to specify their dining preferences and constraints, allowing the system to scan multiple platforms and present real-time availability. Once a choice is made, users are directed to the restaurant’s booking page.
Partners supporting the service include OpenTable, Resy, SeatGeek, StubHub, Booksy, Tock, and Ticketmaster. The feature is part of Google’s Search Labs experiment, available to subscribers of Google AI Ultra in the United States.
AI Mode also tailors suggestions based on previous searches and introduces a Share function, letting users share restaurant options or planning results with others, with the option to delete links.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has announced that Gemini will soon power its smart home platform, replacing Google Assistant on existing Nest speakers and displays from October. The feature will launch initially as an early preview.
Gemini for Home promises more natural conversations and can manage complex household tasks, including controlling smart devices, creating calendars, and handling lists or timers through natural language commands. It will also support Gemini Live for ongoing dialogue.
Google says the upgrade is designed to serve all household members and visitors, offering hands-free help and integration with streaming platforms. The move signals a renewed focus on Google Home, a product line that has been largely overlooked in recent years.
The announcement hints at potential new hardware, given that Google’s last Nest Hub was released in 2021 and the Nest Audio speaker dates back to 2020.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Google has released Gemma 3 270M, an open-source AI model with 270 million parameters designed to run efficiently on smartphones and Internet of Things devices.
Drawing on technology from the larger Gemini family, it focuses on portability, low energy use and quick fine-tuning, enabling developers to create AI tools that work on everyday hardware instead of relying on high-end servers.
The model supports instruction-following and text structuring with a 256,000-token vocabulary, offering scope for natural language processing and on-device personalisation.
Its design includes quantisation-aware training to work in low-precision formats such as INT4, reducing memory use and improving speed on mobile processors instead of requiring extensive computational power.
Industry commentators note that the model could help meet demand for efficient AI in edge computing, with applications in healthcare wearables and autonomous IoT systems. Keeping processing on-device also supports privacy and reduces dependence on cloud infrastructure.
Google highlights the environmental benefits of the model, pointing to reduced carbon impact and greater accessibility for smaller firms and independent developers. While safeguards like ShieldGemma aim to limit risks, experts say careful use will still be needed to avoid misuse.
Future developments may bring new features, including multimodal capabilities, as part of Google’s strategy to blend open and proprietary AI within hybrid systems.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.
The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, utilizing external experts for stress tests.
During the 2024 US elections, a TurboVote banner was added after detecting outdated voting info, ensuring users saw only accurate, non-partisan updates.
Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.
Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!