Google Cloud boosts AI security with agentic defence tools

Google Cloud has unveiled a suite of security enhancements at its Security Summit 2025, focusing on protecting AI innovations and empowering cybersecurity teams with AI-driven defence tools.

VP and GM Jon Ramsey highlighted the growing need for specialised safeguards as enterprises deploy AI agents across complex environments.

Central to the announcements is the concept of an ‘agentic security operations centre,’ where AI agents coordinate actions to achieve shared security objectives. It represents a shift from reactive security approaches to proactive, agent-supported strategies.

Google’s platform integrates automated discovery, threat detection, and response mechanisms to streamline security operations and cover gaps in existing infrastructures.

Key innovations include extended protections for AI agents through Model Armour, covering Agentspace prompts and responses to mitigate prompt injection attacks, jailbreaking, and data leakage.

The Alert Investigation agent, available in preview, automates enrichment and analysis of security events while offering actionable recommendations, reducing manual effort and accelerating response times.

Integrating Mandiant threat intelligence feeds and Gemini AI strengthens detection and incident response across agent environments.

Additional tools, such as SecOps Labs and native SOAR dashboards, provide organisations with early access to AI-powered threat detection experiments and comprehensive security visualisation capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rapper Bot dismantled after 370,000 global cyberattacks

A 22-year-old man from Oregon has been charged with operating one of the most powerful botnets ever uncovered, Rapper Bot.

Federal prosecutors in Alaska said the network was responsible for over 370,000 cyberattacks worldwide since 2021, targeting technology firms, a central social media platform and even a US government system.

The botnet relied on malware that infected everyday devices such as Wi-Fi routers and digital video recorders. Once hijacked, the compromised machines were forced to overwhelm servers with traffic in distributed denial-of-service (DDoS) attacks.

Investigators estimate that Rapper Bot infiltrated as many as 95,000 devices at its peak.

The accused administrator, Ethan Foltz, allegedly ran the network as a DDoS-for-hire service, temporarily charging customers to control its capabilities.

Authorities said its most significant attack generated more than six terabits of data per second, making it among the most destructive DDoS networks. Foltz faces up to 10 years in prison if convicted.

The arrest was carried out under Operation PowerOFF, an international effort to dismantle criminal groups offering DDoS-for-hire services.

US Attorney Michael J. Heyman said the takedown had effectively disrupted a transnational threat, ending Foltz’s role in the sprawling cybercrime operation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Users may hesitate to use chatbots without fixes, fearing their data could reappear online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns of AI browser assistants collecting sensitive data

Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.

The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.

The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.

Researchers sometimes observed personal information being transmitted to third-party servers without encryption.

Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.

The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.

They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Comet browser caught submitting private info in fake shop

Cybersecurity researchers have uncovered a new AI browser exploit that allows attackers to manipulate autonomous systems using fake CAPTCHA checks.

The PromptFix method tricks agentic AI models into executing commands embedded in deceptive web elements invisible to the user.

Guardio Labs demonstrated that the Comet AI browser could be misled into adding items to a cart and auto-filling sensitive data.

Comet completed fake purchases without user confirmation in some tests, raising concerns over AI trust chains and phishing exposure.

Attackers can also exploit AI email agents by embedding malicious links, prompting the system to bypass user review and reveal credentials.

ChatGPT’s Agent Mode showed similar vulnerabilities but confined actions to a sandbox, preventing direct exposure to user systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges users to update Chrome after V8 flaw patched

Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.

The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.

Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.

Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Vulnerabilities in municipal software expose sensitive data in Wisconsin

Two critical vulnerabilities have been discovered in an accounting application developed by Workhorse Software and used by more than 300 municipalities in Wisconsin.

The first flaw, CVE-2025-9037, involved SQL server connection credentials stored in plain text within a shared network folder. The second, CVE-2025-9040, allowed backups to be created and restored from the login screen without authentication.

Both issues were disclosed by the CERT Coordination Centre at Carnegie Mellon University following a report from Sparrow IT Solutions. Exploitation could give attackers access to personally identifiable information such as Social Security numbers, financial records and audit logs.

Workhorse has since released version 1.9.4.48019 with security patches, urging municipalities to update their systems immediately. The incident underscores the risks posed by vulnerable software in critical public infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK colleges hit by phishing incident

Weymouth and Kingston Maurward College in Dorset is investigating a recent phishing attack that compromised several email accounts. The breach occurred on Friday, 15 August, during the summer holidays.

Spam emails were sent from affected accounts, though the college confirmed that personal data exposure was minimal.

The compromised accounts may have contained contact information from anyone who previously communicated with the college. Early detection allowed the college to lock down affected accounts promptly, limiting the impact.

A full investigation is ongoing, with additional security measures now in place to prevent similar incidents. The matter has been reported to the Information Commissioner’s Office (ICO).

Phishing attacks involve criminals impersonating trusted entities to trick individuals into revealing sensitive information such as passwords or personal data. The college reassured students, staff, and partners that swift action and robust systems limited the disruption.

The colleges, which merged just over a year ago, recently received a ‘Good’ rating across all areas in an Ofsted inspection, reflecting strong governance and oversight amid the cybersecurity incident.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU and Bangladesh strengthen cooperation on cybersecurity and digital economy

The EU has engaged in talks with the Bangladesh Telecommunication Regulatory Commission to strengthen cooperation on data protection, cybersecurity, and the country’s digital economy.

The meeting was led by EU Ambassador Michael Miller and BTRC Chairman Major General (retd) Md Emdad ul Bari.

The EU emphasised safeguarding fundamental rights while encouraging innovation and investment. With opportunities in broadband expansion, 5G deployment, and last-mile connectivity, the EU reaffirmed its commitment to supporting Bangladesh’s vision for a secure and inclusive digital future.

Both parties agreed to deepen collaboration, with the EU offering technical expertise under its Global Gateway strategy to help Bangladesh build a safer and more connected digital landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!