Europol warns that the $50,000 Qilin reward is fake

Europol has warned that a reported $50,000 reward for information on two members of the Qilin ransomware group is fake. The message, circulating on Telegram, claimed the suspects, known as Haise and XORacle, coordinate affiliates and manage extortion operations.

Europol clarified that it does not operate a Telegram channel and that the message does not originate from its official accounts, which are active on Instagram, LinkedIn, X, Bluesky, YouTube, and Facebook.

Qilin, also known as Agenda, has been active since 2022 and, in 2025, listed over 400 victims on its leak website, including media and pharmaceutical companies.

Recent attacks, such as the one targeting Inotiv, demonstrate the group’s ongoing threat. Analysts note that cybercriminals often circulate false claims to undermine competitors, mislead affiliates, or sow distrust within rival gangs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New robotic system uses AI to analyse water quality

A new study in Robot Learning has introduced a robotic system that combines machine learning with decision-making to analyse water samples. The approach enables robots to collect and test samples, distinguishing drinkable from non-drinkable water on Earth and potentially on other planets.

Researchers used a hybrid method that merged the TOPSIS decision-making technique with a Random Forest Classifier trained on the Water Quality and Potability Dataset from Kaggle. By applying data balancing techniques, classification accuracy rose from 69% to 73%.
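The hybrid pipeline pairs multi-criteria ranking with a supervised classifier. As an illustration of the TOPSIS half, here is a minimal pure-Python sketch; the criteria, weights, and sample values below are hypothetical, not the study's data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: rows = alternatives, cols = criteria
    weights: importance of each criterion
    benefit: True where higher values are better
    """
    n_crit = len(weights)
    # Vector-normalise each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    # Ideal best and worst value per criterion
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # Closeness coefficient: distance to worst / (dist to best + dist to worst)
    scores = []
    for row in v:
        d_best = math.dist(row, best)
        d_worst = math.dist(row, worst)
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical water samples scored on pH deviation (lower is better),
# turbidity (lower is better) and dissolved oxygen (higher is better)
samples = [[0.2, 3.1, 7.8], [1.5, 9.0, 4.2], [0.4, 4.0, 6.9]]
scores = topsis(samples, weights=[0.4, 0.3, 0.3],
                benefit=[False, False, True])
print(scores)  # the sample with the score closest to 1 ranks best
```

In the study's pipeline, a ranking like this would feed into the Random Forest classifier; the reported accuracy gain (69% to 73%) came from balancing the training data, not from TOPSIS itself.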

The robotic prototype includes thrusters, motors, solar power, sensors, and a robotic arm for sample collection. Water samples are tested in real time, with the onboard model classifying them as drinkable or non-drinkable.

The system has the potential for rapid crisis response, sustainable water management, and planetary exploration, although challenges remain regarding sensor accuracy, data noise, and scalability. Researchers emphasise that further testing is necessary before real-world deployment.


AI improves customer experience at Citi

Citi has expanded its digital client platform, CitiDirect Commercial Banking, with new AI capabilities to improve customer service and security.

The platform now supports over half of Citi’s global commercial banking client base and handles around 2.3 million sessions.

AI features assist in fraud detection, automate customer queries, and provide real-time onboarding updates and guidance.

KYC renewals have been simplified through automated alerts and pre-filled forms, cutting effort and processing time for clients.

Live in markets including the UK, US, India, and others, the platform has received positive feedback from over 10,000 users. Citi says the enhancements are part of a broader effort to make mid-sized corporate banking faster, more innovative, and more efficient.


Fed urges banks to embrace blockchain innovation

Federal Reserve Vice Chair for Supervision Michelle Bowman has warned that banks must embrace blockchain technology or risk fading into irrelevance. At the Wyoming Blockchain Symposium on 19 August, she urged banks and regulators to set aside caution and pursue innovation.

Bowman highlighted tokenisation as one of the most immediate applications, enabling assets to be transferred digitally without intermediaries or physical movement.

She explained that tokenised systems could cut operational delays, reduce risks, and expand access across large and smaller banks. Regulatory alignment, she added, could accelerate tokenisation from pilots to mainstream adoption.

Fraud prevention was also a key point of her remarks. Bowman said financial institutions face growing threats from scams and identity theft, but argued blockchain could help reduce fraud.

She called for regulators to ensure frameworks support adoption rather than hinder it, framing the technology as a chance for collaboration between the industry and the Fed.


Indian firms accelerate growth through AI, Microsoft finds

Indian firms are accelerating the adoption of AI, with many using AI agents to enhance workforce capabilities rather than relying solely on traditional methods.

According to Microsoft’s 2025 Work Trend Index, 93% of leaders in India plan to extend AI integration across their organisations within the next 12 to 18 months.

Frontier firms in India are leading the charge, redesigning operations around collaboration between humans and AI agents instead of following conventional hierarchies.

Over half of leaders already deploy AI to automate workflows and business processes across entire teams, enabling faster and more agile decision-making.

Microsoft notes that AI is becoming a true thought partner, fuelling creativity, accelerating decisions, and redefining teamwork instead of merely supporting routine tasks. Leaders report that embedding AI into daily operations drives measurable improvements in productivity, innovation, and business outcomes.

The findings are part of a global survey of 31,000 participants across 31 countries, highlighting India’s role at the forefront of AI-driven organisational transformation rather than merely following international trends.


Google Cloud boosts AI security with agentic defence tools

Google Cloud has unveiled a suite of security enhancements at its Security Summit 2025, focusing on protecting AI innovations and empowering cybersecurity teams with AI-driven defence tools.

VP and GM Jon Ramsey highlighted the growing need for specialised safeguards as enterprises deploy AI agents across complex environments.

Central to the announcements is the concept of an ‘agentic security operations centre,’ where AI agents coordinate actions to achieve shared security objectives. It represents a shift from reactive security approaches to proactive, agent-supported strategies.

Google’s platform integrates automated discovery, threat detection, and response mechanisms to streamline security operations and cover gaps in existing infrastructures.

Key innovations include extended protections for AI agents through Model Armour, covering Agentspace prompts and responses to mitigate prompt injection attacks, jailbreaking, and data leakage.

The Alert Investigation agent, available in preview, automates enrichment and analysis of security events while offering actionable recommendations, reducing manual effort and accelerating response times.

Integrating Mandiant threat intelligence feeds and Gemini AI strengthens detection and incident response across agent environments.

Additional tools, such as SecOps Labs and native SOAR dashboards, provide organisations with early access to AI-powered threat detection experiments and comprehensive security visualisation capabilities.


Rapper Bot dismantled after 370,000 global cyberattacks

A 22-year-old man from Oregon has been charged with operating one of the most powerful botnets ever uncovered, Rapper Bot.

Federal prosecutors in Alaska said the network was responsible for over 370,000 cyberattacks worldwide since 2021, targeting technology firms, a major social media platform and even a US government system.

The botnet relied on malware that infected everyday devices such as Wi-Fi routers and digital video recorders. Once hijacked, the compromised machines were forced to overwhelm servers with traffic in distributed denial-of-service (DDoS) attacks.

Investigators estimate that Rapper Bot infiltrated as many as 95,000 devices at its peak.

The accused administrator, Ethan Foltz, allegedly ran the network as a DDoS-for-hire service, renting temporary control of its attack capacity to paying customers.

Authorities said its most significant attack generated more than six terabits of data per second, placing it among the most destructive DDoS networks on record. Foltz faces up to 10 years in prison if convicted.

The arrest was carried out under Operation PowerOFF, an international effort to dismantle criminal groups offering DDoS-for-hire services.

US Attorney Michael J. Heyman said the takedown had effectively disrupted a transnational threat, ending Foltz’s role in the sprawling cybercrime operation.


Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Users may hesitate to use chatbots without fixes, fearing their data could reappear online.
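Blocking the indexing of shared content typically relies on the standard robots directives. The sketch below shows both mechanisms, an `X-Robots-Tag` response header and a robots meta tag in the page itself; the handler names are hypothetical illustrations, not xAI's actual implementation:

```python
# Two standard ways to keep shared-conversation pages out of search
# indexes: an X-Robots-Tag response header, and a robots meta tag
# embedded in the HTML as a second layer of defence.

def share_page_headers():
    """Response headers for a shared-conversation page (hypothetical)."""
    return {
        "Content-Type": "text/html; charset=utf-8",
        # Tells crawlers not to index the page or follow its links
        "X-Robots-Tag": "noindex, nofollow",
    }

def share_page_html(conversation_html):
    """Wrap shared content with a robots meta tag (hypothetical)."""
    return (
        "<!doctype html><html><head>"
        '<meta name="robots" content="noindex, nofollow">'
        "</head><body>" + conversation_html + "</body></html>"
    )

headers = share_page_headers()
print(headers["X-Robots-Tag"])  # noindex, nofollow
```

Either directive on its own would have kept Grok's shared URLs out of Google's index; applying both guards against pages being served through paths that skip one of the layers.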


Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief's comments follow recent controversies, including OpenAI's decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.


Study warns of AI browser assistants collecting sensitive data

Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.

The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.

The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.

Researchers sometimes observed personal information being transmitted to third-party servers without encryption.

Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.

The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.

They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.
