Russia pushes mandatory messaging app Max on all new devices

Russia will require all new mobile phones and tablets sold from September to come preloaded with a government-backed messenger called Max. Developed by Kremlin-controlled tech firm VK, the app offers messaging, video calls, mobile payments, and access to state services.

Authorities claim Max is a safe alternative to Western apps, but critics warn it could act as a state surveillance tool. The platform is reported to collect financial data, purchase history, and location details, all accessible to security services.

Journalist Andrei Okun described Max as a ‘Digital Gulag’ designed to control daily life and communications.

The move is part of Russia’s broader push to replace Western platforms. New restrictions have already limited calls on WhatsApp and Telegram, and officials hinted that WhatsApp may face a ban.

Telegram remains widely used but is expected to face greater pressure as the Kremlin directs officials to adopt Max.

VK says Max has already attracted 18 million downloads, though parts of the app remain in testing. From 2026, Russia will also require smart TVs to come preloaded with a state-backed service offering free access to government channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google pushes agentic AI worldwide with AI Mode rollout

Google has expanded its AI Mode service to 180 additional countries, extending advanced agentic capabilities to a global audience.

Previously available only in markets such as the US, UK and India, the service lets users search for information and have the AI carry out tasks on their behalf. The update reflects Google’s ambition to move from simple answers to action-oriented assistance.

A key rollout feature is the restaurant booking tool for AI Ultra subscribers. Using natural language requests such as ‘find a romantic Italian spot for two tonight,’ the system can check availability, offer personalised suggestions and confirm reservations directly within search.

The feature relies on real-time data from partners like OpenTable and highlights how Google’s AI can execute tasks instead of simply presenting options.
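
For readers curious how such an agentic flow hangs together, the sketch below outlines the pattern in Python: take a request, query a booking partner for availability, pick a slot and confirm it. Every function, class and partner interface here is a hypothetical stand-in, not Google’s or OpenTable’s actual API.

```python
# Minimal sketch of the agentic booking pattern described above.
# All names and interfaces are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Slot:
    restaurant: str
    time: str
    party_size: int


def find_slots(cuisine: str, party_size: int, date: str) -> list[Slot]:
    """Stand-in for a real-time availability lookup against a booking partner."""
    return [
        Slot("Trattoria Example", "19:30", party_size),
        Slot("Osteria Demo", "20:00", party_size),
    ]


def book(slot: Slot) -> str:
    """Stand-in for confirming a reservation via the partner API."""
    return f"Confirmed {slot.restaurant} at {slot.time} for {slot.party_size}."


def handle_request(query: str) -> str:
    # A production agent would parse the query with an LLM; here the intent
    # is hard-coded to keep the sketch self-contained.
    slots = find_slots(cuisine="italian", party_size=2, date="tonight")
    best = slots[0]  # personalisation and ranking would happen here
    return book(best)


if __name__ == "__main__":
    print(handle_request("find a romantic Italian spot for two tonight"))
```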

Further tools are expected soon, including ticketing for events and appointment scheduling. These are powered by the Gemini models, which tailor recommendations based on user behaviour and allow group planning through shared responses.

While the services could reduce reliance on third-party apps in sectors such as travel and hospitality, they also raise concerns over data privacy, inclusivity and cultural differences in an English-only rollout.

The global expansion strengthens Google’s position against rivals such as Microsoft and OpenAI, which are also pushing forward in agentic AI. The company sees subscription upgrades to AI Ultra as a way to offset slower advertising growth, while early reports suggest increased user engagement.

However, the long-term impact will depend on balancing innovation with ethical safeguards as Google works to deliver more multilingual and accessible features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Skype used to distribute hidden malware in small business attacks

Security researchers at Kaspersky discovered that hackers used Skype to distribute a Remote Access Trojan known as GodRAT. Initially spread via malicious screensaver files disguised as financial documents, the malware employed steganography to conceal shellcode inside image files, which then downloaded GodRAT from a remote server.
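
As a rough illustration of how defenders hunt for payloads hidden in images, the Python sketch below flags files that carry unexpected bytes after the normal end-of-image marker, one common smuggling spot. It is a simplified heuristic for illustration only; the source does not describe GodRAT’s exact embedding scheme, and real detection relies on far more sophisticated analysis.

```python
# Minimal sketch: flag image files with extra bytes after the format's
# end-of-image marker -- one simple heuristic for spotting payloads
# smuggled inside images. Illustrative only.
from pathlib import Path

END_MARKERS = {
    ".png": b"IEND\xaeB`\x82",  # PNG IEND chunk name plus its CRC
    ".jpg": b"\xff\xd9",        # JPEG end-of-image marker
    ".jpeg": b"\xff\xd9",
}


def trailing_bytes(path: Path) -> int:
    """Return how many bytes follow the format's end-of-image marker."""
    data = path.read_bytes()
    marker = END_MARKERS.get(path.suffix.lower())
    if marker is None:
        return 0  # unknown format, skip
    pos = data.rfind(marker)
    if pos == -1:
        return 0  # malformed file, handle elsewhere
    return len(data) - (pos + len(marker))


if __name__ == "__main__":
    # "samples" is a hypothetical folder of images to scan.
    for image in Path("samples").glob("*.*"):
        extra = trailing_bytes(image)
        if extra > 0:
            print(f"{image}: {extra} unexpected trailing bytes -- inspect manually")
```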

Once activated, GodRAT collected detailed system information, including OS specs, antivirus presence, user account data and more. The trojan could also download additional plugins such as file explorers and password stealers. In some cases, it deployed a second piece of malware, AsyncRAT, granting attackers prolonged access.

GodRAT appears to be an evolution of previous tools, such as AwesomePuppet, and shares artefacts with Gh0st RAT, suggesting a link to the Winnti APT group. While Kaspersky did not disclose the number of victims, the campaign primarily targeted small and medium-sized businesses in the UAE, Hong Kong, Jordan, and Lebanon. Use of Skype as a distribution vector reportedly ceased around March 2025 as the attackers shifted to other channels.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft limits certain companies’ access to the SharePoint early warning system

Microsoft has limited certain Chinese companies’ access to its early warning system for cybersecurity vulnerabilities following suspicions about their involvement in recent SharePoint hacking attempts.

The decision restricts the sharing of proof-of-concept code, which mimics genuine malicious software. While valuable for cybersecurity professionals strengthening their systems, the code can also be misused by hackers.

The restrictions follow Microsoft’s observation of exploitation attempts targeting SharePoint servers in July. Concerns arose that a member of the Microsoft Active Protections Program may have repurposed early warnings for offensive activity.

Microsoft maintains that it regularly reviews participants and suspends those violating contracts, including prohibitions on participating in cyberattacks.

Beijing has denied involvement in the hacking, while Microsoft has refrained from disclosing which companies were affected or details of the ongoing investigation.

Analysts note that balancing collaboration with international security partners and preventing information misuse remains a key challenge for global cybersecurity programmes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Students seek emotional support from AI chatbots

College students are increasingly turning to AI chatbots for emotional support, prompting concern among mental health professionals. A 2025 report ranked ‘therapy and companionship’ as the top use case for generative AI, particularly among younger users.

Studies by MIT and OpenAI show that frequent AI use can lower social confidence and increase avoidance of face-to-face interaction. On campuses, digital mental health platforms now supplement counselling services, offering tools that identify at-risk students and provide basic support.

Experts warn that chatbot companionship may create emotional habits that lack grounding in reality and hinder social skill development. Counsellors advocate for educating students on safe AI use and suggest universities adopt tools that flag risky engagement patterns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI improves customer experience at Citi

Citi has expanded its digital client platform, CitiDirect Commercial Banking, with new AI capabilities to improve customer service and security.

The platform now supports over half of Citi’s global commercial banking client base and handles around 2.3 million sessions.

AI features assist in fraud detection, automate customer queries, and provide real-time onboarding updates and guidance.

KYC renewals have been simplified through automated alerts and pre-filled forms, cutting effort and processing time for clients.

Live in markets including the UK, US, India, and others, the platform has received positive feedback from over 10,000 users. Citi says the enhancements are part of a broader effort to make mid-sized corporate banking faster, more innovative, and more efficient.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fed urges banks to embrace blockchain innovation

Federal Reserve Vice Chair for Supervision Michelle Bowman has warned that banks must embrace blockchain technology or risk fading into irrelevance. At the Wyoming Blockchain Symposium on 19 August, she urged banks and regulators to set aside caution and pursue innovation.

Bowman highlighted tokenisation as one of the most immediate applications, enabling assets to be transferred digitally without intermediaries or physical movement.

She explained that tokenised systems could cut operational delays, reduce risks, and expand access across large and smaller banks. Regulatory alignment, she added, could accelerate tokenisation from pilots to mainstream adoption.
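
Conceptually, a tokenised asset is a ledger entry, so transferring it is a balance update rather than a physical delivery routed through intermediaries. The minimal Python sketch below illustrates that idea with an in-memory ledger; it is purely illustrative and implies no particular blockchain or banking system.

```python
# Conceptual sketch of tokenised transfer: the asset lives as ledger
# entries, so moving it is a balance update, not physical movement.
class TokenLedger:
    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def issue(self, account: str, amount: int) -> None:
        """Tokenise an asset by crediting its digital representation."""
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        """Move tokens between accounts without intermediaries."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount


ledger = TokenLedger()
ledger.issue("bank_a_client", 100)                     # 100 units of a tokenised asset
ledger.transfer("bank_a_client", "bank_b_client", 40)  # settles as a ledger update
print(ledger.balances)                                 # {'bank_a_client': 60, 'bank_b_client': 40}
```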

Fraud prevention was also a key point of her remarks. Bowman said financial institutions face growing threats from scams and identity theft, but argued blockchain could help reduce fraud.

She called for regulators to ensure frameworks support adoption rather than hinder it, framing the technology as a chance for collaboration between the industry and the Fed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud boosts AI security with agentic defence tools

Google Cloud has unveiled a suite of security enhancements at its Security Summit 2025, focusing on protecting AI innovations and empowering cybersecurity teams with AI-driven defence tools.

VP and GM Jon Ramsey highlighted the growing need for specialised safeguards as enterprises deploy AI agents across complex environments.

Central to the announcements is the concept of an ‘agentic security operations centre,’ where AI agents coordinate actions to achieve shared security objectives. It represents a shift from reactive security approaches to proactive, agent-supported strategies.

Google’s platform integrates automated discovery, threat detection, and response mechanisms to streamline security operations and cover gaps in existing infrastructures.

Key innovations include extended protections for AI agents through Model Armour, covering Agentspace prompts and responses to mitigate prompt injection attacks, jailbreaking, and data leakage.
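
The general pattern behind such protections is to screen traffic in both directions: inspect prompts before they reach the model and scrub responses before they reach the user or another agent. The Python sketch below shows that pattern in miniature; the checks, patterns and function names are simplified assumptions, not Google Cloud’s actual Model Armour API.

```python
# Minimal sketch of two-way screening around an LLM call.
# Checks and names are illustrative assumptions, not a real product API.
import re

INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"reveal your system prompt",
]
SECRET_PATTERN = re.compile(r"\b(?:\d[ -]?){13,19}\b")  # crude card-number check


def screen_prompt(prompt: str) -> str:
    """Block prompts that match simple injection or jailbreak patterns."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, prompt, re.IGNORECASE):
            raise ValueError("prompt blocked: possible injection attempt")
    return prompt


def screen_response(response: str) -> str:
    """Redact anything resembling sensitive data before it leaves the system."""
    return SECRET_PATTERN.sub("[REDACTED]", response)


def guarded_call(model, prompt: str) -> str:
    # `model` is any callable LLM wrapper supplied by the caller.
    safe_prompt = screen_prompt(prompt)
    raw = model(safe_prompt)
    return screen_response(raw)
```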

The Alert Investigation agent, available in preview, automates enrichment and analysis of security events while offering actionable recommendations, reducing manual effort and accelerating response times.

Integrating Mandiant threat intelligence feeds and Gemini AI strengthens detection and incident response across agent environments.

Additional tools, such as SecOps Labs and native SOAR dashboards, provide organisations with early access to AI-powered threat detection experiments and comprehensive security visualisation capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rapper Bot dismantled after 370,000 global cyberattacks

A 22-year-old man from Oregon has been charged with operating one of the most powerful botnets ever uncovered, Rapper Bot.

Federal prosecutors in Alaska said the network was responsible for over 370,000 cyberattacks worldwide since 2021, targeting technology firms, a major social media platform and even a US government system.

The botnet relied on malware that infected everyday devices such as Wi-Fi routers and digital video recorders. Once hijacked, the compromised machines were forced to overwhelm servers with traffic in distributed denial-of-service (DDoS) attacks.
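
At its simplest, a DDoS flood is far more requests per second than a server can absorb, which is why basic defences start with per-source rate checks. The Python sketch below shows such a check over a sliding one-second window; the threshold and structure are illustrative assumptions, not how Rapper Bot’s targets or investigators actually detected the attacks.

```python
# Minimal sketch of per-source rate checking against flood traffic.
# Thresholds are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 100

recent: dict[str, deque] = defaultdict(deque)


def register_request(source_ip: str, now: float | None = None) -> bool:
    """Record a request; return False if the source exceeds the rate limit."""
    now = time.monotonic() if now is None else now
    window = recent[source_ip]
    window.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) <= MAX_REQUESTS_PER_WINDOW
```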

Investigators estimate that Rapper Bot infiltrated as many as 95,000 devices at its peak.

The accused administrator, Ethan Foltz, allegedly ran the network as a DDoS-for-hire service, charging customers for temporary control of its attack capabilities.

Authorities said its largest attack generated more than six terabits of traffic per second, placing it among the most destructive DDoS networks. Foltz faces up to 10 years in prison if convicted.

The arrest was carried out under Operation PowerOFF, an international effort to dismantle criminal groups offering DDoS-for-hire services.

US Attorney Michael J. Heyman said the takedown had effectively disrupted a transnational threat, ending Foltz’s role in the sprawling cybercrime operation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident puts pressure on AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Without such fixes, users may hesitate to use chatbots, fearing their data could reappear online.
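
One concrete safeguard is to mark shared-conversation pages as non-indexable at both the HTTP and HTML level, so crawlers skip them even when the URLs leak. The Flask sketch below shows the idea; the endpoint and template are hypothetical and do not represent xAI’s actual code.

```python
# Minimal sketch (hypothetical endpoint): keep shared chat pages out of
# search results via a noindex header and a matching robots meta tag.
from flask import Flask, make_response

app = Flask(__name__)

SHARED_CHAT_TEMPLATE = """<!doctype html>
<html>
  <head>
    <meta name="robots" content="noindex, nofollow">
    <title>Shared conversation</title>
  </head>
  <body>{body}</body>
</html>"""


@app.route("/share/<chat_id>")
def shared_chat(chat_id: str):
    # A real system would fetch the transcript from storage; a placeholder
    # stands in for it here.
    html = SHARED_CHAT_TEMPLATE.format(body=f"Transcript {chat_id} goes here.")
    resp = make_response(html)
    # Header-level signal so crawlers skip the page even if the markup changes.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```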

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!