Telegram faces global outages as Russia slows service

Users of the messaging app Telegram have experienced outages in multiple regions over the past 24 hours, with the largest volume of complaints coming from Russia. Reports from the US, UK, Germany, the Netherlands, and Norway suggest the issues could be global.

Difficulties primarily affected the mobile app, with users reporting login issues, messaging delays, and limited access to features. In Russia, the outages stem from deliberate traffic throttling by the regulator Roskomnadzor, which has imposed similar restrictions on WhatsApp.

Telegram’s founder, Pavel Durov, has criticised the Russian government’s actions, arguing that authorities aim to push citizens towards a state-controlled alternative, the ‘Max’ messenger.

Although Telegram has overtaken WhatsApp in Russia, with more than 95 million active users, Max has now surpassed 100 million users, a sign of the Kremlin’s growing influence over digital communications.

Russian authorities have stated that Telegram must comply with local laws, moderate content, and consider data localisation to avoid further restrictions. Durov has reaffirmed the platform’s commitment to protecting user privacy and upholding freedom of speech.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK watchdog demands stronger child safety on social platforms

The British communications regulator Ofcom has called on major technology companies to enforce stricter age controls and improve safety protections for children using online platforms.

The warning targets services widely used by young audiences, including Facebook, Instagram, Roblox, Snapchat, TikTok and YouTube.

Regulators said that despite existing minimum age policies, large numbers of children under the age of 13 continue to access platforms intended for older users.

According to Ofcom research, more than 70 percent of children aged 8 to 12 regularly use such services.

Authorities have asked companies to demonstrate how they will strengthen protections and ensure compliance with minimum age requirements.

Platforms must present their plans by 30 April, after which Ofcom will publish an assessment of their responses and determine whether further regulatory action is necessary.

The regulator also outlined several key areas requiring improvement.

Companies in the UK are expected to implement more effective age-verification systems, strengthen protections against online grooming and ensure that recommendation algorithms do not expose children to harmful content.

Another concern involves product development practices.

Ofcom warned that new digital features, including AI tools, should not be tested on children without adequate safety assessments. Platforms are required to evaluate potential risks before launching significant updates.

The measures are part of the UK’s broader regulatory framework introduced under the Online Safety Act, which aims to reduce exposure to harmful online material.

The law requires platforms to prevent children from accessing content linked to pornography, suicide, self-harm and eating disorders, while limiting the promotion of violent or abusive material in recommendation feeds.

Ofcom indicated that enforcement action may follow if companies fail to demonstrate meaningful improvements. Regulators argue that stronger safeguards are necessary to restore public trust and ensure that digital platforms prioritise child safety in their design and operation.


EU competition regulators expand scrutiny across the entire AI ecosystem

Competition authorities in the EU are broadening their oversight of the AI sector, examining every layer of the technology’s value chain.

Speaking at a conference in Berlin, EU competition chief Teresa Ribera explained that regulators are analysing the full ‘AI stack’ instead of focusing solely on consumer applications.

According to the competition chief, scrutiny extends beyond visible AI tools to the systems that support them. Investigations are assessing underlying models, the data used to train those models, as well as cloud infrastructure and energy resources that power AI systems.

Regulatory attention has already reached the application layer.

The European Commission opened an investigation in 2025 involving Meta after concerns emerged that the company could restrict competing AI assistants on its messaging platform WhatsApp.

Following regulatory pressure, Meta proposed allowing rival AI chatbots on the platform in exchange for a fee. European regulators are now assessing the proposal to determine whether additional intervention is necessary to preserve fair competition in rapidly evolving digital markets.

Authorities have also examined concentration risks across other parts of the AI ecosystem, including the infrastructure layer dominated by companies such as Nvidia.

Regulators argue that effective competition oversight must address the entire technology stack as AI markets expand quickly.


Wiz joins Google Cloud in the company’s largest acquisition

Google has completed the largest acquisition in its history, finalising the $32 billion purchase of cloud security firm Wiz. The company confirmed that Wiz will join Google Cloud while continuing to operate under its existing brand and maintaining support for multiple cloud platforms.

Wiz has built its reputation as a cloud and AI security platform designed to monitor risks across different cloud environments. The company’s technology connects code, cloud infrastructure, and runtime operations into a single security context, allowing development and security teams to detect vulnerabilities earlier and respond to threats affecting cloud workloads.

Google Cloud leaders say the acquisition strengthens the company’s broader security strategy. Wiz will complement existing services such as Google Threat Intelligence, Google Security Operations and Mandiant Consulting, contributing to a unified security platform designed to protect cloud-native applications and enterprise infrastructure.

Both companies emphasise that Wiz will remain committed to a multicloud approach. Its products will continue to operate across platforms, including Amazon Web Services, Microsoft Azure and Oracle Cloud, reflecting the company’s existing model of providing visibility and security across competing cloud ecosystems.


EU privacy watchdogs warn over US plans to expand traveller data collection

European privacy authorities have raised concerns about proposed changes to the Electronic System for Travel Authorisation that could require travellers to the US to disclose extensive personal information, including social media activity.

The European Data Protection Board, which coordinates national data protection authorities across the EU, sent a letter to the European Commission asking whether the institution plans to intervene or respond to the updated requirements.

The proposal would apply to visitors entering the US through the visa-waiver programme for short stays of up to 90 days.

Under the proposed changes, travellers may be required to provide details about their social media accounts covering the previous five years.

Authorities could also request personal data about family members, including addresses, phone numbers and dates of birth, information that privacy regulators argue is unrelated to travel authorisation.

Watchdogs also questioned how EU citizens could exercise their data protection rights once such information is transferred to US authorities, particularly regarding storage periods and potential misuse.

Parallel negotiations between the EU and the US have also attracted attention.

Discussions around a potential Enhanced Border Security Partnerships framework could allow US authorities to seek access to biometric databases held by European countries, including facial scans and fingerprint records.

European privacy regulators warned that such measures could raise significant concerns regarding fundamental rights and personal data protection for travellers from the EU.


BeatBanker malware targets Android users in Brazil

A new Android malware called BeatBanker is targeting users in Brazil through fake Starlink and government apps. The malware hijacks devices, steals banking credentials, tampers with cryptocurrency transactions, and secretly mines Monero.

Infection begins on phishing websites mimicking the Google Play Store or the ‘INSS Reembolso’ app. Users are tricked into installing trojanised APKs, which evade detection by decrypting their payloads only in memory and by refusing to run in analysis environments.

Fake update screens maintain persistence while silently downloading additional malicious payloads.

BeatBanker initially combined a banking trojan with a cryptocurrency miner. It uses accessibility permissions to monitor browsers and crypto apps, overlaying fake screens to redirect Tether and other crypto transfers.

A foreground service plays silent audio loops to prevent the device from shutting down, while Firebase Cloud Messaging enables remote control of infected devices.

The latest variant replaces the banking module with the BTMOB RAT, providing full control over devices. Capabilities include automatic permissions, background persistence, keylogging, GPS tracking, camera access, and screen-lock credential capture.
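The permission combinations described above (accessibility abuse, overlay screens, foreground-service persistence) lend themselves to simple triage heuristics. A minimal sketch, assuming a plain-XML manifest as input; the permission list is illustrative and is not Kaspersky’s actual detection logic:

```python
import xml.etree.ElementTree as ET

# Android manifest attributes are namespace-qualified.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Permissions commonly combined by Android banking trojans such as
# BeatBanker: overlay fake screens, survive reboots, stay resident.
RISKY_PERMISSIONS = {
    "android.permission.BIND_ACCESSIBILITY_SERVICE",
    "android.permission.FOREGROUND_SERVICE",
    "android.permission.RECEIVE_BOOT_COMPLETED",
    "android.permission.SYSTEM_ALERT_WINDOW",
}

def risky_permissions(manifest_xml: str) -> set[str]:
    """Return the subset of risky permissions declared in a manifest."""
    root = ET.fromstring(manifest_xml)
    declared = {
        elem.get(f"{ANDROID_NS}name")
        for elem in root.iter("uses-permission")
    }
    return declared & RISKY_PERMISSIONS

# Hypothetical manifest for demonstration only.
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
  <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/>
</manifest>"""

print(sorted(risky_permissions(manifest)))
```

A real scanner would also inspect declared services and receivers, but even this crude check flags the overlay-plus-persistence pattern the campaign relies on.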

Kaspersky warns that BeatBanker demonstrates the growing sophistication of mobile threats and multi-layered malware campaigns.


AI browsers expose new cybersecurity attack surfaces

Security researchers have demonstrated that agentic browsers, powered by AI, may introduce new cybersecurity vulnerabilities.

Experiments targeting the Comet AI browser, developed by Perplexity AI, showed that attackers could manipulate the system into executing phishing scams in only a few minutes.

The attack exploits the reasoning process used by AI agents when interacting with websites. These systems continuously explain their actions and observations, revealing internal signals that attackers can analyse to refine malicious strategies and bypass built-in safeguards.

Researchers showed that phishing pages can be iteratively refined using adversarial machine learning methods, such as Generative Adversarial Networks.

By observing how the AI browser responds to suspicious signals, attackers can optimise fraudulent pages until the system accepts them as legitimate.
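The feedback loop the researchers describe can be illustrated with a toy simulation. The threshold check and refinement rule below are invented stand-ins for a real agent and attacker, not the actual experiment:

```python
def agent_flags_page(suspicion_signals: list[float]) -> bool:
    """Toy stand-in for an AI browser's safety check: the page is
    flagged when the combined suspicious signals cross a threshold."""
    return sum(suspicion_signals) > 1.0

def attacker_refine(signals: list[float]) -> list[float]:
    """Dampen whichever signal currently contributes most, mimicking
    an attacker who removes the cues the agent reports reacting to."""
    strongest = max(range(len(signals)), key=lambda i: signals[i])
    refined = signals.copy()
    refined[strongest] *= 0.5
    return refined

# Start with a page the toy agent clearly rejects.
signals = [0.9, 0.7, 0.4]
rounds = 0
while agent_flags_page(signals):
    signals = attacker_refine(signals)
    rounds += 1

print(rounds)  # → 3 iterations before the toy agent accepts the page
```

The point of the sketch is the loop itself: because the agent leaks which signals it reacted to, each rejection tells the attacker exactly what to tone down next.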

The findings highlight a shift in the cybersecurity threat landscape. Instead of deceiving human users directly, attackers increasingly focus on manipulating the AI agents that perform online actions on behalf of users.

Security experts warn that prompt injection vulnerabilities remain a fundamental challenge for large language models and agentic systems.

Although new defensive techniques are being developed, researchers believe such weaknesses may remain difficult to eliminate.


AI agents face growing prompt injection risks

AI developers are working on new defences against prompt-injection attacks that aim to manipulate AI agents. Security specialists warn that attackers are increasingly using social engineering techniques to influence AI systems that interact with online content.

Researchers say AI agents that browse the web or handle user tasks face growing risks from hidden instructions embedded in emails or websites. Experts in the US note that attackers often attempt to trick AI into revealing sensitive information.

Engineers are responding by designing systems that limit the impact of manipulation attempts. Developers in the US say AI tools must include safeguards preventing sensitive data from being transmitted without user approval.

Security teams are also introducing technologies that detect risky actions and prompt users for confirmation. Specialists argue that strong system design and user oversight will remain essential as AI agents gain more autonomy.
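The confirmation safeguard described above can be sketched as a simple gate around agent actions. The action names and the confirmation callback are hypothetical, not any vendor’s API:

```python
# Actions that could transmit data or cause side effects require
# explicit user approval before the agent may execute them.
SENSITIVE_ACTIONS = {"send_email", "transfer_funds", "share_file"}

def execute_action(action: str, payload: str, confirm) -> str:
    """Run an agent action, pausing for user approval on anything
    that leaves the device or changes external state."""
    if action in SENSITIVE_ACTIONS:
        if not confirm(f"Agent wants to {action}: {payload!r}. Allow?"):
            return "blocked: user declined"
    return f"executed {action}"

# Safe default for demonstration: a user who approves nothing.
deny_all = lambda prompt: False

print(execute_action("summarise_page", "news article", deny_all))
print(execute_action("send_email", "draft message", deny_all))
```

Read-only actions pass through; anything sensitive is held until the user confirms, which is the human-in-the-loop pattern the security teams above are introducing.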


Anthropic lawsuit gains Big Tech support in AI dispute

Several major US technology companies have backed Anthropic in its lawsuit challenging the US Department of Defense’s decision to label the AI company a national security ‘supply chain risk’.

Google, Amazon, Apple, and Microsoft have filed legal briefs supporting Anthropic’s attempt to overturn the designation issued by Defence Secretary Pete Hegseth. Anthropic argues the decision was retaliation after the company declined to allow its AI systems to be used for mass surveillance or autonomous weapons.

In court filings, the companies warned that the government’s action could have wider consequences for the technology sector. Microsoft said the decision could have ‘broad negative ramifications for the entire technology sector’.

Microsoft, which works closely with the US government and the Department of Defense, said it agreed with Anthropic’s position that AI systems should not be used to conduct domestic mass surveillance or enable autonomous machines to initiate warfare.

A joint amicus brief supporting Anthropic was also submitted by the Chamber of Progress, a technology policy organisation funded by companies including Google, Apple, Amazon and Nvidia. The group said it was concerned about the government penalising a company for its public statements.

The brief described the designation as ‘a potentially ruinous sanction’ for businesses and warned it could create a climate in which companies fear government retaliation for expressing views.

Anthropic’s lawsuit claims the government violated its free speech rights by retaliating against the company for comments made by its leadership. The dispute escalated after Anthropic declined to remove contractual restrictions preventing its AI models from being used for mass surveillance or autonomous weapons.

The company had previously introduced safeguards in government contracts to limit certain uses of its technology. Negotiations over revised contract language continued for several weeks before the disagreement became public.

Former military officials and technology policy advocates have also filed supporting briefs, warning that the decision could discourage companies from participating in national security projects if they fear retaliation for voicing concerns. The case is currently being heard in federal court in San Francisco.


Google outlines roadmap for safer generative AI for young users

Google has presented a strategy for developing generative AI systems designed to better protect younger users while supporting learning and creativity.

The approach emphasises building conversational AI experiences that balance innovation with safeguards tailored to children and teenagers.

The company’s framework rests on three pillars: protecting young people online, respecting the role of families in digital environments and enabling youth to explore AI technologies responsibly.

According to Google, safety policies prohibit harmful content, including material linked to child exploitation, violent extremism and self-harm, while additional restrictions target age-inappropriate topics.

Safeguards are integrated throughout the AI development lifecycle, from user input to model responses. Systems use specialised classifiers to detect potentially harmful queries and prevent inappropriate outputs.

These protections are also applied to models such as Gemini, which incorporates defences against prompt manipulation and cyber misuse.
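The layered input and output checks can be sketched as follows. Keyword matching is a deliberately crude stand-in for Google’s specialised classifiers, and all names here are illustrative:

```python
# Toy term lists standing in for trained safety classifiers.
BLOCKED_INPUT_TERMS = {"self-harm", "extremism"}
BLOCKED_OUTPUT_TERMS = {"graphic violence"}

def input_safe(prompt: str) -> bool:
    """Screen the user's query before it reaches the model."""
    return not any(t in prompt.lower() for t in BLOCKED_INPUT_TERMS)

def output_safe(response: str) -> bool:
    """Screen the model's response before it reaches the user."""
    return not any(t in response.lower() for t in BLOCKED_OUTPUT_TERMS)

def guarded_generate(prompt: str, model) -> str:
    """Wrap a model with checks on both sides of generation."""
    if not input_safe(prompt):
        return "Sorry, I can't help with that."
    response = model(prompt)
    if not output_safe(response):
        return "Sorry, I can't share that."
    return response

# Hypothetical model for demonstration: echoes the topic back.
echo_model = lambda p: f"Here is an answer about {p}."
print(guarded_generate("photosynthesis", echo_model))
```

The design point is that safeguards sit on both sides of the model: a harmful query never reaches generation, and an inappropriate output never reaches the young user.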

Beyond preventing harm, Google aims to support responsible AI adoption through educational initiatives.

Resources designed for families encourage discussions about responsible technology use, while tools such as Guided Learning in Gemini seek to help students explore complex topics through structured explanations and interactive learning support.
