Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround, but are often blocked. During this summer, further limits included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed Attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Musk–Altman clash escalates over Apple’s alleged AI bias

Elon Musk has accused Apple of favouring ChatGPT on its App Store and threatened legal action, sparking a clash with OpenAI CEO Sam Altman. Musk called Apple’s practices an antitrust violation and vowed to take immediate action through his AI company, xAI.

Critics on X noted rivals like DeepSeek AI and Perplexity AI have topped the App Store this year. Altman called Musk’s claim ‘remarkable’ and accused him of manipulating X. Musk called him a ‘liar’, prompting demands for proof he never altered X’s algorithm.

OpenAI and xAI launched new versions of ChatGPT and Grok, ranked first and fifth among free iPhone apps on Tuesday. Apple, which partnered with OpenAI in 2024 to integrate ChatGPT, did not comment on the matter. Rankings take into account engagement, reviews, and downloads.

The dispute reignites a feud between Musk and OpenAI, which he co-founded but left before the success of ChatGPT. In April, OpenAI accused Musk of attempting to harm the company and establish a rival. Musk launched xAI in 2023 to compete with major players in the AI space.

Chinese startup DeepSeek has disrupted the AI market with cost-efficient models. Since ChatGPT’s 2022 debut, major tech firms have invested billions in AI. OpenAI claims Musk’s actions are driven by ambition rather than a mission for humanity’s benefit.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands regulator presses tech firms over election disinformation

The Netherlands’ competition authority will meet with 12 major online platforms, including TikTok, Facebook and X, on 15 September to address the spread before the 29 October elections.

The session will also involve the European Commission, national regulators and civil society groups.

The Authority for Consumers and Markets (ACM), which enforces the EU’s Digital Services Act in the Netherlands, is mandated to oversee election integrity under the law. The vote was called early in June after the Dutch government collapsed over migration policy disputes.

Platforms designated as Very Large Online Platforms must uphold transparent policies for moderating content and act decisively against illegal material, ACM director Manon Leijten said.

In July, the ACM contacted the platforms to outline their legal obligations, request details for their Trust and Safety teams and collect responses to a questionnaire on safeguarding public debate.

The September meeting will evaluate how companies plan to tackle disinformation, foreign interference and illegal hate speech during the campaign period.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk faces an OpenAI harassment lawsuit after a judge rejects dismissal

A federal judge has rejected Elon Musk’s bid to dismiss claims that he engaged in a ‘years-long harassment campaign’ against OpenAI.

US District Judge Yvonne Gonzalez Rogers ruled that the company’s counterclaims are sufficient to proceed as part of the lawsuit Musk filed against OpenAI and its CEO, Sam Altman, last year.

Musk, who helped found OpenAI in 2015, sued the AI firm in August 2024, alleging Altman misled him about the company’s commitment to AI safety before partnering with Microsoft and pursuing for-profit goals.

OpenAI responded with counterclaims in April, accusing Musk of persistent attacks in the press and on his platform X, demands for corporate records, and a ‘sham bid’ for the company’s assets.

The filing alleged that Musk sought to undermine OpenAI instead of supporting humanity-focused AI, intending to build a rival to take the technological lead.

The feud between Musk and Altman has continued, most recently with Musk threatening to sue Apple over App Store listings for X and his AI chatbot Grok. Altman dismissed the claim, criticising Musk for allegedly manipulating X to benefit his companies and harm competitors.

Despite the ongoing legal battle, OpenAI says it will remain focused on product development instead of engaging in public disputes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google patches critical Chrome bugs enabling code execution

Chrome security update fixes six flaws that could enable arbitrary code execution. Stable channel 139.0.7258.127/.128 (Windows, Mac) and .127 (Linux) ships high-severity patches that protect user data and system integrity.

CVE-2025-8879 is a heap buffer overflow in libaom’s video codec. CVE-2025-8880 is a V8 race condition reported by Seunghyun Lee. CVE-2025-8901 is an out-of-bounds write in ANGLE.

Detection methods included AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, and AFL. Further fixes address CVE-2025-8881 in File Picker and CVE-2025-8882, a use-after-free in Aura.

Successful exploitation could allow code to run with browser privileges through overflows and race conditions. The automatic rollout is staged; users should update it manually by going to Settings > About Chrome.

Administrators should prioritise rapid deployment in enterprise fleets. Google credited external researchers, anonymous contributors, and the Big Sleep project for coordinated reporting and early discovery.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google rolls out Preferred Sources for tailored search results

Google has introduced a new ‘Preferred Sources’ feature that allows users to curate their search results by selecting favourite websites. Once added, stories from these sites will appear more prominently in the ‘Top Stories’ section and a dedicated ‘From your sources’ section on the search results page.

Now rolling out in India and the US, the feature aims to improve search quality by helping users avoid low-value content. There is no limit to the number of sources that can be chosen, and early testers typically added more than four.

While preferred outlets will appear more often, search results will still include content from other websites.

To set preferred sources, users can click the icon next to the ‘Top Stories’ section when searching for a trending topic, find the outlet they want, and reload results.

Google says the change may also benefit publishers, offering them more visibility when AI-driven search engines sharply reduce traffic to news websites.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents face prompt injection and persistence risks, researchers warn

Zenity Labs warned at Black Hat USA that widely used AI agents can be hijacked without interaction. Attacks could exfiltrate data, manipulate workflows, impersonate users, and persist via agent memory. Researchers said knowledge sources and instructions could be poisoned.

Demos showed risks across major platforms. ChatGPT was tricked into accessing a linked Google Drive via email prompt injection. Microsoft Copilot Studio agents leaked CRM data. Salesforce Einstein rerouted customer emails. Gemini and Microsoft 365 Copilot were steered into insider-style attacks.

Vendors were notified under coordinated disclosure. Microsoft stated that ongoing platform updates have stopped the reported behaviour and highlighted built-in safeguards. OpenAI confirmed a patch and a bug bounty programme. Salesforce said its issue was fixed. Google pointed to newly deployed, layered defences.

Enterprise adoption of AI agents is accelerating, raising the stakes for governance and security. Aim Labs, which had previously flagged similar zero-click risks, said frameworks often lack guardrails. Responsibility frequently falls on organisations deploying agents, noted Aim Labs’ Itay Ravia.

Researchers and vendors emphasise layered defence against prompt injection and misuse. Strong access controls, careful tool exposure, and monitoring of agent memory and connectors remain priorities as agent capabilities expand in production.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s AI flags viewers as minors, creators demand safeguards

YouTube’s new AI age check, launched on 13 August 2025, flags suspected minors based on their viewing habits. Over 50,000 creators petitioned against it, calling it ‘AI spying’. The backlash reveals deep tensions between child safety and online anonymity.

Flagged users must verify their age with ID, credit card, or a facial scan. Creators say the policy risks normalising surveillance and shrinking digital freedoms.

SpyCloud’s 2025 report found a 22% jump in stolen identities, raising alarm over data uploads. Critics fear YouTube’s tool could invite hackers. Past scandals over AI-generated content have already hurt creator trust.

Users refer to it on X as a ‘digital ID dragnet’. Many are switching platforms or tweaking content to avoid flags. WebProNews says creators demand opt-outs, transparency, and stronger human oversight of AI systems.

As global regulation tightens, YouTube could shape new norms. Experts urge a balance between safety and privacy. Creators push for deletion rules to avoid identity risks in an increasingly surveilled online world.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Santander expands AI-first strategy with OpenAI

Santander is accelerating its AI-first transformation through a new partnership with OpenAI, aiming to embed intelligent technology into every part of the bank.

Over the past two months, ChatGPT Enterprise has been rolled out to nearly 15,000 employees across Europe and the Americas, with plans to double that number by year-end. The move forms part of a broader ambition to become an AI-native institution where all decisions and processes are data-driven.

The bank will plan a mandatory AI training programme for all staff from 2026, with a focus on responsible use, and expects to scale agentic AI to enable fully conversational banking.

Santander says its AI initiatives saved over €200 million last year. In Spain alone, speech analytics now handles 10 million calls annually, automatically updating CRM records and freeing more than 100,000 work hours. Developer productivity has risen by up to 30% on some tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance, prioritizing people’s interests. Director of research Elena Simperl called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

Drawing on the EU’s Competitiveness Compass and the Draghi report, the six principles are: data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!