Google states it has not received UK request to weaken encryption

Google has confirmed it has not received a request from the UK government to create a backdoor in its encrypted services. The clarification comes amid ongoing scrutiny of surveillance legislation and its implications for tech companies offering end-to-end encrypted services.

Reports indicate that the UK government may be reconsidering an earlier request for Apple to enable access to user data through a technical backdoor, a move that prompted strong opposition from the US government. In response to these developments, US Senator Ron Wyden has sought to clarify whether similar requests were made to other major technology companies.

While Google initially declined to respond to inquiries from Senator Wyden’s office, the company has now confirmed that it has not received a technical capabilities notice—an official order under UK law that could require companies to enable access to encrypted data.

Senator Wyden, who serves on the Senate Intelligence Committee, addressed the matter in a letter to Director of National Intelligence Tulsi Gabbard. The letter urged the US intelligence community to assess the potential national security implications of the UK’s surveillance laws and any undisclosed requests to US companies.

Meta, which offers encrypted messaging through WhatsApp and Facebook Messenger, also stated in a 17 March communication to Wyden’s office that it had ‘not received an order to backdoor our encrypted services, like that reported about Apple.’

While companies operating in the UK may be restricted from disclosing certain surveillance orders under law, confirmations such as Google’s provide rare public insight into the current landscape of international encryption policy and cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by “forcing” users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. This would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act in force and the AI Act taking effect, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an important precedent for how dominant tech firms can release AI products within widely used communication tools.


Aeroflot cyberattack cripples Russian flights in major breach

A major cyberattack on Russia’s flagship airline Aeroflot has caused severe disruptions to flights, with hundreds of passengers stranded at airports. Responsibility was claimed by two hacker groups: Ukraine’s Silent Crow and the Belarusian hacktivist collective Belarus Cyber-Partisans.

The attack is among the most damaging cyber incidents Russia has faced since the full-scale invasion of Ukraine in February 2022. Past attacks disrupted government portals and large state-run firms such as Russian Railways, but most resumed operations quickly. This time, the effects were longer-lasting.

Social media showed crowds of delayed passengers packed into Moscow’s Sheremetyevo Airport, Aeroflot’s main hub. The outage affected not only Aeroflot but also its subsidiaries, Rossiya and Pobeda.

Most of the grounded flights were domestic. However, international services to Belarus, Armenia, and Uzbekistan were also cancelled or postponed due to the IT failure.

Early on Monday, Aeroflot issued a statement warning of unspecified problems with its IT infrastructure. The company alerted passengers that delays and disruptions were likely as a result.

Later, Russia’s Prosecutor’s Office confirmed that the outage was the result of a cyberattack. It announced the opening of a criminal case and launched an investigation into the breach.

Kremlin spokesperson Dmitry Peskov described the incident as ‘quite alarming’, admitting that cyber threats remain a serious risk for all major service providers operating at scale.

In a Telegram post, Silent Crow claimed it had maintained access to Aeroflot’s internal systems for over a year. The group stated it had copied sensitive customer data, internal communications, audio recordings, and surveillance footage collected on Aeroflot employees.

The hackers claimed that all of these resources had now either been destroyed or made inaccessible. ‘Restoring them will possibly require tens of millions of dollars. The damage is strategic,’ the group wrote.

Screenshots allegedly showing Aeroflot’s compromised IT dashboards were shared via the same Telegram channel. Silent Crow hinted it may begin publishing the stolen data in the coming days.

It added: ‘The personal data of all Russians who have ever flown with Aeroflot have now also gone on a trip — albeit without luggage and to the same destination.’

The Belarus Cyber-Partisans, who have opposed Belarusian President Alexander Lukashenko’s authoritarian regime for years, said the attack was carefully planned and intended to cause maximum disruption.

‘This is a very large-scale attack and one of the most painful in terms of consequences,’ said group coordinator Yuliana Shametavets. She told The Associated Press that the group spent months preparing the strike and accessed Aeroflot’s systems by exploiting several vulnerabilities.

The Cyber-Partisans have previously claimed responsibility for other high-profile hacks. In April 2024, they said they had breached the internal network of Belarus’s state security agency, the KGB.

Belarus remains a close ally of Russia. Lukashenko, in power for over three decades, has permitted Russia to use Belarusian territory as a staging ground for the invasion of Ukraine and to deploy tactical nuclear weapons on Belarusian soil.

Russia’s aviation sector has already faced repeated interruptions this summer, often caused by Ukrainian drone attacks on military or dual-use airports. Flights have been grounded multiple times as a precaution, disrupting passenger travel.

The latest cyberattack adds a new layer of difficulty, exposing the vulnerability of even the most protected elements of Russia’s transportation infrastructure. While the full extent of the data breach is yet to be independently verified, the implications could be long-lasting.

For now, it remains unclear how long it will take Aeroflot to fully restore services or what specific data may have been leaked. Both hacker groups appear determined to continue using cyber tools as a weapon of resistance — targeting Russia’s most symbolic assets.


Free VPN use surges in UK after online safety law

The UK’s new Online Safety Act has increased VPN use, as websites introduce stricter age restrictions to comply with the law. Popular platforms such as Reddit and Pornhub are either blocking minors or adding age verification, pushing many young users to turn to free VPNs to bypass the rules.

In the days following the Act’s enforcement on 25 July, five of the ten most-downloaded free apps in the UK were VPNs.

However, cybersecurity experts warn that unvetted free VPNs can pose serious risks, with some selling user data or containing malware.

Using a VPN means routing all your internet traffic through an external server, effectively handing the VPN provider access to your browsing data.
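A toy sketch of why this matters (pure Python; the `VpnRelay` class and site names are hypothetical, for illustration only): every connection passes through the provider's relay, which can log each destination you visit, even if the payloads themselves are encrypted.

```python
class VpnRelay:
    """Simulated VPN endpoint: forwards traffic and sees every destination."""

    def __init__(self):
        self.seen = []  # destinations the provider could log

    def forward(self, host, payload):
        # The relay must know the destination to forward the traffic,
        # so the provider observes every site you connect to.
        self.seen.append(host)
        return f"response from {host}"


relay = VpnRelay()
for site in ["news.example", "bank.example", "mail.example"]:
    relay.forward(site, b"encrypted payload")

# The relay (i.e. the VPN provider) now holds a log of every site visited.
print(relay.seen)
```

A disreputable provider could sell exactly this kind of log, which is why the vetting advice below matters.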

While reputable providers like Proton VPN offer safe free tiers supported by paid plans, lesser-known services often lack transparency and may exploit users for profit.

Consumers are urged to check for clear privacy policies, audited security practices and credible business information before using a VPN. Trusted options for safer browsing include Proton VPN, TunnelBear, Windscribe, and hide.me.


EU AI Act begins as tech firms push back

Europe’s AI crackdown officially begins soon, as the EU enforces the first rules targeting developers of generative AI models like ChatGPT.

Under the AI Act, firms must now assess systemic risks, conduct adversarial testing, ensure cybersecurity, report serious incidents, and even disclose energy usage. The goal is to prevent harms related to bias, misinformation, manipulation, and lack of transparency in AI systems.

Although the legislation was passed last year, the EU only released developer guidance on 10 July, leaving tech giants with little time to adapt.

Meta, which developed the Llama AI model, has refused to sign the voluntary code of practice, arguing that it introduces legal uncertainty. Other developers have expressed concerns over how vague and generic the guidance remains, especially around copyright and practical compliance.

The EU also distinguishes itself from the US, where the Trump administration has launched a far looser AI Action Plan. While Washington supports minimal restrictions to encourage innovation, Brussels is focused on safety and transparency.

Trade tensions may grow, but experts warn that developers should not count on future political deals and should instead take immediate steps toward compliance.

The AI Act’s rollout will continue into 2026, with the next phase focusing on high-risk AI systems in healthcare, law enforcement, and critical infrastructure.

Meanwhile, questions remain over whether AI-generated content qualifies for copyright protection and how companies should handle AI in marketing or supply chains. For now, Europe’s push for safer AI is accelerating—whether Big Tech likes it or not.


Australia reverses its stance and restricts YouTube for children under 16

Australia has announced that YouTube will be banned for children under 16 starting in December, reversing its earlier exemption from strict new social media age rules. The decision follows growing concerns about online harm to young users.

Platforms like Facebook, Instagram, Snapchat, TikTok, and X are already subject to the upcoming restrictions, and YouTube will now join the list of ‘age-restricted social media platforms’.

From 10 December, all such platforms will be required to ensure users are aged 16 or older, or face fines of up to AU$50 million (£26 million) for not taking adequate steps to verify age. Although those steps remain undefined, users will not need to upload official documents like passports or licences.

The government has said platforms must find alternatives instead of relying on intrusive ID checks.

Communications Minister Anika Wells defended the policy, stating that four in ten Australian children reported recent harm on YouTube. She insisted the government would not back down under legal pressure from Alphabet Inc., YouTube’s US-based parent company.

Children can still view videos, but won’t be allowed to hold personal YouTube accounts.

YouTube criticised the move, claiming the platform is not social media but a video library often accessed through TVs. Prime Minister Anthony Albanese said Australia would campaign at a UN forum in September to promote global backing for social media age restrictions.

Exemptions will apply to apps used mainly for education, health, messaging, or gaming, which are considered less harmful.


Tea dating app suspends messaging after major data breach

The women’s dating safety app Tea has suspended its messaging feature following a cyberattack that exposed thousands of private messages, posts and images.

The app, which helps women run background checks on men, confirmed that direct messages were accessed during the initial breach disclosed in late July.

Tea has 1.6 million users, primarily in the US. Affected users will be contacted directly and offered free identity protection services, including credit monitoring and fraud alerts.

The company said it is working to strengthen its security and will provide updates as the investigation continues. Some of the leaked conversations reportedly contain sensitive discussions about infidelity and abortion.

Experts have warned that the leak of both images and messages raises the risk of emotional harm, blackmail or identity theft. Cybersecurity specialists recommend that users accept the free protection services as soon as possible.

The breach affected those who joined the app before February 2024, including users who submitted ID photos that Tea had promised would be deleted after verification.

Tea is known for allowing women to check if a potential partner is married or has a criminal record, as well as share personal experiences to flag abusive or trustworthy behaviour.

The app’s recent popularity surge has also sparked criticism, with some claiming it unfairly targets men. As users await more information, experts urge caution and vigilance.


India uses AI to catch crypto tax evaders

India’s Income Tax Department is using AI and data tools to identify tax evasion in cryptocurrency transactions. The government collected ₹437 crore in crypto taxes in 2022-2023 using machine learning and digital forensics to spot suspicious activity.

Tax authorities match tax deducted at source (TDS) data from crypto exchanges against filed returns to improve compliance. The introduction of the Crypto-Asset Reporting Framework (CARF) also enables automated sharing of tax information, aligning India’s efforts with international tax agreements.

These moves mark a push for greater transparency in India’s digital asset market. Enhanced wallet visibility and automatic data exchange aim to reduce anonymity and curb tax evasion in the crypto space.

India continues to develop regulations focused on consumer protection, cross-border cooperation, and tax compliance, demonstrating a commitment to a more traceable and accountable crypto industry.


Thailand launches crypto sandbox to boost tourism

Thailand has launched a digital asset sandbox to attract high-spending, tech-savvy tourists by enabling seamless cryptocurrency payments. The initiative lets foreign visitors convert digital assets to Thai baht and spend them using local e-money platforms.

The Securities and Exchange Commission, the Bank of Thailand, and other agencies oversee the regulatory sandbox. It aims to simplify payments from street vendors to luxury retailers, eliminating currency conversion friction and card fees.

Authorities plan to focus on merchant education, compliance, and cybersecurity to support the programme’s success.

The move aligns with Thailand’s broader strategy to become a regional digital finance and blockchain innovation hub. Recent policies include a five-year capital gains tax exemption on crypto sales through local exchanges.

The sandbox could attract fintech firms and blockchain events, signalling Thailand’s ambition to lead in digital asset adoption while maintaining regulatory safeguards.


Hanwha and Samsung lead Korea’s cyber insurance push

South Korea is stepping up efforts to strengthen its cyber insurance sector as corporate cyberattacks surge across industries. A string of major breaches has revealed widespread vulnerability and renewed demand for more comprehensive digital risk protection.

Hanwha General Insurance launched Korea’s first Cyber Risk Management Centre last November and partnered with global cybersecurity firm Theori and law firm Shin & Kim to expand its offerings.

Despite the growing need, the market remains underdeveloped. Cyber insurance makes up only 1 percent of Korea’s accident insurance sector, with a 2024 report estimating local cyber premiums at $50 million, just 0.3 percent of the global total.

Regulators and industry voices call for higher mandatory coverage, clearer underwriting standards, and financial incentives to promote adoption.

As Korean demand rises, comprehensive policies offering tailored options and emergency coverage are gaining traction, with Hanwha reporting a 200 percent revenue jump in under a year.
