Privacy concerns lead India to withdraw cyber safety app mandate

India has scrapped its order mandating smartphone manufacturers to pre-install the state-run Sanchar Saathi cyber safety app. The directive, which faced widespread criticism, had raised concerns over privacy and potential government surveillance.

Smartphone makers, including Apple and Samsung, reportedly resisted the order, highlighting that it was issued without prior consultation and challenged user privacy norms. The government argued the app was necessary to verify handset authenticity.

So far, the Sanchar Saathi app has attracted 14 million users and handles around 2,000 fraud reports daily, with a sharp spike of 600,000 new registrations in a single day. Despite these figures, the mandatory pre-installation rule provoked intense backlash from cybersecurity experts and digital rights advocates.

India’s Minister of Communications, Jyotiraditya Scindia, dismissed concerns about surveillance, insisting that the app does not enable snooping. Digital advocacy groups welcomed the withdrawal but called for complete legal clarity on the revised Cyber Security Rules, 2024.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU states strike deal on chat-scanning law

EU member states have finally reached a unified stance on a long-debated law aimed at tackling online child sexual abuse, ending years of stalemate driven by fierce privacy concerns. Governments agreed to drop the most controversial element of the original proposal, mandatory scanning of private messages, after repeated blockages and public opposition from privacy advocates who warned it would amount to mass surveillance.

The move comes as reports of child abuse material continue to surge, with global hotlines processing nearly 2.5 million suspected images last year.

The compromise, pushed forward under Denmark’s Council presidency, maintains the option for tech companies to scan content voluntarily while affirming that end-to-end encryption must not be compromised. Supporters argue that the agreement closes a regulatory gap that would otherwise open when temporary EU rules allowing voluntary detection expire in 2026.

However, children’s rights groups argue that the Council has not gone far enough, saying that simply preserving the current system will not adequately address the scale of the problem.

Privacy campaigners remain alarmed. Critics fear that framing voluntary scanning as a risk-reduction measure could encourage platforms to expand surveillance of user communications to shield themselves from liability.

Former MEP Patrick Breyer, a prominent voice in the campaign against so-called ‘chat control,’ warned that the compromise could still lead to widespread monitoring and possibly age-verification requirements that limit access to digital services.

With the Council and European Parliament now holding formal positions, negotiations will finally begin on the regulation’s final shape. But with political divisions still deep and the clock ticking toward the 2026 deadline, it may be months before the EU determines how far it is willing to go in regulating the detection of child sexual abuse material, and at what cost to users’ privacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spain opens inquiry into Meta over privacy concerns

Spain’s Prime Minister, Pedro Sánchez, has announced that an investigation will be launched against Meta following concerns over a possible large-scale violation of user privacy.

The company will be required to explain its conduct before the parliamentary committee on economy, trade and digital transformation instead of continuing to handle the issue privately.

Several research centres in Spain, Belgium and the Netherlands uncovered a concealed tracking tool used on Android devices for almost a year.

Their findings showed that web browsing data had been linked to identities on Facebook and Instagram even when users relied on incognito mode or a VPN.

The practice may have contravened key European rules such as the GDPR, the ePrivacy Directive, the Digital Markets Act and the Digital Services Act, while class action lawsuits are already underway in Germany, the US and Canada.

Pedro Sánchez explained that the investigation aims to clarify events, demand accountability from company leadership and defend any fundamental rights that might have been undermined.

He stressed that the law in Spain prevails over algorithms, platforms or corporate size, and those who infringe on rights will face consequences.

The prime minister also revealed a package of upcoming measures to counter four major threats in the digital environment: a plan that focuses on disinformation, child protection, hate speech and privacy defence, rather than reactive or fragmented actions.

He argued that social media offers value yet has evolved into a space shaped by profit over well-being, where engagement incentives overshadow rights. He concluded that the sector needs to be rebuilt to restore social cohesion and democratic resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU proposal sparks alarm over weakened privacy rules

The European Commission has released the Digital Omnibus, prompting strong criticism from privacy advocates. Campaigners argue the reforms would weaken long-standing data protection standards and introduce sweeping changes without proper consultation.

Noyb founder Max Schrems claims the plan favours large technology firms by creating loopholes around personal data and lowering user safeguards. Critics say the proposals emerge despite limited political support from EU governments, civil society groups and several parliamentary factions.

Industry, which has called for simplification for years, has welcomed the Omnibus. The changes should make business activities simpler for entities that process vast amounts of data.

The Commission is also accused of rushing the process under political pressure (errors can be found in the draft, including incorrect GDPR references), abandoning impact assessments and shifting priorities away from widely supported protections. View our analysis for a deep dive on the matter.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. Six omnibus packages have been announced in total.

The latest (no. 4) targets small mid-caps and digitalisation. Package no. 4 covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, with its first mention buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to this Omnibus debate: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of the EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Similar to the industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given the varied opinions, it is hard to predict what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications suggests a direction: on examination, the views of less privacy-friendly entities have clearly served as a strong guide.

Leaked draft document main changes

The leaked draft introduces several core changes.

Those changes include a new definition of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights being conditional on motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive, and only explicit statements similar to ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment may be allowed if it is done by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI processing. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold when the rights and freedoms of data subjects are at ‘risk’ to ‘high risk’.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change, as it would clarify which PAs automatically require a DPIA and which do not. The list would be updated every three years.

What should be noted here is that some controllers may not see their PA on this list and assume or argue that a DPIA is not required. Therefore, the language on this should make it clear that it is not a closed list.

Access requests denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject exploits the right of access by using that material against the controller, the controller may charge or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, see noyb’s in-depth analysis, which can be accessed here.

The Commission’s updated version

As of 19 November, the Commission has published its digital omnibus proposal. Most of the amendments in the leaked draft have remained. One measure that was dropped is the new definition of sensitive data, meaning that inferences can still amount to sensitive data.

However, the final document keeps three key changes that erode fundamental rights protections:

  • A narrower, subjective definition of personal data;
  • An intertwining of the ePD and the GDPR, which also allows processing for aggregation and security purposes;
  • LI being relied upon as a legal basis for AI processing of personal data.

Still, positive changes remain:

  • A single entry point for EU data breach reporting. This welcome measure streamlines reporting and eases some compliance obligations for EU businesses.
  • The whitelist/blacklist of processing activities that would or would not require a DPIA. The earlier caveat about the wording of that list still applies.

Overall, these two measures are examples of simplification measures with concrete benefits.

Now, the European Parliament must dissect this proposal and debate what to keep and what to reject. Some experts have suggested that this may take at least a year given how many changes there are, but this is not certain.

We can also expect a revised version of the Commission’s proposal to be published, given the errors in language, numbering and article referencing that have been observed. This would not entail any changes to the content.

Final remarks

Simplification in itself is a good idea, and businesses need to have enough freedom to operate without being suffocated with red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms have already been raised after the previous Omnibus package on green due diligence obligations was scrapped. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are on the 19 November proposal, which could reshape not only EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US states weigh VPN restrictions to protect minors online

US legislators in Wisconsin and Michigan are weighing proposals that would restrict the use of VPNs to access sites deemed harmful to minors. The bills build on age-verification rules for websites hosting sexual content, which lawmakers say are too easy to bypass when users connect via VPNs.

In Wisconsin, a bill that has already passed the State Assembly would require adult sites to both verify age and block visitors using VPNs, potentially making the state the first in the US to outlaw VPN use for accessing such content if the Senate approves it.

In Michigan, similar legislation would go further by obliging internet providers to monitor and block VPN connections, though that proposal has yet to advance.

Digital rights group the Electronic Frontier Foundation argues that the approach would erode privacy for everyone, not just minors.

It warns that blanket restrictions would affect businesses, students, journalists and abuse survivors who rely on VPNs for security, calling the measures ‘surveillance dressed up as safety’ and urging lawmakers instead to improve education, parental tools and support for safer online environments.

The debate comes as several European countries, including France, Italy and the UK, have introduced age-verification rules for pornography sites, but none have proposed banning VPNs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung strengthens Galaxy AI privacy and user control features

Samsung has expanded its privacy and security controls for Galaxy AI, emphasising transparency and user choice. The company stated that its AI systems are designed with privacy at their core, ensuring users remain in control of how their personal data is managed and processed.

Galaxy AI combines on-device and cloud-based processing, enabling users to choose where their information is processed. Features such as Live Translate, Interpreter and Generative Edit can operate fully on-device, preventing sensitive data from leaving the phone.

Samsung’s Security and Privacy dashboard provides clear visibility into app permissions, data sharing, and potential threats. Users can track which apps have accessed personal information and enable Auto Blocker, a tool that prevents malware and unauthorised installations.

Additional settings like Maximum Restrictions provide an extra layer of defence by blocking unsafe networks and preventing data interception. Samsung stated that its goal is to develop smarter, adaptive security systems that safeguard privacy while supporting the evolution of AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN treaty sparks debate over cybersecurity and privacy

A new UN cybercrime treaty opened for signature on 25 October, raising concerns about cybersecurity and privacy protections. The treaty allows broad cross-border cooperation on serious crimes, potentially requiring states to assist investigations that conflict with domestic laws.

Negotiations revealed disagreements over the treaty’s scope and human rights standards, primarily because it grants broad surveillance powers without clearly specifying safeguards for privacy and digital rights. Critics warn that these powers could be misused, putting cybersecurity and the rights of citizens at risk.

Governments supporting the treaty are advised to adopt safeguards, including limiting intrusive monitoring, conditioning cooperation on dual criminality, and reporting requests for assistance transparently. Even with these measures, experts caution that the treaty could pose challenges to global cybersecurity protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google enhances Chrome autofill while privacy experts urge caution

Google has introduced an update to Chrome’s enhanced autofill, allowing users to automatically complete forms with passport numbers, driving licence details and vehicle information. The feature builds on existing options such as addresses, passwords and payment details.

The new capability is available globally on desktop in all supported languages. Google said it plans to expand the types of data Chrome can recognise and fill in over the coming months, improving accuracy across complex and varied online forms.

The company stated that all personal information saved in Chrome is encrypted and stored only with the user’s consent. Before any form is completed automatically, Chrome prompts users for confirmation to ensure they remain in control of their data.

Privacy experts have raised concerns about storing such sensitive information within browsers, noting potential risks if devices are compromised. They advise users to enable two-factor authentication and regularly review their saved data to maintain security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!