Trilateral sanctions target Media Land for supporting ransomware groups

The United States, together with the United Kingdom and Australia, has imposed coordinated sanctions on Media Land, a Russian bulletproof hosting provider accused of aiding ransomware groups and broader cybercrime. The measures target senior operators and sister companies linked to attacks on businesses and critical infrastructure.

Authorities in the UK and Australia say Media Land infrastructure aided ransomware groups, including LockBit, BlackSuit, and Play, and was linked to denial-of-service attacks on US organisations. OFAC also named operators and firms that maintained systems designed to evade law enforcement.

The action also expands earlier sanctions against Aeza Group, with entities accused of rebranding and shifting infrastructure through front companies such as Hypercore to avoid restrictions introduced this year. Officials say these efforts were designed to obscure operational continuity.

According to investigators, the network relied on overseas firms in Serbia and Uzbekistan to conceal its activity and establish technical infrastructure that was detached from the Aeza brand. These entities, along with the new Aeza leadership, were designated for supporting sanctions evasion and cyber operations.

The sanctions block assets under US jurisdiction and bar US persons from dealing with listed individuals or companies. Regulators warn that financial institutions interacting with sanctioned entities may face penalties, stating that the aim is to disrupt ransomware infrastructure and encourage operators to comply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised limited oversight and said the provisions risk entrenching government control, reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Azure weathers record 15.7 Tbps cloud DDoS attack

According to Microsoft, Azure was hit on 24 October 2025 by a massive multi-vector DDoS attack that peaked at 15.72 terabits per second and unleashed 3.64 billion packets per second on a single endpoint.

The attack was traced to Aisuru, a Mirai-derived IoT botnet. More than 500,000 unique IP addresses, mostly residential devices, participated in the assault. High-rate UDP floods aimed at random ports made the attack particularly potent, although the traffic involved minimal source spoofing, which made it easier to trace back to the compromised devices.

Azure’s automated DDoS Protection infrastructure handled the traffic surge, filtering out malicious packets in real time and keeping customer workloads online.
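Azure's exact mitigation logic is proprietary, but a minimal sketch can illustrate the general idea of rate-based filtering that scrubbing layers rely on: packets from any single source that exceed a per-second budget are dropped, keeping volumetric floods away from the protected endpoint. The threshold and window size below are illustrative assumptions, not Azure parameters.

```python
import time
from collections import defaultdict

# Illustrative threshold: max packets per source IP in a one-second window.
# Real cloud-scale scrubbing uses far more sophisticated, distributed logic.
MAX_PPS_PER_SOURCE = 1000

_window_start = defaultdict(float)
_packet_count = defaultdict(int)

def allow_packet(src_ip: str, now: float | None = None) -> bool:
    """Return True if the packet should be forwarded, False if it should be dropped."""
    now = time.monotonic() if now is None else now
    # Reset the per-source counter once the one-second window has elapsed.
    if now - _window_start[src_ip] >= 1.0:
        _window_start[src_ip] = now
        _packet_count[src_ip] = 0
    _packet_count[src_ip] += 1
    return _packet_count[src_ip] <= MAX_PPS_PER_SOURCE

# Example: a source sending 1,500 packets inside one window gets throttled.
if __name__ == "__main__":
    dropped = sum(not allow_packet("203.0.113.7", now=0.5) for _ in range(1500))
    print(f"dropped {dropped} of 1500 packets")  # dropped 500 of 1500 packets
```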

From a security-policy viewpoint, this incident underscores how IoT devices continue to fuel some of the biggest cyber threats, and how major cloud platforms must scale defences rapidly to cope.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

FCC set to rescind cyber rules after Salt Typhoon hack

The FCC is scheduled this week to vote on rescinding rules imposed in January that required major telecommunications carriers to secure networks from unauthorised access and interception under Section 105 of the Communications Assistance for Law Enforcement Act.

These measures were introduced after the Salt Typhoon cyber-espionage campaign exposed vulnerabilities in US telecom infrastructure.

Current FCC Chair Brendan Carr argues the prior policy exceeded the agency’s legal authority and did not offer flexible or targeted protections. The proposed reversal follows lobbying by major carriers who claim the rules could undermine partnership efforts between public and private sectors.

Lawmakers, including Maria Cantwell, ranking Democrat on the Senate Commerce Committee, have strongly opposed the move. They describe the Salt Typhoon campaign, attributed to Chinese-linked actors targeting numerous US carriers, as one of the most serious telecom breaches in US history, emphasising that loosening these rules could undermine national security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Chrome receives emergency update to fix high-severity zero-day flaw

Google has issued an emergency update to fix the seventh Chrome zero-day exploited in attacks this year. The flaw, tracked as CVE-2025-13223, is caused by a type confusion bug in the browser’s V8 JavaScript engine and was used in the wild before the patch was released.

The company says updates will roll out across the Stable Desktop channel in the coming weeks, though users can install the fix immediately by checking for updates in Chrome’s settings. Google is withholding technical details until most users have upgraded to avoid encouraging further exploitation.

The vulnerability was reported by a member of Google’s Threat Analysis Group and allowed attackers to trigger code execution or browser crashes through malicious HTML pages. It continues a pattern of high-severity zero-days discovered and patched throughout 2025.

Google stresses that prompt updates remain essential, as attackers often target unpatched systems. Automatic updates can help ensure that newly released fixes reach users quickly and reduce exposure to emerging threats.
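For readers who want to confirm the fix has landed on a given machine, a minimal sketch along these lines can compare the locally installed Chrome version with the patched build number from Google's advisory. The binary name and the minimum version below are placeholders (both vary by platform and release channel) and are not taken from the advisory.

```python
import re
import subprocess

# Placeholder values: adjust the binary name for your platform and set
# MIN_PATCHED to the fixed Chrome version listed in Google's advisory.
CHROME_BINARY = "google-chrome"      # e.g. a full path to chrome.exe on Windows
MIN_PATCHED = (142, 0, 7444, 175)    # illustrative placeholder, not authoritative

def installed_version() -> tuple[int, ...]:
    """Parse the locally installed Chrome version, e.g. 'Google Chrome 142.0.7444.162'."""
    out = subprocess.run([CHROME_BINARY, "--version"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"(\d+(?:\.\d+)+)", out)
    if not match:
        raise RuntimeError(f"could not parse version from: {out!r}")
    return tuple(int(part) for part in match.group(1).split("."))

if __name__ == "__main__":
    version = installed_version()
    status = "patched" if version >= MIN_PATCHED else "UPDATE REQUIRED"
    print(".".join(map(str, version)), "->", status)
```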

Security experts also recommend enabling scheduled antivirus scans and using protective features, such as hardened browsers or VPNs. With multiple zero-days already patched this year, analysts say more are likely, and users should keep Chrome’s update settings enabled.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The future of EU data protection under the Omnibus Package

Introduction and background information

The Commission claims that the Omnibus Package aims to simplify certain European Union legislation to strengthen the Union’s long-term competitiveness. Six omnibus packages have been announced in total.

The latest (no. 4) targets small mid-caps and digitalisation. Package no. 4 covers data legislation, cookies and tracking technologies (i.e. the General Data Protection Regulation (GDPR) and ePrivacy Directive (ePD)), as well as cybersecurity incident reporting and adjustments to the Artificial Intelligence Act (AIA).

That ‘simplification’ is part of a broader agenda to appease business, industry and governments who argue that the EU has too much red tape. In her September 2025 speech to German economic and business associations, Ursula von der Leyen sided with industry and stated that simplification is ‘the only way to remain competitive’.

As for why these particular laws were selected, the rationale is unclear. One stated motivation for including the GDPR is its mention in Mario Draghi’s 2024 report on ‘The Future of European Competitiveness’.

Draghi, the former President of the European Central Bank, focused on innovation in advanced technologies, decarbonisation and competitiveness, as well as security. Yet, the report does not outline any concrete way in which the GDPR allegedly reduces competitiveness or requires revision.

The GDPR appears only twice in the report. First, as a brief reference to regulatory fragmentation affecting the reuse of sensitive health data across Member States (MS).

Second, in the concluding remarks, it is claimed that ‘the GDPR in particular has been implemented with a large degree of fragmentation which undermines the EU’s digital goals’. There is, however, no explanation of this ‘large fragmentation’, no supporting evidence, and no dedicated section on the GDPR, with its first mention buried in the R&I (research and innovation) context.

It is therefore unclear what legal or analytical basis the Commission relies on to justify including the GDPR in this simplification exercise.

The current debate

There are two main sides to this Omnibus debate: the privacy-forward side and the competitiveness/SME side. The two need not be mutually exclusive, but civil society warns that ‘simplification’ risks eroding privacy protection. Privacy advocates across civil society expressed strong concern and opposition to simplification in their responses to the European Commission’s recent call for evidence.

Industry positions vary in tone and ambition. For example, CrowdStrike calls for greater legal certainty under the Cybersecurity Act, such as making recital 55 binding rather than merely guiding, and introducing a one-stop-shop mechanism for incident reporting.

Meta, by contrast, urges the Commission to go beyond ‘easing administrative burdens’, calling for a pause in AI Act enforcement and a sweeping reform of EU data protection law. On the civil society side, Access Now argues that fundamental rights protections are at stake.

It warns that any reduction in consent prompts could allow tracking technologies to operate without users ever being given a real opportunity to refuse. A more balanced, yet cautious line can be found in the EDPB and EDPS joint opinion regarding easing records of processing activities for SMEs.

Similar to the industry, they support reducing administrative burdens, but with the caveat that amendments should not compromise the protection of fundamental rights, echoing key concerns of civil society.

Regarding Member State support, Estonia, France, Austria and Slovenia are firmly against any reopening of the GDPR. By contrast, the Czech Republic, Finland and Poland propose targeted amendments while Germany proposes a more systematic reopening of the GDPR.

Individual Members of the European Parliament have also come out in favour of reopening, notably Aura Salla, a Finnish centre-right MEP who previously headed Meta’s Brussels lobbying office.

Given these varied opinions, it is difficult to predict what the final version of the Omnibus will look like. Yet a leaked draft of the GDPR’s potential modifications suggests a direction: on examination, it is hard to dispute that the views of the less privacy-friendly camp have strongly shaped the text.

Main changes in the leaked draft document

The leaked draft introduces several core changes.

Those changes include a new definition of personal and sensitive data, the use of legitimate interest (LI) for AI processing, an intertwining of the ePrivacy Directive (ePD) and GDPR, data breach reforms, a centralised data protection impact assessment (DPIA) whitelist/blacklist, and access rights being conditional on motive for use.

A new definition of personal data

The draft redefines personal data so that ‘information is not personal data for everyone merely because another entity can identify that natural person’. That directly contradicts established EU case law, which holds that if an entity can, with reasonable means, identify a natural person, then the information is personal data, regardless of who else can identify that person.

A new definition of sensitive data

Under current rules, inferred information can be sensitive personal data. If a political opinion is inferred from browsing history, that inference is protected.

The draft would narrow this by limiting sensitive data to information that ‘directly reveals’ special categories (political views, health, religion, sexual orientation, race/ethnicity, trade union membership). That would remove protection from data derived through profiling and inference.

Detected patterns, such as visits to a health clinic or political website, would no longer be treated as sensitive; only explicit statements such as ‘I support the EPP’ or ‘I am Muslim’ would remain covered.

Intertwining article 5(3) ePD and the GDPR

Article 5(3) ePD is effectively copied into the GDPR as a new Article 88a. Article 88a would allow the processing of personal data ‘on or from’ terminal equipment where necessary for transmission, service provision, creating aggregated information (e.g. statistics), or for security purposes, alongside the existing legal bases in Articles 6(1) and 9(2) of the GDPR.

That generates confusion about how these legal bases interact, especially when combined with AI processing under LI. Would this mean that processing personal data ‘on or from’ terminal equipment is allowed whenever it is carried out by AI?

The scope is widened. The original ePD covered ‘storing of information, or gaining access to information already stored, in the terminal equipment’. The draft instead regulates any processing of personal data ‘on or from’ terminal equipment. That significantly expands the ePD’s reach and would force controllers to reassess and potentially adapt a broad range of existing operations.

LI for AI personal data processing

A new Article 88c GDPR, ‘Processing in the context of the development and operation of AI’, would allow controllers to rely on LI to process personal data for AI processing. That move would largely sideline data subject control. Businesses could train AI systems on individuals’ images, voices or creations without obtaining consent.

A centralised data breach portal, deadline extension and change in threshold reporting

The draft introduces three main changes to data breach reporting.

  • Extending the notification deadline from 72 to 96 hours, giving privacy teams more time to investigate and report.
  • A single EU-level reporting portal, simplifying reporting for organisations active in multiple MS.
  • Raising the notification threshold from a ‘risk’ to the rights and freedoms of data subjects to a ‘high risk’.

The first two changes are industry-friendly measures designed to streamline operations. The third is more contentious. While industry welcomes fewer reporting obligations, civil society warns that a ‘high-risk’ threshold could leave many incidents unreported. Taken together, these reforms simplify obligations, albeit at the potential cost of reducing transparency.

Centralised processing activity (PA) list requiring a DPIA

This is another welcome change as it would clarify which PAs would automatically require a DPIA and which would not. The list would be updated every 3 years.

What should be noted here is that some controllers may not see their PA on this list and assume or argue that a DPIA is not required. Therefore, the language on this should make it clear that it is not a closed list.

Access request denials

Currently, a data subject may request a copy of their data regardless of the motive. Under the draft, if a data subject uses the right of access for purposes unrelated to data protection, for example to obtain material for use against the controller, the controller may charge a fee or refuse the request.

That is problematic for the protection of rights as it impacts informational self-determination and weakens an important enforcement tool for individuals.

For more information, see noyb’s in-depth analysis, which can be accessed here.

The Commission’s updated version

On 19 November, the European Commission is expected to present its official simplification package. This section will be updated once the final text is published.

Final remarks

Simplification in itself is a good idea, and businesses need enough freedom to operate without being suffocated by red tape. However, changing a cornerstone of data protection law to such an extent that it threatens fundamental rights protections is just cause for concern.

Alarms were already raised when a previous Omnibus package scaled back green due diligence obligations. We may now be witnessing a similar rollback, this time targeting digital rights.

As a result, all eyes are on 19 November, a date that could reshape not only EU privacy standards but also global data protection norms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Eurofiber France confirms major data breach

Telecommunications company Eurofiber has acknowledged a breach of its French ATE customer platform and digital ticketing system after a hacker accessed the network through software used by the company.

Engineers detected the intrusion quickly and implemented containment measures, while the company stressed that services remained operational and banking data stayed secure. The incident affected only French operations and subsidiaries such as Netiwan, Eurafibre, Avelia, and FullSave, according to the firm.

Security researchers instead argue that the scale is far broader. International Cyber Digest reported that more than 3,600 organisations may be affected, including prominent French institutions such as Orange, Thales, the national rail operator, and major energy companies.

The outlet linked the intrusion to the ransomware group ByteToBreach, which allegedly stole Eurofiber’s entire GLPI database and accessed API keys, internal messages, passwords and client records.

A known dark web actor has now listed the stolen dataset for sale, reinforcing concerns about the growing trade in exposed corporate information. The contents reportedly range from files and personal data to cloud configurations and privileged credentials.

Eurofiber did not clarify which elements belonged to its systems and which originated from external sources.

The company has notified the French privacy regulator CNIL and continues to investigate while assuring Dutch customers that their data remains safe.

The breach underlines the vulnerability of essential infrastructure providers across Europe, echoing recent incidents in Sweden, where a compromised IT supplier exposed data belonging to over a million people.

Eurofiber says it aims to strengthen its defences to prevent similar compromises in future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfakes surge as scammers exploit AI video tools

Experts warn online video is entering a perilous new phase as AI deepfakes spread. Analysts say the number of deepfake videos circulating online climbed from roughly 500,000 in 2023 to eight million in 2025.

Security researchers say deepfake scams have risen by more than 3,000 percent recently. Studies also indicate humans correctly spot high-quality fakes only around one in four times. People are urged to question surprising clips, verify stories elsewhere and trust their instincts.

Video apps such as Sora 2 create lifelike clips that fraudsters reuse for scams. Sora passed one million downloads and later tightened its rules after racist deepfakes of Martin Luther King Jr circulated on the platform.

Specialists at Outplayed suggest checking eye blinks, mouth movements and hands for subtle distortions. Inconsistent lighting, unnaturally smooth skin or glitching backgrounds can reveal manipulated or AI-generated video.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter feel ready to defend against AI attacks. Readiness gaps also cover deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mass CCTV hack in India exposes maternity ward videos sold on Telegram

Police in Gujarat have uncovered an extensive cybercrime network selling hacked CCTV footage from hospitals, schools, offices and private homes across India.

The case surfaced after local media spotted YouTube videos showing women in a maternity ward during exams and injections, with links directing viewers to Telegram channels selling longer clips. The hospital involved said cameras were installed to protect staff from false allegations.

Investigators say hackers accessed footage from an estimated 50,000 CCTV systems nationwide and sold videos for 800–2,000 rupees, with some Telegram channels offering live feeds by subscription.

Arrests since February span Maharashtra, Uttar Pradesh, Gujarat, Delhi and Uttarakhand. Police have charged suspects under laws covering privacy violations, publication of obscene material, voyeurism and cyberterrorism.

Experts say weak security practices make Indian CCTV systems easy targets. Many devices run on default passwords such as Admin123. Hackers used brute-force tools to break into networks, according to investigators. Cybercrime specialists advise users to change their IP addresses and passwords, run periodic audits, and secure both home and office networks.
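The periodic audits specialists recommend can start with something as simple as checking a local inventory of camera credentials against a list of common defaults. The sketch below is a hypothetical, offline example: the inventory entries and the default-password list are illustrative assumptions, and nothing is probed over the network.

```python
# Minimal sketch of a local CCTV credential audit: flag cameras in your own
# inventory that still use common default passwords and should be rotated.
# The inventory format and the default-password list are illustrative.

COMMON_DEFAULTS = {"admin", "admin123", "Admin123", "12345", "password", "888888"}

# Hypothetical inventory: device name -> (username, password) as configured.
camera_inventory = {
    "ward-entrance-cam": ("admin", "Admin123"),
    "reception-cam": ("operator", "S7r0ng!pass-rotated-2025"),
    "parking-cam": ("admin", "12345"),
}

def audit(inventory: dict[str, tuple[str, str]]) -> list[str]:
    """Return the devices whose passwords match a known default."""
    return [name for name, (_user, password) in inventory.items()
            if password in COMMON_DEFAULTS]

if __name__ == "__main__":
    for device in audit(camera_inventory):
        print(f"{device}: default or weak password detected - rotate immediately")
```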

The case highlights the widespread use of CCTV in India, as cameras are prevalent in both public and private spaces, often installed without consent or proper safeguards. Rights advocates say women face added stigma when sensitive footage is leaked, which discourages complaints.

Police said no patients or hospitals filed a report in this case due to fear of exposure, so an officer submitted the complaint.

Last year, the government urged states to avoid suppliers linked to past data breaches and introduced new rules to raise security standards, but breaches remain common.

Investigators say this latest case demonstrates how easily insecure systems can be exploited and how sensitive footage can spread online, resulting in severe consequences for victims.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!