EU unveils vision for a modern justice system

The European Commission has introduced a new Digital Justice Package designed to guide EU justice systems into a fully digital era.

The plan sets out a long-term strategy to support citizens, businesses and legal professionals with modern tools instead of outdated administrative processes. Its central objectives include improved access to information, stronger cross-border cooperation and a faster shift toward AI-supported services.

The DigitalJustice@2030 Strategy contains fourteen steps that encourage member states to adopt advanced digital tools and share successful practices.

A key part of the roadmap focuses on expanding the European Legal Data Space, enabling legislation and case law to be accessed more efficiently.

The Commission intends to deepen cooperation by developing a shared toolbox for AI and IT systems and by seeking a unified European solution to cross-border videoconferencing challenges.

Additionally, the Commission has presented a Judicial Training Strategy designed to equip judges, prosecutors and legal staff with the digital and AI skills required to apply EU digital law effectively.

Training will include digital case management, secure communication methods and awareness of AI’s influence on legal practice. The goal is to align national and EU programmes to increase long-term impact, rather than fragmenting efforts.

European officials argue that digital justice strengthens competitiveness by reducing delays, encouraging transparency and improving access for citizens and businesses.

The package supports the EU’s Digital Decade ambition to make all key public services available online by 2030. It stands as a further step toward resilient and modern judicial systems across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech groups welcome EU reforms as privacy advocates warn of retreat

The EU has unveiled plans to scale back certain aspects of its AI and data privacy rules to revive innovation and alleviate regulatory pressure on businesses. The Digital Omnibus package delays stricter oversight for high-risk AI until 2027 and permits the use of anonymised personal data for model training.

The reforms amend the AI Act and several digital laws, cutting cookie pop-ups and simplifying documentation requirements for smaller firms. EU tech chief Henna Virkkunen says the aim is to boost competitiveness by removing layers of rigid regulation that have hindered start-ups and SMEs.

US tech lobby groups welcomed the overall direction but criticised the package for not going far enough, particularly on compute thresholds for systemic-risk AI and copyright provisions with cross-border effects. They argue the reforms only partially address industry concerns.

Privacy and digital rights advocates sharply opposed the changes, warning they represent a significant retreat from Europe’s rights-centric regulatory model. Groups including NOYB accused Brussels of undermining hard-won protections in favour of Big Tech interests.

Legal scholars say the proposals could shift Europe closer to a more permissive, industry-driven approach to AI and data use. They warn that the reforms may dilute the EU’s global reputation as a standard-setter for digital rights, just as the world seeks alternatives to US-style regulation.

EU eases AI and data rules to boost tech growth

The European Commission has proposed easing AI and data privacy rules to cut red tape and help European tech firms compete internationally. Companies could access datasets more freely for AI training and have 16 months to comply with ‘high-risk’ AI rules.

Brussels also aims to cut the number of cookie pop-ups, allowing users to manage consent more efficiently while protecting privacy. The move has sparked concern among rights groups and campaigners who fear the EU may be softening its stance on Big Tech.

Critics argue that loosening regulations could undermine citizen protections, while European companies welcome the changes as a way to foster innovation and reduce regulatory burdens that have slowed start-ups and smaller businesses.

EU officials emphasise that the reforms seek a balance between competitiveness and fundamental rights, saying the measures will help European firms compete with US and Chinese rivals while protecting citizen privacy.

Simplifying consent mechanisms and providing companies more operational flexibility are central to the plan’s goals.

Pennsylvania Senate passes bill to tackle AI-generated CSAM

The Pennsylvania Senate has passed Senate Bill 1050, requiring all individuals classified as mandated reporters to notify authorities of any instance of child sexual abuse material (CSAM) they become aware of, including material produced by a minor or generated using artificial intelligence.

The bill, sponsored by Senators Tracy Pennycuick, Scott Martin and Lisa Baker, addresses the recent rise in AI-generated CSAM and builds upon earlier legislation (Act 125 of 2024 and Act 35 of 2025) that targeted deepfakes, including sexually explicit deepfake content.

Supporters argue the bill strengthens child protection by closing a legal gap: while existing laws focused on CSAM involving real minors, the new measure explicitly covers AI-generated material. Senator Martin said the threat from AI-generated images is ‘very real’.

From a tech policy perspective, this law highlights how rapidly evolving AI capabilities, especially around image synthesis and manipulation, are pushing lawmakers to update obligations for reporting, investigation and accountability.

It raises questions around how institutions, schools and health-care providers will adapt to these new responsibilities and what enforcement mechanisms will look like.

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to dramatically improve diagnosis, treatment and personalised care, but it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Finally, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.

Trilateral sanctions target Media Land for supporting ransomware groups

The United States, the United Kingdom and Australia have imposed coordinated sanctions on Media Land, a Russian bulletproof hosting provider accused of aiding ransomware groups and broader cybercrime. The measures target senior operators and sister companies linked to attacks on businesses and critical infrastructure.

Authorities in the UK and Australia say Media Land infrastructure aided ransomware groups, including LockBit, BlackSuit, and Play, and was linked to denial-of-service attacks on US organisations. OFAC also named operators and firms that maintained systems designed to evade law enforcement.

The action also expands earlier sanctions against Aeza Group, with entities accused of rebranding and shifting infrastructure through front companies such as Hypercore to avoid restrictions introduced this year. Officials say these efforts were designed to obscure operational continuity.

According to investigators, the network relied on overseas firms in Serbia and Uzbekistan to conceal its activity and establish technical infrastructure that was detached from the Aeza brand. These entities, along with the new Aeza leadership, were designated for supporting sanctions evasion and cyber operations.

The sanctions block assets under US jurisdiction and bar US persons from dealing with listed individuals or companies. Regulators warn that financial institutions interacting with sanctioned entities may face penalties, stating that the aim is to disrupt ransomware infrastructure and encourage operators to comply.

DPDP law takes effect as India tightens AI-era data protections

India has activated new Digital Personal Data Protection rules that sharply restrict how technology firms collect and use personal information. The framework limits data gathering to what is necessary for a declared purpose and requires clear explanations, opt-outs, and breach notifications for Indian users.

The rules apply across digital platforms, from social media and e-commerce to banks and public services. Companies must obtain parental consent for individuals under 18 and are prohibited from using children’s data for targeted advertising. Firms have 18 months to comply with the new safeguards.

Users can request access to their data, ask why it was collected, and demand corrections or updates. They may withdraw consent at any time and, in some cases, request deletion. Companies must respond within 90 days, and individuals can appoint someone to exercise these rights.

Civil society groups welcomed stronger user rights but warned that the rules may also expand state access to personal data. The Internet Freedom Foundation criticised limited oversight and said the provisions risk entrenching government control, reducing transparency for citizens.

India is preparing further digital regulations, including new requirements for AI and social media firms. With nearly a billion online users, the government has urged platforms to label AI-generated content amid rising concerns about deepfakes, online misinformation, and election integrity.

EU simplifies digital rules to save billions for companies

The European Commission has unveiled a digital package designed to simplify rules and reduce administrative burdens, allowing businesses to focus on innovation rather than compliance.

The initiative combines the Digital Omnibus, Data Union Strategy and European Business Wallet to strengthen competitiveness across the EU while maintaining high standards of fundamental rights, data protection and safety.

The Digital Omnibus streamlines rules on AI, cybersecurity, and data. Amendments will create innovation-friendly AI regulations, simplify reporting for cybersecurity incidents, harmonise aspects of the GDPR, and modernise cookie rules.

Improved access to data and regulatory guidance will support businesses, particularly SMEs, allowing them to develop AI solutions and scale operations across member states more efficiently.

The Data Union Strategy aims to unlock high-quality data for AI, strengthen Europe’s data sovereignty, and support businesses with legal guidance and strategic measures to ensure fair treatment of EU data abroad.

Meanwhile, the European Business Wallet will provide a unified digital identity for companies, enabling secure signing, storage, and exchange of documents and communication with public authorities across 27 member states.

By easing administrative procedures, the package could save up to €5 billion by 2029, with the Business Wallet alone offering up to €150 billion in annual savings.

The Commission has launched a public consultation, the Digital Fitness Check, to assess the impact of these rules and guide future steps, ensuring that businesses can grow and innovate instead of being held back by complex regulations.

Northamptonshire Police launches live facial recognition trial

Northamptonshire Police will roll out live facial recognition cameras in three town centres. Deployments are scheduled in Northampton on 28 November and 5 December, in Kettering on 29 November, and in Wellingborough on 6 December.

The initiative uses a van on loan from Bedfordshire Police, and the watch-lists include high-risk sex offenders and persons wanted for arrest. Facial and biometric data for non-alerts are deleted immediately; alert data are held for up to 24 hours.

Police emphasise that the AI-based technology is ‘very much in its infancy’ but expect to acquire dedicated equipment in future. A coordinator post is being created to manage the LFR programme in-house.

British campaigners express concern the biometric tool may erode privacy or resemble mass surveillance. Police assert that appropriate signage and open policy documents will be in place to maintain public confidence.

Roblox brings in global age checks for chat

Children will no longer be able to chat with adult strangers on Roblox after new global age checks are introduced. The platform will begin mandatory facial age estimation in selected countries in December before expanding worldwide in January.

Roblox players will be placed into strict age groups and prevented from messaging older users unless they are verified as trusted contacts. Under-13s will remain barred from private messages unless parents actively approve access within account controls.

The company faces rising scrutiny following lawsuits in several US states, where officials argue Roblox failed to protect young users from harmful contact. Safety groups welcome the tighter rules but warn that monitoring must match the platform’s rapid growth.

Roblox says the technology is accurate and helps deliver safer digital spaces for younger players. Campaigners continue to call for broader protections as millions of children interact across games, chats and AI-enhanced features each day.
