EU unveils vision for a modern justice system

The European Commission has introduced a new Digital Justice Package designed to guide the EU's justice systems into a fully digital era.

The plan sets out a long-term strategy to support citizens, businesses and legal professionals with modern tools instead of outdated administrative processes. Central objectives include improved access to information, stronger cross-border cooperation and a faster shift toward AI-supported services.

The DigitalJustice@2030 Strategy contains fourteen steps that encourage member states to adopt advanced digital tools and share successful practices.

A key part of the roadmap focuses on expanding the European Legal Data Space, enabling legislation and case law to be accessed more efficiently.

The Commission intends to deepen cooperation by developing a shared toolbox for AI and IT systems and by seeking a unified European solution to cross-border videoconferencing challenges.

Additionally, the Commission has presented a Judicial Training Strategy designed to equip judges, prosecutors and legal staff with the digital and AI skills required to apply EU digital law effectively.

Training will include digital case management, secure communication methods and awareness of AI’s influence on legal practice. The goal is to align national and EU programmes to increase long-term impact, rather than fragmenting efforts.

European officials argue that digital justice strengthens competitiveness by reducing delays, encouraging transparency and improving access for citizens and businesses.

The package supports the EU’s Digital Decade ambition to make all key public services available online by 2030. It stands as a further step toward resilient and modern judicial systems across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK unveils major push to drive national AI growth

A significant wave of public and private investment is set to place AI at the centre of the UK’s growth strategy. AI Growth Zones backed by substantial investment will drive job creation, high-tech infrastructure and local industry development across regions such as South Wales, London and Bristol.

Government officials stated that the measures aim to provide British firms with the tools necessary to scale and compete globally.

South Wales will host a significant £10 billion development expected to create more than 5,000 jobs over the next decade. The zone will focus on data centres, advanced computing and AI research, supported by government funding for skills development and business adoption.

International tech companies expanding in the UK include Microsoft, Vantage Data Centres, Groq and Perplexity AI, each committing to new sites and enlarged workforces.

Further support will expand access to computing for researchers and start-ups nationwide. A government-backed advance market commitment worth up to £100 million will help hardware-focused AI firms secure their first key customers.

Officials confirmed nearly £500 million for the Sovereign AI Unit, which will scale domestic capabilities and back high-potential firms. Up to £137 million will also support the UK’s new AI-for-science strategy, which focuses on accelerating drug discovery and other breakthroughs.

Government representatives and industry leaders described the announcements as a turning point for the UK’s innovation capacity. Supporters say the measures will strengthen Britain’s tech leadership while creating jobs, boosting regional economies and advancing scientific progress.


Meta to block under-16 Australians from Facebook and Instagram early

Meta has begun blocking Australian users it believes are under 16 from Instagram, Facebook and Threads, starting 4 December, a week ahead of the government-mandated social media ban.

Last week, Meta sent in-app messages, emails and texts warning affected users to download their data because their accounts would soon be removed. From 4 December, the company will deactivate existing accounts and block new sign-ups for users under 16.

To appeal the deactivation, targeted users can undergo age verification by providing a ‘video selfie’ to prove they are 16 or older, or by presenting a government-issued ID. Meta says it will ‘review and improve’ its systems, deploying AI-based age-assurance methods to reduce errors.

Observers highlight the risks of false positives in Meta’s age checks. Facial age estimation, conducted through partner company Yoti, has known margins of error.

The enforcement comes amid Australia's world-first law barring under-16s from several major social media platforms, including Instagram, Snapchat, TikTok, YouTube and X.


Tech groups welcome EU reforms as privacy advocates warn of retreat

The EU has unveiled plans to scale back certain aspects of its AI and data privacy rules to revive innovation and alleviate regulatory pressure on businesses. The Digital Omnibus package delays stricter oversight for high-risk AI until 2027 and permits the use of anonymised personal data for model training.

The reforms amend the AI Act and several digital laws, cutting cookie pop-ups and simplifying documentation requirements for smaller firms. EU tech chief Henna Virkkunen says the aim is to boost competitiveness by removing layers of rigid regulation that have hindered start-ups and SMEs.

US tech lobby groups welcomed the overall direction. Still, they criticised the package for not going far enough, particularly on compute thresholds for systemic-risk AI and copyright provisions with cross-border effects. They argue the reforms only partially address industry concerns.

Privacy and digital rights advocates sharply opposed the changes, warning they represent a significant retreat from Europe’s rights-centric regulatory model. Groups including NOYB accused Brussels of undermining hard-won protections in favour of Big Tech interests.

Legal scholars say the proposals could shift Europe closer to a more permissive, industry-driven approach to AI and data use. They warn that the reforms may dilute the EU’s global reputation as a standard-setter for digital rights, just as the world seeks alternatives to US-style regulation.


EU eases AI and data rules to boost tech growth

The European Commission has proposed easing AI and data privacy rules to cut red tape and help European tech firms compete internationally. Companies could access datasets more freely for AI training and would have 16 months to comply with 'high-risk' AI rules.

Brussels also aims to cut the number of cookie pop-ups, allowing users to manage consent more efficiently while protecting privacy. The move has sparked concern among rights groups and campaigners who fear the EU may be softening its stance on Big Tech.

Critics argue that loosening regulations could undermine citizen protections, while European companies welcome the changes as a way to foster innovation and reduce regulatory burdens that have slowed start-ups and smaller businesses.

EU officials emphasise that the reforms seek a balance between competitiveness and fundamental rights, arguing the measures will help European firms compete with US and Chinese rivals while safeguarding citizen privacy.

Simplifying consent mechanisms and providing companies more operational flexibility are central to the plan’s goals.


US administration pushes back on proposal to restrict Nvidia sales to China

The White House is urging Congress to reject a bipartisan proposal that would restrict Nvidia from selling advanced AI chips to China and other countries subject to an embargo. The GAIN AI Act would require chipmakers to prioritise US buyers before exporting high-performance hardware.

Lawmakers are debating whether to attach the provision to the annual defence spending bill, a move that could accelerate approval. The White House intervention represents a significant win for Nvidia, which has lobbied to maintain export flexibility amid shifting trade policies.

China was previously a significant market for Nvidia, but the firm has pared back expectations due to rising geopolitical risks. Beijing has also increased scrutiny of US-made chips as it pushes for self-reliance in AI and semiconductor technology.

The policy discussions come shortly after Nvidia posted stronger-than-expected third-quarter earnings and issued an upbeat outlook. CEO Jensen Huang has pushed back against concerns of an AI-driven valuation bubble, arguing demand remains robust.

Nvidia’s shares rose 5 percent after hours following the earnings report, reflecting investor confidence as Washington continues to debate the future of AI chip export controls.


KT launches secure public cloud with Microsoft for South Korean enterprises

Telecoms firm KT Corp has introduced a Secure Public Cloud service in partnership with Microsoft, designed to meet South Korea's stringent data sovereignty demands rather than relying solely on global cloud platforms.

Built on Microsoft Azure, the platform targets sectors such as finance and manufacturing, offering high-performance computing while ensuring all data remains stored and processed domestically.

The service rests on three pillars: end-to-end data protection, enhanced enterprise control over cloud resources, and strict compliance with South Korea's data residency requirements.

Confidential computing encrypts data even during in-memory execution, while a managed hardware security module allows customers to fully own and manage encryption keys, enabling true end-to-end protection.

KT said the platform is particularly suitable for AI training, transaction-heavy applications, and operational workloads where data exposure could pose major risks.

By combining domestic governance with the flexibility and scalability of Azure, the company aims to give enterprises a reliable cloud solution without compromising performance or compliance.

The launch also strengthens KT’s broader cloud ecosystem, which includes KT Cloud and managed global cloud services like AWS.

KT plans to expand the Secure Public Cloud gradually across industries, responding to rising demand from organisations that need robust domestic data controls instead of facing the risks of cross-border data exposure.


Pennsylvania Senate passes bill to tackle AI-generated CSAM

The Pennsylvania Senate has passed Senate Bill 1050, requiring all individuals classified as mandated reporters to notify authorities of any instance of child sexual abuse material (CSAM) they become aware of, including material produced by a minor or generated using artificial intelligence.

The bill, sponsored by Senators Tracy Pennycuick, Scott Martin and Lisa Baker, addresses the recent rise in AI-generated CSAM and builds upon earlier legislation (Act 125 of 2024 and Act 35 of 2025) that targeted deepfakes, including sexually explicit deepfake content.

Supporters argue the bill strengthens child protection by closing a legal gap: while existing laws focused on CSAM involving real minors, the new measure explicitly covers AI-generated material. Senator Martin said the threat from AI-generated images is ‘very real’.

From a tech policy perspective, this law highlights how rapidly evolving AI capabilities, especially around image synthesis and manipulation, are pushing lawmakers to update obligations for reporting, investigation and accountability.

It raises questions around how institutions, schools and health-care providers will adapt to these new responsibilities and what enforcement mechanisms will look like.


Foxconn and OpenAI strengthen US AI manufacturing

OpenAI has formed a new partnership with Foxconn to prepare US manufacturing for a fresh generation of AI infrastructure hardware.

The agreement centres on design support and early evaluation instead of immediate purchase commitments, which gives OpenAI a path to influence development while Foxconn builds readiness inside American facilities.

Both companies expect rapid advances in AI capability to demand a new class of physical infrastructure. They plan to co-design several generations of data centre racks that can keep pace with model development instead of relying on slower single-cycle upgrades.

OpenAI will share insight into future hardware needs while Foxconn provides engineering knowledge and large-scale manufacturing capacity across the US.

A key aim is to strengthen domestic supply chains by improving rack architecture, widening access to domestic chip suppliers and expanding local testing and assembly. Foxconn intends to produce essential data centre components in the US, including cabling, networking, cooling and power systems.

The companies present such an effort as a way to support faster deployment, create more resilient infrastructure and bring economic benefits to American workers.

OpenAI frames the partnership as part of a broader push to ensure that critical AI infrastructure is built within the US instead of abroad. Company leaders argue that a robust domestic supply chain will support American leadership in AI and keep the benefits widely shared across the economy.


AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to dramatically improve diagnosis, treatment and personalised care, but it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Finally, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.
