EU approves funding for a new Onsemi semiconductor facility in the Czech Republic

The European Commission has approved €450 million in Czech support for a new integrated Onsemi semiconductor facility in Rožnov pod Radhoštěm.

The project will help strengthen Europe’s technological autonomy by advancing silicon carbide (SiC) power device production instead of relying on non-European manufacturing.

The Czech Republic plans to back a €1.64 billion investment that will create the first EU facility covering every stage from crystal growth to finished components. These products will be central to electric vehicles, fast charging systems and renewable energy technologies.

Onsemi has agreed to contribute new skills programmes, support the development of next-generation 200 mm SiC technology and fulfil priority-rated orders during future supply shortages.

The Commission reviewed the measure under Article 107(3)(c) of the Treaty on the Functioning of the EU and concluded that the aid is necessary, proportionate and limited to the minimum required to trigger the investment.

The scheme addresses a segment of the semiconductor market where the EU lacks sufficient supply, improving resilience rather than distorting competition.

The facility is expected to begin commercial activity by 2027 and will support the wider European semiconductor ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, following a German court’s finding that OpenAI is liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality: regardless of the medium, whether vinyl, MP3 or AI output, the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning: the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.
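To make that inference concrete, here is a minimal sketch of how near-verbatim overlap between a model output and a protected text can be quantified. It is an illustration, not the court’s method: the placeholder strings, the `verbatim_overlap` helper and the interpretation threshold are invented for the example.

```python
from difflib import SequenceMatcher

def verbatim_overlap(original: str, generated: str) -> float:
    """Fraction of the original text covered by blocks that also
    appear, verbatim, in the generated output."""
    matcher = SequenceMatcher(None, original.lower(), generated.lower())
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(original), 1)

# Placeholder strings, not actual lyrics from the case.
protected = "placeholder verse of a statistically unique song text"
output = "placeholder verse of a statistically unique song text"

score = verbatim_overlap(protected, output)
# A score near 1.0 for a high-entropy text is hard to explain by
# generalisation alone, which is the inference the Munich court drew.
print(f"verbatim overlap: {score:.2f}")
```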

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spain opens inquiry into Meta over privacy concerns

Spanish Prime Minister Pedro Sánchez has announced that an investigation will be launched into Meta following concerns over a possible large-scale violation of user privacy.

The company will be required to explain its conduct before the parliamentary committee on economy, trade and digital transformation instead of continuing to handle the issue privately.

Several research centres in Spain, Belgium and the Netherlands uncovered a concealed tracking tool used on Android devices for almost a year.

Their findings showed that web browsing data had been linked to identities on Facebook and Instagram even when users relied on incognito mode or a VPN.
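The researchers’ public write-up described a device-local relay: tracking scripts embedded in web pages passed browsing identifiers to the logged-in Facebook and Instagram apps listening on localhost ports, and traffic to 127.0.0.1 never leaves the device, which is why neither a VPN nor incognito mode helped. The sketch below is a heavily simplified illustration of that general pattern, assuming that description; the port number and payload are invented for the example.

```python
import socket
import threading

ready = threading.Event()

def native_app_listener(port: int) -> None:
    """Stand-in for a logged-in native app listening on a local port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    web_id = conn.recv(1024).decode()
    # The app knows who is logged in, so it can join the browser's
    # identifier to a real account identity server-side.
    print(f"app linked browsing id {web_id!r} to the logged-in account")
    conn.close()
    srv.close()

PORT = 12580  # illustrative; real ports were app-specific
t = threading.Thread(target=native_app_listener, args=(PORT,))
t.start()
ready.wait()

# Stand-in for a tracking script in a web page: it forwards a
# cookie-like identifier to the local listener. Because the traffic
# stays on 127.0.0.1, a VPN tunnel never carries it, and incognito
# mode does not prevent the logged-in app from receiving it.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", PORT))
cli.sendall(b"browser_id_123")  # payload invented for the example
cli.close()
t.join()
```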

The practice may have contravened key European rules such as the GDPR, the ePrivacy Directive, the Digital Markets Act and the Digital Services Act, while class action lawsuits are already underway in Germany, the US and Canada.

Pedro Sánchez explained that the investigation aims to clarify events, demand accountability from company leadership and defend any fundamental rights that might have been undermined.

He stressed that the law in Spain prevails over algorithms, platforms and corporate size, and that those who infringe on rights will face consequences.

The prime minister also revealed a package of upcoming measures to counter four major threats in the digital environment. The plan focuses on disinformation, child protection, hate speech and privacy defence, rather than relying on reactive or fragmented actions.

He argued that social media offers value yet has evolved into a space shaped by profit over well-being, where engagement incentives overshadow rights. He concluded that the sector needs to be rebuilt to restore social cohesion and democratic resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Twitch is classified as age-restricted by the Australian regulator

Australia’s online safety regulator has moved to classify Twitch as an age-restricted social media platform after ruling that the service is centred on user interaction through livestreamed content.

The decision means that, from 10 December, Twitch must take reasonable steps to stop children under sixteen from creating accounts, instead of relying on its own internal checks.

Pinterest has been treated differently after eSafety found that its main purpose is image collection and idea curation instead of social interaction.

As a result, the platform will not be required to follow age-restriction rules. The regulator stressed that the courts hold the final say on whether a service is age-restricted, but said the assessments were carried out to support families and industry ahead of the December deadline.

The ruling places Twitch alongside earlier named platforms such as Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube.

eSafety expects all companies operating in Australia to examine their legal responsibilities and has provided a self-assessment tool to guide platforms that may fall under the social media minimum age requirements.

eSafety confirmed that assessments have been completed in stages to offer timely advice while reviews were still underway. The regulator added that no further assessments will be released before 10 December as preparations for compliance continue across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US considers allowing Bitcoin tax payments

Americans may soon be able to pay federal taxes in Bitcoin under a new bill introduced in the House of Representatives. The proposal would send BTC tax payments straight into the US strategic reserve and spare taxpayers from capital gains reporting.
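The capital gains point matters because, under current US rules, spending appreciated Bitcoin counts as a taxable disposal. A quick worked example, with all figures invented for illustration, shows what the proposed exemption would spare:

```python
# Illustrative arithmetic only; all figures are invented.
tax_bill_usd = 10_000.00          # federal tax owed
btc_cost_basis_usd = 4_000.00     # what the taxpayer paid for the BTC
btc_value_at_payment = 10_000.00  # fair market value when handed over

# Under current rules, using appreciated BTC to settle the bill is a
# disposal, so the payment itself would realise a reportable gain:
realised_gain = btc_value_at_payment - btc_cost_basis_usd
print(f"gain normally reportable: ${realised_gain:,.0f}")  # $6,000

# The bill would waive that reporting for federal tax payments, so the
# taxpayer settles the $10,000 liability without the extra gain filing.
```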

Representative Warren Davidson says that BTC tax payments allow the government to build an appreciating reserve without purchasing coins on the open market. He argues that Bitcoin-based revenue strengthens the national position as the dollar continues to lose value to inflation.

Supporters say the plan expands the reserve in a market-neutral way and signals a firmer national stance on Bitcoin adoption. They argue a dedicated reserve reduces the risk of future regulatory hostility and may push other countries to adopt similar strategies.

Critics warn that using seized or forfeited BTC to grow the reserve creates harmful incentives for enforcement agencies. Some commentators say civil asset forfeiture already needs reform, while others argue the reserve is still positive for Bitcoin’s long-term global position.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI skilling blueprint to close Africa’s skills gap

Google has launched an AI Skilling Blueprint for Africa, activating a $7.5 million commitment to support expert local organisations in training talent. An additional $2.25 million will be used to modernise public data infrastructure.

The initiative aims to address the continent’s widening AI skills gap: over half of businesses report that a shortage of qualified professionals is their biggest barrier to growth.

The framework identifies three core groups for development: AI Learners build foundational AI skills, AI Implementers upskill professionals across key sectors, and AI Innovators develop experts and entrepreneurs to create AI solutions suited to African contexts.

Partner organisations include FATE Foundation, the African Institute for Mathematical Sciences, JA Africa and the CyberSafe Foundation.

Complementing talent development, the initiative supports the creation of a Regional Data Commons through funding from Google.org and the Data Commons initiative, in partnership with UNECA, UN DESA and PARIS21.

High-quality, trustworthy data will enable African institutions to make informed decisions, drive innovation in public health, food security and economic planning, and ultimately strengthen a sustainable AI ecosystem across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU unveils vision for a modern justice system

The European Commission has introduced a new Digital Justice Package designed to guide the EU justice systems into a fully digital era.

The plan sets out a long-term strategy to support citizens, businesses and legal professionals with modern tools instead of outdated administrative processes. Central objectives include improved access to information, stronger cross-border cooperation and a faster shift toward AI-supported services.

The DigitalJustice@2030 Strategy contains fourteen steps that encourage member states to adopt advanced digital tools and share successful practices.

A key part of the roadmap focuses on expanding the European Legal Data Space, enabling legislation and case law to be accessed more efficiently.

The Commission intends to deepen cooperation by developing a shared toolbox for AI and IT systems and by seeking a unified European solution to cross-border videoconferencing challenges.

Additionally, the Commission has presented a Judicial Training Strategy designed to equip judges, prosecutors and legal staff with the digital and AI skills required to apply EU digital law effectively.

Training will include digital case management, secure communication methods and awareness of AI’s influence on legal practice. The goal is to align national and EU programmes to increase long-term impact, rather than fragmenting efforts.

European officials argue that digital justice strengthens competitiveness by reducing delays, encouraging transparency and improving access for citizens and businesses.

The package supports the EU’s Digital Decade ambition to make all key public services available online by 2030. It stands as a further step toward resilient and modern judicial systems across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pennsylvania Senate passes bill to tackle AI-generated CSAM

The Pennsylvania Senate has passed Senate Bill 1050, requiring all individuals classified as mandated reporters to notify authorities of any instance of child sexual abuse material (CSAM) they become aware of, including material produced by a minor or generated using artificial intelligence.

The bill, sponsored by Senators Tracy Pennycuick, Scott Martin and Lisa Baker, addresses the recent rise in AI-generated CSAM and builds upon earlier legislation (Act 125 of 2024 and Act 35 of 2025) that targeted deepfakes and sexual deepfake content.

Supporters argue the bill strengthens child protection by closing a legal gap: while existing laws focused on CSAM involving real minors, the new measure explicitly covers AI-generated material. Senator Martin said the threat from AI-generated images is ‘very real’.

From a tech policy perspective, the bill highlights how rapidly evolving AI capabilities, especially around image synthesis and manipulation, are pushing lawmakers to update obligations for reporting, investigation and accountability.

It raises questions around how institutions, schools and health-care providers will adapt to these new responsibilities and what enforcement mechanisms will look like.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in healthcare gains regulatory compass from UK experts

Professor Alastair Denniston has outlined the core principles for regulating AI in healthcare, describing AI as the ‘X-ray moment’ of our time.

Like previous innovations such as MRI scanners and antibiotics, AI has the potential to improve diagnosis, treatment and personalised care dramatically. Still, it also requires careful oversight to ensure patient safety.

The MHRA’s National Commission on the Regulation of AI in Healthcare is developing a framework based on three key principles. The framework must be safe, ensuring proportionate regulation that protects patients without stifling innovation.

It must be fast, reducing delays in bringing beneficial technologies to patients and supporting small innovators who cannot endure long regulatory timelines. Finally, it must be trusted, with transparent processes that foster confidence in AI technologies today and in the future.

Professor Denniston emphasises that AI is not a single technology but a rapidly evolving ecosystem. The regulatory system must keep pace with advances while allowing the NHS to harness AI safely and efficiently.

Just as with earlier medical breakthroughs, failure to innovate can carry risks equal to the dangers of new technologies themselves.

The National Commission will soon invite the public to contribute their views through a call for evidence.

Patients, healthcare professionals, and members of the public are encouraged to share what matters to them, helping to shape a framework that balances safety, speed, and trust while unlocking the full potential of AI in the NHS.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trilateral sanctions target Media Land for supporting ransomware groups

The United States has imposed coordinated sanctions on Media Land, a Russian bulletproof hosting provider accused of aiding ransomware groups and broader cybercrime. The measures target senior operators and sister companies linked to attacks on businesses and critical infrastructure.

Authorities in the UK and Australia say Media Land infrastructure aided ransomware groups, including LockBit, BlackSuit, and Play, and was linked to denial-of-service attacks on US organisations. OFAC also named operators and firms that maintained systems designed to evade law enforcement.

The action also expands earlier sanctions against Aeza Group, with entities accused of rebranding and shifting infrastructure through front companies such as Hypercore to avoid restrictions introduced this year. Officials say these efforts were designed to obscure operational continuity.

According to investigators, the network relied on overseas firms in Serbia and Uzbekistan to conceal its activity and establish technical infrastructure that was detached from the Aeza brand. These entities, along with the new Aeza leadership, were designated for supporting sanctions evasion and cyber operations.

The sanctions block assets under US jurisdiction and bar US persons from dealing with listed individuals or companies. Regulators warn that financial institutions interacting with sanctioned entities may face penalties, stating that the aim is to disrupt ransomware infrastructure and encourage operators to comply.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!