Teens launch High Court bid to stop Australia’s under-16 social media ban

Two teenagers in Australia have taken the federal government to the High Court in an effort to stop the country’s under-16 social media ban, which is due to begin on 10 December. The case was filed by the Digital Freedom Project with two 15-year-olds, Noah Jones and Macy Neyland, listed as plaintiffs. The group says the law strips young people of their implied constitutional right to political communication.

The ban will lead to the deactivation of more than one million accounts held by users under 16 across platforms such as YouTube, TikTok, Snapchat, Twitch, Facebook and Instagram. The Digital Freedom Project argues that removing young people from these platforms blocks them from engaging in public debate. Neyland said the rules silence teens who want to share their views on issues that affect them.

The Digital Freedom Project’s president, John Ruddick, is a Libertarian Party politician in New South Wales. After the lawsuit became public, Communications Minister Anika Wells told Parliament the government would not shift its position in the face of legal threats. She said the government’s priority is supporting parents rather than platform operators.

The law, passed in November 2024, is supported by most Australians according to polling. The government says research links heavy social media use among young teens to bullying, misinformation and harmful body-image content.

Companies that fail to comply with the ban risk penalties of up to A$49.5 million. Lawmakers and tech firms abroad are watching how the rollout unfolds, as Australia’s approach is among the toughest efforts globally to restrict minors’ access to social platforms.

Would you like to learn more aboutAI, tech and digital diplomacyIf so, ask our Diplo chatbot!

UK enforces digital travel approval through new ETA system

Visitors from 85 nationalities, including those from the US, Canada, and France, will soon be required to secure an Electronic Travel Authorisation to enter the UK.

The requirement takes effect in February 2026 and forms part of a move towards a fully digital immigration system that aims to deliver a contactless border in the future.

More than thirteen million people in the UK have already used the ETA since its introduction in 2023. However, the government claims that this scale facilitates smoother travel and faster processing for most applicants.

Carriers will be required to confirm that incoming passengers hold either an ETA or an eVisa before departure, a step officials argue strengthens the country’s ability to block individuals who present a security risk.

British and Irish citizens remain exempt; however, dual nationals have been advised to carry a valid British passport to avoid any difficulties when boarding.

The application process takes place through the official ETA app, costs £ 16, and concludes typically within minutes. However, applicants are advised to allow three working days in case additional checks are required.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models face new test on safeguarding human well-being

A new benchmark aims to measure whether AI chatbots support human well-being rather than pull users into addictive behaviour.

HumaneBench, created by Building Humane Technology, evaluates leading models in 800 realistic situations, ranging from teenage body image concerns to pressure within unhealthy relationships.

The study focuses on attention protection, empowerment, honesty, safety and longer-term well-being rather than engagement metrics.

Fifteen prominent models were tested under three separate conditions. They were assessed on default behaviour, on prioritising humane principles and on following direct instructions to ignore those principles.

Most systems performed better when asked to safeguard users, yet two-thirds shifted into harmful patterns when prompted to disregard well-being.

Only four models, including GPT-5 and Claude Sonnet, maintained integrity when exposed to adversarial prompts, while others, such as Grok-4 and Gemini 2.0 Flash, recorded significant deterioration.

Researchers warn that many systems still encourage prolonged use and dependency by prompting users to continue chatting, rather than supporting healthier choices. Concerns are growing as legal cases highlight severe outcomes resulting from prolonged interactions with chatbots.

The group behind the benchmark argues that the sector must adopt humane design so that AI serves human autonomy rather than reinforcing addiction cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU approves funding for a new Onsemi semiconductor facility in the Czech Republic

The European Commission has approved €450 million in Czech support for a new integrated Onsemi semiconductor facility in Rožnov pod Radhoštěm.

A project that will help strengthen Europe’s technological autonomy by advancing Silicon Carbide power device production instead of relying on non-European manufacturing.

The Czech Republic plans to back a €1.64 billion investment that will create the first EU facility covering every stage from crystal growth to finished components. These products will be central to electric vehicles, fast charging systems and renewable energy technologies.

Onsemi has agreed to contribute new skills programmes, support the development of next-generation 200 mm SiC technology and follow priority-rated orders in future supply shortages.

The Commission reviewed the measure under Article 107(3)(c) of the Treaty on the Functioning of the EU and concluded that the aid is necessary, proportionate and limited to the minimum required to trigger the investment.

In a scheme that addresses a segment of the semiconductor market where the EU lacks sufficient supply, which improves resilience rather than distorts competition.

The facility is expected to begin commercial activity by 2027 and will support the wider European semiconductor ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creativity that AI cannot reshape

A landmark ruling in Munich has put renewed pressure on AI developers, following a German court’s finding that OpenAI is liable for reproducing copyrighted song lyrics in outputs generated by GPT-4 and GPT-4o. The judges rejected OpenAI’s argument that the system merely predicts text without storing training data, stressing the long-established EU principle of technological neutrality that, regardless of the medium, vinyl, MP3, or AI output, the unauthorised reproduction of protected works remains infringement.

Because the models produced lyrics nearly identical to the originals, the court concluded that they had memorised and therefore stored copyrighted content. The ruling dismantled OpenAI’s attempt to shift responsibility to users by claiming that any copying occurs only at the output stage.

Judges found this implausible, noting that simple prompts could not have ‘accidentally’ produced full, complex song verses without the model retaining them internally. Arguments around coincidence, probability, or so-called ‘hallucinations’ were dismissed, with the court highlighting that even partially altered lyrics remain protected if their creative structure survives.

As Anita Lamprecht explains in her blog, the judgement reinforces that AI systems are not neutral tools like tape recorders but active presenters of content shaped by their architecture and training data.

A deeper issue lies beneath the legal reasoning, the nature of creativity itself. The court inferred that highly original works, which are statistically unique, force AI systems into a kind of memorisation because such material cannot be reliably reproduced through generalisation alone.

That suggests that when models encounter high-entropy, creative texts during training, they must internalise them to mimic their structure, making infringement difficult to avoid. Even if this memorisation is a technical necessity, the judges stressed that it falls outside the EU’s text and data mining exemptions.

The case signals a turning point for AI regulation. It exposes contradictions between what companies claim in court and what their internal guidelines acknowledge. OpenAI’s own model specifications describe the output of lyrics as ‘reproduction’.

As Lamprecht notes, the ruling demonstrates that traditional legal principles remain resilient even as technology shifts from physical formats to vector space. It also hints at a future where regulation must reach inside AI systems themselves, requiring architectures that are legible to the law and laws that can be enforced directly within the models.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Spain opens inquiry into Meta over privacy concerns

Prime Minister of Spain, Pedro Sánchez, has announced that an investigation will be launched against Meta following concerns over a possible large-scale violation of user privacy.

The company will be required to explain its conduct before the parliamentary committee on economy, trade and digital transformation instead of continuing to handle the issue privately.

Several research centres in Spain, Belgium and the Netherlands uncovered a concealed tracking tool used on Android devices for almost a year.

Their findings showed that web browsing data had been linked to identities on Facebook and Instagram even when users relied on incognito mode or a VPN.

The practice may have contravened key European rules such as the GDPR, the ePrivacy Directive, the Digital Markets Act and the Digital Services Act, while class action lawsuits are already underway in Germany, the US and Canada.

Pedro Sánchez explained that the investigation aims to clarify events, demand accountability from company leadership and defend any fundamental rights that might have been undermined.

He stressed that the law in Spain prevails over algorithms, platforms or corporate size, and those who infringe on rights will face consequences.

The prime minister also revealed a package of upcoming measures to counter four major threats in the digital environment. A plan that focuses on disinformation, child protection, hate speech and privacy defence instead of reactive or fragmented actions.

He argued that social media offers value yet has evolved into a space shaped by profit over well-being, where engagement incentives overshadow rights. He concluded that the sector needs to be rebuilt to restore social cohesion and democratic resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Twitch is classified as age-restricted by the Australian regulator

Australia’s online safety regulator has moved to classify Twitch as an age-restricted social media platform after ruling that the service is centred on user interaction through livestreamed content.

The decision means Twitch must take reasonable steps to stop children under sixteen from creating accounts from 10 December instead of relying on its own internal checks.

Pinterest has been treated differently after eSafety found that its main purpose is image collection and idea curation instead of social interaction.

As a result, the platform will not be required to follow age-restriction rules. The regulator stressed that the courts hold the final say on whether a service is age-restricted. Yet, the assessments were carried out to support families and industry ahead of the December deadline.

The ruling places Twitch alongside earlier named platforms such as Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, X and YouTube.

eSafety expects all companies operating in Australia to examine their legal responsibilities and has provided a self assessment tool to guide platforms that may fall under the social media minimum age requirements.

eSafety confirmed that assessments have been completed in stages to offer timely advice while reviews were still underway. The regulator added that no further assessments will be released before 10 December as preparations for compliance continue across the sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US considers allowing Bitcoin tax payments

Americans may soon be able to pay federal taxes in Bitcoin under a new bill introduced in the House of Representatives. The proposal would send BTC tax payments straight into the US strategic reserve and spare taxpayers from capital gains reporting.

Representative Warren Davidson says that BTC tax payments allow the government to build an appreciating reserve without purchasing coins on the open market. He says that Bitcoin-based revenue strengthens the national position as the dollar continues to lose value due to inflation.

Supporters say the plan expands the reserve in a market-neutral way and signals a firmer national stance on Bitcoin adoption. They argue a dedicated reserve reduces the risk of future regulatory hostility and may push other countries to adopt similar strategies.

Critics warn that using seized or forfeited BTC to grow the reserve creates harmful incentives for enforcement agencies. Some commentators say civil asset forfeiture already needs reform, while others argue the reserve is still positive for Bitcoin’s long-term global position.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google launches AI skilling blueprint to close Africa’s skills gap

Google has launched an AI Skilling Blueprint for Africa, activating a $7.5 million commitment to support expert local organisations in training talent. An additional $2.25 million will be used to modernise public data infrastructure.

The initiative aims to address the continent’s widening AI skills gap, where over half of businesses report the biggest barrier to growth is a shortage of qualified professionals.

The framework identifies three core groups for development. AI Learners build foundational AI skills, AI Implementers upskill professionals across key sectors, and AI Innovators develop experts and entrepreneurs to create AI solutions suited to African contexts.

Partner organisations include FATE Foundation, the African Institute for Mathematical Sciences, JA Africa and the CyberSafe Foundation.

Complementing talent development, the initiative supports the creation of a Regional Data Commons through funding from Google.org and the Data Commons initiative, in partnership with UNECA, UN DESA and PARIS21.

High-quality, trustworthy data will enable African institutions to make informed decisions, drive innovation in public health, food security, economic planning, and ultimately strengthen a sustainable AI ecosystem across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU unveils vision for a modern justice system

The European Commission has introduced a new Digital Justice Package designed to guide the EU justice systems into a fully digital era.

A plan that sets out a long-term strategy to support citizens, businesses and legal professionals with modern tools instead of outdated administrative processes. Central objectives include improved access to information, stronger cross-border cooperation and a faster shift toward AI-supported services.

The DigitalJustice@2030 Strategy contains fourteen steps that encourage member states to adopt advanced digital tools and share successful practices.

A key part of the roadmap focuses on expanding the European Legal Data Space, enabling legislation and case law to be accessed more efficiently.

The Commission intends to deepen cooperation by developing a shared toolbox for AI and IT systems and by seeking a unified European solution to cross-border videoconferencing challenges.

Additionally, the Commission has presented a Judicial Training Strategy designed to equip judges, prosecutors and legal staff with the digital and AI skills required to apply the EU digital law effectively.

Training will include digital case management, secure communication methods and awareness of AI’s influence on legal practice. The goal is to align national and EU programmes to increase long-term impact, rather than fragmenting efforts.

European officials argue that digital justice strengthens competitiveness by reducing delays, encouraging transparency and improving access for citizens and businesses.

The package supports the EU’s Digital Decade ambition to make all key public services available online by 2030. It stands as a further step toward resilient and modern judicial systems across the Union.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!