Denmark moves to ban social media for under-15s amid child safety concerns

Joining a broader European trend, Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced in her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a broader debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk exploring how human-centred and humanistic language in AI policy is widespread, but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.

For example, the EU AI Act is praised yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk reducing humans to mere rubber stamps.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Meta faces fines in Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the EU’s Digital Services Act.

Although a chronological feed is already available, it is buried in the settings and cannot be made the permanent default. The court said Meta must make the setting accessible on the homepage and Reels section and ensure it stays in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Grok controversies shadow Musk’s new Grokipedia project

Elon Musk has announced that his company xAI is developing Grokipedia, a planned Wikipedia rival powered by its Grok AI chatbot. He described the project as a step towards achieving xAI’s mission of understanding the universe.

In a post on X, Musk called Grokipedia a ‘necessary improvement over Wikipedia,’ renewing his criticism of the platform’s funding model and what he views as ideological bias. He has long accused Wikimedia of leaning left and reflecting ‘woke’ influence.

Despite Musk’s efforts to position Grok as a solution to bias, the chatbot has occasionally turned on its creator. Earlier this year, it named Musk among the people doing the most harm to the US, alongside Donald Trump and Vice President JD Vance.

The Grok 4 update also drew controversy when users reported that the chatbot praised and adopted the surname of a controversial historical figure in its responses, sparking criticism of its safety. Such incidents raised questions about the limits of Musk’s oversight.

Grok is already integrated into X as a conversational assistant, providing context and explanations in real time. Musk has said it will power the platform’s recommendation algorithm by late 2025, allowing users to customise their feeds dynamically through direct requests.

Sora 2.0 release reignites debate on intellectual property in AI video

OpenAI has launched Sora 2.0, the latest version of its video generation model, alongside an iOS app available by invitation in the US and Canada. The tool offers advances in physical realism, audio-video synchronisation, and multi-shot storytelling, with built-in safeguards for security and identity control.

The app allows users to create, remix, or appear in clips generated from text or images. A Pro version, web interface, and developer API are expected soon, extending access to the model.

Sora 2.0 has reignited debate over intellectual property. According to The Wall Street Journal, OpenAI has informed studios and talent agencies that their universes could appear in generated clips unless they opt out.

The company defends its approach as an extension of fan creativity, while stressing that real people’s images and voices require prior consent, validated through a verified cameo system.

By combining new creative tools with identity safeguards, OpenAI aims to position Sora 2.0 as a leading platform in the fast-growing market for AI-generated video.

Athens Democracy Forum highlights AI challenge for democracy

The 2025 Athens Democracy Forum opened with a dedicated session on AI, ethics and democracy, co-organised by Kathimerini in partnership with The New York Times.

Held at the Athens Conservatoire, the event placed AI at the heart of discussions on the future of democratic governance.

Speakers underlined the urgency of addressing systemic challenges created by AI.

Achilleas Tsaltas, president of the Democracy & Culture Foundation, described AI as the central concern of the era, while Greece’s minister of digital governance, Dimitris Papastergiou, warned that AI should remain a servant rather than become a master.

Axel Dauchez, founder of Make.org, pointed to the conflict between democratic and authoritarian governance models and called for stronger civic education.

The opening panel brought together academics such as Oxford’s Stathis Kalyvas and Yale’s Hélène Landemore, who examined how AI affects liberal democracies, global inequalities and political accountability.

Discussions concluded with a debate on Aristotle’s ethics as a framework for evaluating opportunities and risks in AI development, moderated by Stephen Dunbar-Johnson of The New York Times.

The session continues with panels on Greece’s AI transformation blueprint, the regulation of AI, and the emerging concept of AI sovereignty as a business model.

YouTube settles Donald Trump lawsuit over account suspension for $24.5 million

YouTube has agreed to a $24.5 million settlement to resolve a lawsuit filed by President Donald Trump, stemming from the platform’s decision to suspend his account after the 6 January 2021 Capitol riot.

The lawsuit was part of a broader legal push by Trump against major tech companies over what he calls politically motivated censorship.

As part of the deal, YouTube will donate $22 million to the Trust for the National Mall on Trump’s behalf, funding a new $200 million White House ballroom project. Another $2.5 million will go to co-plaintiffs, including the American Conservative Union and author Naomi Wolf.

The settlement includes no admission of wrongdoing by YouTube and was intended to avoid further legal costs. The move follows similar multimillion-dollar settlements by Meta and X, which also suspended Trump’s accounts post-January 6.

Critics argue the settlement signals a retreat from consistent content moderation. Media scholar Timothy Koskie warned it sets a troubling precedent for global digital governance and selective enforcement.

UN Secretary-General warns humanity cannot rely on algorithms

UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than threatens it. Speaking at a UN Security Council debate, he warned that while AI can help anticipate food crises, support de-mining efforts, and prevent violence, it is equally capable of fuelling conflict through cyberattacks, disinformation, and autonomous weapons.

‘Humanity’s fate cannot be left to an algorithm,’ he stressed.

Guterres outlined four urgent priorities. First, he called for strict human oversight in all military uses of AI, repeating his demand for a global ban on lethal autonomous weapons systems. He insisted that life-and-death decisions, including any involving nuclear weapons, must never be left to machines.

Second, he pressed for coherent international regulations to ensure AI complies with international law at every stage, from design to deployment. He highlighted the dangers of AI lowering barriers to acquiring prohibited weapons and urged states to build transparency, trust, and safeguards against misuse.

Finally, he named his third and fourth priorities: protecting information integrity and closing the global AI capacity gap. He warned that AI-driven disinformation could destabilise peace processes and elections, while unequal access risks leaving developing countries behind.

The UN has already launched initiatives, including a new international scientific panel and an annual AI governance dialogue, to foster cooperation and accountability.

‘The window is closing to shape AI for peace, justice, and humanity,’ he concluded.

For more information from the 80th session of the UN General Assembly, visit our dedicated page.

LinkedIn expands AI training with default data use

LinkedIn will use member profile data to train its AI systems by default from 3 November 2025. The policy, already in place in the US and select markets, will now extend to more regions and applies to users aged 18 and over; members who prefer not to share their information must opt out manually via account settings.

According to LinkedIn, the types of data that may be used include account details, email addresses, payment and subscription information, and service-related data such as IP addresses, device IDs, and location information.

Once the setting is disabled, a member’s data will no longer be added to AI training, although information collected earlier may remain in the system. Users can request the removal of past data through a Data Processing Objection Form.

Meta and X have already adopted similar practices in the US, allowing their platforms to use user-generated posts for AI training. LinkedIn insists its approach complies with privacy rules but leaves the choice in members’ hands.

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to action against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.
