Council of Europe leads digital governance dialogue at SEEDIG 2025 in Athens

The Council of Europe is taking an active role in shaping regional digital policy by leading three key panels at the Southeastern European Dialogue on Internet Governance (SEEDIG 2025), held in Athens on 10-11 October. The discussions bring together policymakers, industry leaders, and civil society to strengthen cooperation on human rights, democracy, and the rule of law in the digital age.

The first day focuses on bridging human rights and digital innovation. A panel on ‘Public-Private Policy Dialogue’ examines how governments and companies can align emerging technologies with ethical standards through frameworks like the Council of Europe’s AI Convention. Another session tackles harmful online content and disinformation, exploring ways to balance content moderation with freedom of expression and democratic resilience in South-Eastern Europe.

On 11 October, the spotlight shifts to ‘Cyber Interference with Democracy,’ addressing how digital technologies can be misused to manipulate elections and public trust. Experts will discuss real-world cases of cyber interference and propose measures to protect democratic institutions through human rights–based approaches.

Ahead of the event, Council of Europe representatives will also meet participants of the SEEDIG Youth School to discuss opportunities within the Council’s Digital Agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facebook and Instagram Reels get multilingual boost with Meta AI

Meta has introduced new AI-powered translation features that allow Facebook and Instagram users to enjoy reels from around the world in multiple languages.

Meta AI now translates, dubs, and lip-syncs short videos in English, Spanish, Hindi, and Portuguese, with more languages to be added soon.

The tool reproduces a creator’s voice and tone while automatically syncing translated audio to their lip movements, providing a natural viewing experience. It is free for Facebook creators with over 1,000 followers and for all public Instagram accounts in countries where Meta AI is available.

The expansion is part of Meta’s goal to make global content more accessible and to help creators reach wider audiences. By breaking language barriers, Meta aims to strengthen community connections and turn Reels into a platform for global cultural exchange.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OSCE warns AI threatens freedom of thought

The OSCE has launched a new publication warning that rapid progress in AI threatens the fundamental human right to freedom of thought. The report, Think Again: Freedom of Thought in the Age of AI, calls on governments to create human rights-based safeguards for emerging technologies.

Speaking during the Warsaw Human Dimension Conference, Professor Ahmed Shaheed of the University of Essex said that freedom of thought underpins most other rights and must be actively protected. He urged states to work with ODIHR to ensure AI development respects personal autonomy and dignity.

Experts at the event said AI’s growing influence on daily life risks eroding individuals’ ability to form independent opinions. They warned that manipulation of online information, targeted advertising, and algorithmic bias could undermine free thought and democratic participation.

ODIHR recommends that states prevent coercion, discrimination, and digital manipulation, ensuring societies remain open to diverse ideas. Protecting freedom of thought, the report concludes, is essential to preserving human dignity and democratic resilience in an age shaped by AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

No breakthrough in EU debate over chat scanning

EU negotiations over the controversial ‘chat control’ proposal have once again failed to reach a breakthrough, leaving the future of the plan uncertain. The European Commission’s three-year-old proposal aims to curb the spread of child sexual abuse material by allowing authorities to require chat services to screen messages before they are encrypted.

Critics, however, warn that such measures would undermine privacy and amount to state surveillance of private communications.

Under the plan, chat services could only be ordered to scan messages after approval from a judicial authority, and the system would target known child abuse images stored in databases. Text-based messages would not be monitored, according to the Danish EU presidency, which insists that sufficient safeguards are in place.

Despite those assurances, several member states remain unconvinced. Germany has yet to reach a unified position, with Justice Minister Stefanie Hubig stressing that ‘chat control without cause must be taboo in a rule of law.’

Belgium, too, continues to deliberate, with Interior Minister Bernard Quintin calling for a ‘balanced and proportional’ approach between privacy protection and child safety.

The debate remains deeply divisive across Europe, as lawmakers and citizens grapple with a difficult question: how can online child abuse be combated effectively without sacrificing the right to private communication?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts landmark AI whistleblower law

California has enacted SB 53, offering legal protection to employees reporting AI risks or safety concerns. The law covers companies using large-scale computing for AI model training, focusing on leading developers and exempting smaller firms.

It also mandates transparency, requiring risk mitigation plans, safety test results, and reporting of critical safety incidents to the California Office of Emergency Services (OES).

The legislation responds to calls from industry insiders, including former OpenAI and DeepMind employees, who highlighted restrictive offboarding agreements that silenced criticism and limited public discussion of AI risks.

The new law protects employees who have ‘reasonable cause’ to believe a catastrophic risk exists, defined as endangering 50 lives or causing $1 billion in damages. It allows them to report concerns to regulators, the Attorney General, or management without fear of retaliation.

While experts praise the law as a crucial step, they note its limitations. The protections focus on catastrophic risks, leaving smaller but significant harms unaddressed.

Harvard law professor Lawrence Lessig argues that a lower ‘good faith’ standard for reporting would simplify protections for employees, though such reporting is currently limited to internal anonymous channels.

The law reflects growing recognition of the stakes in frontier AI, balancing the need for innovation with safeguards that encourage transparency. Advocates stress that protecting whistleblowers is essential for employees to raise AI concerns safely, even at personal or financial risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI recreations of Robin Williams spark outrage

Zelda Williams has urged people to stop sending her AI-generated videos of her late father, Robin Williams, calling the practice disturbing and disrespectful. The actor and director said the videos are exploitative and misrepresent what her father would have wanted.

In her post, she said such recreations are ‘dumb’ and a ‘waste of time and energy’, adding that turning human legacies into digital imitations is ‘gross’. She criticised those using AI to simulate deceased performers for online engagement, describing the results as emotionless and detached.

The discussion intensified after the unveiling of ‘AI actor’ Tilly Norwood, created by Dutch performer Eline Van der Velden. Unions and stars such as Emily Blunt condemned the concept, warning that AI-generated characters risk eroding human creativity and emotional authenticity.

Williams previously supported SAG-AFTRA’s campaign against the misuse of AI in entertainment, calling digital recreations of her father’s voice ‘personally disturbing’. She has continued to call for respect for real artists and their legacies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.


Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Vast numbers of children around the world own smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, led by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has signalled its willingness to explore outright bans on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.


Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.


Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Across the whole world, similar conversations are gaining traction. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.

Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although some state laws face First Amendment challenges that have reached the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to platforms, including livestreaming and adult content services.


They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

However, the legislation also does not impose a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules limit use to a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 just one hour, and those under eight years old only 40 minutes.

Parents could, however, opt out of the restrictions if they wish.

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.


At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.


In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, protecting children online is our duty as human beings and responsible citizens.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark moves to ban social media for under-15s amid child safety concerns

Joining the broader trend, Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a broader debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Beware the language of human flourishing in AI regulation

TechPolicy.Press recently published ‘Confronting Empty Humanism in AI Policy’, a thought piece by Matt Blaszczyk exploring how human-centred and humanistic language in AI policy is widespread, but often not backed by meaningful legal or regulatory substance.

Blaszczyk observes that figures such as Peter Thiel contribute to a discourse that questions the very value of human existence, but equally worrying are the voices using humanist, democratic, and romantic rhetoric to preserve the status quo. These narratives can be weaponised by actors seeking to reassure the public while avoiding strong regulation.

The article analyses executive orders, AI action plans, and regulatory proposals that promise human flourishing or protect civil liberties, but often do so under deregulatory frameworks or with voluntary oversight.

For example, the EU AI Act is praised, yet criticised for gaps and loopholes; many ‘human-in-the-loop’ provisions risk making humans mere rubber stampers.

Blaszczyk suggests that nominal humanism is used as a rhetorical shield. Humans are placed formally at the centre of laws and frameworks (copyright, free speech, democratic values), but real influence, rights protection, and liability often remain minimal.

He warns that without enforcement, oversight and accountability, human-centred AI policies risk becoming slogans rather than safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces fines in Netherlands over algorithm-first timelines

A Dutch court has ordered Meta to give Facebook and Instagram users in the Netherlands the right to set a chronological feed as their default.

The ruling follows a case brought by digital rights group Bits of Freedom, which argued that Meta’s design undermines user autonomy under the European Digital Services Act.

Although a chronological feed is already available, it is hidden and cannot be set permanently. The court said Meta must make the settings accessible on the homepage and in the Reels section, and ensure they stay in place when the apps are restarted.

If Meta does not comply within two weeks, it faces a fine of €100,000 per day, capped at €5 million.

Bits of Freedom argued that algorithmic feeds threaten democracy, particularly before elections. The court agreed the change must apply permanently rather than temporarily during campaigns.

The group welcomed the ruling but stressed it was only a small step in tackling the influence of tech giants on public debate.

Meta has not yet responded to the decision, which applies only in the Netherlands despite being based on EU law. Campaigners say the case highlights the need for more vigorous enforcement to ensure digital platforms respect user choice and democratic values.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!