The global struggle to regulate children’s social media use

Finding equilibrium in children’s use of social media

Social media has become a defining part of modern childhood. Platforms like Instagram, TikTok, Snapchat and YouTube offer connection, entertainment and information at an unprecedented scale.

Yet concerns have grown about their impact on children’s mental health, education, privacy and safety. Governments, parents and civil society increasingly debate whether children should access these spaces freely, with restrictions, or not at all.

The discussion is no longer abstract. Across the world, policymakers are moving beyond voluntary codes to legal requirements, some proposing age thresholds or even outright bans for minors.

Supporters argue that children face psychological harm and exploitation online, while critics caution that heavy restrictions can undermine rights, fail to solve root problems and create new risks.

The global conversation is now at a turning point, where choices about social media regulation will shape the next generation’s digital environment.

Why social media is both a lifeline and a threat for youth

The influence of social media on children is double-edged. On the one side, these platforms enable creativity, allow marginalised voices to be heard, and provide educational content. During the pandemic, digital networks offered a lifeline of social interaction when schools were closed.

Children and teens can build communities around shared interests, learn new skills, and sometimes even gain economic opportunities through digital platforms.

On the other side, research has linked heavy use of social media with increased rates of anxiety, depression, disrupted sleep and body image issues among young users. Recommendation algorithms often push sensational or harmful content, reinforcing vulnerabilities rather than mitigating them.

Cyberbullying, exposure to adult material, and risks of predatory contact are persistent challenges. Instead of strengthening resilience, platforms often prioritise engagement metrics that exploit children’s attention and emotional responses.

The scale of the issue is enormous. Hundreds of millions of children around the world hold smartphones before the age of 12. With digital life inseparable from daily routines, even well-meaning parents struggle to set boundaries.

Governments face pressure to intervene, but approaches vary widely, reflecting different cultural norms, levels of trust in technology firms, and political attitudes toward child protection.

The Australian approach

Australia is at the forefront of regulation. In recent years, the country has passed strong online safety laws, enforced by its eSafety Commissioner. These rules include mandatory age verification for certain online services and obligations for platforms to design products with child safety in mind.

Most notably, Australia has legislated a world-first ban on general social media access for children under 16. The government has pointed to mounting evidence of harm, from cyberbullying to mental health concerns, and has emphasised the need for early intervention.

Instead of leaving responsibility entirely to parents, the state argues that platforms themselves must redesign the way they serve children.

Critics highlight several problems. Age verification requires identity checks, which can endanger privacy and create surveillance risks. Bans may also drive children to use less-regulated spaces or fake their ages, undermining the intended protections.

Others argue that focusing only on prohibition overlooks the need for broader digital literacy education. Yet Australia’s regulatory leadership has sparked a wider debate, prompting other countries to reconsider their own approaches.

Greece’s strong position

Last week, Greece reignited the global debate with its own strong position on restricting youth access to social media.

Speaking at the United Nations General Assembly during an event hosted by Australia on digital child safety, PM Kyriakos Mitsotakis said his government was prepared to consider banning social media for children under 16.

Mitsotakis warned that societies are conducting the ‘largest uncontrolled experiment on children’s minds’ by allowing unrestricted access to social media platforms. He cautioned that while the long-term effects of the experiment remain uncertain, they are unlikely to be positive.

Additionally, the prime minister pointed to domestic initiatives already underway, such as the ban on mobile phones in schools, which he claimed has already transformed the educational experience.

Mitsotakis acknowledged the difficulties of enforcing such regulations but insisted that complexity cannot be an excuse for inaction.

Around the world, similar conversations are gaining traction. Let’s review some of them.

National initiatives across the globe

UK

The UK introduced its Online Safety Act in 2023, one of the most comprehensive frameworks for regulating online platforms. Under the law, companies must assess risks to children and demonstrate how they mitigate harms.

Age assurance is required for certain services, including those hosting pornography or content promoting suicide or self-harm. While not an outright ban, the framework places a heavy responsibility on platforms to restrict harmful material and tailor their products to younger users.

EU

The EU has not introduced a specific social media ban, but its Digital Services Act requires major platforms to conduct systemic risk assessments, including risks to minors.

However, the European Commission has signalled that it may support stricter measures on youth access to social media, keeping the option of a bloc-wide ban under review.

Commission President Ursula von der Leyen has recently endorsed the idea of a ‘digital majority age’ and pledged to gather experts by year’s end to consider possible actions.

The Commission has pointed to the Digital Services Act as a strong baseline but argued that evolving risks demand continued vigilance.

Companies must show regulators how algorithms affect young people and must offer transparency about their moderation practices.

In parallel, several EU states are piloting age verification measures for access to certain platforms. France, for example, has debated requiring parental consent for children under 15 to use social media.

USA

The USA lacks a single nationwide law, but several states are acting independently, although many of these measures face First Amendment challenges in the courts, including before the Supreme Court.

Florida, Texas, Utah, and Arkansas have passed laws requiring parental consent for minors to access social media, while others are considering restrictions.

The federal government has debated child online safety legislation, although political divides have slowed progress. Instead of a ban, American initiatives often blend parental rights, consumer protection, and platform accountability.

Canada

The Canadian government has introduced Bill C-63, the Online Harms Act, aiming to strengthen online child protection and limit the spread of harmful content.

Justice Minister Arif Virani said the legislation would ensure platforms take greater responsibility for reducing risks and preventing the amplification of content that incites hate, violence, or self-harm.

The framework would apply to social media platforms, including livestreaming and adult content services.

They would be obliged to remove material that sexually exploits children or shares intimate content without consent, while also adopting safety measures to limit exposure to harmful content such as bullying, terrorism, and extremist propaganda.

However, the legislation likewise stops short of a complete social media ban for minors.

China

China’s cyberspace regulator has proposed restrictions on children’s smartphone use. The draft rules would cap use at a maximum of two hours daily for those under 18, with stricter limits for younger age groups.

The Cyberspace Administration of China (CAC) said devices should include ‘minor mode’ programmes, blocking internet access for children between 10 p.m. and 6 a.m.

Teenagers aged 16 to 18 would be allowed two hours a day, those between eight and 16 one hour, and children under eight 40 minutes.

Notably, parents could opt out of the restrictions if they wished.
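
To make the tiers concrete, here is a minimal sketch of how a ‘minor mode’ gate could encode them, together with the overnight block and the parental opt-out. It is an illustrative reconstruction of the figures reported above, not the CAC’s actual specification, and all names are invented.

```python
from datetime import time

# Illustrative reconstruction of the CAC draft tiers described above.
DAILY_LIMIT_MINUTES = [
    (8, 40),    # under 8: 40 minutes per day
    (16, 60),   # 8 to under 16: one hour
    (18, 120),  # 16 to under 18: two hours
]

CURFEW_START, CURFEW_END = time(22, 0), time(6, 0)  # 10 p.m. to 6 a.m. block

def may_use(age: int, now: time, minutes_used_today: int,
            parental_override: bool = False) -> bool:
    """Return True if 'minor mode' would permit continued use right now."""
    if parental_override:                        # parents could opt out entirely
        return True
    if now >= CURFEW_START or now < CURFEW_END:
        return False                             # overnight internet block
    for age_below, limit in DAILY_LIMIT_MINUTES:
        if age < age_below:
            return minutes_used_today < limit
    return True                                  # 18 and over: no minor-mode cap

print(may_use(age=12, now=time(17, 30), minutes_used_today=45))  # True: under the one-hour cap
print(may_use(age=12, now=time(23, 0), minutes_used_today=0))    # False: curfew applies
```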

India

In January, India proposed new rules to tighten controls on children’s access to social media, sparking a debate over parental empowerment and privacy risks.

The draft rules would require parental consent before minors can create accounts on social media, e-commerce, or gaming platforms.

Verification would rely on identity documents or age data already held by providers.

Supporters argue the measures will give parents greater oversight and protect children from risks such as cyberbullying, harmful content, and online exploitation.

Singapore

PM Lawrence Wong has warned of the risks of excessive screen time while stressing that children must also be empowered to use technology responsibly. The ultimate goal is the right balance between safety and digital literacy.

In addition, researchers suggest schools should not ban devices out of fear but teach children how to manage them, likening digital literacy to learning how to swim safely. Such a strategy highlights that no single solution fits all societies.

Balancing rights and risks

Bans and restrictions raise fundamental rights issues. Children have the right to access information, to express themselves, and to participate in culture and society.

Overly strict bans can exclude them from opportunities that their peers elsewhere enjoy. Critics argue that bans may create inequalities between children whose families find workarounds and those who comply.

At the same time, the rights to health, safety and privacy must also be protected. The difficulty lies in striking a balance. Advocates of stronger regulation argue that platforms have failed to self-regulate effectively, and that states must step in.

Opponents argue that bans may create unintended harms and encourage authoritarian tendencies, with governments using child safety as a pretext for broader control of online spaces.

Instead of choosing one path, some propose hybrid approaches: stronger rules for design and data collection, combined with investment in education and digital resilience. Such approaches aim to prepare children to navigate online risks while making platforms less exploitative.

The future of social media and child protection

Looking forward, the global landscape is unlikely to converge on a single model. Some countries will favour bans and strict controls, others will emphasise parental empowerment, and still others will prioritise platform accountability.

What is clear is that the status quo is no longer acceptable to policymakers or to many parents.

Technological solutions will also evolve. Advances in privacy-preserving age verification may ease some concerns, although sceptics warn that surveillance risks will remain. At the same time, platforms may voluntarily redesign products for younger audiences, either to comply with regulations or to preserve trust.
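
One way such privacy-preserving checks can work is to separate the party that verifies identity from the platform, which receives only a yes-or-no answer. The sketch below, assuming Python’s cryptography package, shows the bare mechanics of a signed ‘over 16’ attestation; the claim format is invented, and real deployments add nonces, expiry and unlinkability, for instance via blind signatures or zero-knowledge proofs.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: a trusted party (e.g. an eID scheme) checks the user's real
# documents once, then signs only the bare claim the platform needs.
issuer_key = Ed25519PrivateKey.generate()
claim = b"age_over_16:true"            # the only fact that leaves the issuer
attestation = issuer_key.sign(claim)   # handed to the user, who presents it

# Platform side: verifies the signature against the issuer's public key.
# It learns that someone vetted by the issuer is over 16, and nothing else.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(attestation, claim)  # raises InvalidSignature if forged
    print("age check passed; no identity disclosed")
except InvalidSignature:
    print("attestation rejected")
```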

Ultimately, the challenge is not whether to regulate, but how. Instead of focusing solely on prohibition, governments and societies may need to build layered protections: legal safeguards, technological checks, educational investments and cultural change.

If these can align, children may inherit a safer digital world that still allows them to learn, connect and create. If they cannot, the risks of exclusion or exploitation will remain unresolved.

In conclusion, the debate over banning or restricting social media for children reflects broader tensions between freedom, safety, privacy, and responsibility. Around the globe, governments are experimenting with different balances of control and empowerment.

Australia, as we have already shown, represents one of the boldest approaches, while others, from the UK and Greece to China and Singapore, are testing different variations.

What unites them is the recognition that children cannot simply be left alone in a digital ecosystem designed for profit rather than protection.

The next decade will determine whether societies can craft a sustainable balance, where technology serves the needs of the young instead of exploiting them.

In the end, achieving that balance is our duty as human beings and responsible citizens.

Denmark moves to ban social media for under-15s amid child safety concerns

Joining the broader trend, Denmark plans to ban children under 15 from using social media, as Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement follows a broader debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Study shows how AI can uncover hidden biological mechanisms

Researchers in China have used AI to reveal how different species independently develop similar traits when adapting to shared environments. The study focuses on echolocation in bats and toothed whales, two distantly related groups that evolved the ability separately.

A team at the Institute of Zoology, Chinese Academy of Sciences, found that high-order protein features are crucial to adaptive convergence. Convergent evolution is the independent emergence of similar traits across species, often under similar ecological pressures.

Led by Zou Zhengting, the researchers developed a framework called ACEP, which utilises a pre-trained protein language model to analyse amino acid sequences. This method reveals hidden structural and functional information in proteins, shedding light on how traits are formed at the molecular level.
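
ACEP’s own code is not reproduced here; purely to illustrate the general approach, the sketch below embeds two placeholder amino acid sequences with an open-source pre-trained protein language model (assuming the fair-esm package and its ESM-2 model) and compares the resulting vectors. The sequences, the mean-pooling step and the similarity measure are illustrative choices, not ACEP’s actual method.

```python
import torch
import esm  # assumption: the open-source fair-esm package (pip install fair-esm)

# Load a pre-trained ESM-2 protein language model and its tokeniser.
model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

# Placeholder fragments standing in for echolocation-related proteins;
# a real analysis would use orthologous sequences from bats and toothed whales.
data = [
    ("bat_protein",   "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"),
    ("whale_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"),
]
_, strs, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33])  # final-layer hidden states

# Mean-pool per-residue embeddings (skipping the BOS token) into one vector
# per protein: a crude stand-in for 'high-order protein features'.
reps = out["representations"][33]
vecs = [reps[i, 1:len(s) + 1].mean(dim=0) for i, s in enumerate(strs)]

# Similar embeddings in distantly related species are the kind of signal a
# convergence analysis could then test statistically.
sim = torch.nn.functional.cosine_similarity(vecs[0], vecs[1], dim=0)
print(f"embedding cosine similarity: {sim.item():.3f}")
```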

The findings, published in the Proceedings of the National Academy of Sciences, reveal how AI can detect deep biological patterns behind convergent evolution. The study demonstrates how combining AI with protein analysis provides powerful tools for understanding complex evolutionary mechanisms.

Zou said the work deepens the understanding of life’s evolutionary laws and highlights the growing role of AI in biology. The team in China hopes this approach can be applied to other evolutionary questions, broadening the use of AI in decoding life’s hidden patterns.

Study explores AI’s role in future-proofing buildings

AI could help design buildings that are resilient to both climate extremes and infectious disease threats, according to new research. The study, conducted in collaboration with Charles Darwin University, examines the application of AI in smart buildings, with a focus on energy efficiency and management.

Buildings account for a substantial share of global carbon emissions and energy consumption, but reducing that footprint remains challenging and costly. The study highlights how AI can enhance ventilation and thermal comfort, overcoming the limitations of static HVAC systems that impact sustainability and health.

Researchers propose adaptive thermal control systems that respond in real-time to occupancy, outdoor conditions, and internal heat. Machine learning can optimise temperature and airflow to balance comfort, energy efficiency, and infection control.
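
The study’s actual models are not detailed here; as a toy illustration of the trade-off being optimised, the snippet below chooses a ventilation rate by minimising a weighted cost over energy use, comfort and a crude aerosol-dilution proxy. Every coefficient and formula is invented for the example.

```python
# Toy trade-off: higher air-change rates dilute aerosols (lowering infection
# risk) but cost more fan energy and can hurt comfort.
def cost(ach: float, occupants: int, w_energy: float = 1.0,
         w_comfort: float = 1.0, w_infection: float = 2.0) -> float:
    energy = w_energy * ach ** 2                       # fan power grows superlinearly
    comfort = w_comfort * abs(ach - 4.0)               # draughty far above ~4 ACH, stuffy below
    infection = w_infection * occupants / (1.0 + ach)  # dilution reduces exposure
    return energy + comfort + infection

def choose_airflow(occupants: int) -> float:
    """Grid-search the cheapest air-change rate; a learned model would replace
    this with predictions driven by occupancy, weather and sensor data."""
    candidates = [x / 2 for x in range(1, 21)]         # 0.5 to 10.0 ACH
    return min(candidates, key=lambda ach: cost(ach, occupants))

for people in (2, 10, 30):
    print(f"{people} occupants -> {choose_airflow(people)} ACH")
```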

A new framework enables designers and facility managers to simulate thermal scenarios and assess their impact on the risk of airborne transmission. It is modular and adaptable to different building types, offering a quantitative basis for future regulatory standards.

The study was conducted with lead author Mohammadreza Haghighat from the University of Tehran and CDU’s Ehsan Mohammadi Savadkoohi. Future work will integrate real-time sensor data to strengthen building resilience against future climate and health threats.

New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and the Asia Pacific report a rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported spending at least six figures during just one hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.

Employees embrace AI but face major training and trust gaps

SnapLogic has published new research highlighting how AI adoption reshapes daily work across industries while exposing trust, training, and leadership strategy gaps.

The study finds that 78% of employees already use AI in their roles, with half using autonomous AI agents. Workers interact with AI almost daily and save over three hours per week. However, 94% say they face barriers to practical use, with concerns over data privacy and security topping the list.

Based on a survey of 3,000 US, UK, and German employees, the research finds widespread but uneven AI support. Training is a significant gap, with only 63% receiving company-led education. Many rely on trial and error, and managers are more likely to be trained than non-managers.

Generational and hierarchical differences are also evident. Seventy percent of managers express strong confidence in AI, compared with 43% of non-managers, and half believe that in future they will be managed by AI agents rather than by people.

SnapLogic’s CTO, Jeremiah Stone, says the agile enterprise is about easing workloads and sparking creativity, not replacing people. The findings underscore the need for companies to align strategy, training, and trust to realise AI’s potential in the workplace fully.

Policy hackathon shapes OpenAI proposals ahead of EU AI strategy

OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.

The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.

Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.

Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.

The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.

OpenAI and AMD strike 6GW GPU deal to power next-generation AI infrastructure

AMD and OpenAI have announced a strategic partnership to deploy up to six gigawatts of AMD GPUs, marking one of the largest AI compute collaborations.

The multi-year agreement will begin with the rollout of one gigawatt of AMD Instinct MI450 GPUs in the second half of 2026, with further deployments planned across future AMD generations.

The deal deepens a long-standing relationship between the two companies that began with AMD’s MI300X and MI350X series.

OpenAI will adopt AMD as a core strategic compute partner, integrating its technology into large-scale AI systems and jointly optimising product roadmaps to support next-generation AI workloads.

To strengthen alignment, AMD has issued OpenAI a warrant for up to 160 million shares, with tranches vesting as the partnership achieves deployment and share-price milestones. AMD expects the collaboration to deliver tens of billions in revenue and boost its non-GAAP earnings per share.

AMD CEO Dr Lisa Su called the deal ‘a true win-win’ for both companies, while OpenAI’s Sam Altman said the partnership will ‘accelerate progress and bring advanced AI benefits to everyone faster’.

The collaboration positions AMD as a leading hardware supplier in the race to build global-scale AI infrastructure.

A new AI strategy by the EU to cut reliance on the US and China

The EU is preparing to unveil a new strategy to reduce reliance on American and Chinese technology by accelerating the growth of homegrown AI.

The ‘Apply AI strategy’, set to be presented by the EU tech chief Henna Virkkunen, positions AI as a strategic asset essential for the bloc’s competitiveness, security and resilience.

According to draft documents, the plan will prioritise adopting European-made AI tools across healthcare, defence and manufacturing.

Public administrations are expected to play a central role by integrating open-source EU AI systems, providing a market for local start-ups and reducing dependence on foreign platforms. The Commission has pledged €1bn from existing financing programmes to support the initiative.

Brussels has warned that foreign control of the ‘AI stack’ (the hardware and software that underpin advanced systems) could be ‘weaponised’ by state and non-state actors.

These concerns have intensified following Europe’s continued dependence on American tech infrastructure. Meanwhile, China’s rapid progress in AI has further raised fears that the Union risks losing influence in shaping the technology’s future.

The EU already hosts several high-potential AI firms, including France’s Mistral and Germany’s Helsing. However, they rely heavily on overseas suppliers for software, hardware, and critical minerals.

The Commission wants to accelerate the deployment of European AI-enabled defence tools, such as command-and-control systems, which remain dependent on NATO and US providers. The strategy also outlines investment in sovereign frontier models for areas like space defence.

President Ursula von der Leyen said the bloc aims to ‘speed up AI adoption across the board’ to ensure it does not miss the transformative wave.

Brussels hopes to carve out a more substantial global role in the next phase of technological competition by reframing AI as an industrial sovereignty and security instrument.

Bezos predicts gigantic gains from the current AI investment bubble

Jeff Bezos has acknowledged that an ‘AI bubble’ is underway but believes its long-term impact will be overwhelmingly positive.

Speaking at Italian Tech Week in Turin, the Amazon founder described it as an ‘industrial bubble’ rather than a purely financial one.

He argued that the intense competition and heavy investment will ultimately leave society better off, even if many projects fail. ‘When the dust settles and you see who the winners are, societies benefit from those inventions,’ he said, adding that the benefits of AI will be ‘gigantic’.

Bezos’s comments come amid surging spending by Big Tech on AI chips and data centres. Citigroup forecasts that investment will exceed $2.8 trillion by 2029.

OpenAI, Meta, Microsoft, Google and others are pouring billions into infrastructure, with projects like OpenAI’s $500 billion Stargate initiative and Meta’s $29 billion capital raise for AI data centres.

Industry leaders, including Sam Altman of OpenAI, warned of an AI bubble. Yet many argue that, unlike the dot-com era, today’s market is anchored by Nvidia and OpenAI, whose products form the backbone of AI development.

The challenge for tech giants will be finding ways to recover vast investments while sustaining rapid growth.
