Global Youth Summit: Too Young to Scroll? Age verification and social media regulation
23 Jun 2025 15:30h - 17:00h
Session at a glance
Summary
The IGF 2025 Global Youth Summit focused on the critical issue of age verification and online safety for young people on social media platforms, examining how to protect youth from harmful content while preserving their digital rights and freedoms. The discussion brought together government officials, platform representatives, and youth advocates to address the growing concerns about social media’s impact on children and teenagers.
Australian Ambassador Brendan Dowling explained his country’s recent legislation requiring age verification to restrict social media access for users under 16, emphasizing that the measure targets platforms rather than banning children directly. He argued that social media platforms were not designed for children and that documented harms to mental health, attention spans, and overall well-being warranted government intervention. Martin Ruby from Meta presented the company’s perspective, advocating for device-level age verification through app stores and highlighting their development of teen accounts with enhanced safety features. He stressed the importance of maintaining flexibility in age verification approaches while acknowledging the significant differences between younger and older teens.
Youth representatives Amina Ramallan and Laura Rego emphasized the need for multi-stakeholder approaches that include young people’s voices in policy development. They highlighted challenges in developing countries where children often access social media through shared devices, making traditional age verification more complex. The panelists discussed various solutions including safety-by-design principles, improved digital literacy education, and stronger enforcement mechanisms.
Participants raised concerns about balancing protection with access to beneficial aspects of social media, particularly for marginalized youth who rely on these platforms for support networks. The discussion concluded with calls for increased awareness, better parental involvement, and the integration of youth perspectives into both government policy and corporate governance decisions affecting their digital experiences.
Key points
## Major Discussion Points:
– **Age verification and social media access restrictions**: The discussion centered heavily on Australia’s new legislation requiring age verification to prevent children under 16 from accessing social media platforms, with debates about implementation methods, effectiveness, and potential unintended consequences of such restrictions.
– **Multi-stakeholder responsibility for child online safety**: Participants emphasized that protecting young people online requires coordinated efforts from governments, social media platforms, parents, educators, and civil society rather than placing responsibility on any single entity.
– **Youth participation in policy-making**: A recurring theme was the importance of including young people’s voices in designing policies that affect them, with several speakers advocating for meaningful youth engagement in both government regulation and corporate policy development.
– **Platform design and business model concerns**: Discussion addressed how social media platforms’ attention-based economic models and addictive design features (like unlimited scrolling, algorithmic feeds, targeted advertising) may be inherently harmful to young users, regardless of content moderation efforts.
– **Digital literacy and education as alternatives to restriction**: Participants explored whether enhanced digital literacy programs, educational initiatives, and awareness campaigns might be more effective than age-based restrictions in protecting young people online.
## Overall Purpose:
The discussion aimed to examine approaches for protecting young people from online harms while preserving their rights and access to digital opportunities. The summit sought to bring together youth representatives, government officials, platform representatives, and other stakeholders to explore age verification policies, platform responsibilities, and alternative safety measures in a multi-stakeholder dialogue format.
## Overall Tone:
The discussion maintained a professional and collaborative tone throughout, though it revealed underlying tensions between different approaches to youth online safety. While respectful, the conversation showed clear disagreements between those favoring restrictive measures (like Australia’s age verification requirements) and those preferring platform regulation and digital literacy approaches. The tone became more engaged and dynamic during the open discussion period when youth participants directly challenged some of the proposed solutions, but remained constructive and solution-oriented throughout.
Speakers
**Speakers from the provided list:**
– **Lynn St. Amour** – Former IGF MAG Chair (2016-2019), Former President and CEO of the Internet Society (14 years), Co-moderator of the session
– **Jasmine Ko** – Hong Kong Youth IGF National Coordinator, Co-moderator of the session
– **Li Junhua** – Under-Secretary-General for Economic and Social Affairs, UN
– **Amina Ramallan** – Deputy Manager at the Nigerian Communications Commission, Youth representative
– **Laura Rego** – Brazilian IGF representative, Student at the Federal University of Pará, Research fellow with the National Council for Scientific and Technological Development, Youth representative
– **Martin Ruby** – Director for Public Policy for Nordic Countries, Meta
– **Brendan Dowling** – Ambassador for Cyber Affairs and Critical Technology, Department of Foreign Affairs and Trade, Government of Australia
– **Larry Magid** – CEO of Connect Safely (US-based NGO)
– **Vivek Silwal** – From Nepal
– **Aditya Majumdar** – Dynamic Teen Coalition member
– **Audience** – Multiple unidentified audience members who asked questions and made comments
**Additional speakers:**
– **Giovanna** – From Brazil, Brazilian Youth Program participant (CGI.br)
– **Amy** – Digital ambassador for online safety for children (6 years’ experience), UK Safer Internet Center
– **Cosima** – Works at UCL London’s Digital Speech Lab, Previously with UK Safer Internet Center
– **Wouter** – From the Netherlands
– **Claire** – High school student from Hong Kong
– **Heilan** – Politics and technology student from Germany
– **Kenneth** – From London, Advisor of Asia-Pacific Policy Observatory
– **Louvo Gray** – Chairperson of the South African Youth Internet Governance Forum
– **Rick Lance** – Online participant (question submitted via Zoom)
– **Umut Pajaro** – Online participant (question submitted via Zoom)
Full session report
# IGF 2025 Global Youth Summit: Age Verification and Online Safety Discussion
## Executive Summary
The IGF 2025 Global Youth Summit convened a critical multi-stakeholder dialogue on age verification and online safety for young people on social media platforms. The discussion brought together government officials, platform representatives, youth advocates, and civil society members to examine the complex challenge of protecting youth from online harms whilst preserving their digital rights and freedoms. The session was co-moderated by Lynn St. Amour, former IGF MAG Chair and Internet Society President, and Jasmine Ko, Hong Kong Youth IGF National Coordinator.
The conversation centred on Australia’s recent legislation requiring age verification to restrict social media access for users under 16, with broader implications for global approaches to youth online safety. Participants explored various perspectives on implementation methods, platform responsibilities, and alternative solutions, revealing both significant consensus on fundamental principles and substantial disagreement on specific approaches.
## Key Participants and Perspectives
### UN Leadership Perspective
Li Junhua, UN Under-Secretary-General for Economic and Social Affairs, opened the discussion by highlighting critical statistics: over 77 percent of young people aged 15 to 24 use the Internet, and a vast majority, more than 80 percent, are active on social media. However, over one-third of young people in 30 countries reported being cyberbullied. He set the philosophical foundation by asking: “How do we protect the young people online without compromising their rights or limiting their freedom to participate fully in the digital world?” He emphasised that this discussion occurs during the 20-year review of the World Summit on the Information Society (WSIS Plus 20) and noted the building momentum across regional IGFs.
### Government Position
Australian Ambassador Brendan Dowling presented his country’s legislative approach, explaining that the new law—passed late last year with implementation later in 2025—requires platforms to take “reasonable steps” to restrict access for users under 16. He emphasised that the legislation targets companies rather than directly banning children, arguing that social media platforms were not originally designed for children and that documented evidence of harm to mental health, attention spans, and overall well-being warranted government intervention. Dowling acknowledged ongoing consultation processes to determine implementation details and the challenges faced by parents, particularly in developing countries where connectivity often reaches communities before digital literacy.
### Platform Industry Perspective
Martin Ruby from Meta offered the company’s viewpoint, advocating for device-level age verification through operating systems (Android and iOS) during phone setup rather than app-by-app verification. He explained that teenagers use “more than 40 different apps” and that controlling access at the device level would be more efficient and comprehensive. Ruby highlighted Meta’s development of over 50 safety tools, including teen accounts that place users aged 13-18 into different categories with age-appropriate protections, where younger teens cannot remove protections without parental permission. He mentioned using YOTI as a third-party service for age verification and AI detection when users change their age. Notably, he stated that Meta actively wants regulation on age verification, describing this as rare for a platform company.
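To make the teen-account bucketing concrete, here is a minimal sketch of the kind of logic Ruby describes. It is illustrative only, not Meta's implementation: the setting names and the 13-15 versus 16-17 split are assumptions drawn from his description that younger teens cannot relax protections without a parent.

```python
# Illustrative sketch (not Meta's actual code) of age-bucketed teen accounts.
from dataclasses import dataclass

MIN_PLATFORM_AGE = 13
ADULT_AGE = 18

@dataclass
class SafetySettings:
    sensitive_content_filter: bool       # stricter content limits
    contacts_limited_to_followers: bool  # limits who can reach the teen
    can_self_modify: bool                # False => parental approval required

def teen_safety_defaults(age: int) -> SafetySettings:
    """Map a verified age onto age-appropriate default protections."""
    if age < MIN_PLATFORM_AGE:
        raise ValueError("below the platform's minimum age")
    if age >= ADULT_AGE:
        return SafetySettings(False, False, True)
    # Hypothetical split: younger teens (13-15) get locked defaults that
    # only a parent can relax; older teens (16-17) may adjust them alone.
    younger_teen = age < 16
    return SafetySettings(
        sensitive_content_filter=True,
        contacts_limited_to_followers=True,
        can_self_modify=not younger_teen,
    )
```

The real systems involve many more signals (Ruby mentions photo checks via the third-party service YOTI and AI-based detection when users change their stated age), but the bucketing principle is the same: the protections a user receives are a function of the age the platform believes is correct.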
### Youth Representatives’ Views
Youth advocates Amina Ramallan, deputy manager at the Nigerian Communications Commission, and Laura Rego from the Federal University of Pará and research fellow with the National Council for Scientific and Technological Development, emphasised the critical need for multi-stakeholder approaches that meaningfully include young people’s voices in policy development. They highlighted unique challenges in developing countries where children often access social media through shared devices, making traditional age verification more complex. Laura Rego provided a specific example from Brazil, where the National Authority of Data Protection required TikTok to implement age verification after content could be accessed via direct links without age checks. Both stressed that social media provides significant benefits including creativity, innovation, job creation, peer connection, and civic engagement opportunities that must be balanced against potential harms.
## Major Areas of Discussion
### Age Verification Implementation Approaches
The discussion revealed significant disagreement about how age verification should be implemented. Martin Ruby advocated for device-level verification during phone setup, arguing this would create more binding restrictions than app-by-app approaches. He explained that this method would be more comprehensive and harder to circumvent, as it would establish age parameters at the operating system level rather than relying on individual platform verification.
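A rough sketch may help make the device-level model concrete. Everything below is hypothetical (the API shape, app names, and minimum-age ratings are invented for illustration): the age entered once at device setup becomes binding, each app carries a declared minimum age, and an install above that age succeeds only with an authenticated parental exception.

```python
# Hypothetical sketch of an OS-level age gate for app installs.
APP_MIN_AGES = {"instagram": 13, "snapchat": 13, "drawing_for_kids": 4}  # invented ratings

class DeviceProfile:
    def __init__(self, user_age: int):
        self.user_age = user_age              # entered once at phone setup
        self.parental_exceptions: set[str] = set()

    def grant_exception(self, app: str, parent_authenticated: bool) -> None:
        """A parent may override the gate for a specific app."""
        if parent_authenticated:
            self.parental_exceptions.add(app)

    def can_install(self, app: str) -> bool:
        min_age = APP_MIN_AGES.get(app, 18)   # unknown apps: be conservative
        return self.user_age >= min_age or app in self.parental_exceptions

# A 12-year-old's profile blocks the install until a parent grants an exception.
profile = DeviceProfile(user_age=12)
assert not profile.can_install("instagram")
profile.grant_exception("instagram", parent_authenticated=True)
assert profile.can_install("instagram")
```

On this model, circumvention shifts from per-app checks to the device’s one-time setup, which is exactly why the shared-device objection raised below matters.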
Brendan Dowling supported Australia’s flexible approach, which requires platforms to take “reasonable steps” to restrict access for under-16s whilst allowing flexibility in implementation methods. He emphasised that the legislation deliberately avoids prescriptive requirements, recognising that technology and methods will evolve, with ongoing consultation processes to determine specific implementation details.
However, Amina Ramallan pointed out critical limitations of device-level solutions in developing countries, where children frequently use adults’ phones to access social media. This reality makes device-level verification less effective and highlights the need for context-sensitive approaches that consider different technological and economic circumstances globally.
### Platform Responsibility Versus User Restrictions
A fundamental tension emerged between those advocating for platform regulation and those supporting user access restrictions. Several audience members, including Wouter from the Netherlands, made powerful analogies comparing social media platforms to cigarettes, arguing that the focus should be on removing addictive design elements (the “nicotine”) rather than restricting access. This perspective emphasised regulating platform business models and design features that may be inherently harmful.
Louvo Gray from the South African Youth IGF challenged the entire premise of age verification, asking: “Shouldn’t we be focusing less on verifying the age of the users and more about verifying the responsibility and the accountability of platforms profiting from young people’s data and attention?” This question fundamentally reframed the debate from individual protection to systemic accountability.
Amina Ramallan provided concrete recommendations including safety by design, rights-based design and regulation, human rights impact assessment in product design, turning off addictive features like unlimited scrolling, stronger enforcement, global cooperation, and digital literacy education.
### Benefits Versus Harms of Social Media for Youth
The discussion revealed sharp disagreements about the balance between social media benefits and harms for young people. Larry Magid, CEO of ConnectSafely (who disclosed support from Meta, Amazon, Google, and OpenAI), emphasised that platforms offer lifelines for marginalised youth, including LGBTQ children in unsupportive communities, potentially saving lives. He argued that blanket bans might disconnect youth from critical support networks and educational resources.
Aditya Majumdar from the Dynamic Teen Coalition reinforced this perspective, warning that restrictive approaches could cut off beneficial uses and support systems that young people rely on. Both speakers highlighted the risk that well-intentioned protective measures might cause more harm than good for vulnerable youth populations.
Conversely, Brendan Dowling focused on documented harms, arguing that social media platforms demonstrate clear negative impacts on attention spans, mental health, and development. He maintained that these platforms were not designed for children and that the evidence of harm justifies restrictive measures.
Martin Ruby acknowledged that the vast majority of social media content is normal and beneficial, with problematic content representing a small percentage, whilst recognising that youth voices are often lost in policy debates and that their perspectives differ significantly from adult assumptions.
### Digital Literacy and Global Perspectives
Participants unanimously agreed that digital literacy and ongoing education are fundamental to protecting children online. Vivek Silwal highlighted specific challenges in developing nations, noting the lack of digital literacy curricula and multilingual accessibility of terms and conditions, making it difficult for users to understand platform policies.
Amina Ramallan stressed that awareness campaigns must scale up and evolve rapidly to keep pace with technological developments, and provided detailed alternatives to social media for children including educational apps, creative platforms, coding, storytelling, music, art, podcasts, audiobooks, and outdoor activities. She also explained the concept of “sharenting,” where parents upload details of their children online without the child’s knowledge or consent.
Laura Rego advocated for multi-stakeholder educational approaches that include clear guidelines showing what different actors should do, rather than placing educational responsibility solely on schools or parents.
## Areas of Consensus and Disagreement
### Strong Consensus Areas
All speakers agreed on several fundamental principles: the need for multi-stakeholder approaches involving platforms, governments, parents, schools, and civil society; meaningful youth participation in policy-making processes following the principle “nothing about us without us”; recognition that age verification presents significant technical challenges requiring flexible solutions; and the essential importance of digital literacy education that evolves with technological developments.
### Key Disagreements
Fundamental disagreements emerged about implementation methods, with device-level versus platform-based approaches creating particular tension when considering developing country contexts. Sharp disagreements arose about the scope of restrictions, with some defending under-16 bans citing documented harms whilst others emphasised life-saving benefits for marginalised youth. The regulatory focus remained contentious, with ongoing tension between platform regulation advocates and user access restriction supporters, and significant disagreement about whether social media’s benefits or harms predominate for young users.
## Thought-Provoking Moments and Insights
Several comments fundamentally shaped the discussion’s trajectory: Li Junhua’s opening question about protecting youth without compromising their rights established the philosophical framework; Martin Ruby’s observation about the significant difference between 8-year-olds and 15-year-olds introduced crucial nuance to age-based restrictions; Wouter’s “take the nicotine out of the platform” analogy reframed the debate toward platform design responsibility; and Louvo Gray’s challenge to focus on platform accountability rather than user age verification shifted attention to systemic issues.
## Unresolved Issues and Future Challenges
Critical issues requiring further attention include: effective implementation of age verification in developing countries where children primarily use adults’ devices; balancing platform regulation with user access whilst addressing potentially harmful economic models; developing mechanisms for meaningful youth participation beyond consultation; providing digital literacy education in communities where parents lack digital experience; and addressing legal liability questions for platforms whose design may cause harm to users.
## Conclusion and Path Forward
The discussion revealed both the complexity of protecting young people in digital spaces and the potential for collaborative solutions. The strong consensus on fundamental principles—particularly multi-stakeholder approaches and meaningful youth participation—provides a foundation for future policy development, even as significant disagreements remain about implementation.
The consistent push-back against top-down, one-size-fits-all approaches suggests that effective solutions will require nuanced, context-sensitive policies that centre youth voices and address root causes rather than symptoms. As Li Junhua emphasised, the message from the youth will be an official output of this summit and among the IGF 2025 outputs, ensuring these perspectives contribute to ongoing global discussions about internet governance and youth protection.
The challenge moving forward will be translating shared principles into practical, implementable policies that can adapt to rapidly evolving technology whilst maintaining focus on creating digital spaces where young people can thrive safely and with dignity.
Session transcript
Lynn St. Amour: I have to thank the government of Norway for such soaring music. I’ve been in every one of these sessions this morning, and it really is uplifting and energizing. So I’d like to welcome everybody to the IGF 2025 Global Youth Summit. This summit is a key component of the IGF 2025 Youth Track and part of the IGF 2025 High-Level Leaders Track. It’s organized under the motto Young Leaders for Multi-Stakeholder Governance of Digital Tech, and it’s part of a broader series of the Youth Track’s activities here at the IGF. The summit was developed through bottom-up consultations among designated representatives of the IGF’s host country, this year Norway obviously, of the UN IGF Secretariat, from various youth IGFs, and other youth-driven global internet governance initiatives such as the Internet Society’s Youth Ambassadors Program. The summit serves as a multi-stakeholder intergenerational panel between the current and next generation of experts and leaders. I’m going to introduce my co-moderator, Jasmine Ko, who’s from the Hong Kong Youth IGF. She will be leading the second part of the program, which is an open discussion with people here in the room and of course online, and I’ll introduce the panelists just as soon as we get the program underway fully. Jasmine, a few words.
Jasmine Ko: Thank you very much, Lynn. So this is Jasmine Ko from Hong Kong, the Youth IGF National Coordinator. So adding a little bit more context to this Global Youth Summit: it’s an effort by many of our youth colleagues, several months of effort, to co-create these sessions. A little bit more: we are creating this topic because we noticed that digital platforms have become very integral to our daily life. Therefore, especially for young people, ensuring a safe and age-appropriate online experience has emerged as a global priority. And governments around the world are exploring and developing regulatory frameworks aimed at protecting youth from harmful content while striving to balance safety, security, and the protection of human rights and freedoms. So recent legislative efforts are increasingly focused on age verification requirements for social media and adult content platforms. These laws could mandate digital platforms to implement robust age verification systems designed to prevent users under a certain age from accessing inappropriate or potentially harmful content. So after this very brief introduction of what will be covered in this session, I would like to give the floor to Lynn to introduce a very special guest. So Lynn, please.
Lynn St. Amour: Thank you. It was just pointed out that I forgot to introduce myself. My name is Lynn St. Amour. I served as the IGF MAG Chair from 2016 through 2019, and I was previously, for 14 years, the President and CEO of the Internet Society. It is now my great pleasure, though, to welcome the Under-Secretary-General for Economic and Social Affairs, Mr. Li Junhua, for a welcome.
Li Junhua: Thank you. Thank you, Lynn, for inviting me. Well, dear young people from around the world, all distinguished participants, dear friends, good afternoon. Welcome, all of you, to the IGF 2025 Global Youth Summit. Just a month ago in Riyadh, I vividly recall the IGF Youth Summit brought global attention to the importance of AI education. Today, with your bottom-up consultation, you’ve actually rightly set the tone by focusing on another issue that defines your generation, and that is the regulation of social media. Social media is integral to our lives, so much so that most of you have never known a world without it. Globally speaking, over 77 percent of young people aged 15 to 24 use the Internet, and a vast majority, more than 80 percent, are active on social media. While these platforms offer incredible opportunities to all of us, they were not necessarily designed with your safety as a primary issue. The risks are real and pervasive. Over one-third of young people in 30 countries reported being cyberbullied. Their data are harvested for inappropriate targeted advertising, and the impact of excessive screen time on mental and physical health is a growing concern, even a growing crisis. In response, many governments are turning to solutions like age verification laws to protect the youth, particularly minors. But this actually raises a critical question. How do we protect young people online without compromising their rights or limiting their freedom to participate fully in the digital world? I believe that the answer is within this room. Meaningful protection cannot be imposed from the top down. It must be co-created and co-managed with young people’s engagement. Placing youth at the center of policymaking is not just good practice, it is a moral imperative. Meaningful youth engagement is necessary to ensure digital environments are safe, grounded in respect for young people’s rights, and supportive of their healthy development. The voices of the youth are essential to shaping policies that reflect their real experiences and needs. I’m very much encouraged to see senior government leaders, representatives from the major social media platforms, key regulators, and other stakeholders here today, and those who joined the WSIS Plus 20 review, and particularly gathered in this IGF 2025. Your presence signals a vital willingness to listen, and more importantly, to act on what you have heard. The goal is very clear: to have smart, rights-respecting regulations that keep young people safe, secure, and protected, while empowering them to freely explore, express, and grow. This summit is a critical stop in this global journey. The youth IGF track is building momentum from across the globe, from the European IGF in France and the African IGF in Tanzania, to the future gatherings at the Asia-Pacific IGF in Nepal, and the Latin American and Caribbean IGF. This journey is about building momentum and amplifying youth leadership in internet governance. This global engagement is especially timely, as this year marks the 20-year review of the World Summit on the Information Society, WSIS Plus 20. This is a pivotal opportunity to redefine digital governance for the next generation. I sincerely urge all of you to actively share your insights from this track into the WSIS Plus 20 process.
You are not just a part of the discussion, you are essential to designing a safer, more equitable digital future, reflecting the diversity of the practices, challenges, and aspirations. I sincerely wish you a very productive and fruitful discussion. Thank you.
Lynn St. Amour: Thank you. I’d like to introduce our panelists, and again, the first part of this is going to be some prepared remarks from them, specific to some questions which have been posed. And then we’ll have an open session, the second part of the program. So first, I’d like to introduce Ms. Amina Ramallan, who’s a deputy manager at the Nigerian Communications Commission, and she is a youth representative. Second, Ms. Laura Rego from the Brazilian IGF. She’s a student at the Federal University of Pará and a research fellow with the National Council for Scientific and Technological Development, and also a youth representative. And we’re going to hear from the two of them first, as we believe that that’s really important in terms of setting the stage for the rest of our interventions. Next is Mr. Martin Ruby. He’s Director for Public Policy for the Nordic Countries at Meta. And we will be joined very shortly by Mr. Brendan Dowling, who’s the Ambassador for Cyber Affairs and Critical Technology, Department of Foreign Affairs and Trade, Government of Australia. Like so many of us here this week, he was double-booked in this session, so he’s making his remarks in another room and will come here immediately after. He should be here any minute now. So with that, I’d like to invite Amina, your remarks.
Amina Ramallan: Good afternoon, everyone. So today, I’ll be talking about the role of social media in the lives of young people and whether platform policies align with the fundamental human rights and freedoms of young people. So I’ll start by taking us back to the first remark that was made, which was that 77% of young people globally use the internet, and one third of young people in 30 countries have reported being either cyberbullied or subjected to some other form of cybercrime. So social media today is deeply interwoven in the lives of young people, from their daily activities to being a platform for creativity, a platform for innovation, job creation, peer connection, and even civic engagement. We’ve seen cases of young people championing governance, championing advocacy within their regions, using social media, and succeeding at it. So young people remain dynamic, and they remain at the forefront of social media. Now, while we talk about these benefits that social media provides for young people while it’s interwoven with their life, they remain exposed to certain risks. So we also have data privacy. Now, it may not seem like it, it may seem like, OK, maybe someone is just playing a game, right? It might just be an innocent game. But young people remain at risk of their data being harvested, of their data being used without informed consent. We also have exposure to harmful content. Now, for instance, these young people might not even be the ones putting their data out there. It might be a family member. It might be a parent. There is a term today online called sharenting. Now, what that means is when a parent is out there uploading details of maybe their child or ward, so that exposes the child to their data being out there without them even knowing. So it goes beyond not giving consent to the platforms, but also not giving consent to the parent. Now, to the aspect of whether social media platform policies currently align with fundamental human rights and freedoms: I’ll say while social media gives young people the avenue to express themselves, the avenue to be dynamic, there are a lot of policies out there that need work, because these policies are not evolving as fast as social media is evolving. For instance, social media platforms create some sort of, it might be like a UI effect, it might be like an algorithm effect, it creates a dopamine effect. For instance, you have unlimited scrolling. Now, in the case of unlimited scrolling, that is how the policy of the platform is, right? But then kids are at the receiving end because they become addicted. They scroll and scroll and scroll. You also have the case of the way the algorithm works, targeting ads at children. As much as there are policies out there that say, oh, if an account is registered to a child, maybe we will not present this ad to the child, we still see child accounts receiving targeted ads. Also, another thing is that as young people are at the forefront of social media, they are also at risk of not having their rights upheld. So, for instance, while the UN Convention on the Rights of the Child protects them from harmful content, we see that compliance across regions, compliance across the world, is lacking in some places.
So, we also see the risk of being in contact with malicious users. For instance, while social media remains the platform for young people to have peer connection and community connection, that also puts them at risk of coming in contact with malicious users. So, this is a very important point. We also see that there are a lot of policies that are not transparent enough. So, for example, platforms have policies on protecting the right of expression, but they often struggle in practice to balance these rights. We see issues of enforcement. We see issues of transparency. A lot of times, these policies are not transparent enough, and sometimes harmonizing those policies for the good of the child remains an issue. I’ll just round up by saying that one of my recommendations is that social media platforms should integrate safety by design. They should ensure that rights-based design and regulation is their priority, integrating human rights impact assessments in product design, especially for youth features. For instance, turning off addictive features, you know, like unlimited scrolling and likes. Regulators also should mandate age-appropriate design. There should also be stronger enforcement and global cooperation. And then digital literacy. Awareness cannot be overstated. There isn’t anything like too much awareness. It might seem like every day we sit down and talk about the same thing, but it’s a lot out there. The only thing is to scale up the awareness. Maybe 10 years ago, we were talking about, oh, how do you come on social media? But today, we’re talking about awareness on how to identify a deepfake, if that is even possible, how to know the difference between what is AI and what is real, right? And then to wrap up, I will say platforms should also consider, if an account being opened is a child account, there should be some sort of survey, or I don’t want to call it a course, but almost like a questionnaire, to ensure that the person opening the account understands what they’re opening the account for, understands the risks that they might be exposed to, and also tells them, okay, if you come across this risk, this is what you need to do, this is how you report it, and this is how you get it out of your feed. Thank you.
Lynn St. Amour: Thank you for your very pertinent comments, Amina. Next, we’re going to hear from Ms. Laura Rego. Specifically, we’re interested in, from a youth’s perspective, what do you think is a good way to make sure that the age checks that are being proposed on some social media really work, and that they help keep young people safe from harmful content?
Laura Rego: Good afternoon. I would like to greet all of those present and express my gratitude for the invitation to be here. I’m Laura Rego. I stand here as a fellow member of the youth program from CGI.br. I’m also a young woman from the Amazon region, from the city of Belém in Pará. I’m a law student, and I’m very worried about the online safety of the youth community. As a matter of fact, age verification is one of the various important steps in trying to create a healthy environment for children and teenagers in the digital space, although it’s not enough by itself. There are newer solutions for tracking users’ age more accurately, like asking for the birth certificate or using facial biometry for the age check, and here I open a parenthesis to speak a little about a success case we had in Brazil, coordinated by the National Authority of Data Protection on a request made by Instituto Alana: we had a problem with TikTok. Why? Because you could access any of their content just by the link, and you could see anything, any video, without checking the age of the person who accessed this content. And the national data protection authority got TikTok to ask for the account of the person accessing the content, so they could do the age verification, and it was very good for the children. Well, protecting the young from harmful content goes by building stronger policies in rating content, and also involving a multi-stakeholder network of shared responsibility. Talking about Brazilian law, the Statute of Children and Teens assures us, as a principle, that the best interest of minors and their integral protection must be guaranteed by the whole society, with no space for exclusive responsibilization of the parents. This means the final user and his well-being should be protected, regardless, by the companies that provide the service. From a youth perspective, especially from a country where 83% of children between 9 and 17 already have their own account on social media, as shown by the TIC Kids research conducted by CETIC.br, unilateral solutions have been shown to be flawed, urging participation of those most affected, the children and the teens, which evokes the motto: nothing about us without us. That means that to better comprehend the necessities and the pains of these groups, they should be called to the spaces of discussion, qualified and heard. In that way, the solutions could bring safety while engaging our next generations of leaders. Thank you.
Lynn St. Amour: Thank you, Laura. I’d now like to introduce Mr. Brendan Dowling. As I mentioned before, he was double-booked for this session, but he has arrived just in time. He’s the Ambassador for Cyber Affairs and Critical Technology in the Department of Foreign Affairs and Trade for the Government of Australia. Specifically, Brendan, we’d like you to talk to us about the Australian Parliament’s recently endorsed age verification for social media use. It’s been a few months now. How do you see the impact of the decision thus far, and what are any challenges or successes you might have noted?
Brendan Dowling: Thank you, and apologies for being late. Late last year, the Australian Government announced that we would be working towards restrictions on the use of social media by people under 16. The implementation of that initiative will happen in December 2025, so later this year. The current development of the scheme is underway and going through a consultation process. What drove this measure is a sense that social media platforms were not created for children. They were not created generally with safety in mind. In fact, it’s been years and years of pressure applied from governments and civil society to have social media companies actually take the safety of their users seriously, and a growing concern that platforms not designed for use by children are still being used by children, that current mechanisms for age assurance adopted by social media platforms are ineffective, often not developed particularly well, and the way that the platforms are designed to be exploitative of people’s attention, to share material that is harmful, that for all the years of concerns that have been raised by the community, the response of social media companies has been lacklustre at best, disingenuous at worst. So the measure is driven by a sense of frustration at the level of inactivity from social media platforms and the feeling that the community in Australia expects government to do something to better protect its children. Of course, what you do is the really difficult question that we and many governments are grappling with. What we’re looking for is a solution that is flexible, that is designed based on consultation with the community, including with people who are under 16 years old, that is not perfect. So let’s not pretend there is any age verification solution that is going to be a perfect measure to be all things to all people. Part of the aim here is it is a normative shift to create in the culture and in the community an understanding that these services are not appropriate for kids who are under 16 years old. It is also to create some friction in the system, to say it should not be easy for you to circumvent whatever measures are in place on social media platforms to prevent use by children. It’s part of a broader suite of measures. For years many of you would know the Australian government has been very active in requiring platforms to adopt online safety mechanisms, requiring by law measures to limit the spread of harmful content. But the end result is we do think there needs to be some sort of mechanism. What we’ll be doing is requiring platforms to take greater effort to restrict the use of their platforms by children. So this requirement is not a ban that applies to children or their parents, it applies to the companies. The wording of the legislation says that companies should, now I’m going to forget the exact wording, but essentially should take best efforts to adopt age verification measures. This means that there will be flexibility in the measures that they use. There won’t be the prescription of one single tool that should be used. It means that there can be flexibility in the arrangements. We’re currently going through a process of working with industry on what mechanisms can be used. There will be expectations about respecting rights and privacy through the use of these mechanisms. The government has already said that we will not be using a digital ID process, we’ll be looking at alternative ways to verify age. 
So the whole aim here is design a system that’s flexible, that’s consulted with industry, but that ultimately does give effect to the promise to say that children should not be using social media platforms because of the demonstrable harms that they have been facing. The actual implementation of the measure, as I said, will come later this year, so there are more details to be figured out before December 2025, but that’s the pathway we’re on at the moment.
Lynn St. Amour: Thank you. And now to wrap up our panelists, with quite a big lift in front of them, is Mr. Martin Ruby. Again, he’s the Director for Public Policy for the Nordic Countries at Meta, and specifically we’d like to know how you believe all stakeholders, particularly social media companies, can work together to make sure that age checks on social media are effective everywhere, so that all young people across the world can protect themselves.
Martin Ruby: Thank you, and thank you for having me. I hope the sound is okay here, otherwise flag it somehow. Well, I already agree with a lot that has been said here, not everything, but a lot that has been said, but we can discuss it over the next hour or so. And to your question on the responsibility, who has the responsibility, I think there is only the classic, pretty boring answer to that, which is that there are a lot of stakeholders that need to be on board in order to solve this one. Age verification online is super difficult, and I agree with the Ambassador in what he said, that there is no perfect solution out there, and if there was, I think we would have been forced to put it to use many years ago. So it is a very difficult one, but I’ll spend my couple of minutes here on saying a little bit about our approach at Meta. We have a two-step approach, I would say. One of them is, of course, trying to keep kids safe online from the content they meet on our platforms, which means that we have developed over the years many, many different tools. We are learning every day, every week, every month, we are learning what the trends are, what the youth are doing, what new technologies are there that we have to work with. That means we have more than 50 tools now that are put in place to increase the safety for kids online. And then the biggest one maybe is the newest one, the teen accounts that we rolled out gradually over the world on Instagram first, and then later on Facebook. Teen accounts means that everyone between the ages of 13 and 18 is put into different buckets depending on their age, and that means that there are limits to what they can see, there are limits to who can contact them, there are a lot of different protections around them. And the youngest of those teens cannot remove those without their parents doing it. The bigger ones, they have more freedom to change the settings. But anyway, we’re definitely trying hard. Someone said something earlier indicating a little bit that we have an interest in having bad, harmful content on our platforms. We definitely do not have any interest in that. And we want to protect the kids online, of course. Secondly, the next big thing here is the age verification. Because in order to be able to do the teen accounts, as I just mentioned, putting the teenagers into the right buckets so they are protected well, we need to have good, solid age verification. And that is just super, super difficult. It would be easy if there were no privacy concerns, because then we could just ask people to send us a lot of personal data, and we could easily figure out how old they are. But I don’t think there is a big interest in the world in sending Meta or other big platforms a lot of extensive personal data. I think there is a general opinion that we have enough. So we need to find the right balance between privacy and safety here. And that makes it complicated. Then I think someone like us, a big machinery, we have a pretty good system in place. Not perfect, of course. We can discuss that. But someone like us and Google and some of the other big ones have big machineries. We have big muscles. We have good age verification systems in place. But again, across the internet, across all services, also ours, there is a need for a better one. The model we actually suggest involves two things. We actually want to see regulation on this.
It’s very rare that you hear something like that from a player like us. But we actually want to see regulation on age verification coming our way. We’re pushing it very actively in the EU where I’m working. And secondly, we are saying that the model we suggest to actually make it work efficiently would be to do it on the device level. That means in the app store. So that means on this one. Instead of going app by app by app by app, our numbers show that teenagers are using more than 40 different apps. Looking at my own two girls at home, I think that’s right. We think it’s a good idea to do it on this one, because basically there are only two operating systems in the world, Android and iOS. Put it there. So that means, in practical terms, when I set up this phone with my daughter in Denmark where I live, Save the Children says that children are between eight and nine years old when they get their first smartphone. Eight or nine years old. That’s probably true. And an 8-year-old doesn’t go down and buy their own smartphone. They get it from their parent. It could be a used one, or it could be a new one. So that means a parent is sitting there next to the kid setting up the phone. And they put in the age. And our proposal is just that it has to be binding by law that when that age is in, that kid can only download stuff that fits that age, unless the parent makes an exception to it. When they are like 14, you can say, OK, it’s OK. But that means my daughter, when she was like 12, could not download Instagram or Snapchat or whatever. So the technology is there. We find it to be a golden moment to do it like that. We can discuss it more in a minute. I don’t want to spend more time. But it’s a very easy way to do it, we feel. And we feel that it could cut across all countries in Europe, where I work right now, where we discuss it most actively, I think, also in the US. But it cuts across a lot of countries. It cuts across all apps in an efficient way. I’ll stop there.
Lynn St. Amour: Thank you. There are so many things I personally would like to follow up on with all the panelists. But Jasmine and I are purposely trying to keep our moderation very light. This next section is the time for both you here in the room and online participants to ask your questions, either of the panelists or start a discussion amongst yourself if you want to follow on from something some other individual has said. There are mics at two ends of the room and someone who’s kindly volunteered to be a roving mic. So I’ll turn the floor over to Jasmine now.
Jasmine Ko: Thank you very much, Lynn. So please be reminded, each speaker, you could have up to two minutes for your intervention. You can comment. You can also ask questions to the panelists, et cetera. So if you’re ready, please feel free to use the mics on the two sides of the room. And we will also be monitoring the Zoom room in case any online participants would like to pose a question as well. So perhaps we could start with that gentleman first on the left.
Larry Magid: Thank you. My name is Larry Magid. I’m the CEO of Connect Safely. We’re a US-based NGO. And full disclosure, we do have support from Meta, Amazon, Google, OpenAI, and many of the companies that are involved in this. I guess this question is to anybody, but specifically the gentleman from Australia. There is no doubt that social media, like bicycles and basketballs and just about anything in the world, can harm people. There are dangers. But there’s also no doubt, based on research, that there have been many benefits to young people from social media. In fact, there are children alive today that might not have been alive had it not been for social support systems that they were able to get on social media when they reached out, when they were in some kind of crisis. There are children and adults, but children, LGBTQ children, for example, in communities where they’re ostracized. And they’ve been able to literally get a lifeline through social media. And there are many examples where mental health has actually improved as a result of social media. So while I agree with you that just like bicycle riding, it’s very important to have education, wear helmets, obey all the rules of traffic, but we don’t ban people under 16 from riding bicycles, even though more people have died in bicycle accidents than have died from social media. So I’m just curious, do you have any concerns about the harm you may be doing as well as the benefit? I’m not suggesting that there is no benefit. But the harm you may be doing by keeping some young people from social media that could actually literally save or at least improve their lives and their mental health.
Brendan Dowling: So I think all public policy is about balancing risks and benefits. I do think that comparing social media to bicycles is a fair stretch. I think bicycles are not designed to exploit the attention and to target marketing towards children. I’m sure there are situations where social media platforms have provided those types of benefits that you talked about. However, I think across the board, the research and evidence pointing to the exploitative and predatory practices of social media platforms that do target children, the harms that have been documented over years and years in broad-reaching ways of attention spans, of cyberbullying, of access to harmful content, when we look at the balance of those measures, my view is that those platforms are not designed for children. They’re not designed to benefit the health and well-being of children. And in fact, the harms are very well documented now. So on balance, I’m not saying I disagree with your point. I think it’s a really valid point. And this is something that we do see coming up in the consultation process that we’re going through. We do hear the range of perspectives. But on balance, the harms that we see in a very widespread way throughout community, warrant this type of action from government public policy makers.
Jasmine Ko: OK. So maybe we can have the next. Thank you very much. We can have the next speaker taking the mic, please.
Vivek Silwal: Hi, everyone. This is Vivek Silwal from Nepal. So my perspective would be from the developing nations. So the youth and children from developing nations aren’t aware of their rights, whether there is exploitation on the platform, whether they are being abused or not. In developed countries, somehow in courses in school, they are taught, OK, these are the dos and don’ts on the platforms, these need to be done. But in the case of developing nations, there aren’t any curriculums on what things you can do on the platforms, these are the good things, these are the bad things. So my question is to Martin from Meta. So how is this balanced for the youths, especially the underage, who get access to the platform? Is a simple date of birth enough? Or should there be some AI platforms, like face recognition, or some other way that this can be balanced? And also, there is the limitation of being multilingual. Because when you go back to the country, are they able to read the terms and conditions they are going through in their own language? The content that is posted is in their language. But are the terms and conditions and all those other things multilingual? Yeah, thank you.
Martin Ruby: Thank you. Just come back if I don’t reply to it all. But the sound is not perfect. I think on your question on the age verification, how to do it, use AI, local language, whatever, I think, again, our idea is to do it on the device level. Today, we do use AI, in the sense that if young users, for instance, a 13-year-old, suddenly change their age and say they’re 23 instead, we actually ask them to send a picture, which goes to the third party we use, YOTI, and we also use other kinds of technologies to try to see if we can detect the age properly. But going forward, our main proposal is that we should do it when you set up the phone, because that is the entry point in by far the most instances. So when you set up the phone, that is when we think you put in the age, and when you’ve done that, it should be binding from there on. And then the parent is in control of what you download, and then they can make exceptions, et cetera, for the older teens, not the youngest ones. And I think that’s probably what we find to be the easiest way forward. I don’t know if that answered your question, otherwise just – yeah, sorry.
Amina Ramallan: I’ll just – first, if you put on the headphones, the sound is clearer. But just to add to what the speaker was saying – sorry, I didn’t get your name – especially for the developing countries, right – I come from a developing country – the percentage of eight- to nine-year-olds owning a smartphone is very, very low. So a lot of times you find that children use a phone that belongs to a sibling or a parent: say, okay, maybe I use your phone, I download an app, I download social media. Sometimes the owner of the phone, the parent, might not even know that the kid has downloaded that app on their phone and is using social media on it. So yes, safety by design is good; yes, starting from the OEMs and making sure that once a phone is set up for a child, it cannot download certain apps. But beyond that, especially for developing countries where you don’t usually have young children owning their own smart device, I think the responsibility still falls back on the apps that they are downloading. So in the event that a child downloads these apps, either through a phone that was set up for an adult or belongs to their parents, what are the policies, what are the steps put in place to ensure that when that happens, this child is protected – because at that point the child is not using any form of facial verification on the app; the phone belongs to an adult, and automatically it downloads, it creates a profile. So I just wanted to add to that – I think that maybe makes your question fairer, right? Thank you.
Jasmine Ko: Okay, thanks, Martin and Amina. So we received one question from Zoom, but we will go to the on-site participants first, and then we will go back to Zoom. So please.
Audience: Hi, my name is Giovanna, I’m from Brazil. I’m actually from the same program that Laura participates in, the Brazilian Youth Program held by CGI.br. And I’m trying to look at the discussion by asking, age verification for what purpose? And I think here we are talking about age verification for the purpose of tailoring an experience for kids and teens. So I would like to ask all of you – actually, it’s a general question, and if you could all comment, I’d be really happy – how do you think this experience should change in terms of advertisement? Because I think we can all fairly agree that the way ads work online is part of the reason why social media can become so toxic for kids and teens. So how should this experience be different when it comes to online ads and advertisement?
Brendan Dowling: Thank you. Look, I’ll start with one comment. The measure that Australia is looking to put in place is not about shaping the user experience, it’s about saying children under 16 should not have a social media account. So we know that age verification can be used for that exact purpose, and in fact, I think a lot of technology companies have got quite sophisticated in the way that they can target ads and shape the user experience, which brings its own set of issues and risks. But for us, this measure is about restricting access to having a social media account, not changing the user experience.
Martin Ruby: Can I just comment on the ambassador's point here? As one of the previous speakers said, the problem with "under 16" is that there is a big difference between an 8-year-old and a 15-year-old. I have two daughters aged 13 and 16, and I would say that's a big difference. Removing the opportunity for a 15-year-old to communicate via social media with her friends and family, to follow her interests there, to explore things: that's a big thing to do, I must say. I think my daughter would definitely agree if she were sitting here. I'm glad we have a lot of youth in this room, because sometimes I feel young people's voices get lost in these debates, and the tone of what is being debated is always a bit fearful. When I talk to my 15-year-old, now 16-year-old, daughter about these things, I think her generation is just in a different place. I totally appreciate what you're saying: social media is not built for small kids, that's true, and an 8-year-old shouldn't be running around on social media. But telling a 15-year-old that they are not allowed on social media at all is a big thing, and I want to make that point, because I owe it to my daughter.
Jasmine Ko: Okay, thanks Ambassador and Martin, so now we move back to online. Aditya, you have a question, you can unmute yourself and speak.
Aditya Majumdar: Yeah, hello. I'm Aditya Majumdar with the Dynamic Teen Coalition, and a lot of my work has focused on tracking international social media bans and restrictions targeting teens. I've found that these measures, while often justified as protecting youth, have also been shown to frequently disconnect us from critical support networks and educational resources, especially marginalized teens; a gentleman brought this up previously as well. So my question is: how can the Australian government ensure youth safety without turning to blanket bans that silence our voices, and when will we finally have a meaningful seat at the table in shaping the digital policies that affect our lives daily? Thank you.
Brendan Dowling: Access to educational resources is not what this measure targets, and access to the online environment is not what this measure targets. It is very specifically aimed at social media accounts created by children under the age of 16. It does not affect access to YouTube, for instance, as a way to reach educational videos. The voice of youth is important, and that is part of the consultation process. I would say that in a lot of that consultation it is not only parents raising concerns about the impact of social media on well-being, on mental health, on stress levels, on sleep, on attention spans; these issues are being raised by kids as well. There are kids in Australian schools who are themselves initiating limits on access to phones during the school day. So I don't think this should be seen as an adults-versus-kids measure. This is about a general community sense that the documented harms to all those areas of brain development at that age warrant public policymakers stepping in and taking steps to address them. The online environment has become an important part of all of our lives; it is an important part of healthcare and of education. But social media, on balance, has far more demonstrable harms to the well-being of children, and that is why Australia has taken this step. That is not to the exclusion of people's voices or of the participation of youth, and it is not about silencing youth. It is about doing something drastic. It is a big step; I agree with my colleague from Meta that it's a big step. But it's a step the Australian government has felt compelled to take, because we don't believe that the protections put in place by social media companies for children's well-being have been adequate or sufficient. This is the role of government: to step in and protect the community interest.
Jasmine Ko: So, anyone else? Okay. Thank you, Ambassador. Since we also have questions from Zoom, we'll take two in-person interventions and then go back online. So please.
Audience: online. So please. My commentary is that social media is absolutely is not for children but it’s also bad for young adults even if they are 16 or 18 or even 20 because their frontal lobe hasn’t been developed yet and so they they don’t fully know how to use social media for good purposes and a good step to that would be is introducing some kind of education course to social media that would be obligatory so that every person that has a social media account on Facebook Instagram or tik-tok that they would receive every couple of months some kind of education course on how to use social media for good purposes and to teach the parents also to give them more information about what harm social media could give to their children and what are your thoughts on that?
Martin Ruby: I'm totally in favor of increased digital skills and digital literacy. I think it should be part of schooling, because social media, and the internet as such, is not going away. You can try to squeeze out a few apps, but then young people will move to other apps, to other places on the internet, and probably to darker places, I would say. So you have to be very careful when you go in and do these things. But I totally agree that there's no way around it: we have to teach our kids better what they're seeing, what they need, and how to behave on the internet and on social media. And adding to that, when we talk about who has the responsibility: we have a big responsibility, regulators have a responsibility, as I've suggested a couple of times, and parents do too. Me as a parent, I also have some responsibility. There is no tech company in the world that can fix this alone, and no regulator that can fix it on their own. If I don't care what my daughters are doing, it's going to be very difficult to fix it from a regulator or a tech platform. Everyone has to take responsibility for this thing to work. But digital skills and digital literacy are absolutely the basis; I agree totally on that.
Jasmine Ko: Thanks, Martin. Anyone want to add on?
Brendan Dowling: I'll just add one point. We are now seeing connectivity reach some of the most remote populations in the world; we do extensive work in the Pacific ensuring last-mile connectivity into remote villages. A real challenge we're seeing in communities in places like the Pacific is that connectivity, access to mobile devices, access to social media platforms, is reaching communities before digital literacy gets there. I agree this is a collective challenge, something we need to work on to build up digital literacy and an understanding of what appropriate use of social media platforms looks like. We have kids accessing devices and social media in communities where parents have had no actual lived experience of them, so it becomes very difficult for parents to work with their children on understanding the risks and potential harms of the social media environment. So there is a real challenge: with connectivity should come online safety and digital literacy. Collectively, from a government, industry, and civil society perspective, we haven't done a good enough job of that, and we're now trying to make up for that lost ground. These things should go hand in hand from the outset.
Jasmine Ko: Thank you, Ambassador. So, one more in-person intervention.
Audience: Hi, my name is Cosima. I work at UCL, in the Digital Speech Lab in London, and I was previously with the UK Safer Internet Centre for five years. I have a couple of questions, a little more on the digital literacy angle, but also on the flexibility of a blanket ban. Both Meta and the Australian government have taken different stances on how flexible we should be in allowing young people to use social media. If we want flexibility in the legislation, and not just one way of enforcing these bans, how do we ensure the legislation is strong enough? And if we don't want a ban, which I'm sure is closer to Meta's perspective, then what more specifically can we do on digital literacy, given that the platforms are the ones that facilitate the harms simply by being the place where they happen? Not that it's necessarily their fault, but it is where it happens. I'm looking for a bit more detail there, if that's okay.
Brendan Dowling: It's a really tricky question. When you design policy, figuring out how much flexibility is the right amount is always a balance, always a debate, always an argument involving all members of the policy community, so I don't think we will get things exactly right. The test we often use in Australian law, which I couldn't remember earlier, is "take reasonable steps". Now, what does reasonable mean in these situations? Often it comes down to a community test: do we think this is reasonable? It can then be challenged in the courts. This push and pull over the right level of flexibility, and whether industry players are abusing that flexibility, is something we see play out in a range of areas of public policy. As we've covered on this panel, this is a new area of policymaking for many governments around the world, and a new area of technology development. For me, that means more flexibility is the right approach at this moment. There is a range of options that companies are already looking to adopt or that are being developed; let's let those options proliferate and see what works best, and often we then get coalescence around the more effective tools. So I think starting flexible is better than starting prescriptive and trying to pull that back. As I said, companies will be expected to take reasonable steps, and they then have the flexibility to figure out the right approaches. We will embed the protection of privacy and digital rights in the rules, noting that this is a sliding scale and not everyone will be satisfied with where it lands, but we lean more flexible at this early stage.
Jasmine Ko: So now we'll take one question from online.
Audience: From Umut Pajaro, the question is: how do you think children and teens can be included in the design of policy for these social media platforms?
Amina Ramallan: I'll just start by saying that in 2020, when the pandemic hit, the ITU developed Child Online Protection guidelines for industry, for policymakers, for children, parents, and teachers. One of the recommendations was for member states to adopt, adapt, and localize them, to make them relatable for the children within their region. One of the things we did in Nigeria, even though it was the pandemic, everybody was at home, and people were not easily reachable, was find a way to virtually reach children and ask: what are your ideas? What do you think of the policy as it stands? What would you like to see done differently? What else should be included that currently isn't? So I will say there isn't just one set way to include children in policymaking; you just have to do it. Nations, organizations, platforms, and international corporations need to put children at the forefront, because young people are not just the leaders of tomorrow, they are leaders today; governance is becoming younger. They are on social media and on the internet no matter how we try to discourage them or paint a bad picture, because they do see the benefits, so they remain online. So just include them and make sure their voices are heard, because they are actually the majority users of these platforms, and these policies are made to protect them, to make sure their use of social media and the internet is straightforward and does not expose them to harm. Just include them, and make sure their voices are not lost in the conversation.
Laura Rego: I agree with you, and I think it's about occupying spaces, and about digital literacy too. We have to have this concern for inclusion now, so we can hear these voices, and when we hear them we can make these policies better by knowing what the problem really is. It's very easy to look and think, "I know what the problem is," but we don't have the mindset of a child or a teen who, in many scenarios, is among the first to experiment with this technology, while the technology experiments on them. We must hear them.
Jasmine Ko: Thank you, Amina and Laura. So, coming back to the queue, please.
Audience: Hi, my name is Amy. I've been a digital ambassador for children's online safety for about six years with the UK Safer Internet Centre as well, and I have a follow-up question for Laura and Amina, plus a question for Brendan. My follow-up question is: who do you think holds the most responsibility for making sure children are at the forefront of these discussions about social media and their use of it? And for Brendan: I know there is a lot of work still to go in the legislation you're rolling out, but have you given any thought to how you'll ensure that social media companies actually use the most effective verification method, rather than a method that works for them but isn't necessarily the most effective?
Laura Rego: Thank you. Well, I think it's a multi-stakeholder effort; we can't name one single sector that should include these children. Brazil, for example, released in 2025 a guide for children's safe use of the internet, and this guide takes a multi-stakeholder view: it points out what enterprises should do, what parents should do, what children should do, and what the government should do. It's not a closed answer, but we have some guidelines and some principles, and I think discussing them is the first step.
Amina Ramallan: Just to re-echo what she said: it's a multi-stakeholder approach, because where the responsibility of one stakeholder stops, the responsibility of the next begins. When I drop my child off at school, my responsibility to ensure they are safe online stops there, and the teachers pick it up. When the child goes out on an excursion somewhere, the responsibility of the school stops, and if it's, say, a government event, it becomes the government's. So it's a multi-stakeholder effort; no one stakeholder is more responsible for child online safety than the next.
Brendan Dowling: On using the most effective tool: that's a good question, and there are a couple of ways we'll be monitoring it. One is that we have a regulator, the eSafety Commissioner, which is responsible for regulating online safety issues. Part of these measures will be transparency around the use of tools: reporting from companies about how they're using the tools and how effective they are, which is something the eSafety Commissioner will take responsibility for. If we see that the most effective tools are not being used, there will be the ability to monitor that and to say these are not reasonable steps. Now, that's easier said than done; monitoring the effectiveness of these types of tools is not necessarily going to be easy, but we'll rely on the transparency of platforms to actually work with government on making this work. This is where I think community expectations are really important: as the social norm on kids' use of social media platforms shifts, there will also be community expectations about how platforms go about this, and, as always, the community holds platforms accountable for what they commit to doing, alongside the government regulator. Between those measures, I think we'll have pretty good visibility of what happens and how effective it is. A lot of eyes are on these measures; Australia is not the only country, but it is one of the early countries looking to adopt this, so I think a lot of attention will be paid to the tools that get used.
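As a rough illustration of the kind of transparency reporting described here, a regulator might ask platforms for a structured periodic report on their age-assurance measures and derive simple effectiveness metrics from it. The field names and figures below are invented for illustration; they do not reflect any actual eSafety Commissioner reporting template.

```python
from dataclasses import dataclass

# Hypothetical shape of a periodic age-assurance transparency report.
# Field names and figures are illustrative, not a real regulatory template.

@dataclass
class AgeAssuranceReport:
    platform: str
    period: str                    # e.g. "2026-Q1"
    methods_in_use: list[str]      # e.g. self-declaration, photo-based estimate
    underage_accounts_detected: int
    underage_accounts_removed: int

    def removal_rate(self) -> float:
        """Share of detected underage accounts that were actually actioned."""
        if self.underage_accounts_detected == 0:
            return 1.0
        return self.underage_accounts_removed / self.underage_accounts_detected

report = AgeAssuranceReport(
    platform="ExamplePlatform",
    period="2026-Q1",
    methods_in_use=["self-declaration", "photo-based estimate"],
    underage_accounts_detected=12_000,
    underage_accounts_removed=10_800,
)
print(f"{report.platform} removal rate: {report.removal_rate():.0%}")  # -> 90%
```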
Jasmine Ko: Thank you, Ambassador, and also Amina and Laura. So, back to the queue here.
Audience: Thank you very much. My name is Wouter, I'm from the Netherlands, and I want to share a different point of view on age verification, because I think it's really the wrong way forward. I would say it's a form of victim blaming, because it's not only the content that's damaging youth but also the platform itself; many of the things Amina described are not about the content but about the platform the content is hosted on. I think companies like Meta created the perfect cigarette, and now we're trying to take it away from our children. So I would ask the ambassador of Australia: please regulate the platforms, not the users. Do not take the cigarette away from our children; take the nicotine out of the platform. And I would add that there are alternatives to these addictive platforms, platforms based on public values and not profits, so let us not confuse innovation with progress, and choose platforms that actually contribute to human progress and keep youth and parents safe.
Brendan Dowling: I think that's a really important point. I think we will look back on this era, the last 15 to 20 years, as one we regret in how these technologies evolved and were used. Practices that were acceptable from social media platforms ten years ago will not age well when we look back in history. You use the cigarette example, and I think it's a really interesting analogy: the knowledge of the harms, and the failure of companies to actually do something to protect the well-being of users. Light has been shone on that, and I think it will continue to be a feature of how we understand the use of this technology. I would say that in our legislation the obligation is on companies; it's not about banning kids, it's about companies taking reasonable steps to restrict access. But your point is an interesting one that will continue to play out: as the harms of the way social media is used become more apparent, we might look back and say there are actually further measures that need to be adopted here.
Martin Ruby: I probably need to respond a little on that one, just to say that I actually agree with a lot of what the ambassador is saying. I think we will learn a lot from these years in terms of the technology, and we are maturing; some of the technologies that were used ten years ago will not look good five years from now, or even now. That's probably true. But we have billions of people using our platforms entirely voluntarily every day, because people like to use them; I like to use them. We're being a bit gloomy in here today, looking at the bad things, which is totally fair, but something like 99.99 percent of the content on our platforms, and probably on our competitors' platforms too, is totally normal: boring, funny, interesting, educational content. Then there is the bad stuff, which we need to find and get rid of. Are we always good enough at finding it fast enough? No, probably not; we can discuss that, and we do try, but that's maybe a different discussion. I'm just saying I think we're getting a bit gloomy about what social media actually contributes. One final point: if you look at young people, and there is research backing up what I'm saying here, they actually go there because they find a lot of connection. They engage with their friends and families there; it's an extension of their social life, and research also shows it can increase their well-being. Then of course there is problematic material, which we need to get rid of, and there is excessive use, which is too much and which we also need to address. But I think we should keep some proportion in the debate; that's all I'm saying.
Jasmine Ko: Thank you both. In the interest of time, may we kindly take two questions at once? After the lady, the gentleman can go directly. So we'll have two questions at once.
Audience: Okay, I'll try to be quick. I am both young and old enough to remember social media like Myspace, and Facebook and Instagram without algorithms or personalized ads. We're having conversations here about access and about social media being harmful to children, but what I feel we haven't addressed is the economic model that developed in the mid-2000s: an economic model of attention, which is harmful for children. Social media in itself can have positive benefits, but what I want to know is what is being done, from a regulatory point of view and from the platform point of view, to address the economy of attention. Thank you.
Thank you very much, my name is Louvo Gray, and I am the chairperson of the South African Youth Internet Governance Forum. The gentleman before me touched on this point. We recently hosted the South African Youth IGF ahead of the national IGF in South Africa in April, and this was quite a touching topic, especially among many of the young people who attended. The question is basically this: as we build these frameworks for age verification and online safety, are we really asking the right question? Shouldn't we be focusing less on verifying the age of the users and more on verifying the responsibility and accountability of the platforms profiting from young people's data and attention? How do we shift the conversation from regulating the youth to regulating the digital power structures around them? Because, as the gentleman mentioned, why are we talking about the cigarette when the cigarette is being supplied by the dealer? Instead of over-regulating the internet and over-regulating social media users, let's look at the wholesalers of these platforms and regulate them. Thank you so much.
Jasmine Ko: Thank you. So for the panel, you may want to keep your response to within a minute if possible.
Brendan Dowling: I'll be really short. I love both those questions. I think social media has changed to maximise, to use your phrase, which I think is an excellent one, the economic model of attention. Anyone who's been on X recently would be hard-pressed to say that the way it sucks your attention in has any social health benefit whatsoever. It has become about maximising engagement; that's how platforms have monetised this. I think that's a really serious question, and this is a hard area of public policymaking, but one we're becoming more attuned to and paying more attention to, and there's greater transparency. I don't think there are super easy answers to those questions, but part of the reason I work in this field is that it's an area of public policy where we're grappling with some of the hardest questions. So I really appreciate some really interesting points being made there.
Martin Ruby: On the last question, I would actually recommend against that approach; I know I'm as biased as it gets, but still. My argument is that age verification, nerdy as it might be, is really, really cool: if we want kids to be safe online, it's about time we get practical about age verification. I live in Europe; Europe has been discussing EU regulation for 10 to 15 years now and has added a lot of it, but age verification is where there is still a gap, and we need to get that fixed, otherwise we get nowhere. Then we can also have the other debate about the bigger picture, but I just think it's really important to focus on that. On the first question, about the overall business model and algorithms: you can actually go in and kill the algorithm on our platforms. You can simply say, "I want my feed to be chronological"; someone mentioned that before, and you can still do that. I wouldn't recommend it as a user experience, but you're free to do it, and then there is no algorithm controlling your feed. As for the advertising business model with some level of personalization: that is pretty much the business model of the whole internet, news publishers included. It's a big bear you're shooting at when you discuss that. I just want to put that on the table.
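For readers unfamiliar with what "killing the algorithm" means in practice, the sketch below contrasts a ranked feed with a chronological one. The engagement score is an invented stand-in for a real ranking model; nothing here reflects Meta's actual ranking code.

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal sketch of ranked vs. chronological feeds. The engagement score
# is an invented stand-in for a real ranking model.

@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float   # hypothetical model output in [0, 1]

def build_feed(posts: list[Post], mode: str = "ranked") -> list[Post]:
    if mode == "chronological":
        # Newest first, ignoring any engagement prediction.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    # Default "ranked" feed: ordered by predicted engagement.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("friend", datetime(2025, 6, 23, 9, 0), predicted_engagement=0.2),
    Post("viral_page", datetime(2025, 6, 20, 18, 0), predicted_engagement=0.9),
]
print([p.author for p in build_feed(posts, "ranked")])         # ['viral_page', 'friend']
print([p.author for p in build_feed(posts, "chronological")])  # ['friend', 'viral_page']
```

The choice of mode is the whole difference: the chronological feed needs no prediction about the user at all, which is why it is often raised in debates about the economy of attention.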
Jasmine Ko: Thank you. In the interest of time, I will take just one online question, and for the three of you in the queue, please try to keep it within 30 seconds each; we will take the four questions at the same time. After the questions, panelists, please prepare to respond and to give your final remarks, leaving our audience with one concrete action to take away. I will start with the online question, very quickly, from Rick Lance. The question is:
Audience: If a product is designed in a way that can cause harm, shouldn't the company that designed that product be held accountable in a court of law by those who are harmed? That's the first question.
Hi, my name is Claire. I'm a high school student from Hong Kong. I agree that everyone should take responsibility. My question is about the fact that the age of children using social media keeps decreasing with each generation. To me, one reason is that, at least in Hong Kong, there are kids who use social media because they don't really have an alternative. They're on their iPads, and it's a way for parents to keep them occupied; during dinner, for example, I see many children scrolling on social media. So my question is: what are some ways parents can get involved in diverting kids' attention from social media towards other alternatives, to get them off it?
Hello, my name is Heilan. I'm a politics and technology student from Germany. In recent years we have seen a global shift towards more extreme political positions, and social media is often used to amplify or radicalize these positions, because many politicians actively use it to gain support from younger audiences. My question is: how do you see the role of tech companies and governments in addressing this issue? Should there be stronger interventions or regulation to prevent the misuse of these platforms for political radicalization, and if so, what kind of interventions would be appropriate? I know fact-checking mechanisms already exist to some extent, especially on Facebook in the past. Could they be used more strategically to counter political radicalization, especially against the spread of disinformation and misinformation? Thank you so much.
Hi, my name is Kenneth from London, and thank you, Martin, for highlighting that youth voices are getting lost in this debate. I want to respond as the advisor of the Asia-Pacific Policy Observatory, a digital-native think tank initiative building capacity in policy analysis and engagement. Most recently we have been looking at how AI is affecting various digital issues, including online safety. Our latest findings, which will be launched at the IGF this week, echo a lot of what you have all been talking about: the need for safety-by-design principles that incorporate age-appropriate protections throughout the service ecosystem and business model, and also how crucial transparency is. That is what I wanted to zoom in on. Digital natives want to see online platforms publish their principles, their policies, and the actions they take to protect minors, which is crucial, but digital natives also want a seat at the table in shaping not just government policy but these aspects of corporate governance and policy as well. In your remarks you have emphasized the difficulty, and the absence of a perfect solution, in verifying age and protecting children on platforms, so I would encourage youth and multistakeholder participation in shaping these corporate policies, which would be fundamental. Thank you.
Jasmine Ko: Yeah, thank you. So, panelists, while you're addressing the questions, please also make your final remarks by sharing one concrete action to take away. So please, panelists.
Amina Ramallan: Okay, I'll respond first to the question about alternatives to keep children engaged other than social media. The list is not exhaustive, but here are some things parents can look at. There are educational applications and educational games; there is so much out there online that children would rather be on than social media, and that can help with their learning, their cognitive thinking, their ability to do better in school, and their ability to learn, be creative, and express themselves. There are creative platforms that help children find out what they're good at: coding, storytelling, even music and art. They foster problem-solving, creativity, and digital literacy. There are also plenty of age-appropriate podcasts and audiobooks children can listen to. And then there is something we used to do much more: outdoor activities. Because of the internet and social media everybody is indoors now, and the pandemic didn't help; we got used to being indoors again. But being outside helps physical health, not just keeping kids offline and away from the risks, and it reduces sedentary time, because these days children in school, and we at work, are just sitting down all day, which is also not good. So, to round up, my one concrete action: yes, we talk about safety by design, but I will still key into awareness. The ambassador spoke about the situation where parents did not have access to the internet and social media the way their kids do today. We need to scale up that awareness. It's not just about the platforms doing the right thing, although we need that; we also need to raise awareness among children, parents, teachers, and policymakers. Five years ago the awareness might have been about one thing; today it can be about AI, or about how to stay safe online. And not just in informal settings: we need to infuse these awareness sessions into the school curriculum, so that while we're teaching children how to code, or what a computer and a monitor are, we're also teaching them how to stay safe on social media. Thank you.
Jasmine Ko: Thank you, Amina. Ambassador?
Brendan Dowling: really quickly. Accountability in a court of law for harms caused by content on social media platforms is a really tricky question. There is a legal shield under US law, which protects… Puts a liability with the person who posted the content, not with the social media platform. I think that’s shaped the way that platforms have developed, but that’s the way the law is there. I think it’s for other jurisdictions to figure out what the right legal approach is, and whether there is liability if a platform fails to take action against content that is proved to be harmful, can they be held liable for those harms? That’s something I think that our courts of law will test. Alternatives for kids. Parents are lazy, right? I do this, I’ve got young kids. Our kids get upset, our kids are bored, they get hungry, we might have the screen. We have become bad at parenting because that’s the easy way out. My worry is that when you give the screen to the child, they don’t process their feelings when they’re feeling angry or upset. Boredom is crucial to creativity. If we are taking away kids’ ability to be bored, we’re destroying their creativity. So I think that’s on parents. I think if we see that as a solution because their kids are annoying us, and look, they can be very annoying, there’s no doubt about that, then that’s on us and we’re failing as parents. So there’s lots of alternatives. It’s okay, there’s sometimes when putting them in front of the screen is the only option or the right thing to do, but as parents, we need to take responsibility for that. Abuse of platforms by politicians with extremist or radicalized messages. I think META, we should give credit here. I think there were years in which the algorithms were manipulated by politicians and extremists using META’s platforms, and META took that seriously and has looked to address that. I hope we’re not seeing backsliding from META and other platforms now, because this remains a real risk that the spread of extreme content, content that creates social divisions or reinforces social divisions, will always be a major risk in social media platforms. And finally, youth getting involved in corporate governance, in company policies, I think really important. Transparency, access for researchers has always been a really crucial part of understanding what’s happening in social media platforms. I think youth should expect to be involved and to be included in the way companies design their products.
Laura Rego: I would like to comment on what the gentleman said, because I think it's very valid and I agree, but we also have to look at the situation from the viewpoint of someone in a developing country. It's very easy, when you live somewhere mostly safe, to say, "take your kid to the park"; but when you live somewhere with dangers, you can't take your kid to the park whenever you want. When you don't have many people who can help you raise your kid, when you're a single mother, it's hard even to take a shower or do basic things, so it's common for people in situations of vulnerability to lean more on these tools to help raise their children. I once read an article that compared the use of screens to a nanny or a pacifier, and that's what's happening. But we have to come back to the multi-stakeholder view: the government must provide safety so I can take my children to the park, and the companies have to help me control this screen time. We have to work together if we want to think about the future of children worldwide.
Jasmine Ko: Martin?
Martin Ruby: Yeah, sure. In the interest of time, maybe I'll deep-dive on the second question, on the moderation and freedom-of-speech point. We could do a whole session on that one; it's really interesting what to remove and what to leave up. It's something we face every day, in the millions of cases where we have to decide exactly where the border should go for a piece of content. The easier ones are things like threats against other people, or porn, or terrorism: they can be difficult to find, but once we find them, the decision is easy; they just have to go. The more difficult ones are the political statements, the misinformation, the tough talk, because when is talk too tough? Hitting the right balance on all those decisions, and then enforcing it, is super difficult no matter who owns the platform. It's a very difficult and tricky task. It's true, as a couple of people have said, that we have gotten better at it over the years; when we saw the algorithm reinforcing things in a bad way, we tried to put a stop to that. But it is difficult, because again it comes down to an individual decision about what you are allowed to say and when we should stop you from saying it, and that is a big responsibility for us to have. It's a daily struggle for everyone who runs this kind of platform, and something we'll just have to keep learning. Let's have a session on that next year; I think it's a super interesting topic, actually. One final point: I think those of us in this room would agree on, let's say, 98 percent of what should be allowed on the internet or on social media, but not on 100 percent. Not even two of us would agree 100 percent on what should be left up on our platforms, for instance. So there will always be disagreement about where exactly those balances should sit, about what should be allowed and what should not. It's very interesting.
Jasmine Ko: Thank you all for your closing remarks on strengthening safety and security on social media while also supporting young people's positive use of these platforms. And, of course, thank you to my co-moderator, Eileen, for this session. I would like to take this chance to close the session by announcing that the message from the youth will be an official output of this summit, among the IGF 2025 outputs. Thank you very much for joining us. Thank you.
Jasmine Ko (speech speed: 134 words per minute; speech length: 708 words; speech time: 314 seconds)

Argument: Social media platforms should implement robust age verification systems to prevent underage access to inappropriate content
Explanation: Jasmine Ko argues that as digital platforms have become integral to daily life, especially for young people, there is a need for robust age tracking systems to prevent users under certain ages from accessing inappropriate or potentially harmful content. This is presented as both an emergent and global priority for ensuring safe and age-appropriate online experiences.
Evidence: Recent legislative efforts are increasingly focused on age verification requirements for social media and adult content platforms, with laws mandating digital platforms to implement robust age tracking systems.
Major discussion point: Age Verification and Social Media Access for Youth
Topics: Children rights | Privacy and data protection | Content policy
Martin Ruby (speech speed: 173 words per minute; speech length: 3003 words; speech time: 1040 seconds)

Argument: Age verification should be implemented at the device level during phone setup rather than app-by-app to create binding restrictions
Explanation: Martin Ruby proposes that age verification should occur when setting up smartphones, as parents are typically involved in this process for children aged 8-9 when they get their first device. This approach would create binding restrictions across all apps rather than requiring verification for each individual application, making it more efficient since teenagers use over 40 different apps.
Evidence: Child safety data shows children are between 8 and 9 years old when they get their first smartphone, and teenagers use more than 40 different apps. There are only two operating systems (Android and iOS), making device-level implementation feasible.
Major discussion point: Age Verification and Social Media Access for Youth
Topics: Children rights | Privacy and data protection | Digital standards
Disagreed with: Brendan Dowling, Amina Ramallan
Disagreed on: Age verification implementation approach (device-level vs app-level vs blanket restrictions)

Argument: Age verification is technically difficult but necessary, requiring balance between privacy protection and safety measures
Explanation: Martin Ruby acknowledges that age verification is extremely difficult (it would be easy if there were no privacy concerns), and that platforms need to find the right balance between privacy and safety. He argues that while big platforms like Meta have good systems, there is still a need for better age verification across all internet services.
Evidence: Meta has developed over 50 tools for child safety and uses AI and third-party services like Yoti for age verification when users change their age suspiciously.
Major discussion point: Platform Responsibility and Safety by Design
Topics: Privacy and data protection | Children rights | Digital standards
Agreed with: Amina Ramallan, Brendan Dowling, Audience
Agreed on: Digital literacy and education are essential components of online safety

Argument: Platforms have developed over 50 tools for child safety, including teen accounts with age-appropriate restrictions and parental controls
Explanation: Martin Ruby describes Meta's comprehensive approach to child safety, including the rollout of teen accounts that place users aged 13-18 into different categories with varying restrictions based on age. For the youngest teens, these protections cannot be removed without parental involvement, while older teens have more freedom to adjust settings.
Evidence: Meta has more than 50 tools for increasing safety for kids online, with teen accounts being the newest major feature, rolled out gradually worldwide on Instagram first, then Facebook.
Major discussion point: Platform Responsibility and Safety by Design
Topics: Children rights | Content policy | Privacy and data protection

Argument: Youth voices are often lost in policy debates, and their perspectives on social media use differ significantly from adult assumptions
Explanation: Martin Ruby argues that young people's voices are getting lost in debates about social media regulation, and that there is a significant difference between restricting access for different age groups. He emphasizes that removing social media access for a 15-year-old is a major decision that would significantly affect their ability to communicate with friends and family.
Evidence: Personal example of his daughters, aged 13 and 16, noting the big difference between ages 8 and 15, and that his daughter's generation is in a different place regarding social media use.
Major discussion point: Multi-stakeholder Responsibility and Youth Engagement
Topics: Children rights | Human rights principles | Content policy
Agreed with: Li Junhua, Laura Rego, Audience, Lynn St. Amour
Agreed on: Youth voices must be included in policy discussions
Disagreed with: Brendan Dowling, Larry Magid, Aditya Majumdar
Disagreed on: Scope of social media restrictions (complete ban vs regulated access)

Argument: The vast majority of social media content is normal and beneficial, with problematic content being a small percentage
Explanation: Martin Ruby contends that 99.99% of content on social media platforms is normal, boring, funny, interesting, or educational, and that billions of people use these platforms voluntarily every day because they find value in them. He argues that while there is problematic content that needs to be addressed, the debate has become too focused on the negative aspects.
Evidence: Billions of people use Meta's platforms voluntarily every day, and research shows young people go to social media for connection with friends and family, increasing their well-being.
Major discussion point: Benefits vs. Harms of Social Media for Youth
Topics: Content policy | Human rights principles | Freedom of expression
Disagreed with: Brendan Dowling, Larry Magid, Amina Ramallan
Disagreed on: Balance between benefits and harms of social media for youth
Brendan Dowling (speech speed: 147 words per minute; speech length: 2895 words; speech time: 1180 seconds)

Argument: Australia's approach requires platforms to take reasonable steps to restrict access for under-16s, with flexibility in implementation methods
Explanation: Brendan Dowling explains that Australia's legislation places the obligation on companies, not children or parents, requiring platforms to take 'reasonable steps' to adopt age verification measures. The approach is designed to be flexible, allowing companies to choose from various mechanisms rather than prescribing one specific tool.
Evidence: The legislation will be implemented in December 2025, with ongoing consultation processes and a government commitment not to use digital ID processes while looking at alternative age verification methods.
Major discussion point: Age Verification and Social Media Access for Youth
Topics: Children rights | Legal and regulatory | Privacy and data protection
Agreed with: Martin Ruby, Vivek Silwal
Agreed on: Age verification is technically challenging and requires flexible approaches
Disagreed with: Audience, Martin Ruby
Disagreed on: Primary focus of regulation (platforms vs users)

Argument: Social media was not designed for children and demonstrates clear harms to attention spans, mental health, and development
Explanation: Brendan Dowling argues that social media platforms were not created with children or safety in mind, and that years of pressure from governments and civil society have been needed to get platforms to take user safety seriously. He cites documented harms including impacts on attention spans, cyberbullying, mental health, stress levels, sleep, and brain development.
Evidence: Years of documented concerns raised by the community, with platforms' responses being 'lacklustre at best, disingenuous at worst', and community expectations for government action to protect children.
Major discussion point: Benefits vs. Harms of Social Media for Youth
Topics: Children rights | Content policy | Human rights principles
Disagreed with: Martin Ruby, Larry Magid, Amina Ramallan
Disagreed on: Balance between benefits and harms of social media for youth

Argument: The Australian approach is driven by frustration with inadequate platform responses to documented harms over many years
Explanation: Brendan Dowling explains that Australia's age restriction measure stems from frustration with social media companies' insufficient responses to years of concerns about child safety. The measure aims to create a normative shift in community understanding that these services are inappropriate for children under 16.
Evidence: Years of pressure applied by governments and civil society met with inadequate responses from social media companies, and community expectations that government should act to protect children.
Major discussion point: Regulatory Approaches and Government Action
Topics: Legal and regulatory | Children rights | Consumer protection

Argument: Parents have responsibility but face challenges, especially in developing countries where they lack digital literacy experience
Explanation: Brendan Dowling acknowledges that parents often use screens as an easy solution when children are upset or bored, but warns that this prevents children from processing emotions and destroys creativity by eliminating boredom. He notes particular challenges in remote communities where connectivity arrives before digital literacy, and parents have no lived experience with social media.
Evidence: Work in the Pacific ensuring connectivity to remote villages, where social media access reaches communities before digital literacy, and parents have no actual lived experience to guide their children.
Major discussion point: Multi-stakeholder Responsibility and Youth Engagement
Topics: Digital access | Capacity development | Children rights
Agreed with: Amina Ramallan, Martin Ruby, Audience
Agreed on: Digital literacy and education are essential components of online safety

Argument: Flexible regulatory approaches are preferable to prescriptive ones in this emerging policy area
Explanation: Brendan Dowling argues that in new areas of policymaking and technology development, starting with more flexibility is better than being prescriptive and trying to pull back later. He advocates allowing various options to proliferate to see what works best, with coalescence around more effective tools over time.
Evidence: Australia's approach uses a 'reasonable steps' standard that can be challenged in courts, with embedded protections for privacy and digital rights, and ongoing consultation processes.
Major discussion point: Regulatory Approaches and Government Action
Topics: Legal and regulatory | Privacy and data protection | Digital standards
Vivek Silwal (speech speed: 167 words per minute; speech length: 222 words; speech time: 79 seconds)

Argument: Simple birth date verification is insufficient; more sophisticated methods including AI and facial recognition may be needed
Explanation: Vivek Silwal questions whether simple birth date data is sufficient for age verification, particularly for youth from developing nations who may not be aware of their rights or of platform exploitation. He suggests that AI platforms and face recognition might be needed to create a better balance in age verification systems.
Evidence: Developing nations lack curricula teaching platform do's and don'ts, unlike developed countries where these topics are taught in schools.
Major discussion point: Age Verification and Social Media Access for Youth
Topics: Children rights | Digital access | Capacity development
Agreed with: Martin Ruby, Brendan Dowling
Agreed on: Age verification is technically challenging and requires flexible approaches
Amina Ramallan (speech speed: 151 words per minute; speech length: 2222 words; speech time: 882 seconds)

Argument: Social media platforms should integrate safety by design principles and rights-based regulation, especially for youth features
Explanation: Amina Ramallan recommends that social media platforms prioritize safety by design and rights-based regulation, integrating human rights impact assessments into product design for youth features. She specifically suggests turning off addictive features like unlimited scrolling and likes, and implementing stronger enforcement with global cooperation.
Evidence: Examples of addictive features such as unlimited scrolling that create dopamine effects, and of ads targeted at children despite policies claiming to protect child accounts.
Major discussion point: Platform Responsibility and Safety by Design
Topics: Children rights | Human rights principles | Content policy

Argument: Current platform policies are not evolving as fast as social media technology, creating gaps in protection
Explanation: Amina Ramallan argues that while social media provides avenues for youth expression and dynamism, platform policies are not keeping pace with technological evolution. She points to issues such as UI effects creating dopamine addiction, algorithm-driven targeted advertising to children, and a lack of transparency in policies.
Evidence: Examples include unlimited scrolling creating addiction, targeted ads still appearing on child accounts despite policies, and 'sharenting', where parents expose children's data without consent.
Major discussion point: Platform Responsibility and Safety by Design
Topics: Children rights | Privacy and data protection | Content policy

Argument: Child online safety requires a multi-stakeholder approach involving platforms, governments, parents, schools, and civil society
Explanation: Amina Ramallan emphasizes that responsibility for child online safety is shared across multiple stakeholders, with each taking over where another's responsibility ends. She illustrates this with examples of how responsibility shifts from parents to schools to government depending on the context and location of the child.
Evidence: Example of responsibility shifting from parent to teacher when the child is at school, then to government when the child attends government events or excursions.
Major discussion point: Multi-stakeholder Responsibility and Youth Engagement
Topics: Children rights | Human rights principles | Capacity development
Agreed with: Laura Rego, Brendan Dowling, Martin Ruby
Agreed on: Multi-stakeholder responsibility for child online safety

Argument: Social media provides significant benefits including creativity, innovation, job creation, peer connection, and civic engagement opportunities
Explanation: Amina Ramallan acknowledges that social media is deeply interwoven in young people's lives, serving as a platform for creativity, innovation, job creation, peer connection, and civic engagement. She notes successful cases of young people using social media for governance and advocacy within their regions.
Evidence: 77% of young people globally use the internet, with social media being integral to their daily activities and successful advocacy campaigns.
Major discussion point: Benefits vs. Harms of Social Media for Youth
Topics: Children rights | Freedom of expression | Digital access
Disagreed with: Martin Ruby, Brendan Dowling, Larry Magid
Disagreed on: Balance between benefits and harms of social media for youth

Argument: In developing countries, children often use adults' phones to access social media, making device-level verification less effective
Explanation: Amina Ramallan points out that in developing countries the percentage of 8-9 year olds owning smartphones is very low, so children often use phones belonging to siblings or parents to download and access social media. This creates challenges for device-level age verification, since the phone is set up for an adult.
Evidence: Low smartphone ownership rates among young children in developing countries, with children downloading apps on adult-owned devices without the adult's knowledge.
Major discussion point: Age Verification and Social Media Access for Youth
Topics: Digital access | Children rights | Development
Disagreed with: Martin Ruby, Brendan Dowling
Disagreed on: Age verification implementation approach (device-level vs app-level vs blanket restrictions)

Argument: Awareness campaigns must scale up and evolve to address new challenges like deepfakes and AI-generated content
Explanation: Amina Ramallan emphasizes that the importance of digital literacy awareness cannot be overstated and that it must continuously evolve. She notes that while awareness 10 years ago focused on basic social media access, today's challenges include identifying deepfakes and distinguishing between AI-generated and real content.
Evidence: Evolution from basic social media awareness 10 years ago to current needs for deepfake identification and AI content recognition.
Major discussion point: Digital Literacy and Education
Topics: Capacity development | Children rights | Content policy
Agreed with: Martin Ruby, Brendan Dowling, Audience
Agreed on: Digital literacy and education are essential components of online safety
Larry Magid
Speech speed
168 words per minute
Speech length
292 words
Speech time
104 seconds
Platforms offer lifelines for marginalized youth, including LGBTQ children in unsupportive communities, potentially saving lives
Explanation
Larry Magid argues that while social media, like bicycles and basketballs, can cause harm, it also brings significant benefits, including life-saving support systems for children in crisis. He specifically mentions LGBTQ children in communities where they are ostracized who have found literal lifelines through social media, and cases where mental health has improved as a result of social media use.
Evidence
Research showing benefits to young people from social media, children alive today who might not have survived without social media support systems, and examples of improved mental health outcomes
Major discussion point
Benefits vs. Harms of Social Media for Youth
Topics
Children rights | Human rights principles | Gender rights online
Disagreed with
– Martin Ruby
– Brendan Dowling
– Amina Ramallan
Disagreed on
Balance between benefits and harms of social media for youth
Li Junhua
Speech speed
107 words per minute
Speech length
603 words
Speech time
336 seconds
Meaningful youth engagement is essential in policymaking, as young people are the primary users and should be included in decisions affecting them
Explanation
Li Junhua argues that meaningful protection cannot be imposed from the top down but must be co-created and co-managed with youth engagement. He emphasizes that placing youth at the center of policymaking is not just good practice but a moral imperative, as their voices are essential to shaping policies that reflect real experiences and needs.
Evidence
Over 77% of young people aged 15-24 use the Internet, with more than 80% active on social media, making them the primary users of these platforms
Major discussion point
Multi-stakeholder Responsibility and Youth Engagement
Topics
Children rights | Human rights principles | Capacity development
Agreed with
– Laura Rego
– Martin Ruby
– Audience
– Lynn St. Amour
Agreed on
Youth voices must be included in policy discussions
Laura Rego
Speech speed
128 words per minute
Speech length
840 words
Speech time
392 seconds
Brazil’s experience shows the importance of data protection authorities in enforcing age verification requirements
Explanation
Laura Rego describes how Brazil’s National Authority of Data Protection, following a request from Institute Alana, successfully required TikTok to implement age verification after discovering that users could access any content through direct links without age checks. This action was taken in response to concerns about children’s safety online.
Evidence
Specific case in which TikTok was required to mandate user accounts so that age verification could take place, after it was found that content could be accessed through direct links without any age checks
Major discussion point
Regulatory Approaches and Government Action
Topics
Privacy and data protection | Children rights | Legal and regulatory
The principle ‘nothing about us without us’ should guide youth inclusion in policy discussions
Explanation
Laura Rego emphasizes that unilateral solutions have proven flawed, requiring participation from those most affected: children and teens. She advocates for inviting young people into discussion spaces, equipping them, and hearing their voices so that solutions can deliver safety while engaging the next generation of leaders.
Evidence
Research showing that 83% of children aged 9-17 in Brazil already have their own social media accounts, and the Brazilian legal principle that the best interest of minors must be guaranteed by society as a whole
Major discussion point
Multi-stakeholder Responsibility and Youth Engagement
Topics
Children rights | Human rights principles | Capacity development
Agreed with
– Li Junhua
– Martin Ruby
– Audience
– Lynn St. Amour
Agreed on
Youth voices must be included in policy discussions
Educational approaches should include multi-stakeholder guidelines showing what different actors should do
Explanation
Laura Rego describes Brazil’s 2025 guide for safe internet use for children, which takes a multi-stakeholder view providing guidelines for what enterprises, parents, children, and government should do. She emphasizes this as a first step in addressing online safety through collaborative approaches.
Evidence
Brazil’s 2025 guide for safe internet use for children, with multi-stakeholder guidelines for different sectors
Major discussion point
Digital Literacy and Education
Topics
Capacity development | Children rights | Human rights principles
Agreed with
– Amina Ramallan
– Brendan Dowling
– Martin Ruby
Agreed on
Multi-stakeholder responsibility for child online safety
Aditya Majumdar
Speech speed
153 words per minute
Speech length
120 words
Speech time
47 seconds
Blanket bans may disconnect youth from critical support networks and educational resources
Explanation
Aditya Majumdar argues that social media bans and restrictions targeting teens, while often justified as protecting youth, have been shown to frequently disconnect young people from critical support networks and educational resources, especially for marginalized teens. He questions how youth safety can be ensured without resorting to blanket bans that silence youth voices.
Evidence
Work tracking international social media bans and restrictions targeting teens, showing disconnection from support networks and educational resources particularly affecting marginalized teens
Major discussion point
Benefits vs. Harms of Social Media for Youth
Topics
Children rights | Human rights principles | Freedom of expression
Disagreed with
– Brendan Dowling
– Martin Ruby
– Larry Magid
Disagreed on
Scope of social media restrictions – complete ban vs regulated access
Audience
Speech speed
139 words per minute
Speech length
1756 words
Speech time
754 seconds
Companies should be held accountable for designing products that can cause harm, particularly through addictive features
Explanation
Multiple audience members argued that companies like Meta have created ‘the perfect cigarette’ and that the focus should be on regulating platforms rather than users. They suggest that if a product is designed in a way that can cause harm, the company that designed it should be held accountable in courts of law.
Evidence
Comparison to cigarettes and tobacco industry accountability, and the economic model of attention that is harmful to children
Major discussion point
Platform Responsibility and Safety by Design
Topics
Consumer protection | Legal and regulatory | Children rights
Disagreed with
– Brendan Dowling
– Martin Ruby
Disagreed on
Primary focus of regulation – platforms vs users
The focus should be on regulating platforms and their business models rather than restricting user access
Explanation
Audience members argued that instead of over-regulating the internet and social media users, the focus should be on regulating the ‘digital power structures’ and ‘wholesalers’ of these platforms. They advocate for addressing the economic model of attention and algorithmic design that creates harm rather than implementing age verification.
Evidence
Reference to the economic model of attention developed in the mid-2000s and comparison to removing nicotine from cigarettes rather than taking cigarettes away from children
Major discussion point
Platform Responsibility and Safety by Design
Topics
Legal and regulatory | Digital business models | Consumer protection
Disagreed with
– Brendan Dowling
– Martin Ruby
Disagreed on
Primary focus of regulation – platforms vs users
Platforms should provide transparency in their content moderation and safety policies
Explanation
Audience members emphasized the need for transparency from platforms, not just in publishing their policies and actions to protect minors, but also in involving digital natives in shaping corporate governance and policies. They argue that transparency is crucial for effective youth protection measures.
Evidence
Findings from Asia-Pacific Policy Observatory research on AI’s impact on digital issues, emphasizing transparency and youth participation in corporate policy-making
Major discussion point
Platform Responsibility and Safety by Design
Topics
Content policy | Human rights principles | Corporate governance
Young people should have meaningful seats at the table in shaping both government and corporate policies
Explanation
Multiple audience members argued that digital natives should be involved not just in government policy discussions but also in corporate governance and policy-making processes. They emphasize that youth participation is fundamental to creating effective solutions for online safety.
Evidence
Reference to Asia-Pacific Policy Observatory findings and emphasis on youth as digital natives who should participate in shaping policies that affect them
Major discussion point
Multi-stakeholder Responsibility and Youth Engagement
Topics
Human rights principles | Children rights | Corporate governance
Agreed with
– Li Junhua
– Laura Rego
– Martin Ruby
– Lynn St. Amour
Agreed on
Youth voices must be included in policy discussions
Digital literacy education should be mandatory and integrated into school curricula to teach safe social media use
Explanation
Audience members argued for introducing education courses on social media that would be obligatory for everyone with a social media account, providing regular instruction on how to use social media for good purposes. They also emphasized the need to educate parents about potential harms and how to help their children.
Evidence
Reference to ongoing frontal lobe development in young adults and the proposed need for refresher education every few months for social media users
Major discussion point
Digital Literacy and Education
Topics
Capacity development | Online education | Children rights
Agreed with
– Amina Ramallan
– Martin Ruby
– Brendan Dowling
Agreed on
Digital literacy and education are essential components of online safety
There’s a need for age-appropriate educational content and alternatives to social media for children
Explanation
Audience members, particularly a high school student from Hong Kong, noted that children often use social media because they lack alternatives, and that parents use devices to keep children occupied during activities like dinner. They questioned what alternatives parents can provide to divert children’s attention from social media.
Evidence
Personal observation of children scrolling social media during dinner and using iPads because of a lack of alternatives
Major discussion point
Digital Literacy and Education
Topics
Children rights | Online education | Content policy
Lynn St. Amour
Speech speed
155 words per minute
Speech length
843 words
Speech time
325 seconds
The IGF Youth Summit represents a collaborative bottom-up approach to addressing digital governance challenges
Explanation
Lynn St. Amour emphasizes that the Global Youth Summit was developed through bottom-up consultations among various stakeholders including IGF host countries, UN IGF Secretariat, youth IGFs, and other youth-driven initiatives. This collaborative approach demonstrates the multi-stakeholder nature of internet governance and the importance of including youth voices in policy discussions.
Evidence
The summit was developed through bottom-up consultations among designated representatives of Norway, UN IGF Secretariat, various youth IGFs, and other youth-driven global internet governance initiatives such as the Internet Society’s Youth Ambassadors Program
Major discussion point
Multi-stakeholder Responsibility and Youth Engagement
Topics
Human rights principles | Children rights | Capacity development
Intergenerational dialogue is essential for effective internet governance policy-making
Explanation
Lynn St. Amour describes the summit as serving as a multi-stakeholder intergenerational panel between current and next generation experts and leaders. This approach recognizes that effective digital governance requires bridging the gap between experienced policymakers and the youth who are most affected by digital policies.
Evidence
The summit serves as a multi-stakeholder intergenerational panel between the current and next generation of experts and leaders
Major discussion point
Multi-stakeholder Responsibility and Youth Engagement
Topics
Human rights principles | Capacity development | Children rights
Agreed with
– Li Junhua
– Laura Rego
– Martin Ruby
– Audience
Agreed on
Youth voices must be included in policy discussions
Youth engagement should be structured and facilitated through dedicated programs and initiatives
Explanation
Lynn St. Amour highlights the importance of creating dedicated spaces and programs for youth participation in internet governance discussions. The Youth Track activities and summit represent a systematic approach to ensuring youth voices are heard and integrated into broader policy discussions.
Evidence
The summit is part of the IGF 2025 Youth Track and broader series of Youth Track activities, organized under the motto ‘Young Leaders for Multi-Stakeholder Governance of Digital Tech’
Major discussion point
Multi-stakeholder Responsibility and Youth Engagement
Topics
Capacity development | Human rights principles | Children rights
Agreements
Agreement points
Multi-stakeholder responsibility for child online safety
Speakers
– Amina Ramallan
– Laura Rego
– Brendan Dowling
– Martin Ruby
Arguments
Child online safety requires a multi-stakeholder approach involving platforms, governments, parents, schools, and civil society
Educational approaches should include multi-stakeholder guidelines showing what different actors should do
Parents have responsibility but face challenges, especially in developing countries where they lack digital literacy experience
Age verification is technically difficult but necessary, requiring balance between privacy protection and safety measures
Summary
All speakers agree that protecting children online cannot be the responsibility of a single stakeholder but requires coordinated efforts from platforms, governments, parents, schools, and civil society organizations working together.
Topics
Children rights | Human rights principles | Capacity development
Youth voices must be included in policy discussions
Speakers
– Li Junhua
– Laura Rego
– Martin Ruby
– Audience
– Lynn St. Amour
Arguments
Meaningful youth engagement is essential in policymaking, as young people are the primary users and should be included in decisions affecting them
The principle ‘nothing about us without us’ should guide youth inclusion in policy discussions
Youth voices are often lost in policy debates, and their perspectives on social media use differ significantly from adult assumptions
Young people should have meaningful seats at the table in shaping both government and corporate policies
Intergenerational dialogue is essential for effective internet governance policy-making
Summary
There is strong consensus that young people, as the primary users of social media platforms, must have meaningful participation in policy discussions that affect them, rather than having decisions made about them without their input.
Topics
Children rights | Human rights principles | Capacity development
Age verification is technically challenging and requires flexible approaches
Speakers
– Martin Ruby
– Brendan Dowling
– Vivek Silwal
Arguments
Age verification is technically difficult but necessary, requiring balance between privacy protection and safety measures
Australia’s approach requires platforms to take reasonable steps to restrict access for under-16s, with flexibility in implementation methods
Simple birth date verification is insufficient; more sophisticated methods including AI and facial recognition may be needed
Summary
All speakers acknowledge that age verification presents significant technical challenges and that there is no perfect solution, requiring flexible and evolving approaches rather than rigid prescriptive methods.
Topics
Privacy and data protection | Children rights | Digital standards
Digital literacy and education are essential components of online safety
Speakers
– Amina Ramallan
– Martin Ruby
– Brendan Dowling
– Audience
Arguments
Awareness campaigns must scale up and evolve to address new challenges like deepfakes and AI-generated content
Age verification is technically difficult but necessary, requiring balance between privacy protection and safety measures
Parents have responsibility but face challenges, especially in developing countries where they lack digital literacy experience
Digital literacy education should be mandatory and integrated into school curricula to teach safe social media use
Summary
There is unanimous agreement that digital literacy and ongoing education are fundamental to protecting children online, with recognition that these efforts must evolve to address new technological challenges.
Topics
Capacity development | Children rights | Online education
Similar viewpoints
These speakers share concern that overly restrictive approaches to social media regulation may cause more harm than good by cutting off beneficial uses and support systems that young people rely on.
Speakers
– Larry Magid
– Aditya Majumdar
– Martin Ruby
Arguments
Platforms offer lifelines for marginalized youth, including LGBTQ children in unsupportive communities, potentially saving lives
Blanket bans may disconnect youth from critical support networks and educational resources
The vast majority of social media content is normal and beneficial, with problematic content being a small percentage
Topics
Children rights | Human rights principles | Freedom of expression
Both recognize that platform design significantly impacts user safety, though they differ on whether current efforts are sufficient or whether more fundamental changes to business models are needed.
Speakers
– Audience
– Martin Ruby
Arguments
Companies should be held accountable for designing products that can cause harm, particularly through addictive features
Platforms have developed over 50 tools for child safety, including teen accounts with age-appropriate restrictions and parental controls
Topics
Platform Responsibility and Safety by Design | Consumer protection | Children rights
Both speakers recognize the unique challenges faced in developing countries where traditional assumptions about device ownership and parental digital literacy may not apply.
Speakers
– Amina Ramallan
– Brendan Dowling
Arguments
In developing countries, children often use adults’ phones to access social media, making device-level verification less effective
Parents have responsibility but face challenges, especially in developing countries where they lack digital literacy experience
Topics
Digital access | Children rights | Development
Unexpected consensus
Platform regulation advocacy from industry representative
Speakers
– Martin Ruby
Arguments
Age verification should be implemented at the device level during phone setup rather than app-by-app to create binding restrictions
Explanation
It is unexpected that a Meta representative would actively advocate for regulation, as Martin Ruby explicitly stated ‘We actually want to see regulation on age verification coming our way. It’s very rare that you hear something like that maybe from a player like us.’ This suggests industry recognition that self-regulation alone may be insufficient.
Topics
Legal and regulatory | Children rights | Digital standards
Acknowledgment of platform design flaws by industry representative
Speakers
– Martin Ruby
– Brendan Dowling
Arguments
Platforms have developed over 50 tools for child safety, including teen accounts with age-appropriate restrictions and parental controls
Social media was not designed for children and demonstrates clear harms to attention spans, mental health, and development
Explanation
There is unexpected consensus between the government representative and industry representative that social media platforms were not originally designed with children’s safety in mind and that significant problems exist, even as they propose different solutions.
Topics
Platform Responsibility and Safety by Design | Children rights | Content policy
Recognition of imperfect solutions by all stakeholders
Speakers
– Brendan Dowling
– Martin Ruby
– Amina Ramallan
Arguments
Flexible regulatory approaches are preferable to prescriptive ones in this emerging policy area
Age verification is technically difficult but necessary, requiring balance between privacy protection and safety measures
Current platform policies are not evolving as fast as social media technology, creating gaps in protection
Explanation
All stakeholders, including government, industry, and civil society representatives, acknowledge that there are no perfect solutions and that approaches must be flexible and evolving. This level of humility and recognition of complexity is unexpected in policy debates.
Topics
Legal and regulatory | Privacy and data protection | Children rights
Overall assessment
Summary
The discussion revealed significant consensus on fundamental principles: the need for multi-stakeholder approaches, youth inclusion in policy-making, the technical challenges of age verification, and the importance of digital literacy. However, disagreement remains on the balance between access and protection, with some favoring restrictive approaches and others emphasizing the benefits of social media access.
Consensus level
High consensus on principles and process, moderate consensus on implementation approaches. The level of agreement suggests potential for collaborative solutions, though the challenge lies in translating shared principles into effective, balanced policies that protect children while preserving beneficial uses of social media platforms.
Differences
Different viewpoints
Age verification implementation approach – device-level vs app-level vs blanket restrictions
Speakers
– Martin Ruby
– Brendan Dowling
– Amina Ramallan
Arguments
Age verification should be implemented at the device level during phone setup rather than app-by-app to create binding restrictions
Australia’s approach requires platforms to take reasonable steps to restrict access for under-16s, with flexibility in implementation methods
In developing countries, children often use adults’ phones to access social media, making device-level verification less effective
Summary
Martin Ruby advocates for device-level age verification during phone setup, Brendan Dowling supports Australia’s flexible platform-based approach requiring reasonable steps from companies, while Amina Ramallan points out that device-level solutions may not work in developing countries where children use adult-owned devices
Topics
Children rights | Privacy and data protection | Digital access
Scope of social media restrictions – complete ban vs regulated access
Speakers
– Brendan Dowling
– Martin Ruby
– Larry Magid
– Aditya Majumdar
Arguments
Social media was not designed for children and demonstrates clear harms to attention spans, mental health, and development
Youth voices are often lost in policy debates, and their perspectives on social media use differ significantly from adult assumptions
Platforms offer lifelines for marginalized youth, including LGBTQ children in unsupportive communities, potentially saving lives
Blanket bans may disconnect youth from critical support networks and educational resources
Summary
Brendan Dowling supports Australia’s under-16 ban citing documented harms, while Martin Ruby argues youth voices are lost and restrictions on 15-year-olds are excessive. Larry Magid and Aditya Majumdar emphasize the life-saving benefits for marginalized youth and risks of disconnecting them from support networks
Topics
Children rights | Human rights principles | Freedom of expression
Primary focus of regulation – platforms vs users
Speakers
– Audience
– Brendan Dowling
– Martin Ruby
Arguments
Companies should be held accountable for designing products that can cause harm, particularly through addictive features
The focus should be on regulating platforms and their business models rather than restricting user access
Australia’s approach requires platforms to take reasonable steps to restrict access for under-16s, with flexibility in implementation methods
The vast majority of social media content is normal and beneficial, with problematic content being a small percentage
Summary
Audience members argue for regulating platform design and business models rather than users, comparing social media to cigarettes. Brendan Dowling focuses on restricting user access through platform obligations, while Martin Ruby defends that most platform content is beneficial
Topics
Legal and regulatory | Consumer protection | Platform responsibility
Balance between benefits and harms of social media for youth
Speakers
– Martin Ruby
– Brendan Dowling
– Larry Magid
– Amina Ramallan
Arguments
The vast majority of social media content is normal and beneficial, with problematic content being a small percentage
Social media was not designed for children and demonstrates clear harms to attention spans, mental health, and development
Platforms offer lifelines for marginalized youth, including LGBTQ children in unsupportive communities, potentially saving lives
Social media provides significant benefits including creativity, innovation, job creation, peer connection, and civic engagement opportunities
Summary
Martin Ruby and Amina Ramallan emphasize the significant benefits and normal content on platforms, Larry Magid highlights life-saving support for marginalized youth, while Brendan Dowling focuses on documented harms and argues platforms weren’t designed for children
Topics
Children rights | Human rights principles | Content policy
Unexpected differences
Meta advocating for regulation of their own industry
Speakers
– Martin Ruby
Arguments
Age verification should be implemented at the device level during phone setup rather than app-by-app to create binding restrictions
Explanation
It’s unexpected that a major social media platform representative would actively advocate for regulation, as Martin Ruby explicitly states ‘We actually want to see regulation on age verification coming our way. It’s very rare that you hear something like that maybe from a player like us.’ This suggests the complexity of the issue has led even industry players to seek regulatory clarity
Topics
Legal and regulatory | Digital standards | Children rights
Disagreement on parental responsibility between developed and developing country perspectives
Speakers
– Brendan Dowling
– Laura Rego
Arguments
Parents have responsibility but face challenges, especially in developing countries where they lack digital literacy experience
Educational approaches should include multi-stakeholder guidelines showing what different actors should do
Explanation
While both speakers acknowledge parental responsibility, Laura Rego challenges Brendan Dowling’s emphasis on parental accountability by highlighting structural inequalities in developing countries where parents may lack safety options and resources, creating an unexpected tension between individual responsibility and systemic constraints
Topics
Children rights | Development | Digital access
Overall assessment
Summary
The main areas of disagreement center on implementation approaches for age verification (device-level vs platform-based vs app-level), the scope of restrictions (complete bans vs regulated access), the primary focus of regulation (platforms vs users), and the balance between benefits and harms of social media for youth
Disagreement level
Moderate to high disagreement with significant policy implications. While speakers generally agree on the need to protect children online, they fundamentally disagree on methods, scope, and focus of interventions. This reflects the complexity of balancing child safety, privacy rights, freedom of expression, and practical implementation challenges across different cultural and economic contexts. The disagreements suggest that achieving global consensus on youth social media policy will require extensive negotiation and potentially different approaches for different regions
Takeaways
Key takeaways
Age verification for social media is technically challenging but necessary, requiring a balance between privacy protection and child safety
A multi-stakeholder approach is essential for child online safety, involving platforms, governments, parents, schools, and civil society working together
Youth voices must be meaningfully included in policymaking processes that affect them, following the principle ‘nothing about us without us’
Social media platforms should implement safety-by-design principles and age-appropriate protections rather than retrofitting safety measures
Digital literacy education must be integrated into school curricula and scaled up to address evolving challenges like AI and deepfakes
There is significant debate about whether to focus on regulating platforms and their business models versus restricting user access
The benefits of social media for youth (creativity, connection, civic engagement, support networks) must be weighed against documented harms (cyberbullying, addiction, mental health impacts)
Device-level age verification during phone setup may be more effective than app-by-app verification, especially for comprehensive protection
Developing countries face unique challenges where children often access social media through adults’ devices, making traditional age verification less effective
Resolutions and action items
The youth message from this summit will be included as an official output of IGF 2025
Australia will implement its social media age restriction legislation by December 2025, requiring platforms to take reasonable steps to restrict under-16 access
Continued consultation processes are needed to develop flexible age verification mechanisms that respect privacy rights
Platforms should provide greater transparency in their content moderation and safety policies
Digital literacy programs should be expanded and integrated into educational curricula globally
Multi-stakeholder guidelines should be developed showing specific responsibilities for different actors (governments, platforms, parents, schools)
Unresolved issues
How to effectively implement age verification in developing countries where children primarily use adults’ devices
The appropriate balance between platform regulation and user access restrictions
How to address the economic model of attention and algorithmic design that may be inherently harmful to youth
The effectiveness of different age verification technologies and which methods should be considered ‘reasonable steps’
How to ensure meaningful youth participation in corporate governance and policy development beyond consultation
The challenge of content moderation and determining appropriate boundaries for political speech and misinformation
How to provide adequate digital literacy education in communities where parents lack digital experience
The question of legal liability for platforms when their design causes harm to users
Suggested compromises
Flexible regulatory approaches that allow platforms to choose from multiple age verification methods rather than prescribing single solutions
Device-level age verification during phone setup with parental controls, allowing exceptions for older teens
Graduated restrictions based on age groups (e.g., different rules for 8-year-olds versus 15-year-olds) rather than blanket bans
Combining technological solutions with educational approaches and parental responsibility rather than relying on any single intervention
Transparency requirements and regular reporting from platforms about their safety measures and effectiveness
Multi-stakeholder responsibility frameworks that distribute obligations across different actors rather than placing full responsibility on any single entity
Allowing chronological feeds as alternatives to algorithmic feeds to reduce addictive design elements while maintaining platform functionality
Thought provoking comments
How do we protect young people online without compromising their rights or limiting their freedom to participate fully in the digital world? I believe that the answer is within this room. Meaningful protection cannot be imposed from the top down. It must be co-created and co-managed with youth engagement.
Speaker
Li Junhua (UN Under-Secretary General)
Reason
This comment reframes the entire debate from a binary choice between safety and freedom to a collaborative approach that centers youth voices. It challenges the traditional top-down regulatory approach and establishes the philosophical foundation for meaningful youth participation in policy-making.
Impact
This set the tone for the entire discussion, establishing youth participation as a central theme that was repeatedly referenced throughout. It influenced subsequent speakers to address how their approaches incorporate youth voices and shaped questions about meaningful inclusion in policy-making processes.
I think the problem is that under 16, I mean, I think there is a big difference between someone with the age of 8 and 15… I think sometimes I feel the young people’s voice are getting lost in these debates… when I sometimes talk to my 15-year-old, now she’s a 16-year-old daughter, about these things, I think her generation is just in a different place.
Speaker
Martin Ruby (Meta)
Reason
This comment introduces crucial nuance to age-based restrictions by highlighting developmental differences within the ‘under 16’ category and personalizes the debate through his own parenting experience. It challenges blanket age restrictions and emphasizes generational differences in digital literacy.
Impact
This comment shifted the discussion from supporting blanket bans to questioning their appropriateness, leading to more nuanced conversations about age-appropriate design rather than complete restrictions. It prompted the Australian Ambassador to defend the rationale behind the 16-year threshold and influenced later discussions about flexibility in policy approaches.
My commentary is that social media absolutely is not for children, but it’s also bad for young adults, even if they are 16 or 18 or even 20, because their frontal lobe hasn’t been developed yet… introducing some kind of education course to social media that would be obligatory
Speaker
Audience member
Reason
This comment expands the harm discussion beyond children to include young adults, introducing neuroscience (frontal lobe development) as a factor. It shifts focus from age verification to mandatory digital literacy education as a solution.
Impact
This broadened the scope of the discussion beyond the typical child safety framework and reinforced the importance of digital literacy as a complementary approach to regulation. It led to strong agreement from multiple panelists about the necessity of digital literacy programs.
I think companies like Meta created the perfect cigarette and now we’re trying to take them away from our children… please regulate the platforms, not the users; do not take this cigarette away from our children but take the nicotine out of this platform
Speaker
Wouter (Netherlands)
Reason
This powerful analogy reframes the entire debate by comparing social media platforms to cigarettes and suggesting that the addictive design elements (the ‘nicotine’) should be removed rather than restricting access. It challenges the focus on age verification and redirects attention to platform design and business models.
Impact
This comment created a significant shift in the discussion, prompting both the Australian Ambassador and Meta representative to directly address platform responsibility versus user restrictions. It reinforced arguments about regulating business models rather than users and influenced subsequent questions about economic models of attention.
Shouldn’t we be focusing less on verifying the age of the users and more about verifying the responsibility and the accountability of platforms profiting from young people’s data and attention? So how do we shift the conversation from regulating the youth to regulating the digital power structures around them?
Speaker
Louvo Gray (South African Youth IGF)
Reason
This comment fundamentally challenges the premise of age verification by questioning whether the focus should be on users versus platforms. It introduces concepts of digital power structures and profit motives, shifting from individual protection to systemic accountability.
Impact
This reinforced the platform accountability theme introduced earlier and prompted Meta’s representative to defend age verification as a practical necessity while acknowledging the broader systemic issues. It contributed to the discussion’s evolution toward examining business models and corporate responsibility.
We have to look at the situation too, from the view of someone in a developing country, because it’s very easy when you live somewhere that is mostly safe, say, you can take your kid to a park, but when you live somewhere there’s dangers… it’s common that people in situations of vulnerability rely more on those tools to help raise those children.
Speaker
Laura Rego (Brazilian IGF)
Reason
This comment introduces crucial socioeconomic and geographic context to the debate, challenging assumptions about parental choices and highlighting how safety, poverty, and social support systems affect technology use. It brings a Global South perspective to a discussion dominated by developed country viewpoints.
Impact
This added important nuance to discussions about parental responsibility and alternative activities for children. It grounded the theoretical policy discussion in real-world constraints faced by families in different circumstances and reinforced the need for multi-stakeholder approaches that address underlying social conditions.
Overall assessment
These key comments fundamentally shaped the discussion by challenging binary thinking and introducing systemic perspectives. The conversation evolved from a simple safety-versus-access debate to a complex examination of youth agency, platform accountability, socioeconomic factors, and the need for collaborative solutions. The most impactful comments consistently pushed back against top-down, one-size-fits-all approaches, instead advocating for nuanced, context-sensitive policies that center youth voices and address root causes rather than symptoms. The discussion’s trajectory moved from technical implementation questions toward broader questions about digital rights, corporate responsibility, and equitable policy-making processes.
Follow-up questions
How do we protect young people online without compromising their rights or limiting their freedom to participate fully in the digital world?
Speaker
Li Junhua
Explanation
This fundamental question addresses the core tension between safety and rights in digital policy, requiring further exploration of balanced approaches.
How should the user experience change in terms of advertisement for kids and teens on social media platforms?
Speaker
Giovanna
Explanation
This addresses a key aspect of platform design that affects youth safety, requiring research into age-appropriate advertising practices and their regulation.
How can we ensure legislation is strong enough while maintaining flexibility in enforcement of social media bans?
Speaker
Cosima
Explanation
This explores the balance between regulatory effectiveness and adaptability, which is crucial for developing workable policy frameworks.
What more specifically can be done on digital literacy given that platforms facilitate harms by being the platform where it happens?
Speaker
Cosima
Explanation
This seeks concrete solutions for digital literacy programs that address platform-specific risks and harms.
How can children and teens be included in the design of policy for social media platforms?
Speaker
Umut Pajaro
Explanation
This addresses the need for meaningful youth participation in policymaking processes that directly affect them.
Who holds the most responsibility when it comes to making sure children are at the forefront of discussions about social media?
Speaker
Amy
Explanation
This explores accountability structures for ensuring youth voices are heard in policy discussions.
How can we ensure that social media companies use the most effective verification methods rather than just methods that work but aren’t necessarily the most effective?
Speaker
Amy
Explanation
This addresses the need for standards and oversight to ensure optimal age verification implementation.
How do we shift the conversation from regulating the youth to regulating the digital power structures around them?
Speaker
Louvo Gray
Explanation
This fundamental reframing question challenges current regulatory approaches and requires research into alternative policy frameworks.
What are some ways that parents can get involved in averting kids’ attention from social media to alternatives?
Speaker
Claire
Explanation
This seeks practical solutions for parents to provide alternatives to social media engagement for children.
How do technology companies and governments address the role of social media in political radicalization, and what interventions are appropriate?
Speaker
Heilan
Explanation
This addresses the broader societal impacts of social media platforms and requires research into content moderation and political speech policies.
How can fact-checking mechanisms be used more strategically to counter political radicalization and misinformation?
Speaker
Heilan
Explanation
This explores specific technical and policy solutions for addressing harmful political content on platforms.
If a product is designed in a way that can cause harm, shouldn’t the company that designed that product be held accountable by those who are harmed in a court of law?
Speaker
Rick Lance
Explanation
This addresses fundamental questions of corporate liability and legal frameworks for platform accountability.
How can we address the economic model of attention that underlies social media platforms and is harmful to children?
Speaker
Anonymous audience member
Explanation
This requires research into alternative business models and regulatory approaches to attention-based monetization.
How can digital natives have a seat at the table in shaping corporate governance and policies, not just government policy?
Speaker
Kenneth
Explanation
This explores mechanisms for youth participation in private sector decision-making processes that affect them.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.