Main Topic 5: The Age Verification Dilemma: Balancing child protection and digital access rights

14 May 2025 07:00h - 08:30h

Session at a glance

Summary

This discussion focused on the challenges and considerations surrounding age verification and child protection in the digital environment. Experts from various fields debated the balance between safeguarding children online and preserving digital rights and access.


The conversation highlighted the complexities of implementing age verification systems, with concerns raised about privacy, data protection, and potential exclusion of marginalized groups. Speakers emphasized the need for age-aware rather than identity-aware solutions, stressing that technology alone cannot solve the issue without considering broader societal factors.


Participants discussed the roles of different stakeholders, including parents, educators, tech companies, and policymakers, in creating safe online spaces for children. The importance of involving children themselves in designing these solutions was underscored, recognizing their right to participation and privacy.


The debate touched on the global implications of age verification measures, with calls for internationally negotiated standards to preserve internet interoperability. Speakers also addressed the distinction between age verification and age assurance, noting the need for context-aware, rights-based approaches.


While some advocated for swift implementation of protective measures, others cautioned against rushed solutions that might inadvertently create new problems. The discussion emphasized the need for a nuanced approach that considers diverse perspectives and balances child protection with digital inclusion and privacy rights.


Overall, the session highlighted the complexity of the age verification dilemma and the need for continued multi-stakeholder dialogue to develop effective, ethical, and inclusive solutions for child safety online.


Key points

Major discussion points:


– The need to balance child protection online with privacy, digital rights, and Internet accessibility


– The differences between age verification and age assurance, and the challenges of implementing effective systems


– The role of parents, educators, and platforms in creating safe online environments for children


– Concerns about age verification systems excluding or marginalizing certain groups


– The importance of including children’s perspectives and designing age-appropriate online spaces with their input


The overall purpose of the discussion was to explore the complex issues surrounding online age verification and child protection, considering various stakeholder perspectives and potential approaches.


The tone of the discussion was largely constructive and collaborative, with participants acknowledging the importance of the issue while raising concerns about implementation challenges. There was some tension between those advocating for stronger technical solutions and those cautioning against over-reliance on technology alone. Overall, the tone remained respectful as participants shared diverse viewpoints on this nuanced topic.


Speakers

– Esther Passaris – Legislator from Kenya


– Michael Terhörst – Federal Agency for Child and Youth Protection in the Media, Germany


– Moritz Taylor – Moderator


– Manon Baert – Senior EU Affairs Officer, Five Rights Foundation


– Torsten Krause – Political scientist and child rights researcher, Digital Opportunities Foundation, Germany


– Tatiana Tropina – Senior Advisor, Institutional Relations, Internet Society


– Iain Corby – Executive Director, Age Verification Providers Association; Chair of IEEE Standards Association Interoperable Age Verification Standards Development


– Karen Mulberry – IEEE Standards Association, Moderator


Additional speakers:


– Viktoriia Omelianenko – Project Officer, International Telecommunication Union


– Anastasia Feuerstein – Director of an academic campus specialized in cybersecurity


– David Frautschy – Area of expertise not specified


– Ozgur Kesim – CEO of Code Blau GmbH


– Atul Kerimgun – Vice President of Information and Communication Technologies Authority, Turkey


– Katrin – Works for a media authority in Germany


– Yegor – From the Ukrainian office (full name/title not provided)


– Valentin Koval – Member of Ukrainian Media Regulator


– Pilar – Internet Governance Forum in Spain, former YOUthDIG participant


– Tapani Tarvainen – Electronic Frontier Finland


– Mirja Kühlewind – Internet Architecture Board


– Desara Dushi – Program Committee Member of EuroDIG


Full session report

Age Verification and Child Protection Online: Balancing Safety and Digital Rights


This comprehensive discussion brought together experts from various fields to debate the complex issues surrounding age verification and child protection in the digital environment. The conversation highlighted the challenges of implementing effective safeguards for children online whilst preserving digital rights, privacy, and internet accessibility. The session coincided with the anniversary of the EU Better Internet for Kids strategy, providing context for the urgency of these discussions.


Key Themes and Debates


1. Age Verification vs. Age Assurance


Karen Mulberry introduced the distinction between age verification and age assurance at the beginning of the session. Age verification involves confirming a user’s exact age, while age assurance uses various methods to estimate age or ensure age-appropriate experiences. Michael Terhörst from Germany’s Federal Agency for Child and Youth Protection in the Media expanded on this, arguing that age assurance enables safe participation, not just blocking access. Manon Baert, Senior EU Affairs Officer at Five Rights Foundation, supported this approach, emphasizing that the current digital environment is not designed for children’s safety.


2. Balancing Protection and Digital Rights


A central theme was the need to balance child protection with broader concerns about digital rights and internet accessibility. Esther Passaris, a legislator from Kenya, emphasized that age verification must not become a barrier to participation, particularly for marginalized groups in the Global South. Tatiana Tropina, Senior Advisor at the Internet Society, stressed the importance of preserving the global interoperability of the Internet.


Iain Corby, Executive Director of the Age Verification Providers Association, proposed that age verification should make the Internet “age-aware” rather than “identity-aware.” He discussed the potential of zero-knowledge proofs and privacy-enhancing technologies to achieve this balance. However, some participants argued against separate Internets for children and adults.


David Frautschy highlighted the difference between offline and online age verification, noting the potential chilling effect of online verification on free expression and access to information.


3. Effectiveness and Risks of Age Verification Technology


The discussion revealed significant disagreement about the effectiveness and potential risks of age verification technology. Tatiana Tropina expressed skepticism about mandatory age verification tools, citing privacy and security risks. Concerns were raised about the potential for age verification systems to leak exact ages through repeated checks.


Anastasia Feuerstein, Director of an academic campus specialized in cybersecurity, suggested using AI and industry collaboration for dynamic, adaptive solutions. An audience member proposed the need for transparent, privacy-preserving, and AI-driven age assurance systems.


4. Global Considerations and Inclusivity


Esther Passaris emphasized that solutions must work for the Global South and marginalized groups, cautioning against approaches that might inadvertently exclude those without formal identification. The discussion touched on expanding European approaches beyond Europe, raising questions about the appropriateness of applying European standards globally.


5. Regulatory and Industry Approaches


Manon Baert mentioned the draft guidelines from the EU Commission on Article 28 of the Digital Services Act (DSA), highlighting ongoing regulatory efforts. The Ukrainian Media Regulator representative stressed the need for political decisions rather than just technical solutions.


There was discussion about the need for industry collaboration and binding cross-sector commitments for safety by design. The potential impact of age verification on business models and the commercial market was also noted.


6. Parental Involvement and Digital Literacy


The role of parents in online child protection was debated, with research showing low adoption of technical measures by parents. Iain Corby argued that parental controls alone are insufficient and that society must help. Manon Baert suggested that discussion is better than control for supporting children online.


The importance of digital literacy education in schools was emphasized as a complementary approach to technical solutions.


Unresolved Issues and Future Directions


The discussion highlighted several unresolved issues requiring further attention:


1. Implementing age verification without excluding marginalized groups or those without formal identification.


2. Balancing age verification with children’s right to privacy and anonymity online.


3. Determining the appropriate role of data protection authorities in age verification processes.


4. Addressing age verification for services that don’t require logging in, such as advertising on platforms.


Suggested compromises and future directions included:


1. Developing “double-blind” or zero-knowledge-proof systems to protect privacy while verifying age.


2. Using AI-driven content moderation with dynamic real-time filtering to create age-appropriate environments instead of strict blocking.


3. Implementing multiple options for age verification to ensure accessibility.


4. Engaging children in the design of online safety measures to ensure their perspectives are considered.


5. Exploring self-labeling of websites with metadata to allow parental control without risking privacy and security.


Conclusion


The discussion underscored the complexity of the age verification dilemma and the need for continued multi-stakeholder dialogue. While there was broad agreement on the importance of protecting children online, significant challenges remain in developing effective, ethical, and inclusive solutions that balance safety with digital rights and internet accessibility. The conversation highlighted the need for nuanced, context-aware approaches that consider diverse global perspectives and involve children themselves in the design process.


As the digital landscape continues to evolve, addressing these challenges will require ongoing collaboration between policymakers, technologists, child rights advocates, and other stakeholders to ensure a safe and inclusive online environment for children worldwide.


Session transcript

Moritz Taylor: Good morning everybody. Welcome to this third and final day of EuroDIG. I hope that you had a great time so far. On Monday if you were here for day zero, yesterday if you were here for day one, and today is the last day. So you have one more main session here in the hemicycle today. It is titled The Age Verification Dilemma, Balancing Child Protection and Digital Access Rights. As someone who used to work in the Children’s Rights Division here at the Council of Europe, this is a topic that is very interesting, very controversial. All sorts of ideas come and clash in this debate, so I’m really looking forward to it. Before we begin though, and we hand over to Karen, I’ll hand over to João, our online moderator, to go over the rules. Thank you very much.


Online moderator: Hello, good morning. For those joining on Zoom, we’d appreciate it if you raise your hand when you want to take the floor, and I will signal the moderator and allow you to unmute when your statements are invited. For those who are in the room and want to join the Zoom, I ask that you join with your microphone muted and the speaker of your device turned off. And now I hand the floor to the moderator of the session.


Karen Mulberry: Good morning, everyone. I’m Karen Mulberry. I’m with the IEEE Standards Association, and I’d like to welcome you to our discussion today on age verification and the dilemmas faced in balancing child protection against digital rights. But before we get started, I’d like to recognize that May 11th was the third anniversary of the EU Better Internet for Kids strategy, just in case you missed that, because that’s actually tied to a lot of the things I think we’re going to be discussing today. Now, I’d like to outline the program for you. First of all, when you speak, there’s a button by your microphone that you need to push so your microphone can be activated and we can hear you, so it’s very important to note that. For our program, we’re going to have some key participants present different aspects of age verification and online protection. They’re each coming from a different industry, a different perspective, and a different passion, so that should lend a lot of diversity to our discussion later on today. First, I need to provide some context for you. There is a very interesting dilemma between what is age verification and what is age assurance. They seem to be the same, but they’re actually very different. Age verification is generally defined as the process of confirming that an individual is of a required minimum age, typically to access certain products, services, or content that are legally restricted based on age. So that requires proof. Age assurance is the process of estimating or verifying a user’s age to ensure that they can appropriately access age-restricted content, services, or products. Now, I know that in a lot of the legislation the terms seem to be intertwined, so it’s worth keeping the distinction in mind: when you’re talking about verifying something, it’s based on proof, whereas age assurance can rest on proof but also on estimating that the user is of the appropriate age to access the content. So what I’d like to do to get us started is introduce our key participants, who will provide their expert opinions on what is going on within their industry or their company regarding age verification and child online protection. We have Tatiana Tropina, and I’m going to be informal here, who’s a Senior Advisor, Institutional Relations, from the Internet Society. We have Iain Corby, who’s Executive Director of the Age Verification Providers Association; he’s also Chair of the IEEE Standards Association’s Interoperable Age Verification Standards Development. Michael, who’s Head of KidD at Germany’s Federal Agency for Child and Youth Protection in the Media, responsible for the enforcement of children’s rights in digital services. And Manon, who’s a Senior EU Affairs Officer at the Five Rights Foundation in the United Kingdom. So I’d like to welcome our speakers, and we will start off with Tatiana Tropina, please.


Tatiana Tropina: Hello, everyone, and thank you for having me on this panel. At the Internet Society, we believe that the Internet is for everyone. The Internet improves the lives of people and changes our society for the better. Our goal is for the Internet to be open, globally connected, secure, and trustworthy. And in this light, I want to start with one big question: do mandatory age verification tools make the Internet more secure? We do understand that there is a problem, a problem of child safety online, and we do understand that protection of children is a very important goal. But what is no less important is how we go about it. And this is where we at the Internet Society have some concerns about mandatory age verification tools. We believe that as we proceed with rolling out mandatory age verification, especially in terms of deploying the technical tools, there are quite a few risks that have to be addressed before we proceed. These tools affect not only kids; they will affect everybody online, kids and adults alike. They create certain risks to privacy and security. They can potentially create barriers to accessing the Internet and digital services. And lastly, they can impact the global interoperable nature of the Internet. I will briefly go through these risks. The first set of risks we see are the risks to privacy and security. Mandatory age verification tools often force users to share very sensitive information, for example financial accounts or government-issued ID, just to prove that they are old enough. It is not uncommon to be required to upload this ID to a website. And here is the problem: the more sensitive data we collect, the more attractive and the more vulnerable these systems become to all sorts of misuse, from data breaches to other abuses. And even if we set aside the issue of malicious actors being interested in this data, how can we ensure, and here again I’m highlighting the risk, that these data are not amassed, not sold to third parties to track users, and not misused in other ways? Now, of course, I know that we can say we do perform age verification checks in the physical world. Fine, we do; I can totally agree. But in the physical world, in the offline world, this age verification is temporary. You show your ID and you don’t give it to anybody, right? You don’t upload it anywhere. In the digital world, you upload it and it leaves a digital trail. Maybe there is a way to safely upload your ID, but there is no way to ignore the fact that it can be tracked, that it can be collected, that data can be breached. And in this regard, we are witnessing an increasing trend of lack of trust in the Internet, and this situation will not help to solve that problem; it’s going to get worse. Research already shows that when people fear that they can be tracked, that their data can be breached, they will be less inclined to use legitimate services that require age verification. So to sum up, mandatory age verification with technical tools, at least for now, does not offer a holistic or safe approach to protecting young people online if we do not address this problem. We believe that one part of the solution is to ensure that mandatory age verification tools comply with privacy and security guidelines and can be audited independently, with reports being publicly available.
Now the second set of risks, no less important: barriers to accessibility. You might think that age verification is just a minor bump in the road, like in the physical world where, when we are buying alcohol, we show our ID and walk away. It’s not the same in the digital realm. For some people it’s not a minor bump in the road; it’s going to be a huge roadblock to accessibility. Think about those people who do not have a government-issued ID. Think about those people who cannot provide this ID or financial account information for various reasons; most of the time this would affect elderly people or marginalized and disadvantaged communities who already have problems accessing various digital services. Of course, beyond financial information and ID, one can say: but you can use facial scanning tools. Yet it is well known, even though technology advances, that these tools work much less reliably on people with darker skin tones, on people with disabilities, and on older individuals. And then, I know this is a very uncomfortable thought, because you all probably have a device in your pocket, in your bag, or a laptop with a working webcam or device camera, but it’s not the case for everybody in this world. Some people just do not have this. So older adults might have their access hindered because of certain health conditions; cataracts, stroke, and other conditions basically block them from using these tools or make their use very difficult. And it’s not only older adults who are affected. People in their late teens and young adults might also be affected by these facial estimation tools, because the estimates have an error range of several years, so they too can be disproportionately impacted. These people who face access barriers will be cut off from the very services that are trying to solve the problem of protecting children online. And what really concerns us is that they might end up in darker corners of the Internet. This is the opposite of what we want to achieve by introducing tools for safety online. And I know that there are claims that technology is improving, that technology is solving the biases, that technology can recognize skin tones better. But technology cannot address all these issues. Technological maturity addresses only technical verification on the layer where it is performed; it might address that problem. But beyond this layer, even if we had a foolproof technology, and I would personally argue against the claim that we can have a foolproof technology, but imagine that we do: beyond this layer, what happens to the data? How do we address privacy and security risks? Not all access barriers created by technology are technical, so it does not solve these issues. And it certainly does not solve the issue of trust in the Internet. And lastly, I’m wrapping up: this mandatory age verification technology, if deployed by various countries for various services and various ages, has an impact on the global interoperability of the Internet. The more countries roll out these solutions, for various ages, various platforms, and various requirements, the more it can affect the globally connected Internet and its interoperability. And I know that we’re at the European Dialogue on Internet Governance, but we do have to think globally.
We do need to ensure that in dealing with societal problems, in dealing with this important issue of ensuring the safety of the most vulnerable online, we do not break the technical layer of the Internet, which is what actually provides us with this connectivity. So globally, we believe that if we move forward with age verification technology, the solution has to move towards globally negotiated standards, developed in a multi-stakeholder manner, which include various stakeholders, allow these risks to be factored in, ensure that we put security and privacy at the core of our solutions, and preserve the global interoperability of the Internet. And to wrap up, I want to give an example of how it can be done without these risks. For example, there are websites that are self-labelling with metadata to allow parents to control their kids’ access to harmful or adult content. Such tools empower and protect families, empower and protect users, without risking their privacy and security. I thank you. Thanks for having me. And I hope that we have a very interesting dialogue here. Thank you.
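To make the self-labelling approach concrete: one long-established convention of this kind is the RTA (“Restricted To Adults”) meta label, which adult sites can embed voluntarily and which filtering software running under the parent’s control can check locally. The sketch below is illustrative rather than any product’s actual code; the RTA string is the real published label, but the function and its use in a filter are assumptions.

```python
# Minimal sketch of client-side filtering against voluntary site self-labelling.
# The RTA label is a real, published convention; the filter itself is illustrative.
from urllib.request import urlopen

RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"  # the "Restricted To Adults" meta label

def is_self_labelled_adult(url: str) -> bool:
    """Fetch the start of the page and look for the RTA label. The check runs
    on the family's own device or router: no ID upload, no third party, and
    the site learns nothing about who is browsing."""
    with urlopen(url, timeout=10) as resp:
        head = resp.read(65536).decode("utf-8", errors="replace")
    return RTA_LABEL in head

# A parental-control proxy or DNS filter would simply refuse the request on a
# child's profile whenever is_self_labelled_adult() returns True.
```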


Karen Mulberry: All right. Next we have an online participant, Iain Corby, who will speak on his activities with the Age Verification Providers Association and standards development. Iain, over to you online.


Iain Corby: Thank you very much for having me today. My name is Iain Corby. I’m the Executive Director of the Age Verification Providers Association. We’re a global trade body representing over 30 different providers of age verification and age estimation technology, which are the two categories of age assurance that we talk about. So I hope I can provide some reassurance to the previous speaker. What we’re trying to do in the world of age assurance is to apply the same norms of society in the real world to the online world. We wouldn’t allow our kids to walk into downtown Strasbourg and walk into a nightclub or a casino, or, I don’t know if they have them, a strip club, without checking their age. And we think we need to be able to do the same thing online. And we can do it in broadly the same way. If we can put a man on the moon, then we can certainly prove your age online without putting your personal data or privacy at risk. And that is the essence of the age assurance industry: proving your age without disclosing your identity. So the principal way that we would do that is by using an independent third party to do the age assurance, and then they simply say yes or no to the digital service that needs to know whether you’re over or under or between a certain age. And it’s important to know that that third party, once it has checked your age, does not retain any personal data at all. In fact, some of the methods don’t even require that third party to touch the data in the first place. So let me give you a few examples of how we might do that. We’ve heard earlier about the use of your physical ID to create a digital identity. And that’s very straightforward. You just take a selfie, you show your ID to your camera. We compare the ID photo to the selfie. We make sure you’re a live person, not some AI-generated image, and that the document is not a fake. And if the two match, then we can calculate your exact date of birth and therefore your exact age. Now, that process could be done in the cloud, but there are some companies which now do it entirely on your device, so your personal data, your ID, and your selfie never leave the palm of your hand. We can also use things like open banking, where you just log into your bank and the bank simply confirms to a third-party provider: yes, this person is over 18, or this is their date of birth. There are some innovative methods using estimation. We heard about facial age estimation already. That’s accurate to within about a year. And it’s important to note that the concerns around racial bias really arose around facial recognition technology in its early days. Facial estimation technology is a very different technology and suffers much less from bias. But we do include in our requirements for any of these technologies an analysis of whether a system is biased or not. Now, of course, the main reason why things are biased is because of the way AI works: you train it with some faces where you know the age, and you then apply patterns from those faces to faces where you don’t know the age and look for the most likely match. So as long as you have a very diverse range of training data, you will have fewer problems in terms of any bias in the outcome. Now, those systems are generally working to within a year or two of your actual age. So that’s not great if you are just turning 18 and you want to buy alcohol for the first time. But if you’re as long in the tooth as I am, a year or two of error makes no practical difference.
We did some work recently with ETSI, the European standards body, to look at the gaps between what we need and what standards exist. And I think there are some further standards that will be required at the periphery, but the core standards are there. I also want to briefly touch on interoperability, because this has become more and more important. And again, this is really useful for accessibility and inclusivity, because it means you just need to find one way to prove your age to one website, and then you can seamlessly reuse that same check on any other digital service that you need to access. There are a couple of major initiatives around this, one in the private sector and one in the public sector. The European Commission has just finished developing what they call an interim app. It’s a sort of slimmed-down version of what will in future be the European Digital Identity Wallet. And that’s being offered to member states, and they can implement it in their member state with a degree of local design. That allows you to use your national ID and turn it into an age token. And the reason they didn’t wait for the European wallet is because there are some issues around traceability and trackability if you use the technology designed into the wallet; the interim app has mitigated those risks. So you are not traceable or trackable when you use this new interim solution to prove your age online. That’s going to focus mostly on the 18-plus use case to start with. And then there’s a second area where I’ve been involved, called euCONSENT, which was originally a European Union funded project to do interoperability, which allowed us to tie together different age verification providers. We’ve now moved on to a decentralized model for that, where you can download a small mini app into your browser, not one you get from the app store, it just automatically drops into your browser. And that allows you to have a token on your device which other age assurance providers can see, and then they can rely on that to redo the age check seamlessly without you needing to be involved. We might from time to time ask you to put in a PIN, a password, or a biometric to prove that you’re the same person who did the original check. There is some talk about moving where we do the age checks around what we call the technical stack. So maybe we don’t make the digital service responsible; we put the responsibility, for example, on the app stores. The app stores are resistant to that. And my main question on that is: who would be liable if it goes wrong? Are we saying that Google and Apple suddenly become liable for every failure of age assurance around the world? So as we look forward, I hope we can divide the debate between the technology, which I believe can deliver and mitigate all the risks that we’ve heard about earlier, and the policy question of whether it’s a good thing to provide a degree of information about age on the Internet.
My personal belief is it is well overdue that we make the Internet age aware, not identity aware, but age aware. And that’s what we aim to do as a trade association and through some of these projects I’ve described. So thank you very much for your time this morning.
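A minimal sketch of the “yes/no” pattern Iain describes, assuming a simple signed-attestation design: the age-check provider attests only a boolean result plus a nonce and a short expiry, so the relying service never sees identity data and the provider retains nothing. All names, fields, and lifetimes here are illustrative assumptions, not any provider’s or standard’s actual API.

```python
# Sketch of an independent third party answering only "over 18: yes/no".
# Illustrative design, not a real provider's API (uses the 'cryptography' package).
import json, time, secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

provider_key = Ed25519PrivateKey.generate()  # held by the age-check provider
provider_pub = provider_key.public_key()     # published so services can verify

def issue_attestation(over_18: bool) -> dict:
    """Called after the provider has checked age by some method (ID scan,
    facial estimation, open banking). Only the boolean result is attested;
    the underlying documents are discarded."""
    claims = {"over_18": over_18,
              "nonce": secrets.token_hex(8),        # fresh per check, limits linkage
              "expires": int(time.time()) + 300}    # short-lived token
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "sig": provider_key.sign(payload).hex()}

def service_accepts(att: dict) -> bool:
    """The digital service verifies the signature and learns nothing beyond
    'over 18: yes/no', a random nonce, and an expiry time."""
    payload = json.dumps(att["claims"], sort_keys=True).encode()
    try:
        provider_pub.verify(bytes.fromhex(att["sig"]), payload)
    except InvalidSignature:
        return False
    return att["claims"]["over_18"] and att["claims"]["expires"] > time.time()

assert service_accepts(issue_attestation(True))
```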


Karen Mulberry: Thank you, Iain. Now for our next speaker, Michael, who will give us a perspective from what’s going on in Germany. Thank you.


Michael Terhörst: Thank you so much, and thank you for having me. Good morning, everyone. I hope you had a wonderful evening yesterday, and today the headache is not too bad. I’m happy to present here the work of the Federal Agency for Child and Youth Protection in the Media, or actually the Federal Office for Child and Youth Protection, for the enforcement of children’s rights in digital services, because that’s what we do. We are a competent authority regarding Article 28 DSA, and we are responsible for the implementation of precautionary measures, for example child-friendly presets, parental guidance tools, reporting systems, and so on. We work on a child-rights-based approach in line with General Comment No. 25, and it’s not only about protection, but also about safe participation. So what we do is analyze platforms which are relevant for young people. We then analyze the individual risks arising out of the different functionalities on the platforms. We check whether all risks are already covered by the precautionary measures that might already be implemented, and this is actually never the case, so there’s still a lot to do for us. We then get in touch with the provider and start a so-called dialogic regulation, where we discuss possible solutions. If they’re willing to comply, it’s all good, and we’re fine. If they don’t, then we start enforcement, with fines and so on. So to guarantee safe spaces online, the platforms need to implement precautionary measures. Actually, yesterday the EU Commission presented the draft of the guidelines for Article 28, and we are part of the working group where the guidelines were discussed, and we’re actually pretty happy that the approach of individual solutions, and not one-size-fits-all, can be found in the draft guidelines: individual solutions for each service, and the necessity of different precautionary measures to create safe spaces for young people. In this context, age assurance plays an important role, or actually two roles. It can be used to exclude minors from inappropriate content, if the system is effective, and an effective system is not just asking “Are you over 18? Yes or no.” But just as it is important that young persons are protected from inappropriate content, it’s also necessary to see the potential of age assurance when it comes to the safe usage of platforms. By implementing functional mechanisms, the before-mentioned precautionary measures can really work. Let’s take child-friendly presets, for example. You can have the best default settings you could ever imagine, like limits on communication between adults and young persons to prevent cyber grooming, limits regarding time of usage, profiles automatically set to private, and so on. But when you can open an account by just typing in a date of birth, which might be fake, this just doesn’t work. So the child-friendly presets can only work if the platform knows the age. It doesn’t have to know the exact age; an age bracket may be enough, like the person being between 13 and 15, or 16 and 17, and so on. So age assurance can be an important factor in protecting children from adult content, but age assurance also has to be an important tool when it comes to enabling safe usage for children and young persons. That respects the necessity of protection on the one hand, and the right of safe participation on the other.
To complete our holistic approach, media literacy, for minors and for parents as well, has to be fostered to make the picture complete. You see, there’s a lot to do, and we have to think anew and work together to create a safe environment for young persons online. Thank you so much.
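A small sketch of how bracket-based presets like those Michael describes could be keyed to an age-assurance result. The bracket names, boundaries, and settings below are hypothetical; the point is only that the platform needs a coarse bracket, never an exact date of birth.

```python
# Illustrative bracket-keyed defaults; brackets and settings are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Presets:
    profile_private: bool               # profile hidden from strangers by default
    contact_by_adults: bool             # can unknown adults initiate contact?
    daily_limit_minutes: Optional[int]  # None means no time limit

def defaults_for(bracket: str) -> Presets:
    """Map the coarse output of an age-assurance step to protective defaults."""
    return {
        "under_13": Presets(True, False, 60),
        "13_15":    Presets(True, False, 120),
        "16_17":    Presets(True, False, None),
        "adult":    Presets(False, True, None),
    }[bracket]

print(defaults_for("13_15"))
# Presets(profile_private=True, contact_by_adults=False, daily_limit_minutes=120)
```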


Karen Mulberry: Thank you very much. And now I’d like to introduce our last speaker, who will provide a very unique perspective from the Five Rights Foundation and all the work they see globally. Manon, please join us.


Manon Baert: Thank you so much for the kind introduction. Good morning, everyone. Nice to see so many people in the room, and some familiar faces as well. So, as was introduced, my name is Manon, and I work at the Five Rights Foundation. We are an NGO that really works for children, and we specialize in the digital space, so our mission is really to make the digital environment safe for children by design and by default. And so, coming from that perspective, I want to go back a bit to what Michael was saying previously: what are we trying to achieve, really, with age assurance and with age verification? Is it respecting children’s rights as well? We’re talking about the rights of everyone, but we should also talk about the rights of children in this space. And finally, I want to talk about what children actually think about those different measures that are being put on them. And I think here, you know, a good start is to really see that the digital environment wasn’t created for children; it wasn’t designed for children. Children have been, you know, really using the digital environment as much as they can. They love it. They are out there. But the digital environment wasn’t made safe for them. And when we talk to our youth ambassadors, you know, what they tell us is that they feel addicted. They feel that, you know, the Internet has a big impact on their body image, on their sleep, on their education. We know that recommender system algorithms push problematic content onto kids. We know that, you know, children can actually be reached by groomers online. There are a lot of issues that we see in the online environment. And what can age assurance and age verification actually bring to that? I think, you know, Michael really set it out. There are two roles that we see for age assurance and age verification. There is one which is about, you know, blocking access to certain types of content or services. And the other one is about bringing about actually more age-appropriate experiences for children. And I think when we look a bit at the policy debates right now, we’ve seen a lot of focus on, you know, banning and blocking children. We hear a lot about social media bans. We hear a lot about age verification for porn platforms. But that’s only really one of the uses that we see with age verification and age assurance, and that should not be our entire focus. If we bring a children’s rights perspective, you know, to this entire debate, it’s important to recognize that the digital environment, yes, brings a lot of risks to children’s rights, in terms of, you know, sexual exploitation, for example, or commercial exploitation. But it also brings a lot of positive perspectives in terms of the right to participation, the right to learn, to connect, to, you know, message their friends, to be active as citizens as well in our societies. And so that’s why we see also a lot of potential for the second role of age assurance, in terms of providing actually age-appropriate spaces for kids. And here I think it’s very important that we see age assurance really only as an enabler. It’s only really a first step. Knowing whether or not someone is a kid is not going to suddenly make the environment safer. You need to afterwards actually adapt the design of your platform, right? And that means changing your recommender system. That means having, you know, default settings.
And if you don’t do all of that, then age assurance is not going to solve anything. Age assurance per se is not a silver bullet; it’s not a one-stop solution; it’s not going to make, you know, the environment safer for kids. And I think it’s important that we really, you know, reframe that debate. We’ve heard a lot about, you know, the risks. And those risks are real: there are real risks with age assurance in terms of privacy, in terms of, you know, additional commercial exploitation, in terms of exclusion. And I think those risks are, you know, not only present for the general public, but also for children. Children also have the right to privacy. We hear a lot about, you know, age verification as a way to age-gate and to actually block children’s access. This is also exclusionary for children. But I think, you know, if we put the right system in place, if we have a strong regulatory framework around how we use age verification and the type of technology that we have in place, then we can solve a lot of those problems. And we heard a lot earlier, you know, from Iain about the fact that you need standards in place, you need to have audits, you need to have certification mechanisms to make sure that at the end of the day, age assurance is privacy-preserving, it is done in a proportionate manner, it is accessible. We’ve just heard about the Commission’s draft guidelines on Article 28. You know, they also tell us that, for example, if you want to have an age estimation system in place, you should actually provide two different types of mechanisms, to make sure that if people cannot use one, they are able to use the other. So there are a lot of solutions that are actually, you know, possible and that can be put in place out there. And finally, what, you know, do children actually think about those age assurance mechanisms? At the end of the day, I think this should be, you know, our first question when we’re talking about children’s rights online. One of the key rights that we need to keep on repeating is, of course, children’s right to participation, children’s right to be heard. And, you know, as with a lot of marginalized communities, they’re usually excluded from those debates. And I think it’s also quite clear when we see different policy debates happening around the world, especially in terms of social media bans, that we rarely hear about what children actually think. And when you talk to children and when you ask them, you know, how they use already existing age assurance mechanisms, a lot of them will actually tell you that they circumvent existing self-declaration mechanisms. Sometimes their parents actually also help them, you know, circumvent these mechanisms, which kind of proves that we cannot rely only on this kind of mechanism for ensuring safety online. But when you talk to kids, when you actually explain to them, you know, what we are trying to do with age assurance, they also tell you that at the end of the day, they agree, you know, with certain types of platforms actually asking their age and actually trying to verify it. And I think here as well, what’s very interesting is when you ask kids, they might tell you that for themselves, they don’t need it, because kids like to, you know, believe that they are able to deal with whatever risk. But they really recognize the use for their peers. So they want to make sure that there is actually a better environment for younger generations as well.
And when you talk to kids as well, they are very cognizant of the fact that, of course, you know, they don’t want their privacy to be infringed. They really see, you know, the need for proportionality there as well, so it needs to be context-dependent. Some of the children, for example, didn’t necessarily see why you should have an age assurance mechanism on video games. So I think, you know, in some ways, when you talk to kids, you actually get to see all of the problems, but also a lot of the solutions. And I think here, you know, that’s really my last point: if we want to really design an environment for children, we really need to design it with them. They are the ones who are actually, you know, living it; they are the ones who are aware of the risks; they are the ones who also put in place solutions to deal with these risks; and we really need to make them the center of the conversation. So yes, thank you very much for your attention. Thank you.


Karen Mulberry: Thank you. Now you’ve heard from our experts talking about some of the key areas that they are focused on. At this point, we have people that have registered to make statements. We’d like to turn and allow them to make their statements, and then we’ll move into an interactive discussion based on all of the perspectives that have been presented so far this morning, the strategies, the issues, the implementation approaches, and the regulation. So please, let’s go to the online statements. So actually, the first statement is by Torsten Krause, representing civil society, please.


Torsten Krause: Good morning, hello. My name is Torsten Krause. I’m a political scientist and child rights researcher working at the Digital Opportunities Foundation, based in Germany. Thanks for all your input in this discussion. We’ve heard that children have several rights, the right to participation, for example, but also the right to protection, and so I guess it’s our duty to create a safe digital environment that serves the best interests of the child. And I guess that, besides media literacy, parental control mechanisms, and peer guidance, age assurance could really be key to delivering such safe digital environments. But we also have to address the concerns you’ve raised, Tatiana, because children also have the right to privacy and the protection of their anonymity. And so I think what Iain said is really important, that technology can deliver and overcome these challenges. There are solutions in place, and I would like us to use this multi-stakeholder approach to find solutions together on how to create safe environments for all people, for children, for adults, for all, and not use age verification or age assurance measures to ban someone from something. We have to include all people in the digital environment, children too, and we can use age assurance mechanisms to create a safe digital environment. Thanks.


Online moderator: Thank you, thank you, Torsten. Next up, we have Viktoriia Omelianenko, representing an international governmental organization, at 105. You may have to press the button next to the microphone, I think. Next to the microphone. On the right. Thank you so much.


Viktoriia Omelianenko: Good morning, everyone. My name is Viktoriia Omelianenko. I’m a project officer at the International Telecommunication Union. From our side as well, it’s a great pleasure to contribute to this topic. Many speakers have already highlighted the importance of addressing risks to children and to child protection, and from my side, having worked for many years in this field with children and with different stakeholders, I would like to emphasize the need for a multi-stakeholder approach in terms of the stakeholders who are dealing with children on a daily basis. We start from policymaking, speaking about regulation and the need for regulation at the international, European, and national levels, with many actions already in place. We also see a rising need for more engagement with the private sector on a regular basis, equipping them with the knowledge and skills to implement the regulation. We also work on a regular basis, and need to continue to work actively, in different contexts, and of course it’s best when you are with them in the room, talking, exchanging views, seeing the real problems on the ground in different countries. From our experience as well, we see that parents and educators are in need of skills, first of all, to be able to protect children and to implement those actions that have already been developed by policymakers, and that are sometimes even deployed by ISPs or the private sector in general, including big tech platforms. These are the people who are with the children all the time, stakeholders who are crucial for providing the ecosystem of a safe environment for kids, starting from parents and educators. And of primary importance, again, is working with and teaching children themselves, with clear tools for how to protect themselves and prevent risks from an early age. Having this ecosystem approach of working with different stakeholders, through both regulatory and capacity-building activities, is important, and we have the ITU Child Online Protection Guidelines for all stakeholders, making sure that everyone is aware and everyone is contributing. Thank you.


Online moderator: Next up is Ozgur Kesim from the technical community, apparently online. You have the floor.


Ozgur Kesim: Hello, everybody. I’m Ozgur Kesim, CEO of Code Blau GmbH, a security company from Berlin, Germany. We are also a member of a consortium called NGI Taler that brings privacy-preserving digital payments to the Eurozone. And in this context, I want to report to you and draw your attention to a solution for age verification that we have implemented in GNU Taler, the payment system, which protects the privacy of the buyers, in this case also of the children, and also aligns with the principle of subsidiarity, where the ability to pay and to prove an age is bound to a coin by the parent, for example. The digital payment system GNU Taler operates in a way where customers receive digital tokens with which they can pay. These tokens have a denomination, and our extension for age verification allows binding the ability to prove sufficient age to a particular coin. So no ID verification or the like is necessary in the context of digital payments. In a very practical sense, as a parent, I could give my child coins to buy online games, for instance, and bind to this particular money the ability to prove an age of up to 14 years, so that the child can then go online themselves and pay for the online game without any involvement of the parents afterwards. The benefit of this approach is that it binds the age verification to the ability to pay, with no involvement of third parties beyond that. Thank you.
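A heavily simplified sketch of the idea Ozgur describes: a coin carries a commitment to a set of age-group keys, and the child can prove an age group only if the parent handed over that group’s private key. This is not GNU Taler’s actual protocol (which uses blind signatures and a cut-and-choose exchange to make coins unlinkable); the age groups, commitment scheme, and function names here are illustrative assumptions.

```python
# Simplified sketch of age-restricted tokens: one keypair per age group is
# committed to the coin; the child only holds private keys for groups up to
# the parent-chosen maximum. NOT GNU Taler's real protocol; illustrative only.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives import serialization
from cryptography.exceptions import InvalidSignature

AGE_GROUPS = [0, 8, 10, 12, 14, 16, 18]  # lower bounds of age groups

def mint_coin(max_age: int):
    """Parent-side: derive a keypair per age group; commit to all public keys
    (the commitment would be bound into the coin); hand the child only the
    private keys for groups at or below max_age."""
    priv = {a: Ed25519PrivateKey.generate() for a in AGE_GROUPS}
    pubs = {a: k.public_key().public_bytes(serialization.Encoding.Raw,
                                           serialization.PublicFormat.Raw)
            for a, k in priv.items()}
    commitment = hashlib.sha256(b"".join(pubs[a] for a in AGE_GROUPS)).hexdigest()
    child_keys = {a: k for a, k in priv.items() if a <= max_age}
    return commitment, pubs, child_keys

def prove_age(child_keys, required: int, challenge: bytes):
    """Child-side: sign the merchant's challenge with the required group's key,
    which is possible only if the parent granted that group."""
    key = child_keys.get(required)
    return key.sign(challenge) if key else None

def merchant_verifies(pubs, commitment, required: int, challenge, sig) -> bool:
    """Merchant-side: check the commitment matches the coin, then the signature."""
    if hashlib.sha256(b"".join(pubs[a] for a in AGE_GROUPS)).hexdigest() != commitment:
        return False
    try:
        Ed25519PublicKey.from_public_bytes(pubs[required]).verify(sig, challenge)
        return True
    except InvalidSignature:
        return False

commitment, pubs, child_keys = mint_coin(max_age=14)
sig = prove_age(child_keys, 14, b"merchant-nonce")
assert merchant_verifies(pubs, commitment, 14, b"merchant-nonce", sig)
assert prove_age(child_keys, 16, b"merchant-nonce") is None  # cannot prove 16+
```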


Online moderator: Esther Passaris from the government, please. 68.


Esther Passaris: Distinguished delegates, as a legislator from Kenya and a champion for women and youth inclusion, I support child protection online, but not at the expense of equity and access. Age verification systems, as currently designed, often exclude children in the Global South, especially girls, refugees, and those without formal identity documents. These risk deepening the digital divide and silencing the very voices we need to empower. We must develop context-aware, rights-based frameworks that protect children while respecting their rights to learn, connect, and grow. Protection should never become a barrier to participation. Let us prioritize literacy, ethical platform design, and inclusive partnerships with parents, educators, and tech providers. Empowerment, not restriction, should be our guiding principle. Why? Because child safety and digital access are not opposing goals. They are two sides of the same coin in a journey towards justice. Thank you.


Online moderator: Thank you very much. And lastly, Anastasia Feuerstein from Academia. Over there, 108, please.


Anastasia Feuerstein: Thank you. I’m Anastasia Feuerstein, and I’m the director of an academic campus specialized in cybersecurity. Ladies and gentlemen, children as young as 8 wander freely through online realms not built for them, exposed to violence, exploitation, and manipulation. The Convention on the Rights of the Child calls us not only to shield but to guide, recognizing that access to education, information, media, and free expression is not a luxury but a right, inseparable from our obligation to nurture their digital literacy. So what should we propose? First, AI-driven content moderation with dynamic real-time filtering that does not merely block but adapts, creating age-appropriate environments. Second, industry collaboration: binding cross-sector commitments where platforms, regulators, and civil society co-create safety by design, not as marketing varnish but as structural DNA, transparently audited and meaningfully enforced. Third, digital literacy embedded in schools, teaching children resilience, critical thinking, and the subtle art of navigating complexity, because a safe Internet is not only a matter of walls but of wings. And finally, we must secure inclusive protection, ensuring that measures never exclude children from marginalized backgrounds, and that digital rights are not the privilege of the well-documented few. So, ladies and gentlemen, let us not build fortresses. Let us build bridges. Let us rise to this challenge not with fear but with courage and creativity, forging a Europe and a world where protection and possibility walk hand in hand. Thank you.


Moritz Taylor: Thank you very much.


Online moderator: Thank you. We were only listing confirmed registered participants, but if you are willing to participate with a statement, 196, go ahead.


Karen Mulberry: Yes, I can say we’re now moving into actually where everyone can participate, make their statements, and ask their questions. So if you have a statement, please.


Panelist-Turkey: Good morning, everyone. My name is Atul Kerimgun. I’m the Vice President of the Information and Communication Technologies Authority. As a public officer representing Turkey, I want to emphasize that protecting children online isn’t just a regulatory duty. It’s a moral imperative. But how we do this matters just as much as why. Age verification mustn’t become a gateway to surveillance or exclusion. It must be designed to empower, not to restrict, and to do so proportionately. In Turkey, we are actively engaging with this balance. Our national strategy promotes safer digital environments for children through education, parental tools, and regulatory oversight. But we are equally aware that under the Digital Services Act, and similarly in our own evolving regulatory framework, digital platforms must do more than acknowledge this responsibility. They must act on it. Age verification isn’t just a compliance box. It’s a gateway to a safer digital environment for our children. We are now at a stage where AI tools have become powerful enough to support this task meaningfully. Advanced algorithms can detect patterns, verify identities with minimal friction, and adapt in real time to protect against misuse. But the question is: are platforms willing to use these tools in the interest of the public good? We support innovation, but we cannot ignore the risks that unverified access to digital platforms poses to minors, be it exposure to harmful content, exploitation, or algorithmic manipulation. At the same time, we are deeply aware of the need to protect privacy and ensure equitable access to digital spaces. That’s why we are calling for transparent, privacy-preserving, and AI-driven age assurance systems that do not collect unnecessary personal data, but still provide a strong protective layer for minors. We believe a unified approach across jurisdictions, based on principles of accountability, child rights, and technical feasibility, is possible. But the responsibility lies with online platforms. They must invest in and deploy, for age assurance, the AI capabilities they already use for content moderation and advertising. This is not just a regulatory demand. It’s a social expectation. We must ensure that the digital world remains open, but never at the cost of the safety of our children. Thank you.


Karen Mulberry: Thank you very much. Well, now we’re in the open dialogue, so I don’t know if anyone has any questions of any of our experts. spoke today, or any questions you’d like actually to put out for general discussion of everyone that’s here today. I have one in the back, please.


Audience: Okay. I think it’s me, right? Okay. First of all, thanks, Karen, for pointing out that it’s the BIK anniversary. I think it’s really nice, not only because Gerard, our online moderator, and I have been part of the BIK network for many years, since our school days, so really nice for shouting that out. I’m Katrin, I’m working for one of the media authorities in Germany, and we are actually responsible for the evaluation of age verification systems in Germany. And from over 20 years of doing that, of course, I haven’t been doing this for 20 years, but some of my colleagues have, it’s really amazing to see what is possible today, and of course, to have it shown by all the people who are providing systems and to see what we can do. It’s actually pretty easy to do it right, but of course, all the problems that have been mentioned, we also see them, so I don’t want to repeat them. But I want to point out one important point that has also been made by Michael and Manon, which is blocking out versus good participation of young people online. When we look at content that is available online, we also look at the Digital Services Act and all the debates about illegal content online. We in Germany have laws that say content is illegal when it is impairing the development of young people, so our platforms then have to remove this content, but of course, for people who are over 18, the content would be totally fine. So by creating spaces for young people and spaces for people who are over 18, we could share things which we cannot share at the moment, because platforms have to take them down. And, sorry, I’m almost done, it was also mentioned that the role of parents is really important to take into account, but our research shows that 72% of parents in Germany are not using any technical measures with their children. So my question would be: if we want to have parents taking part in that, how can we do that, and how can we ensure that more parents are using technical measures, like screen time controls and so on, with their children?


Karen Mulberry: Thank you for those comments. I can say personally, I have a five-year-old that I thought I had set up all the appropriate parameters, so when she got on an iPad, she only had access to the things I thought were appropriate for her. I don’t know how she did it, but she got somewhere where she shouldn’t have been, and so for me, there’s a lot more that can be done, including my responsibility to actually sit with her next time she decides she’s going to play a game to see where she goes off to. Anyway, do we have any other points? Yes, please.


Audience: Hello, my name is Yegor. I’m from the Ukrainian office. I want to say that the question of age verification online is rapidly gaining relevance in Ukraine, particularly as we strive to better protect children from harmful digital content without undermining their rights to access information, privacy, and digital inclusion. In the context of the Russian full-scale invasion, the online environment has become a critical space for education and communication, and also a source of serious risks, including exposure to violence and manipulation. This creates a pressing need for a balanced, rights-based framework for age assurance. I have a question for the esteemed experts. Considering the successful experience of Spain in implementing an online age verification mechanism for children, and the prominent role played by the national data protection authority in this reform, I would like to ask your views about the appropriate institutional leadership for such an initiative. In your opinion, if this reform were to be implemented in the future, what would be the role of the national data protection authority, given its expertise in privacy and rights-based approaches, or would a digital policy-making body be more effective? Thank you.


Karen Mulberry: Thank you. We have someone who will respond. David, please.


David Frautschy: Thank you. I think this is a very interesting debate, and I think we have to go a little bit back. Tania showed an example that it’s not the same online as offline. I go to a place to buy a porn magazine, I show my ID, the guy says, yeah, you’re over 18, I walk out of the store with my magazine, and that ID has never been recorded. Yet if I go to a porn site and I need to log in like this, somehow that data can be stored, and this is a chilling effect that might deter me in the future from going online for these kinds of services. So, yes, we put a man on the moon a long time ago, but the question is not simply whether something is technically possible or not; we still cannot put someone on Mars and bring them back. So I wonder why we insist on trying to solve societal issues through technical solutions, when most of the time this can be done through education at home, and by ensuring that spaces for kids have reporting mechanisms, so kids can identify others who are misbehaving there and say: this person, this individual, this profile is doing wrong in this space. So we should go back and try to identify solutions that empower kids, because I see a rush in many governments, including the European Commission, to put solutions in place, when most, if not all, of the speakers today said we still need to be doing more things. So, before rushing to a technical solution, let’s make sure that we are doing the right thing.


Karen Mulberry: Thank you. Yes, please.


Audience: Thank you so much. Is this working? Yes. A lot of very interesting points were made that I would like to bounce back on: the responsibility of parents, and the responsibility of children as well. Going back to Katrin and the question of the role of parents, I think it’s super important to recognize, first of all, that a lot of parents don’t necessarily have the digital literacy skills to deal with all of those parental control tools. We know that kids are usually on more than 40 platforms. That also means that parental control tools differ from one platform to the next, so as a parent you would have to go on all of those platforms and set up the right parental controls on each of them. Parents deal with a lot: work, life, family life. I am not a parent myself, but I can only imagine everything you have to deal with, so that’s a lot of pressure to put on parents. And sometimes the way those parental control tools are set up means that, as a parent, you have to register yourself on the platform, so the platform actually gains a new user, a parent who joins just to be able to set up those controls. I think also that a lot of parental control tools can be very privacy-intrusive for children. There, again, we need a more nuanced approach. All kids are not the same; you evolve as a child. When we are talking about children, we are talking about everybody who is under 18, and a kid who is 12 and a kid who is 16 do not need the same level of privacy. We also know that children, to be able to grow and develop their identity, need a certain level of privacy, and enabling a parent to surveil everything a child does online does not go in that direction. Does that mean that on top of surveillance from big tech companies, we are also enabling surveillance from parents? We also know from research that the best way for parents to support their children is not through control but through discussion, through participation with them; trying to restrict your kids via control tools is usually not the best way forward. And I think this is also very clear from the draft guidelines we just got from the Commission on Article 28 of the DSA, which requires online platforms to ensure a high level of privacy, safety, and security for children. They looked at what they call tools for guardians, and here it’s important to mention that not all children have parents, not all children have parents who are online, and not all children have parents who are good for them, so let’s speak of tools for guardians. There, again, the guidelines recognize that those tools are only complementary. It’s not because you have good parental control tools that your platform is suddenly safe. No: if your algorithm is still pushing harmful content onto children, whether or not a parent can see that the content is being pushed does not change the root of the problem. So that’s one thing about the role of parents.
Secondly, about the responsibility of children: we are a children’s rights organization, we work a lot with kids, and they want to be empowered, they want to participate in this world, and they want to be able to make the right choices for themselves. But we need to recognize that right now the online environment is being built by a very few powerful companies who are making money out of it. And as in any other regulated environment, they have a responsibility to ensure that their users are safe. In the same way that, when you go to a supermarket and buy a toy, you expect that toy to be safe for your kids because it’s written that it is made for kids between, I don’t know, 8 and 10, when you go online and you know that you’re profiting from children, you also have the responsibility to make that environment safe for kids. And in general, the online environment as we see it right now is very manipulative in a lot of its design: we see a lot of retention design, notification mechanisms, likes, infinite scrolling. Everything is really made to keep you online. All of those measures are very problematic for anyone, including adults, but especially for children, who, as they are developing, have less ability to resist them. So I think it’s very cynical to ask children who are still developing to be the ones in control of an environment that was made to keep them online. So we need to make sure that, when we’re talking about this topic, it’s not about putting the responsibility on children, but about putting it on the people who are actually profiting from children right now.


Karen Mulberry: Thank you. And we’ve got a comment online, and then, Tatiana Tropina, we’ll go to you.


Online moderator: Yes, we have Iain from the panel, who just wanted to come in for a round of responses; he’s on the screen.


Karen Mulberry: Iain, please.


Iain Corby: Thank you. Just briefly, to respond to a couple of points. On the question, I think from the representative from Ukraine, about whether the data protection authorities should step in and do this: what we have noticed is that some data protection authorities in Europe don’t have much faith in GDPR protecting people’s privacy, and so they’ve advocated for more technical means to protect it. This is often called “double blind”, where you use a zero-knowledge proof, a form of privacy-enhancing technology, to make sure that when you communicate the fact that somebody is over a certain age, that information cannot technically be traced back to the individual. So it’s not just a legal protection under GDPR, but also a physical, technical protection. We’ve taken that on board as an industry and built it into the age-aware interoperable ecosystem. The problem when the DPAs try to invent these things is that they never consider the commercial market that needs to underpin the operation. That’s fine if governments are willing to spend the money on building these systems, operating customer services, and supporting the relying parties that integrate them. But that’s a very expensive job and a big commitment from government. So if they want the private market to continue doing this work, then we do have to find a commercially sustainable business model, which can be achieved even with zero-knowledge proofs; that’s what we did with the age-aware ecosystem. And then on the question of whether we should just leave this to the parents: sadly, while there are multiple opportunities for parents to add controls, as our convener has herself pointed out, you can be the most committed parent you like and still not be able to cover all the bases. Every game, every platform has its own unique parental controls; that’s an enormous investment in time for parents. And we know that only about 1% of parents use some of the most popular platforms’ parental controls. So that’s not an adequate way of dealing with this, particularly for children whose parents are less affluent, less well educated, and less capable of intervening to implement those measures. So I do think society needs to help those parents with these sorts of broad age verification controls. Thank you.
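[Editorial note: to make the “double blind” idea concrete, here is a deliberately simplified sketch of the role separation Iain describes; it is not a real zero-knowledge proof and not the actual protocol of any deployed ecosystem. An age-verification provider checks the user’s age once and issues a signed claim carrying no identity; the relying party verifies only the provider’s signature, so it learns nothing but “the bearer is over 18”, and the provider never learns where the token is presented.]

```python
# A minimal sketch of the "double blind" separation of roles, assuming the
# third-party `cryptography` package. Real deployments would add token
# expiry, one-time use, and unlinkability measures such as blind signatures
# or true zero-knowledge proofs, so even the issuer cannot link tokens to users.
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Age-verification provider: verifies the user's age out of band (not shown),
# then signs a bare claim containing a random nonce but no identity.
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()

def issue_over_18_token() -> tuple[bytes, bytes]:
    claim = b"over-18:" + os.urandom(16)  # no name, no birth date, just a nonce
    return claim, provider_key.sign(claim)

# User: obtains the token and presents it to a relying party of their choosing.
claim, signature = issue_over_18_token()

# Relying party: checks the provider's signature and learns only the claim.
try:
    provider_public_key.verify(signature, claim)
    print("access granted: bearer is over 18")
except InvalidSignature:
    print("access denied")
```

Even in this toy version, the commercial question Iain raises is visible: someone has to run the issuer, keep its keys safe, and support the relying parties, which is the cost a sustainable business model has to cover.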


Karen Mulberry: And Tatiana Tropina, you had a comment as well.


Tatiana Tropina: Thank you very much. I would like to follow up on Iain’s comment about putting a man on the moon. I absolutely loved it, because it also reminded me that a few decades after we put a man, or a few men, on the moon, a Mars mission failed because different teams in different countries were using imperial and metric units. And this is exactly what we are trying to avoid here. I was making an argument about the role of different technical solutions and their impact on the global interoperability of the Internet. Another argument relates to those men on the moon. In this room I see somewhat contradictory things: we believe that technology can solve the problem, but at the same time we recognize that there is a broader societal issue, with parental controls, with trust, with a multitude of aspects. And as to the technology itself, let me give an example from the physical world. Anybody who owns a newish car knows that cars can now read road signs, and their failure rate in reading those signs is quite high; it’s very annoying for drivers. So if this technology cannot reliably read a sign on the road, I’m not entirely convinced that technology can solve the facial recognition issue. And it should not be a trade-off: “let’s do it, it’s good enough.” Those who are excluded by such systems are exactly those who are excluded from many, many other things, and marginalizing them further will not do our society any good. Here I am very much aligned with the intervention of the esteemed Kenyan legislator, Esther Passaris, I hope I captured the name right: it is about those who are left out, not only those whom we let in. And this is incredibly important. And the last point about the man on the moon: we put a man on the moon, and perhaps we will be sending Mars missions soon. It is very costly. As Iain said, when we leave some things to private industry, we have to find sustainable solutions; this is why we’re not sending men to the moon anymore. It isn’t free. It costs. These are political decisions. At the same time, with all these missions, we are still confronted with news about data breaches every day. I opened my news application yesterday and saw, again, the data breach at Marks and Spencer, the UK retailer; the news just keeps coming. So this is not a trade-off for me. We shouldn’t frame it as either/or. As I said, we absolutely recognize this problem, and I was happy to hear about ISO standards as well. We really have to make sure that we are moving toward some global agreement, building bridges, as one of the interventions said, instead of creating trade-offs: including stakeholders and finding the solutions together. And in this regard, I’m so happy that we’re having these dialogues today. I find it so enriching for us as well, as we are trying to propose our own solutions and consult our community at the Internet Society. Thank you.


Karen Mulberry: Yes, we have a comment online?


Online moderator: Yes, so we have actually a few comments that are coming from the chat, which I’ll briefly summarize.


Karen Mulberry: Please.


Online moderator: Yes. And I’ll then hand the floor to someone on site who took the opportunity to interact, in hybrid format, with the participants online. We have one comment from Denise, who notes that age verification on specific platforms and services where children log in is one thing, but what about services that do not require logging in, and, for instance, advertising on those platforms? How can we safeguard children there? Another comment, from Lufono, focuses on the broad scope of human rights that we need to enforce, and on how we can ensure a transversal approach to safeguarding human rights in the digital space for children. And then, I’m sorry, I’m just scrolling down the conversation, we have Tapani here on the floor, who raised a comment in the chat that I would kindly ask him to elaborate on. Thank you.


Audience: Okay, thank you. I’m Tapani Taasvainen from Electronic Frontier Finland, and the point I wanted to raise is that while I appreciate, and very much like, that people have realized age verification should be done without identification, that is more difficult than it may seem at first, because identities tend to leak. A poor implementation in particular can easily reveal the exact age of the person in question, even if the system is designed to only tell whether someone is old enough: repeated checks can still leak the exact age. The most trivial example is that if the check fails today and succeeds tomorrow, then we know that you turned 18 today. More generally, if a system allows repeated checks of whether you are old enough on a given date, it will easily leak that information. And identifying details like the browser and operating system being used may easily lead to rather precise identification of the person in question. So the question is: how well do the various verification systems deal with this? Thank you.
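[Editorial note: a tiny, hypothetical simulation shows how real this leak is. A checker that only ever answers the boolean “is this user 18 or older today?” still reveals the exact date of birth to anyone who can query it repeatedly, simply by watching for the day the answer flips.]

```python
from datetime import date, timedelta

def is_over_18(birth_date: date, on_day: date) -> bool:
    """The only question the checker ever answers: a single boolean."""
    eighteenth_birthday = birth_date.replace(year=birth_date.year + 18)
    return on_day >= eighteenth_birthday

# A user whose birth date the observer does not know.
secret_birth_date = date(2007, 6, 15)

# The observer queries the boolean check once per day and waits for the flip.
day = date(2025, 1, 1)
while not is_over_18(secret_birth_date, day):
    day += timedelta(days=1)

# The day the answer flips from False to True is the 18th birthday, so the
# observer recovers the exact date of birth from boolean answers alone.
inferred_birth_date = day.replace(year=day.year - 18)
print(inferred_birth_date)  # 2007-06-15
```

Rate-limiting checks, caching a single attestation, or answering only in coarse age bands narrows this channel but, as the question implies, does not automatically close it.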


Karen Mulberry: Are there no comments on this? Okay. Well, we have time for one more intervention. Please.


Audience: Yes, thank you. I’m Pilar. I’m from the Internet Governance Forum in Spain and a former YOUthDIG participant. I would like to raise some points, first of all regarding the digital wallet from the eIDAS regulation, of which Spain has been one of the main promoters, and which I think has created a really good framework that Europe can start working on. First: how can we ensure that these practices are expanded beyond Europe? The wallet is supposed to be privacy-protective, ensuring that everyone keeps their privacy online even when they access web pages that children cannot access. Here I’d also like to raise one of the YOUthDIG messages that we brought in my year, 2023: we cannot have two internets, one for children and one for adults, where adults can enjoy their privacy and children cannot, because they do not have encryption, because we have to protect them. This promotes internet fragmentation, and we as youth do not want that. We want everyone to have the same privacy, not two completely separate internets. And finally, I’d like to raise a point on leaving this, not regulation, but control, to parents or guardians: it could promote the marginalization of certain groups of children who would not have the same access to content if we leave it to parents or guardians alone. Maybe it should instead be a matter of co-regulation between platforms and governments, so that we can reach the best solution for everyone. Thank you.


Karen Mulberry: Thank you very much. Well, I think we’ve got one more comment over here. Yes, sir.


Audience: My name is Valentin Koval. I’m a member of the Ukrainian media regulator. What I want to say is that this session made me a little bit more Eurosceptic, because the only dilemma we actually have is whether to be kind to our children and our future, or to tech companies. On the other hand, we should understand that this is not a place for technical decisions and solutions; this is a place for political decisions. And the decision can only be one: to protect children. It’s not the time to wait until, say, civil society sits in a row with big tech, or until North Korea sits in a row with Northern Ireland, until all the stakeholders gather together just to decide. We have enough, I hope, technical and political solutions to start, because it’s a real problem. It’s not about stopping children’s access to the Internet; it’s just about stopping their access to the risky parts of the Internet. So we need to understand that it’s not about freedom of speech, or freedom to access information, or even freedom of entertainment. It’s simply the defense of our future. Thank you.


Karen Mulberry: Thank you. Well, to wrap up and then move on to capturing our messages, the output from this session: I think we’ve heard an awful lot about the need for more age-appropriate access to information, and about weighing what might be technically feasible against human rights, digital rights, and data protection, and how all of that applies and works together. I think we still have a lot of work to do. There are a lot of aspects to think about as you move forward in applying a technical solution against all the other criteria that are out there. I can tell you, from my personal experience, the issue I had with my five-year-old was that I forgot to suppress pop-ups. She clicked on an ad that came up; the little game she was playing was appropriate, but the ad was not. So there’s more that needs to be done. So, let me turn to Desara, who is taking the messages for us. We need to review those and all agree that they capture the essence of what we’ve covered this morning, and then that goes into the EuroDIG record. So, please, Desara.


Reporter: Hello, everyone. I’m Desara Dushi, Programme Committee member of EuroDIG. The first message is: it is important to find the right way to address child protection with age verification tools, as these tools affect everybody, not just kids. They create risks to privacy, security, and digital services, and impact the globally interoperable Internet, which affects trust in the Internet. To safeguard against these risks, we need to ensure that mandatory age verification tools comply with the requirements; no technology is foolproof. The negotiations have to move to a global level involving all stakeholders, and we must develop context-aware, rights-based approaches. The second message: technology can deliver. However, to achieve the best interests of the child, we need to make the Internet age-aware, not identity-aware. Child-friendly platforms can only work if the Internet knows at least the user’s age range. However, we need to ensure that the measures taken do not leave anyone out or increase the digital divide. And the last message: the digital environment wasn’t created for children, even though many of them are now online. That’s why we need to bring the children’s perspective into the debate. Age assurance is not a one-step or single solution. While taking these measures, we need to make sure children are not excluded from the Internet, by taking proportionate steps, including offering more than one option. When designing an environment for children, we need to design it with them.


Karen Mulberry: Okay, any comments on the messages, the output from our discussion this morning? Yes, please.


Audience: Yes, hello. Mia Kühlewin from the Internet Architecture Board. Just on the wording: the first point implies that there is a right way, and I think it would make more sense to use more neutral wording here. And it also says mandatory age verification tools need to comply with these requirements, but actually every age verification system, not only the mandatory ones, needs to comply.


Karen Mulberry: Thank you for your comments. Do we have any other comments on what was captured as the output from our session? Well, if there aren’t any other comments or suggestions on our text, I’d like to turn it back over to our Master of Ceremonies to conclude our session. Thank you very much for all of your dialogue.


Moritz Taylor: Thank you very much, Karen. We’ve done incredibly well with the time, thank you very much. That gives everyone here a longer break, so thank you to everyone who contributed to this first session today, and at 11 o’clock, I believe, we’ll be back here. There’ll be coffee, I think, outside in front of the hemicycle again for some final networking, some final chats as we approach the end. Thank you very much and see you soon.


Tatiana Tropina (speech speed: 135 words per minute; speech length: 1915 words; speech time: 849 seconds)

Need globally negotiated standards for interoperability

Explanation

The speaker emphasizes the importance of developing globally negotiated standards for age verification to ensure interoperability. This approach aims to address privacy and security concerns while preserving the global nature of the Internet.


Major discussion point

Global standards for age verification


Agreed with

– Iain Corby
– Michael Terhörst
– Manon Baert
– Esther Passaris

Agreed on

Need for balanced approach to age verification


Disagreed with

– Iain Corby

Disagreed on

Effectiveness of age verification technology


Iain Corby (speech speed: 172 words per minute; speech length: 1713 words; speech time: 595 seconds)

Parental controls alone insufficient, society must help

Explanation

The speaker argues that relying solely on parental controls is not adequate for ensuring child safety online. He suggests that society needs to implement broader age verification controls to assist parents, especially those who may be less affluent or less educated.


Evidence

Statistic that only about 1% of parents use some of the most popular platforms’ parental controls.


Major discussion point

Societal responsibility in online child protection


Disagreed with

– Manon Baert

Disagreed on

Role of parental control vs. societal responsibility


Michael Terhörst (speech speed: 143 words per minute; speech length: 632 words; speech time: 264 seconds)

Age assurance enables safe participation, not just blocking access

Explanation

Age assurance can be used not only to exclude minors from inappropriate content but also to ensure safe usage of platforms. It allows for implementing child-friendly presets and other safety measures effectively.


Evidence

Example of child-friendly presets like limiting communication between adults and young persons to prevent cyber grooming.


Major discussion point

Using age assurance for creating safe online environments for children


Agreed with

– Tatiana Tropina
– Iain Corby
– Manon Baert
– Esther Passaris

Agreed on

Need for balanced approach to age verification


Manon Baert (speech speed: 220 words per minute; speech length: 1429 words; speech time: 389 seconds)

Discussion better than control for supporting children online

Explanation

The speaker suggests that parents can better support their children online through discussion and participation rather than strict control. She argues that restrictive control tools may not be the most effective way to support children’s online experiences.


Evidence

Reference to research showing that discussion and participation are more effective than control in supporting children online.


Major discussion point

Effective parental strategies for online child safety


Disagreed with

– Iain Corby

Disagreed on

Role of parental control vs. societal responsibility


Esther Passaris (speech speed: 109 words per minute; speech length: 142 words; speech time: 77 seconds)

Need context-aware, rights-based frameworks for protection

Explanation

Age verification systems should be developed with consideration for different contexts and based on rights-based frameworks. This approach ensures protection while respecting children’s rights to learn, connect, and grow.


Major discussion point

Developing inclusive and rights-based age verification systems


Agreed with

– Tatiana Tropina
– Iain Corby
– Michael Terhörst
– Manon Baert

Agreed on

Need for balanced approach to age verification


Age verification must not become a barrier to participation

Explanation

While protecting children online is important, age verification systems should not exclude children from digital participation, especially those in the Global South. Protection should not come at the expense of equity and access.


Evidence

Mention of potential exclusion of children in the Global South, especially girls, refugees, and those without formal identity documents.


Major discussion point

Balancing protection with digital inclusion


Agreed with

– Tatiana Tropina
– Iain Corby
– Michael Terhörst
– Manon Baert

Agreed on

Need for balanced approach to age verification


Solutions must work for Global South and marginalized groups

Explanation

Age verification and child protection solutions need to be inclusive and work for children in the Global South and other marginalized groups. These solutions should not deepen the digital divide or silence vulnerable voices.


Major discussion point

Ensuring global inclusivity in age verification solutions


Panelist-Turkey (speech speed: 137 words per minute; speech length: 357 words; speech time: 156 seconds)

Protecting children online is a moral imperative

Explanation

The speaker argues that protecting children online is not just a regulatory duty but a moral imperative, and that how it is done matters as much as why. They call for transparent, privacy-preserving, AI-driven age assurance and for platforms to act on their responsibility.


Major discussion point

Political responsibility in child protection online


Anastasia Feuerstein (speech speed: 135 words per minute; speech length: 235 words; speech time: 104 seconds)

Use AI and industry collaboration for dynamic, adaptive solutions

Explanation

The speaker proposes using AI-driven content moderation and industry collaboration to create dynamic, real-time filtering for age-appropriate environments. This approach aims to adapt to changing online risks and create structural safety measures.


Major discussion point

Innovative technological solutions for online child protection


David Frautschy (speech speed: 140 words per minute; speech length: 270 words; speech time: 115 seconds)

Empower kids through education and reporting mechanisms

Explanation

The speaker suggests focusing on educating children and implementing reporting mechanisms rather than relying solely on technical solutions. This approach aims to empower children to identify and report inappropriate behavior online.


Major discussion point

Education and user empowerment in online safety


Audience (speech speed: 167 words per minute; speech length: 2283 words; speech time: 816 seconds)

Most parents not using available technical measures

Explanation

Research shows that a majority of parents are not using available technical measures to protect their children online. This highlights the need for alternative approaches to ensure child safety online.


Evidence

Statistic that 72% of parents in Germany are not using any technical measures with their children.


Major discussion point

Parental engagement in online child protection


Consider role of data protection authorities

Explanation

The audience raised a question about the appropriate institutional leadership for implementing age verification, specifically considering the role of national data protection authorities given their expertise in privacy and rights-based approaches.


Major discussion point

Institutional responsibility for age verification implementation


Cannot have separate Internets for children and adults

Explanation

The audience member argued against creating separate Internet experiences for children and adults. They emphasized the need for equal privacy protections for all users to avoid Internet fragmentation.


Major discussion point

Unified Internet experience with age-appropriate safeguards


Leaving control only to parents could marginalize some children

Explanation

The audience member pointed out that relying solely on parental control for online safety could lead to the marginalization of certain groups of children. They suggested co-regulation between platforms and governments as a potential solution.


Major discussion point

Equitable approaches to online child protection


Consider expanding European approaches beyond Europe

Explanation

An audience member raised the question of how European practices, such as the digital wallet from the eIDAS regulation, could be expanded beyond Europe. This highlights the need for global solutions in age verification and online safety.


Major discussion point

Global applicability of European online safety approaches


Ukraine sees pressing need for rights-based age verification

Explanation

The audience member from Ukraine emphasized the urgent need for a rights-based framework for age assurance in the context of the ongoing conflict. They highlighted the critical role of the online environment for education and communication in Ukraine.


Evidence

Mention of the context of the Russian full-scale invasion and its impact on the online environment in Ukraine.


Major discussion point

Implementing age verification in conflict-affected regions


Reporter (speech speed: 157 words per minute; speech length: 241 words; speech time: 92 seconds)

Age verification tools affect everyone and create risks

Explanation

Age verification tools impact not just children, but all Internet users. They pose risks to privacy, security, and the global interoperability of the Internet, which can affect overall trust in the Internet.


Major discussion point

Balancing child protection with broader Internet impacts


Need for context-aware, rights-based approaches

Explanation

Age verification systems should be developed with consideration for different contexts and based on rights-based frameworks. This approach aims to protect children while respecting their rights to learn, connect and grow online.


Major discussion point

Developing inclusive and rights-based age verification systems


Involve children in designing safe digital environments

Explanation

When designing online environments for children, it’s crucial to involve them in the process. Children are aware of risks and often develop their own solutions, making their input valuable in creating effective safety measures.


Major discussion point

Child participation in online safety design


Agreed with

– Manon Baert

Agreed on

Importance of child participation in online safety design


Moritz Taylor (speech speed: 169 words per minute; speech length: 223 words; speech time: 79 seconds)

Age verification is a complex and controversial topic

Explanation

The issue of age verification online involves various conflicting ideas and perspectives. It is a topic that generates significant debate and discussion.


Evidence

Reference to the session title ‘The Age Verification Dilemma, Balancing Child Protection and Digital Access Rights’


Major discussion point

Complexity of balancing child protection and digital rights


Online moderator (speech speed: 128 words per minute; speech length: 375 words; speech time: 175 seconds)

Need to address age verification for services without login

Explanation

Age verification is not only relevant for platforms where users log in, but also for services that don’t require login. This includes addressing issues like advertising on these platforms to protect children.


Major discussion point

Comprehensive approach to age verification across different online services


Importance of transversal approach to human rights

Explanation

There is a need to ensure a comprehensive approach to safeguarding human rights in the digital space for children. This involves considering various aspects of human rights protection simultaneously.


Major discussion point

Holistic human rights protection for children online


Torsten Krause (speech speed: 125 words per minute; speech length: 219 words; speech time: 104 seconds)

Need for safe digital environments serving children’s best interests

Explanation

It is crucial to create safe digital environments that prioritize the best interests of children. This involves balancing various rights including participation and protection.


Major discussion point

Creating child-centric safe digital spaces


Age assurance as key to safe digital environments

Explanation

Age assurance could be a crucial tool in delivering safe digital environments for children. However, it’s important to address concerns about privacy and anonymity protection.


Major discussion point

Role of age assurance in online child safety


Ozgur Kesim (speech speed: 145 words per minute; speech length: 275 words; speech time: 113 seconds)

Privacy-preserving age verification through digital payments

Explanation

A privacy-protecting solution for age verification has been implemented in the GNU Taler payment system. It binds the ability to prove sufficient age to a particular digital coin, without involving ID verification.


Evidence

Example of the GNU Taler payment system implementation


Major discussion point

Innovative approaches to privacy-preserving age verification


Karen Mulberry (speech speed: 128 words per minute; speech length: 1209 words; speech time: 566 seconds)

Distinction between age verification and age assurance

Explanation

There is an important difference between age verification and age assurance. Age verification requires proof of age, while age assurance involves estimating or verifying age to ensure appropriate access to content or services.


Evidence

Definitions provided for age verification and age assurance


Major discussion point

Understanding different approaches to determining user age online


Viktoriia Omelianenko (speech speed: 163 words per minute; speech length: 361 words; speech time: 132 seconds)

Need for multi-stakeholder engagement in child online protection

Explanation

Protecting children online requires engagement from multiple stakeholders. This includes policymakers, the private sector, parents, educators, and children themselves.


Evidence

Reference to ITU Child Protection Guidelines for all stakeholders


Major discussion point

Collaborative approach to online child protection


Agreements

Agreement points

Need for balanced approach to age verification

Speakers

– Tatiana Tropina
– Iain Corby
– Michael Terhörst
– Manon Baert
– Esther Passaris

Arguments

Need globally negotiated standards for interoperability


Age assurance enables safe participation, not just blocking access


Need context-aware, rights-based frameworks for protection


Age verification must not become a barrier to participation


Summary

Speakers agreed on the need for a balanced approach to age verification that protects children while preserving digital rights and access.


Importance of child participation in online safety design

Speakers

– Manon Baert
– Reporter

Arguments

Involve children in designing safe digital environments


When designing an environment for children, we need to design it with them


Summary

Speakers emphasized the importance of involving children in the design of online safety measures.


Similar viewpoints

Both speakers stressed the importance of ensuring that age verification solutions do not exclude or marginalize vulnerable groups, particularly in the Global South.

Speakers

– Tatiana Tropina
– Esther Passaris

Arguments

Solutions must work for Global South and marginalized groups


Age verification must not become a barrier to participation


Both speakers emphasized the importance of enabling safe participation for children online, rather than focusing solely on restrictive measures.

Speakers

– Michael Terhörst
– Manon Baert

Arguments

Age assurance enables safe participation, not just blocking access


Discussion better than control for supporting children online


Unexpected consensus

Privacy-preserving age verification

Speakers

– Tatiana Tropina
– Iain Corby
– Ozgur Kesim

Arguments

Need globally negotiated standards for interoperability


Parental controls alone insufficient, society must help


Privacy-preserving age verification through digital payments


Explanation

Despite coming from different perspectives, these speakers all recognized the importance of developing privacy-preserving age verification methods, showing an unexpected consensus on the need to balance child protection with privacy concerns.


Overall assessment

Summary

The main areas of agreement included the need for a balanced approach to age verification, the importance of child participation in safety design, and the necessity of privacy-preserving methods.


Consensus level

Moderate consensus was observed on the need for nuanced, rights-based approaches to age verification. However, there were varying perspectives on implementation details and the role of different stakeholders. This implies that while there is general agreement on the importance of the issue, further discussion and collaboration are needed to develop effective and widely accepted solutions.


Differences

Different viewpoints

Effectiveness of age verification technology

Speakers

– Tatiana Tropina
– Iain Corby

Arguments

Need globally negotiated standards for interoperability


Technology can deliver; however, to achieve the best interests of the child, we need to make the Internet age-aware, not identity-aware.


Summary

Tatiana Tropina expresses skepticism about the effectiveness and risks of mandatory age verification tools, while Iain Corby argues that technology can deliver effective age verification solutions without compromising privacy.


Role of parental control vs. societal responsibility

Speakers

– Iain Corby
– Manon Baert

Arguments

Parental controls alone insufficient, society must help


Discussion better than control for supporting children online


Summary

Iain Corby argues for broader societal involvement in age verification, while Manon Baert emphasizes the importance of parental discussion and participation over strict control measures.


Unexpected differences

Global applicability of European approaches

Speakers

– Esther Passaris
– Audience

Arguments

Solutions must work for Global South and marginalized groups


Consider expanding European approaches beyond Europe


Explanation

While both speakers discuss global applicability, there’s an unexpected tension between ensuring solutions work for marginalized groups and expanding European approaches globally, highlighting potential conflicts in addressing global digital inequalities.


Overall assessment

Summary

The main areas of disagreement revolve around the effectiveness and implementation of age verification technologies, the balance between parental and societal responsibility, and the global applicability of proposed solutions.


Disagreement level

The level of disagreement is moderate to high, with significant implications for policy-making and technological development in online child protection. These disagreements highlight the complexity of balancing child safety, privacy, and digital rights across diverse global contexts.




Takeaways

Key takeaways

Age verification tools create privacy and security risks, but technology can potentially deliver verification without compromising privacy


There’s a need to balance child protection with digital rights and access


The current digital environment was not designed with children’s safety in mind


A multi-stakeholder, context-aware, rights-based approach is needed for age verification and child protection online


Parental controls alone are insufficient; society and platforms must also take responsibility


Solutions must be globally applicable and preserve the interoperability of the Internet


Resolutions and action items

Develop privacy-preserving technical standards for age verification


Engage in globally negotiated standards for age verification to ensure interoperability


Design digital environments for children with their input and participation


Ensure age verification tools comply with privacy and security guidelines


Unresolved issues

How to implement age verification without excluding marginalized groups or those without formal identification


How to balance age verification with children’s right to privacy and anonymity online


The appropriate role of data protection authorities in age verification processes


How to expand European approaches to age verification beyond Europe


How to address age verification for services that don’t require logging in


Suggested compromises

Use AI-driven content moderation with dynamic real-time filtering to create age-appropriate environments instead of strict blocking


Implement ‘double blind’ or zero knowledge proof systems to protect privacy while verifying age


Use age assurance to enable safe participation rather than just blocking access


Develop multiple options for age verification to ensure accessibility


Thought provoking comments

Age verification mustn’t become a gateway to surveillance or exclusion. It must be designed to empower, not to restrict, proportionately.

Speaker

Atul Kerimgun


Reason

This comment succinctly captures a key tension in the debate – the need to balance protection with empowerment and avoid unintended negative consequences.


Impact

It shifted the discussion towards considering more nuanced, empowering approaches rather than just restrictive measures.


We cannot have two internets, one for children and one for adults, where adults can enjoy their privacy and children cannot, because they do not have encryption, because we have to protect them. This promotes internet fragmentation, and we as youth do not want that.

Speaker

Pilar


Reason

This comment brings in the youth perspective and highlights the risk of creating a fragmented internet in attempts to protect children.


Impact

It challenged participants to think about solutions that protect children without compromising the open nature of the internet or children’s rights to privacy.


We believe that the Internet is for everyone. The Internet improves the lives of people and our society for the better. Our goal is for the Internet to be open, globally connected, secure, and trustworthy. And in this light, I want to start with one big question: do mandatory age verification tools make the Internet more secure?

Speaker

Tatiana Tropina


Reason

This comment frames the discussion in terms of broader internet principles and challenges the assumption that age verification necessarily improves security.


Impact

It prompted participants to consider the wider implications of age verification measures on the overall health and openness of the internet.


Age assurance per se is not a silver bullet, it’s not a one-stop solution, it’s not going to make, you know, the environment safer for kids. And I think it’s important that we really, you know, need to reframe that debate.

Speaker

Manon Baert


Reason

This comment pushes back against oversimplified solutions and emphasizes the need for a more comprehensive approach.


Impact

It broadened the discussion to consider age assurance as part of a larger ecosystem of child protection measures rather than a standalone solution.


Overall assessment

These key comments shaped the discussion by moving it beyond simple pro/con positions on age verification to consider more nuanced approaches. They highlighted the need to balance child protection with maintaining an open, globally connected internet, preserving privacy for all users, and avoiding fragmentation. The comments also emphasized the importance of youth perspectives and comprehensive solutions rather than relying solely on technical measures. This led to a richer, more multifaceted exploration of the challenges and potential solutions in protecting children online.


Follow-up questions

How can we ensure that more parents are using technical measures like screen time and other parental controls with their children?

Speaker

Katrin (audience member from a German media authority)


Explanation

Research shows that 72% of parents in Germany are not using any technical measures with their children, indicating a need to increase parental engagement with these tools.


What is the appropriate institutional leadership for implementing online age verification mechanisms?

Speaker

Yegor (from Ukrainian office)


Explanation

There is a need to determine whether national data protection authorities or digital policy-making bodies should lead such initiatives, given their different areas of expertise.


How can we ensure that age verification practices can be expanded beyond Europe?

Speaker

Pilar (from Internet Governance Forum in Spain)


Explanation

There is a need to explore how privacy-protective frameworks like the European digital wallet can be implemented globally to ensure consistent protection across different regions.


How well do various verification systems deal with potential identity leaks through repeated checks or browser fingerprinting?

Speaker

Tapani Taasvainen (from Electronic Frontier Finland)


Explanation

There are concerns about how age verification systems might inadvertently reveal exact ages or identities through technical implementation details, requiring further investigation into their privacy protections.


How can we address age verification for services that do not require logging in, such as advertising on platforms?

Speaker

Denise (online participant)


Explanation

This highlights a gap in current age verification approaches for protecting children from potentially harmful content in spaces where they are not explicitly logged in.


How can we ensure a transversal approach to safeguarding human rights in the digital space for children?

Speaker

Lufono (online participant)


Explanation

This question emphasizes the need for a comprehensive strategy that addresses the full spectrum of human rights issues related to children’s online experiences.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.