DC-IoT & DC-CRIDE: Age aware IoT – Better IoT

18 Dec 2024 08:30h - 10:30h

DC-IoT & DC-CRIDE: Age aware IoT – Better IoT

Session at a Glance

Summary

This discussion focused on age-aware Internet of Things (IoT) and how to create better IoT systems that respect children’s rights and safety. Participants explored various aspects of this topic, including data governance, age assurance technologies, AI’s role, and capacity building.


The conversation highlighted the importance of considering children’s evolving capacities when designing IoT systems and policies. Experts emphasized the need for a more nuanced approach to age verification that goes beyond simple chronological age limits. They discussed the challenges of balancing children’s protection with their rights to privacy, access to information, and participation.


The role of AI in IoT systems was examined, with participants noting both its potential benefits for personalized learning and its risks in terms of data collection and user profiling. The discussion touched on the need for ethical AI development that considers children’s best interests.


Labeling and certification of IoT devices were proposed as ways to empower users and parents to make informed choices. Participants stressed the importance of global standards and the potential role of public procurement in driving adoption of child-friendly IoT practices.


The conversation also addressed the need for capacity building among various stakeholders, including parents, educators, and industry professionals. Experts called for more inclusive discussions that involve children and young people in the development of IoT policies and technologies.


Throughout the discussion, participants emphasized the shared responsibility of industry, governments, and civil society in creating a safer and more empowering digital environment for children. They concluded by highlighting the importance of ongoing dialogue and the need for practical, enforceable solutions to protect children’s rights in the evolving IoT landscape.


Key points

Major discussion points:


– The importance of age-aware IoT and developing good practices to protect children while allowing them to benefit from technology


– The need for better data governance, labeling, and certification of IoT devices to empower users and protect privacy


– The role of AI in adapting IoT environments to users’ abilities and needs


– The importance of capacity building and education for children, parents, and other stakeholders about IoT


– The tension between innovation, regulation, and corporate responsibility in developing safe IoT for children


Overall purpose:


The goal of this discussion was to explore how to develop age-appropriate and safe Internet of Things (IoT) technologies that serve people, especially children, while addressing potential risks and ethical concerns.


Tone:


The tone was collaborative and solution-oriented, with experts from different fields sharing insights and building on each other’s ideas. There was a sense of urgency about addressing these issues, but also optimism about finding ways to harness technology for good. Towards the end, the tone became more pointed about the need for corporate accountability and including children’s voices in future discussions.


Speakers

– Maarten Botterman: Chair of the Dynamic Coalition on Internet of Things (DC IoT)


– Sonia Livingstone: Professor at London School of Economics, expert on children’s rights in digital environments


– Jonathan Cave: Senior teaching fellow at University of Warwick, Turing Fellow at Alan Turing Institute, former economist member of British Regulatory Policy Committee


– Jutta Croll: Representative of Dynamic Coalition on Children’s Rights in the Digital Environment


– Torsten Krause: Role not specified, helped moderate online comments


– Pratishtha Arora: Expert on AI and children’s engagement with technology


– Abhilash Nair: Legal expert on age assurance and online regulation


– Sabrina Vorbau: Representative of Better Internet for Kids initiative


Additional speakers:


– Helen Mason: Representative from Child Helpline International


– Musa Adam Turai: Audience member who asked a question


Full session report

Age-Aware Internet of Things: Protecting Children’s Rights in a Connected World


This comprehensive discussion explored the challenges and opportunities of creating age-aware Internet of Things (IoT) systems that respect children’s rights and safety. Experts from various fields, including child rights, economics, law, and technology, convened to address the complexities of developing IoT technologies that serve people, especially children, while mitigating potential risks and ethical concerns.


Key Themes and Discussions


1. Data Governance and Age-Aware IoT


The conversation highlighted the nuanced nature of data collection in IoT systems, recognising that it can be both beneficial and harmful to children. Jonathan Cave emphasised that static age limits may not be appropriate given the evolving capacities of children. Sonia Livingstone stressed the need to consider broader child rights beyond just privacy and safety, arguing for a more holistic approach to children’s rights in digital environments. She emphasised the importance of consulting children in the design of technologies and policies.


2. Labelling and Certification of IoT Devices


Several speakers agreed on the importance of labelling and certification for IoT devices as a means of empowering users and protecting privacy. Maarten Botterman suggested that such measures could enable users to make informed choices about the technologies they adopt. Jutta Croll proposed leveraging public procurement to drive the adoption of standards, while Abhilash Nair noted that certification could help mitigate literacy issues for parents and caregivers.


Jonathan Cave expanded on this idea, suggesting that public procurement could serve as a complement to self-regulation or formal regulation, potentially incentivising industry compliance with safety standards. This approach was seen as a novel policy tool for promoting child-safe technologies.


3. The Role of AI in Age-Aware IoT


The discussion explored the dual nature of AI in IoT systems, acknowledging its potential to both facilitate and potentially distort children’s development. Jonathan Cave highlighted this duality, while Pratishtha Arora emphasised the importance of developing age-appropriate AI models and interfaces. Arora also raised the crucial point of considering impacts on children who may not be direct users of IoT devices but are nonetheless affected by them.


4. Capacity Building and Awareness


Participants stressed the need for translating research into user-friendly guidance for parents and educators. Sabrina Vorbau discussed the Better Internet for Kids initiative, which aims to create a safer and better internet for children and young people. She emphasised the importance of involving children and youth in developing these resources. Jutta Croll mentioned the EU ID wallet as a potential tool for age verification in digital environments.


Helen Mason from Child Helpline International advocated for including civil society and frontline responders in discussions, noting that data from child helplines could provide valuable insights into children’s experiences with online technologies.


5. Corporate Responsibility and Regulation


A significant portion of the discussion focused on the need to place more responsibility on industry rather than users for ensuring child safety in IoT environments. Sonia Livingstone argued strongly for this shift, while Jonathan Cave suggested that personal liability for executives might drive more attention to child safety issues. Abhilash Nair supported this idea, noting that it could lead to more proactive measures from companies.


The conversation also touched on the tension between free speech rights and child protection, particularly in conservative societies, as raised by audience member Musa Adam Turai. This highlighted the need for nuanced approaches that balance various rights and cultural contexts.


Thought-Provoking Insights


Several comments sparked deeper reflection and shifted the discussion:


1. Jonathan Cave’s challenge to static age-based protection measures, encouraging more nuanced approaches based on digital maturity.


2. Sonia Livingstone’s emphasis on considering the full spectrum of children’s rights, not just safety and privacy.


3. An audience member’s suggestion to focus more on media information literacy rather than access restrictions.


4. Livingstone’s critique of the term “user” and how it can lead to overlooking children’s specific needs and rights in technology development and policy discussions.


Conclusion and Future Directions


The discussion concluded with a call for ongoing dialogue and practical, enforceable solutions to protect children’s rights in the evolving IoT landscape. Participants emphasised the shared responsibility of industry, governments, and civil society in creating a safer and more empowering digital environment for children.


Key takeaways included the need to consider children’s evolving capacities in IoT design, the potential of labelling and certification to empower users, the importance of involving children in technology development processes, and the need for greater industry responsibility.


Moving forward, participants suggested involving children and young people in future IGF sessions on this topic, developing more user-friendly guidance on age assurance, and considering the use of public procurement to drive adoption of child safety standards in IoT. Jutta Croll noted the upcoming high-level session on children’s rights in the digital environment at the UN, highlighting the growing importance of this topic on the global stage.


Session Transcript

Maarten Botterman: Oh, you cannot unmute. Jonathan, you should be able to…


Jonathan Cave: Oh, now I can. Yes, I am allowed to unmute. I can’t turn on my camera, but at least I can speak.


Maarten Botterman: Okay, that's excellent. Thank you. Sonia, we'll check you too…


Sonia Livingstone: Hello. Yes, I can speak now. Thank you. And it would be lovely to have my camera on, if that’s possible.


Maarten Botterman: We're checking. Thank you. Can we put on the camera for those speakers? Can we put on the camera for Sonia and Jonathan? Yes. Oh, excellent.


Sonia Livingstone: Thank you.


Maarten Botterman: And Jonathan Cave, right? And Jonathan Cave. Jonathan Cave as well. So, unmute your camera. Yes, there I am. Great. Thank you. Jonathan and Sonia, you can mute and unmute yourselves. So, if you're not speaking, maybe best to mute yourself.


Sonia Livingstone: Okay, sounds good.


Maarten Botterman: You're now both co-hosts. Shall we begin? Can we begin? Okay, good morning, everybody. Welcome to the session from DC-IoT and DC-CRIDE, which is focused on age-aware IoT and better IoT. Can you hear me well in the room? Good. So, this session will take us through the landscape of evolving technology and how it relates to how we deal with people socially in terms of age, and how we can make sure technology serves the people, with a specific focus on age. There are so many opportunities to make everyday life more convenient, safer and more efficient. But there are also threats that come with that, and we want to get the best out of it. This is why the dynamic coalitions throughout the year explore how to develop good practice in the best possible way and address risks: the processing or provision of information that may be inappropriate or even harmful to individuals, or the initiation of processes that are based on false assumptions. And one of the ways to counter these risks is by categorizing the users. If the devices in the surroundings can categorize users, the Internet of Things can adequately adapt to the needs of that user and take the specific measures to serve them. So, this is why Jutta and I discussed bringing the two dynamic coalitions together to focus on what this will look like. A little bit on the background of DC-CRIDE: can you share that, Jutta?


Jutta Croll: Yes, of course I can share that. You gave me the perfect segue to that when you mentioned evolving technologies, because the dynamic coalitions are talking about the evolving capacities of children, which is one of the general principles of the UN Convention on the Rights of the Child. Both dynamic coalitions started their work very early in the Internet governance process. The dynamic coalition on children's rights was then called the Dynamic Coalition on Child Online Safety, and it started in 2007. I do think IoT started the same year as well?


Maarten Botterman: 2008.


Jutta Croll: 2008. So, a very long-standing collaboration between these two dynamic coalitions, in a certain way. And several years later, when General Comment No. 25 came out, which is dedicated to children's rights in the digital environment, we renamed the dynamic coalition to Children's Rights, as Maarten has already said. And we found some similarities in the work that we are doing and also in the objectives that we want to achieve, because we know that children are early adopters of new and emerging technologies, and that is always where we have to look at whether their rights are ensured and whether they can benefit from these technologies. And IoT is one area that can help people, that can help children, to benefit from the digital environment. Having said that, I hand over to you, Maarten, again.


Maarten Botterman: Thank you so much. So, basically, the DC IoT (2008, in Hyderabad, was the first time) has been talking over time about how IoT can serve people: what global good practice guidelines should be adopted by people around the world, because this technology is used everywhere. Like the Internet, the Internet of Things doesn't stop at the border. Very practically, because products come from all over the world, but also because, for instance, the more mobile IoT devices, like in cars, in planes, or what you carry with you when you travel, cross borders all the time as well. So an understanding of global good practice would also help governments to take it into account when they develop legislation, being more aware of what the consequences could be and what to think of. Global business could take it into account in the design and development of devices and systems. By doing that from the outset, innovation can be guided by these insights, even when it is not law yet. So, the Internet of Things good practice aims at developing IoT systems, products and services that take ethical considerations into account from the outset, in the development phase, deployment phase, use phase, and waste phase of the life cycle: a sustainable way forward. It is about using IoT to help create a free, secure, and enabling rights-based environment with a minimal ecological footprint, for a future we want for ourselves and for future generations. And when we talk about an Internet we want, in which IoT as we want it is to be developed, it is crucial that we get clear what that means for us, and that we take action to make something happen there. I still remember very well what Vint Cerf, the chair of the high-level panel in Kyoto, said: if we don't make the Internet we want, we may get the Internet we deserve, and we may not like that. So, with that, I really look forward to the discussions today, for which we have a number of excellent speakers in the room and online. We will talk first about the data governance aspects that underlie this; then we go into labeling and certification of IoT devices, as that helps the transparency of these devices and what they can do, and empowers users to be more informed in their choices. Every session so far, I think, I've heard the word AI, so let me be the first one to mention it here. Of course, it matters for how IoT environments work and also for how selections can be made to adapt them to the abilities of people. And then, last but not least, in all this the kind of horizontal layer is: how do we develop capacity? Because IoT may be developed all over the world, but to apply it locally, you need local knowledge. So where does that come together? How can we work on that? I'm very happy to have Sabrina here to talk more about that. With that, I'd love to give the word to Jonathan Cave, a senior teaching fellow in economics at the University of Warwick and a Turing Fellow at the Alan Turing Institute, which is well known for its work on ethics in the digital space. He is a former economist member of the British Regulatory Policy Committee. Jonathan, can you dive into the data governance aspect, why it is so important and what we need to do about it? You will be followed by Sonia.


Jonathan Cave: Yes, thank you, Maarten, and thank you, everyone, for showing up. This is a very important topic, and I'm going to largely limit these first remarks to matters dealing with data. But one thing I want to point out is that this idea of evolving capacity applies not only to the technologies, which are changing and collecting more and more data, but also to the evolving capacities of the individuals involved, in particular children.


Maarten Botterman: Jonathan, can you improve your microphone? Not really. Okay, you’re understandable, but just not great. If you don’t have an easy trick, let’s continue. Sorry about that.


Jonathan Cave: Okay, let me just try.


Maarten Botterman: Closer to the device may help. Okay, well, let me try.


Jutta Croll: Now that you’re so close to the device, it’s better if you just go close.


Jonathan Cave: No, actually, the device is attached to my ears, so there’s no way of going closer without changing my face geometry. But I’ve switched to another microphone on my camera. Is that better?


Jutta Croll: Yes. Yes, it’s much better.


Jonathan Cave: Okay, thank you. One can never tell with these technologies. Yes, what I think is very interesting is that much of our law and many of our policy prescriptions around child safety are predicated on the idea of chronological age: that people below a certain age should be kept safe, and people above that age lose that protection. But of course, particularly as children have more and more experience of online environments, and the people making the rules have less and less experience of the new technologies, that static perspective of protecting people on the basis of age may not be the most appropriate, and we need to stay aware of that. First of all, I think it is interesting to remember the data governance issues. One element of this is that the data themselves can be a source of either safety or of risk and harm to young people, and the reason we care about that is both the immediate harm and the collective, progressive, or ongoing harm: early exposure to inappropriate content, which includes manipulation of individuals by priming and profiling, can change the way people think as individuals or as groups. Now, in that respect, the question becomes: which data should people have available to them? One particular element of this is that we have a lot of privacy laws, and many of these privacy laws set age limits for people's exposure to, or ability to consent to, certain kinds of data collection or processing. Mostly, these are predicated on what we would consider sensitive data, but in the online environment, particularly social or gaming environments, many more data are collected whose implications we only dimly understand, and this is where AI comes in. It is not obvious which data may be harmful. So, instead of imposing rules and asking industries simply to comply with those rules, we may need, and in areas like online harms we are increasingly moving in this direction, a sort of duty of care, where we make businesses and providers responsible for measurable improvements and not for following static codes. So it is harm reduction rather than compliance. So there is the question about which data are collected. There is also a more minor issue: children are exempt from certain kinds of data collection, but those may be the same data needed to assess either their true chronological age or their level of digital maturity. So it may be that some of the rules we have in place make it difficult to keep pace with the evolving technologies. Okay, I think, rather than going on, I should turn over to Sonia at this point, and come back with comments later.


Maarten Botterman: Fantastic. Thank you. Yes, Sonia, please go on.


Sonia Livingstone: Okay. Brilliant. Thank you very much, and thank you for the preceding remarks, which kind of set the scene. So, I did want to begin by acknowledging what an interesting conversation this promises to be, because we are bringing together two constituencies, those concerned with the Internet of Things and those concerned with children's rights, that haven't historically talked together. It is really valuable that we are having this conversation now. In a kind of Venn diagram of child rights and IoT experts, I think the overlap currently is relatively modest, and I hope we can widen the domain of mutual understanding. I think it's even there in some of the… and age assurance is a brilliant topic to illustrate some of both the overlaps and the differences. So, I guess, from a child rights perspective, a starting point is to say that it is very hard to respect children's rights online, or in relation to IoT, if the digital providers don't know which user is a child. And so having that knowledge seems a prerequisite to respecting children's rights. And yet, as some of us have been investigating, it is far from clear that age assurance technologies as they currently exist do themselves respect children's rights. So the danger is that we might bring in a non-child-rights-respecting technology to solve a serious child rights problem. And I think this challenge is amplified in relation to the Internet of Things, because now we are talking about technologies that are embedded, that are ambient, and that users may not even know are operating or collecting their data, processing… but also introducing risks. So, a child rights landscape always seeks to be holistic. Privacy is key, as has already been said. Safety is key, as has already been said. But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits. And so there are also costs, potentially, to children's rights, as well as benefits. It is that broader perspective that is crucial. So it is always important that we think about privacy and safety, hygiene factors, if you like: how do we stop the technologies introducing problems? But we need to think about those also in relation to the rest of children's rights. What am I thinking of? I'm thinking of consulting children in the design and making of both the technologies and the policies that regulate them. I'm thinking of children's right to access information and right to participation, rather than being excluded through age limits, or perhaps through delegating responsibility for children to parents, which means parents might exclude children. We are seeing a lot of this in various parts of the world at the moment. As Jutta said, I'm thinking about evolving capacities. This is not just a matter of age limits, where underneath children are excluded and above they are placed at risk, as it were. That is a terrible binary, if that is where we are heading. But we are also thinking of appropriate provision for children who are users or may be impacted by technology. They may not be the named user. They may not be signed up to the profile or the service, or paying for the service, but they may be in the room, in the car, in the street, in the workplace, in the school. They may be impacted by the technology. I'm thinking also about best interests: that overall, the balance should always be in the child's best interests. That is what every state in the world, except America, has signed up to when it ratified the convention.
And I'm thinking of child-friendly remedy, so that when something goes wrong, children themselves can seek redress, not necessarily through adult remedy. So I think a child rights approach brings a broader perspective, but also one that is already embedded in and encoded in established laws, policies and obligations on institutions and states to ensure that these new areas of business respect children's rights during, and as part of, their innovation process. And so I'll end with a mention of child rights by design, if you like, to give a broader focus to questions of privacy by design.


Maarten Botterman: Thank you. I see no… Oh, in the chat. Let me check with the host. No, there's no comment. Torsten?


Torsten Krause: Yeah, we have one comment in the chat. I will read it out; it was written by Godzway Kubi, I hope I pronounced that correctly. Godzway Kubi wrote: two ways age-aware IoT can contribute to a better IoT ecosystem for children is by prioritizing private devices designed for children, such as smart toys and learning tools, and using age-appropriate interfaces and content filters to enhance usability and safety.


Maarten Botterman: Okay, thank you for that very appropriate remark. By the way, Sonia, the first joint session on age was actually on children's toys and IoT, six years ago. So… Jonathan, please.


Jonathan Cave: Yeah, just a small follow-up. Those are extremely useful remarks. There's one thing I wanted to say about the use of technologies to identify children's ages, whether chronological or, let's call them, digital ages, which is that these, like all other age verification technologies, can be bypassed. And one particular concern we should have is that when these bypass approaches become known to children, or to groups of children, there may be an adverse selection, in the sense that those most likely to bypass those protections may be those most at risk, either as individuals or in the groups through which these practices are shared and disseminated. I remember when I was growing up, the drinking age in our county was 21, in the neighboring county it was 18, and fake IDs were in widespread circulation among certain social networks. And so there is this issue about whether, although the solution may be very good, the path to the solution may be more harmful than where we are now.


Maarten Botterman: I think that's a very good point, and one of the big ones we will come back to later on. So with that, I would like to move on to the second topic. Oh, Sonia, do you want to respond to that?


Sonia Livingstone: Well, I was just going to say, thank you. Very briefly, in response to Jonathan, and thinking of a paper that I worked on as part of the EU Consent project, also with Abhilash: when we consult children and families, they actually value those kinds of workarounds. It provides a little flexibility. And I might say, I don't think a 15-year-old should use a fake ID to drink, but perhaps a 19-year-old could. And it is that little bit of flexibility around hard age limits that many families and children say is important to them, just that bit of flexibility where they know their child is a bit less mature or a bit more mature. And so encoding everything in legislation and technology can in itself take away some agency from families. And I think that's a challenge to consider.


Jonathan Cave: I'd also say that it takes away some agency from the children themselves, who have to learn how to navigate. There is this tension between asking whether the environment can be made safe and, if not, whether denying access, as for example in Australia, is the appropriate approach, or whether some more active form of engaged filtration is. But then you have to move away from the binary, of course, because you have to not only gauge how mature children are, but provide them with curated experiences, perhaps under parental or peer control, that enable them to become capable of safely navigating these environments.


Maarten Botterman: So I think there is a difference between hard, legally coded limits and practical limits. Online tools can be more useful as practical limits than hard-coded limits, unless you involve obligatory registration.


Jutta Croll: Yes. So far, what we know, for example from the GDPR, is a hard-set limit of between 13 and 16. In every country in Europe it is a hard limit, either 13, 14, 15, or 16. An alternative is age brackets, which would mean a certain age range, saying between 13 and 15, or 16 to 18, or something like that, so that you have a bit of a range. And there the issue of maturity comes in. You might have a 13-year-old that is as mature as a 15-year-old, and a 15-year-old that is only like someone who is 14, or 11, or 12, something like that. So when we are not talking about an exact age threshold, but about age brackets, we get some of this flexibility. And it also plays into the concept of maturity, so it makes things more flexible. Thank you.


Maarten Botterman: Yes, thank you very much. And I see another one, from Doherty, in the online audience. I don't know about this one, but let's hear it. Torsten, please.


Torsten Krause: Doherty Gordon wrote in a comment, repeating a little bit what Jonathan said. She wrote: denying access will always encourage teens to look for workarounds and to engage in dangerous behavior, because they have no guidance. Why do we not put more emphasis on media and information literacy, so that users understand how to protect themselves?


Maarten Botterman: Yes, thanks for that. And the age brackets, can you come in on that as well? Please.


Pratishtha Arora: Yeah, so I want to speak on that point. Looking at all aspects of children, when we are defining age for children's access to technology, it is equally important to ask what category of children we are talking about, because it might differ according to their sense of understanding of technology, and also their sense of understanding of how to use the technology.


Maarten Botterman: Yes, thanks. And this was Pratishtha Arora, who will be speaking shortly on AI as well. Thank you for that. So how can we help ensure that parents, children and their environments know how devices can serve them? That is the next topic we would like to dive into. Basically, we have all these goods from all over the world, with different capacities, and we have found in the past that, for instance, the security of these devices was sometimes limited: being set with default passwords like 'admin' is not useful. We also found that some devices, for instance, send data back to the factory, or wherever, without users being aware of that, which calls for more clarity. At the same time, legislation is per country. So how can we together get a good tool here that helps us understand what we actually buy and what we actually start using? From the IoT perspective, we had a big discussion last year in Kyoto, where it was made clear that labeling of devices and of services is crucial. It may even need to be dynamic, because with upgrades of software and the like, the label may change. So a label that can be linked to an online repository would be crucial. And certification, of course, is important, because one could claim anything on a label, and how do we know that it is true? So some certainty needs to be built in. There are different certification schemes; this is not the session to go very deep into the differences between those. At this moment, these labeling and certification schemes are being discussed and put in place around the world. A framework has been put in place in Singapore that has identified this; action has been taking place in other places, and in Europe as well, of course, as part of the Digital Services Act. And what we see now is that the diplomatic go-around is beginning: countries are talking to each other about how to do this in such a way that they can recognize each other's certifications, so that the labels of other countries are useful for us as well. This is the beginning, but we are not there yet. The deeper intent of labeling and certification (labeling is 'what is this about', certification is 'how do I know this is correct') is basically to empower users to make smarter choices. So next to security, which we discussed last year, it should also be about data security: where are the data streams going? And what has come up over the years since is more and more emphasis also on how much energy a device uses, like IEEE is already doing for electronic devices. All this together is, I think, about making clear what IoT devices can do and can offer. And I'm very interested to hear your perspective, Abhilash, on what this can do for age awareness and appropriateness.


Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a starting point, we know that the law requires some form of age assurance for various content services and for the sale of goods online. Some of these laws have explicit requirements of age verification or age assurance; in others they are implied. But in practice, there is very little out there. For decades, really, we have had laws that have not been enforced properly because they have not been complemented with appropriate age assurance tools. The notable exception is probably online gambling, where the law seems to have worked in some jurisdictions, for example under the UK Gambling Commission. But part of the reason it was successful is that it is not just about age assurance; it is also about identity verification, where people need to be identified so that they can be offered support for problem gambling, and so forth. In almost all other cases, when we looked at all member states in the EU plus the UK in the EU Consent project that Sonia mentioned earlier, we found that there was very little out there in terms of AV tools, age verification tools, that can actually help implement legislation. And content was the most problematic of all, not least because there are cultural variations even within Europe as to what is acceptable content, even for children within the sub-18 category. But there was also a wider problem there: a disconnect between the principle of self-regulation that the EU has advocated, especially for content, on the one hand, and legislation that suggests the adoption of age assurance tools on the other. So this has not led to a happy situation: the law on the one hand requires age assurance to protect children, but the practical reality is that there has not been any useful means of enforcing that legislation in practice. There is a legal principle which suggests that a law that cannot be enforced is unlikely to succeed; it is unlikely to command the respect of the people who are bound by it. You can see a good example in copyright infringement online. It is not because people think infringement is lawful that they infringe copyright; it is because they can do it without consequence. Unfortunately, that is the case with most laws that require age assurance, most notably in relation to content, as I have already mentioned. So what I am trying to say here is that it is important that effective age assurance complements legislation for the legislation to work, and that is a starting point. Jonathan already mentioned that we have got too many rules. The solution is not to have more legislation. The real starting point is to make sure that what we have is enforceable and practical. Now, things are changing lately, with more specific legislation coming onto the books, specifically mandating age assurance and imposing specific obligations on platforms and websites for non-compliance. But it is not without problems. The fundamental problem with age assurance, in my view, is that it has essentially been a debate about children out and adults in, and that is not how it should be. Sonia has already talked about evolving capacities, so we cannot just classify everyone under the age of 18 into one category, use age assurance on them, and have adults on the other side.
But there is also the other binary debate, between adults who feel strongly about privacy and their ability to access the internet without any restrictions, and keeping children safe. That balance also has not been struck appropriately thus far. And there is also the issue that the age threshold for children accessing content services or purchasing goods can vary across nations, across cultures, even within the same country; there are cultural variations in that. But the law does not always factor in the evolving capacities of children. The EU Audiovisual Media Services Directive, which refers to a notion of risk of harm as the basis for adopting appropriate safeguards and measures for age assurance, is a good example because, in principle, it recognises the evolving capacities of children: an 11-year-old is very different from a 16-year-old. But not all legislation does that. I have already talked about the cultural variations in age thresholds, even for what is generally perceived as harmful content for children. Even within Europe, there are variations in the age threshold for accessing porn, even within the sub-18 category. So that is where I stand on age assurance laws. We have toyed with the idea of self-regulation, especially in the online space, for more than three decades, and it has not worked. What I am saying is that we do not need new age assurance laws; we already have age assurance laws. What we need is workable legislation. And like you said, labelling and certification can be mandated by law or could be voluntary, but they obviously have to go hand in hand with and complement legislation. I do believe that measures like certification and labelling give more consumer choice, more parental or caregiver autonomy, and also autonomy for children, but they cannot be a substitute for legislation. That is how I feel about it.


Maarten Botterman: Thank you for that. Comments online on this subject, Torsten?


Torsten Krause: There are discussions online,


Maarten Botterman: but let's skip to Jonathan and Sonia, because they raised their hands. Other comments? There are comments, but not on this. Jonathan, please, your thoughts on this.


Jonathan Cave: Okay. Thank you. Thank you very much. And thanks for that.


Maarten Botterman: I can’t hear you right now. Am I still inaudible? One moment. Yes. Jonathan? Yes. No.


Jonathan Cave: No. I’m still inaudible.


Maarten Botterman: I see the technical team working on it. Yeah. It will be the same issue for her, because it is the settings here in the room. Can you say something? Something? Something? Say something. We can't hear the online speakers. We can't hear the online speakers. Okay. One moment. Yes. We can hear you now. You're back.


Jonathan Cave: Okay. I'll be very brief because of the technical delays. I think one of the things that we learned with age verification in relation to pornography is that the very existence of the single market, or the global use of these technologies, makes it very difficult to maintain differences between jurisdictions. Even attempts to tackle the problem by regulating payment platforms, because you couldn't regulate the content providers on the platforms, sort of failed, because the content was coming from outside the jurisdiction, and the fact that it was banned within the jurisdiction merely increased the profitability, or the price, of external supplies of this kind of potentially harmful content. Another thing is that I completely agree that some mixture of self- and co-regulation and formal regulation, backed by a concept of a duty of care or harm-based regulation, something more exacting, is required to keep pace, not just with the evolution of technology and people's understanding, but with how it reacts to existing bans and regulatory protections. And the final point was to say that we should probably also be aware that certification schemes and other forms of high-profile regulation can convey a false sense of security. By the same token, it may be that some of the harms against which we regulate are no longer really harmful, because people have evolved away from the point where they are vulnerable to them. In that sense, I would just point out that in relation to disinformation, misinformation, and malinformation, there is evidence that a lot of the younger generations, Gen Alpha and Gen Z in particular, are less vulnerable to these harms than their unrestricted, unregulated adult counterparts. So it may be that some of the harms we worry about cease to be harms, or are no longer appropriate to tackle by legal means. Okay, those are my comments.


Maarten Botterman: Thank you for that, Jonathan. Sonia, please.


Sonia Livingstone: Thank you. I wanted to acknowledge the conversation in the Zoom chat for the meeting, identifying the range of stakeholders involved. Maybe we should have said at the very beginning that in facing this new challenge of age assurance in relation to IoT, a whole host of actors are crucial; they all play a role, and there are difficult questions of balance which will vary by culture. So yes, we need to empower children and make sure that these services are child-friendly, that they speak to them and are understandable by them; we need to address parents, we need to address educators, and we need to involve media literacy initiatives in exactly this domain. But I wanted to make two points there. One is that we can only educate people, the public, in school and so forth, insofar as the technologies are legible, insofar as people can reasonably be expected to understand what the harms are, where the harms might come from, and then what the levers are, the available resources for the public, the users, to address those. And we are not there yet. And so on the question of balance, I think the spotlight for IoT is rightly on the industry, and on the role of the state, as Abhilash said, in bringing legislation. On that point, we have been doing some work trying to make the idea of child rights by design real and actionable. We have been doing some work with the industry, the stakeholder group that is kind of most … And so I just want to open up the black box of industry a little. Because what we are finding is that from the CEO, through the legal department, to the marketing department, the design department, the developers, the engineers, all the different professionals and experts that make up the development of a technology, for the most part, most are not aware of the child user who may or may not be at the end of the process. Most of them have in mind a different kind of individual, not a family that might share passwords and share technologies, and by and large a relatively invulnerable or resilient user, rather than one with a range of vulnerabilities, including the children we have been thinking about. So let's look into this notion of 'the industry' and think about where ethical principles, a duty of care, legal requirements and child rights expectations will land within a company, whether it is a small startup that is completely hard-pressed and has no idea of these concerns, all the way through to an enormous company that has a lot of trust and safety focus at a relatively unheard level in the organization, and a lot of engineers and innovators who are pushing forward without the kind of knowledge and awareness that we are discussing today. So pointing to industry, and to governments regulating industry, just opens up the next set of challenges about who to address these issues to, and how.


Maarten Botterman: Very well said, and also an excellent segue to our next sections, because basically this is about, perhaps, educating the equipment through AI, and for sure about capacity building for parents, children and their environments, which we will talk about in the last part. I'd like to invite Pratishtha Arora to begin by explaining the role she sees for AI in this interaction. Thank you.


Pratishtha Arora: Yes, thank you for that. I think the impact of AI is putting a lot of emphasis on children and their engagement with devices. In terms of impact, I see both the positive and the negative. On the positive side, it is also a learning platform for many children who are maybe slower developers, watching videos, learning, and building their own capacities. On the other side, when I talk about technology and its advancement and the impact on children, a big challenge is that children are being given all the rights to use a device without… Take the smart speakers children use to call their parents, to voice what they feel, to engage with: the device gives them that freedom of expression to learn, in the sense that it answers back when a child asks a question. But the speaker, the device, is unable to understand the age of the child. Is it a younger child asking the question, or a 13-year-old, or a 15-year-old? So age assurance is a point there, and that gap is not being identified as of now. So that is where I feel these technological advancements play a very big role. On the point of how it leads to a negative impact: there is over-dependence on these technological tools as well, because for every small thing we go up to the device and ask it to solve the problem, and that is also affecting the development of skills, the physical development of a child, and the mental development as well, because we become totally dependent on what the device responds to us. In terms of standards, I feel we need more defined standards for where children have access to devices, where the engagement of parents plays a big role, and where other stakeholders from industry are involved, so that these rules are followed whenever a device is designed that children will use. Because, as Sonia mentioned, developers are often not able to see that a device needs to be designed from a child's perspective. That thought has to be ingrained, to the point that any technology being designed needs to be child-friendly as we advance with technology. Of course, I am reinforcing again and again the point that safety by design is a key concept, and that going forward all these aspects should be taken into consideration, so that each and every child is looked after, irrespective of their age, their skills, and their learning capabilities. Coming from India, I have a varied range of observations of how technology is being used by children, and also of how it can mislead them in their engagement in online spaces, whether in online gaming or in social media interactions. Because somewhere or other, when we see development on one side of a technology, there is the flip side of its misuse. So we need to keep the right checks and balances; I think that is what has been coming up again and again in the conversation as well.
We need to have the right checks and balances. Also, the Internet of Things is quite an alien term when we talk about it in India, which is sad. Somewhere or other, we need to break down this concept of the Internet of Things for people, to simplify the understanding of what exactly it means. When we talk about trust and safety, it is again an alien term. We are advancing with technological tools only among the certain sector of people who are involved in this whole game of designing and developing tools, but for the masses it is still an alien concept: what it is, and how the standards need to be defined. That, I think, is the missing gap when we talk about child-friendly or safety by design as a concept. Also, technology has been knocking on everybody's door, so a smart device in the form of a phone is in everybody's house, but having other gadgets is more a question of which section of society engages, of the differences in the economic backgrounds of families. It is more the privileged section that encounters the problems and challenges around such devices and the engagement with them, while the smartphone is in everybody's house. So I feel this also deserves global attention: the phone is a device that any child can use, and there has been an emphasis that we need to look at such devices as well. So I think I'll stop there, with the last point of how data governance plays a very big role here. Whenever you are setting up any device and giving out data, you also end up giving out your child's data. So what is the governance of that data, the privacy aspect of children's data there?


Maarten Botterman: Okay. Thank you very much. Some good points made. Jonathan, I’ll leave the floor to you.


Jonathan Cave: Okay. Thank you, and thank you for that discussion. I just have a few other points I'd like to introduce. I'm an educator. One of the things that AI does is that it not only facilitates education and children's development in the ways we normally understood it; it also preempts or distorts it. It has an influence on the way people think. One aspect of this is that the AI devices children use learn about the child, but in the process they also, as it were, program the child: they teach the child. Now, one of the things they teach them is to rely on the system for certain kinds of things. We outsource our memory to our AI devices, and we ask for things that in the past we would have thought about. When a child searched for information or asked a question in the past, they would have had to read a book, for example. They would have read things they weren't specifically looking for, and they would have had to think about them to develop an answer to the question. If the AI gets very good at simply answering the question that was asked, the educational aspect of that is somewhat lost, and the child's reliance on the device becomes in a certain sense deeper: the child becomes an interface that sets the device in motion. Now, this is something we have to deal with. We might say what we need to do is to prevent it, but my students say to me, with respect to the use of AI to write essays and so on, that it is a transferable skill: the world into which they will grow will make use of these technologies, and learning how to use the technologies may be more important than learning to do the things that we used to ask them to do. So there is a deep question here, I think, of what experience is best for children to help them become the kind of adults who can successfully work in this environment. And there are some technical things we can do along the way, like developing specific or stratified large language models, or even small language models, for specific children to use, or using synthetic data or digital twins to put a sandbox around children's experience of using the technologies. But I think the general lesson is that if these technologies, which are used in this way to serve people, are oriented towards solving past problems, and developers often tend to do that, then developers need to be required to think through the consequences of what it is they are doing. And that requires a continuous conversation involving children, developers, parents, and the rest of society, one that doesn't just stop when the device is released into use. And a final point on games. It is certainly true that games, particularly immersive online games, have a kind of reality or salience to a person that is even greater than the salience of real experience. They can cut through to the way in which we think in ways that normal contact does not always do. We know that from neuroscience experiments as well as from everyday experience, which suggests that these games could actually be used to help people navigate this new world, to promote ethical development instead of attenuating ethical and moral sensitivities among children. And then there is a difference, and this was very compellingly brought out by the experience from India,
between the technologies which are designed for or used by elites, whether privileged elites in the sense of money or trained elites who can navigate and understand the risks and benefits, and those same technologies when used by everyone else, where the uses become different and evolve away from those the developers originally intended. So I think that is a fundamental issue that needs to be dealt with at the development and deployment level. Oh, and then the final thing is to say that, of course, one of the things that AI can do is police the problems that AI creates. One of the things one would expect a machine learning model, a deep neural net model, to do is to keep track of how these technologies are changing our children, and to respond. The solution to the problems created by AI is, I don't know, more AI. I would hesitate to actually endorse that, because then we do give up our human agency. But those are my concluding thoughts on AI in this respect.


Maarten Botterman: Thank you very much for that. Yeah, that interplay is ongoing and is forming us, with a clear call to be aware of the dependency that may grow with AI, for safety for kids from the outset, by design, and the human agency warning: we may want to keep that agency in some way or another. Jonathan?


Jonathan Cave: Yeah. The other thing I would just add to that is Piaget's dictum that play is a child's work: that sense of engaging with these technologies through play allows us to develop in ways that doing things in anger, or for serious reasons, does not. And there is some really interesting work going on at Oxford on play as a state of mind when we engage with technologies. Okay.


Maarten Botterman: Thank you so much. Torsten, anything from the room or the chat?


Torsten Krause: No comment on the current block, but there was a discussion, or a hint, that it is not only necessary for children to understand how IoT works and what the functionality behind it is; parents must also know how it works and how it could influence their children. But maybe that is an aspect we can add to the next block, too.


Maarten Botterman: Yes. We will come back to that at the end, because it is not only about children and parents, but also about babysitters and the wider social environment. So we will take that to the end. Jutta, do you want to come in? Any other specific questions on the AI impact? I mean, it is clear, right? We are learning with AI, we are learning from AI, but AI is also learning while we do it. And let's make sure that it learns in the right ways, taking into account the values that we share around the world, and those may not be all the values you got from your parents: inclusiveness, the recognition and valuing of diversity, and human agency and privacy. I think these are some clear examples of values that we share, to be taken into account in new tools and new developments. And let's also keep in mind: what about all the old stuff that is already out there? That needs to evolve too. Now, on standards: we heard about legislation, we heard about industry practice. And in a way, if we look at, for instance, electronics, we have the IEEE standards, which are global standards. For internet standards, we have the IETF standards that set certain rules. They are voluntary, but they are industry standards, adopted and at least agreed and discussed around the world. And more of this is likely to come. Jutta, please.


Jutta Croll: Yes, if I may come in, since you got back to the standards. I was considering what Abhilash was saying — that labeling and certification schemes should be mandated. That would be the first step to go further with IoT: having a mandatory certification scheme or labeling. But then it also needs to be accepted. And one example that we have known for about 20 years now is Section 508, which at that time obliged the US government administration to have accessibility as a precondition in procurement. From that time on, any product bought by the administration in the United States needed to be accessible for people with disabilities. That started the whole process of building a broad range of products certified to be accessible, and it also brought prices down — the products became affordable because the administration was obliged to buy only those products that are accessible. If we could come to that next stage — not only labeling and certification, but having it as a procurement precondition — that, I do think, would really help to bring forward labeled IoT. Do you understand what I mean? It came to my mind as a really good example that we could follow up on.


Maarten Botterman: Yes, I see Jonathan clapping his hands, because we talked a lot about this. Basically, we've got standards and we've got legislation. The problem with legislation is that it's per jurisdiction; you can then start to harmonize across jurisdictions, and that takes time. But at least if there are principles that are globally recognized, you have something to look to. And organizations like IEEE, IETF and ISO have a role in that. I know Jonathan is much more an expert in this than I am, and I see he raised his hand. Jonathan.


Jonathan Cave: Yes, thank you very much. That's a brilliant point. The idea of using public procurement as a tool, as a sort of complement either to self-regulation or to formal regulation, is, I think, one that's worked in a number of areas. One of the things it can do, as Jutta mentioned, is to set a floor under the kind of capability that we wish to give an economic stimulus to: the things have to be accessible, they have to have certain capabilities. But it can also create a kind of direction of competition. When you specify a procurement, part of it is the requirements that the proposed solutions have to meet; the other part is the things on which you award scores. So procurement tenders, written appropriately, can also stimulate innovation to come up with better and more effective solutions. There's that aspect that puts money into developments that might not yet have a market home, that might not yet be profitable to provide, but which, with certain development or certain economies of scale, might become profitable. And you can do that without putting governments in the position of saying 'this is what we need', because governments are particularly bad at picking winners and specifying solutions. What they can do is move the whole industry in the direction of providing these things. That also happens — and this is the final point — when it comes to the adoption of standards within procurement. European standards, although not developed by the EU, are often incorporated into public procurement tenders, with the idea being that you have to show compliance with the standard or equivalent performance. That introduces into this market-based alternative to regulation something which looks like an outcome-based or principles-based criterion of what's acceptable and what isn't. In other words, you either have to do the thing which is there in the code — you have to comply — or, better still, you have to show that you can do better. And if you do that, you harness the ability to give the customer some say in the matter, not just a negotiation with a procurement officer inside a government bureaucracy. So I think it's a really fruitful approach.


Maarten Botterman: Yes. A clear example of that is internet security, where there are standards that address flaws in the current system — like DNSSEC, like RPKI. These are standards that can be implemented and adopted by service providers. Again, these standards are global, but they're voluntary. In the Netherlands, for instance, the Dutch administration includes them in its tenders for services, and with that they ensure that the service providers — which I as a citizen can also go to for those services — build on that security basis. So that's one example. And yes, if government isn't sure, at least it can help set the direction. So Jutta, thanks for raising it; Jonathan, thanks for bringing it home. And you wanted to add to that?


Abhilash Nair: Yes, thank you. I just wanted to follow up on what you said about laws mandating this — I said laws 'could', rather than, you know, 'must', because I recognize there are some instances where it's not possible. One other thing to add is that it might also help mitigate the literacy problem of parents or caregivers, because policymakers often assume that parents and caregivers are always educated and that every child comes from a two-parent, middle-class household. That is not the case, especially in countries with varying literacy rates, let alone digital literacy rates. With that kind of certification in place, they might still be able to make informed choices for children.


Jutta Croll: You gave the perfect segue to handing over to Sabrina, I would say. Yes — please go ahead.


Maarten Botterman: And yes, we also have a remark from Dorothy online, who says there are so many people who are not online yet — how do we make sure that they don't miss the boat? So after looking at what we can have technology do, and at developments with AI and standards, in the end it's about the people. How can we make sure that people use it well? Sabrina, please.


Sabrina Vorbau: Yeah, thank you. Good afternoon, everyone. I kept quiet until now because I think it makes sense for me to come in at the very end, just to complement the various aspects that have been discussed — how we can indeed build a bridge from the information and knowledge we have to the end users, who are of course primarily children and young people, but not exclusively: also parents, caregivers, teachers, and, not to forget, other stakeholders such as policymakers and the industry. I want to come in with a concrete example, representing the Better Internet for Kids initiative, which is funded by the European Commission under the Digital Europe programme. The initiative aims to create a safer and more empowering online environment for children and young people. In the European Union, we have the Better Internet for Kids Plus strategy, which is based on three core pillars: child protection, child participation, and child empowerment. So, as was mentioned already, we try to empower children and young people to become agents of change, but in order for them to do this, they need us as adults to provide a responsible space. And that, of course, is the goal of the Better Internet for Kids initiative: to promote responsible use of the internet, protecting minors from online risks such as harmful and inappropriate content, but also providing resources for parents, educators, and other stakeholders to better support them on aspects such as online safety and digital literacy. And, of course, Better Internet for Kids also addresses the very prominent topic of age assurance, to ensure that children and young people engage with age-appropriate content and are protected from harmful content. To give some concrete examples of materials you will find on the Better Internet for Kids portal: just earlier this year, together with the University of Leiden in the Netherlands, we published a mapping of age assurance typologies — a comprehensive report that gives an overview of different approaches to age assurance and the associated legal, ethical, and technical considerations that were picked up by my fellow panelists. Just to touch on some key areas: first, a diverse approach to age assurance — there is no one-size-fits-all solution; second, the crucial importance of privacy and data protection concerns, which were already highlighted by Jonathan; and third, the balancing act between effectiveness and user experience. As said, this is a very comprehensive research report, and as with existing laws and policies, we need to ensure that this knowledge is translated into user-friendly guidance, so that we transmit this expertise to educators and parents, who are really crucial in this process, and also look at how we can build capacity and make sure this is properly implemented at a local level. That's why on the Better Internet for Kids portal, which you can find at betterinternetforkids.eu, we very much put age assurance in the spotlight, specifically focusing on two stakeholder groups: first of all, educators and families, providing resources that help with proper awareness raising, but also knowledge sharing to foster digital literacy.
I think this was also a comment given in the chat earlier: how can we ensure proper media literacy education? That's why we developed an age assurance toolkit that includes age assurance explainers. Just to give some examples of what users can find in this toolkit: first of all, an explanation of what age assurance is in the first place — as was mentioned before, age assurance may be a term that is not so accessible for many people. That's why the toolkit also provides concrete examples of when age assurance comes into play, why it is so important, and how it can actually protect children. I think that's important for carers and parents to understand: why is this topic so important, and how can it protect my child? In addition — I have a printed copy here, so you can see it's a much lighter report — we also designed this together with children and young people. We really try to develop these resources together with the end users, because ultimately it's for them, so it's very important to involve them in the process. And I think it's always very eye-opening, because we are very much used to certain terminology that is quite self-explanatory to us, but for some people it is not so accessible. So it's important to really follow this co-creation process. The group also touched already on the black box of the industry. Of course, with the Better Internet for Kids initiative we really try to bridge the conversation and have industry and policymakers around the table when we discuss certain topics. So on the website we also have resources aimed at digital service providers to check their own compliance, in the form of a self-assessment tool, manual, and questionnaire that we also developed in the Netherlands. The aim here is for the service provider to critically reflect on their services and how these may intersect with the protection of children and young people. What is important to note is that it only provides guidance — it's not a legal compliance mechanism. And here again, as was mentioned before, it's not one-size-fits-all when we talk about online service providers; we talked about gatekeepers and search engines, so I think we also need to acknowledge their diversity. Then, maybe just to conclude, to highlight on behalf of the European Commission that a lot of focus and work is being done in this space, and this is complemented by the work the European Commission is doing on age verification. For example, following a risk-based principle, the European Commission, together with the EU member states, is developing a European approach to age verification. The Commission is preparing an EU-wide interoperable and privacy-preserving short-term age verification solution before the European Digital Identity Wallet is offered as of 2026 in the European Union.
So I think conversations like today's are really important — really trying to pull the different strings together and bring different stakeholders together to work together. And hopefully, in future settings of the IGF, we will have children and young people, but also educators, participating in such conversations, because we really need them. We really need to understand what their needs are, in order for us to act properly and, as I said, really build this bridge to share the knowledge we have on the different aspects and make sure it is translated properly at national and local level.


Maarten Botterman: Thank you very much, Sabrina. I know this is diverse by definition, because it's 27 member states that are all finding their way in this. Globally, this may be very good input. In my experience of capacity building in general, we have examples of good practice from all over the world: legislative examples, teaching examples, practical examples. But how to apply them best in your region is for the people in the region. This is why capacity building isn't only about using the same guide around the world; it's also about understanding the whys and the hows, and making sure it's applied appropriately for India, for the different regions in Africa — Africa isn't one region either — for Latin America, et cetera. So you can adapt to that and learn from that. The same goes for the relationships with children and parents around the world. We need to recognize that we can't set one standard for all, but we can have some principles that are valid for all. So with that — I see, Jutta, you grabbed the microphone.


Jutta Croll: Yes, I grabbed the microphone because I saw a comment, or a question, in the chat that I would like to address. Sabrina mentioned the EU ID wallet that is due to come into place in 24 months' time, although the European Commission has already acknowledged it will need more time. The EU ID wallet is an instrument for ID verification, so it's more than age verification. But the wallet is also foreseen to make age-verification-only possible: it needs to have an option so that you can use it only to verify your age, without giving away your identity. That is very important in regard to the privacy and data protection aspects that were already mentioned by Jonathan Cave. The question was whether the Commission is developing its own age verification tool. I would not say the Commission will develop it themselves, but they issued a tender in October this year for the development of an age verification instrument that should be white-labeled, so that whatever age verification instrument is available in any country — in Europe or globally — can have an open interface to that white-label tool the Commission has tendered for. And they did so because they gave priority to age assurance and would not wait the 24 months for the EU ID wallet to come. That also shows how important and how topical the issue is that we are talking about here. Age assurance is very topical, not only for the Commission — we've heard several sessions talking about it already here at the Internet Governance Forum. And we are pretty sure that train has been put on the rails, I would say.


Maarten Botterman: Thank you very much for that. With that, I think we've had a pretty good cycle, and I'd like to ask people in the Zoom room, and here in the room: if you have any final questions, raise your hand and we'll do a final round. Yes, I was looking at… Dorothy has been very active in the chat, and Fabrice has been very active in the chat. Do any of you want to speak as well? Otherwise, we go to Sonia. As you're not raising your hands — Sonia, please.


Sonia Livingstone: Thank you. I just wanted to make a point that hasn't perhaps been made — a political point. I'm very struck by how much the industry innovates in relation to complex and challenging technologies and then introduces them into the marketplace. We're seeing this with AI now: suddenly it pops up in all of our search and our social media in ways that were not necessarily asked for. And the same, of course, will happen with IoT. And then worthy groups like us sit around and say parents must do this, educators must do that. Of course, we want them to. But this is a major shift: innovation in commerce placing an obligation on ordinary people and on the public sector. And this, I think, is why the conversations about regulation, certification, standards and obligations on industry are really so crucial — because otherwise the profit stays on one side while the burden falls on those who are already extremely hard pressed. So let's keep up the pressure on the industry, without in any way undermining the argument that, of course, media literacy and public information and awareness are crucial.


Maarten Botterman: Yes, thank you very much for that. Of course, regulatory innovation is also an approach within Europe — I'm European too — where the European Commission, for instance with the AI Act, without immediately moving to regulate, first invites the industry to say: what should we talk about? What should we regulate? What should good practice look like? And regulation doesn't only need to come from countries; it can also come from industry, as self-regulation. Jonathan, yes, please — I learned this from Jonathan. Please, Jonathan.


Jonathan Cave: I just wanted to applaud Sonia's point, really, because with a lot of these things there is a transfer of responsibility from industry — well, from the developers on the tech side of the industry to the service providers and the comms providers and the others who are already regulated — and from them to us. To a certain extent, society is being used as a kind of beta tester or alpha tester for these technologies. They're spat out, and the ones that succeed, succeed, and the ones that don't, don't. Maybe a regulatory structure grows around them to make them more robust, but the irreversible changes that take place will nonetheless have taken place and cannot be undone, even if we later come to regret them. And so some element of, A, a precautionary principle, and B, an appropriate placement of responsibility should be important. And when I say appropriate placement — these things are uncertain, so where responsibility lies should reflect some mixture of being able to understand the uncertainties, being able to do something about them, and being able, in particular financially, to survive the disruption involved in getting from where we are now to a solution that we can not only live with but can accept and understand. And I think simply providing and protecting, or responding to industry by shoring up the crash barriers and so on, encourages industry to take less and less responsibility for the consequences of what it does, or to define them in ever narrower and more technocratic terms and to say: this is safe because in lab tests it works out safely. We saw this with medicine — this is why real-world evidence in the use of drugs is so important. They may survive a randomized clinical trial, but put them in the real world and they don't work like that. So there needs to be some way of joining this up, so that industry at all levels, people, and government are partners in something, and not just sitting on predefined responsibilities. So anyway, thanks for making it political.


Maarten Botterman: Thank you very much for that. We've got a lady here in the room. Can you introduce yourself?


Helen Mason: Thank you. And thank you for a very interesting session, which I unfortunately came a little late to. But nevertheless, I'm picking up on a few points.


Maarten Botterman: What is your name?


Helen Mason: My name is Helen Mason. I'm from Child Helpline International in the Netherlands. We work in 132 countries to provide child helplines, 24/7, to children and young people via a variety of channels. Two points, really. First, I would say that we must include civil society and the first-line responders in these kinds of discussions, because they are the people actually talking to children and young people and dealing with reports of harms that have happened online. Building the capacity of those frontline responders is absolutely crucial to being able to respond adequately, to report and know where to report to, and to have proper alliances and referral protocols with law enforcement, for example, and with regulators. Our work at Child Helpline International is really advancing this particular aspect, to make sure that all of our members are well equipped to respond to all kinds of incidents that might occur online. And we have much data showing an increase, for example, in areas like extortion, with children and young people not knowing where they should report: has a crime been committed, what should they do next, should they delete the evidence, et cetera. So having those frontline responders capacitated to respond adequately is really vital for us. One more point I want to make is that the data generated by the child helplines themselves, as a result of the conversations they have with children and young people, is a really unique resource. So I would really encourage all stakeholders to have a look at the information that we collect: it's around prevalence, around help-seeking behavior, around trends, and the case material we have holds a lot of information about the actual experiences of children and young people. Of course, it's all handled very safely and anonymized. Working together with people like Sonia, we can really use this information to feed back into policy, and I'd really encourage all of the stakeholders to take a look at the information that we're publishing online. Thank you.


Maarten Botterman: Thank you so much for your remark. With that — please.


Abhilash Nair: Thank you. Thanks for those comments — very useful indeed. I just wanted to follow up on what Sonia said about corporate liability, or imposing obligations on the industry. We discussed this at a different panel yesterday — I wasn't on the panel, but I heard about it. To what extent should that liability extend, and who should be held accountable? Should it just be financial penalties, or should executives be sent to prison for gross negligence and other lack of action? I wondered if you had any thoughts on that, Sonia, because I don't think it is for want of obligations on websites or platforms or providers that things haven't worked so far. Financial penalties are sometimes too little: even if they sound like a lot of money to the average person in the street, for a large company — a tech company in particular — it's not a lot of money. Would introducing criminal sanctions for corporate executives make a difference? It's just a thought rather than a question, really.


Maarten Botterman: Yes, thank you for that. I think with all this — and this is one of the reasons for several remarks throughout the session — companies will behave when they know attention is being paid. I dare to hope — I'm an optimist — that some companies really care and do it right from the outset, and I know there are companies who do. I think these will be the companies that last in the long run, not those chasing short-run profit. But accountability in that is key, and what to be accountable for is the thing we need to be clearer on. We're not going to mandate from this little group what a parent may or may not say to their child, what industry may or may not put on the market, or what a child may or may not do. But we can help by making clear what needs to be taken into account, and capacity development around the world is important in that. We discussed from early on that if you protect children by not allowing them to use any of it, at some point they will be allowed to use it, and then they dive in at the deep end. This is the same problem we see, for instance, with internet access in Africa: the biggest challenge in Africa is to get online, but as soon as you're online you're in trouble — and have a lot of opportunities — so you need to be aware before that. The same is true for children. Capacity development is also for all stakeholders: legislators, administrations, companies. Another example: there is what in Europe is called the Corporate Sustainability Reporting Directive, which makes companies aware of the ecological footprint of what they do, and with that moves them slowly towards more responsible behavior. Something similar should be obvious here too: there is legislation saying you cannot harm children — let's make sure it's understood what that means in the context of the new digital world as well. And last but not least, of course, the ability to act is something that needs to be brought in, and brought in together in reasonable ways. From the IoT perspective, this is also a very important part of how you deal with devices, users, et cetera. But I really appreciate, again, after a couple of years, working together with Jutta and you all on something where this comes together, because in the end technology is for the people, and that it should serve the people is what I and my colleague Jonathan tend to believe. We'll go to Jutta and then to Jonathan and Sonia for the last word — and Musa, okay. So after my attempt to round off, we will now open up again, and then Jutta will round off. Musa, I'm unmuting you. Please go ahead. Yes, we're listening — and we're even hearing you.


Musa Adam Turai: Okay, my question is — sorry, I can't hear you very well, that's the problem. My question is: how can defenders of free expression in these regions address the tension between the protection of cultural and religious values and the upholding of universal rights to free speech, particularly in deeply conservative societies? I'm listening.


Maarten Botterman: Yes, I'm listening. I'm trying to comprehend the question behind this remark. Okay — Jonathan, you got it? Yeah, please answer, and then continue with your final statement.


Jonathan Cave: Okay, yes. If I understand the question well: there is a tension between free speech rights — in particular the exercise of those rights by children — and the need to protect children, not only in their own rights of self-expression but also from the harmful consequences of the self-expression of others, in societies where freedom of speech is heavily restricted, where you have freedom of speech but only in certain directions, and certainly under surveillance. Take the right to be forgotten, for example. When I was very young, I said very many intemperate, politically intemperate things. Later on in my life, I went through a period where I was very glad that those things had been forgotten, and later still I came to a point where it was very important to me, in my image of myself, that I had said those things. Fortunately the consequences for me were non-existent or minimal, but we have seen that the consequences can be very great. What that says to me is that when we talk about child safety and child protection, it is not just protection from the content that they see online, but from the social, legal, political, terrorist, whatever consequences of using those online platforms. There is an issue there where safety goes beyond safety within the online environment. So I get that point — it's a hard question, and I don't have an answer for it. What I wanted to say by way of rounding off was really just on this last point about how we make corporations, and actually governments, pay serious and sustained attention to these issues. I remember that in the antitrust environment, when the US passed the Sherman Antitrust Act, the liability on a company that broke competition law fell only on the company as an economic person. It was only with the Clayton Act, where personal liability was brought in, that the big trusts began to sit up, pay attention and change their behavior. So personal liability does make a difference. In the Guardian today, there was a call for companies to be held responsible — and this is the second aspect of this — not just for the harms that they have done in the past, but also for producing improvements into the future. What we're seeing today with things like the Grenfell Inquiry or the Post Office Inquiry is that when something goes wrong, people are held to account. They're supposed to stand up and say, we're sorry, we've learned lessons, and so on, until the next thing comes along. This doesn't really help when the problems are systemic and cannot be remedied by somebody saying 'I'm sorry', or by paying an amount of money to somebody for something. We need something that is more continuously engaging. And finally, it is commonly the case, as we saw with Paula Vennells in the Post Office Inquiry, that the people who are supposed to bear the responsibility evade that responsibility, or claim they didn't have the information. Now, in many criminal contexts, the concept of 'what I knew when I took the action' is replaced by something which says: you were sitting in this position of responsibility; you had certain privileges, like a universal service obligation; this is what you knew or what you should have known. And if you're not aware of these things, that by itself is a black mark against you. It's only the fact that these things went wrong that caused the light of day to shine upon them. So I think that we should take the issue more seriously.
And with politicians, this happens: they come into office, they say things about children and online risk, and the box has been ticked as far as the newspapers are concerned. But it doesn't become part of the culture — the safety of children doesn't become the kind of cultural value on which we act, one that actually changes what we do when we have new decisions to make. Okay, so that's my call to arms. And now I'll shut up. Thank you.


Maarten Botterman: Thank you very much. As a certified board director myself, I must say I've seen that ongoing trend, and I know I'm personally liable for not doing the right things or not asking the right questions, within reason. If I exercise my fiduciary duties in the right way, then I'm allowed to make mistakes too. But I fully appreciate your point, and that attention is a crucial point — also a call for boards to be aware of what they tell their CEO to do: make more money, or make sure that you do it in the right way. I see a lot of nodding heads here, and I even see Sonia's smile. So Sonia, to you, and then the last word to Jutta, please.


Sonia Livingstone: Brilliant. Thank you very much. Lots of really great things said. I wanted to come back to the question of children and the way in which their rights can be heard and acknowledged. The word 'user' is a really problematic word, and I think if we talk about users, we can quickly forget there are children. By and large, in relation to IoT and other innovations, children are not the customer. They don't pay. They don't always set up the profile, especially for IoT. They don't seek remedy unless we scaffold that. They don't bring lawsuits. They don't get to speak at the IGF. They are kind of uniquely dispossessed in relation to these debates. And yet they are one in three of the world's internet users — one in three also of the world's population. If I continue my statistics: one in three of the users are children, one in three are women and one in three are men. We have to rethink who the user is and recognise their diversity. My last word might be to mention, as hasn't yet been mentioned, that in General Comment 25, the UN Committee on the Rights of the Child has set out how the Convention on the Rights of the Child applies to the digital environment, including to IoT and to a whole range of digital innovations. In so doing, it maps out and tries to look within the industry, and at all of those who provide the checks and balances around the industry, as well as speaking to the state. So I think what we're saying when we want companies to be aware, or board members to tell their CEO, or perhaps executives to get arrested when they land at Heathrow or wherever, is that we're trying to recognise that there are people within this sector — very many agents — who can be part of the process of making things better. I would include those in the engineering schools who are training tomorrow's engineers; the data scientists who think they're just processing anonymised data and that it has nothing to do with them; the marketers who are creating a certain vision of the user, and of how the technology might be used, when they promote it; and so on. Great that we've talked about procurement, which I think is really critical. And so I would like the next session at the IGF on this topic, if I may be so bold, to include representation and the voices of children and young people in the room, and to begin with a more disaggregated vision, both of children and of the actors who are shaping this technology of the future. Thank you.


Maarten Botterman: Thank you — very beautifully said.


Jutta Croll: Thank you for giving me the last word. I don't think I need to do any more wrapping up, because everything has already been said. Just to reiterate what Sonia said at the beginning: children are being impacted by the Internet of Things even though they might not be the users as we have understood users so far. If developers have that in mind, I do think that is very important. And to reflect on what Jonathan was saying about the politicians, I just need to mention that yesterday, for the first time ever in 19 years of the Internet Governance Forum, children's rights were the subject of a high-level session. There are only five high-level sessions set by the United Nations, and one of these five was dedicated to children's rights in the digital environment. So awareness is being raised. We have come a long way, and we have a long way to go, but these steps are milestones, I do think. People will remember that, and we will bring it forward to Norway next year. Thank you so much for being here, for listening and for taking part. Thank you.


Maarten Botterman: Yes, thank you very much. I just want to applaud this too. Thank you so much, everybody, for attending and for contributing in any way, shape or form. I really appreciated the session, not only as a DC-IoT person, but as a father and even a grandfather. So, see you around.



Jonathan Cave

Speech speed

148 words per minute

Speech length

3517 words

Speech time

1421 seconds

Data collection can be both beneficial and harmful to children

Explanation

Jonathan Cave points out that data collected from children can be a source of both safety and potential harm. The immediate and long-term effects of exposure to inappropriate content or manipulation through profiling are concerns.


Evidence

Example of privacy laws setting age limits for data collection and processing.


Major Discussion Point

Age-aware IoT and data governance


Static age limits may not be appropriate given evolving capacities of children

Explanation

Cave suggests that using chronological age as the sole basis for protecting children online may not be the most appropriate approach. Children’s digital maturity and experience with online environments should be considered.


Major Discussion Point

Age-aware IoT and data governance


Agreed with

Sonia Livingstone


Pratishtha Arora


Agreed on

Need for age-appropriate design in IoT and AI


Differed with

Sonia Livingstone


Differed on

Approach to age verification and assurance


AI can both facilitate and potentially distort children’s development

Explanation

Cave discusses how AI can aid in children’s education and development, but also potentially distort it. He points out that AI devices not only learn about the child but also ‘program’ the child in certain ways.


Evidence

Example of how AI answering questions directly may reduce the educational aspect of children searching for information themselves.


Major Discussion Point

Role of AI in age-aware IoT



Sonia Livingstone

Speech speed

145 words per minute

Speech length

2061 words

Speech time

847 seconds

Need to consider broader child rights beyond just privacy and safety

Explanation

Livingstone emphasizes that a child rights approach should consider more than just privacy and safety. She argues for a holistic approach that includes rights such as access to information, participation, and appropriate provision.


Evidence

Mentions the UN Convention on the Rights of the Child and the concept of best interests of the child.


Major Discussion Point

Age-aware IoT and data governance


Differed with

Jonathan Cave


Differed on

Approach to age verification and assurance


Importance of consulting children in design of technologies and policies

Explanation

Livingstone stresses the importance of involving children in the design and development of technologies and policies that affect them. This ensures that children’s perspectives and needs are taken into account.


Major Discussion Point

Age-aware IoT and data governance


Agreed with

Pratishtha Arora


Jonathan Cave


Agreed on

Need for age-appropriate design in IoT and AI


Need to place more responsibility on industry rather than users

Explanation

Livingstone argues that the burden of ensuring safety and appropriate use of technology should not primarily fall on users, especially children and parents. She emphasizes the need for industry to take more responsibility in this area.


Major Discussion Point

Corporate responsibility and regulation


Differed with

Sabrina Vorbau


Differed on

Focus of responsibility in ensuring child safety online


Children are often overlooked as stakeholders in tech development

Explanation

Livingstone points out that children are often not considered as primary stakeholders in technology development, despite being one-third of internet users globally. She argues for greater recognition of children’s diverse needs and experiences in tech development.


Evidence

Statistic that one in three of the world’s Internet users are children.


Major Discussion Point

Corporate responsibility and regulation



Maarten Botterman

Speech speed

129 words per minute

Speech length

3732 words

Speech time

1727 seconds

Labeling and certification can empower users to make informed choices

Explanation

Botterman argues that labeling and certification of IoT devices can help users understand what they are buying and using. This transparency enables users to make more informed decisions about the technology they adopt.


Evidence

Examples of past issues with IoT devices, such as default passwords and undisclosed data sharing.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Jutta Croll


Abhilash Nair


Agreed on

Importance of labeling and certification for IoT devices



Jutta Croll

Speech speed

129 words per minute

Speech length

1120 words

Speech time

518 seconds

Public procurement can be used to drive adoption of standards

Explanation

Croll suggests that using public procurement as a tool can encourage the adoption of standards for IoT devices. By making certain standards a requirement for government purchases, it can stimulate the market for compliant products.


Evidence

Example of Section 508 in the US, which required accessibility features in products purchased by the government.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Maarten Botterman


Abhilash Nair


Agreed on

Importance of labeling and certification for IoT devices



Abhilash Nair

Speech speed

143 words per minute

Speech length

1200 words

Speech time

502 seconds

Certification could help mitigate literacy issues for parents/caregivers

Explanation

Nair suggests that certification of IoT devices could help address literacy issues among parents and caregivers. This would make it easier for them to understand and manage the technology their children are using, regardless of their educational background.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Maarten Botterman


Jutta Croll


Agreed on

Importance of labeling and certification for IoT devices



Pratishtha Arora

Speech speed

137 words per minute

Speech length

1038 words

Speech time

453 seconds

Need to consider impacts on children who may not be direct users of IoT devices

Explanation

Arora points out that IoT devices can impact children even when they are not the primary users. This includes situations where children are in environments with IoT devices, such as smart homes or connected cars.


Major Discussion Point

Role of AI in age-aware IoT


Importance of developing age-appropriate AI models and interfaces

Explanation

Arora emphasizes the need for AI models and interfaces that are appropriate for different age groups. This involves considering children’s varying levels of understanding and maturity when designing AI-powered IoT devices.


Major Discussion Point

Role of AI in age-aware IoT


Agreed with

Sonia Livingstone


Jonathan Cave


Agreed on

Need for age-appropriate design in IoT and AI



Sabrina Vorbau

Speech speed

137 words per minute

Speech length

1178 words

Speech time

513 seconds

Need to translate research into user-friendly guidance for parents/educators

Explanation

Vorbau stresses the importance of making research findings accessible to parents and educators. This involves creating user-friendly resources that help adults understand and navigate the complexities of children’s online experiences.


Evidence

Example of the Better Internet for Kids initiative developing toolkits and resources for educators and families.


Major Discussion Point

Capacity building and awareness


Differed with

Sonia Livingstone


Differed on

Focus of responsibility in ensuring child safety online


Importance of involving children/youth in developing resources

Explanation

Vorbau highlights the value of involving children and young people in the creation of resources about online safety and digital literacy. This ensures that the materials are relevant and understandable to their target audience.


Evidence

Mention of co-creation process with children and young people in developing resources for the Better Internet for Kids initiative.


Major Discussion Point

Capacity building and awareness



Helen Mason

Speech speed

175 words per minute

Speech length

375 words

Speech time

128 seconds

Civil society and frontline responders should be included in discussions

Explanation

Mason argues for the inclusion of civil society organizations and frontline responders in discussions about children’s online safety. These stakeholders have direct experience with children’s issues and can provide valuable insights.


Evidence

Example of Child Helpline International’s work in 132 countries providing support to children.


Major Discussion Point

Capacity building and awareness


Data from child helplines is a valuable resource on children’s experiences

Explanation

Mason points out that data collected by child helplines can provide unique insights into children’s online experiences and challenges. This information can be valuable for policymakers and researchers.


Evidence

Mention of increasing reports of online extortion and children not knowing where to report issues.


Major Discussion Point

Capacity building and awareness



Musa Adam Turai

Speech speed

84 words per minute

Speech length

60 words

Speech time

42 seconds

Tension between free speech rights and child protection in some societies

Explanation

Turai raises the issue of balancing free speech rights with child protection, particularly in conservative societies. This highlights the cultural and societal differences in approaching online safety for children.


Major Discussion Point

Corporate responsibility and regulation


Agreements

Agreement Points

Need for age-appropriate design in IoT and AI

speakers

Sonia Livingstone


Pratishtha Arora


Jonathan Cave


arguments

Importance of consulting children in design of technologies and policies


Importance of developing age-appropriate AI models and interfaces


Static age limits may not be appropriate given evolving capacities of children


summary

The speakers agree on the importance of considering children’s evolving capacities and involving them in the design process to ensure age-appropriate IoT and AI technologies.


Importance of labeling and certification for IoT devices

speakers

Maarten Botterman


Jutta Croll


Abhilash Nair


arguments

Labeling and certification can empower users to make informed choices


Public procurement can be used to drive adoption of standards


Certification could help mitigate literacy issues for parents/caregivers


summary

The speakers agree that labeling and certification of IoT devices can empower users, drive adoption of standards, and help address literacy issues for parents and caregivers.


Similar Viewpoints

Both speakers emphasize the need for a more nuanced approach to children’s rights and protection online, considering their evolving capacities rather than relying solely on static age limits.

speakers

Sonia Livingstone


Jonathan Cave


arguments

Need to consider broader child rights beyond just privacy and safety


Static age limits may not be appropriate given evolving capacities of children


Both speakers advocate for including diverse stakeholders, particularly children and those working directly with them, in discussions and decision-making processes related to online safety and technology design.

speakers

Sonia Livingstone


Helen Mason


arguments

Importance of consulting children in design of technologies and policies


Civil society and frontline responders should be included in discussions


Unexpected Consensus

Corporate responsibility in technology development

speakers

Sonia Livingstone


Jonathan Cave


Abhilash Nair


arguments

Need to place more responsibility on industry rather than users


AI can both facilitate and potentially distort children’s development


Certification could help mitigate literacy issues for parents/caregivers


explanation

Despite coming from different perspectives, these speakers unexpectedly converged on the idea that the tech industry should bear more responsibility for ensuring safe and appropriate technology for children, rather than placing the burden primarily on users or parents.


Overall Assessment

Summary

The main areas of agreement include the need for age-appropriate design in IoT and AI, the importance of labeling and certification for IoT devices, and the necessity of involving diverse stakeholders in discussions and decision-making processes.


Consensus level

There is a moderate to high level of consensus among the speakers on key issues. This consensus suggests a growing recognition of the complexities surrounding children’s rights in the digital environment and the need for multi-stakeholder approaches to address these challenges. The implications of this consensus could lead to more collaborative efforts in developing age-aware IoT solutions and more comprehensive policies that consider children’s evolving capacities and rights.


Differences

Different Viewpoints

Approach to age verification and assurance

speakers

Jonathan Cave


Sonia Livingstone


arguments

Static age limits may not be appropriate given evolving capacities of children


Need to consider broader child rights beyond just privacy and safety


summary

While Cave emphasizes the limitations of static age limits, Livingstone advocates for a more holistic approach considering various child rights beyond age verification.


Focus of responsibility in ensuring child safety online

speakers

Sonia Livingstone


Sabrina Vorbau


arguments

Need to place more responsibility on industry rather than users


Need to translate research into user-friendly guidance for parents/educators


summary

Livingstone emphasizes industry responsibility, while Vorbau focuses on empowering parents and educators with user-friendly guidance.


Unexpected Differences

Role of AI in children’s development

speakers

Jonathan Cave


Pratishtha Arora


arguments

AI can both facilitate and potentially distort children’s development


Importance of developing age-appropriate AI models and interfaces


explanation

While both speakers discuss AI’s impact on children, Cave unexpectedly highlights potential negative effects on development, whereas Arora focuses more on the need for age-appropriate design without explicitly addressing potential distortions.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to age verification, the distribution of responsibility between industry and users, and the role of AI in children’s online experiences.


difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of protecting children online, speakers differ in their proposed approaches and emphasis. These differences reflect the complexity of the issue and suggest that a multifaceted approach, incorporating various perspectives, may be necessary to effectively address age-aware IoT and child protection online.


Partial Agreements

All speakers agree on the need for more nuanced approaches to child protection online, but differ in their proposed solutions: Cave suggests moving away from static age limits, Livingstone advocates for a broader rights-based approach, and Botterman proposes labeling and certification as tools for informed decision-making.

speakers

Jonathan Cave


Sonia Livingstone


Maarten Botterman


arguments

Static age limits may not be appropriate given evolving capacities of children


Need to consider broader child rights beyond just privacy and safety


Labeling and certification can empower users to make informed choices




Takeaways

Key Takeaways

Age-aware IoT needs to consider children’s evolving capacities rather than using static age limits


Labeling and certification of IoT devices can empower users to make informed choices


AI in IoT can both facilitate and potentially distort children’s development


Capacity building and awareness efforts should involve children/youth and translate research into user-friendly guidance


There is a need to place more responsibility on industry rather than users for child safety in IoT


Children are often overlooked as stakeholders in tech development despite being 1 in 3 internet users


Resolutions and Action Items

Involve children and young people in future IGF sessions on this topic


Develop more user-friendly guidance on age assurance for parents and educators


Consider using public procurement to drive adoption of child safety standards in IoT


Unresolved Issues

How to balance free speech rights with child protection, especially in conservative societies


Extent of corporate liability and accountability for child safety issues in IoT


How to effectively implement age assurance across different cultural contexts


How to ensure IoT benefits reach children who are not yet online


Suggested Compromises

Use age brackets rather than hard age limits to allow for flexibility in maturity levels


Develop ‘white-labeled’ age verification tools that can interface with different systems


Balance precautionary principle with allowing children to learn to navigate online risks


Thought Provoking Comments

We need to stay aware that the static perspective of protecting people on the basis of age may not be the most appropriate.

speaker

Jonathan Cave


reason

This challenges the conventional approach of using chronological age as the sole basis for online protection measures.


impact

It shifted the discussion towards considering more nuanced, evolving approaches to protecting children online based on their digital maturity rather than just age.


A child rights landscape always seeks to be holistic. So, privacy is key, as has already been said. Safety is key, as has already been said. But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits. And so, there are also costs, potentially, to children’s rights, as well as benefits.

speaker

Sonia Livingstone


reason

This comment broadens the perspective beyond just safety and privacy to consider the full spectrum of children’s rights.


impact

It prompted a more comprehensive discussion of the trade-offs involved in age assurance technologies and their potential impacts on children’s rights and development.


Denying access will always encourage teens to look for workarounds and to engage in dangerous behavior because they have no guidance. Why do we not put more emphasis on media and information literacy, so that users understand how to protect themselves?

speaker

Dorothy Gordon


reason

This comment challenges the effectiveness of access restriction approaches and suggests an alternative focus on education.


impact

It sparked discussion about the importance of digital literacy and education as complementary or alternative approaches to technical solutions for online safety.


The idea of using public procurement as a tool, as a sort of complement either to self-regulation or to formal regulation is, I think, one that’s worked in a number of areas.

speaker

Jonathan Cave


reason

This introduces a novel policy approach to incentivizing industry compliance with safety standards.


impact

It shifted the conversation towards considering economic incentives and government purchasing power as tools for promoting child-safe technologies.


Users, the word user is a really problematic word. And I think if we talk about users, we can quickly forget there are children. So by and large, in relation to IoT and other innovations, by and large, children are not the customer. They don’t pay. They don’t always set up the profile, especially for IoT. They don’t seek remedy unless we scaffold that. They don’t bring lawsuits. They don’t get to speak at the IGF.

speaker

Sonia Livingstone


reason

This comment highlights how children are often overlooked in discussions about technology users and policy.


impact

It prompted reflection on the need to explicitly consider children’s perspectives and interests in technology development and policy discussions.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond simple age-based protections to consider more holistic approaches to children’s rights in the digital world. They challenged participants to think about the complexities of balancing protection with other rights, the role of education and literacy, economic incentives for industry compliance, and the importance of explicitly considering children’s perspectives in technology development and policy. The discussion evolved from focusing on technical solutions to exploring a multi-faceted approach involving education, policy, industry incentives, and children’s participation.


Follow-up Questions

How can we ensure effective media and information literacy education to help users understand how to protect themselves online?

speaker

Doherty Gordon (audience member)


explanation

This is important to empower users, especially children and teens, to navigate online risks safely rather than relying solely on access restrictions.


How can we develop age assurance technologies that themselves respect children’s rights?

speaker

Sonia Livingstone


explanation

This is crucial to ensure that solutions meant to protect children’s rights online do not inadvertently violate those rights in the process.


How can we develop more flexible approaches to age verification that account for children’s evolving capacities rather than relying on strict age limits?

speaker

Sonia Livingstone and Jonathan Cave


explanation

This is important to create more nuanced and effective protections that align with children’s actual developmental stages rather than arbitrary age cutoffs.


How can we ensure that age assurance and online safety measures account for children who may not be the primary user or customer of a service but are still impacted by it?

speaker

Sonia Livingstone


explanation

This is crucial to protect children who may be indirectly affected by IoT and other technologies, even if they are not the intended users.


How can we better incorporate the perspectives and experiences of children and young people into discussions and policymaking around online safety and IoT?

speaker

Sonia Livingstone


explanation

This is important to ensure that policies and technologies are truly responsive to children’s needs and experiences.


How can we address the tension between protecting children online and upholding rights to free expression, particularly in conservative societies?

speaker

Musa Adam Turai (audience member)


explanation

This is important to balance child protection with other fundamental rights across different cultural contexts.


How can we create more effective corporate accountability measures for online child safety that go beyond financial penalties?

speaker

Abhilash Nair and Jonathan Cave


explanation

This is crucial to ensure that companies take their responsibilities towards child safety seriously and make it a core part of their operations.


How can we better integrate civil society organizations and frontline responders into discussions and policymaking around online child safety?

speaker

Helen Mason (Child Helpline International)


explanation

This is important to ensure that policies and technologies are informed by real-world experiences and data from those directly working with affected children.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.