WS #172 Regulating AI and Emerging Risks for Children’s Rights

18 Dec 2024 13:30h - 15:00h

Session at a Glance

Summary

This discussion focused on the impact of artificial intelligence (AI) on children and the need for regulation to protect children’s rights in the digital environment. Participants highlighted how AI is pervasive in children’s lives, often without their awareness, and can pose risks such as data exploitation, privacy violations, and exposure to harmful content. Research shows that many AI systems are not designed with children’s best interests in mind, despite children being a significant user base.

The discussion emphasized the importance of developing global standards and regulations for AI that prioritize children’s rights and safety. The EU’s AI Act was cited as a step in the right direction, though challenges remain in its implementation and enforcement. Participants stressed the need for technical standards and frameworks to guide the responsible development and deployment of AI systems affecting children.

Youth perspectives were prominently featured, with concerns raised about AI’s impact on education, creativity, privacy, and mental health. The discussion underscored the importance of involving children in the development of AI policies and regulations. Participants called for increased awareness and education for families and children about AI risks and safeguards.

The conversation concluded with a call to action for policymakers, tech companies, and society at large to ensure AI systems are designed and governed with children’s rights and well-being at the forefront. The upcoming AI code for children was highlighted as a potential blueprint for addressing these concerns and implementing practical safeguards for children in the AI landscape.

Key points

Major discussion points:

– The impact of AI on children’s rights, privacy, and wellbeing

– The need for AI regulation and standards that specifically consider children

– The importance of designing AI systems with children’s needs in mind from the start

– Challenges in implementing “safety by design” principles for AI that impacts children

– The role of families, education, and public awareness in protecting children from AI risks

The overall purpose of the discussion was to examine the impacts of AI on children and explore policy, regulatory, and technical solutions to protect children’s rights and wellbeing as AI systems become more prevalent. The discussion aimed to provide input for the upcoming AI Action Summit in Paris.

The tone of the discussion was largely serious and concerned about the risks AI poses to children, but also cautiously optimistic about the potential to develop safeguards and standards. There was some frustration expressed that known issues around children’s online safety have not been adequately addressed as AI has developed. The tone became more solution-oriented and forward-looking towards the end, focusing on upcoming regulations and standards that could help protect children.

Speakers

– Leanda Barrington-Leach: Moderator, representative of Five Rights Foundation

– Nidhi Ramesh: Five Rights Youth Ambassador, 16 years old, from Malaysia

– Jun Zhao: Senior researcher in the Department of Computer Science at Oxford University, leads the Oxford Child-Centered AI Design Lab

– Brando Benifei: Member of the European Parliament, co-rapporteur of the AI Act, co-chair of the child rights intergroup

– Ansgar Koene: AI ethics and public policy regulatory lead at Ernst & Young, trustee of Five Rights Foundation

– Baroness Beeban Kidron: Chair of Five Rights Foundation, member of the House of Lords in the UK, architect of the age-appropriate design code

Additional speakers:

– Peter Zanga Jackson: Regulator from Liberia

– Jutta Croll: German Digital Opportunities Foundation

– Lena Slachmuijlder: Council on Tech and Social Cohesion

– Dorothy Gordon: From UNESCO (mentioned in a question)

Full session report

The Impact of AI on Children: A Comprehensive Discussion

This summary provides an in-depth overview of a discussion focused on the impact of artificial intelligence (AI) on children and the need for regulation to protect children’s rights in the digital environment. The conversation brought together experts from various fields, including youth representation, academia, policy-making, and industry. The session, which experienced some technical difficulties, served as a preparatory event for the AI Action Summit in Paris.

1. AI’s Pervasive Influence on Children’s Lives

The discussion opened with a stark realisation: AI is ubiquitous in children’s lives, often operating without their awareness. Nidhi Ramesh, a 16-year-old Youth Ambassador, highlighted that many children don’t realise most of their online interactions are mediated by AI algorithms, which make choices, recommendations, and even decisions for them. This lack of awareness raises critical questions about informed consent and digital literacy among young users.

Dr Jun Zhao from Oxford University provided empirical evidence, noting that a recent UK survey showed children are twice as likely to adopt new AI technologies compared to adults. This rapid adoption underscores the urgency of addressing potential risks associated with AI use among children.

2. Risks and Challenges

The speakers unanimously agreed that AI poses significant risks to children’s privacy, safety, and well-being. These risks include:

a) Data Exploitation: AI systems can collect sensitive data from children without proper safeguards, as pointed out by Dr Zhao.

b) Privacy Violations: The pervasive nature of AI raises concerns about children’s privacy rights.

c) Exposure to Harmful Content: AI chatbots and recommendation systems can inadvertently expose children to inappropriate content.

d) Mental Health Impacts: The psychological effects of AI, particularly systems designed for companionship, were highlighted as an area of concern.

e) Educational Risks: Nidhi Ramesh raised thought-provoking questions about AI’s impact on learning, noting that while AI can make homework quicker, it risks compromising essential learning skills, creativity, and critical thinking abilities.

f) Amplification of Existing Harms: Leanda Barrington-Leach emphasised that AI can exacerbate existing systemic problems affecting children.

3. Regulatory Landscape and Challenges

The discussion highlighted the evolving regulatory landscape surrounding AI and children’s rights:

a) EU AI Act: Brando Benifei, Member of the European Parliament, noted that while the EU AI Act includes some provisions to protect children, these were not initially present and had to be introduced through amendments. This revelation underscores the importance of vigilance in ensuring children’s rights are protected in AI regulations.

b) Technical Standards: Ansgar Koene from Ernst & Young pointed out that technical standards are still being developed to operationalise AI regulations effectively, particularly for the AI Act.

c) AI Code for Children: Baroness Beeban Kidron mentioned the development of an AI code for children by the Five Rights Foundation. This code aims to provide practical guidance on designing AI systems with children’s rights in mind and is expected to be launched at the Paris summit. It targets policymakers, regulators, and AI developers.

d) Global Cooperation: Benifei stressed the need for global dialogue and cooperation to build common frameworks for protecting children in AI systems.

e) Global Digital Compact: Baroness Kidron highlighted the relevance of the Global Digital Compact to AI and children’s rights, emphasising its potential impact on global governance of digital technologies.

4. Designing AI Systems with Children in Mind

The discussion emphasised the importance of integrating children’s needs and rights into AI development from the outset:

a) Safety by Design: Dr Zhao advocated for incorporating safety by design principles into AI development, noting that some AI companies are already embracing this approach.

b) Organisational Awareness: Koene highlighted that many organisations lack awareness of how their AI systems impact children, suggesting a need for greater education and expertise within the industry.

c) Expert Involvement: The importance of involving subject matter experts on children’s impacts in AI development was stressed.

d) Ethical Considerations: Barrington-Leach argued that AI should not be used to experiment on children, emphasising the need for ethical guidelines in AI development and deployment.

5. Role of Education and Awareness

The discussion touched upon the crucial role of education in protecting children from AI risks:

a) Family Involvement: Peter Zanga Jackson, a regulator from Liberia, highlighted the role of families in educating children about AI.

b) School Curriculum: The need to integrate AI awareness into school curricula was discussed.

c) Public Awareness: Speakers agreed on the importance of increasing public awareness about AI’s impact on children. Koene emphasised the need for public sector support in educating the general population about AI risks.

6. Consumer Rights and Advocacy

Koene pointed out the potential role of consumer rights organisations in advocating for safer AI products and pressuring tech companies to respect children’s rights.

7. Unresolved Issues and Future Directions

The discussion identified several unresolved issues and areas for future focus:

a) Enforcement of Regulations: Questions remain about how to effectively enforce AI regulations and standards across different jurisdictions.

b) Balancing Innovation and Protection: Finding the right balance between fostering AI innovation and protecting children from potential harms remains a challenge.

c) Prioritising Children’s Rights: Ensuring AI companies prioritise children’s rights and safety over profit motives was identified as an ongoing concern.

d) Addressing Subtle Risks: Dr Zhao highlighted the complexity of AI risks and the need for better awareness and translation of policies into practical guidance.

Conclusion

The discussion concluded with a call to action for policymakers, tech companies, and society at large to ensure AI systems are designed and governed with children’s rights and well-being at the forefront. The upcoming AI code for children was highlighted as a potential blueprint for addressing these concerns and implementing practical safeguards for children in the AI landscape.

The conversation demonstrated a high level of consensus on the main issues, with speakers from various backgrounds sharing similar concerns and proposed solutions. This strong agreement implies a clear direction for future policy-making and research in the field of AI governance for children’s protection. However, the discussion also revealed the complexity of the challenges ahead and the need for continued dialogue, research, and collaborative action to ensure a safe and beneficial AI environment for children.

Notable initiatives mentioned include the child rights intergroup in the European Parliament, which Brando Benifei highlighted as an important forum for addressing these issues. The discussion underscored the importance of translating high-level policies and principles into practical, implementable guidelines for AI developers and policymakers to effectively protect children’s rights in the rapidly evolving AI landscape.

Session Transcript

Leanda Barrington-Leach: who we are, Five Rights Foundation, so I was saying that we do research, we develop policy and technical frameworks also to ensure that digital systems are designed to deliver for children and notably with children's rights in mind, children being all under-18s around the world. As part of this work, for the General Comment 25, which sets out how the Convention on the Rights of the Child applies to the digital environment, we worked very closely with a number of governments around the world to develop new policy and regulatory frameworks, in particular the age-appropriate design code, which, if you haven't heard about it, we can tell you more later. The reason we are doing this is that obviously we have seen in our work with kids that there has been basically a global problem, in that tech has developed ignoring children and ignoring their established rights that many people fought very, very hard for in the past century, and suddenly we have a new world order and these are being trampled upon. And it's a global problem because young kids are using the same technology all around the world and living very similar experiences and similar risks and similar harms. Luckily there is a global solution, so global problem, global solution, and we are working towards global norms for tech design to ensure, as I said, that those established rights are taken into account in the digital environment. So today, AI. What we see is something which is maybe not fundamentally new, but which is supercharging some of these harms and systemic problems that we have already been addressing, and we are looking indeed, as I say, for global standards and a way of addressing this. Luckily there is clearly a rising understanding and convergence, a rising political will to address these issues, in particular for children based on their established rights. How are we going to do it? Well, I am very, very pleased today to be joined by a very distinguished panel of experts and also young people who can help us define some of the things that we need to take forward, in particular as this event is an official preparatory event for the Paris AI Action Summit, and so we are looking at practical solutions to the issues we face. I am going, with that, to hand over, just for some opening words, to our first speaker, Nidhi. Now, Nidhi is a Five Rights Youth Ambassador, 16 years old, from Malaysia, and there she is. Hello, Nidhi, it's lovely to see you with us today. Nidhi is a very passionate advocate for children's rights in the digital environment and has represented children and Five Rights around the world, including, I think, previously at an IGF. Nidhi, so this is your second time at the IGF, welcome back. Nidhi is also an author, she hosts her own podcast, and is also a host of the Five Rights Youth Voice podcast, check this out. So with that, Nidhi, over to you, tell us from your perspective, what's happening and what needs to be done.

Nidhi Ramesh: Hello, everyone, and thank you, Leanda, so much for such a kind introduction. I'll repeat, my name is Nidhi Ramesh, and as a child rights activist in the digital space, I'm so honoured to be here today, and to be able to share my views and opinions on how AI impacts young people like me all over the world, and what I believe we can do to ensure its responsible use. Before I begin with my own experience, I think it's key to highlight that AI is on every single platform, mobile application, and website that we all use. When someone says AI, the first thing one might think of is generative artificial intelligence, Gen AI: apps like ChatGPT, Copilot, Jasper AI, Replika, and many others. We live in a world where AI is everywhere, but most of us can't even tell when we're interacting with it. Whether it's through social media algorithms, voice assistants, or personalized learning tools, AI often works in the background, shaping our decisions and experiences. Many children don't realize that most of their interactions with the online world might actually be through various AI algorithms, making choices, recommendations, and even decisions for them. What's even more concerning is the misuse of AI by tech companies that put profit above children's safety and privacy, from recommending harmful and addictive content to collecting data without consent. Many AI systems operate without safeguards for young people. Most children don't even know they're being exposed to these algorithms, let alone how to protect themselves from potential harm. That said, I don't want to paint AI as the villain. AI, when implemented in platforms and used responsibly, has incredible potential, transforming how we learn. AI-powered tools can provide personalized resources, making learning accessible and inclusive. For children with disabilities or those in remote areas, this is a real game-changer. But while these benefits are real, so are the risks, and we can't afford to ignore them. One major concern is the erosion of originality. AI is flooding the internet with AI-generated content, making it harder to find authentic, human-created work. As someone who runs a podcast, I know firsthand how much effort goes into creating something original. My podcast covers children's rights, in an attempt to spread awareness on topics that are important to me. This also means that I spend a long time researching topics, writing scripts, recording audio, editing, and more in order to publish a full episode. It's a process that takes time and effort. Yet in this day and age, with the click of a button, one can easily find AI algorithms that can take any topic you give them and generate a compelling script. Other Gen AI applications like iSpeech, Descript, Murf, etc. use imported clips of your voice to perfectly imitate what you sound like when you give them this AI-generated script. So in a way, these two AI programs can do in seconds what I spend hours and days working on. And while that might seem convenient, it undermines the value of creativity and hard work that we put in. And this isn't just about me, it's about every artist, writer, and musician who's at risk from AI agents. I've also written and published two books, like Leanda said, but again, nowadays it's so easy for AI to write or create something similar at just a simple command, undermining so many creators out there who want to share their work with people. Perhaps this example is more relevant to me as an individual.
However, the risks and problems that arise from AI are still there and need to be addressed, especially the ones I mentioned at the start. So what do we do about this? How do we ensure that AI works for children and not against them? I believe policymakers and tech companies have a huge role to play. First and foremost, we need stronger laws and regulations around AI governance on social media platforms like Instagram, TikTok, and Snapchat. These systems must prioritize data protection and privacy, especially for children. Young people deserve to know when and what information is collected; transparency isn't optional, it's essential. We also need AI systems designed with children's well-being at their core. This means algorithms that promote safety and mental health, rather than exploiting vulnerabilities for profit. Tech companies must be held accountable for the impact their systems have on young minds. Baroness Beeban Kidron and Five Rights, the organization I have the huge honor of being a part of, is currently working on designing and bringing together more regulations on this, which I'm sure will be discussed later as the panel continues. So I'll leave it there. Thank you.

Leanda Barrington-Leach: Thank you so much, Nidhi. I should not add anything, because you say it so much better than I ever could. I hope you'll stay with us, because I'm sure our audience in the room and online will have questions and would like to interact with you more afterwards. But we are going to move on to our second panelist, Dr. Jun Zhao, who's joining us from Oxford. Hi, Jun. Great to see you. So Dr. Jun Zhao is a senior researcher in the Department of Computer Science at Oxford University. Her research focuses on the impact of algorithm-based decision-making on our everyday lives, especially when it regards families and young children. For this, Jun takes a human-centric approach, focusing on understanding users' needs in order to design technologies that can make a real impact. Jun currently leads the Oxford Child-Centered AI Design Lab and a major research grant examining the challenges of supporting children's digital agency in the age of AI. So Jun, thank you for joining us. Can you tell us, what is the research telling us about the impact of AI on children?

Jun Zhao: Well, thank you so much for the introduction, Leanda. And can I just confirm that everyone can hear me all right? I presume that sounds all right.

Leanda Barrington-Leach: Thumbs up, Jun.

Jun Zhao: All right. OK, so I got some slides prepared. Can I project them? Can people see that? I’m also very happy just to talk about the research.

Leanda Barrington-Leach: You bet, Jun.

Jun Zhao: Right. OK, well, thank you very much for inviting me to be here. I wish I could be there in person; I am very much with all of you guys in spirit. It's shocking to hear Nidhi's talk and presentation and how much it resonates with our research evidence. As Nidhi said, AI is everywhere in children's lives, from the moment they are born to the education systems they use at home or at school. And we see similarly wide adoption of these technologies if you look at surveys in any country. Also, as Nidhi said, with the rise of AI, children are rapidly embracing these new technologies. Our recent survey in the UK shows that children are twice as likely to adopt these new technologies as adults. And an earlier survey by Internet Matters also shows that a huge proportion of children in the UK are using AI technologies to help with their schoolwork. So it's really exciting. And, as Nidhi said, there are a lot of great, exciting opportunities, especially to support children with their learning, and children with special education needs who need extra support with their social-emotional management. We also see some really exciting examples of how AI could help children by providing them with better health opportunities, such as early diagnosis for autism, which is an issue in many countries in the world. But we must also be cautious about whether all these technologies have or have not been designed with children's best interests in mind. This slide shows recent research we did last year, where we did a systematic review of about 200 pieces of work from the human-computer interaction research community, a community that prides itself on putting humans at the heart of the design process. We tried to analyze how AI has been used in different kinds of application domains for children. It was quite interesting to see how education and healthcare have been the most dominant application areas, as well as, interestingly, keeping children safe online. We then looked more closely at the range of personal data that is being used to feed all these algorithms. And we were quite surprised to see the diverse range of really sensitive data, like genetic data and behavioral data, that could be routinely used by all these AI systems, not necessarily with full consent or assent from children, or even necessary for the function of those applications. It was also interesting when we did a review of all the current AI ethics principles out there last year and tried to map these recommended ethical principles to their actual implementation. As everyone can see in this diagram, it is a very sparsely populated table. Even basic principles like privacy, safety, and meeting children's developmental needs are rarely considered comprehensively in all these applications that are designed for children, often in very critical areas. So it is quite concerning to see how principles are applied in experimental settings. And it's even more concerning when we see practices taking place in real-world cases. This is quite an old report, from 2017, with the early rise of smart home devices and smart toys. Researchers very quickly identified serious implications, the safety risks associated with these cute cuddly bears. But, you know, seven years on, many legislations have been developed since then, but so has the variety of IoT devices and smart home devices.
A recent study by us, as well as many other recent studies, has shown that children's data can be collected by all these devices, whether they're online or offline. As one of the researchers at a recent privacy conference confessed, individual children probably won't experience negative consequences due to toys creating profiles about them; but nobody really knows that for sure. Here is another example: similar to the biases adults would be subject to, children can also be exposed to unfair decisions by AI systems simply due to their race or socioeconomic status, but often with much more lasting effects in critical situations such as criminal decision-making. The rapid development of AI is associated with rapid deployment, but ironically, there is not always sufficient safeguarding in the design process. For example, when chatbot-like technologies were deployed by Snapchat last year, some serious risks were immediately reported, exposing children to inappropriate content and contact, even when they declared their age of only 13 or 15. Another thing that is quite interesting in our research is that although a lot of risks like privacy and safety have been extensively discussed, the exploitative nature of AI algorithms has rarely been discussed. When we began our research into children's data privacy, we began with an experiment analyzing the third-party data tracking behaviors of over 1 million apps from the Google Play Store. One of the most shocking discoveries from this study was the prevalence of data tracking in these cute apps used by children, often very, very young children, as they learn how to begin their handwriting or how to pop a balloon to develop their fine motor skills. This is a violation of children's basic rights and an exploitation of their vulnerabilities. So, seven years on since our initial research, what has happened? GDPR happened. We repeated our study. It was quite interesting to see that tracking behavior did not change immediately upon enforcement of the legislation. What did happen is that the app store made it extremely difficult for us to repeat and continue our data analysis. But what we haven't stopped doing is asking the question: why all this tracking of children's data, and how can we better protect them? It's interesting to see a recent study published earlier this year, which provides even firmer evidence of the exploitation of children's data: the proportion of large social media platforms' advertising revenue that relies on children's attention. So just as Nidhi said, these companies are not designing with children at heart, but for their market gains. Several recent studies have made similar findings, showing that recommendation systems can actively amplify and direct children to harmful content. For example, studies have shown that children identified with mental health issues could be more likely to be exposed to posts leading to more mental health risks. Harmful content spreads because it's more attention-grabbing, invoking stronger emotions and prolonging children's engagement. Many of these studies are actually conducted through simulations, because researchers do not have access to the platform APIs or the code of the algorithms. But what happens when we talk to children directly and ask them about their experiences? This is one of the studies that we conducted last year. Consistent with many other research studies out there, children found the experience very passive and disrespectful.
And many of them found it unfair that systems can do this to their data and manipulate their experiences. And while such feelings of being exploited and disrespected can be hard to quantify, we must not neglect how these practices are fundamentally disrespectful of children's rights in many ways, and how the same aggressive practices could cause harm for children of different developmental stages or vulnerabilities. I'll just leave the evidence discussion here for now, for the other speakers, because we have lots of evidence for these fundamental phenomena, and it will be quite interesting to hear how the recent EU AI Act could or could not provide the much-needed protection that we need for children of this generation. Thank you very much, Leanda.

Leanda Barrington-Leach: Thank you so much, Jun. I am going to step over here, so I'm in frame, after that presentation of some of the overwhelming evidence. And I think if you had a little bit longer, you could have said an awful lot more. I'd like to also point people to some of the research done by Five Rights. I have Disrupted Childhood here, which sets out some of the basics of persuasive design, and also the Pathways research, which, using avatars, shows very clearly how algorithms drive children to very specific harms. And of course there's plenty of evidence from a number of court cases as well of children who have been harmed. To talk about the AI Act, I am delighted to welcome the honourable Member of the European Parliament, maybe over here, ah no, apparently the camera needs to see you, so, yes, Mr. Brando Benifei, who was co-rapporteur of the AI Act in the European Parliament and is in the oversight monitoring group, so an absolutely critical role to make sure that the AI Act delivers, and co-chair also of the child rights intergroup in the European Parliament. We're absolutely delighted to have you here.

Brando Benifei: yeah I’m really happy I can be here for this opportunity I’m sorry that due to the overlapping with another meeting I’m I have to attend because of the parliament program we have a good delegation here I will need to leave soon after my my intervention maybe if there is one question I can answer but I will continue I want to thank Five Rights Foundation also for the extremely useful contributions that were given during the drafting process of the AI act the original text from the European Commission was unfortunately lacking completely the dimension of child protection it was not there at all so we had to bring it in with amendments from the European Parliament with our drafting work and the negotiations that followed so we have some uh space protection inside AI act not as much as we wanted but there are also some more general provisions that can be applied effectively, if we want to apply them effectively, on the cases that we just heard of. That’s why it’s important that now the Parliament, in the new mandate that just started just a few months ago, both confirmed the existence of the child intergroup, children’s rights intergroup. As I said, I’m now, I will be the vice chair for that. It starts its work now in the new mandate. And it’s an important forum to bring together all the MEPs from different perspectives, all the parliamentarians that want to work on children’s rights. And we confirmed a monitoring group of the AI Act. So after approving the text, now we are following step by step its application. It will be crucial because some of the issues that you have been already talking about with the previous speakers are to be checked in the way that they are applied. For example, in February 2025, full mandatoriness, the full application of the prohibitions, that it’s a very important aspect of the AI Act. And among the prohibited uses, there are also emotional recognition in the study places. We wanted to avoid that, to in fact enter into a form of pressure and intrusion on children in schools. So this is one aspect, for example. But then also predictive policing that can target minors from certain minorities will be prohibited. And also we prohibit the indiscriminate use of AI-powered biometric cameras in live action in a way that will prevent forms of surveillance that can also. infringe the privacy and the protection of children. And we have, for example, prohibit also the facial scraping on the internet. So that’s something that is used that prepare generative AI or chatbots to commit some of the abuses that we have seen. And we are trying to protect this data. But also apart from the prohibitions that will kick in soon, we have very important transparency provisions that will be quite important, looking at the generative AI. And for example, we demand specific protocols by the generative AI developers to contrast the capitals to have this kind of inappropriate conversations that we have seen that has been exemplified earlier and the production of inappropriate content that can be offensive for children. This is something that needs to be entrenched in the way the system is trained and it’s limited and needs to be checked periodically. But also we want to label AI generated content. This is crucial to find another issue that was not very much touched until now in this discussion, which I think it’s very bullying. Cyber mistreatment of children, which is a very important source of mental disorders, of attacking mental health. 
And in fact, with the new systems of generative AI, you can have a totally new level of extremely damaging cyber bullying of all kinds. And this is something we also need to tackle by avoiding the production, but when the thing is there, at least it needs to be clear that this is not true. That is fake. And so people cannot be. induced to think that a person is doing or saying things that will make them feel ashamed and have mental health problems. And also, finally, I want to underline that these are some more examples about how the AI Act interacts, but I want to concentrate on the fact that this interacts also with the Digital Services Act and with the child sexual abuse material legislation that we have been developing, that has been forwarded by the European Union and we think that the ecosystem needs to work together. As I said, I’m the specialist of the AI Act, I’ve been working on that, but in fact you put that together with this new legislation on child sexual abuse and you can build a proper framework of protection. And in fact, we want to continue a global dialogue, we are working on that, I am doing that with different governments and parliaments so that we can build a common framework of action. And that’s why it’s very important that civil society foundations, organizations can be linked, that are not only between the governments, but also in civil society. And I insist that the parliamentarians have to do their part here. It’s important that we have the IGF parliamentary track that also dealt in one of the discussions about these topics and we need to continue developing in this direction. We hope we can give some good practice by the application of this legislation, but clearly we need to build together an apparatus of actions and legislation, soft and hard laws that can protect our children online. Thank you very much.

Leanda Barrington-Leach: Thank you so much, Brando. I know you have to leave. Do you still have time for another question or anything from the room? Okay. Does anyone have a burning question for Mr. Benifei? I would have lots of questions, but I'll have to keep them.

Peter Zanga Jackson: Well, my name is Peter Zanga-Jackson, Jr. I'm from Liberia; I'm a regulator. Firstly, thank you for the explanation you gave, but I want to ask you: the children that we are talking about, they come from families, and the family is the foundation of the child. In some homes, there's no check when it comes to the child; in some families, no monitoring. So don't you think there should be awareness first, to educate the families as to what they should do, to limit the child or children to some extent, before going to the next level of trying to tackle the giants that develop all of this AI and so on? I would say it's more on the family. What do you think the family can do as the foundation of the child? Protecting our children should start from the closest possible place before we go outside to find a solution. This is my question.

Brando Benifei: Is it working? You can hear? Okay. So just to answer very quickly on this: I think it's a very important topic, because we need families to be ready to do their part in this. Obviously, I concentrated on the legislation, but it's also about building a culture. And this means you need to give adults the instruments to be able to have an informed conversation with their children. Now, I don't think we will solve everything by giving instruments to the adult population; we need schools, we need formal education targeted at children through the institutions. But obviously, if we have a more conscious population, also of the older generations that are not digital natives and need to be trained, then they can also transmit to their children some basic foundational aspects: to be healthy and protected and conscious, and not to be manipulated while using new technologies, AI, the internet. So, yes, we also need the families to be on board. We cannot solve everything with that, but at the same time, without investing also in the families, I think we are missing an important piece. I completely agree with you. Thank you.

Leanda Barrington-Leach: Thank you. Thank you very much. Lots of luck in overseeing the AI Act's implementation. And we'll be telling you later about what comes from this panel that is relevant to that. Our friend has left the room, but I'd like to say that the European Parents Association was very much behind a lot of the work done on the AI Act; they have been big drivers of this. Now, over to our next speaker, Dr. Ansgar Koene, who is AI ethics and public policy regulatory lead at Ernst & Young. I probably made your title even longer; it's already quite long. Ansgar, whom we are delighted to have as a trustee of the Five Rights Foundation, is vice chair of our board and an absolute expert in AI, working a lot on the technical standards that are needed for, among other things, the implementation and enforcement of the AI Act. So Ansgar, we're going to hear from you a little bit about the status quo in terms of what we have to make this kind of regulation work, and also things like the AI convention and the framework that came from the UN a few months ago. So there are a few things beyond the AI Act. A few words from you: what is the status quo in terms of actually making this real? What's missing? Where do we go from here?

Ansgar Koene: Sure. Same check as everyone else: can you hear me? Okay, good. So yes, we're definitely in a very interesting period, with the introduction of new international charters like the Council of Europe's charter on AI, legislation like the AI Act, and other jurisdictions that are pushing either mandatory obligations around safeguards for AI, or putting on the table an expectation from the regulator saying: we expect you to follow voluntary codes around responsible use of AI. And certainly we have seen, if we look at the types of organizations that we work with, that the introduction of these regulations has pushed forward the level of engagement, and the level of resources that are provided within organizations, be they public or private sector organizations, to actually make sure that the regulations are complied with. If we think especially about the way in which these types of regulations apply and where they are going, there is a large challenge for a lot of organizations, similar to what we've seen in the platform space: organizations are often not quite aware to what extent what they are doing actually impacts children. Similar to what we've seen with social media platforms and other online platforms: when they created the space, they were building it not with children in mind but with adults in mind, even though in reality we know that many users of these platforms are children; they did not even conceive that this is something that they needed to be building for. And a similar challenge exists in the AI space, especially as it moves to a model where the creators of the core AI models, LLMs being a prime example, are separate from the deployers who integrate those models into their systems. There is a distance between those who have the capacity to actually understand and do something about compliance, including compliance with regards to aspects regarding children, and the ones that are directly facing the users. And often even those that are directly facing the users are not sufficiently tracking and aware of who exactly their users are. So if the AI Act prohibits subliminal manipulation of vulnerable groups, but the deployers and especially the providers are not even aware that young people, as a vulnerable group, are among their users, then of course they are challenged in knowing whether or not they are subliminally manipulating them, or whether they are having a negative impact on them; and they often don't even understand what a negative impact on young people means. This reflects on some of the work that has been happening in the standardization space. Last year, the IEEE published a standard that Five Rights was a prime contributor to, the IEEE 2089 standard on age-appropriate design. One of the things that that standard asks for is that organizations, as they engage in the design and development of AI systems, have people within the development process who are subject matter experts with regards to the impacts that these systems can have on children.
So there is someone at least involved in the process who thinks about how this kind of system could impact children: what are the potential challenges, the potential negative impacts that could arise if someone under 18, let alone someone under 10, is using these kinds of systems? However, the standardization space is still very much in flux, in development. If we think, for instance, about the standards that are meant to provide clear operational guidance on how to become compliant with the AI Act, all of those standards are still in development. The European standards bodies, CEN and CENELEC, are rushing to try to meet the deadline that the AI Act has set for providing these standards. And because they are rushing, they are focusing at the high, horizontal level, and there is very little attention to the particulars that are necessary to address concerns regarding children. Five Rights is participating in the process, but there are multiple standards being developed simultaneously, and it is highly challenging to contribute to all of these at the same time: to make sure that the risk management standard considers the risks to children, at the same time as a trustworthiness standard considers what accuracy actually means for an AI system that children might use. So the technical space around how we move from a high-level intended outcome, which the regulations have specified, into the operational question of what you need to do on the ground to make sure that systems meet those requirements, is still a space that needs a lot of support; it needs a lot of work. And as I've said, there is even the core challenge that organizations need to be aware that they even need to consider how children might be impacted by these systems: as they're deploying something like a new chatbot, or as they're using AI as part of a system for targeting advertising, they're generally not building it with children in mind. So it is a space that is dynamic, and it is a space that is moving in the right direction, to the extent that this has been integrated in the AI Act, for instance, as Brando mentioned. However, because there are so many new things, new compliance activities, new thinking about what responsible AI actually means, while there is also a huge rush to find new ways to actually get a return on investment, there is a huge risk that the particular concerns around children will fall between the cracks if we do not raise enough awareness about this.

Leanda Barrington-Leach: Thank you so much, Ansgar. I think we're going to have our last intervention and then engage in a discussion, if that is okay. This one's not working? Is it working? Okay, great. Thank you so much for that. Indeed, I think it's absolutely critical: the law is already something, but we need to get down into the weeds to get those technical frameworks in place, because otherwise companies can say things like, oh well, we didn't know that we were exploiting children's vulnerabilities, we didn't know that children were there, we are not designing for children, why do we need… I'm being a little bit provocative, but the reality, of course, is that many of the biggest companies, at the very least, are quite aware that children are a massive market. They are targeting children as their current market and their future market. So, of course, it's a little bit disingenuous, but until we get all of that detail in place, that is a game that we will be playing. So, absolutely critical; thank you so much for that. We have just our last intervention, online, from Baroness Beeban Kidron, who is the chair of Five Rights Foundation. Baroness Kidron is a member of the House of Lords in the UK and was the architect of the age-appropriate design code. Baroness Kidron has been a long-term advocate of children's rights and is currently working on an AI code, which hopefully will feed into some of the things that we've been speaking about. So, Baroness Kidron…

Baroness Beeban Kidron: I'm delighted that today's conversation will feed into those which will take place in Paris in February. As Nidhi and Jun will have shared, it's crucial that we develop artificial intelligence and other automated systems with one eye on how they will impact children. The possibilities are infinite. I was very moved by a system that in real time could monitor a preterm baby's heartbeat without having to stick heavy instruments on their paper-thin chest. Terre des Hommes, a fellow children's rights NGO, recently launched an AI chatbot to support children's access to justice. And a few months ago, I met a wonderful group of 14-year-old girls who built an app to teach sign language to hearing classmates so that they could all communicate with their deaf peer. AI holds immense potential. But like any technology, AI must be developed with children in mind. And I do want to emphasize that it's a design choice if recommender systems feed children alarming content, promoting eating disorders or self-harm. It's a design choice if AI-powered chatbots encourage emotional attachments, which may, in some cases, have led to children taking their own lives. It's a design choice if, cynically, some of those chatbots revive deceased children through the creation of AI bots imitating their personalities, re-traumatizing their families and friends, and creating a loop in which self-harm or suicide is valorized.

MODERATOR: Dear host, I don’t know if you can hear me, but if you can allow me to share my screen again I can restart the playback.

Leanda Barrington-Leach: I'm so sorry, it seems to be a choice between either online people being able to see and hear or us being able to see and hear. Why don't we give that a moment. I think there were some questions, at least the ones from the room; I'm not sure about the ones online. Let's come to that and see if we can get the end of Baroness Kidron's intervention in a minute. Jutta, you had a question or a point to make.

Jutta Croll: Yes, Jutta Croll from the German Digital Opportunities Foundation. I had the honour to work with Five Rights Foundation in the working group on General Comment number 25. And I really appreciate what we heard from Beeban, and also what we heard from Ansgar. But at the same time, I'm a bit disillusioned, because I think it's 10 to 12 years ago that we really talked to tech companies about the concept of safety by design. And although we had artificial intelligence at that point in time, it was not in the hands of children in a real way. So I would have expected that this principle would be in the standards, would be in the hands of developers, to be adhered to, taking into consideration that children would probably be users. It was obvious throughout all these developments. I would say when the internet came up, it was not designed for children. So maybe we were then coming up behind that and saying, okay, now we have this idea of safety by design: have in mind that children will probably be users of the services, of the devices, and so on. And now we end up several years later with AI in the same situation that we had before with other digital technology. Ansgar, maybe you have an answer to that that will take away my disillusionment?

Ansgar Koene: I fear that my answer is not going to be something that will remove your disillusionment. The practice that we are seeing is to bring things to market, and in that rush it remains the case that the so-called functional requirements, that is to say the things that need to be there in order to produce the type of output that they want to create, get the prominence and the investment, while the so-called non-functional requirements (this terminology is terrible), such as making sure that there will not be negative consequences, are marginalized in the design process unless there is a significant factor behind them, such as the risk of a huge fine. That is why, even though we've seen discussions around responsible AI principles for many years, there was always a lack of investment to really get them implemented. There were often technologists within a company saying, this is something that we should be doing, but they were not being given the resources to actually do it. Now that there is something like the AI Act, where you're going to face fines, suddenly there is an investment in doing it.

Leanda Barrington-Leach: Is it this that's crackling? This is okay? It was your voice. Okay, maybe we'll get Baroness Kidron back, and also our other speakers. But in the meantime, oh, wonderful, we can hear. I hope that's current, not right from the beginning when they couldn't hear. But in the meantime, do we have any other questions or comments from the room? Otherwise, I will go to the ones online, too. And I have to say, Jutta, I totally agree with you. It's taking far too long. And as I said before, it's a little bit disingenuous, because we do know what the issues are, and we've known for a long time. Let's go over to Lena.

Lena Slachmuijlder: Yeah, thanks so much. And it’s just such good work that you’re all doing.

Baroness Beeban Kidron: Good morning. I regret not being able to be with you in person at the Internet Governance Forum today. This session is officially accredited as a preparatory event for the AI Action Summit, and I'm delighted to be able to feed into those discussions which will take place in Paris in February. As Nidhi and Jun will have shared, it's crucial that we develop artificial intelligence and other automated systems with one eye on how they will impact children.

Lena Slachmuijlder: Yeah. Okay. I mean, I also feel as though we've known the issues for a long time, and the only thing that changes anything is when they face fines or penalties or litigation. I'm just curious, because there are people from other countries in the room as well, and I'm wondering, since Five Rights has been doing some work globally: are we seeing the same conclusions in terms of the experience of other countries?

Baroness Beeban Kidron: Good morning. I regret not being able to be with you in person at the Internet Governance Forum today. This session is a preparatory event for the AI Action Summit. and I’m delighted that today’s conversation will feed into those which will take place in Paris.

Leanda Barrington-Leach: Thank you. Use the microphone, please. I'm going to…

Lena Slachmuijlder: I was asking the room whether others are also finding similar issues. Is there a sense from other countries that they also need to get in line and have some really robust regulation, like we heard from the experience in Europe? It's just an invitation for others online or in the room. The work that I do with the Council on Tech and Social Cohesion is also aligned with Five Rights: it's trying to regulate the upstream design features that lead to these kinds of harms and also to polarization.

Leanda Barrington-Leach: I'm so sorry, because I was only half listening; afterwards you're going to tell me that again, because I think I want to know. We also have online our speakers Jun and Nidhi. Anything that you have heard, please wave or put something in the chat, and then I will see it and bring you in. Otherwise, is there anything else from the room? I have a question online. Okay, so I have a question online from Dorothy Gordon from UNESCO, who's asking: how involved are consumer rights organizations in working to get major tech companies to stop abusing children's rights in this way? I believe we need consumers to deliberately avoid using dangerous products. So that's public awareness and almost boycotting, I guess, that consumer organizations can do, beyond other things like submitting complaints. Ansgar, do you want to take that?

Ansgar Koene: I'm afraid the only part that consumer rights organizations are playing in this space that I can really speak to is that, yes, at least in Europe, when it comes to the standardization for the AI Act, we do have participation by the consumer rights organization ANEC, helping to make sure that consumer concerns are taken into consideration as the standards are being developed; and this is input from a non-industry player. I'm not aware of activities being done around educating users as to the impact that these types of technologies may have on them, and therefore helping them make an informed choice as to whether they want to use these tools. Obviously, there are NGOs that are working on these types of things as well. Mozilla, every year before Christmas, has some activities around which digital tools may be spying on you, etc., but those will only reach a particular subsection of the population who are generally already aware. I imagine this is a space where we also need support from the public sector to do campaigns to help people better understand this. Who has the resources to reach the whole population, as opposed to only the people who are already looking for this kind of information? I think that is going to be a big question.

Leanda Barrington-Leach: Thank you, Ansgar. Since the tech is taking slightly longer, I'm going to take the liberty of jumping in and reading the rest of Baroness Kidron's remarks a little more quickly. AI is not new. Artificial intelligence was created in 1955. AI systems are really a continuation of the algorithmic and automated systems which we all have experience with. This also means that AI is not too complicated to understand, and it is certainly not too complicated to regulate. Secondly, AI is not all one thing. Generative AI systems are different: they are based on machine learning; and artificial general intelligence, theoretical for now, is wholly different again. Generative AI is often unnecessary to tackle tasks that could be automated through other specialized AI or even non-AI approaches, which are more accurate and more energy-efficient. The choice of model should be based on necessity and proportionality. AI companies are just businesses; like businesses in the past, they seek to maximize profit, and they seek to sow doubt and uncertainty to keep authorities from effectively legislating and to prevent citizens from demanding effective legislation. But we should not tolerate AI exceptionalism. It's no secret that adults have failed to provide children with a rights-respecting online environment. As AI is no different from previous technologies, the same will happen if we do not act immediately. This is why, over the past year, building on global consensus and working hand in hand with global experts in the field, we have developed an AI code for children. Launching in the coming months, the code will provide a clear and practical path forward for designing, deploying and governing AI systems taking into account children's rights and needs. It is an important and necessary correction to the persistent failure to consider children, and a vital blueprint for delivering on the commitments to children in the Global Digital Compact and in regulatory advances such as the AI Act. We need from the outset to consider how to build the rights and needs of every child into the design and governance of AI systems. The code, which we will launch at Paris hopefully, will be for anyone who designs, adapts or deploys an AI system that impacts children. It is practical, actionable, adaptable and applicable to all kinds of AI systems. It mandates certain expertise and actions and raises questions designed to reveal gaps and risks. It leaves a level of autonomy to find sufficient mitigation measures. It is intended to support existing regulatory initiatives and provide a standard for those jurisdictions that are considering introducing new legislation or regulation. In the Global Digital Compact, all governments agreed on "the urgent need to assess and address the potential impact, opportunity and risks of artificial intelligence systems on the well-being and rights of individuals". That's a quote. Children represent one third of internet users, are early adopters of technology, and have unique rights and vulnerabilities. They must be at the centre of our discussions and considerations. I hope I didn't misquote any of that, or that she didn't change it in the final version. But you can always quote me, because I agree with all of that.
I think putting children at the centre of the conversation means that, for the last few minutes, I’d like to go back to Nidhi, if that is OK with you. Nidhi, are you still with us?

Nidhi Ramesh: Yes, I am.

Leanda Barrington-Leach: Could you bring Nidhi up, please? OK, I don’t think our tech people are listening to me again. Nidhi, if you’re with us, I’d love to hear your reflections. You talk, hello again, not only to your peers and colleagues all the time, but also to a big group of child ambassadors within Five Rights. What conclusions do you draw from this, and what are some of the things that you and your colleagues would like to tell us and for us to take forward?

Nidhi Ramesh: Thank you, Leanda. That’s such an interesting question. So as Five Rights youth ambassadors, we often- Can’t hear you yet. Oh, sorry. My mic should be on. All right.

Leanda Barrington-Leach: Try again.

Nidhi Ramesh: Hello. Can you hear me now?

Leanda Barrington-Leach: No, still not. AI will one day solve all of these problems, I am sure. Oh, Nidhi, can we hear you now?

Nidhi Ramesh: Yes. Hello. Can you hear me now?

Leanda Barrington-Leach: We can, we can. Go ahead.

Nidhi Ramesh: Perfect. All right. Then I’ll just start again. Thank you so much, Leanda. That’s such an interesting question. As Five Rights Youth Ambassadors, we often discuss the opportunities and the risks of AI, especially for children and young people. While we see its potential, there are some key concerns that stand out to us. One major issue is education. AI can make homework quicker, but it risks taking away from essential learning skills. As one of my peers put it, it’s making homework easier, but at what cost to our learning? And we worry a lot about losing creativity and critical thinking, skills we’ll need later on in life. Another significant concern is privacy. AI systems can analyze so much about us, even from just a photo or a message. One ambassador shared how AI is amazing and how it can help us, but also how scary it is how much it knows about us. Many of us feel uncomfortable with how much information we’re unknowingly sharing, especially when we’re not informed about how it’s being used, as I mentioned earlier in my first intervention. We’ve also talked a lot about the psychological risks of AI. Systems designed for companionship, for example, might seem helpful, but can have serious consequences in the long term. As one of our ambassadors said, it’s about more than just privacy or technology, it’s about our values: relying on machines that mimic empathy could affect our real-world social skills, especially for vulnerable young people. And of course, there’s the growing threat of deepfakes. Marco, one of our youth ambassadors, summed it up well by saying that AI tools are developing, deepfakes are becoming scarier, and they can ruin people’s online footprint. So to sum it up, while AI brings immense opportunities, it’s these educational, ethical and privacy-related risks that concern us the most. It’s crucial that AI systems are designed to protect young people, with safeguards that prioritize our rights and well-being.

Leanda Barrington-Leach: Thank you so much. Thank you so much, Nidhi. It’s always wonderful to hear from you and from your fellow youth ambassadors. I hope that the code that we’ll bring out is something that will serve you. But we are going to get your direct feedback, of course, on it very soon. We really hope that you will find some of the elements there to address some of the things that you have brought up. We have two minutes to go. And we have had a very eventful session, I would say. But I don’t know, Jun, if you’re online, if you want to come back with any closing words.

Jun Zhao: Hi, Leanda, can you still hear me all right?

Leanda Barrington-Leach: We can.

Jun Zhao: Oh, fabulous. Fantastic. And what a fabulous session. I tried to come in a few times, and I think things got confused a few times as we were trying to manage the video and the hybrid attendance. What I really want to come in on is two things: the discussion about safety by design, and parents’ role in safeguarding children. I agree with Ansgar’s point. We are definitely moving in the right, positive direction, but it’s a really challenging domain. I know there are a lot of Gen AI companies embracing safety by design principles and trying to integrate them really actively into their design and development processes now, which is really encouraging to see, especially if they are taking the perspective of children. But it’s very complex, because the risks are quite diverse. I agree with what Ansgar said: some of the companies may not be aware of some of the risks for children, but I think some of them do know. At the same time, there’s a challenge because of the diverse risks: some of them may not seem to have direct impacts or immediate safety implications for children. Some of the risks that Nidhi raised, like exploitation and manipulation, may not be seen as harms, but they are harmful nevertheless. So it will be quite interesting to see in the next couple of years, when the EU AI Act as well as many other acts come into force, how all this understanding of the various forms of risks and harms plays out in legislative enforcement, and how we can all work together to facilitate better awareness and better translation from policies into practical guidance, so we can create a better AI world for our children and our society as a whole. And I think that’s all I’ve got to say, Leanda. I hope we can finish on a positive note, with something exciting to look forward to in 2025.

Leanda Barrington-Leach: Thank you very much, Jun. Indeed, there remain outstanding questions. But as you have said, there’s still plenty going on, and in 2025 there are lots of things that we can deliver on. I would just like to reference at the end that in the UN framework, and we’re here under the UN’s umbrella, Governing AI for Humanity, there was a very, very clear point, which is that AI must not be experimenting on children. We might be using some aspects of AI in new and novel ways, and we can innovate all we want, but this is something where we know that our children are too precious and grow up too fast. And as you said, Nidhi, AI is even impacting your education; we are talking about the generations of the future. We must not be experimenting on children. This is what we will take to the Paris summit, with all of this input. We hope that all of you, online and in the room, will come behind us, have a look at this code, and see how it can be bettered and improved so that it can deliver on these issues for kids. Thank you so much. I’d like you all to join me in thanking our panelists for this very rich discussion. I’m very grateful for your patience in particular. Thanks so much. Thank you so much for such an amazing session, Leanda. Thank you, everyone.

Nidhi Ramesh

Speech speed: 156 words per minute
Speech length: 1168 words
Speech time: 448 seconds

AI is ubiquitous in children’s lives but often operates without their awareness

Explanation: AI is present on every platform, application, and website that children use. Many children don’t realize that most of their online interactions are through AI algorithms making choices and decisions for them.

Evidence: Examples given include social media algorithms, voice assistants, and personalized learning tools.

Major Discussion Point: Impact of AI on Children

AI poses risks to children’s privacy, mental health, and learning

Explanation: AI systems can analyze a lot of personal information from children, even from just a photo or message. There are concerns about losing creativity and critical thinking skills due to AI-assisted homework.

Evidence: Quotes from youth ambassadors expressing concerns about privacy and the impact of AI on learning.

Major Discussion Point: Impact of AI on Children

Agreed with: Jun Zhao, Leanda Barrington-Leach
Agreed on: AI poses risks to children’s privacy and well-being

Jun Zhao

Speech speed: 121 words per minute
Speech length: 1695 words
Speech time: 839 seconds

AI systems can collect sensitive data from children without proper safeguards

Explanation: AI applications designed for children often use sensitive personal data, including genetic and behavioral data. This data collection often occurs without full consent or necessity for the application’s function.

Evidence: Systematic review of about 200 pieces of work from the human-computer interaction research community.

Major Discussion Point: Impact of AI on Children

Agreed with: Nidhi Ramesh, Leanda Barrington-Leach
Agreed on: AI poses risks to children’s privacy and well-being

AI chatbots and recommendation systems can expose children to inappropriate content

Explanation: AI-powered systems can amplify and direct children to harmful content. This is particularly concerning for children with mental health issues who may be exposed to more risky content.

Evidence: Studies showing that recommendation systems can actively amplify and direct children to harmful content.

Major Discussion Point: Impact of AI on Children

Agreed with: Nidhi Ramesh, Leanda Barrington-Leach
Agreed on: AI poses risks to children’s privacy and well-being

Safety by design principles should be integrated into AI development

Explanation: There is a positive trend of AI companies embracing safety by design principles and integrating them into their development processes. However, the complexity of diverse risks for children makes this challenging.

Evidence: Mention of Gen AI companies actively integrating safety by design principles in their processes.

Major Discussion Point: Designing AI Systems with Children in Mind

Leanda Barrington-Leach

Speech speed: 153 words per minute
Speech length: 2875 words
Speech time: 1122 seconds

AI can amplify existing harms and systemic problems affecting children

Explanation: AI is supercharging some of the harms and systemic problems that already exist in the digital environment. This is a global problem, as children around the world are using the same technology and facing similar risks and harms.

Major Discussion Point: Impact of AI on Children

Agreed with: Nidhi Ramesh, Jun Zhao
Agreed on: AI poses risks to children’s privacy and well-being

AI should not be used to experiment on children

Explanation: While AI brings opportunities, it should not be used to experiment on children. Children are too precious and their development too important to be subject to experimental AI technologies.

Evidence: Reference to the UN framework on governing AI for humanity, which states that AI must not experiment on children.

Major Discussion Point: Designing AI Systems with Children in Mind

Brando Benifei

Speech speed: 137 words per minute
Speech length: 1135 words
Speech time: 493 seconds

The EU AI Act includes some provisions to protect children, but more is needed

Explanation: The EU AI Act now includes provisions for child protection, which were initially lacking in the original text. However, there is still a need for more comprehensive protection measures for children in AI systems.

Evidence: Examples of prohibitions in the AI Act, such as emotion recognition in educational settings and indiscriminate use of AI-powered biometric cameras.

Major Discussion Point: Regulation and Governance of AI for Children’s Protection

Agreed with: Ansgar Koene, Baroness Beeban Kidron
Agreed on: Need for AI regulation and governance to protect children

Differed with: Ansgar Koene
Differed on: Approach to AI regulation

Global cooperation and dialogue is needed to build common frameworks

Explanation: There is a need for continued global dialogue to build a common framework of action for AI governance. This involves not only governments but also civil society organizations and parliamentarians.

Evidence: Mention of working with different governments and parliaments, and the importance of the IGF parliamentary track.

Major Discussion Point: Regulation and Governance of AI for Children’s Protection

Ansgar Koene

Speech speed: 131 words per minute
Speech length: 1447 words
Speech time: 659 seconds

Technical standards are still being developed to operationalize AI regulations

Explanation: Standards to provide clear operational guidance on how to comply with AI regulations are still in development. There is a rush to meet deadlines set by legislation like the AI Act, but this rush may lead to insufficient consideration of children’s concerns.

Evidence: Mention of the European standardization bodies CEN and CENELEC working on standards for AI Act compliance.

Major Discussion Point: Regulation and Governance of AI for Children’s Protection

Agreed with: Brando Benifei, Baroness Beeban Kidron
Agreed on: Need for AI regulation and governance to protect children

Differed with: Brando Benifei
Differed on: Approach to AI regulation

Organizations often lack awareness of how their AI systems impact children

Explanation: Many organizations deploying AI systems are not aware that their systems may impact children. This lack of awareness makes it challenging to comply with regulations aimed at protecting children.

Major Discussion Point: Designing AI Systems with Children in Mind

There is a need for subject matter experts on children’s impacts in AI development

Explanation: Organizations developing AI systems need to include subject matter experts who understand the potential impacts on children. This expertise is crucial for considering how AI systems could affect children during the development process.

Evidence: Reference to the IEEE 2089 standard on age-appropriate design, which calls for including such experts in AI development.

Major Discussion Point: Designing AI Systems with Children in Mind

Consumer rights organizations have a role in advocating for safer AI products

Explanation: Consumer rights organizations are participating in the development of standards for AI regulations. They help ensure that consumer concerns are taken into consideration in the standardization process.

Evidence: Mention of ANEC (the European consumer voice in standardisation) participating in AI Act standardization efforts.

Major Discussion Point: Regulation and Governance of AI for Children’s Protection

Baroness Beeban Kidron

Speech speed: 110 words per minute
Speech length: 436 words
Speech time: 236 seconds

An AI code for children is being developed to provide practical guidance

Explanation: Five Rights Foundation is developing an AI code for children to provide clear and practical guidance for designing, deploying, and governing AI systems with children’s rights and needs in mind. This code aims to address the persistent failure to consider children in AI development.

Evidence: Mention of the code being developed over the past year, building on global consensus and working with global experts.

Major Discussion Point: Regulation and Governance of AI for Children’s Protection

Agreed with: Brando Benifei, Ansgar Koene
Agreed on: Need for AI regulation and governance to protect children

Peter Zanga Jackson

Speech speed: 116 words per minute
Speech length: 158 words
Speech time: 81 seconds

Families and schools have a role in educating children about AI

Explanation: Families, as the fundamental unit for children, should be educated about AI and its impacts. This education should start at home before expanding to broader societal efforts.

Major Discussion Point: Designing AI Systems with Children in Mind

Agreements

Agreement Points

AI poses risks to children’s privacy and well-being

Speakers: Nidhi Ramesh, Jun Zhao, Leanda Barrington-Leach

Arguments: AI poses risks to children’s privacy, mental health, and learning; AI systems can collect sensitive data from children without proper safeguards; AI chatbots and recommendation systems can expose children to inappropriate content; AI can amplify existing harms and systemic problems affecting children

Summary: Multiple speakers highlighted the various risks AI poses to children, including privacy violations, exposure to inappropriate content, and potential negative impacts on mental health and learning.

Need for AI regulation and governance to protect children

Speakers: Brando Benifei, Ansgar Koene, Baroness Beeban Kidron

Arguments: The EU AI Act includes some provisions to protect children, but more is needed; Technical standards are still being developed to operationalize AI regulations; An AI code for children is being developed to provide practical guidance

Summary: Speakers agreed on the necessity of developing comprehensive regulations, standards, and guidelines to ensure AI systems are designed and deployed with children’s rights and safety in mind.

Similar Viewpoints

Speakers: Ansgar Koene, Jun Zhao

Arguments: Organizations often lack awareness of how their AI systems impact children; There is a need for subject matter experts on children’s impacts in AI development; Safety by design principles should be integrated into AI development

Summary: Both speakers emphasized the importance of incorporating children’s perspectives and expertise in AI development processes to ensure systems are designed with children’s safety and rights in mind.

Unexpected Consensus

Global cooperation for AI governance

Speakers: Brando Benifei, Leanda Barrington-Leach

Arguments: Global cooperation and dialogue is needed to build common frameworks; AI can amplify existing harms and systemic problems affecting children

Summary: While not explicitly stated by all speakers, there was an underlying agreement on the need for global cooperation to address AI’s impact on children, recognizing it as a global issue requiring coordinated solutions.

Overall Assessment

Summary: The speakers generally agreed on the significant risks AI poses to children’s privacy, safety, and well-being, as well as the urgent need for comprehensive regulations and guidelines to protect children in the AI landscape.

Consensus level: High level of consensus on the main issues, with speakers from various backgrounds (youth, academia, policy-making) sharing similar concerns and proposed solutions. This strong agreement implies a clear direction for future policy-making and research in the field of AI governance for children’s protection.

Differences

Different Viewpoints

Approach to AI regulation

Speakers: Brando Benifei, Ansgar Koene

Arguments: The EU AI Act includes some provisions to protect children, but more is needed; Technical standards are still being developed to operationalize AI regulations

Summary: While Benifei emphasizes the progress made in including child protection provisions in the EU AI Act, Koene highlights the ongoing challenges in developing technical standards to implement these regulations effectively.

Unexpected Differences

Overall Assessment

Summary: The main areas of disagreement revolve around the effectiveness of current regulatory efforts and the specific approaches needed to protect children in AI development and deployment.

Difference level: The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but offer different perspectives or emphasize different aspects of the problem. This suggests a general consensus on the importance of protecting children in AI development, but some differences in how to achieve this goal effectively.

Partial Agreements

Speakers: Nidhi Ramesh, Jun Zhao, Ansgar Koene

Arguments: AI poses risks to children’s privacy, mental health, and learning; AI systems can collect sensitive data from children without proper safeguards; Organizations often lack awareness of how their AI systems impact children

Summary: All speakers agree on the risks AI poses to children, but they differ in their focus. Ramesh emphasizes the impact on learning and mental health, Zhao highlights data collection issues, and Koene points out the lack of awareness among organizations developing AI systems.


Takeaways

Key Takeaways

AI is pervasive in children’s lives but often operates without their awareness or proper safeguards

AI can amplify existing harms and pose risks to children’s privacy, mental health, and learning

The EU AI Act includes some provisions to protect children, but more comprehensive regulation is needed

Technical standards and practical guidance (like the proposed AI code for children) are still being developed to operationalize AI regulations

Global cooperation and dialogue is needed to build common frameworks for protecting children in AI systems

Organizations developing AI often lack awareness of how their systems impact children

Safety by design principles should be integrated into AI development with input from child impact experts

Resolutions and Action Items

Launch an AI code for children at the upcoming Paris AI Action Summit to provide practical guidance on designing AI systems with children’s rights in mind

Continue developing technical standards to operationalize the EU AI Act’s provisions related to children

Increase awareness and education for families and schools about AI’s impact on children

Unresolved Issues

How to effectively enforce AI regulations and standards across different jurisdictions

How to balance innovation in AI with protecting children from potential harms

How to ensure AI companies prioritize children’s rights and safety over profit motives

How to address the diverse and sometimes subtle risks AI poses to children beyond immediate safety concerns

Suggested Compromises

Allowing some autonomy for AI developers to find appropriate mitigation measures while mandating certain expertise and actions to protect children’s rights

Thought Provoking Comments

“Many children don’t realize that most of their interactions with the online world might actually be through various AI algorithms, making choices, recommendations, and even decisions for them.”

Speaker: Nidhi Ramesh

Reason: This highlights a critical lack of awareness among children about how AI is shaping their online experiences, raising important questions about informed consent and digital literacy.

Impact: Set the tone for discussing the hidden influence of AI on children’s lives and the need for greater transparency and education.

“Our recent survey in the UK shows that children are twice as likely to adopt these new technologies than adults.”

Speaker: Jun Zhao

Reason: Provides concrete data showing children’s rapid adoption of AI technologies, emphasizing the urgency of addressing potential risks.

Impact: Shifted the discussion towards the need for proactive measures, given how quickly children are embracing AI.

“The original text from the European Commission was unfortunately lacking completely the dimension of child protection; it was not there at all, so we had to bring it in with amendments from the European Parliament, with our drafting work and the negotiations that followed.”

Speaker: Brando Benifei

Reason: Reveals how child protection was initially overlooked in major AI legislation, highlighting the importance of advocacy and the role of policymakers in addressing this gap.

Impact: Focused the conversation on the legislative process and the need for continued vigilance to ensure children’s rights are protected in AI regulations.

“The practice that we are seeing is to bring things to the market, and in that rush the so-called functional requirements (that is to say, the things that need to be there in order to produce the type of output they want to create) get the prominence and the investment, while the so-called non-functional requirements (this terminology is terrible), such as making sure that there will not be negative consequences, are marginalized in the design process unless there is a significant factor behind it, such as the risk of a huge fine.”

Speaker: Ansgar Koene

Reason: Provides insight into the industry practices that prioritize functionality over safety, especially for children, unless there are strong regulatory incentives.

Impact: Deepened the discussion on the challenges of implementing child protection measures in AI development and the role of regulation in incentivizing change.

“AI can make homework quicker, but it risks taking away from essential learning skills. As one of my peers put it, it’s making homework easier, but at what cost to our learning?”

Speaker: Nidhi Ramesh

Reason: Offers a nuanced perspective on the double-edged nature of AI in education, highlighting concerns about its impact on fundamental learning processes.

Impact: Brought the discussion back to the practical, everyday implications of AI for children, particularly in education, and raised questions about long-term consequences.

Overall Assessment

These key comments shaped the discussion by highlighting the pervasive yet often invisible influence of AI on children’s lives, the rapid pace of adoption, the initial oversight in legislation, the challenges in implementing protective measures, and the complex implications for education and development. The discussion evolved from raising awareness about the issue to exploring regulatory approaches and industry practices, and finally to considering the nuanced impacts on children’s learning and development. This progression deepened the conversation, moving from broad concerns to specific challenges and potential solutions, while consistently emphasizing the need for a child-centric approach to AI development and regulation.

Follow-up Questions

How can families be better educated and involved in protecting children online?

Speaker: Peter Zanga Jackson

Explanation: This question addresses the fundamental role of families in safeguarding children’s online experiences and suggests the need for more awareness and education at the family level.

How can we ensure AI systems are designed with children’s well-being as a core priority?

Speaker: Baroness Beeban Kidron

Explanation: This area of research is crucial for developing AI systems that prioritize children’s rights and safety, rather than exploiting their vulnerabilities for profit.

How can we better implement the principle of ‘safety by design’ in AI and other technologies?

Speaker: Jutta Croll

Explanation: This question highlights the need to integrate safety considerations from the earliest stages of technology development, especially for systems that may be used by children.

Are other countries outside of Europe developing similar robust regulations for AI and children’s rights?

Speaker: Lena Slachmuijlder

Explanation: This area of research is important for understanding the global landscape of AI regulation and children’s rights protection across different jurisdictions.

How can consumer rights organizations be more involved in pressuring tech companies to respect children’s rights?

Speaker: Dorothy Gordon (UNESCO)

Explanation: This question explores the potential role of consumer advocacy in driving change in tech company practices regarding children’s rights and AI.

How can we address the educational risks of AI, such as its impact on critical thinking and creativity?

Speaker: Nidhi Ramesh

Explanation: This area of research is important for understanding and mitigating the potential negative effects of AI on children’s learning and skill development.

How can we better inform children about how their data is being used by AI systems?

Speaker: Nidhi Ramesh

Explanation: This question addresses the need for transparency and education around AI and data privacy for young users.

How can we address the psychological risks of AI, particularly systems designed for companionship?

Speaker: Nidhi Ramesh

Explanation: This area of research is crucial for understanding and mitigating the potential long-term psychological impacts of AI companionship on children’s social skills and emotional development.

How can we better protect against the threat of AI-generated deep fakes, especially for young people?

Speaker: Nidhi Ramesh

Explanation: This question addresses the growing concern of AI-generated misinformation and its potential impact on children’s online safety and reputation.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.