Safeguarding Children with Responsible AI
20 Feb 2026 18:00h - 19:00h
Session at a glance
Summary
This discussion at the India AI Impact Summit focused on the responsible development and governance of AI systems for children, emphasizing the need for proactive safety measures rather than reactive regulation. Baroness Joanna Shields opened by arguing that AI governance for children represents the clearest test of responsible technology development, warning that children must not become “beta testers” for AI systems that simulate intimacy and human connection at unprecedented scale.
Young AI innovator Rahul John Aju provided a child’s perspective, emphasizing the importance of teaching foundational skills before introducing AI tools, comparing it to learning basic mathematics before using calculators. He stressed that schools should teach children “how to think” rather than “what to think,” while highlighting the massive demand for AI education among young people. The panel discussion featured experts from UNICEF, OpenAI, LEGO Education, and academic institutions who explored three key areas: AI’s potential to enhance learning through personalized education, the risks of over-dependency and cultural homogenization, and practical governance solutions.
Key recommendations included implementing “safety by design” principles with robust age verification systems, conducting real-world evaluations of AI systems in deployment contexts, and ensuring children’s active participation in governance decisions. OpenAI’s Chris Lehane outlined a comprehensive child safety package including age assurance, parental controls, and restrictions on targeted advertising. The panelists emphasized the need for AI literacy programs that empower both children and teachers, while warning against creating a technological monoculture that erases cultural diversity. The discussion concluded with calls for “inclusion by default” and treating children as partners rather than passive recipients in shaping AI’s future development.
Key points
Major Discussion Points:
– AI Safety and Child Protection: The need to move from a “post-harm regulatory model” (reactive approach used with social media) to “safety by design” for AI systems, with emphasis on age-appropriate experiences, robust age assurance technology, and protecting children from simulated intimacy that they cannot distinguish from authentic human connection.
– AI Literacy and Education: The importance of teaching children foundational skills before introducing AI tools, similar to learning basic math before using calculators. Discussion focused on empowering children to understand AI systems rather than just use them, with emphasis on critical thinking, agency, and personalized learning approaches.
– Technical Evaluation and Real-World Testing: The necessity of conducting ongoing studies and evaluations of AI systems in real-world deployment contexts rather than just laboratory settings, including understanding how children are actually exposed to content, profiling, and commercial influences on platforms.
– Global Governance and Cultural Diversity: Balancing the need for universal safety standards with respect for cultural contexts, while avoiding the risk of creating a “monoculture” dominated by models from the Global North. Discussion included the need for globally interoperable baseline protections while allowing for local customization.
– Children as Active Participants: Emphasizing that children should be involved in the governance and design of AI systems rather than being passive recipients, recognizing their agency and ability to provide valuable feedback on what works and doesn’t work for them.
Overall Purpose:
The discussion aimed to establish frameworks for responsible AI development that prioritizes children’s safety, well-being, and agency. The session sought to move beyond theoretical principles to practical, enforceable measures for protecting children while harnessing AI’s potential to enhance learning, creativity, and access to knowledge.
Overall Tone:
The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in Baroness Shields’ opening about AI engineering “simulated intimacy”), evolved into energetic engagement during Rahul’s presentation, and settled into constructive problem-solving during the panel discussion. The tone remained collaborative and forward-looking, with participants acknowledging both tremendous opportunities and significant risks while emphasizing the need for immediate, thoughtful action rather than reactive measures.
Speakers
Speakers from the provided list:
– Baroness Joanna Shields – Former UK government roles focused on Internet safety and harms, helped build major child online safety coalitions internationally
– Rahul John Aju – Widely recognized as the “AI kid of India,” young AI innovator who has built and deployed real-world AI tools, founded his own AI startup (AIRM Technologies), and advised public institutions on using AI
– Thomas Davin – Director of the Office of Innovation at UNICEF, co-moderator
– Urvashi Aneja – Director of the Digital Futures Lab, co-moderator
– Chris Lehane – Chief Global Affairs Officer for OpenAI
– Tom Hall – Vice President and General Manager at LEGO Education
– Maria Bielikova – Director of the Kempelen Institute for Intelligent Technologies, works on user modeling, personalization, and trustworthy AI, has spoken publicly about disinformation risks
– Moderator – Session moderator/MC (main event moderator, distinct from the panel co-moderators)
Additional speakers:
None – all speakers mentioned in the transcript were included in the provided speakers names list.
Full session report
This discussion at the India AI Impact Summit on day five brought together diverse stakeholders to address AI governance for children. The session was co-moderated by Thomas Davin, Director of the Office of Innovation at UNICEF, and Urvashi Aneja, Director of the Digital Futures Lab; UN Under-Secretary-General Amandeep Gill was unable to attend due to Delhi traffic.
Opening: The Urgency of Proactive AI Governance
Baroness Joanna Shields opened with a stark warning about AI regulation, drawing from her 15 years of experience in technology policy. She argued that the “post-harm regulatory model” used with social media would be inadequate for artificial intelligence, stating that AI governance for children represents “the clearest test yet on whether we are governing this technology responsibly and for the public good.”
The Baroness introduced the concept of “artificial intimacy,” explaining that unlike social media platforms which facilitate user interactions, AI systems create direct, personalised relationships with children. These systems are “increasingly embedded in how children learn, communicate, create, and form their own sense of self,” creating “simulated intimacy and human-like interaction at a scale that is hard to imagine.” The risk lies in children’s developmental inability to distinguish between authentic human connection and artificial intimacy, particularly when AI systems are designed to be “persuasive, emotionally responsive, and always available.”
She emphasised that children must not become “beta testers for our AI-enabled world,” noting early indicators of harm including “emotional dependency, manipulation, deep fake abuse, and in some cases, devastating loss.”
A Youth Perspective on AI Literacy
Rahul John Aju, introduced as “the AI kid of India,” brought an essential youth perspective to the discussion. Having bunked his exam to attend, Rahul demonstrated the agency that would become a key theme. He is the founder of AIRM Technologies and ThinkCraft Academy, and has created tools like “Rescue AI” while reaching 7 lakh (700,000) people through his educational content.
Rahul’s core argument centred on educational transformation. He observed that while his father taught him to “question everything,” AI has created a situation where “even parents can’t figure out what is the right information and fake information.” He advocated for mastering foundational skills before introducing AI tools, noting: “I only got access to [calculators] once I learned the basics of maths. I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should use AI.”
He argued that “schools teach us what to think but I believe schools should teach us how to think,” emphasising critical thinking over rote learning. Rahul demonstrated practical AI literacy by showing the audience tools like Notebook LM and StudyFetch, while warning about trends like using AI for Ghibli-style content creation without understanding the implications.
Industry Perspectives on Safety and Education
Chris Lehane from OpenAI described AI as “an incredibly leveling technology” that could provide every child with their own AI tutor, capable of individualised teaching adapted to different learning styles. He argued that current educational systems, designed for the industrial age, are misaligned with an AI-enabled future that will reward individual agency and creativity.
OpenAI’s child safety approach includes age assurance technology that defaults users to under-18 models when age cannot be determined, comprehensive parental controls, prohibition of targeted advertising to children, and external review processes. The company has also restricted “kid-specific bots” until adequate guardrails are established, acknowledging particular risks of AI systems designed to form relationships with children.
Tom Hall from LEGO Education highlighted a critical implementation gap: while 80% of teachers recognise AI literacy as foundational, only 41% feel prepared to teach it. His approach emphasises empowerment over restriction, describing it as “handing children a screwdriver and saying, here is a fairly complex box, but let’s take it apart and let’s understand what’s under the hood.” LEGO Education has developed policy toolkits to support educators in this transition.
Research Evidence and Real-World Impact
Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies, brought empirical evidence highlighting gaps between policy intentions and outcomes. Her research on children’s TikTok exposure revealed that while children see fewer formal advertisements, they are “exposed five times more to profiling to the topics with influencers and so on” – circumventing traditional advertising restrictions.
This finding supported her argument that current AI systems are “so complex that we cannot actually measure something that we don’t know. We can observe it and this is quite important to do a lot of studies.” She called for independent evaluation studies rather than relying solely on company analytics, referencing frameworks like the Digital Services Act in Europe.
Bielikova offered a memorable analogy: “It is the same as we will prohibit children to go to the city. But we should know what is going on and we should travel with them through this environment.” This suggests guided exploration rather than blanket prohibitions.
Governance Challenges and Cultural Considerations
The discussion revealed consensus on fundamental principles but complexity in implementation. Baroness Shields warned about cultural homogenisation if AI models primarily reflect Global North perspectives, risking the development of a “monoculture” that would mean “we will lose so much of our cultural diversity, our uniqueness as people.”
She proposed technical solutions through the Open Age Alliance’s development of portable “age keys” that travel with children across platforms, enabling graduated responses rather than blanket age restrictions. However, Lehane acknowledged that privacy regulations, particularly in Europe, create limitations for age assurance technologies.
Unresolved Questions and Future Challenges
Thomas Davin raised a provocative concern about AI’s effectiveness potentially harming development: “Can we design a model that actually gives the wrong answer on purpose so that the child actually struggles because we know that grit is going to be one of the huge skills of tomorrow?” This highlighted tensions between helpful assistance and preserving essential human capabilities.
The discussion also grappled with a sobering statistic Davin shared: “7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age,” underscoring the urgency of educational transformation.
Key Principles and Next Steps
Three core directions emerged from the discussion:
Safety by Design: Proactive safety measures built into AI systems from the outset, including age-appropriate content, robust privacy protections, and effective redress mechanisms.
Cultural Inclusion: AI development that represents diverse perspectives, languages, and contexts, including solutions that work for offline and unconnected populations.
Children as Partners: Moving beyond paternalistic protection to include children as active participants in governance, as demonstrated by Rahul’s meaningful engagement with complex policy questions.
Conclusion
The session demonstrated that effective AI governance for children requires technical solutions, regulatory frameworks, and fundamental reimagining of technology development approaches. The convergence between industry and advocacy perspectives on core principles suggests maturing understanding, while unresolved implementation questions indicate significant work remains.
As Davin concluded with “measured optimism,” the discussion highlighted both tremendous potential and significant risks, emphasising the need for continuous evaluation and adaptation as AI systems become more sophisticated and pervasive. The challenge lies in translating these insights into concrete policies that can keep pace with technological advancement while preserving human agency and cultural diversity.
Session transcript
governance. How we manage AI on behalf of children will be the clearest test yet on whether we are governing this technology responsibly and for the public good. AI’s rapid adoption has been driven by extraordinary capabilities, but its continued place in society will depend on trust, and trust is built through responsible design. The post-harm regulatory model that we’ve seen with social media, reacting after damage, is not fit for purpose in the AI world. AI is fundamentally different. It is not a platform. It is increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. Inadvertently, AI is engineering simulated intimacy and human-like interaction at a scale that is hard to imagine.
When a model says to a child, I care, I understand, that’s not conscience, that’s code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available. That difference has implications not only for safety, but for mental health, identity formation, and long-term well-being. We have already seen what happens when the line blurs: emotional dependency, manipulation, deepfake abuse, and in some cases, devastating loss. Children must not be the beta testers for our AI-enabled world. We need age-appropriate experiences by default, with guardrails around systems that simulate intimacy without accountability.
The question is not whether AI will continue to advance. Of course it will. The question is whether we shape it in a way that safeguards the dignity and the development of children. And accountability begins with protection. And I’m excited to join this distinguished panel to have this important conversation, even though it’s day five of the summit. Thank you very much. I’m going to have to move this back up. I’m sorry.
Thank you so much, Baroness Joanna Shields, for setting the stakes so clearly. Too often, discussions about children and technology speak about children rather than with them. This session is intentional in doing otherwise. Therefore, I am very pleased to introduce Rahul John Aju, widely recognized as the AI kid of India. He is our featured young AI innovator who has built and deployed real-world AI tools, founded his own AI startup, and advised public institutions on using AI. Rahul, I’d like to invite you on stage.
Thank you. Thank you, guys. Thank you so much for the lovely introduction. I know safety is a bit of a boring topic, but it’s a very crucial topic. And I think if I stand there, no one is going to see me, so I’m using a hand mic. So hopefully everyone can see me. Yes? Can I get more energy? Hi, guys. Is this all you guys have? Hi. Perfect. So let’s get started. Starting with, you know, when I was young, my father used to tell me… Okay, I’m still young. I’m still young. Younger, younger. That’s what I bet. He still tells me that, Rahul, question everything. Be critical about everything. The slide changer is not working. Okay, without the slide changer also it will work.
Okay. Be critical about everything. Ask questions. So I did. Why does the chair have four legs? Why is the sky blue? And also, why do birds fly? Why can’t humans fly then? I bombarded him with a lot of questions. So he just took the phone and he’s like, Rahul, this is Google. Go search it. And so I did. But you know, while I was using Google, my parents also taught me one thing. How to figure out what is the correct information and the fake information. And that helped me a lot. But in this age of AI, how do you expect me to do it? I don’t think even parents can figure out what is the right information and fake information.
We all agree upon that? Yes? So how do we do that? Because curiosity is there in every child. I think I have enough curiosity. But it only becomes powerful if it’s guided the right way. So how do we guide the right way? Because right now we are just teaching kids how to talk to machines. Before we teach them how to… Question. Now I am just saying random quotes now but let’s dive deeper and see why. I will give an example. Everyone remembers the Ghibli trend? Everyone did it? I did it too. Guilty. But it was very fun to be honest. But what happened there was we were all just taking pictures, uploading our pictures to the cloud.
But we don’t even know what’s happening with it. We all agree, right? But right now kids are also doing the same thing, taking their pictures, uploading it to the cloud. But we tell children don’t be on social media, don’t upload your pictures to social media, don’t share your pictures to strangers and all, right? But what about the AI world? We are missing, the parallel is missing. We need to translate real world safety into the digital world. Because right now even most, okay, I have a question. How many of you guys read the 25 page terms and conditions? I don’t, right? You don’t know what’s happening behind the scenes. I don’t know what’s happening behind the scenes.
like most of these pictures were taken and obviously made for the model to be better for all of us, right? Right now a lot of companies are making sure children are safe but we don’t know about it. Are they safe? There are a lot of unknown AI companies as well. What do we do then? That’s right. Also I created an AI software where you can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and it will, thank you and it will literally tell them what to do, if you should use the product or not, right? So be careful. Anyway, so that tool was known as Rescue AI.
I’ve been working on it for the past three years for emergency, for law people, a lot of things. I don’t want to promote myself too much but I’m trying to do that. But what about when things like that are not there? What about if I didn’t do something like that? That is why AI awareness and safety is necessary. Obviously it is. That’s why you’re called here, Rahul. But how do you do that education? Right? How do you teach about AI? You know, recently I got a calculator in my school and I am so happy because I don’t have to do maths by multiplying, dividing manually. I can do it through calculators in my exam. By the way, I bunked my exam and came today.
Anyway, very happy for that. But you have to do all this calculation. But because I have a calculator, it’s way easier. But I only got access to it once I learned the basics of maths, right? I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should, I don’t know how to sing. Everyone will run away if I start singing. But you should know the basics and the foundations before you start using AI. I feel that’s when you teach about AI. That’s when you say, okay, AI can help you do the essay. AI can help you do the song. You should use the natural intelligence first.
Then start using artificial intelligence. I believe. It’s about using the combination of both, right? Yes. How many of you guys use natural intelligence? Everyone does, right? I’m mostly reliant on artificial intelligence. I’ve got to switch to it. But that’s what matters. But it’s not just about that. It’s also about how we teach, deliver topics. Starting with personalized content. You know, reading for me is kind of boring. I’m so sorry. But everyone learns differently. It might be through reading. It might be through listening. It might be through watching videos, which I prefer the most. That’s how I learn most of the things that are happening. From geopolitics to cricket, which I love. All of these things I’ve learned because I watch the video.
I’m a more visual person. It’s not one size fits all. But sadly, I feel education is. And I believe AI can generate content. Wait. It’s not believe. It’s already happening. You guys know about Notebook LM, right? It can generate videos. It can generate podcasts with one textbook content. That’s how I passed all my exams, to be honest. Even not just that, there is this tool StudyFetch where you can upload a chapter content and it will convert it into games. It’s not just about that. Everyone’s interest is different, right? Take a wild guess. What do you think my interests are? Wild guess. AI. AI, exactly. I am here to talk about AI guys. Cricket on the side but AI, right?
What if you connected E is equal to MC square and thought that through AI? You can do that too in this AI world. How do you do that? See, right now schools teach us what to think. I am repeating that. Schools teach us what to think but I believe schools should teach us how to think. How to think and how to think critical, how to think critically and how to face failures, how to communicate. These are basic things. Trust me, to stand here I had to face a lot of failures. But I learned how to do that because of my father. Trust me. I am giving you some credit. So, thank you. See, now he’s recording the audience, clapping for him.
Okay. So, that is what matters. And here’s one proof of demand, okay. I started something under my company, AIRM Technologies, ThinkCraft Academy. Yes, a bit of promoting, but ThinkCraft Academy, where I taught what is AI to building your own AI, LLM, fine tuning and all that, that in 30 days and more than 7 lakh people learned from that. And that course was completely free. And even there was another course going from what is AI to building your own AI as a startup founder, as a student. And that course was also completely free. But do you know how many people joined and learned from that? Again, 7 lakh people did, combined. that shows that people want to learn about AI.
It just has to be delivered the right way. The name of this course, I know everyone is searching right now. It’s on my YouTube channel. I’m a content creator too. Rahul the Rockstar. Yes, you might be thinking, what does he not do, right? I’m joking. But a lot of things go on. See, I am not saying a lot of big things. I believe we all should be open-minded. We should be open to learning more things. We should be curious, because AI will not take your job. But someone using AI can. But at the same time, the most important thing in the world of AI is also to be as human as possible. My name is Rahul.
Thank you so much. Is it okay if I take a small video? Influencer. Thank you so much. I have to do this too, guys. So it’s very simple. Like I said, I have to do this, totally forced to. I am just going to say, AI Impact Summit, how was the session? And you guys can be… if you guys didn’t like it, just say no, hated it. You guys can say that, be fully honest. I should say you… and also, right, I am totally joking. I am very grateful for this opportunity. You know, last November I was wanting to come here, I was like, register for this, and the fact that they called me to speak here, I am very grateful for this opportunity, and we have to thank them. Thank you. Shall we do it? AI Impact Summit Delhi by UN. Okay, not by, okay, what’s the worry, it’s a part, right? Okay, this is how many times I have to record a normal video. Thank you so much, UN, for calling me, and AI Impact Summit. The audience, how was the session? Was it boring? Yes? Was it boring? You guys are agreeing it’s boring? No. Thank you, guys. Thank you. I will not take too much time.
Thank you, Rahul, for that very thoughtful and energizing address. Your perspective underscores a key message for today. The question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly. We will now turn to our panel discussion. The discussion will be guided by two co-moderators with deep expertise at the intersection of innovation, policy, and child well-being. I am pleased to introduce our moderators, Thomas Davin, Director of the Office of Innovation at UNICEF, as well as Urvashi Aneja, Director of the Digital Futures Lab, and I invite them to guide the discussion.
Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co-moderators, and I’m… delighted to invite four leaders in the industry who are going to have the high bar of keeping you all as entertained and on substance as Rahul just did. So please, a warm welcome to Baroness Joanna Shields. Please, Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies. I took the liberty of not reintroducing the Baroness because I think she was already known to you. Chris Lehane, welcome, Chief Global Affairs Officer for OpenAI. Tom Hall, welcome, Vice President and General Manager at LEGO Education. Over to you, Urvashi.
Thank you, and thank you to the UNICEF team, and thank you for that very energizing opening. Yeah, I hope we can live up to that level of dynamism. Oh yes, can we invest? The Baroness wants to know if we can invest in your company. Okay, great. So on that very cheerful note, thank you all for being here, and I’m delighted to be able to moderate this discussion at the India AI Impact Summit. As someone who studies the governance choices that shape how technologies land in society, I’m interested in a very simple test: whether AI expands children’s agency and learning, or quietly narrows it through design incentives and design choices. So let’s begin with what we want AI to enable for children, at scale and in practice. Tom, perhaps I can start with you first. LEGO Education has recently pushed into computer science and AI learning in young classrooms. So what does AI literacy that supports well-being look like in real classrooms, and what should we do if we want AI to deepen creativity rather than replace it?
Well, first of all, thank you for having me, and very tough shoes to fill after Rahul’s spot there. I agree with so much of what he just had to say, and yeah, I’d love him to come and guide some more conversations. Being at this conference, I think we can all see that the rate of technological advancement is breathtaking. And I think often, whether we’re deeply involved in it or on the sidelines, there can be a feeling of incredible excitement; there can also be a feeling of, frankly, doom that this change is happening so fast. And I think that we kind of underestimate what the role of children is going to be in this journey. They might look at what’s happening in the world of AI and simply see it as a magic box that they can interrogate at the click of a button and ask simple questions and get really quite deep answers back.
It might be a funny video they want to produce. It might be the answer to a history exam that they have to submit on Monday morning. And what we think AI literacy is, is ultimately handing children a screwdriver and saying, here is a fairly complex box, but let’s take it apart and let’s understand what’s under the hood. And let’s understand all the components. So for us, AI literacy is allowing children and empowering them to really kind of interrogate the fundamental basis of computer science and artificial intelligence. And that’s teaching them how computers see the world as data, what is sensing, how to think about kind of predictability, how to think about bias and force conversations and accountability.
So we want to empower children to have deep thoughts about this. We also want to empower teachers. And I think right now, again, this pace is happening so fast. We asked some primary and middle school teachers in the United States what they thought about the pace of artificial intelligence in classrooms, and they’re all hugely excited, or a very high number of them are very excited, about what’s happening. They agree that artificial intelligence literacy needs to be a foundational skill in school. But while 80% of them see that, only 41% of them feel remotely ready to go and teach AI literacy in a classroom. So I think we have to provide teachers with the tools that are going to allow them to bring real-world learning to life.
Thanks, Tom. And I would love to, maybe at a later stage in the panel, come back to you on the how, because we do a lot of work with policy makers, trying to do capacity support with policy makers, and we really struggle in terms of how you actually embed AI literacy. So I imagine children can do that, and I think that’s a really good point. We really have to think about the pedagogy quite carefully to make sure that we are imparting that learning. So I’d love to kind of come back to you on that. Chris, if I can bring you in next. OpenAI has emphasized that AI systems will increasingly support learning, creativity, and problem-solving for young people.
From your perspective, where do you see the most promising opportunities for AI to positively shape children’s experiences, particularly in ways that strengthen agency, curiosity, or access to knowledge? And you’re not allowed to say what Rahul already said.
I was just going to say you got a great explanation of that. First of all, thanks for having me. Awesome panel. Baroness, always good to be with you. My son would be very jealous that I’m sitting next to the Lego guy. That’s a pretty cool thing. So thank you. And I’ll just also share, I may have to exit a little bit early, because I’m double-scheduled, so if so, my apologies in advance. I’ll try to answer your question at a macro level and then maybe a more specific level that I think picks up on your pedagogy question that you were just asking. First of all, this technology has enormous capabilities to basically individualize teaching.
I mean, you’re at a place where every kid in the world could, in effect, have their own AI tutor that would be able to help them to learn at the pace that they learn and in ways that they learn. I think amongst, you know, sort of insights in education is kids just learn in very different ways. And this technology could be incredibly liberating in terms of answering that. You mentioned the teachers. We do work with the largest teachers union in the United States, 400,000 teachers, to actually train them to develop the AI to, in fact, do some of that individualized teaching. But I think there’s maybe a level down from that, which I think you were sort of picking up on when you were setting up this question.
And that's the agency question. I know the U.S. public education system better than I know others around the world, so part of what I'll say is really based on my U.S. experience. But the U.S. K-12, I see the sign, yes, you're telling me to shut up, the K-12 public education system was designed for the industrial age. It was basically designed to take kids who were coming from rural and urban environments and teach them to be able to work in factories. That was the bells, the different classrooms that you would go to, the time that the day started, how long the school day lasted. But at its core, it was not just literacy in terms of teaching people to read, write, and do arithmetic.
It was actually creating an ethos about how you should work and participate in an industrial-age economy. I do think one of the big issues we're going to need to think about with this particular technology, which is going to really reward people like Rahul who take agency, is how we actually teach people agency. This technology is an incredibly leveling technology; it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to use it that way? Because if so, given the way we think about the social contract relationship between capital and labor and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.
Thanks, Chris. And I appreciate particularly the point around agency and how we can teach people agency. And I also wonder, sitting here in India, in the global south, one of the things that we can see very clearly is that agency in some sense is not only a factor of individual capacity, but has so much to do with the broader socioeconomic and institutional context you are in. And so I wonder how we think about agency across different contexts. Back to you.
Thank you so much. Let's get into the next segment, which is really about what happens when it fails, what happens when there's harm being done. From a UNICEF lens, of course, when we think of education in the world today, 7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age. 7 out of 10. So clearly the technology's potential is immense in realizing huge bounds in learning outcomes. What happens if actually we go the other way, and we suddenly have an over-dependency on that technology for children, where we maybe frame children's creativity in ways that actually constrain it or make it one-size-fits-all? So let's go into that segment of risks and harms: what are the accountability frameworks, and how do we protect against this? For those of you who are following carefully, I would say that the organizers of the panel have done beautiful work on gender. I don't know if you noticed, but it's boys on one side, girls on the other, women asking questions to the men and the same questions to the women.
They're by definition much smarter. That's pretty clear. And that's exactly where I was going. And the next questions to the women are going to be harder than before, as they should be. So let's start on a curve. Yes. But to be fair, it continues to get harder and harder as the panel continues. Let's start with Baroness Joanna. You've held UK government roles focused on Internet safety and harms, and you have helped build major child online safety coalitions internationally. From that experience, what is one key lesson from the UK Internet safety agenda that you believe is worth surfacing today? And maybe one area where you would say: we've tried this, please don't do this.
If I could convey one thing after 15 years of looking at how we regulate technology to prevent harm, it is that the post-harm paradigm we're operating in is not going to work in the AI future. We have had to adapt very quickly as governments as harms have emerged using AI. For instance, the deepfake crisis that we've experienced recently: I know of six, seven jurisdictions, you know, countries, that have implemented very quickly laws that are specific and targeted to that particular harm. But what we need to do is step back and think about how we build and design safety from the ground up.
And my personal view is that this has to come through consultation with the companies. I see a very different type of reaction from the AI model developers. They're much more receptive to the idea of safety by design and building in guardrails that protect children from the outset. And I'm actually an optimist at the moment, because I'm starting to see a lot of people doing a lot of the work that we're doing right now. Companies like OpenAI just recently announced that they have an age gate, age assurance technology, to ensure that children under age, whatever the jurisdiction's threshold is, I think it's 18, are not able to engage with the model and to experience, you know, that.
And I think that's really important because, you know, we've been battling this question of age on the Internet for 15 years. And now the technology, whether it's cryptography or biometrics, all kinds of technologies have emerged, to where you can verify age while preserving privacy. So there are no excuses anymore for companies not to build in robust age assurance that is privacy-preserving and that can ensure that the designed experience you get is appropriate for the age you are.
Thank you so much. So I love the point about social media. We talk a lot about social media these days, right? Rightfully so. But indeed, it's been a late awakening worldwide about the potential of that technology, but also about what happens to children in many ways, and we cannot make that same mistake with AI. It's just so much deeper and broader, and we need to look at this a lot more systematically. Maria, if I can come to you. Your work spans user modeling, personalization, as far as I understand it, and trustworthy AI, and you've also spoken publicly about disinformation risks. In your view, where do AI systems create the highest-risk failure modes for children specifically, and what kind of technical evaluation should be required before deployment?
on TikTok for 10 days in Germany, actually. And then we found out what happened. And maybe I can tell it in a second, in my second entry, because it was really shocking for us. Thank you so much.
So in essence, really having very clear, impact-focused research continuously, so that it can inform potential inquiry mechanisms and potential redress mechanisms, as a way to safeguard against those potential risks.
And how they are exposed to commercial content. And this is the most critical.
Thank you so much.
Even though we have the Digital Services Act in Europe.
Thank you. Let’s move to the third segment.
Thanks. Yeah, and I think that brings us really nicely to this question of what next, what do we do? I think we often agree on what needs to be done at the level of principles: safety, transparency, accountability. I think you've added another dimension to it when you talk about, in some sense, evaluations: that we need to be doing real-world evaluations in the real-world deployment contexts of these systems, not just testing these systems in a lab setting, but evaluating them in a real-world context, and regularly. I think the hard part, at least when we talk about principles like safety, transparency, and accountability, is how we operationalize them across jurisdictions and also across business models, which I think also speaks to the point you were making around it being a feature and not a bug.
So this segment is really about the how, what becomes enforceable, what becomes measurable, and what changes incentives. Tom, if I can start with you again. As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to ensure that these tools support children’s well -being at scale, particularly around diverse education systems and cultural contexts?
Thank you. Clearly this is a really exciting moment, and the potential of this moment in time is enormous, so I think everyone should be ambitious, but at the same time be measured. Go in ambitious with your design plans for bringing AI into classrooms, and see it as an opportunity to maybe make exponential gains in many different markets where you may have been very challenged before. I think there are tremendous opportunities for many markets in the global south right now, so see the introduction of AI and AI literacy as something of a reset. But don't jump in blindfolded. This is a once-in-a-lifetime opportunity to establish essential foundational skills for young people, and it's going to need really careful thought. These governance and design choices have got to be built on no-regret moves, so I would say put data privacy, data sovereignty, and inclusion and respect for the student at the top of any plan. When you teach about, I don't know, systemic bias in large language models in classrooms, make sure that kids of all types of diversity are represented and can see themselves coming back in the products that they're experiencing.
Children have a lot to say in this space, so involve them. We've published a free AI policy toolkit for classrooms. Have children think about what kind of things they think need to be considered here. It's going to be a really meaningful conversation between teacher and student. And talking of teachers, I think give them an exciting but also relevant curriculum. We have computer science qualifications in the UK. The entry levels for those are critically low, and very low for girls. We introduced that 10 years ago, we gave very insufficient training for teachers, and the curriculum is frankly very dry. I think we have to really think about real-world curriculum that is going to excite students, and let them see themselves, with real-world problems, in the types of learning experiences that we're putting out there.
I'm speaking on behalf of the LEGO Group. So, you know, children are our role models. I think when you're designing AI policies for children, this has to be child-centered and child-led. So just involve them in the plans as you roll them out, and I hope that will lead to some really exciting changes.
Thanks, Tom. Chris, earlier this year, OpenAI’s policy engagement has included calls for common -sense youth safety approaches and more parental control. So what, in your view, should be the baseline governance package for child -facing AI, and what should be globally interoperable versus what is locally set?
Sure. Thank you for the question. And let me just give two points, and then I'll answer that question specifically. First of all, and I think this is a really smart room, so I'm sure we're all thinking about it this way, but it is really important to understand and recognize that this is not social media, and we should not make the classic mistake of fighting the last war with the next war. There are certainly lessons that are important that you take from it, but understand that this is going to be a technology that is not just on your device, but is going to be around you in all sorts of different ways, physical world, non-physical world.
So understanding that component. I think secondly, there are interesting lessons from what we've seen on the catastrophic-harm side. You've seen the emergence of AI safety institutes around the world, where the leading frontier labs, for the most part, work with those safety institutes to basically create safety standards: the UK, US, Europe, Japan, Australia. You've seen an early version of that here in India, and I do wonder whether there's some version of that that you actually do specifically for kids' safety. The third point really goes to your question, which is, yes, we have put forth, and we're really the only AI company that has done this thus far, and we do hope others will join us, basically a multi-pronged approach.
The first, and the Baroness mentioned this, is we do do age assurance. We try to use signals to identify whether you're under 18 or not. If we identify you as under 18, or if we are unable to identify your age, we default you to an under-18 model. So even if we're not sure of your age, we do default you to an under-18 model, which has all sorts of restrictions around violence and sexual conversations and mental-health-type issues. Three, we build in a ton of parental controls. Parents can control whether it has memory or not about your child. Parents can get real-time feedback. Parents can control how long your child is spending on it.
You can get warnings and alerts if your child is asking about things that would be in the mental health space. Four, we prohibit any targeted advertising to kids using the technology. I think that's one clear lesson from the social media age. Fifth, we have called for an outside review process. In the U.S., that would be done by, say, a state attorney general, but someone who is part of government, to actually review that what you say you are doing, you in fact are doing. And then finally, we prohibit bots specifically targeted at kids. There may come a time and place when we actually have really good guardrails around this, and they can serve really helpful, positive, productive purposes.
But until we have those guardrails, we think we need to be really, really mindful of that. So it is a complete package. We are pushing this in California and a number of states, and we want to take it around the world. We're working with some of the leading children's advocacy organizations, and anyone here who would want to work with us on it, we really welcome that. And we don't pretend to have all the answers; we're super humble about this. We do think, based on what we've seen from our data, that this makes a lot of sense. It goes farther than what others have done. But we also know that this is going to be a constant learning process, and this is a beginning, not even the middle, and certainly not the end.
Sorry, just to ask a follow -up question on the bit around how you make this locally relevant. So you have this kind of package, you’re rolling it out in the U.S. How do you then cater it to different contexts?
You know, it's a great question. There are some parts of the world, Europe is an example of this, where there are some privacy limitations that actually impact your ability to do age assurance at the level you would like to do it at. So we're in the process, in some of these jurisdictions, of trying to work through some of those types of issues. I think there are other dynamics that potentially come into play, which may be what you're asking about: cultural context, societal context. And I think those are things that you do have to work through with individual countries, because individual countries are going to have their own norms on those.
And I think we’ll also see different levels of vulnerability or different types of vulnerabilities in those different contexts.
Fair enough. If I can bring you in. How should global norms for children’s safety handle cultural and regulatory diversity without creating, in some sense, loopholes that allow companies then to opt for the weakest protection?
So I wanted to take that question in two different directions. First of all, in terms of a global regulatory framework, there are certain standards that are required across every jurisdiction. I mean, every country has an age at which children can participate in the digital world. And unfortunately, it's a blunt instrument in many cases; it applies across the board at a certain age. We've been seeing a lot of social media bans recently, and I think that has come out of exasperation on the part of governments: they've given up trying to regulate this technology, and they've decided they're going to just use that blunt instrument as a guide. And unfortunately, there are benefits that the children then can't participate in.
But the reality is that there's a little bit of movement here. As age assurance technology grows and becomes much more capable, we can custom-design experiences for young people that accommodate their level of maturity and capability, and ensure that we can meet these requirements in a much more sophisticated and better way. It's about time we solved for age online once and for all, and I believe we're getting close to that. There's an organization called the Open Age Alliance, a very important organization that's looking to harmonize standards across all age assurance technology. So whatever age assurance technology your platform relies on as reliable, Open Age will enable you to generate an age key.
And then that age key travels with the child everywhere they go online. So we've got an absolutely verifiable way for companies to deliver an age-appropriate experience. And you asked me about something else that I think is really important in this context, about culture. If we have a world where we are accepting models from just the global north, I really believe we will lose so much of our cultural diversity, our uniqueness as people, wherever we come from, whatever our background is. We have to be very mindful that we don't want to develop a monoculture based on a handful of models that everybody uses around the world, where we lose that richness of who we are, what makes us human.
I think that wasn't really the aim of the question, but I couldn't let it go without bringing that to bear, because this is an absolutely critical question we need to solve as a society.
Thank you. I couldn't agree more on both those points: that we have to solve for age verification, and then the risk of flattening culture and what that means for children and for how they develop and grow. Maria, last but not least, you've helped elevate trustworthy AI as a public agenda in Slovakia and in Europe through initiatives spotlighting responsible practices. So if a regulator asked you for key measures or measurable indicators that an AI system is acting in a child's best interest, what would those be?
Actually, I already mentioned it somehow: AI at this moment is so complex, I mean the neural networks that we have, that we cannot actually measure something that we don't understand. We can observe it, though, and this is quite important: to do a lot of studies, as we do, and not just take the analytics from the companies that provide them, even though they seem the best. Because even though they tell us that children are not profiled, they are, because we see it, and sometimes it's out of the companies' control. We should really make such studies, as I mentioned, because, for example, one of the results of a study I mentioned before is that children see less formal advertisement on TikTok.
This is fine, but actually they are exposed five times more to profiling, to the topics with influencers and so on, which are not formal advertisements. So we definitely should do a lot of such studies. And the children should be there, because if we prohibit everything for them until some age, then they will not be able to explore it. It is the same as if we prohibited children from going to the city. But we should know what is going on, and we should travel with them through this environment. And this is probably the most important thing: doing such studies to really understand what is going on on the platforms where they are, because they will be there.
I think that's such a powerful analogy, the city one. And while you were speaking, what struck me is that we have some tools already; we don't have to approach this afresh. We have tools around data protection and privacy, and if we actually enforce them, some of that profiling that you're talking about need not happen. We have tools that allow us to get data from the platforms to actually understand what is happening on these systems. So again, we have things in our regulatory toolbox that we can exercise. And then, of course, in addition to that, there is this point around contextual evaluations that involve children, so that we can understand what these systems are actually doing. Thomas, maybe I can hand it back to you. Or did you want to add something?
Thomas is my formal name, so I thought you were talking to me.
Oh right, if you would like to add something, and then you can hand it on to the other Thomas. Would you want to add something, other Thomas?
A lady said something I thought was very wise to me this morning and said, you know, you’ve got to think about what kind of ancestor you want to be. And I guess we’re at this really interesting moment where we’ve had social media, we’ve had sugar, we’ve had tobacco. Surely now this is our chance to make some really sharp decisions and pay it forward for the next generation. So credit to the lady who said that to me this morning.
Thank you so much, Tom. It's going to be very hard to close, so maybe I'll just try to capture the points that I took from the panel, and hopefully they will resonate. I come away with a sense of, it's going to sound terribly UN, measured optimism. One, because the potential is tremendous. We are all aware of that. The potential, at least from a UNICEF lens, for really changing outcomes for children in ways we have never been able to do before is huge, and I think that's something that we can all be proud of. And the risks are equally, tremendously important, and will potentially be there for decades if we don't craft and design this right.
To my mind, there may be three directions I heard that suggest we are going in the right direction. One is that safety by design has to be a must. That's about age appropriateness. It's about data privacy. It's about child rights at the heart. It's about appropriate content for the right age. It's about systematic impact measurement. I was struck, Tom, in your session this morning, when you were talking about how, if we have a model that always gives children the right answer, or an answer, they might actually lose their sense of curiosity. And I never thought about it like this. What a huge loss it would be for humanity if we suddenly have children who are no longer curious, because they can just ask whatever question.
Can we design a model that actually gives the wrong answer on purpose, so that the child actually struggles, because we know that grit is going to be one of the huge skills of tomorrow? So those things are going to be massively important. Redress mechanisms: we don't talk enough about this, and how we enforce those redress mechanisms when things go wrong is also part of it. The second layer in my mind would be inclusion by default, coming back to the Baroness's point about the risk of a monoculture, and we know that some of that is already playing out. Hopefully having a summit in India is one of the turning points where we can see this turning around a little bit, where we really have so many more countries beyond the global north creating and shaping what those solutions are, having representation of regions, of languages, of different dialects, but also of children with disabilities, who are quite often, as we know, left out of those solutions.
And maybe one thing that we haven't really talked about is having solutions that work for the unconnected, having solutions that work offline. We are at risk of focusing just on urban-centered populations, and that would be terrible for those who are already struggling by the wayside. And last but not least is children at the heart. Children at the heart because that's who we want to create that world for, the ancestors we want to be for them, but also because Rahul demonstrated for us that they are the most effective users of this technology, the ones who have the ability to tell us: this works for me, this doesn't work for me. And they should not be divorced from, but part of, the governance of those mechanisms.
That starts with AI literacy in schools. It also starts with helping parents have the ability to help their children know where to get that literacy. And hopefully, if we get all of these right, we have a chance.
Thank you all for joining us. I just want to give the floor back to the MC.
Thank you so much to the panelists as well as the moderators and the audience. Also on behalf of Undersecretary General Amandeep Gill, United Nations Special Envoy for Digital and Emerging Technologies. who regrets missing the session as he is stranded with the Secretary General’s program. Even the United Nations motorcade cannot make it through Delhi traffic. But could we please welcome Rahul back up to the stage for a very brief reflection on the discussion?
I'll make sure it's brief. First of all, guys, can we have a big clap for them? That was not enough. If you don't realize, these are the main people who are designing the future for us kids. And the fact that I got an opportunity here to speak, thank you again, UN, for that. Thank you, AI Summit, for that. And whatever they said is very true. You know why? Because for us kids specifically, when these AI tools are being built, keeping kids in mind should be the first thought in the policies that are designed, not an afterthought, right? And the fact that that's happening is good, right? Because from LEGO to OpenAI to all these big places, to ma'am, everyone here, they're designing the next world.
And I just want to say a big, big, big thank you. And I also want to add one last thing. Thank you so much for always including me in between, but more than that, for listening to us kids; for not just thinking about what we need, but for keeping our opinions in mind while building this. So a big thank you from all the children out there. Thank you.
Excellencies and distinguished guests, thank you for your participation and engagement. We appreciate the insights shared today and look forward to continued discussion on the responsible advancement of AI. The session is now concluded. Thank you, audience. May I request the session officers to please come to the stage. May I request the audience to exit from the door behind us. Thank you.
Baroness Joanna Shields
Speech speed
149 words per minute
Speech length
1123 words
Speech time
449 seconds
Safety‑by‑design essential
Explanation
Baroness Shields argues that the post‑harm regulatory approach used for social media will not work for AI and that safety must be built into AI systems from the outset. She stresses stepping back to design safety from the ground up for child protection.
Evidence
“The post-harm paradigm that we’re operating in is not going to work in the AI future.” [1]. “But what we need to do is we need to step back and we need to think about how we build and design safety from the ground up.” [10].
Major discussion point
Child‑focused AI governance & safety‑by‑design
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Age‑assurance and privacy‑preserving guardrails
Explanation
She highlights the need for robust age‑verification mechanisms that protect privacy while ensuring children only access age‑appropriate AI. Such guardrails are presented as non‑negotiable for companies deploying AI to minors.
Evidence
“Companies like OpenAI just recently announced that they have an age gate, age assurance technology, to ensure that children under age, whatever the jurisdiction is, I think it’s 18, are not able to engage with the model…” [85]. “So there’s no excuses anymore for companies not to build in robust age assurance that’s privacy preserving and that can ensure that the design experience that you get is appropriate for the age you are.” [92].
Major discussion point
Technical and policy safeguards (age assurance, parental controls, content moderation)
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Preventing a global‑north monoculture
Explanation
Shields warns that relying on a handful of models from the Global North would erase cultural diversity and affect children’s identity formation. She calls for preserving a pluralistic AI ecosystem.
Evidence
“We have to be very mindful of the fact that we don’t want to develop a monoculture that is based on a handful of models that everybody uses around the world, and we lose that richness of who we are, what makes us human.” [123]. “And if we have a world where we are accepting models from just the global north, I really believe we will lose so much of our cultural diversity, our uniqueness as people…” [124].
Major discussion point
Cultural, contextual, and inclusion considerations
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Risks of simulated intimacy
Explanation
She points out that AI can create engineered intimacy that children may mistake for genuine human connection, leading to emotional dependency and manipulation. These harms require explicit guardrails.
Evidence
“Inadvertently, AI is engineering simulated intimacy and human‑like interaction at a scale that is not just a matter of how children learn, but how they learn.” [43]. “Emotional dependency, manipulation, deep fake abuse, and in some cases, devastating loss.” [133]. “They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available.” [134].
Major discussion point
Risks and harms specific to children
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Thomas Davin
Speech speed
175 words per minute
Speech length
1227 words
Speech time
419 seconds
Safety‑by‑design must be a must
Explanation
Davin stresses that safety by design is non‑negotiable for AI systems aimed at children, positioning it as a foundational requirement.
Evidence
“One is safety by design has to be a must.” [3].
Major discussion point
Child‑focused AI governance & safety‑by‑design
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Systematic impact measurement and accountability
Explanation
He calls for systematic measurement of AI impact and robust accountability frameworks to protect children from unintended harms.
Evidence
“It’s about systematic impact measurement.” [24]. “the potential of that, but also the potential for what happens to children in many ways, and we cannot make that same mistake with AI.” [47].
Major discussion point
Child‑focused AI governance & safety‑by‑design
Topics
Artificial intelligence | Monitoring and measurement
Over‑dependency erodes curiosity
Explanation
Davin warns that an over‑reliance on AI that always supplies the correct answer could diminish children’s curiosity and grit, undermining long‑term learning.
Evidence
“what happens if actually we go the other way and we suddenly have an over dependency on that technology for children…” [18]. “I was struck, Tom, … if we have a model that actually gives the right answer … they might actually lose their sense of curiosity.” [77].
Major discussion point
Risks and harms specific to children
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Inclusion by default
Explanation
He emphasizes that AI solutions must be inclusive by default, representing diverse languages, cultures, and children with disabilities to avoid a monoculture.
Evidence
“The second layer in my mind would be inclusion by default, coming back to Baroness’s point about having a monoculture…” [125].
Major discussion point
Cultural, contextual, and inclusion considerations
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Rahul John Aju
Speech speed
175 words per minute
Speech length
1914 words
Speech time
656 seconds
Natural intelligence before AI
Explanation
Aju argues that children should first rely on their own natural intelligence and foundational knowledge before turning to AI tools.
Evidence
“You should use the natural intelligence first.” [39]. “But you should know the basics and the foundations before you start using AI.” [44].
Major discussion point
AI literacy and education for children
Topics
Capacity development | Artificial intelligence
Fostering curiosity and critical thinking
Explanation
He stresses that curiosity is innate in every child and must be nurtured; losing it would be a huge loss for humanity.
Evidence
“Because curiosity is there in every child.” [71]. “What huge loss would that be for humanity if we suddenly have children who are just no more curious because they just ask whatever question?” [75].
Major discussion point
Cultivating curiosity, critical thinking, and agency
Topics
Capacity development | Human rights and the ethical dimensions of the information society
Rescue AI tool for contract risk
Explanation
Aju describes a practical AI tool, Rescue AI, that scans contracts for high‑risk clauses, illustrating how technology can support safety for children and families.
Evidence
“Also I created an AI software where you can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and it will, thank you and it will literally tell them what to do, if you should use the product or not, right?” [109]. “Anyway, so that tool was known as Rescue AI.” [110].
Major discussion point
Technical and policy safeguards (practical tools)
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Translate real‑world safety into digital world
Explanation
He calls for policies that embed real‑world safety considerations into AI design for children, ensuring safety is a primary design criterion.
Evidence
“We need to translate real world safety into the digital world.” [13]. “Because at this age, specifically us kids, the policies that are designed… should be the first thought of keeping kids in mind, not an afterthought, right?” [14].
Major discussion point
Child‑focused AI governance & safety‑by‑design
Topics
Artificial intelligence | Monitoring and measurement
Urvashi Aneja
Speech speed
169 words per minute
Speech length
1080 words
Speech time
382 seconds
Governance design choices for wellbeing
Explanation
Aneja asks what governance and design choices are needed to ensure AI tools support children’s wellbeing at scale, emphasizing principles of safety, transparency, and accountability.
Evidence
“As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to ensure that these tools support children’s well‑being at scale…” [8]. “I think we often agree on what needs to be done at the level of principles, safety, transparency, accountability.” [27].
Major discussion point
Child‑focused AI governance & safety‑by‑design
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Operationalising safety, transparency, accountability across jurisdictions
Explanation
She notes the difficulty of turning high‑level principles into actionable measures across different legal regimes and business models, and calls for real‑world evaluations.
Evidence
“And regularly, I think the hard part… things like safety, transparency, and accountability, is how we operationalize them across jurisdictions…” [30]. “I think you’ve added another dimension… we need to be doing kind of real‑world evaluations in real‑world deployment context of these systems…” [31].
Major discussion point
Child‑focused AI governance & safety‑by‑design
Topics
Artificial intelligence | Monitoring and measurement
Baseline governance package and measurable indicators
Explanation
Aneja raises the question of what should constitute a baseline governance package for child‑facing AI and which metrics regulators should use to assess child‑best‑interest compliance.
Evidence
“So what, in your view, should be the baseline governance package for child‑facing AI, and what should be globally interoperable versus what is locally set?” [103]. “So if a regulator asks you for key measures or measurable indicators that an AI system is acting in a child’s best interest, what would those be?” [118].
Major discussion point
Technical and policy safeguards (baseline package)
Topics
Artificial intelligence | Data governance
Cultural and regulatory diversity
Explanation
She highlights the need for global norms that respect cultural and regulatory diversity without creating loopholes, and asks how a U.S.‑centric package can be adapted to other contexts.
Evidence
“How should global norms for children’s safety handle cultural and regulatory diversity without creating, in some sense, loopholes…” [19]. “So you have this kind of package, you’re rolling it out in the U.S. How do you then cater it to different contexts?” [128].
Major discussion point
Cultural, contextual, and inclusion considerations
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Tom Hall
Speech speed
163 words per minute
Speech length
927 words
Speech time
340 seconds
AI literacy as empowerment
Explanation
Hall defines AI literacy as giving children the tools to interrogate the fundamentals of AI, likening it to handing them a screwdriver to take apart a complex box.
Evidence
“So for us, AI literacy is allowing children and empowering them to really kind of interrogate the fundamental basis of computer science and artificial intelligence.” [46]. “And what we think AI literacy is, is ultimately handing children a screwdriver and saying, here is a fairly complex box, but let’s take it apart and let’s understand what’s under the hood.” [51].
Major discussion point
AI literacy and education for children
Topics
Capacity development | Artificial intelligence
Inclusive, engaging curriculum and representation
Explanation
He stresses that AI curricula must be inclusive, representing diverse children and respecting data privacy, sovereignty, and student dignity, especially for the Global South.
Evidence
“make sure that all kids of all types of diversity and inclusions are represented and can see themselves coming back in the products that they’re experiencing… data privacy data sovereignty and inclusion and respect for the student at the top of any plan…” [53].
Major discussion point
Cultural, contextual, and inclusion considerations
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Chris Lehane
Speech speed
191 words per minute
Speech length
1316 words
Speech time
412 seconds
Age assurance, parental controls, no targeted ads
Explanation
Lehane outlines a multi‑pronged safety package: age assurance, default under‑18 models, extensive parental controls, and a ban on targeted advertising to children.
Evidence
“The first, and the Baroness mentioned this, is we do age assurance.” [91]. “So even if we’re not sure of your age, we do default you to an under 18 model…” [104]. “Four, we prohibit any targeted advertising of kids using the technology.” [137].
Major discussion point
Technical and policy safeguards (age assurance, parental controls, content moderation)
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Teaching agency
Explanation
He highlights agency as a socio‑economic skill that must be taught, noting that AI can reward those who take agency but only if they are educated to do so.
Evidence
“I do think one of the big issues … how do we actually teach people?” [82]. “Agency.” [83].
Major discussion point
Cultivating curiosity, critical thinking, and agency
Topics
Capacity development | Human rights and the ethical dimensions of the information society
Adapting standards to cultural contexts
Explanation
Lehane points out that individual countries have their own norms, so global AI standards must be flexible enough to accommodate local cultural and regulatory differences.
Evidence
“And I think those are things that you do have to work through with individual countries because individual countries are going to have their own norms on those.” [126].
Major discussion point
Cultural, contextual, and inclusion considerations
Topics
Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Maria Bielikova
Speech speed
129 words per minute
Speech length
310 words
Speech time
143 seconds
Need for studies on profiling and commercial exposure
Explanation
Bielikova stresses that independent studies are essential to uncover covert profiling and commercial content exposure among children, beyond company‑provided analytics.
Evidence
“We can observe it and this is quite important to do a lot of studies as we do and not just taking analytics from companies that provide it…” [119]. “And how they are exposed to commercial content.” [121].
Major discussion point
Risks and harms specific to children
Topics
Human rights and the ethical dimensions of the information society | Monitoring and measurement
Evidence of heightened profiling exposure
Explanation
She reports that children are exposed to profiling at rates many times higher than adults, underscoring the urgency of protective measures.
Evidence
“This is fine but actually they are exposed five times more … to profiling to the topics with influencers and so on.” [136].
Major discussion point
Risks and harms specific to children
Topics
Human rights and the ethical dimensions of the information society | Monitoring and measurement
Moderator
Speech speed
104 words per minute
Speech length
338 words
Speech time
193 seconds
Adult responsibility to guide AI engagement
Explanation
The moderator frames the discussion by stating that while children will inevitably engage with AI, the readiness of adults, institutions, and systems to guide that engagement responsibly is the real test.
Evidence
“The question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly.” [42]. “Too often, discussions about children and technology speak about children rather than with them.” [45].
Major discussion point
Child‑focused AI governance & safety‑by‑design (overall framing)
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Agreements
Agreement points
Children should be active participants in AI governance and policy design, not just subjects of adult decision-making
Speakers
– Rahul John Aju
– Tom Hall
– Thomas Davin
– Moderator
Arguments
Children must be included in governance mechanisms and policy design, not just as subjects but as participants
Child-centered and child-led design approaches are essential for AI policies affecting children
Children’s perspectives are essential for understanding what works and what doesn’t in AI systems
Discussions about children and technology should include children as participants rather than just talking about them
Summary
All speakers agreed that children should have meaningful participation in designing AI systems and policies that affect them, recognizing children as capable partners rather than passive recipients of adult decisions
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | Capacity development
AI education should focus on foundational skills and critical thinking before introducing AI tools
Speakers
– Rahul John Aju
– Tom Hall
Arguments
Children should learn foundational skills first before using AI tools, similar to learning basic math before using calculators
AI literacy means giving children tools to understand what’s ‘under the hood’ rather than treating AI as a magic box
Summary
Both speakers emphasized that children need to understand fundamentals and develop critical thinking skills before using AI assistance, comparing it to learning basic math before using calculators
Topics
Capacity development | Artificial intelligence | Social and economic development
Safety by design is essential and cannot rely on post-harm regulatory approaches
Speakers
– Baroness Joanna Shields
– Chris Lehane
Arguments
Post-harm regulatory models from social media are inadequate for AI; need safety by design from the outset
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Summary
Both speakers agreed that proactive safety measures must be built into AI systems from the beginning, rather than reacting to harm after it occurs, as was done with social media
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Age assurance and age-appropriate experiences are crucial for child safety in AI systems
Speakers
– Baroness Joanna Shields
– Chris Lehane
Arguments
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Summary
Both speakers emphasized the importance of robust age verification systems that can deliver age-appropriate AI experiences while preserving privacy
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance
Real-world evaluation and continuous monitoring of AI systems affecting children is necessary
Speakers
– Maria Bielikova
– Urvashi Aneja
– Thomas Davin
Arguments
Current AI systems are too complex to fully measure or understand, requiring observational studies
Contextual evaluations involving children are essential to understand real-world AI system impacts
Continuous impact measurement and redress mechanisms are necessary when AI systems fail
Summary
All three speakers agreed that independent, real-world studies and continuous monitoring are essential to understand how AI systems actually affect children in practice
Topics
Monitoring and measurement | Artificial intelligence | Building confidence and security in the use of ICTs
AI has tremendous potential to personalize and improve learning experiences for children
Speakers
– Chris Lehane
– Tom Hall
– Rahul John Aju
Arguments
AI can enable personalized learning experiences that adapt to different learning styles and paces
Need for exciting, relevant curriculum that connects to real-world problems rather than dry technical content
Children have demonstrated capability to learn and engage with AI when content is delivered appropriately
Summary
All speakers recognized AI’s potential to revolutionize education by providing personalized, engaging learning experiences that adapt to individual children’s needs and learning styles
Topics
Artificial intelligence | Social and economic development | Capacity development
Similar viewpoints
Both speakers expressed concern about AI systems potentially creating or reinforcing inequalities, whether through cultural homogenization or structural barriers to agency
Speakers
– Baroness Joanna Shields
– Urvashi Aneja
Arguments
Risk of cultural homogenization if AI models primarily reflect global north perspectives
Agency is not only about individual capacity but is shaped by broader socioeconomic and institutional contexts
Topics
Artificial intelligence | Social and economic development | Closing all digital divides
Both speakers advocated for balanced approaches that allow children to engage with AI while maintaining their natural curiosity and critical thinking skills
Speakers
– Maria Bielikova
– Thomas Davin
Arguments
Children should not be prohibited from exploring AI environments but need guidance and protection
Over-dependency on AI could reduce children’s curiosity and critical thinking abilities
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | Capacity development
Both speakers emphasized the importance of inclusive design that considers diverse populations and contexts, ensuring AI benefits reach all children regardless of their circumstances
Speakers
– Tom Hall
– Thomas Davin
Arguments
Data privacy, sovereignty, and inclusion must be foundational ‘no regret moves’ in AI implementation
Solutions must work for unconnected populations and offline contexts, not just urban centers
Topics
Closing all digital divides | Artificial intelligence | Social and economic development
Unexpected consensus
Industry willingness to implement proactive child safety measures
Speakers
– Baroness Joanna Shields
– Chris Lehane
Arguments
Post-harm regulatory models from social media are inadequate for AI; need safety by design from the outset
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Explanation
There was unexpected consensus between a government official and an industry representative about the need for proactive safety measures, with the industry representative actually advocating for stricter standards than typically expected from companies
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Human rights and the ethical dimensions of the information society
Children’s capability to engage meaningfully with AI governance
Speakers
– Rahul John Aju
– Tom Hall
– Thomas Davin
– Moderator
Arguments
Children must be included in governance mechanisms and policy design, not just as subjects but as participants
Child-centered and child-led design approaches are essential for AI policies affecting children
Children’s perspectives are essential for understanding what works and what doesn’t in AI systems
Discussions about children and technology should include children as participants rather than just talking about them
Explanation
There was remarkable consensus across all speakers, including industry, government, and academic representatives, about children’s capacity to meaningfully participate in AI governance – a view that challenges traditional paternalistic approaches to child protection
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | Capacity development
Overall assessment
Summary
The speakers demonstrated strong consensus on key principles: children should be active participants in AI governance, safety must be built into systems from the start, personalized learning has great potential, and real-world evaluation is essential. There was also agreement on the need for inclusive design and balanced approaches that protect children while preserving their agency and curiosity.
Consensus level
High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests a mature understanding of the challenges and opportunities. This alignment creates a strong foundation for collaborative action on AI governance for children, though implementation details may still require negotiation across different jurisdictions and contexts.
Differences
Different viewpoints
Approach to children’s access to AI systems – prohibition vs guided exploration
Speakers
– Baroness Joanna Shields
– Maria Bielikova
Arguments
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
Children should not be prohibited from exploring AI environments but need guidance and protection
Summary
Baroness Shields advocates for robust age verification and age-appropriate restrictions, while Maria Bielikova argues against complete prohibition, favoring guided exploration with supervision similar to allowing children to explore a city
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | Building confidence and security in the use of ICTs
Timing of AI introduction in education – foundations first vs early integration
Speakers
– Rahul John Aju
– Chris Lehane
Arguments
Children should learn foundational skills first before using AI tools, similar to learning basic math before using calculators
AI can enable personalized learning experiences that adapt to different learning styles and paces
Summary
Rahul advocates for learning natural intelligence and foundational skills before using AI, while Chris emphasizes AI’s immediate potential for individualized tutoring and learning enhancement
Topics
Artificial intelligence | Capacity development | Social and economic development
Source of evaluation data – company analytics vs independent research
Speakers
– Chris Lehane
– Maria Bielikova
Arguments
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Current AI systems are too complex to fully measure or understand, requiring observational studies
Summary
Chris presents OpenAI’s comprehensive safety package based on their data and approach, while Maria emphasizes the need for independent observational studies rather than relying on company-provided analytics
Topics
Artificial intelligence | Monitoring and measurement | Building confidence and security in the use of ICTs
Unexpected differences
Cultural homogenization vs global standards
Speakers
– Baroness Joanna Shields
Arguments
Global standards needed for certain protections while allowing cultural adaptation in implementation
Risk of cultural homogenization if AI models primarily reflect global north perspectives
Explanation
Unexpectedly, the same speaker (Baroness Shields) presents seemingly contradictory positions – advocating for global standards while simultaneously warning against cultural homogenization from global north dominance
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Closing all digital divides
AI as enhancement vs potential harm to natural development
Speakers
– Chris Lehane
– Thomas Davin
Arguments
AI has potential to be a leveling technology that scales individual ability to think, learn, and create
Over-dependency on AI could reduce children’s curiosity and critical thinking abilities
Explanation
Unexpected tension between viewing AI as an empowering, democratizing force versus concern that it might diminish essential human capabilities like curiosity and grit
Topics
Artificial intelligence | Capacity development | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The discussion revealed moderate disagreements primarily around implementation approaches rather than fundamental goals. Key tensions emerged between protective vs exploratory approaches to children’s AI access, timing of AI integration in education, and reliance on industry vs independent evaluation
Disagreement level
Moderate disagreement with constructive tension. Most speakers shared common goals of child safety and empowerment but differed on methods. The disagreements reflect healthy debate about balancing protection with agency, and suggest the field is still developing best practices rather than having fundamental philosophical divides
Partial agreements
Partial agreements
Both agree on the need for age verification and child protection measures, but differ on implementation approaches – Baroness focuses on industry-wide standards through Open Age Alliance, while Chris presents OpenAI’s specific proprietary solution
Speakers
– Baroness Joanna Shields
– Chris Lehane
Arguments
Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
Multi-pronged approach including age verification, parental controls, and prohibition of targeted advertising for children
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Both agree children should be central to AI policy design, but Tom emphasizes child-centered design in educational products while Rahul focuses on children as active participants in governance and policy-making processes
Speakers
– Tom Hall
– Rahul John Aju
Arguments
Child-centered and child-led design approaches are essential for AI policies affecting children
Children must be included in governance mechanisms and policy design, not just as subjects but as participants
Topics
Human rights and the ethical dimensions of the information society | Capacity development | Artificial intelligence
Both agree on the need for systematic evaluation and measurement, but Thomas emphasizes redress mechanisms for when systems fail while Maria focuses on ongoing observational studies to understand current impacts
Speakers
– Thomas Davin
– Maria Bielikova
Arguments
Continuous impact measurement and redress mechanisms are necessary when AI systems fail
Real-world evaluation studies are essential to understand actual platform impacts on children
Topics
Monitoring and measurement | Artificial intelligence | Building confidence and security in the use of ICTs
Takeaways
Key takeaways
Post-harm regulatory models from social media are inadequate for AI – safety must be designed from the outset rather than reactive
AI creates simulated intimacy that children cannot distinguish from authentic human connection, requiring specific age-appropriate protections
AI literacy should teach children to understand what’s ‘under the hood’ rather than treating AI as a magic box, with foundational skills learned before AI tool usage
Children must be active participants in AI governance and policy design, not just subjects of protection
AI has tremendous potential to individualize learning and act as a leveling technology that scales thinking and creativity abilities
Real-world evaluation studies are essential to understand actual AI system impacts on children, as current systems are too complex to fully measure in lab settings
Risk of cultural homogenization exists if AI models primarily reflect global north perspectives
Teachers need significant support as only 41% feel ready to teach AI literacy despite 80% recognizing it as foundational
Age assurance technology with privacy-preserving methods is now technically feasible and should be implemented universally
Resolutions and action items
OpenAI committed to continuing their multi-pronged child safety approach including age verification, parental controls, and prohibition of targeted advertising
Call for establishment of AI safety institutes specifically focused on children’s safety, similar to existing catastrophic harm safety institutes
Recommendation for adoption of Open Age Alliance standards to create portable age keys that travel with children across platforms
Need for continued real-world evaluation studies of AI platforms to understand actual impacts on children
Implementation of child-centered and child-led design approaches in AI policy development
Development of exciting, relevant AI literacy curriculum that connects to real-world problems rather than dry technical content
Unresolved issues
How to operationalize AI literacy pedagogy effectively in diverse educational contexts
Balancing global safety standards with local cultural adaptation without creating regulatory loopholes
Addressing privacy limitations in some jurisdictions (like Europe) that impact age assurance capabilities
Developing solutions that work for unconnected populations and offline contexts
Preventing over-dependency on AI that could reduce children’s curiosity and critical thinking abilities
Managing the risk of AI systems giving correct answers all the time, potentially reducing children’s sense of struggle and grit development
Ensuring representation of children with disabilities in AI design and governance
Addressing hidden profiling and commercial content exposure through AI systems
Suggested compromises
Defaulting users to under-18 models when age cannot be determined, erring on the side of protection
Allowing children to explore AI environments with guidance and protection rather than complete prohibition
Balancing individual agency development with necessary safety guardrails
Creating globally interoperable baseline protections while allowing local customization for cultural contexts
Implementing ‘no regret moves’ like data privacy and inclusion as foundational elements while allowing flexibility in other areas
Focusing on safety-by-design collaboration between companies and governments rather than purely regulatory approaches
Thought-provoking comments
When a model says to a child, I care, I understand, that’s not conscience, that’s code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available.
Speaker
Baroness Joanna Shields
Reason
This comment powerfully articulates the fundamental difference between AI and social media platforms – the intimate, one-to-one nature of AI interaction that can simulate human connection. It introduces the critical concept of ‘artificial intimacy’ and highlights children’s developmental vulnerability to this deception.
Impact
This framed the entire discussion around the unique risks AI poses compared to previous technologies. It established the stakes and influenced subsequent speakers to address the personalized, intimate nature of AI interactions rather than treating AI as just another digital platform.
Schools teach us what to think but I believe schools should teach us how to think. How to think critically and how to face failures, how to communicate… We should learn how to write essays. We should learn how to sing, maybe. Then you should use AI. You should use the natural intelligence first. Then start using artificial intelligence.
Speaker
Rahul John Aju
Reason
This insight from a young person directly challenges the current educational paradigm and offers a concrete framework for AI integration. The analogy to calculators – learning math basics before using calculators – provides a practical model for AI literacy that resonates across contexts.
Impact
This comment shifted the discussion from abstract concerns about AI risks to concrete pedagogical approaches. It influenced later speakers to focus on foundational skills and agency-building, with Tom Hall specifically referencing the need for ‘real-world curriculum’ and Chris Lehane emphasizing the importance of teaching agency.
This technology is an incredibly leveling technology, it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to be able to use it that way? Because if so, the way we think about the social contract relationship between capital and labor and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.
Speaker
Chris Lehane
Reason
This comment elevates the discussion beyond immediate safety concerns to fundamental questions about economic structures and power distribution. It reframes AI not just as a learning tool but as potentially transformative for social and economic relationships.
Impact
This broadened the conversation’s scope significantly, prompting Urvashi Aneja to note that ‘agency is not only a factor of individual capacity, but has so much to do with the broader socioeconomic institutional context.’ It shifted focus from technical solutions to systemic considerations.
We can observe it, and it is quite important to do a lot of studies, as we do, and not just take analytics from the companies that provide them, even though they seem the best. They tell us that children are not profiled, but they are, because we see it… Children see less formal advertisement on TikTok. This is fine, but they are actually exposed five times more to profiling through topics, influencers and so on.
Speaker
Maria Bielikova
Reason
This comment introduces crucial empirical evidence that challenges company claims about child protection. It reveals the gap between stated policies and actual outcomes, demonstrating how children can be profiled and influenced in ways that circumvent traditional advertising restrictions.
Impact
This evidence-based intervention shifted the discussion toward the need for independent evaluation and real-world testing rather than relying on company assurances. It influenced the final recommendations around systematic impact measurement and independent oversight.
It is the same as if we prohibited children from going to the city. But we should know what is going on, and we should travel with them through this environment.
Speaker
Maria Bielikova
Reason
This powerful analogy reframes the entire approach to child protection in AI environments. Instead of prohibition-based approaches, it suggests guided exploration and accompaniment, which is both more realistic and potentially more effective for building digital literacy.
Impact
This analogy influenced the panel’s conclusion toward ‘children at the heart’ approaches and the importance of AI literacy rather than blanket restrictions. It provided a memorable framework that several speakers referenced in their closing remarks.
If we have a model that actually gives the right answer or an answer to children all the time, they might actually lose their sense of curiosity… Can we design a model that actually gives the wrong answer on purpose so that the child actually struggles because we know that grit is going to be one of the huge skills of tomorrow.
Speaker
Thomas Davin
Reason
This paradoxical insight challenges the assumption that AI should always be helpful and accurate. It introduces the counterintuitive idea that struggle and uncertainty might be essential for child development, even if AI could eliminate them.
Impact
This comment crystallized concerns about over-dependence on AI and sparked reflection on what human capacities might be lost if AI becomes too seamless. It influenced the final emphasis on maintaining human agency and the importance of foundational skills before AI integration.
Overall assessment
These key comments transformed what could have been a technical discussion about AI safety into a profound examination of human development, social structures, and the future of learning. Baroness Shields’ opening established the unique intimacy risks of AI, while Rahul’s youth perspective provided practical wisdom about balancing natural and artificial intelligence. Chris Lehane’s economic framing elevated the stakes to societal transformation, while Maria Bielikova’s empirical evidence and city analogy grounded the discussion in research reality and practical approaches. Thomas Davin’s paradoxical insight about the value of struggle synthesized these themes into a nuanced understanding that effective AI governance for children requires not just safety measures, but careful consideration of what makes us human. Together, these comments created a rich, multi-layered conversation that moved beyond simple risk mitigation to explore fundamental questions about childhood, learning, agency, and human flourishing in an AI-enabled world.
Follow-up questions
How do you actually embed AI literacy into educational curricula and what specific pedagogical approaches work best?
Speaker
Urvashi Aneja
Explanation
This was identified as a critical gap: policy makers and educators struggle with the practical implementation of AI literacy programs, moving beyond principles to actual teaching methods
How can we design AI models that intentionally give wrong answers to preserve children’s curiosity and critical thinking skills?
Speaker
Thomas Davin
Explanation
This explores whether AI systems should be designed to encourage struggle and grit in children rather than always providing correct answers, which could diminish curiosity
How do we create AI solutions that work for unconnected populations and function offline?
Speaker
Thomas Davin
Explanation
This addresses the risk of AI solutions being urban-centered and excluding those who are already marginalized or lack consistent internet connectivity
How should cultural context and societal norms be incorporated into AI safety measures across different jurisdictions?
Speaker
Chris Lehane and Urvashi Aneja
Explanation
This explores the challenge of balancing global safety standards with local cultural values and regulatory frameworks
What are the specific technical evaluation methods needed for real-world deployment of AI systems with children?
Speaker
Maria Bielikova
Explanation
This addresses the need for contextual evaluations beyond laboratory testing to understand how AI systems actually behave when deployed with children
How can we prevent the development of a monoculture in AI that erases cultural diversity and uniqueness?
Speaker
Baroness Joanna Shields
Explanation
This explores the risk of global AI models flattening cultural differences and the need to preserve diverse perspectives and values in AI development
What governance mechanisms are needed to ensure children are included in AI system governance rather than just being subjects of it?
Speaker
Thomas Davin
Explanation
This addresses how to move beyond consulting children to actually including them in the governance and oversight of AI systems that affect them
How can privacy limitations in different jurisdictions be reconciled with effective age assurance technologies?
Speaker
Chris Lehane
Explanation
This explores the technical and legal challenges of implementing robust age verification while respecting privacy regulations that vary across regions
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.