Safeguarding Children with Responsible AI

20 Feb 2026 18:00h - 19:00h


Session at a glance
Summary, keypoints, and speakers overview

Summary

The summit opened with Baroness Joanna Shields warning that governing AI for children is the clearest test of responsible technology stewardship and that existing post-harm regulatory models are inadequate for AI's uniquely intimate, one-to-one interactions with children [1-13][16-22]. She emphasized that AI simulates intimacy at scale, affecting children's safety, mental health and identity formation, and called for age-appropriate default experiences and guardrails to protect dignity and development [14-21].


Rahul John Aju was introduced by the moderator as a young AI innovator, highlighting the need to speak with, not just about, children in technology debates [26-31]. Rahul argued that children's innate curiosity must be guided by critical thinking and that AI safety cannot rely on parents alone, noting the difficulty of distinguishing real from fake information in the AI era [58-63][66-69]. He illustrated the problem with examples of photo uploads and unread terms and conditions, and described his "Rescue AI" tool that flags risky contract clauses, underscoring the urgency of AI awareness [85-92]. Rahul advocated for foundational education in natural intelligence before AI, proposing personalized, multimodal learning tools such as Notebook LM and StudyFetch, and cited his free ThinkCraft Academy courses, which have reached over 700,000 learners [138-173].


In the panel, Tom Hall defined AI literacy as giving children a "screwdriver" to dissect technology, noting that while 80% of teachers are excited about AI in classrooms, only 41% feel prepared to teach it, and called for teacher tools and child-centered curricula [209-218][306-324]. Chris Lehane highlighted AI's potential to individualize tutoring and expand agency, warning that existing K-12 systems were built for the industrial age and must be re-engineered to empower learners rather than constrain them [232-248]. Urvashi Aneja stressed the importance of embedding AI literacy within policy and pedagogy, and raised concerns that cultural and socioeconomic contexts shape agency, especially in the Global South [220-253].


The Baroness reiterated that a post-harm regulatory paradigm will not work, advocating safety-by-design, age-assurance technologies, and the Open Age Alliance to create interoperable, privacy-preserving age verification across jurisdictions [266-281][377-395]. Maria Bielikova called for systematic, child-involved studies to detect profiling and other harms, arguing that existing data-protection tools can mitigate risks if properly enforced [402-410]. The panel reached consensus that safety by design, inclusion, offline accessibility, and placing children at the centre of governance are essential to avoid a monoculture and preserve curiosity [419-444]. Participants expressed measured optimism that, with coordinated standards and child-led evaluation, AI can enhance learning without eroding agency [421-426][438-440].


Rahul concluded by thanking the summit and urging that policies keep children’s needs at the forefront, reinforcing the collective commitment to responsibly shape the AI-enabled future for kids [456-462][464-466].


Keypoints


Major discussion points


Child-centric AI governance and safety-by-design – The panel repeatedly stressed that AI must be built with age-appropriate guardrails, robust age-verification and privacy protections from the outset, rather than relying on post-harm regulation. The Baroness highlighted the failure of “post-harm” models for AI and called for “safety from the ground up” and “age-assurance technology” that can be verified across platforms [266-276][278-281][389-395].


AI literacy and agency for children and educators – Multiple speakers argued that children need foundational knowledge and critical thinking skills before they can use AI responsibly. Tom Hall described AI literacy as giving children a “screwdriver” to understand the inner workings of AI and noted that many teachers feel unprepared [209-217][214-216]; Rahul emphasized teaching the basics of maths before relying on calculators and advocated a curriculum that teaches “how to think” rather than just “what to think” [106-118][155-158].


Risk mitigation and the potential harms of unchecked AI – Concerns were raised about emotional dependency, manipulation, profiling, loss of curiosity, and cultural homogenisation. The Baroness warned that simulated intimacy can affect mental health and identity formation [13-16][21]; Maria pointed out hidden profiling on platforms like TikTok despite low formal advertising [402-406]; Thomas warned that over-reliance on AI could blunt children’s curiosity and grit [425-437].


Global policy coordination and contextual adaptation – Participants called for a mix of universal standards (e.g., age-assurance, data-privacy) and locally-tailored rules that respect cultural and regulatory differences. Chris outlined a “multi-pronged” package (age assurance, parental controls, no targeted ads, external review) and noted the need to adapt it across jurisdictions [327-357][340-354]; the Baroness highlighted the Open Age Alliance as a mechanism for interoperable age-verification while cautioning against a monoculture of models [389-398].


Inclusion and equitable access – The discussion stressed that AI solutions must be inclusive of diverse languages, abilities, and offline contexts, especially for the Global South. Tom advocated “data privacy, data sovereignty and inclusion” and stressed involving children in design [306-319][322-324]; Urvashi and Thomas added that policies should address children with disabilities and those without reliable internet [440-442][250-253].


Overall purpose / goal of the discussion


The session aimed to shape a responsible, child-focused AI ecosystem by (1) identifying the regulatory and design gaps that could endanger children’s wellbeing, (2) outlining how AI literacy and agency can be cultivated in schools and homes, (3) proposing concrete governance tools (age-assurance, parental controls, global standards) and (4) ensuring that these solutions are inclusive, culturally sensitive, and equitable across different socioeconomic contexts.


Overall tone and its evolution


– The opening remarks by the Baroness set a serious, urgent tone, emphasizing risk and the need for proactive safeguards.


– Rahul’s contribution shifted the mood to energetic and informal, using humor and personal anecdotes while still stressing the importance of guidance.


– The moderated panel adopted a collaborative and analytical tone, balancing optimism about AI’s potential with caution about harms, and offering concrete policy ideas.


– As the conversation progressed, the tone became hopeful and solution-oriented, highlighting emerging tools (age-gate, Open Age Alliance) and the possibility of inclusive, globally coordinated standards.


– The closing remarks returned to a reflective and appreciative tone, thanking participants and reaffirming a collective commitment to responsible AI for children.


Speakers

Baroness Joanna Shields


– Area of expertise: Internet safety, child online protection, policy & regulation


– Role/Title: Baroness; former UK Government minister for Internet safety and harms; senior leader in international child-online-safety coalitions [S9]


Chris Lehane


– Area of expertise: AI policy, public affairs, child-focused AI safety


– Role/Title: Chief Global Affairs Officer, OpenAI [S1]


Tom Hall


– Area of expertise: AI-enabled education, legal frameworks for technology


– Role/Title: Vice President and General Manager, LEGO Education [S4]


Urvashi Aneja


– Area of expertise: Digital governance, AI ethics, child-centred technology policy


– Role/Title: Director, Digital Futures Lab [S6]


Maria Bielikova


– Area of expertise: Trustworthy AI, user modelling, personalization, disinformation risk


– Role/Title: Director, Kempelen Institute for Intelligent Technologies [transcript]

Thomas Davin


– Area of expertise: Innovation for children, UNICEF programmes, AI for development


– Role/Title: Director of the Office of Innovation, UNICEF [S10]


Moderator


– Area of expertise: Session facilitation, event moderation


– Role/Title: Moderator of the AI Impact Summit panel [transcript]

Rahul John Aju


– Area of expertise: Youth AI entrepreneurship, AI safety tools, AI education for children


– Role/Title: Young AI innovator; Founder, AIRM Technologies; Founder, ThinkCraft Academy; Speaker at the summit [transcript]


Full session report
Comprehensive analysis and detailed insights

Baroness Joanna Shields (former UK Minister for Internet Safety) opened the session warning that governing AI for children will be "the clearest test yet on whether we are governing this technology responsibly and for the public good" [1-3]. She argued that AI's rapid adoption is driven by extraordinary capabilities, but its continued place in society hinges on trust built through responsible design [2-4]. Rejecting the "post-harm regulatory model" used for social media as "not fit for purpose in the AI world", she noted that AI differs from a platform because it creates one-to-one adaptive interactions that become part of how children learn, communicate, create and form their sense of self [5-8]. Simulated intimacy can feel real to a child even though it is merely code [9-11], and children cannot reliably distinguish authentic human connection from artificial intimacy [12-13]. This blurring threatens safety, mental health, identity formation and long-term well-being, with observed harms such as emotional dependency, manipulation, deepfake abuse and devastating loss [14-16]. She concluded that children must not be "beta testers" for AI, calling for age-appropriate experiences by default and guardrails around systems that simulate intimacy [17-21], and closed by saying she was excited to join the panel [22-23].


Rahul John Aju (Founder, AIRM Technologies; Founder & Director, ThinkCraft Academy; advisor to public institutions) was introduced by the moderator as "the AI kid of India" and emphasised the need to speak with children rather than about them [26-29]. He recalled his father's advice to "question everything" and to be critical [45-47][50-54], and described how his parents taught him to separate correct from fake information when using Google [58-60]. He asked whether, in the AI age, children can still make this distinction, noting that even parents struggle to identify reliable information [61-64]. To illustrate digital opacity, he pointed to the habit of uploading photos to the cloud without knowing what happens to them [76-78] and the widespread neglect of lengthy terms and conditions [85-87].


He then presented Rescue AI, a tool he has been developing for three years that can ingest any contract or terms-and-conditions document, flag high-risk and low-risk clauses, and advise whether the product should be used [91-94][95-98]. He framed the tool as evidence that AI awareness and safety are essential, especially where no such safeguards exist [95-98].
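The session does not describe Rescue AI's internals, so the following is purely an illustrative sketch of the general shape such a clause-flagging tool could take, not Rahul's implementation. The keyword lists and function names are invented for the example; a real system would use a large language model or a trained classifier rather than this toy heuristic.

import re

# Hypothetical risk signals; a real tool would learn these, not hard-code them.
HIGH_RISK = ["sell your data", "third parties", "waive", "arbitration", "without notice"]
LOW_RISK = ["opt out", "delete your account", "right to access"]

def split_clauses(text: str) -> list[str]:
    # Split a contract into rough clauses on sentence boundaries.
    return [c.strip() for c in re.split(r"(?<=[.;])\s+", text) if c.strip()]

def flag_clause(clause: str) -> str:
    # Label a clause as high risk, low risk, or unclassified.
    lower = clause.lower()
    if any(k in lower for k in HIGH_RISK):
        return "HIGH RISK"
    if any(k in lower for k in LOW_RISK):
        return "LOW RISK"
    return "UNCLASSIFIED"

def review(text: str) -> None:
    # Flag every clause, then give an overall recommendation.
    flags = [(flag_clause(c), c) for c in split_clauses(text)]
    for label, clause in flags:
        print(f"[{label}] {clause}")
    n_high = sum(1 for label, _ in flags if label == "HIGH RISK")
    print("Verdict:", "proceed with caution" if n_high else "no obvious red flags")

review("We may sell your data to third parties without notice. "
       "You may opt out of marketing emails at any time.")

On this two-sentence sample the sketch flags the first clause as high risk and the second as low risk, then advises caution, mirroring the high-risk/low-risk/recommendation output Rahul describes.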


Building on foundational knowledge, Rahul argued that children should first master natural intelligence (basic maths, reading and critical thinking) before relying on AI tools [106-118]. He likened AI-enhanced learning to a calculator that becomes useful only after the basics are understood [101-108]. He advocated personalised, multimodal resources, citing Notebook LM's ability to generate videos and podcasts from textbook content [140-144] and StudyFetch's conversion of chapters into games [144-146]. Through ThinkCraft Academy, which operates under his company AIRM Technologies, his free courses on everything from AI basics to building and fine-tuning large language models in 30 days have reached over 700,000 learners combined [166-173][468-470].


Panel introduction: Thomas Davin (Director, Office of Innovation, UNICEF); Urvashi Aneja (Director, Digital Futures Lab); Maria Bielikova (Director, Kempelen Institute for Intelligent Technologies); Chris Lehane (Chief Global Affairs Officer, OpenAI); Tom Hall (Vice President and General Manager, LEGO Education) alongside Baroness Shields [198-206].


Tom Hall (Vice President and General Manager, LEGO Education) defined AI literacy as giving children a "screwdriver" to dissect technology, stressing that children must understand how computers see the world as data, how bias works and how accountability can be built [212-216]. He noted that while 80% of teachers are excited about AI in classrooms, only 41% feel prepared to teach it, highlighting a capacity gap that requires tools and real-world curricula [209-218]. Hall announced a free AI policy toolkit for classrooms and urged child-centred, inclusive design that respects data privacy and sovereignty [306-311][312-319]. He called for children's involvement in policy development and warned against one-size-fits-all solutions [322-324]. Importantly, he described "no-regret moves" as design choices that protect inclusion and privacy while allowing iterative improvement [306-311].


Urvashi Aneja (Director, Digital Futures Lab) asked how to embed AI literacy into policy, seeking ways to translate real-world safety practices into the digital AI environment for children [220-223]. She stressed that agency is shaped not only by individual capacity but also by socioeconomic and institutional contexts, especially in the Global South [250-254]. Aneja highlighted the need for child-involved real-world evaluations, redress mechanisms and clear, enforceable principles across jurisdictions [299-304][311-314].


Chris Lehane (Chief Global Affairs Officer, OpenAI) described AI's potential to provide personalised tutoring that adapts to each child's pace and learning style [232-236]. He warned that the current K-12 system, designed for the industrial age, limits agency and must be re-engineered to empower learners rather than constrain them [240-246]. Lehane positioned AI as a "leveling technology" that can expand agency, but only if education teaches children how to use it critically [247-249].


Thomas Davin (Director, Office of Innovation, UNICEF) warned of the danger of over-dependence: if AI always supplies the correct answer, children may lose curiosity and grit [425-437]. He suggested that models could deliberately provide occasional wrong answers to foster resilience, and called for systematic impact measurement [425-437]. He also shared a striking statistic: "7 out of 10 children in classrooms cannot explain a text they read at 10 years of age" [471-473]. During the discussion he noted the panel's deliberate gender arrangement (boys on one side, girls on the other) as a "beautiful" design choice [474-476].


Maria Bielikova (Director, Kempelen Institute for Intelligent Technologies) shifted focus to hidden commercial profiling, noting that while formal advertising on TikTok is low, children are exposed to influencer-driven commercial content five times more often, a risk that can only be uncovered through child-involved studies [402-406]. She used the metaphor of not prohibiting children from visiting the city but travelling with them to understand the environment, arguing that existing data-protection tools should be enforced [408-410]. She added that the problem persists "even though we have the Digital Services Act in Europe" [477-479].
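The session does not detail the methodology behind these findings. As a minimal illustration of the kind of child-involved exposure analysis Bielikova describes, the sketch below assumes researchers have hand-labelled each item observed in a child's feed; the labels and numbers are hypothetical, chosen only to reproduce the roughly five-to-one gap she cites.

from collections import Counter

# Each entry is one item seen in an observed child's feed, labelled by a
# researcher as organic content, a formal ad, or influencer-driven commercial
# content. These labels are invented for the example.
feed_labels = ["organic", "influencer_commercial", "organic", "formal_ad",
               "influencer_commercial", "influencer_commercial", "organic",
               "influencer_commercial", "organic", "influencer_commercial"]

counts = Counter(feed_labels)
ads = counts["formal_ad"]
influencer = counts["influencer_commercial"]

print(f"Formal ads: {ads} of {len(feed_labels)} items")
print(f"Influencer commercial content: {influencer} of {len(feed_labels)} items")
if ads:
    # The panel cited roughly a fivefold gap between the two categories.
    print(f"Exposure ratio: {influencer / ads:.1f}x more influencer content")

The point of the sketch is that the profiling risk only becomes visible when someone counts both categories; platform analytics that report only formal advertising would show the low first number and hide the larger second one.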


Across the discussion, all speakers endorsed proactive, safety-by-design governance with age-appropriate safeguards, rejecting post-harm models [1-3][266-269][327-354][306-311][312-319][322-324][299-304][442-444][419-426]; they agreed that AI literacy must begin with critical thinking and foundational skills before AI tools are introduced [41-54][106-118][212-216][155-158]; they stressed inclusion and cultural diversity, warning against a monoculture of AI models [395-398][250-254][306-311][370-374]; they supported a global, interoperable age-verification framework while allowing local adaptation [390-394][327-342][375-381]; and they highlighted the need for teachers to receive practical toolkits and training [209-218][306-311][299-304].


Points of disagreement emerged:


* The Baroness advocated a single, interoperable age key via the Open Age Alliance [390-394], whereas Lehane noted privacy-law limitations (e.g., in Europe) and cultural norms that require country-specific adaptations [370-374].


* Hall promoted rapid, broad AI integration with toolkits and inclusive curricula [306-311], while Lehane argued that integration alone is not enough: the education system itself must be re-engineered so that AI expands rather than erodes learners' agency [247-249][240-246].


* Bielikova identified covert commercial profiling as the highest technical risk [402-406], whereas Lehane focused on explicit harmful content (violence, sexual content, mental-health issues) and advocated age gates and parental controls [340-349].


* Davin favoured experimental designs such as intentionally wrong answers to preserve grit [425-437], while Bielikova called for continuous observational research involving children to understand platform effects [403-410].


Key take-aways (consolidated):


1. Post-harm regulation is insufficient; safety must be built into AI design from the outset.


2. Robust age-assurance, parental controls and external reviews are essential safeguards.


3. AI literacy should start with critical thinking and natural-intelligence foundations, with teachers equipped through toolkits and real-world curricula.


4. Personalised AI tutors can boost agency, but over-dependence risks eroding curiosity; intentional challenge-based design is needed.


5. A global age-verification standard (e.g., Open Age Alliance) is required, yet must be adaptable to cultural and regulatory contexts.


6. Continuous real-world impact research, especially on covert profiling, is needed.


7. Children must be directly involved in testing, redress mechanisms and policy design.


Concrete actions announced:


* Tom Hall announced that LEGO Education has published a free AI policy toolkit for classrooms [306-311].


* OpenAI committed to a multi-pronged safety package (age gates, default under-18 models, parental controls, advertising bans, external review) [340-349][357-360].


* The Baroness highlighted the Open Age Alliance’s work on interoperable age-keys [389-395].


* Tom Hall pledged to embed child-centred governance, data privacy and inclusion in LEGO’s AI education initiatives [306-324].


* Rahul showcased "Rescue AI", his tool for flagging risky clauses in contracts and terms and conditions [91-94].


* The panel agreed to pursue further collaboration on teacher training, real-world evaluations and inclusion of Global-South perspectives.


Unresolved issues include: embedding AI literacy effectively across diverse curricula; balancing a universal safety baseline with locally tailored cultural rules; ensuring emerging AI companies comply with child-safety standards; designing redress and accountability mechanisms; and preserving cultural and linguistic diversity while delivering age-appropriate content.


Proposed compromises: adopt “no-regret” design principles that prioritise privacy, inclusion and child-respect while allowing iterative improvement; implement robust yet privacy-preserving age-verification adaptable locally; combine AI assistance with intentional gaps or challenges to maintain curiosity; use a hybrid governance model that sets global baseline safeguards (age gates, advertising bans) complemented by region-specific cultural guidelines; and involve children throughout design, testing and policy-making.
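Neither the Open Age Alliance's age-key format nor any platform's age-assurance internals are specified in the session, so the sketch below is a conceptual illustration only. It assumes one plausible design: a signed assertion that carries just an age band (no identity data), combined with the fail-safe default to an under-18 experience that Lehane describes. All field names and the token format are hypothetical.

import base64, hashlib, hmac, json

SECRET = b"verifier-signing-key"  # held by a trusted age-assurance provider

def issue_age_key(age_band: str) -> str:
    # Sign a minimal assertion: only the age band, no name or birthdate.
    payload = json.dumps({"age_band": age_band}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def verify_age_key(token: str) -> str | None:
    # Return the age band if the signature checks out, else None.
    try:
        payload_b64, sig = token.split(".")
        payload = base64.b64decode(payload_b64)
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(sig, expected):
            return json.loads(payload)["age_band"]
    except (ValueError, KeyError):
        pass
    return None

def select_experience(token: str | None) -> str:
    # Fail safe: anything short of a verified 18+ key gets the minor default.
    band = verify_age_key(token) if token else None
    return "adult_model" if band == "18_plus" else "under_18_default"

print(select_experience(issue_age_key("18_plus")))  # adult_model
print(select_experience(issue_age_key("13_17")))    # under_18_default
print(select_experience(None))                      # under_18_default

The design choice worth noting is the direction of the default: an absent or unverifiable key never grants the adult experience, which matches the "default to an under-18 model" behaviour described in the transcript, and the token discloses an age band rather than a birthdate, which is one way to keep age assurance privacy-preserving.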


Thought-provoking remarks: the Baroness described AI as "engineered simulated intimacy at scale", reframing the conversation from platform risk to relational-psychological risk [6-9]; Rahul's question about distinguishing real from fake information highlighted everyday challenges for children [58-64]; his "Rescue AI" demo provided a tangible youth-led safety solution [91-94]; Hall championed "no-regret moves" that put data privacy, sovereignty and inclusion first [306-311]; Lehane linked AI to broader socioeconomic structures, arguing that the technology can recalibrate the relationship between capital and labour by giving individuals ownership of their own work [247-249][240-246]; Bielikova's city metaphor urged "travelling with children" rather than banning them, underscoring the need for contextual, child-involved research [408-410]; and Davin's "ancestor" comment framed the policy challenge as an ethical legacy [415-418].


Closing: Rahul thanked the United Nations and the summit organisers, reiterating that policies must keep children at the forefront and that young innovators should be heard as they help design the future [455-462]. The moderator thanked all participants, noted the collective commitment to responsible AI advancement for children, mentioned the traffic delay affecting Under-Secretary-General Amandeep Gill, and formally concluded the session [464-467][480-482].


Actionable recommendations


1. Adopt safety-by-design with robust age-assurance and parental-control mechanisms.


2. Deploy teacher-training toolkits and real-world curricula to build AI literacy.


3. Conduct child-involved real-world evaluations of AI impacts, especially covert profiling.


4. Implement a global interoperable age-key (e.g., Open Age Alliance) while allowing local cultural adaptation.


5. Ensure continuous monitoring, redress and accountability frameworks that involve children in policy design.


Session transcript
Complete transcript of the session
Baroness Joanna Shields

governance. How we manage AI on behalf of children will be the clearest test yet on whether we are governing this technology responsibly and for the public good. AI's rapid adoption has been driven by extraordinary capabilities, but its continued place in society will depend on trust, and trust is built through responsible design. The post-harm regulatory model that we've seen with social media, reacting after damage, is not fit for purpose in the AI world. AI is fundamentally different. It is not a platform. It is increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. Inadvertently, AI is engineering simulated intimacy and human-like interaction at a scale that is not just a matter of what children learn, but how they learn.

It is hard to imagine. When a model says to a child, "I care, I understand," that's not conscience, that's code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available. That difference has implications not only for safety, but for mental health, identity formation, and long-term well-being. We have already seen what happens when the line blurs: emotional dependency, manipulation, deepfake abuse, and in some cases, devastating loss. Children must not be the beta testers for our AI-enabled world. We need age-appropriate experiences by default, with guardrails around systems that simulate intimacy without accountability.

The question is not whether AI will continue to advance. Of course it will. The question is whether we shape it in a way that safeguards the dignity and the development of children. And accountability begins with protection. And I’m excited to join this distinguished panel to have this important conversation, even though it’s day five of the summit. Thank you very much. I’m going to have to move this back up. I’m sorry.

Moderator

Thank you so much, Baroness Joanna Shields, for setting the stakes so clearly. Too often, discussions about children and technology speak about children rather than with them. This session is intentional in doing otherwise. Therefore, I am very pleased to introduce Rahul John Aju, widely recognized as the AI kid of India. He is our featured young AI innovator who has built and deployed real-world AI tools, founded his own AI startup, and advised public institutions on using AI. Rahul, I'd like to invite you on stage.

Rahul John Aju

Thank you. Thank you, guys. Thank you so much for the lovely introduction. I know safety is a bit of a boring topic, but it's a very crucial topic. And I think if I stand there, no one is going to see me, so I'm using a hand mic. So hopefully everyone can see me. Yes? Can I get more energy? Hi, guys. Is this all you guys have? Hi. Perfect. So let's get started. Starting with, you know, when I was young, my father used to tell me… Okay, I'm still young. I'm still young. Younger, younger. That's what I bet. He still tells me: Rahul, question everything. Be critical about everything. The slide changer is not working. Okay, without the slide changer also it will work.

Okay. Be critical about everything. Ask questions. So I did. Why does the chair have four legs? Why is the sky blue? And also, why do birds fly? Why can't humans fly then? I bombarded him with a lot of questions. So he just took the phone and he's like, "Rahul, this is Google. Go search it." And so I did. But you know, while I was using Google, my parents also taught me one thing: how to figure out what is the correct information and what is fake information. And that helped me a lot. But in this age of AI, how do you expect me to do it? I don't think even parents can figure out what is the right information and what is fake information.

We all agree upon that? Yes? So how do we do that? Because curiosity is there in every child. I think I have enough curiosity. But it only becomes powerful if it's guided the right way. So how do we guide it the right way? Because right now we are just teaching kids how to talk to machines before we teach them how to question. Now I am just saying random quotes, but let's dive deeper and see why. I will give an example. Everyone remembers the Ghibli trend? Everyone did it? I did it too. Guilty. But it was very fun, to be honest. But what happened there was we were all just taking pictures, uploading our pictures to the cloud.

But we don’t even know what’s happening with it. We all agree, right? But right now kids are also doing the same thing, taking their pictures, uploading it to the cloud. But we tell children don’t be on social media, don’t upload your pictures to social media, don’t share your pictures to strangers and all, right? But what about the AI world? We are missing, the parallel is missing. We need to translate real world safety into the digital world. Because right now even most, okay, I have a question. How many of you guys read the 25 page terms and conditions? I don’t, right? You don’t know what’s happening behind the scenes. I don’t know what’s happening behind the scenes.

Like most of these pictures were taken and obviously used to make the model better for all of us, right? Right now a lot of companies are making sure children are safe, but we don't know about it. Are they safe? There are a lot of unknown AI companies as well. What do we do then? That's right. Also, I created an AI software where you can upload full terms and conditions or any contract, and it will tell you the high-risk clauses, the low-risk clauses, and it will, thank you, it will literally tell you what to do, whether you should use the product or not, right? So be careful. Anyway, so that tool was known as Rescue AI.

I’ve been working on it for the past three years for emergency, for law people, a lot of things. I don’t want to promote myself too much but I’m trying to do that. But what about when things like that are not there? What about if I didn’t do something like that? That is why AI awareness and safety is necessary. Obviously it is. That’s why you’re called here, Raul. But how do you do that education? Right? How do you teach about AI? You know, recently I got calculator in my school and I am so happy because I don’t have to do maths by multiplying, dividing manually. I can do it through calculators in my exam. By the way, I bunked my exam and came today.

Anyway, very happy for that. But you have to do all this calculation. But because I have a calculator, it's way easier. But I only got access to it once I learned the basics of maths, right? I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should, I don't know how to sing. Everyone will run away if I start singing. But you should know the basics and the foundations before you start using AI. I feel that's when you teach about AI. That's when you say, okay, AI can help you do the essay. AI can help you do the song. You should use the natural intelligence first.

Then start using artificial intelligence, I believe. It's about using the combination of both, right? Yes. How many of you guys use natural intelligence? Everyone does, right? I'm mostly reliant on artificial intelligence. I've got to switch. But that's what matters. But it's not just about that. It's also about how we teach and deliver topics. Starting with personalized content. You know, reading for me is kind of boring. I'm so sorry. But everyone learns differently. It might be through reading. It might be through listening. It might be through watching videos, which I prefer the most. That's how I learn most of the things that are happening. From geopolitics to cricket, which I love. All of these things I've learned because I watch the video.

I’m a more visual person. It’s not one size fits all. But sadly, I feel educationist. And I believe AI can generate content. Wait. It’s not believe. It’s already happening. You guys know about Notebook LM, right? It can generate videos. It can generate podcasts with one textbook content. That’s how I passed all my exams, to be honest. Even not just that, there is this tool StudyFetch where you can upload a chapter content and it will convert it into games. It’s not just about that. Everyone’s interest is different, right? Take a wild guess. What do you think my interests are? Wild guess. AI. AI, exactly. I am here to talk about AI guys. Cricket on the side but AI, right?

What if you connected E equals mc squared to that and learned it through AI? You can do that too in this AI world. How do you do that? See, right now schools teach us what to think. I am repeating that: schools teach us what to think, but I believe schools should teach us how to think. How to think, how to think critically, and how to face failures, how to communicate. These are basic things. Trust me, to stand here I had to face a lot of failures. But I learned how to do that because of my father. Trust me, I am giving you some credit. So, thank you. See, now he's recording the audience clapping for him.

Okay. So, that is what matters. And here's one proof of demand, okay? I started something under my company, AIRM Technologies: ThinkCraft Academy. Yes, a bit of promoting, but ThinkCraft Academy, where I taught everything from what AI is to building your own AI, LLMs, fine-tuning and all that, in 30 days, and more than 7 lakh people learned from that. And that course was completely free. And there was even another course going from what AI is to building your own AI as a startup founder, as a student. And that course was also completely free. But do you know how many people joined and learned from that? Again, 7 lakh people did, combined. That shows that people want to learn about AI.

It just has to be delivered the right way. The name of this course, I know everyone is searching right now. It's on my YouTube channel. I'm a content creator too: Rahul the Rockstar. Yes, you might be thinking, what does he not do, right? I'm joking. But a lot of things go on. See, I am not saying a lot of big things. I believe we all should be open-minded. We should be open to learning more things. We should be curious, because AI will not take your job. But someone using AI can. But at the same time, the most important thing in the world of AI is also to be as human as possible. My name is Rahul.

Thank you so much. Is it okay if I take a small video? Influencer. Thank you so much. I have to do this too, guys. So it's very simple. Like I said, I have to do this. Totally forced to. I am just going to say, "AI Impact Summit, how was the session?" And if you guys didn't like it, just say, "no, hated it." You guys can say that. Be fully honest. And also, right, I am totally joking. I am very grateful for this opportunity. You know, last November I was wanting to come here. I was like, register for this. And the fact that they called me to speak here, I am very grateful for this opportunity. And we have to thank them. Thank you. Shall we do it? "AI Impact Summit Delhi by UN." Okay, not by, okay, what's the worry, it's a part, right? Okay. This is how many times I have to record a normal video. Thank you so much, UN, for calling me, and AI Impact Summit. The audience, how was the session? Was it boring? Yes? Was it boring? You guys are agreeing it's boring? No. Thank you guys, thank you. I will not take too much time.

Moderator

Thank you, Rahul, for that very thoughtful and energizing address. Your perspective underscores a key message for today: the question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly. We will now turn to our panel discussion. The discussion will be guided by two co-moderators with deep expertise at the intersection of innovation, policy, and child well-being. I am pleased to introduce our moderators, Thomas Davin, Director of the Office of Innovation at UNICEF, as well as Urvashi Aneja, Director of the Digital Futures Lab, and I invite them to guide the discussion.

Thomas Davin

Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I'm one of the two co-moderators, and I'm delighted to invite four leaders in the industry who are going to have the high bar of keeping you all as entertained and on substance as Rahul just did. So please, a warm welcome to Baroness Joanna Shields. Please, Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies. I took the liberty of not reintroducing the Baroness because I think she was already known to you. Chris Lehane, welcome, Chief Global Affairs Officer for OpenAI. Tom Hall, welcome, Vice President and General Manager of LEGO Education. Over to you, Urvashi.

Urvashi Aneja

Thank you, and thank you to the UNICEF team, and thank you for that very energizing opening. Yeah, I hope we can live up to that level of dynamism. Oh yes, the Baroness wants to know if we can invest in your company. Okay, great. So on that very cheerful note, thank you all for being here, and I'm delighted to be able to moderate this discussion at the India AI Impact Summit. As someone who studies the governance choices that shape how technologies land in society, I'm interested in a very simple test: whether AI expands children's agency and learning, or whether it quietly narrows it through design incentives and design choices. So let's begin with what we want AI to enable for children, at scale and in practice. Tom, perhaps I can start with you first. LEGO Education has recently pushed into computer science and AI learning in young classrooms. So what does AI literacy that supports well-being look like in real classrooms, and what should we do if we want AI to deepen creativity rather than replace it?

Tom Hall

Well, first of all, thank you for having me, and very tough shoes to fill after Rahul's spot there. I agree with so much of what he just had to say, and yeah, I'd love him to come and guide some more conversations. Being at this conference, I think we can all see that the rate of technological advancement is breathtaking. And I think often, whether we're deeply involved in it or on the sidelines, there can be a feeling of incredible excitement. There can also be a feeling of, frankly, doom that this change is happening so fast. And I think that we kind of underestimate what the role of children is going to be in this journey. They might look at what's happening in the world of AI and simply see it as a magic box that they can interrogate at the click of a button, and ask simple questions and get really quite deep answers back.

It might be a funny video they want to produce. It might be the answer to a history exam that they have to submit on Monday morning. And what we think AI literacy is, is ultimately handing children a screwdriver and saying, here is a fairly complex box, but let's take it apart and let's understand what's under the hood. And let's understand all the components. So for us, AI literacy is allowing children and empowering them to really interrogate the fundamental basis of computer science and artificial intelligence. And that's teaching them how computers see the world as data, what sensing is, how to think about predictability, how to think about bias, and forcing conversations about accountability.

So we want to empower children to have deep thoughts about this. We also want to empower teachers. And I think right now, again, this pace is happening so fast. We asked some primary and middle school teachers in the United States what they thought about the pace of artificial intelligence in classrooms, and a very high number of them are very excited about what's happening. They agree that artificial intelligence literacy needs to be a foundational skill in school. But while 80% of them see that, only 41% of them feel remotely ready to go and teach AI literacy in a classroom. So I think we have to provide teachers with the tools that are going to allow them to bring real-world learning to life.

Urvashi Aneja

Thanks, Tom. And I would love, maybe at a later stage in the panel, to come back to you on the how, because we do a lot of work with policymakers, trying to do capacity support with policymakers, and we really struggle in terms of how you actually embed AI literacy, so I imagine it's the same with children, and I think that's a really good point. We really have to think about the pedagogy quite carefully to make sure that we are imparting that learning. So I'd love to come back to you on that. Chris, if I can bring you in next. OpenAI has emphasized that AI systems will increasingly support learning, creativity, and problem-solving for young people.

From your perspective, where do you see the most promising opportunities for AI to positively shape children's experiences, particularly in ways that strengthen agency, curiosity, or access to knowledge? And you're not allowed to say what Rahul already said.

Chris Lehane

I was just going to say, you got a great explanation of that. First of all, thanks for having me. Awesome panel. Baroness, always good to be with you. My son would be very jealous that I'm sitting next to the Lego guy. That's a pretty cool thing. So thank you. And I'll just also share, I may have to exit a little bit early, because I'm supposed to be at two places, double-scheduled, so if so, my apologies in advance. I'll try to answer your question at a macro level and then maybe a more specific level that I think picks up on the pedagogy question that you were just asking. First of all, this technology has enormous capabilities to basically individualize teaching, individualize learning.

I mean, you’re at a place where every kid in the world could, in effect, have their own AI tutor that would be able to help them to learn at the pace that they learn and in ways that they learn. I think amongst, you know, sort of insights in education is kids just learn in very different ways. And this technology could be incredibly liberating in terms of answering that. You mentioned the teachers. We do work with the largest teachers union in the United States, 400 ,000 teachers, to actually train them to develop the AI to, in fact, do some of that individualized teaching. But I think there’s maybe a level down from that, which I think you were sort of picking up on when you were setting up this question.

And that’s the agency question. I know the U .S. public education system better than I know others around the world, so part of what I’ll say is really based on my U .S. experience. But the U .S. K -12, I see the sign, yes, you’re telling me to shut up, K -12 public education system was designed for the industrial age. It was basically designed to take kids who were coming from rural environments and the urban environments and teach them to be able to work in factories. That was both the bells, different classrooms that you would go to, the time that the day started, how long the school day lasted. But sort of at its core was not just literacy in terms of teaching people to read, write, do arithmetic.

It was actually creating an ethos about how you should work and participate in an industrial age economy. I do think one of the big issues that we’re going to need to think about with this particular technology, which is going to really reward people like Rahul who take agency, is how do we actually teach people? Agency. This technology is an incredibly… leveling technology, it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to be able to use it that way? Because if so, the way we think about the social contract relationship between capital and labor and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.

Urvashi Aneja

Thanks, Chris. And I appreciate particularly the point around agency and how can we teach people agency. And I also wonder that sitting here in India, in the global south, one of the things that we can see very clearly is that agency in some sense is not only a factor of individual capacity, but has so much to do with the broader socioeconomic institutional context in which you are in. And so I wonder how we think about agency. Across different contexts. Back to you.

Thomas Davin

Thank you so much. let’s get into the next segment which is really about what happens when it fails, what happens when there’s harm that is being done from a UNICEF lens of course when we think of the education in the world today, 7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age 7 out of 10, so clearly the technology potential is immense in really realizing huge bounds in learning outcomes what happens if actually we go the other way and we suddenly have an over dependency on that technology for children when we maybe frame the children’s creativity in ways that we actually constrain it or make it one one fits all, so let’s go into that segment of risks and harms and what is the accountability frameworks and how do we protect against this, for those of you who are following carefully I would say that the organizers of the panel have done a beautiful work on gender, I don’t know if you noticed but it’s boys on one side, girls on the other women asking questions to the men same question to the women.

They’re by definition much smarter. That’s pretty clear. And that’s exactly where I was going. And the next question to the women are going to be harder than before, as they should be. So let’s start on a curve. Yes. But to be fair, it continues to be harder and harder as the panel continues. Let’s start with Baroness Joanna. You’ve held UK government roles focused on Internet safety and harms. And you have helped build major child online safety coalitions internationally. From that experience, what is one key lesson from the UK Internet safety agenda that you believe is worthwhile surfacing today? And maybe one area where you think, or you should say, we’ve tried this. Please don’t do this.

Baroness Joanna Shields

If I could convey one thing after 15 years of looking at how we regulate technology to prevent harm, it is that this post-harm paradigm that we're operating in is not going to work in the AI future. We have had to adapt very quickly as governments as harms have emerged using AI. For instance, the deepfake crisis that we've experienced recently: I know of six, seven jurisdictions, you know, countries, that have implemented very quickly laws that are specific and targeted to that particular harm. But what we need to do is step back and think about how we build and design safety from the ground up.

And my personal view is that this has to come through consultation with the companies. I see a very different type of reaction from the AI model developers. They're much more receptive to the idea of safety by design and building in guardrails that protect children from the outset. And I'm actually an optimist at the moment, because I'm starting to see a lot of people who are doing a lot of the work that we're doing right now. Companies like OpenAI just recently announced that they have an age gate, age assurance technology to ensure that children under age, you know, whatever the jurisdiction is, I think it's 18, okay, are not able to engage with the model and to experience, you know, that.

And I think that’s really important because, you know, we’ve been battling this age on the Internet for 15 years. And now the technology, whether it’s cryptography or biometrics, all kinds of technologies have emerged to where you can preserve privacy and ensure that you can protect privacy. So there’s no excuses anymore for companies not to build in robust age assurance that’s privacy preserving and that can ensure that the design experience that you get is appropriate for the age you are.

Thomas Davin

Thank you so much. I love the point about social media. We talk a lot about social media these days, right? Rightfully so. But indeed, it's been a late awakening worldwide about the potential of that technology, but also about what happens to children in many ways, and we cannot make that same mistake with AI. It's just so much deeper and broader, and we need to look at this a lot more systematically. Maria, if I can come to you. Your work spans user modeling, personalization, as far as I understand it, and trustworthy AI, and you've also spoken publicly about disinformation risks. In your view, where do AI systems create the highest-risk failure modes for children specifically, and what kind of technical evaluation should be required before deployment?

Maria Bielikova

on TikTok for 10 days in Germany, actually. And then we found out what happened. And maybe I can tell it in a second, in my second entry, because it was a real shock for us. Thank you so much.

Thomas Davin

So, in essence, really having very clear, impact-focused research continuously, so that it can inform potential query mechanisms and potential redress mechanisms as a way to safeguard against those potential risks.

Maria Bielikova

And how they are exposed to commercial content. And this is the most critical.

Thomas Davin

Thank you so much.

Maria Bielikova

Even though we have the Digital Services Act in Europe.

Thomas Davin

Thank you. Let’s move to the third segment.

Urvashi Aneja

Thanks. Yeah, and I think that brings us really nicely to this question of what next, what do we do? I think we often agree on what needs to be done at the level of principles: safety, transparency, accountability. I think you've added another dimension to it when you talk about, in some sense, evaluations, that we need to be doing real-world evaluations in real-world deployment contexts of these systems, not just testing these systems in a lab setting, but evaluating them in a real-world context, and regularly. I think the hard part, at least when we talk about principles like safety, transparency, and accountability, is how we operationalize them across jurisdictions and also across business models, which I think also speaks to the point you were making around it being a feature and not a bug.

So this segment is really about the how, what becomes enforceable, what becomes measurable, and what changes incentives. Tom, if I can start with you again. As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to ensure that these tools support children’s well -being at scale, particularly around diverse education systems and cultural contexts?

Tom Hall

Thank you. Clearly this is a really exciting moment, and, you know, the potential of this moment in time is enormous, so I think everyone should be ambitious, but at the same time be measured. Go in ambitious with your design plans for bringing AI into classrooms, and see it as an opportunity to maybe make exponential gains in many different markets where you may have been very challenged before. I think there are tremendous opportunities for many markets in the Global South right now, so see the introduction of AI and AI literacy as something of a reset. But, you know, don't jump in blindfolded. This is a once-in-a-lifetime opportunity to establish essential foundational skills for young people, and it's going to need really careful thought. These governance and design choices have got to be built on no-regret moves. So I would say put data privacy, data sovereignty, and inclusion and respect for the student at the top of any plan. When you teach about, I don't know, systemic bias in large language models in classrooms, make sure that kids of all types of diversity are represented and can see themselves reflected in the products that they're experiencing.

Children have a lot to say in this space, so involve them. We've published a free AI policy toolkit for classrooms. Have children think about what kinds of things they think need to be considered here. It's going to be a really meaningful conversation between teacher and student. And talking of teachers, I think give them exciting but also relevant curriculum. We have computer science qualifications in the UK; the entry levels for those are critically low, and very low for girls. We introduced that 10 years ago, we gave very insufficient training, and the curriculum is frankly very dry. I think we have to really think about real-world curriculum that is going to excite students, and let them see themselves working on real-world problems in the types of learning experiences that we're putting out there.

I’m speaking on behalf of the LEGO Group. So, you know, children are our role models. I think when you’re designing AI policies for children, this has to be sort of child -centered and child -led. And so just involve them in the plans as you roll them out. And I hope that will lead to some really exciting changes.

Urvashi Aneja

Thanks, Tom. Chris, earlier this year, OpenAI's policy engagement has included calls for common-sense youth safety approaches and more parental controls. So what, in your view, should be the baseline governance package for child-facing AI, and what should be globally interoperable versus what is locally set?

Chris Lehane

Sure. Thank you for the question. And let me just give two points, and then I'll answer that question specifically. First of all, and I think this is a really smart room, so I'm sure we're all thinking about it this way, but it is really important to understand and recognize that this is not social media, and we should not make the classic mistake of fighting the last war with the next war. There are certainly lessons that are important that you take from it, but understand that this is going to be a technology that is not just on your device, but is going to be around you in all sorts of different ways, physical world, non-physical world.

So understanding that component. I think secondly, there are interesting lessons from what we've seen on the catastrophic-harm side. You've seen the emergence of AI safety institutes around the world, where the leading frontier labs, for the most part, work with those safety institutes to basically create safety standards: the UK, US, Europe, Japan, Australia. You've seen an early version of that here in India, and I do wonder whether there's some version of that that you could actually do specifically for kids' safety. The third point really goes to your question, which is, yeah, we have put forth, and we're really the only AI company that has done this thus far, and we do hope others will join us, basically a multi-pronged approach.

The first, and the Baroness mentioned this, is we do do age assurance. We try to use signals to identify whether you're under 18 or not. If we identify you as under 18, or if we are unable to identify your age, we then default you to an under-18 model. So even if we're not sure of your age, we do default you to an under-18 model, which has all sorts of restrictions around violence and sexual conversations and mental-health types of issues. Three, we build it in with a ton of parental controls. Parents can control whether it has memory or not about your child. Parents can get real-time feedback. Parents can control how long you're spending on it. You can get warnings and alerts if your child is asking about stuff that would be in the mental-health types of space.

Four, we prohibit any targeted advertising to kids using the technology. I think that's one clear lesson from the social media age. Fifth, we have an outside review process that we've called for. In the U.S., that would be done by, like, a state attorney general, but someone who's a part of government, to actually review that what you're saying, you in fact are doing. And then finally, we prohibit bots targeting specific kids. There may come a time and place when we actually have really good guardrails around this, and they can serve really helpful, positive, productive purposes. But until we have those guardrails, we think we need to be really, really, really mindful of that.

So it is a complete package. We are pushing this in California and a number of states. We want to take it around the world. We’re working with some of the leading children’s advocacy organizations. And anyone here who would want to work with us on it, we really welcome that. And we don’t pretend to have all the answers. Like we’re super humble about this. We do think this is what we’ve seen from our data. This makes a lot of sense. It goes farther than what others have done. But we also know that this is going to be a constant learning process, and this is a beginning, not even the middle, and certainly not the end.

Urvashi Aneja

Sorry, just to ask a follow-up question on the bit around how you make this locally relevant. So you have this kind of package, you're rolling it out in the U.S. How do you then cater it to different contexts?

Chris Lehane

You know, it’s a great question. Like there are some parts of the world, you know, Europe is an example of this, where there are some privacy limitations that actually impact your ability to do the age assurance at the level that you would like to be able to do it at. So we’re in the process of some of these jurisdictions of trying to work through some of those types of issues. I think there’s other dynamics that potentially come into play, which may be what you’re asking about, you know, cultural context, societal context. And I think those are things that you do have to work through with individual countries because individual countries are going to have their own norms on those.

And I think we’ll also see different levels of vulnerability or different types of vulnerabilities in those different contexts.

Urvashi Aneja

Fair enough. Baroness, if I can bring you in: how should global norms for children's safety handle cultural and regulatory diversity without creating, in some sense, loopholes that allow companies to then opt for the weakest protection?

Baroness Joanna Shields

So I wanted to take that question in two different directions. First of all, in terms of a global regulatory framework, there are certain standards that are required across every jurisdiction. I mean, every country has an age at which children can participate in the digital world. And unfortunately, it's a blunt instrument in many cases; it applies across the board at a certain age. We've been seeing a lot of social media bans recently, and I think that has come out of exasperation on the part of governments: they've given up trying to regulate this technology, and they've decided they're going to just use that blunt instrument as a guide. And unfortunately, there are benefits, then, that children can't participate in.

But the reality is that there is a little bit of movement here. As age assurance technology grows and becomes much more capable, we can custom-design experiences for young people that accommodate their level of maturity and capability, and ensure that we meet these requirements in a much more sophisticated and better way. It’s about time we solved for age online once and for all, and I believe we’re getting close to that. There’s an organization called the Open Age Alliance, a very important organization that is looking to harmonize standards across age assurance technology. So whatever age assurance method your platform considers reliable, Open Age will enable you to generate an age key.

And then that age key travels with the child everywhere they go online. So we’ve got an absolutely verifiable way for companies to deliver an age-appropriate experience. And you asked me about something else that I think is really important in this context: culture. If we have a world where we accept models from just the Global North, I really believe we will lose so much of our cultural diversity, our uniqueness as people, wherever we come from, whatever our background is. We have to be very mindful that we don’t develop a monoculture based on a handful of models that everybody uses around the world, losing that richness of who we are, what makes us human.

I know that wasn’t really the aim of the question, but I couldn’t let it go without bringing it to bear, because this is an absolutely critical question we need to solve as a society.

Urvashi Aneja

Thank you. I couldn’t agree more on both those points: that we have to solve for age verification, and the risk of flattening culture, what that means for children and for how they develop and grow. Maria, last but not least, you’ve helped elevate trustworthy AI as a public agenda in Slovakia and in Europe through initiatives spotlighting responsible practices. So if a regulator asks you for key measures or measurable indicators that an AI system is acting in a child’s best interest, what would those be?

Maria Bielikova

Actually, I already touched on this: AI at this moment is so complex, meaning the neural networks we have, that we cannot actually measure something we do not understand. But we can observe it, and that is why it is so important to do a lot of studies, as we do, and not just take the analytics that companies provide, even though those seem the best. Because even when companies say that children are not profiled, they are; we see it, and sometimes it is out of the companies’ control. That is why we should really run such studies. For example, one result of a study I mentioned before is that children see fewer formal advertisements on TikTok.

That is fine, but they are actually exposed five times more to profiling, to topics pushed through influencers and so on, which are not formal advertisements. So we definitely should do a lot of such studies. And the children should be there, because if we prohibit everything for them until some age, they will never be able to explore it. It is the same as if we prohibited children from going to the city. We should know what is going on, and we should travel with them through this environment. That is probably the most important thing: doing such studies to really understand what is going on on the platforms where children are, because they will be there.

Urvashi Aneja

I think that’s such a powerful analogy, the city one. And while you were speaking, what struck me is that we already have some tools; we don’t have to approach this afresh. We have tools around data protection and privacy, and if we actually enforce them, some of the profiling you’re talking about need not happen. We have tools that allow us to get data from the platforms to understand what is happening on these systems. So we have things in our regulatory toolbox that we can exercise. And in addition to that, there is this point around contextual evaluations that involve children, so that we can understand what these systems are actually doing. Thomas, maybe I can hand it back to you, or did you want to add something?

Tom Hall

Thomas is my formal name, so I thought you were talking to me.

Urvashi Aneja

Oh, right. If you would like to add something, you can then hand it on to the other Thomas. Would you like to add anything?

Tom Hall

A lady said something to me this morning that I thought was very wise: you’ve got to think about what kind of ancestor you want to be. And I guess we’re at this really interesting moment. We’ve had social media, we’ve had sugar, we’ve had tobacco. Surely now this is our chance to make some really sharp decisions and pay it forward for the next generation. So credit to the lady who said that to me this morning.

Thomas Davin

Thank you so much, Tom. It’s going to be very hard to close, so maybe I’ll just share the points I took from the panel, and hopefully they will resonate. I come away with a sense of measured optimism, and I know that’s going to sound terribly UN. One, because the potential is tremendous; we are all aware of that. The potential, at least from a UNICEF lens, for really changing outcomes for children in ways we have never been able to before is huge, and that is something we can all be proud of. And the risks are equally tremendous, and potentially will be with us for decades if we don’t craft and design this right.

To my mind, there are three directions in which I heard we are going the right way. One is that safety by design has to be a must. That’s about age appropriateness. It’s about data privacy. It’s about child rights at the heart. It’s about appropriate content for the right age. It’s about systematic impact measurement. I was struck, Tom, in your session this morning when you were talking about how, if we have a model that gives children the right answer, or any answer, all the time, they might actually lose their sense of curiosity. I had never thought about it like that. What a huge loss it would be for humanity if we suddenly had children who are no longer curious because they can just ask whatever question they like.

Can we design a model that gives the wrong answer on purpose so that the child actually struggles, because we know that grit is going to be one of the huge skills of tomorrow? So those things are going to be massively important. Redress mechanisms are part of this too; we don’t talk about them enough, and how we enforce those redress mechanisms when things go wrong also matters. The second layer in my mind would be inclusion by default, coming back to the Baroness’s point about the risk of a monoculture; we know some of that is already playing out. Hopefully having a summit in India is one of the turning points where we see this start to turn around, where many more countries beyond the Global North are shaping what these solutions are, with representation of regions, of languages, of different dialects, but also of children with disabilities, who, as we know, are quite often left out.

And maybe one thing that we haven’t really talked about is having solutions that work for the unconnected, solutions that work offline. We are at risk of focusing only on urban, connected populations, and it would be terrible if we failed those who are already struggling by the wayside. And last but not least, children at the heart. Children at the heart because that is who we want to create this world for, as the ancestors we want to be for them, but also because, as Rahul demonstrated for us, they are the most effective users of these tools and the ones able to tell us “this works for me, this doesn’t work for me.” They should not be divorced from these mechanisms; they should be part of their governance.

That starts with AI literacy in schools. It also starts with helping parents gain the ability to guide their children to that literacy. And hopefully, if we get all of these right, we have a chance.

Urvashi Aneja

Thank you all for joining us. I just want to give the floor back to the MC.

Moderator

Thank you so much to the panelists, the moderators and the audience, also on behalf of Under-Secretary-General Amandeep Gill, United Nations Special Envoy for Digital and Emerging Technologies, who regrets missing the session as he is stranded with the Secretary-General’s program. Even the United Nations motorcade cannot make it through Delhi traffic. But could we please welcome Rahul back up to the stage for a very brief reflection on the discussion?

Rahul John Aju

I’ll make sure it’s brief. First of all, guys, can we have a big round of applause for them? That was not enough. If you don’t realize it, these are the main people designing the future for us kids. And the fact that I got an opportunity to speak here, thank you again, UN, for that. Thank you, AI Summit, for that. And whatever they said is very true. You know why? Because when these AI tools are being built, keeping us kids in mind should be the first thought behind the policies, not an afterthought, right? And the fact that that’s happening is good, right? Because from Lego to OpenAI to all these big places, to ma’am, everyone here, they’re designing the next world.

And I just want to say a big, big, big thank you, and I want to add one last thing. Thank you so much for talking about me in between, but more than that, for listening to us kids; for not just assuming what we need, but putting our opinions in mind while building this. So a big thank you from all the children out there. Thank you.

Moderator

Excellencies and distinguished guests, thank you for your participation and engagement. We appreciate the insights shared today and look forward to continued discussion on the responsible advancement of AI. The session is now concluded. Thank you, audience. May I request the session officers to please come to the stage, and may I request the audience to exit from the door behind us. Thank you.

Related Resources
Knowledge base sources related to the discussion topics (43)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Baroness Joanna Shields warned that governing AI for children will be “the clearest test yet on whether we are governing this technology responsibly and for the public good”.”

The knowledge-base entry titled “Safeguarding Children with Responsible AI: Baroness Joanna Shields” records her framing of AI governance for children as a key test of responsible and public-good regulation, confirming this statement [S1].

Confirmed (high)

“Simulated intimacy can feel real to a child even though it is merely code.”

S3 explicitly notes that a model’s messages are “code” but “for a child, it can feel very real”, directly supporting the claim.

Confirmed (high)

“Children cannot reliably distinguish authentic human connection from artificial intimacy.”

S3 also states that “children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy,” confirming the claim.

Confirmed (high)

“The blurring threatens safety, mental health, identity formation and long‑term well‑being, with observed harms such as emotional dependency, manipulation, deep‑fake abuse and devastating loss.”

S30 documents deep‑fake abuse as a concrete harm to vulnerable groups, and S35 discusses hidden psychological risks and “AI psychosis”, providing additional context for emotional dependency and mental‑health impacts.

Additional Context (medium)

“Children should first master natural intelligence—basic maths, reading and critical thinking—before relying on AI tools.”

S113 highlights that young people are exposed to sophisticated AI‑generated content and need proper education and verification tools to distinguish real from artificial, reinforcing the argument for foundational learning before AI reliance.

External Sources (119)
S1
Safeguarding Children with Responsible AI — -Chris Lehane- Chief Global Affairs Officer for OpenAI
S2
OpenAI’s push to establish AI as critical infrastructure — In a recent interview, Chris Lehane, the newly appointed vice president of public works at OpenAI, underscores AI’s role …
S3
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co-moderators, and…
S4
Safeguarding Children with Responsible AI — -Tom Hall- Vice President and General Manager at Lego Education (works with the National Legal Foundation)
S5
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co-moderators, and…
S6
Safeguarding Children with Responsible AI — – Baroness Joanna Shields- Urvashi Aneja
S7
Towards a Safer South Launching the Global South AI Safety Research Network — – Dr. Urvashi Aneja- Ambassador Philip Thigo
S8
Safeguarding Children with Responsible AI — Bielikova offered a memorable analogy: “It is the same as we will prohibit children to go to the city. But we should kno…
S9
Safeguarding Children with Responsible AI — -Baroness Joanna Shields- Former UK government roles focused on Internet safety and harms, helped build major child onli…
S10
High Level Session 4: Securing Child Safety in the Age of the Algorithms — – **Thomas Davin** – Director of Innovation, UNICEF Shivanee Thapa: Thank you. I’ll come back to Minister Bah shortly a…
S11
High Level Session 2: Digital Public Goods and Global Digital Cooperation — – **Thomas Davin** – Global Director for UNICEF Innovation, Session moderator Karianne Tung, Veronica M. Nduva, Nandan…
S12
Children safety online in 2025: Global leaders demand stronger rules — At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates…
S13
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S14
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S15
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S16
Safeguarding Children with Responsible AI — – Rahul John Aju- Chris Lehane
S17
Intelligent Society Governance Based on Experimentalism | IGF 2023 Open Forum #30 — She highlighted the need for AI systems to be inclusive of diverse voices and ensure that they respond to the needs and …
S18
AI governance debated at IGF 2025: Global cooperation meets local needs — At the Internet Governance Forum (IGF) 2025 in Norway, an expert panel convened to examine the growing complexity of arti…
S19
Child safety online – update on legal regulatory trends combatting child sexual abuse online — Jaap-Henk Hoepman:Yeah, so like I said, so I recently was talking to somebody who was doing research on the false positi…
S20
Data Protection for Next Generation: Putting Children First | IGF 2023 WS #62 — Age verification should not be the default solution for protecting minors’ data. Other non-technical alternatives should…
S21
Cultural diversity — While AI can help preserve cultural diversity, it is crucial to shed light on the problem of cultural homogeneity when d…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — “One where AI is bringing about our humanity and our differences, not homogenizing us.”[10]. “AI needs to respect and ac…
S23
Balancing innovation and oversight: AI’s future requires shared governance — At IGF 2024, day two in Riyadh, policymakers, tech experts, and corporate leaders discussed one of the most pressing dil…
S24
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Furthermore, Francesca advocates robustly for targeted regulation in the AI field. She firmly asserts that any necessary…
S25
What is it about AI that we need to regulate? — AI systems’ tendency to perpetuate and amplify existing biases was identified as requiring immediate regulatory attentio…
S26
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But this adaptation won’t happen without effort. It requires educators willing to experiment with new approaches even wh…
S27
Responsible AI for Children Safe Playful and Empowering Learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S28
Education meets AI — Additionally, the speakers emphasized the need for personalized learning and adaptive teaching methods. They discussed t…
S29
WS #179 Navigating Online Safety for Children and Youth — 1. Global Standards vs Local Adaptation: Keith Andere highlighted the need to adapt global standards to local contexts a…
S30
Parliamentary Session 3 Click with Care Protecting Vulnerable Groups Online — Rather than applying universal Western standards, different regions should be able to establish standards that align wit…
S31
High-Level Session 5: Protecting Children’s Rights in the Digital World — Need for child-appropriate verification, age assurance protocols, and standards for platforms
S32
WS #123 Responsible AI in Security Governance Risks and Innovation — This comment elevated the technical discussion to a more sophisticated understanding of systemic governance challenges. …
S33
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — Tayma argues that educators need to adapt their teaching goals in the AI era. She suggests focusing on developing critic…
S34
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S35
Hidden psychological risks and AI psychosis in human-AI relationships — For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of t…
S36
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Generative AI and large language models have the potential to significantly enhance conversational systems. These system…
S37
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the l ates…
S38
DC-IoT & DC-CRIDE: Age aware IoT – Better IoT — Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a s…
S39
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S40
Global AI Policy Framework: International Cooperation and Historical Perspectives — Bali contends that fundamental concepts like privacy vary significantly across cultures, and that Global South countries…
S41
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S42
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Algorithmic systems indirectly impact children by determining health benefits or loan approvals for their parents By de…
S43
Safeguarding Children with Responsible AI — Artificial intelligence | Monitoring and measurement
S44
WS #172 Regulating AI and Emerging Risks for Children’s Rights — 3. Regulatory Landscape and Challenges 5. Role of Education and Awareness Ansgar Koene: terms of actually making this…
S45
Harnessing AI for Child Protection | IGF 2023 — Monitoring digital content is seen as intrusive and infringing on privacy, while not monitoring absolves platforms of ac…
S46
Governments vs ChatGPT: Investigations around the world — But a more challenging request is for the company to introduce measures for identifying accounts used by children by 30 …
S47
Data Protection for Next Generation: Putting Children First | IGF 2023 WS #62 — Age verification should not be the default solution for protecting minors’ data. Other non-technical alternatives should…
S48
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — Moderator – Massimo Marioni:Now you’re all senior leaders within your companies What do you think are the most important…
S49
Open Forum #26 High-level review of AI governance from Inter-governmental P — 4. Youth: Should be involved in policy-making and allowed to innovate while addressing potential risks. Leydon Shantsek…
S50
Policy Network on Artificial Intelligence | IGF 2023 — Furthermore, one speaker raises the question of whether the world being created aligns with the aspirations for future g…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — He emphasised the need for policy that balances principle-level guidance with practical guardrails whilst avoiding overl…
S52
Debate over AI regulation intensifies amidst innovation and safety concerns — In recent years, debates over AI have intensified, oscillating between catastrophic warnings and optimistic visions. Tech…
S53
Building Indias Digital and Industrial Future with AI — These key comments fundamentally elevated the discussion from surface-level policy rhetoric to deep, nuanced analysis of…
S54
What is it about AI that we need to regulate? — Interestingly, some speakers noted that clear regulatory guidance can actually accelerate innovation. Eltjo Poort inWS #…
S55
Open Forum #17 AI Regulation Insights From Parliaments — Cybersecurity | Violent extremism | Children rights Research shows that children are being recruited for extremism and …
S56
Cultural diversity — While AI can help preserve cultural diversity, it is crucial to shed light on the problem of cultural homogeneity when d…
S57
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Wonderful. I think this is so important to be considered by the policymakers. In fact, this multi-stakeholder …
S58
Responsible AI for Children Safe Playful and Empowering Learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S59
Generative AI in Education — Personal insights, including those from a mother’s perspective, touched on the challenges of steering children towards b…
S60
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — UNESCO is providing policy guidance on AI in education, focusing on frameworks that emphasize ethical use of AI in educa…
S61
TIMELINE — This strategy will integrate artificial intelligence technologies into the field of education through projects aimed at …
S62
JANUARY 14 TH , 2019 — Digital Inclusion and Education for all is an essential component of AI development. More extensive knowledge a…
S63
High-Level Session 5: Protecting Children’s Rights in the Digital World — Need for child-appropriate verification, age assurance protocols, and standards for platforms
S64
Safeguarding Children with Responsible AI — Age assurance technology must be implemented with privacy-preserving methods to ensure age-appropriate experiences
S65
High Level Session 4: Securing Child Safety in the Age of the Algorithms — – Karianne Tung- Christine Grahn- Emily Yu Barrington-Leach advocates for a fundamental shift in platform design where …
S66
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — One of the principles focuses on robustness, security, and safety.
S67
Responsible AI for Children Safe Playful and Empowering Learning — This comment established the philosophical foundation for the entire discussion, shifting focus from AI as a consumption…
S68
Education meets AI — In addition to the above topics, the significance of critical information and critical thinking in education was also di…
S69
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S70
Hidden psychological risks and AI psychosis in human-AI relationships — For years, stories and movies have imagined humans interacting with intelligent machines, envisioning a coexistence of t…
S71
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Generative AI and large language models have the potential to significantly enhance conversational systems. These system…
S72
Comprehensive Discussion Report: AI’s Existential Challenge to Human Identity and Society — Harari identifies the potential for AI to become children’s primary interaction partner from birth as the most dangerous…
S73
Most US teens use AI companion bots despite risks — A new national survey shows that roughly 72% of American teenagers, aged 13 to 17, have tried AI companion apps such as Re…
S74
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Collaborative Efforts: The Global Age Assurance Standards Summit and the International Age Assurance Working Group are p…
S75
Child safety online – update on legal regulatory trends combatting child sexual abuse online — Jaap-Henk Hoepman:Yeah, so like I said, so I recently was talking to somebody who was doing research on the false positi…
S76
DC-IoT & DC-CRIDE: Age aware IoT – Better IoT — Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a s…
S77
Global Youth Summit: Too Young to Scroll? Age verification and social media regulation — All stakeholders, including government, industry, and civil society representatives, acknowledge that there are no perfe…
S78
WS #270 Understanding digital exclusion in AI era — Speaker 4: Yeah, so I think that this is a very important question. I think first, we need to be very inclusive or in …
S79
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Audience:Thank you. Thank you so much. I represent you from Chinese mission. We appreciate Her Excellency, Ambassador Es…
S80
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S81
Discussion Report: AI Implementation and Global Accessibility — -Development: Using diverse datasets that include perspectives from the global south, both sexes, and people with disabi…
S82
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S83
Protection of Subsea Communication Cables — The discussion maintained a consistently serious and urgent tone throughout, reflecting the critical nature of the infra…
S84
Women, peace and security — The overall tone was one of concern and urgency. Many speakers expressed alarm at negative trends and backsliding on wom…
S85
Opening plenary session and adoption of the agenda — Emphasis is placed on the need to protect critical infrastructure and to increase confidence-building measures among nat…
S86
WS #70 Combating Sexual Deepfakes Safeguarding Teens Globally — The discussion maintained a serious, urgent, and collaborative tone throughout. Speakers demonstrated deep concern about…
S87
How Humans Sense / Davos 2025 — The overall tone was enthusiastic and engaging, with the speaker using humor, personal anecdotes, and even a tattoo demo…
S88
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — The tone is academic and informative, with Kurbalija speaking as an expert educator sharing insights from decades of exp…
S89
https://dig.watch/event/india-ai-impact-summit-2026/the-global-power-shift-indias-rise-in-ai-semiconductors — Absolutely. So we are all lucky to be here at this age of AI. We are truly lucky to be in this. No, that was very insigh…
S90
Sauna diplomacy and the quiet power of informality — When transplanted into statecraft, these qualities had radical consequences. At the presidential residence in Tamminiemi…
S91
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — The tone is consistently enthusiastic, patriotic, and inspirational throughout. Sharma maintains an optimistic and confi…
S92
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — The discussion maintained a thoughtful but somewhat cautious tone throughout, with speakers acknowledging both opportuni…
S93
Scaling AI for Billions_ Building Digital Public Infrastructure — The discussion maintained a balanced but cautionary tone throughout. While panelists acknowledged the tremendous opportu…
S94
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — The conversation maintained a cautiously optimistic tone throughout, characterized by intellectual rigor and practical r…
S95
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S96
Driving Enterprise Impact Through Scalable AI Adoption — The tone was thoughtful and exploratory rather than alarmist, with participants acknowledging both the transformative po…
S97
High-Level Session 2: Transforming Health: Integrating Innovation and Digital Solutions for Global Well-being — The tone of the discussion was largely optimistic and forward-looking. Panelists acknowledged challenges but focused on …
S98
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — The tone of the discussion was largely optimistic and solution-oriented. Speakers highlighted positive examples of how t…
S99
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S100
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S101
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S102
Closing remarks — This comment is powerful because it creates a generational identity and responsibility. The repetition emphasizes urgenc…
S103
Open Mic & Closing Ceremony — The overall tone of the session was appreciative, with a sense of accomplishment expressed by participants. As Mary Udum…
S104
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The tone was professional, collaborative, and pragmatically optimistic throughout. Speakers maintained a solution-orient…
S105
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S106
Protecting children online with emerging technologies | IGF 2023 Open Forum #15 — Xianliang Ren:Ladies and gentlemen, I am pleased to attend the UN Internet Governance Forum in 2023, which is a forum on…
S107
UK security minister raises alarm on potential misuse of AI technology — Tom Tugendhat, the UK’s Minister of State for Security, has warned of the dangers posed by the malicious use of AI techno…
S108
Building a Digital Society, from Vision to Implementation — Stacey Hines introduced the “SEA” framework for people-centric AI adoption: Storytelling for trust-building, Education f…
S109
From principles to practice: Governing advanced AI in action — Human rights | Economic | Development Sasha presents a causal chain showing how prioritizing responsibility in AI devel…
S110
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S111
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena Estavillo Flores emphasized the need for “inclusive governance models with meaningful civil society participation”…
S112
Responsible AI in India Leadership Ethics & Global Impact — She notes that a one‑size‑fits‑all approach does not work; diverse industry templates and varying maturity levels create…
S113
Lightning Talk #139 Including youth to the public discourse — Young people are exposed to sophisticated AI-generated content that appears authentic but is fabricated, including fake …
S114
Next Steps for Digital Worlds — Striking a balance between technology usage and other aspects of life, such as interpersonal communication, exercise, an…
S115
DC3 Community Networks: Digital Sovereignty and Sustainability | IGF 2023 — Luca Belli:5, 4, 3, 2, 1. All right. So welcome to everyone to this annual meeting of the Dynamic Coalition on Community…
S116
Session — Kazakova’s role in the negotiations has been characteristically inquisitive and considered; she endeavours to capture th…
S117
Book presentation: “Youth Atlas (Second edition)” | IGF 2023 Launch / Award Event #61 — Despite the challenges, Pyrate is delighted to be working with her team. She values the opportunities for collaboration …
S118
Internet standards and human rights | IGF 2023 WS #460 — Eva Ignatuschtschenko:Thank you. I’m trying to be quick. I think a bit of optimism. We are talking about dozens of stand…
S119
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Baroness Joanna Shields
5 arguments · 149 words per minute · 1123 words · 449 seconds
Argument 1
Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start.
EXPLANATION
Baroness Shields argues that the traditional reactive, post‑harm regulatory approach used for social media cannot protect children in the AI era. Instead, governance must be anticipatory, embedding safety and age‑appropriate design into AI systems before they are deployed.
EVIDENCE
She notes that the post-harm paradigm “is not going to work in the AI future” and that “the post-harm regulatory model … is not fit for purpose in the AI world” [3][266-269]. She also stresses that children must not be “beta testers” and that age-appropriate experiences with guardrails are essential [16-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Baroness Shields’ call for proactive, safety‑by‑design regulation is documented in S1, and S17 reinforces the need for responsible innovation that anticipates risks.
MAJOR DISCUSSION POINT
Proactive governance and age‑appropriate safeguards
AGREED WITH
Chris Lehane, Tom Hall, Urvashi Aneja, Thomas Davin
Argument 2
Harmonized age‑verification standards (e.g., Open Age Alliance) are needed to provide consistent protection across jurisdictions.
EXPLANATION
The Baroness calls for a global, interoperable age‑verification framework so that children’s age can be reliably confirmed online, enabling consistent age‑appropriate experiences worldwide. She cites the Open Age Alliance as a mechanism to generate a portable age key.
EVIDENCE
She explains that “the Open Age Alliance … will enable you to generate an age key… that travels with the child everywhere they go online” [390-394]. She also highlights the need for “robust age assurance that is privacy preserving” and that technology now makes this possible [278-281].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of robust, privacy‑preserving age verification is highlighted in S1, while S20 discusses the need for careful implementation of age‑verification tools, and S29 examines global standards versus local adaptation.
MAJOR DISCUSSION POINT
Global age‑verification standards
AGREED WITH
Chris Lehane, Urvashi Aneja
Argument 3
Avoid a monoculture of AI models; preserve linguistic and cultural diversity to protect children’s identity development.
EXPLANATION
Baroness Shields warns that relying on a narrow set of AI models from the Global North would erode cultural diversity and limit children’s exposure to varied cultural expressions. She advocates for a pluralistic AI ecosystem that respects local languages and identities.
EVIDENCE
She states that “if we have a world where we are accepting models from just the global north, we will lose so much of our cultural diversity… we don’t want to develop a monoculture” [395-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about cultural homogeneity in AI models are detailed in S21, and S22 stresses that AI should respect and emphasize cultural differences; S17 adds that inclusive voices are essential.
MAJOR DISCUSSION POINT
Preserving cultural diversity in AI
AGREED WITH
Urvashi Aneja, Tom Hall, Chris Lehane
Argument 4
AI is fundamentally different from platforms and requires a distinct regulatory approach.
EXPLANATION
She argues that AI’s one‑to‑one adaptive nature and simulated intimacy set it apart from traditional platforms, so post‑harm models used for social media are inadequate.
EVIDENCE
She says, “The post-harm regulatory model … is not fit for purpose in the AI world” and follows with “AI is fundamentally different. It is not a platform” and describes AI as “a one-to-one adaptive interaction embedded in how children learn” [3-5][6-8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 argues for shared governance tailored to AI’s unique characteristics, and S25 notes that regulation should focus on specific uses rather than treating AI as a generic platform; S17 also points to the need for proactive, AI‑specific safeguards.
MAJOR DISCUSSION POINT
Need for new governance models tailored to AI’s unique characteristics
Argument 5
Guardrails are needed around AI systems that simulate intimacy to protect children.
EXPLANATION
She warns that children should not be used as beta testers for AI that can mimic human connection, calling for age‑appropriate experiences with safeguards against simulated intimacy.
EVIDENCE
She states, “Children must not be the beta testers for our AI-enabled world” and adds, “We need age-appropriate experiences by default with guardrails around systems that simulate intimacy without accountability” [16-18].
MAJOR DISCUSSION POINT
Protective safeguards for AI‑driven simulated intimacy
Chris Lehane
6 arguments · 191 words per minute · 1316 words · 412 seconds
Argument 1
Implement age‑assurance, parental controls, and external reviews to block under‑18 access and harmful content.
EXPLANATION
Lehane outlines OpenAI’s multi‑layered safety package that first verifies a user’s age, defaults to an under‑18 model when uncertain, and adds parental controls, real‑time alerts, and an external review process to protect children from harmful content.
EVIDENCE
He describes age-assurance signals, defaulting to an under-18 model, parental controls for memory, time limits, and alerts, a ban on targeted advertising, and an outside review by government authorities [327-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
OpenAI’s multi‑layered safety package is described in S1, while S20 provides context on age‑verification and privacy considerations, and S29 discusses aligning such safeguards with global standards.
MAJOR DISCUSSION POINT
Age‑assurance and parental safeguards
AGREED WITH
Baroness Joanna Shields, Urvashi Aneja
Argument 2
AI can level the playing field, but must be used to foster agency and curiosity rather than replace effort; children need space to struggle and develop grit.
EXPLANATION
Lehane emphasizes that AI’s scaling power can democratise learning, yet it should encourage children to think and create rather than provide ready answers, preserving the development of agency and resilience.
EVIDENCE
He notes that AI “is an incredibly… leveling technology” and asks whether we “encourage people to be able to use it that way” or risk undermining the social contract and personal labour control [247-249]. He also mentions the need to avoid replacing effort with AI to preserve agency [240-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to preserve curiosity and encourage effortful learning is emphasized in S26, and S27 highlights AI’s role in empowering rather than replacing learner agency.
MAJOR DISCUSSION POINT
Fostering agency and grit with AI
AGREED WITH
Thomas Davin, Baroness Joanna Shields, Tom Hall
Argument 3
Personalized AI tutors can support diverse learning needs, enhancing agency and self‑directed learning.
EXPLANATION
Lehane points out that AI can provide individualized tutoring for every child, adapting to each learner’s pace and style, thereby expanding agency and self‑directed education.
EVIDENCE
He states that “every kid in the world could… have their own AI tutor that would be able to help them to learn at the pace that they learn and in ways that they learn” [232-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of adaptive, personalized tutoring is discussed in S28, which outlines AI‑driven individualized learning experiences.
MAJOR DISCUSSION POINT
Personalized tutoring for agency
Argument 4
Global norms must allow local cultural adaptation while preventing weakest‑link loopholes; standards should be flexible to regional values.
EXPLANATION
Lehane argues that while global safety standards are needed, they must be adaptable to differing privacy laws, cultural expectations, and vulnerability profiles across countries.
EVIDENCE
He references Europe’s privacy limitations affecting age-assurance, and notes that “cultural context, societal context… have to be worked through with individual countries” [370-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The tension between global standards and local adaptation is explored in S29 and S30, and S18 provides a broader view of global cooperation meeting local needs.
MAJOR DISCUSSION POINT
Balancing global standards with local contexts
AGREED WITH
Baroness Joanna Shields, Urvashi Aneja, Tom Hall
Argument 5
OpenAI’s safety package includes age gates, parental controls, real‑time alerts, advertising bans, and external review processes.
EXPLANATION
Lehane reiterates the components of OpenAI’s child‑safety framework, emphasizing that it combines technical safeguards with governance mechanisms to protect minors.
EVIDENCE
He lists the age-gate, parental controls, real-time feedback, limits on memory, prohibition of targeted ads, and an external review by state attorneys general or similar bodies [327-354].
MAJOR DISCUSSION POINT
Comprehensive child‑safety package
Argument 6
Collaboration between AI labs and emerging AI safety institutes is essential for creating effective safety standards.
EXPLANATION
He points out that leading frontier labs are working with newly formed safety institutes worldwide to develop safety standards, indicating that such partnerships are crucial for child‑focused AI safety.
EVIDENCE
He notes, “You’ve seen the emergence of AI safety institutes around the world where the leading frontier labs, for the most part, work with those safety institutes to basically be creating safety standards” [334-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S23 notes the emerging partnership model between frontier AI labs and safety institutes to develop shared standards.
MAJOR DISCUSSION POINT
Partnerships between AI developers and safety institutes to build standards
Tom Hall
4 arguments · 163 words per minute · 927 words · 340 seconds
Argument 1
Design choices must prioritize data privacy, inclusion, and child‑centered governance, with clear “no‑regret” principles.
EXPLANATION
Hall stresses that AI tools for education should be built on a foundation of data privacy, sovereignty, and inclusive design, ensuring that no‑regret moves protect children’s rights and dignity.
EVIDENCE
He lists “data privacy, data sovereignty and inclusion and respect for the student” as top priorities, and calls for “no-regret moves” in design plans [306-311]. He also mentions publishing a free AI policy toolkit and involving children in design decisions [306-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy‑first design and respect for cultural differences are highlighted in S20 and S22, while S30 discusses the need for regionally appropriate standards.
MAJOR DISCUSSION POINT
Privacy‑first, inclusive, child‑centered design
Argument 2
AI literacy empowers children to understand data, bias, and algorithmic foundations, turning AI into a “screwdriver” for learning.
EXPLANATION
Hall describes AI literacy as giving children a “screwdriver” to dismantle and understand AI systems, teaching them about data, sensing, predictability, bias, and accountability.
EVIDENCE
He says “handing children a screwdriver… let’s take it apart and understand what’s under the hood… teaching them how computers see the world as data, what is sensing, how to think about bias and accountability” [212-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S27 describes AI as a tool that can empower children to explore and understand technology, and S28 supports the pedagogical value of AI literacy.
MAJOR DISCUSSION POINT
AI as a learning tool
AGREED WITH
Chris Lehane, Thomas Davin, Baroness Joanna Shields
Argument 3
Real‑world evaluations and policy toolkits help embed AI literacy sustainably across schools and jurisdictions.
EXPLANATION
Hall argues that practical resources such as a free AI policy toolkit and real‑world curriculum examples are essential for schools to adopt AI literacy responsibly and consistently.
EVIDENCE
He mentions publishing a “free AI policy toolkit for classrooms” and stresses the need for “real world curriculum” that excites students and reflects real-world problems [306-311][316-319].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of real‑world curriculum examples and toolkits is documented in S26, and S27 mentions the development of practical resources for safe AI integration.
MAJOR DISCUSSION POINT
Toolkits for sustainable AI literacy
AGREED WITH
Urvashi Aneja, Maria Bielikova, Thomas Davin
Argument 4
UNICEF and partners have released a free AI policy toolkit for classrooms to guide safe implementation.
EXPLANATION
Hall notes that UNICEF, together with partners, has made a publicly available toolkit that provides guidance for educators on safely integrating AI into teaching.
EVIDENCE
He explicitly states “We’ve published a free AI policy toolkit for classrooms” [306-311].
MAJOR DISCUSSION POINT
UNICEF AI policy toolkit
AGREED WITH
Thomas Davin, Urvashi Aneja
Maria Bielikova
3 arguments · 129 words per minute · 310 words · 143 seconds
Argument 1
Highest technical risk for children is exposure to commercial content and covert profiling; continuous impact research is required before deployment.
EXPLANATION
Bielikova highlights that the most serious technical threats to children stem from hidden commercial targeting and profiling, which are not evident from standard analytics, and calls for ongoing impact studies.
EVIDENCE
She points out that children see “less formal advertisement on TikTok” but are “exposed five times more… to profiling” through influencers, emphasizing the need for studies to uncover such hidden risks [403-406].
MAJOR DISCUSSION POINT
Commercial content and profiling risks
AGREED WITH
Urvashi Aneja, Thomas Davin, Tom Hall
Argument 2
Existing data‑protection and privacy tools can be enforced to limit profiling and safeguard children online.
EXPLANATION
Bielikova argues that current data‑protection frameworks, such as the Digital Services Act, already provide mechanisms to curb profiling, and these should be actively applied to protect children.
EVIDENCE
She references the Digital Services Act in Europe as an existing tool that can be leveraged, noting “Even though we have the Digital Service Act in Europe” [297].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S20 discusses leveraging existing data‑protection frameworks such as the Digital Services Act to curb profiling, and S25 underscores regulatory focus on mitigating bias and profiling risks.
MAJOR DISCUSSION POINT
Leveraging data‑protection tools
Argument 3
Ongoing studies and real‑world monitoring are essential to understand platform impacts and to adjust safeguards accordingly.
EXPLANATION
Bielikova stresses the importance of continuous empirical research, involving children in studies, to monitor how platforms affect them and to refine protective measures.
EVIDENCE
She calls for “a lot of studies… children should be there… we should travel with them through this environment” to grasp platform effects [408-410].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for continuous impact research and monitoring is emphasized in S26 and S27, which call for empirical studies to inform safeguards.
MAJOR DISCUSSION POINT
Need for continuous research and monitoring
Urvashi Aneja
4 arguments · 169 words per minute · 1080 words · 382 seconds
Argument 1
Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users.
EXPLANATION
Aneja argues that effective child‑safety policies must be co‑designed with children, incorporating real‑world evaluations and clear redress pathways so that safeguards are meaningful for young users.
EVIDENCE
She emphasizes “real-world evaluations… operationalize them… and redress mechanisms” as essential for making principles work in practice [299-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Child participation in safety design is advocated in S1, and S29 highlights the importance of involving end‑users in testing frameworks.
MAJOR DISCUSSION POINT
Child‑involved testing and redress
AGREED WITH
Maria Bielikova, Thomas Davin, Tom Hall
Argument 2
Pedagogy must be adaptable to diverse learning styles and contexts, leveraging visual, auditory, and interactive media.
EXPLANATION
Aneja stresses that education on AI should respect varied learner preferences—visual, auditory, kinesthetic—and be tailored to different cultural and socioeconomic contexts.
EVIDENCE
She notes the need to “think about pedagogy quite carefully” and that “everyone learns differently… through reading, listening, watching videos” [218-224][128-133].
MAJOR DISCUSSION POINT
Adaptive, multimodal pedagogy
Argument 3
Children should have a voice in AI design and policy to maintain agency and ensure solutions serve their interests.
EXPLANATION
Aneja highlights that children must be active participants in shaping AI systems and regulations, ensuring that their agency is respected and that policies reflect their lived realities.
EVIDENCE
She asks “how do we think about agency… across different contexts” and later stresses that “children at the heart… should be part of the governance of those mechanisms” [250-254][442-444].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses child involvement in governance, and S29 reinforces the principle of designing with, not for, children.
MAJOR DISCUSSION POINT
Child participation in governance
Argument 4
Policy frameworks must incorporate Global South perspectives to ensure inclusive, equitable AI governance.
EXPLANATION
Aneja points out that AI governance should not be dominated by Global North viewpoints; incorporating insights from the Global South is essential for equitable outcomes.
EVIDENCE
She remarks on being in India and notes “agency … has so much to do with the broader socioeconomic institutional context” and calls for inclusive perspectives [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for Global South input in AI governance is discussed in S18, and S30 illustrates how regional blocs can set culturally appropriate standards.
MAJOR DISCUSSION POINT
Inclusion of Global South viewpoints
AGREED WITH
Baroness Joanna Shields, Tom Hall, Chris Lehane
Rahul John Aju
4 arguments · 175 words per minute · 1914 words · 656 seconds
Argument 1
Curiosity should be guided; children must first learn critical thinking before being taught how to interact with machines.
EXPLANATION
Rahul reflects on his upbringing, emphasizing that questioning and critical thinking were taught before using search engines, and argues that similar guidance is needed for AI interactions.
EVIDENCE
He recounts his father urging him to “question everything” and his experience using Google to verify information, noting that “parents also taught me how to figure out what is correct information and fake information” [41-60]. He then asks how children can do this in the AI age [61-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 highlights the educational value of guided curiosity and critical thinking before relying on AI, and S27 supports teaching children to interrogate technology.
MAJOR DISCUSSION POINT
Guided curiosity and critical thinking
Argument 2
“Rescue AI” can analyze contracts and terms‑and‑conditions, flagging high‑risk clauses for users, including children.
EXPLANATION
Rahul describes a tool he built that automatically scans legal documents, identifies risky clauses, and advises users on whether to proceed, illustrating a practical safety solution.
EVIDENCE
He explains that with the tool you “can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and … what to do”, and names the tool “Rescue AI” [91-94].
MAJOR DISCUSSION POINT
AI tool for contract risk analysis
Argument 3
AI awareness and safety education are necessary for children in the AI age.
EXPLANATION
He stresses that, given the complexity of AI, children need dedicated awareness and safety training to navigate AI tools responsibly.
EVIDENCE
He declares, “That is why AI awareness and safety is necessary” after discussing the limits of parental guidance in the AI era [97-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of AI safety education for youth is reinforced in S27, which calls for empowering learners with knowledge about AI risks.
MAJOR DISCUSSION POINT
Need for AI safety and awareness education for children
Argument 4
Foundational learning should precede AI use; children must master natural intelligence before relying on artificial intelligence.
EXPLANATION
He argues that teaching basic skills and critical thinking first ensures that AI becomes a supportive tool rather than a crutch.
EVIDENCE
He says, “I believe AS should be same. We should learn the basics before using AI. You should use the natural intelligence first, then start using artificial intelligence” [108-114].
MAJOR DISCUSSION POINT
Sequencing education: fundamentals before AI integration
Thomas Davin
3 arguments · 175 words per minute · 1227 words · 419 seconds
Argument 1
Teachers feel unprepared; providing them with tools, training, and a real‑world curriculum is essential for effective AI literacy.
EXPLANATION
Thomas highlights the gap between teachers’ enthusiasm for AI and their lack of readiness, calling for resources and curriculum that bridge this divide.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 stresses the need for teacher resources and real‑world curriculum examples, while S27 mentions toolkits that support educator readiness.
MAJOR DISCUSSION POINT
Teacher preparedness for AI literacy
AGREED WITH
Tom Hall, Urvashi Aneja
Argument 2
Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience.
EXPLANATION
Thomas warns that if AI always provides correct answers, children may lose curiosity, suggesting that deliberately imperfect models could foster grit and deeper learning.
EVIDENCE
He reflects that “if we have a model that actually gives the right answer… they might lose their sense of curiosity” and proposes designing models that “give the wrong answer on purpose so that the child actually struggles” [432-437].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S26 argues for designing learning experiences that preserve struggle and curiosity, suggesting intentional imperfections in AI outputs.
MAJOR DISCUSSION POINT
Balancing AI assistance with curiosity
AGREED WITH
Chris Lehane, Baroness Joanna Shields, Tom Hall
Argument 3
Systematic impact measurement is required to monitor AI’s effects on learning outcomes and child curiosity.
EXPLANATION
He highlights that without rigorous measurement, AI could diminish curiosity; therefore, ongoing impact assessment is essential to ensure AI supports rather than undermines learning.
EVIDENCE
He remarks that “systematic impact measurement” is needed and warns that “if we have a model that actually gives the right answer … they might lose their sense of curiosity” [425-433].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for systematic impact assessment of AI in education is made in S26.
MAJOR DISCUSSION POINT
Importance of monitoring and measuring AI’s impact on education
AGREED WITH
Urvashi Aneja, Maria Bielikova, Tom Hall
Moderator
2 arguments, 104 words per minute, 338 words, 193 seconds
Argument 1
Discussions about children and technology must involve children directly rather than speaking about them.
EXPLANATION
The moderator highlights that policy conversations often overlook children’s voices, emphasizing the need for child participation in shaping technology governance.
EVIDENCE
He notes that “Too often, discussions about children and technology speak about children rather than with them” and that “This session is intentional in doing otherwise” [27-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Child‑centered policy dialogue is championed in S1, and S29 underscores the importance of involving children in safety discussions.
MAJOR DISCUSSION POINT
Child participation in policy dialogue
Argument 2
Responsibility for guiding children’s AI engagement lies with adults, institutions, and systems.
EXPLANATION
The moderator stresses that the central issue is not whether children will use AI, but whether the surrounding ecosystem is prepared to steer that interaction safely and responsibly.
EVIDENCE
He states, “The question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly” [194-195].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shared governance and adult responsibility for safe AI deployment are highlighted in S23.
MAJOR DISCUSSION POINT
Adult and institutional responsibility for safe AI use by children
Agreements
Agreement Points
Proactive governance and safety‑by‑design with age‑appropriate safeguards are essential, moving away from post‑harm models.
Speakers: Baroness Joanna Shields, Chris Lehane, Tom Hall, Urvashi Aneja, Thomas Davin
Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start. Implement age‑assurance, parental controls, and external reviews to route under‑18 users to age‑appropriate defaults and block harmful content. Design choices must prioritize data privacy, inclusion, and child‑centered governance, with clear ‘no‑regret’ principles. Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users. Over‑reliance on AI risks eroding curiosity and creativity; systematic impact measurement is required.
All speakers stress that AI for children must be governed proactively, embedding safety, privacy and age-verification from the outset rather than reacting after harms occur, and that children should be part of the design and redress process [3-5][16-18][266-269][278-281][327-354][306-311][312-319][320-324][299-304][442-444][419-426][432-437].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with UNICEF child-rights policy urging safety-by-design and proactive governance, reflected in IGF discussions on responsible AI for children and calls for principle-level guidance with practical guardrails [S42][S51][S48].
Preserving child agency, curiosity and grit; AI should augment, not replace, effortful learning.
Speakers: Chris Lehane, Thomas Davin, Baroness Joanna Shields, Tom Hall
AI can level the playing field, but must be used to foster agency and curiosity rather than replace effort; children need space to struggle and develop grit. Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience. AI’s intimate, one‑to‑one character has implications not only for safety, but for mental health, identity formation, and long‑term well‑being. AI literacy empowers children to understand data, bias, and algorithmic foundations, turning AI into a “screwdriver” for learning.
Panelists agree that AI must be designed to support agency and curiosity, avoiding a model that always gives the right answer; instead, tools should encourage critical thinking and resilience [247-249][240-246][13-15][16-18][432-437][212-216].
POLICY CONTEXT (KNOWLEDGE BASE)
Mirrors UNICEF guidance on preserving child agency and UNESCO recommendations that AI should augment learning, not replace human effort, as highlighted in responsible AI for children and generative AI in education literature [S58][S59][S42].
Inclusion and cultural diversity must be protected; AI should not create a monoculture.
Speakers: Baroness Joanna Shields, Urvashi Aneja, Tom Hall, Chris Lehane
Avoid a monoculture of AI models; preserve linguistic and cultural diversity to protect children’s identity development. Policy frameworks must incorporate Global South perspectives to ensure inclusive, equitable AI governance. Inclusion and respect for the student; ensure diverse representation in AI products. Global norms must allow local cultural adaptation while preventing weakest‑link loopholes; standards should be flexible to regional values.
All agree that AI systems must reflect cultural and linguistic diversity and that global standards should be adaptable to local contexts to avoid a single-culture dominance [395-398][250-254][399-401][306-311][312-319][370-374].
POLICY CONTEXT (KNOWLEDGE BASE)
Supported by IGF observations on cultural-diversity risks of AI monoculture and calls for inclusive, multi-stakeholder AI development to protect cultural heritage [S56][S57][S62].
A global, interoperable age‑verification framework is needed to protect children across jurisdictions.
Speakers: Baroness Joanna Shields, Chris Lehane, Urvashi Aneja
Harmonized age‑verification standards (e.g., Open Age Alliance) are needed to provide consistent protection across jurisdictions. Implement age‑assurance, parental controls, and external reviews to route under‑18 users to age‑appropriate defaults and block harmful content. Global norms for children’s safety must accommodate cultural and regulatory diversity without creating loopholes that let companies opt for the weakest protection.
Baroness and Chris outline concrete age-verification and safety mechanisms, while Urvashi raises the need for these norms to be globally consistent yet locally adaptable [390-394][327-342][375-381].
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with multiple policy calls for age-verification mechanisms, including EU-wide age-gating proposals and UNICEF/UNESCO emphasis on interoperable verification while noting privacy concerns [S45][S46][S47][S42].
Teachers need tools, training and real‑world curricula to deliver effective AI literacy.
Speakers: Thomas Davin, Tom Hall, Urvashi Aneja
Teachers feel unprepared; providing them with tools, training, and a real‑world curriculum is essential for effective AI literacy. UNICEF and partners have released a free AI policy toolkit for classrooms to guide safe implementation. Real‑world evaluations and policy toolkits help embed AI literacy sustainably across schools and jurisdictions.
Consensus that educator capacity is a bottleneck and that toolkits and practical curricula are required to scale AI literacy responsibly [419-426][306-311][299-304].
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes UNESCO’s AI-in-education framework that stresses teacher training, curricula development, and awareness-raising as essential for effective AI literacy [S44][S60][S61][S50].
Continuous, real‑world monitoring and impact measurement are required to ensure AI safeguards work for children.
Speakers: Urvashi Aneja, Maria Bielikova, Thomas Davin, Tom Hall
Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users. Highest technical risk for children is exposure to commercial content and covert profiling; continuous impact research is required before deployment. Systematic impact measurement is required to monitor AI’s effects on learning outcomes and child curiosity. Real‑world evaluations and policy toolkits help embed AI literacy sustainably across schools and jurisdictions.
All emphasize the need for ongoing empirical research, child-involved testing, and systematic metrics to track AI’s impact and adjust safeguards accordingly [299-304][442-444][403-410][419-426][306-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with IGF recommendations for continuous monitoring and impact measurement of AI systems affecting children, as outlined in responsible AI safeguarding reports [S43][S45][S55].
Similar Viewpoints
Both stress that safety must be built into AI systems before deployment, using age‑verification and proactive design rather than reacting after harms occur [266-269][327-354].
Speakers: Baroness Joanna Shields, Chris Lehane
Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start. Implement age‑assurance, parental controls, and external reviews to route under‑18 users to age‑appropriate defaults and block harmful content.
Both advocate for strong privacy, inclusion and parental oversight mechanisms as core components of child‑focused AI safety [306-311][327-354].
Speakers: Tom Hall, Chris Lehane
Design choices must prioritize data privacy, inclusion, and child‑centered governance, with clear ‘no‑regret’ principles. Implement age‑assurance, parental controls, and external reviews to route under‑18 users to age‑appropriate defaults and block harmful content.
Both underline the necessity of continuous, child‑involved research and monitoring to detect hidden risks such as covert profiling [299-304][403-410].
Speakers: Urvashi Aneja, Maria Bielikova
Safety frameworks should involve children directly in testing and redress mechanisms to ensure they work for the intended users. Ongoing studies and real‑world monitoring are essential to understand platform impacts and to adjust safeguards accordingly.
Both stress that children need foundational critical‑thinking skills before relying on AI, and that AI should be used to nurture, not replace, curiosity [41-60][61-68][432-437].
Speakers: Rahul John Aju, Thomas Davin
Curiosity should be guided; children must first learn critical thinking before being taught how to interact with machines. Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience.
Unexpected Consensus
A youth innovator (Rahul) proposes a concrete AI safety tool (Rescue AI) aligning with senior panel calls for practical safety solutions.
Speakers: Rahul John Aju, Baroness Joanna Shields, Chris Lehane
“Rescue AI” can analyze contracts and terms‑and‑conditions, flagging high‑risk clauses for users, including children. Proactive governance is needed; post‑harm models are inadequate and age‑appropriate safeguards must be built into AI from the start. Implement age‑assurance, parental controls, and external reviews to route under‑18 users to age‑appropriate defaults and block harmful content.
Rahul’s tool exemplifies the type of safety-by-design solution the Baroness and Chris advocate, showing an unexpected alignment between a young practitioner and senior policymakers [91-94][266-269][327-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects IGF youth-engagement recommendations that encourage young innovators to develop safety tools, with Rahul’s proposal cited as an example of youth-led solutions [S49][S53].
Rahul’s emphasis on guided curiosity mirrors senior experts’ warnings about AI eroding curiosity.
Speakers: Rahul John Aju, Chris Lehane, Thomas Davin
Curiosity should be guided; children must first learn critical thinking before being taught how to interact with machines. AI can level the playing field, but must be used to foster agency and curiosity rather than replace effort; children need space to struggle and develop grit. Over‑reliance on AI risks eroding curiosity and creativity; models could intentionally introduce challenges to build resilience.
Despite the age gap, Rahul’s call for guided curiosity aligns with Chris and Thomas’s concerns that AI should not diminish children’s natural inquisitiveness [41-60][61-68][247-249][432-437].
POLICY CONTEXT (KNOWLEDGE BASE)
Resonates with UNICEF warnings about AI diminishing curiosity and with responsible AI guidelines that promote guided exploration rather than passive consumption [S42][S58].
Overall Assessment

There is strong consensus that AI for children must be governed proactively with safety‑by‑design, age‑verification, inclusion, teacher capacity building, and continuous monitoring. Panelists across roles and regions agree on the need for child‑centered design, preservation of cultural diversity, and mechanisms to preserve agency and curiosity.

High consensus – the convergence of viewpoints across senior policymakers, industry leaders, educators, and a youth innovator indicates a shared commitment to child‑focused, inclusive, and measurable AI governance, which should accelerate the development of global standards and practical toolkits.

Differences
Different Viewpoints
Global harmonised age‑verification versus locally‑adapted age‑assurance
Speakers: Baroness Joanna Shields, Chris Lehane
Baroness: Calls for a global, interoperable age-key generated by the Open Age Alliance that travels with the child across platforms [390-394]. Chris: Highlights that privacy-law limitations (e.g., in Europe) constrain age-assurance signals and that cultural and societal contexts require country-specific solutions [370-374].
Both agree age verification is essential, but the Baroness pushes for a single worldwide standard, whereas Chris argues that legal and cultural differences mean a one‑size‑fits‑all solution is not feasible, requiring local adaptation.
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the tension noted in policy debates between a harmonised global age-verification system and locally-adapted approaches respecting privacy and data-protection laws [S45][S46][S47].
Extent and style of AI integration in education
Speakers: Tom Hall, Chris Lehane
Tom Hall: Advocates broad AI literacy, publishing a free policy toolkit, and embedding AI as a “screwdriver” for learning with inclusive, child-centred design [306-311]. Chris Lehane: Warns that AI must preserve agency and curiosity, suggesting models might intentionally give wrong answers to foster grit and that AI should not replace effortful learning [247-249][240-246].
Tom pushes for rapid, wide‑scale AI adoption and tooling, while Chris cautions against over‑reliance and proposes limiting AI’s role to protect agency, indicating divergent views on how deeply AI should be embedded in classrooms.
POLICY CONTEXT (KNOWLEDGE BASE)
Relates to ongoing discussions on the appropriate depth of AI integration in curricula, balancing innovation with pedagogical integrity as described in UNESCO and IGF education sessions [S60][S61][S44].
Primary risk focus for children using AI platforms
Speakers: Maria Bielikova, Chris Lehane
Maria Bielikova: Identifies covert commercial profiling and hidden influencer-driven targeting as the highest technical risk for children, calling for continuous impact research [403-406]. Chris Lehane: Emphasises content-related harms (violence, sexual, mental-health) and proposes age-gates, parental controls, and external reviews to block such content [340-349].
Maria stresses hidden profiling as the most urgent danger, whereas Chris concentrates on explicit harmful content, revealing a mismatch in perceived priority of child‑safety risks.
POLICY CONTEXT (KNOWLEDGE BASE)
Addresses identified primary risks such as exposure to extremist content and inappropriate material, which have been highlighted in IGF research on child recruitment and content monitoring [S55][S45][S42].
Measurement and evaluation of AI’s impact on children
Speakers: Thomas Davin, Maria Bielikova
Thomas Davin: Calls for systematic impact measurement and even designing AI models that deliberately give wrong answers to preserve curiosity and assess outcomes [425-437]. Maria Bielikova: Argues for ongoing empirical studies and real-world monitoring involving children to understand platform effects and adjust safeguards [408-410].
Thomas proposes a more experimental, design‑centric evaluation (including intentional errors), while Maria advocates for continuous observational research, showing different methodological preferences for impact assessment.
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for evidence-based evaluation echo the AI Policy Research Roadmap’s emphasis on measurement frameworks and IGF monitoring guidelines for children’s AI impact [S43][S51][S45].
Unexpected Differences
Different prioritisation of child‑safety risks (profiling vs explicit harmful content)
Speakers: Maria Bielikova, Chris Lehane
Maria: Highlights covert commercial profiling and influencer-driven targeting as the most serious hidden risk [403-406]. Chris: Focuses on preventing exposure to violent, sexual, or mental-health-related content through age-gates and parental controls [340-349].
While both address safety, the unexpected divergence lies in which risk they deem most urgent—hidden data‑driven profiling versus overt harmful content—suggesting differing threat models among experts.
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects privacy-centric critiques of profiling versus explicit harmful-content safeguards discussed in data-protection forums and age-verification debates [S47][S45][S55].
Optimism about age‑verification technology versus concern over its bluntness
Speakers: Baroness Joanna Shields, Chris Lehane
Baroness: Notes that age-assurance technology is improving but still often a blunt instrument, calling for better solutions [378-382][278-281]. Chris: Presents a working age-assurance system that defaults to an under-18 model and integrates parental controls, showing confidence in current technical solutions [340-349].
The Baroness expresses caution that existing age‑verification remains coarse, whereas Chris displays confidence that current multi‑layered mechanisms are sufficient—an unexpected contrast between caution and optimism.
POLICY CONTEXT (KNOWLEDGE BASE)
Captures the optimism for age-verification tools contrasted with concerns about their bluntness and potential privacy infringement, as debated in IGF sessions on age-gating and data protection [S47][S45][S46].
Overall Assessment

The panel displayed broad consensus on the need to protect children and to embed AI literacy, but significant disagreements emerged around how to implement age‑verification, the depth of AI integration in education, the primary safety risks to prioritise, and the appropriate methods for impact measurement. These divergences reflect differing priorities (global standards vs local adaptation, rapid deployment vs cautious pedagogy) and risk perceptions (profiling vs content harms).

Moderate to high – while all participants share the overarching goal of child safety and empowerment, the contrasting approaches to governance, risk focus, and evaluation suggest that reaching coordinated policy action will require substantial negotiation and compromise.

Partial Agreements
Both aim to protect children through age‑verification and safeguards, but differ on the mechanism—global interoperable age‑keys versus a layered, company‑specific package.
Speakers: Baroness Joanna Shields, Chris Lehane
Baroness: Supports age-verification and guardrails built into AI from the outset [278-281]. Chris: Implements a multi-layered safety package with age-assurance, parental controls, and external review [340-349].
Both seek widespread AI literacy for children, yet Tom focuses on resources and immediate deployment, whereas Thomas stresses careful measurement and pedagogical design that may limit AI’s ease of answer‑giving.
Speakers: Tom Hall, Thomas Davin
Tom Hall: Promotes AI literacy via toolkits, inclusive curricula, and real-world problem-based learning [306-311]. Thomas Davin: Emphasises AI literacy as essential but stresses systematic impact measurement and designing models that challenge children to preserve curiosity [425-437].
Both agree children must participate in shaping AI safety, but Urvashi stresses procedural testing and redress, while Thomas emphasizes broader governance involvement.
Speakers: Urvashi Aneja, Thomas Davin
Urvashi Aneja: Calls for child-involved real-world evaluations, testing, and clear redress mechanisms [299-304]. Thomas Davin: Highlights the need for children to be at the heart of governance, giving them a voice in design and policy [442-444].
Takeaways
Key takeaways
- Post‑harm regulatory approaches are insufficient for AI; safety must be built into design from the outset, especially for children.
- Age‑assurance, robust parental controls, and external independent reviews are essential safeguards for child‑facing AI systems.
- AI literacy should begin with teaching critical thinking and foundational knowledge before introducing AI tools; teachers need training and practical curricula.
- Personalized AI tutors can enhance agency and learning, but over‑reliance may erode curiosity and grit; intentional challenge‑based design may be needed.
- Global harmonisation of age‑verification (e.g., Open Age Alliance) is required, while allowing cultural and regulatory adaptation to avoid a monoculture of models.
- Continuous real‑world impact research and monitoring of profiling, commercial content, and covert influences are needed to evaluate safety before deployment.
- Children must be involved directly in testing, redress mechanisms, and policy design to ensure solutions serve their interests.
Resolutions and action items
- UNICEF and partners released a free AI policy toolkit for classrooms to guide safe implementation.
- OpenAI committed to a multi‑pronged safety package: age gates, default under‑18 models, parental controls, advertising bans, and external review processes.
- Baroness Shields highlighted the Open Age Alliance initiative to create interoperable, privacy‑preserving age‑verification keys.
- Tom Hall (LEGO) pledged to incorporate child‑centered governance, data privacy, and inclusion, and to involve children in policy development.
- Rahul John Aju showcased “Rescue AI” for contract risk analysis and offered to continue developing tools that help children understand terms and conditions.
- Panelists agreed to pursue further collaboration on real‑world evaluations, teacher training resources, and inclusion of Global South perspectives.
Unresolved issues
- How to embed AI literacy effectively into diverse curricula and teaching practices across different education systems.
- Specific mechanisms for localising global safety standards without creating weakest‑link loopholes.
- Methods for ensuring unknown or emerging AI companies comply with child‑safety requirements.
- Detailed processes for redress and accountability when AI harms children in practice.
- Balancing the need for age‑appropriate content with preserving cultural and linguistic diversity in AI models.
Suggested compromises
- Adopt “no‑regret” design principles that prioritize data privacy, inclusion, and child‑respect while allowing iterative improvements.
- Implement age‑verification that is robust yet privacy‑preserving, enabling a universal age key that can be adapted locally.
- Combine AI assistance with intentional gaps or challenges to preserve curiosity and develop resilience.
- Use a hybrid governance model: global baseline safeguards (age gates, advertising bans) complemented by locally‑tailored cultural guidelines.
- Involve children in the design and testing phases to balance safety controls with user agency.
Thought Provoking Comments
AI is fundamentally different. It is not a platform. It is increasingly a one‑to‑one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. AI is engineering simulated intimacy at scale, and children cannot reliably distinguish between authentic human connection and artificial intimacy.
She reframes the conversation from treating AI like other digital platforms to recognizing its unique relational dynamics with children, highlighting deep‑rooted psychological risks rather than just data privacy.
Sets the thematic foundation for the whole panel, prompting subsequent speakers to discuss safety‑by‑design, age‑appropriate experiences, and the need for new regulatory approaches rather than post‑harm models.
Speaker: Baroness Joanna Shields
While I was using Google, my parents taught me how to figure out what is correct information and what is fake. In this age of AI, how do we expect kids to do it? Parents can’t even figure it out. Curiosity is there in every child, but it only becomes powerful if it’s guided the right way.
Raises the practical challenge of misinformation and critical thinking for children in the AI era, moving the discussion from abstract policy to everyday lived experience.
Triggered dialogue on AI literacy, the role of educators and parents, and led Rahul to introduce his own tool (Rescue AI) as a concrete response to the problem.
Speaker: Rahul John Aju
I created an AI software called Rescue AI: you upload a full terms‑and‑conditions document and it highlights high‑risk and low‑risk clauses, telling you whether you should use the product.
Provides a tangible, youth‑driven solution to the transparency problem, demonstrating that children can be innovators in safety, not just passive users.
Illustrated the potential of child‑led tool development, encouraging other panelists to consider how to empower young people to create safeguards, and reinforced the call for AI literacy that includes hands‑on building.
Speaker: Rahul John Aju
We must be careful not to create over‑dependency on AI that narrows creativity. Governance and design choices need to prioritize data privacy, data sovereignty, inclusion, and respect for the student. Involve children in the design process and avoid one‑size‑fits‑all solutions.
Highlights the risk of homogenized AI experiences and stresses inclusive, child‑centered governance, expanding the conversation beyond safety to broader ethical design principles.
Shifted the tone toward concrete design criteria, prompting further discussion on inclusion, cultural diversity, and the need for “no‑regret” moves in policy.
Speaker: Tom Hall
AI is a leveling technology that scales the ability of anyone to think, learn, create, and own their labor. We need to teach people agency so they can use it to control their own work, not just be exploited by the existing social contract between capital and labor.
Connects AI for children to larger socioeconomic structures, introducing the concept of agency as a central metric for future policy, moving the debate from child safety to societal transformation.
Broadened the panel’s scope, leading participants to consider long‑term implications of AI on labor markets and the importance of teaching agency alongside technical skills.
Speaker: Chris Lehane
Age‑assurance technology can create a verifiable age‑key that travels with the child across every platform. The Open Age Alliance is working to harmonise standards so that age‑appropriate experiences are delivered globally.
Offers a concrete, globally‑scalable regulatory mechanism that addresses the blunt‑instrument age bans currently used, linking technical innovation with policy harmonisation (a toy sketch of such a signed age assertion appears after this comment).
Prompted dialogue on global standards versus local adaptation, and set the stage for later discussion on cultural diversity and avoiding a monoculture of AI models.
Speaker: Baroness Joanna Shields
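Neither the session nor the Open Age Alliance materials cited here specify how such an age‑key would work technically, so the following Python sketch is purely an assumption for illustration: a trusted issuer signs a minimal age‑band claim that any platform can verify without learning the child’s identity. A real scheme would use public‑key signatures or zero‑knowledge proofs from an accredited verifier rather than the shared demo key used below.

```python
# Hypothetical sketch of a privacy-preserving "age-key".
# Assumptions: a single trusted issuer, a shared demo key (a real design
# would use asymmetric crypto), and an age band as the only claim.
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key-not-for-production"

def issue_age_key(age_band: str, ttl_seconds: int = 3600) -> str:
    """Issuer side: sign an age band (e.g. 'under-18') with an expiry time."""
    claims = {"age_band": age_band, "exp": int(time.time()) + ttl_seconds}
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + tag

def verify_age_key(token: str) -> dict | None:
    """Platform side: check signature and expiry; no identity ever travels."""
    payload_b64, _, tag = token.rpartition(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or tampered token
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None

token = issue_age_key("under-18")
print(verify_age_key(token))  # {'age_band': 'under-18', 'exp': ...}
```

The property the sketch illustrates is data minimisation: a platform learns only a signed age band and an expiry time, never a name or birth date, which is what would make an interoperable age‑key compatible with privacy and data‑protection law.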
We shouldn’t prohibit children from the city; we should travel with them through the environment, study what’s happening, and use those insights to protect them while allowing exploration.
Uses a vivid metaphor to argue for contextual, child‑centered research rather than blanket bans, emphasizing the need for empirical studies and child participation.
Reinforced the call for real‑world evaluations and child involvement, influencing the moderators’ summary that highlighted the importance of “traveling with children” in AI governance.
Speaker: Maria Bielikova
Think about what kind of ancestor you want to be. We have the chance now, after social media and other harmful technologies, to make sharp decisions that will pay forward for the next generation.
Frames the policy challenge as an ethical legacy, giving the discussion a moral urgency that resonates beyond technical solutions.
Served as a concluding moral anchor, influencing the final synthesis that emphasized responsibility, inclusion, and long‑term societal impact.
Speaker: Tom Hall
Overall Assessment

The discussion was driven forward by a handful of pivotal remarks that reframed AI from a mere platform to a relational, agency‑shaping technology. Baroness Shields’ opening about simulated intimacy set a high‑stakes context, which Rahul’s personal experience and his Rescue AI tool grounded in everyday challenges. Tom Hall’s warnings about over‑dependence and inclusion, Chris Lehane’s linkage of AI to broader labor dynamics, and Maria Bielikova’s city metaphor together expanded the conversation from child‑centric safety to systemic design, cultural diversity, and empirical oversight. These comments sparked new sub‑topics—age‑verification standards, global‑local regulatory balance, and the moral imperative of shaping a responsible legacy—thereby deepening the analysis and steering the panel toward concrete, actionable pathways for child‑focused AI governance.

Follow-up Questions
How can children be guided to distinguish between authentic human connection and artificial intimacy, and how can we teach them to critically evaluate AI‑generated information?
Ensures children are not misled by persuasive AI interactions, protecting mental health and fostering critical thinking.
Speaker: Rahul John Aju
What mechanisms can help children understand and evaluate lengthy terms and conditions and privacy policies of AI services?
Improves informed consent and transparency, preventing hidden data collection or harmful clauses.
Speaker: Rahul John Aju
How should we handle unknown or unregulated AI companies that claim to be safe for children?
Addresses gaps in oversight and the need for standards to protect children from potentially unsafe AI products.
Speaker: Rahul John Aju
How can AI education be structured so that children first learn natural‑intelligence fundamentals before using AI tools?
Establishes a solid knowledge base, ensuring AI is used as a supplement rather than a replacement for basic skills.
Speaker: Rahul John Aju
What are the most effective ways to translate real‑world safety practices into the digital AI environment for children?
Bridges the gap between offline safety habits and online AI interactions, reducing risk of harm.
Speaker: Rahul John Aju
What are the highest‑risk failure modes of AI systems for children, and what technical evaluations should be required before deployment?
Identifies specific safety threats and establishes pre‑deployment testing standards to protect children.
Speaker: Thomas Davin (to Maria Bielikova)
What key lesson from the UK Internet safety agenda is most relevant today, and what practice should be avoided in AI governance?
Leverages past policy experience to inform current AI regulation and avoid repeating ineffective approaches.
Speaker: Thomas Davin (to Baroness Joanna Shields)
What governance and design choices are essential to ensure AI tools support children’s well‑being across diverse education systems and cultural contexts?
Ensures AI implementations are inclusive, culturally sensitive, and aligned with varied educational needs.
Speaker: Thomas Davin (to Tom Hall)
What baseline governance package should be globally interoperable for child‑facing AI, and what elements need local adaptation?
Creates a common safety framework while allowing flexibility for jurisdiction‑specific cultural and regulatory factors.
Speaker: Urvashi Aneja (to Chris Lehane)
How should global norms for children’s safety handle cultural and regulatory diversity without creating loopholes that allow the weakest protection?
Prevents regulatory arbitrage and ensures minimum safety standards worldwide.
Speaker: Urvashi Aneja (to Chris Lehane)
What measurable indicators can regulators use to assess whether an AI system is acting in a child’s best interest?
Provides concrete metrics for accountability and ongoing monitoring of AI impacts on children.
Speaker: Urvashi Aneja (to Maria Bielikova)
How can we conduct real‑world evaluations of AI systems in children’s contexts rather than only lab testing?
Ensures that safety and effectiveness assessments reflect actual usage environments and diverse user experiences.
Speaker: Urvashi Aneja
How can we ensure inclusion of children from the Global South, children with disabilities, and those without internet access in AI solutions?
Addresses equity concerns, preventing a digital divide that could exacerbate existing inequalities.
Speaker: Thomas Davin (summary)
Can AI models be intentionally designed to give occasional wrong answers to foster grit and curiosity in children?
Explores pedagogical design that balances assistance with challenge to support long‑term learning skills.
Speaker: Thomas Davin (summary)
What redress mechanisms are needed when AI harms children, and how can they be effectively enforced?
Establishes pathways for remediation and accountability when safety safeguards fail.
Speaker: Thomas Davin (summary)
How can profiling of children on platforms (e.g., via influencer content rather than formal ads) be measured and mitigated?
Targets subtle forms of data exploitation that affect children’s privacy and autonomy.
Speaker: Maria Bielikova
How can age‑assurance technology be standardized globally (e.g., via the Open Age Alliance) and integrated across platforms to protect children?
Seeks a universal, privacy‑preserving solution for age‑appropriate experiences, reducing reliance on blunt age bans.
Speaker: Baroness Joanna Shields

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.