Safeguarding Children with Responsible AI
20 Feb 2026 18:00h - 19:00h
Summary
The summit opened with Baroness Joanna Shields warning that governing AI for children is the clearest test of responsible technology stewardship and that existing post-harm models are inadequate for AI’s unique, intimate one-to-one interactions with kids [1-13][16-22]. She emphasized that AI simulates intimacy at scale, affecting children’s safety, mental health and identity formation, and called for age-appropriate default experiences and guardrails to protect dignity and development [14-21].
Moderator Rahul John Aju was introduced as a young AI innovator, highlighting the need to speak with, not just about, children in technology debates [26-31]. Rahul argued that children’s innate curiosity must be guided by critical thinking and that AI safety cannot rely on parents alone, noting the difficulty of distinguishing real from fake information in the AI era [58-63][66-69]. He illustrated the problem with examples of photo uploads and unread terms-and-conditions, and described his “Rescue AI” tool that flags risky contract clauses, underscoring the urgency of AI awareness [85-92]. Rahul advocated for foundational education in natural intelligence before AI, proposing personalized, multimodal learning tools such as Notebook LM and StudyFetch, and cited his free ThinkCraft Academy courses that have reached over 700,000 learners [138-173].
In the panel, Tom Hall defined AI literacy as giving children a “screwdriver” to dissect technology, noting that while 80% of teachers are excited, only 41% feel prepared, thus calling for teacher tools and child-centered curricula [209-218][306-324]. Chris Lehane highlighted AI’s potential to individualize tutoring and expand agency, warning that existing K-12 systems were built for the industrial age and must be re-engineered to empower learners rather than constrain them [232-248]. Urvashi Aneja stressed the importance of embedding AI literacy within policy and pedagogy, and raised concerns that cultural and socioeconomic contexts affect agency, especially in the Global South [220-253].
The Baroness reiterated that a post-harm regulatory paradigm will not work, advocating safety-by-design, age-assurance technologies, and the Open Age Alliance to create interoperable, privacy-preserving age verification across jurisdictions [266-281][377-395]. Maria Bielikova called for systematic, child-involved studies to detect profiling and other harms, arguing that existing data-protection tools can mitigate risks if properly enforced [402-410]. The panel reached consensus that safety by design, inclusion, offline accessibility, and placing children at the centre of governance are essential to avoid a monoculture and preserve curiosity [419-444]. Participants expressed measured optimism that, with coordinated standards and child-led evaluation, AI can enhance learning without eroding agency [421-426][438-440].
Rahul concluded by thanking the summit and urging that policies keep children’s needs at the forefront, reinforcing the collective commitment to responsibly shape the AI-enabled future for kids [456-462][464-466].
Keypoints
Major discussion points
– Child-centric AI governance and safety-by-design – The panel repeatedly stressed that AI must be built with age-appropriate guardrails, robust age-verification and privacy protections from the outset, rather than relying on post-harm regulation. The Baroness highlighted the failure of “post-harm” models for AI and called for “safety from the ground up” and “age-assurance technology” that can be verified across platforms [266-276][278-281][389-395].
– AI literacy and agency for children and educators – Multiple speakers argued that children need foundational knowledge and critical thinking skills before they can use AI responsibly. Tom Hall described AI literacy as giving children a “screwdriver” to understand the inner workings of AI and noted that many teachers feel unprepared [209-217][214-216]; Rahul emphasized teaching the basics of maths before relying on calculators and advocated a curriculum that teaches “how to think” rather than just “what to think” [106-118][155-158].
– Risk mitigation and the potential harms of unchecked AI – Concerns were raised about emotional dependency, manipulation, profiling, loss of curiosity, and cultural homogenisation. The Baroness warned that simulated intimacy can affect mental health and identity formation [13-16][21]; Maria pointed out hidden profiling on platforms like TikTok despite low formal advertising [402-406]; Thomas warned that over-reliance on AI could blunt children’s curiosity and grit [425-437].
– Global policy coordination and contextual adaptation – Participants called for a mix of universal standards (e.g., age-assurance, data-privacy) and locally-tailored rules that respect cultural and regulatory differences. Chris outlined a “multi-pronged” package (age assurance, parental controls, no targeted ads, external review) and noted the need to adapt it across jurisdictions [327-357][340-354]; the Baroness highlighted the Open Age Alliance as a mechanism for interoperable age-verification while cautioning against a monoculture of models [389-398].
– Inclusion and equitable access – The discussion stressed that AI solutions must be inclusive of diverse languages, abilities, and offline contexts, especially for the Global South. Tom advocated “data privacy, data sovereignty and inclusion” and stressed involving children in design [306-319][322-324]; Urvashi and Thomas added that policies should address children with disabilities and those without reliable internet [440-442][250-253].
Overall purpose / goal of the discussion
The session aimed to shape a responsible, child-focused AI ecosystem by (1) identifying the regulatory and design gaps that could endanger children’s wellbeing, (2) outlining how AI literacy and agency can be cultivated in schools and homes, (3) proposing concrete governance tools (age-assurance, parental controls, global standards) and (4) ensuring that these solutions are inclusive, culturally sensitive, and equitable across different socioeconomic contexts.
Overall tone and its evolution
– The opening remarks by the Baroness set a serious, urgent tone, emphasizing risk and the need for proactive safeguards.
– Rahul’s contribution shifted the mood to energetic and informal, using humor and personal anecdotes while still stressing the importance of guidance.
– The moderated panel adopted a collaborative and analytical tone, balancing optimism about AI’s potential with caution about harms, and offering concrete policy ideas.
– As the conversation progressed, the tone became hopeful and solution-oriented, highlighting emerging tools (age-gate, Open Age Alliance) and the possibility of inclusive, globally coordinated standards.
– The closing remarks returned to a reflective and appreciative tone, thanking participants and reaffirming a collective commitment to responsible AI for children.
Speakers
– Baroness Joanna Shields
– Area of expertise: Internet safety, child online protection, policy & regulation
– Role/Title: Baroness; former UK Government minister for Internet safety and harms; senior leader in international child-online-safety coalitions [S9]
– Chris Lehane
– Area of expertise: AI policy, public affairs, child-focused AI safety
– Role/Title: Chief Global Affairs Officer, OpenAI [S1]
– Tom Hall
– Area of expertise: AI-enabled education, legal frameworks for technology
– Role/Title: Vice President and General Manager, LEGO Education (also representing the LEGO Foundation) [S4]
– Urvashi Aneja
– Area of expertise: Digital governance, AI ethics, child-centred technology policy
– Role/Title: Director, Digital Futures Lab [S6]
– Maria Bielikova
– Area of expertise: Trustworthy AI, user modelling, personalization, disinformation risk
– Role/Title: Director, Kempelen Institute for Intelligent Technologies [transcript]
– Thomas Davin
– Area of expertise: Innovation for children, UNICEF programmes, AI for development
– Role/Title: Director of Innovation, UNICEF (Director of the Office of Innovation at UNICEF) [S10]
– Moderator
– Area of expertise: Session facilitation, event moderation
– Role/Title: Moderator of the AI Impact Summit panel [transcript]
– Rahul John Aju
– Area of expertise: Youth AI entrepreneurship, AI safety tools, AI education for children
– Role/Title: Young AI innovator; Founder, AIRM Technologies; Founder, ThinkCraft Academy; Speaker at the summit [transcript]
Additional speakers:
– Alicia – (appears as a panel participant; specific role or expertise not provided) [transcript]
Baroness Joanna Shields (former UK Minister for Internet Safety) opened the session warning that governing AI for children will be “the clearest test yet on whether we are governing this technology responsibly and for the public good” [1-3]. She argued that AI’s rapid adoption is driven by extraordinary capabilities, but its continued place in society hinges on trust built through responsible design [2-4]. Rejecting the “post-harm regulatory model” used for social media as “not fit for purpose in the AI world”, she noted that AI differs from a platform because it creates one-to-one adaptive interactions that become part of how children learn, communicate, create and form their sense of self [5-8]. Simulated intimacy can feel real to a child even though it is merely code [9-11], and children cannot reliably distinguish authentic human connection from artificial intimacy [12-13]. This blurring threatens safety, mental health, identity formation and long-term well-being, with observed harms such as emotional dependency, manipulation, deep-fake abuse and devastating loss [14-16]. She concluded that children must not be “beta testers” for AI, calling for age-appropriate experiences by default and guardrails around systems that simulate intimacy [17-21]. She expressed optimism about joining the panel [22-23].
Rahul John Aju (Founder, AIRM Technologies; Founder & Director, ThinkCraft Academy; advisor to public institutions) was introduced by the moderator as “the AI kid of India” and emphasised the need to speak with children rather than about them [26-29]. He recalled his father’s advice to “question everything” and to be critical [45-47][50-54], and described how his parents taught him to separate correct from fake information when using Google [58-60]. He asked whether, in the AI age, children can still perform this discrimination, noting that even parents struggle to identify reliable information [61-64]. To illustrate digital opacity, he pointed to the habit of uploading photos to the cloud without knowing what happens to them [76-78] and the widespread neglect of lengthy terms-and-conditions [85-87].
He then presented Rescue AI, a prototype he and his team have been developing for three years that can ingest any contract or terms-and-conditions document, flag high-risk and low-risk clauses, and advise whether the product should be used [91-94][95-98]. He framed this tool as evidence that AI awareness and safety are essential, especially when no such safeguards exist [95-98].
Building on foundational knowledge, Rahul argued that children should first master natural intelligence (basic maths, reading and critical thinking) before relying on AI tools [106-118]. He likened AI-enhanced learning to a calculator that becomes useful only after the basics are understood [101-108]. He advocated personalised, multimodal resources, citing Notebook LM’s ability to generate videos and podcasts from textbook content [140-144] and StudyFetch’s conversion of chapters into games [144-146]. Through ThinkCraft Academy he has taught over 700,000 learners how to build and fine-tune large language models in 30 days [166-173]; the academy is supported by AIRM Technologies and his advisory work with public bodies [166-173][468-470].
Panel introduction: Thomas Davin (Director, Office of Innovation, UNICEF); Urvashi Aneja (Director, Digital Futures Lab); Maria Bielikova (Director, Kempelen Institute for Intelligent Technologies); Chris Lehane (Chief Global Affairs Officer, OpenAI); Tom Hall (Vice-President and General Manager, LEGO Education) alongside Baroness Shields [198-206].
Tom Hall (Vice-President and General Manager, LEGO Education) defined AI-literacy as giving children a “screwdriver” to dissect technology, stressing that children must understand how computers see the world as data, how bias works and how accountability can be built [212-216]. He noted that while 80% of teachers are excited about AI in classrooms, only 41% feel prepared to teach it, highlighting a capacity gap that requires tools and real-world curricula [209-218]. Hall announced a free AI policy toolkit for classrooms and urged child-centred, inclusive design that respects data privacy and sovereignty [306-311][312-319]. He called for children’s involvement in policy development and warned against one-size-fits-all solutions [322-324]. Importantly, he described “no-regret moves” as design choices that protect inclusion and privacy while allowing iterative improvement [306-311].
Urvashi Aneja (Director, Digital Futures Lab) asked how to embed AI literacy into policy, seeking ways to translate real-world safety practices into the digital AI environment for children [220-223]. She stressed that agency is shaped not only by individual capacity but also by socioeconomic and institutional contexts, especially in the Global South [250-254]. Aneja highlighted the need for child-involved real-world evaluations, redress mechanisms and clear, enforceable principles across jurisdictions [299-304][311-314].
Chris Lehane (Chief Global Affairs Officer, OpenAI) described AI’s potential to provide personalised tutoring that adapts to each child’s pace and learning style [232-236]. He warned that the current K-12 system, designed for the industrial age, limits agency and must be re-engineered to empower learners rather than constrain them [240-246]. Lehane positioned AI as a “leveling technology” that can expand agency, but only if education teaches children how to use it critically [247-249].
Thomas Davin (Director, Office of Innovation, UNICEF) warned of the danger of over-dependence: if AI always supplies the correct answer, children may lose curiosity and grit [425-437]. He suggested that models could deliberately provide occasional wrong answers to foster resilience, and called for systematic impact measurement [425-437]. He also shared a striking statistic: “7 out of 10 children in classrooms cannot explain a text they read at 10 years of age” [471-473]. During the discussion he noted the panel’s deliberate gender arrangement, with boys on one side and girls on the other, as a “beautiful” design choice [474-476].
Maria Bielikova (Director, Kempelen Institute for Intelligent Technologies) shifted focus to hidden commercial profiling, noting that while formal advertising on TikTok is low, children are exposed to influencer-driven content five times more often, a risk that can only be uncovered through child-involved studies [402-406]. She used the metaphor of not prohibiting children from the city but travelling with them to understand the environment, arguing that existing data-protection tools should be enforced [408-410]. She added that “Even though we have the Digital Services Act in Europe, the problem persists” [477-479].
Across the discussion, all speakers endorsed proactive, safety-by-design governance with age-appropriate safeguards, rejecting post-harm models [1-3][266-269][327-354][306-311][312-319][322-324][299-304][442-444][419-426]; they agreed that AI-literacy must begin with critical thinking and foundational skills before AI tools are introduced [41-54][106-118][212-216][155-158]; they stressed inclusion and cultural diversity, warning against a monoculture of AI models [395-398][250-254][306-311][370-374]; they supported a global, interoperable age-verification framework while allowing local adaptation [390-394][327-342][375-381]; and they highlighted the need for teachers to receive practical toolkits and training [209-218][306-311][299-304].
Points of disagreement emerged:
* The Baroness advocated a single, interoperable age-key via the Open Age Alliance [390-394], whereas Lehane noted privacy-law limitations (e.g., in Europe) and cultural norms that require country-specific adaptations [370-374].
* Hall promoted rapid, broad AI integration with toolkits and inclusive curricula [306-311], while Lehane cautioned that AI expands agency only if children are taught to use it critically, warning against systems that constrain learners [247-249][240-246].
* Bielikova identified covert commercial profiling as the highest technical risk [402-406], whereas Lehane focused on explicit harmful content (violence, sexual content, mental-health risks) and advocated age-gates and parental controls [340-349].
* Davin favoured experimental designs such as intentionally wrong answers to preserve grit [425-437], while Bielikova called for continuous observational research involving children to understand platform effects [403-410].
Key take-aways (consolidated):
1. Post-harm regulation is insufficient; safety must be built into AI design from the outset.
2. Robust age-assurance, parental controls and external reviews are essential safeguards.
3. AI-literacy should start with critical thinking and natural-intelligence foundations, with teachers equipped through toolkits and real-world curricula.
4. Personalised AI tutors can boost agency, but over-dependence risks eroding curiosity; intentional challenge-based design is needed.
5. A global age-verification standard (e.g., Open Age Alliance) is required, yet must be adaptable to cultural and regulatory contexts.
6. Continuous real-world impact research, especially on covert profiling, is needed.
7. Children must be directly involved in testing, redress mechanisms and policy design.
Concrete actions announced:
* UNICEF and partners released a free AI policy toolkit for classrooms [306-311].
* OpenAI committed to a multi-pronged safety package (age gates, default under-18 models, parental controls, advertising bans, external review) [340-349][357-360].
* The Baroness highlighted the Open Age Alliance’s work on interoperable age-keys [389-395].
* Tom Hall pledged to embed child-centred governance, data privacy and inclusion in LEGO’s AI education initiatives [306-324].
* Rahul showcased “Rescue AI” and offered to continue developing tools that help children understand contracts [91-94].
* The panel agreed to pursue further collaboration on teacher training, real-world evaluations and inclusion of Global-South perspectives.
Unresolved issues include: embedding AI-literacy effectively across diverse curricula; balancing a universal safety baseline with locally-tailored cultural rules; ensuring emerging AI companies comply with child-safety standards; designing redress and accountability mechanisms; and preserving cultural and linguistic diversity while delivering age-appropriate content.
Proposed compromises: adopt “no-regret” design principles that prioritise privacy, inclusion and child-respect while allowing iterative improvement; implement robust yet privacy-preserving age-verification adaptable locally; combine AI assistance with intentional gaps or challenges to maintain curiosity; use a hybrid governance model that sets global baseline safeguards (age gates, advertising bans) complemented by region-specific cultural guidelines; and involve children throughout design, testing and policy-making.
Thought-provoking remarks: the Baroness described AI as “engineered simulated intimacy at scale”, reframing the conversation from platform risk to relational-psychological risk [6-9]; Rahul’s question about distinguishing real from fake information highlighted everyday challenges for children [58-64]; his “Rescue AI” demo provided a tangible youth-led safety solution [91-94]; Hall championed “no-regret moves” that respect inclusion and data sovereignty [306-311]; Lehane linked AI to broader socioeconomic structures, arguing that AI must empower agency rather than reinforce existing labour contracts [247-249][240-246]; Bielikova’s city metaphor urged “traveling with children” rather than banning them, underscoring the need for contextual, child-involved research [408-410]; Davin’s “ancestor” comment framed the policy challenge as an ethical legacy [415-418]; and the moderator noted that Under-Secretary-General Amandeep Gill was stranded in traffic during the closing [480-482].
Closing: Rahul thanked the United Nations and the summit organisers, reiterating that policies must keep children at the forefront and that young innovators should be heard as they help design the future [455-462]. The moderator thanked all participants, noted the collective commitment to responsible AI advancement for children, mentioned the traffic delay affecting Under-Secretary-General Amandeep Gill, and formally concluded the session [464-467][480-482].
Actionable recommendations
1. Adopt safety-by-design with robust age-assurance and parental-control mechanisms.
2. Deploy teacher-training toolkits and real-world curricula to build AI-literacy.
3. Conduct child-involved real-world evaluations of AI impacts, especially covert profiling.
4. Implement a global interoperable age-key (e.g., Open Age Alliance) while allowing local cultural adaptation.
5. Ensure continuous monitoring, redress and accountability frameworks that involve children in policy design.
governance. How we manage AI on behalf of children will be the clearest test yet on whether we are governing this technology responsibly and for the public good. AI’s rapid adoption has been driven by extraordinary capabilities, but its continued place in society will depend on trust, and trust is built through responsible design. The post-harm regulatory model that we’ve seen with social media reacting after damage is not fit for purpose in the AI world. AI is fundamentally different. It is not a platform. It is increasingly a one-to-one adaptive interaction embedded in how children learn, communicate, create, and form their own sense of self. Inadvertently, AI is engineering simulated intimacy and human-like interaction at a scale that is not just a matter of what children learn, but how they learn.
It is hard to imagine. When a model says to a child, I care, I understand, that’s not conscience, that’s code. But for a child, it can feel very real. And children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy, especially when systems are so persuasive, emotionally responsive, and always available. That difference has implications not only for safety, but for mental health, identity formation, and long-term well-being. We have already seen what happens when the line blurs. Emotional dependency, manipulation, deep fake abuse, and in some cases, devastating loss. Children must not be the beta testers for our AI-enabled world. We need age-appropriate experiences by default, with guardrails around systems that simulate intimacy without accountability.
The question is not whether AI will continue to advance. Of course it will. The question is whether we shape it in a way that safeguards the dignity and the development of children. And accountability begins with protection. And I’m excited to join this distinguished panel to have this important conversation, even though it’s day five of the summit. Thank you very much. I’m going to have to move this back up. I’m sorry.
Thank you so much, Baroness Joanna Shields, for setting the stakes so clearly. Too often, discussions about children and technology speak about children rather than with them. This session is intentional in doing otherwise. Therefore, I am very pleased to introduce Rahul John Aju, widely recognized as the AI kid of India. He is our featured young AI innovator who has built and deployed real-world AI tools, founded his own AI startup, and advised public institutions on using AI. Rahul, I’d like to invite you on stage.
Thank you. Thank you, guys. Thank you so much for the lovely introduction. I know safety is a bit of a boring topic, but it’s a very crucial topic. And I think if I stand there, no one is going to see me, so I’m using a hand mic. So hopefully everyone can see me. Yes? Can I get more energy? Hi, guys. Is this all you guys have? Hi. Perfect. So let’s get started. Starting with, you know, when I was young, my father used to tell me… Okay, I’m still young. I’m still young. Younger, younger. That’s what I bet. He still tells me, Rahul, question everything. Be critical about everything. The slide changer is not working. Okay, without the slide changer also it will work.
Okay. Be critical about everything. Ask questions. So I did. Why does the chair have four legs? Why is the sky blue? And also, why do birds fly? Why can’t humans fly then? I bombarded him with a lot of questions. So he just took the phone and he’s like, Rahul, this is Google. Go search it. And so I did. But you know, while I was using Google, my parents also taught me one thing. How to figure out what is the correct information and the fake information. And that helped me a lot. But this age of AI, how do you expect me to do it? I don’t think even parents can figure out what is the right information and fake information.
We all agree upon that? Yes? So how do we do that? Because curiosity is there in every child. I think I have enough curiosity. But it only becomes powerful if it’s guided the right way. So how do we guide the right way? Because right now we are just teaching kids how to talk to machines. Before we teach them how to… Question. Now I am just saying random quotes now but let’s dive deeper and see why. I will give an example. Everyone remembers the Ghibli trend? Everyone did it? I did it too. Guilty. But it was very fun to be honest. But what happened there was we were all just taking pictures, uploading our pictures to the cloud.
But we don’t even know what’s happening with it. We all agree, right? But right now kids are also doing the same thing, taking their pictures, uploading it to the cloud. But we tell children don’t be on social media, don’t upload your pictures to social media, don’t share your pictures to strangers and all, right? But what about the AI world? We are missing, the parallel is missing. We need to translate real world safety into the digital world. Because right now even most, okay, I have a question. How many of you guys read the 25 page terms and conditions? I don’t, right? You don’t know what’s happening behind the scenes. I don’t know what’s happening behind the scenes.
like most of these pictures were taken and obviously made for the model to be better for all of us, right? Right now a lot of companies are making sure children are safe but we don’t know about it. Are they safe? There are a lot of unknown AI companies as well. What do we do then? That’s right. Also I created an AI software where you can upload a full terms and conditions or any contract and it will tell you the high risk clauses, low risk clauses and it will, thank you and it will literally tell them what to do, if you should use the product or not, right? So be careful. Anyway, so that tool was known as Rescue AI.
I’ve been working on it for the past three years for emergency, for law people, a lot of things. I don’t want to promote myself too much but I’m trying to do that. But what about when things like that are not there? What about if I didn’t do something like that? That is why AI awareness and safety is necessary. Obviously it is. That’s why you’re called here, Rahul. But how do you do that education? Right? How do you teach about AI? You know, recently I got a calculator in my school and I am so happy because I don’t have to do maths by multiplying, dividing manually. I can do it through calculators in my exam. By the way, I bunked my exam and came today.
Anyway, very happy for that. But you have to do all this calculation. But because I have a calculator, it’s way easier. But I only got access to it once I learned the basics of maths, right? I believe AI should be the same. We should learn how to write essays. We should learn how to sing, maybe. Then you should, I don’t know how to sing. Everyone will run away if I start singing. But you should know the basics and the foundations before you start using AI. I feel that’s when you teach about AI. That’s when you say, okay, AI can help you do the essay. AI can help you do the song. You should use the natural intelligence first.
Then start using artificial intelligence. I believe. It’s about using the combination of both, right? Yes. How many of you guys use natural intelligence? Everyone does, right? I’m mostly reliant on artificial intelligence. I’ve got to switch to it. But that’s what matters. But it’s not just about that. It’s also about how we teach, deliver topics. Starting with personalized content. You know, reading for me is kind of boring. I’m so sorry. But everyone learns differently. It might be through reading. It might be through listening. It might be through watching videos, which I prefer the most. That’s how I learn most of the things that are happening. From geopolitics to cricket, which I love. All of these things I’ve learned because I watch the video.
I’m a more visual person. It’s not one size fits all. But sadly, I feel education is one-size-fits-all. And I believe AI can generate content. Wait. It’s not believe. It’s already happening. You guys know about Notebook LM, right? It can generate videos. It can generate podcasts with one textbook content. That’s how I passed all my exams, to be honest. Even not just that, there is this tool StudyFetch where you can upload a chapter content and it will convert it into games. It’s not just about that. Everyone’s interest is different, right? Take a wild guess. What do you think my interests are? Wild guess. AI. AI, exactly. I am here to talk about AI guys. Cricket on the side but AI, right?
What if you connected E is equal to MC square and taught that through AI? You can do that too in this AI world. How do you do that? See, right now schools teach us what to think. I am repeating that. Schools teach us what to think but I believe schools should teach us how to think. How to think, how to think critically, how to face failures, how to communicate. These are basic things. Trust me, to stand here I had to face a lot of failures. But I learned how to do that because of my father. Trust me. I am giving you some credit. So, thank you. See, now he’s recording the audience, clapping for him.
Okay. So, that is what matters. And here’s one proof of demand, okay. I started something under my company, AIRM Technologies, ThinkCraft Academy. Yes, a bit of promoting, but ThinkCraft Academy, where I taught what is AI to building your own AI, LLM, fine tuning and all that, that in 30 days and more than 7 lakh people learned from that. And that course was completely free. And even there was another course going from what is AI to building your own AI as a startup founder, as a student. And that course was also completely free. But do you know how many people joined and learned from that? Again, 7 lakh people did, combined. that shows that people want to learn about AI.
It just has to be delivered the right way. The name of this course, I know everyone is searching right now. It’s on my YouTube channel. I’m a content creator too. Raul the Rockstar. Yes, you might be thinking, what does he not do, right? I’m joking. But a lot of things goes on. See, I am not saying a lot of big things. I believe we all should be open-minded. We should be open to learning more things. We should be curious because AI will not take your job. But someone using AI can. But at the same time, the most important thing in the world of AI is also to be as human as possible. My name is Rahul.
Thank you so much. Is it okay if I take a small video? Influencer. Thank you so much. I have to do this too, guys. So it’s very simple. Like I said, I have to do this, totally forced to. I am just going to say, “AI Impact Summit, how was the session?” and you guys can be fully honest. If you guys didn’t like it, just say, “No, hated it.” You guys can say that. I am totally joking. I am very grateful for this opportunity. You know, last November I was wanting to come here, I was like, register for this, and the fact that they called me to speak here, I am very grateful for this opportunity, and we have to thank them. Thank you. Shall we do it? “AI Impact Summit Delhi by UN.” Okay, not by, okay, what’s the worry, it’s a part, right? Okay, this is how many times I have to record a normal video. Thank you so much, UN, for calling me, and AI Impact Summit. The audience: how was the session? Was it boring? Yes? Was it boring? You guys are agreeing it’s boring? No? Thank you, guys. Thank you. I will not take too much time.
Thank you, Rahul, for that very thoughtful and energizing address. Your perspective underscores a key message for today. The question is not whether children will engage with AI, but whether adults, institutions, and systems are prepared to guide that engagement responsibly. We will now turn to our panel discussion. The discussion will be guided by two co-moderators with deep expertise at the intersection of innovation, policy, and child well-being. I am pleased to introduce our moderators, Thomas Davin, Director of the Office of Innovation at UNICEF, as well as Urvashi Aneja, Director of the Digital Futures Lab, and I invite them to guide the discussion.
Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co-moderators, and I’m delighted to invite four leaders in the industry who are going to have the high bar of keeping you all as entertained and on substance as Rahul just did. So please, a warm welcome to Baroness Joanna Shields. Please, Maria Bielikova, Director of the Kempelen Institute for Intelligent Technologies. I took the liberty of not reintroducing the Baroness because I think she was already known to you. Chris Lehane, welcome, Chief Global Affairs Officer for OpenAI. Tom Hall, welcome, Vice President and General Manager of LEGO Education. Over to you, Urvashi.
Thank you, and thank you to the UNICEF team, and thank you for that very energizing opening. I hope we can live up to that level of dynamism. Oh yes, the Baroness wants to know if we can invest in your company. Okay, great. So on that very cheerful note, thank you all for being here, and I’m delighted to be able to moderate this discussion at the India AI Impact Summit. As someone who studies the governance choices that shape how technologies land in society, I’m interested in a very simple test: whether AI expands children’s agency and learning, or whether it quietly narrows them through design incentives and design choices. So let’s begin with what we want AI to enable for children, at scale and in practice. Tom, perhaps I can start with you first. LEGO Education has recently pushed into computer science and AI learning in young classrooms. So what does AI literacy that supports well-being look like in real classrooms, and what should we do if we want AI to deepen creativity rather than replace it?
Well, first of all, thank you for having me, and very tough shoes to fill after Rahul’s spot there. I agree with so much of what he just had to say, and I’d love him to come and guide some more conversations. Being at this conference, I think we can all see that the rate of technological advancement is breathtaking. Whether we’re deeply involved in it or on the sidelines, there can be a feeling of incredible excitement; there can also be a feeling of, frankly, doom that this change is happening so fast. And I think that we kind of underestimate what the role of children is going to be in this journey. They might look at what’s happening in the world of AI and simply see it as a magic box that they can interrogate at the click of a button, ask simple questions, and get really quite deep answers back.
It might be a funny video they want to produce. It might be the answer to a history exam that they have to submit on Monday morning. And what we think AI literacy is, is ultimately handing children a screwdriver and saying: here is a fairly complex box, but let’s take it apart and let’s understand what’s under the hood, and let’s understand all the components. So for us, AI literacy is allowing and empowering children to really interrogate the fundamental basis of computer science and artificial intelligence. That’s teaching them how computers see the world as data, what sensing is, how to think about predictability, how to think about bias, and how to force conversations about accountability.
So we want to empower children to have deep thoughts about this. We also want to empower teachers. And I think right now, again, this pace is happening so fast. We asked some primary and middle school teachers in the United States what they thought about the pace of artificial intelligence in classrooms, and a very high number of them are hugely excited about what’s happening. They agree that artificial intelligence literacy needs to be a foundational skill in school. But while 80% of them see that, only 41% of them feel remotely ready to go and teach AI literacy in a classroom. So I think we have to provide teachers with the tools that are going to allow them to bring real-world learning to life.
Thanks, Tom, and I would love to come back to you on the how at a later stage in the panel, because we do a lot of work with policy makers, trying to provide capacity support, and we really struggle with how you actually embed AI literacy into policy and pedagogy. And I think that’s a really good point: we really have to think about the pedagogy quite carefully to make sure that we are imparting that learning. So I’d love to come back to you on that. Chris, if I can bring you in next. OpenAI has emphasized that AI systems will increasingly support learning, creativity, and problem-solving for young people.
From your perspective, where do you see the most promising opportunities for AI to positively shape children’s experiences, particularly in ways that strengthen agency, curiosity, or access to knowledge? And you’re not allowed to say what Rahul already said.
I was just going to say: you got a great explanation of that. First of all, thanks for having me. Awesome panel. Baroness, always good to be with you. My son would be very jealous that I’m sitting next to the Lego guy; that’s a pretty cool thing. So thank you. And I’ll also share that I may have to exit a little bit early, because I’m double-scheduled, so if so, my apologies in advance. I’ll try to answer your question at a macro level and then at a more specific level that I think picks up on the pedagogy question you were just asking. First of all, this technology has enormous capabilities to basically individualize teaching.
I mean, you’re at a place where every kid in the world could, in effect, have their own AI tutor that would be able to help them learn at the pace that they learn and in the ways that they learn. I think among the key insights in education is that kids just learn in very different ways, and this technology could be incredibly liberating in terms of answering that. You mentioned the teachers. We do work with the largest teachers union in the United States, 400,000 teachers, to actually train them to use the AI to, in fact, do some of that individualized teaching. But I think there’s maybe a level down from that, which I think you were picking up on when you were setting up this question.
And that’s the agency question. I know the U.S. public education system better than I know others around the world, so part of what I’ll say is really based on my U.S. experience. But the U.S. K-12 public education system (I see the sign, yes, you’re telling me to shut up) was designed for the industrial age. It was basically designed to take kids who were coming from rural environments and urban environments and teach them to be able to work in factories. That was the bells, the different classrooms that you would go to, the time that the day started, how long the school day lasted. But at its core, it was not just literacy in terms of teaching people to read, write, and do arithmetic.
It was actually creating an ethos about how you should work and participate in an industrial-age economy. I do think one of the big issues that we’re going to need to think about with this particular technology, which is going to really reward people like Rahul who take agency, is how we actually teach people agency. This technology is an incredibly leveling technology: it scales the ability of anyone to think, to learn, to create, to build, to produce. And the question is, do you actually encourage people to use it that way? Because if so, the way we think about the social contract relationship between capital and labor and how that is calibrated, this technology can have a huge impact on actually giving individuals the ability to control their own labor as owners of it.
Thanks, Chris. And I appreciate particularly the point around agency and how we can teach people agency. I also wonder, sitting here in India, in the global south, one of the things that we can see very clearly is that agency in some sense is not only a factor of individual capacity, but has so much to do with the broader socioeconomic and institutional context you are in. And so I wonder how we think about agency across different contexts. Back to you.
Thank you so much. Let’s get into the next segment, which is really about what happens when it fails, what happens when there’s harm being done. From a UNICEF lens, of course, when we think of education in the world today, 7 out of 10 children in classrooms cannot explain to us a text that they read at 10 years of age. 7 out of 10. So clearly the technology’s potential is immense in realizing huge bounds in learning outcomes. But what happens if we go the other way, and we suddenly have an over-dependency on that technology for children, where we frame children’s creativity in ways that actually constrain it or make it one-size-fits-all? So let’s go into that segment of risks and harms: what are the accountability frameworks, and how do we protect against this? For those of you who are following carefully, I would say that the organizers of the panel have done a beautiful work on gender. I don’t know if you noticed, but it’s boys on one side, girls on the other, women asking questions to the men, and the same questions to the women.
They’re by definition much smarter. That’s pretty clear. And that’s exactly where I was going. And the next questions to the women are going to be harder than before, as they should be. So let’s start on a curve. Yes. But to be fair, it continues to get harder and harder as the panel continues. Let’s start with Baroness Joanna. You’ve held UK government roles focused on Internet safety and harms, and you have helped build major child online safety coalitions internationally. From that experience, what is one key lesson from the UK Internet safety agenda that you believe is worthwhile surfacing today? And maybe one area where you would say: we’ve tried this, please don’t do this.
If I could convey one thing, after 15 years of looking at how we regulate technology to prevent harm, it is this: the post-harm paradigm that we’re operating in is not going to work in the AI future. We have had to adapt very quickly as governments as harms have emerged using AI. For instance, the deepfake crisis that we’ve experienced recently: I know six, seven jurisdictions, you know, countries, that have very quickly implemented laws that are specific and targeted to that particular harm. But what we need to do is step back and think about how we build and design safety from the ground up.
And my personal view is that this has to come through consultation with the companies. I see a very different type of reaction from the AI model developers. They’re much more receptive to the idea of safety by design and building in guardrails that protect children from the outset. And I’m actually an optimist at the moment, because I’m starting to see a lot of people doing a lot of this work right now, and companies like OpenAI, which just recently announced that they have an age gate, age assurance technology, to ensure that children under age, whatever the jurisdiction is, I think it’s 18, okay, are not able to engage with the model and to experience, you know, that.
And I think that’s really important because, you know, we’ve been battling this age question on the Internet for 15 years. And now the technology, whether it’s cryptography or biometrics, all kinds of technologies have emerged to where you can preserve privacy. So there are no excuses anymore for companies not to build in robust age assurance that’s privacy-preserving and that can ensure that the designed experience you get is appropriate for the age you are.
Thank you so much. So I love that point. We talk a lot about social media these days, right? Rightfully so. But indeed, it’s been a late awakening worldwide about the potential of that technology, but also about what happens to children in many ways, and we cannot make that same mistake with AI. It’s just so much deeper and broader, and we need to look at this a lot more systematically. Maria, if I can come to you. Your work spans user modeling, personalization, as far as I understand it, and trustworthy AI, and you’ve also spoken publicly about disinformation risks. In your view, where do AI systems create the highest-risk failure modes for children specifically, and what kind of technical evaluation should be required before deployment?
on TikTok for 10 days in Germany, actually. And then we found out what happened. Maybe I can tell it in a second, in my second entry; it was really shocking for us. Thank you so much.
So in essence, really having very clear, impact-focused research continuously, so that it can inform potential query mechanisms and potential redress mechanisms, as a way to safeguard against those potential risks.
And how they are exposed to commercial content. And this is the most critical.
Thank you so much.
Even though we have Digital Service Act in Europe.
Thank you. Let’s move to the third segment.
Thanks. Yeah, and I think that brings us really nicely to this question of what next, what do we do. I think we often agree on what needs to be done at the level of principles: safety, transparency, accountability. I think you’ve added another dimension to it when you talk about evaluations: that we need to be doing real-world evaluations in real-world deployment contexts of these systems, not just testing these systems in a lab setting, and doing so regularly. I think the hard part, at least when we talk about principles like safety, transparency, and accountability, is how we operationalize them across jurisdictions and also across business models, which I think also speaks to the point you were making around it being a feature and not a bug.
So this segment is really about the how: what becomes enforceable, what becomes measurable, and what changes incentives. Tom, if I can start with you again. As AI becomes more embedded in classrooms and in learning platforms, what governance or design choices are essential to ensure that these tools support children’s well-being at scale, particularly across diverse education systems and cultural contexts?
Thank you. Clearly this is a really exciting moment; the potential of this moment in time is enormous. So I think everyone should be ambitious, but at the same time be measured. Go in ambitious with your design plans for bringing AI into classrooms, and see it as an opportunity to maybe make exponential gains in many different markets where you may have been very challenged before. I think there are tremendous opportunities for many markets in the global south right now, so see the introduction of AI and AI literacy as something of a reset. But don’t jump in blindfolded. This is a once-in-a-lifetime opportunity to establish essential foundational skills for young people, and it’s going to need really careful thought. These governance and design choices have got to be built on no-regret moves. So I would say: put data privacy, data sovereignty, inclusion, and respect for the student at the top of any plan. When you teach about, I don’t know, systemic bias in large language models in classrooms, make sure that kids of all types of diversity and inclusion are represented and can see themselves reflected in the products that they’re experiencing.
Children have a lot to say in this space, so involve them. We’ve published a free AI policy toolkit for classrooms. Have children think about what kinds of things they think need to be considered here. It’s going to be a really meaningful conversation between teacher and student. And talking of teachers, I think give them exciting but also relevant curriculum. We have computer science qualifications in the UK, and the entry levels for those are critically low, and very low for girls. We introduced them 10 years ago, we gave very insufficient training for children, and the curriculum is frankly very dry. I think we have to really think about real-world curriculum that is going to excite students, and let them see themselves tackling real-world problems in the types of learning experiences that we’re putting out there.
I’m speaking on behalf of the LEGO Group. So, you know, children are our role models. I think when you’re designing AI policies for children, this has to be child-centered and child-led. And so just involve them in the plans as you roll them out. And I hope that will lead to some really exciting changes.
Thanks, Tom. Chris, earlier this year, OpenAI’s policy engagement has included calls for common-sense youth safety approaches and more parental controls. So what, in your view, should be the baseline governance package for child-facing AI, and what should be globally interoperable versus locally set?
Sure. Thank you for the question. Let me give two points, and then I’ll answer that question specifically. First of all, and I think this is a really smart room, so I’m sure we’re all thinking about it this way, but it is really important to understand and recognize that this is not social media, and we should not make the classic mistake of fighting the last war with the next war. There are certainly important lessons to take from it, but understand that this is going to be a technology that is not just on your device, but is going to be around you in all sorts of different ways, physical world and non-physical world.
So understanding that component. Secondly, there are interesting lessons from what we’ve seen on the catastrophic-harm side. You’ve seen the emergence of AI safety institutes around the world, where the leading frontier labs, for the most part, work with those safety institutes to create safety standards: the UK, the US, Europe, Japan, Australia, and you’ve seen an early version of that here in India. And I do wonder whether there’s some version of that you could do specifically for kids’ safety. The third point really goes to your question, which is: yes, we have put forth, and we’re really the only AI company that has done this thus far, and we do hope others will join us, basically a multi-pronged approach.
The first, and the Baroness mentioned this, is we do do age assurance. We try to use signals to identify whether you’re under 18 or not. Second, if we identify you as under 18, or if we are unable to identify your age, we default you to an under-18 model. So even if we’re not sure of your age, we default you to an under-18 model, which has all sorts of restrictions around violence, sexual conversations, and mental-health-type issues. Third, we build in a ton of parental controls. Parents can control whether it has memory about your child or not. Parents can get real-time feedback. Parents can control how long your child is spending on it. You can get warnings and alerts if your child is asking about things that would be in the mental health space.
Fourth, we prohibit any targeted advertising to kids using the technology. I think that’s a clear lesson from the social media age. Fifth, we have called for an outside review process. In the U.S., that would be done by, say, a state attorney general, someone who is part of government, to actually review that what you say you are doing, you in fact are doing. And finally, we prohibit bots targeting specific kids. There may come a time and place when we have really good guardrails around this, and they can serve really helpful, positive, productive purposes. But until we have those guardrails, we think we need to be really, really mindful of that.
So it is a complete package. We are pushing this in California and a number of states. We want to take it around the world. We’re working with some of the leading children’s advocacy organizations. And anyone here who would want to work with us on it, we really welcome that. And we don’t pretend to have all the answers. Like we’re super humble about this. We do think this is what we’ve seen from our data. This makes a lot of sense. It goes farther than what others have done. But we also know that this is going to be a constant learning process, and this is a beginning, not even the middle, and certainly not the end.
Sorry, just to ask a follow-up question on the bit around how you make this locally relevant. So you have this package, and you’re rolling it out in the U.S. How do you then cater it to different contexts?
You know, it’s a great question. Like there are some parts of the world, you know, Europe is an example of this, where there are some privacy limitations that actually impact your ability to do the age assurance at the level that you would like to be able to do it at. So we’re in the process of some of these jurisdictions of trying to work through some of those types of issues. I think there’s other dynamics that potentially come into play, which may be what you’re asking about, you know, cultural context, societal context. And I think those are things that you do have to work through with individual countries because individual countries are going to have their own norms on those.
And I think we’ll also see different levels of vulnerability or different types of vulnerabilities in those different contexts.
Fair enough. If I can bring you in. How should global norms for children’s safety handle cultural and regulatory diversity without creating, in some sense, loopholes that allow companies then to opt for the weakest protection?
So I want to take that question in two different directions. First of all, in terms of a global regulatory framework, there are certain standards that are required across every jurisdiction. I mean, every country has an age at which children can participate in the digital world. And unfortunately, it’s a blunt instrument in many cases; it applies across the board at a certain age. We’ve been seeing a lot of social media bans recently, and I think that has come out of exasperation on the part of governments: they’ve given up trying to regulate this technology, and they’ve decided they’re going to just use that blunt instrument as a guide. And unfortunately, there are then benefits the children can’t participate in.
But the reality is that there’s a little bit of movement here. As age assurance technology grows and becomes much more capable, we can custom-design experiences for young people that accommodate their level of maturity and capability, and ensure that we can meet these requirements in a much more sophisticated and better way. It’s about time we solved for age online once and for all, and I believe we’re getting close to that. There’s an organization called the Open Age Alliance, a very important organization that’s looking to harmonize standards across all age assurance technology. So whatever age assurance you think is reliable on your platform, Open Age will enable you to generate an age key.
And then that age key travels with the child everywhere they go online. So we’ve got an absolutely verifiable way for companies to deliver an age-appropriate experience. And you asked me about something else that I think is really important in this context: culture. If we have a world where we are accepting models from just the global north, I really believe we will lose so much of our cultural diversity, our uniqueness as people, wherever we come from, whatever our background is. We have to be very mindful that we don’t develop a monoculture based on a handful of models that everybody uses around the world, where we lose that richness of who we are, of what makes us human.
I think that wasn’t really the aim of the question, but I couldn’t let it go without bringing it to bear, because this is an absolutely critical question we need to solve as a society.
Thank you. I couldn’t agree more on both those points on how we have to get the age, we have to solve for age verification and then the risk of kind of flattening culture and what that means for children and what that means for how they develop and grow. Maria, last but not least, you’ve helped elevate trustworthy AI as a public agenda in Slovakia and in Europe through initiatives spotlighting responsible practices. So if a regulator asks you for key measures or measurable indicators that an AI system is acting in a child’s best interest, what would those be?
Actually, I already mentioned it somehow: AI at this moment is so complex, I mean the neural networks that we have, that we cannot actually measure something that we don’t know. We can observe it, and this is quite important: to do a lot of studies, as we do, and not just take analytics from the companies that provide them, even though they seem the best. Because even though they tell us that children are not profiled, they are, because we see it, and sometimes it’s out of the companies’ control. We should really make such studies, as I mentioned, because, for example, one of the results of the study I mentioned before is that children see fewer formal advertisements on TikTok.
This is fine, but actually they are exposed five times more to profiling, to the topics with influencers and so on. These are not formal advertisements. So we definitely should do a lot of such studies. And the children should be there, because if we prohibit everything for them until some age, then they will not be able to explore it. It is the same as if we prohibited children from going to the city. But we should know what is going on, and we should travel with them through this environment. And this is probably the most important thing: doing such studies to really understand what is going on on the platforms where they are, because they will be there.
I think that’s such a powerful analogy, the city one. And while you were speaking, what struck me is that we have some tools already; we don’t have to approach this afresh. We actually have tools around data protection and privacy, and if we enforced them, some of that profiling you’re talking about need not happen. We have tools that allow us to get data from the platforms to actually understand what is happening on these systems. So again, we have things in our regulatory toolbox that we can exercise. And then, of course, in addition to that, there is this point around contextual evaluations that involve children, so that we can understand what these systems are actually doing. Thomas, maybe I can hand it back to you. Or did you want to add something?
Thomas is my formal name, so I thought you were talking to me.
Oh, right. If you would like to add something, then you can hand it on to the other Thomas. Would you like to add something, other Thomas?
A lady said something to me this morning that I thought was very wise: you’ve got to think about what kind of ancestor you want to be. And I guess we’re at this really interesting moment. We’ve had social media, we’ve had sugar, we’ve had tobacco. Surely now this is our chance to make some really sharp decisions and pay it forward for the next generation. So credit to the lady who said that to me this morning.
Thank you so much, Tom. It’s going to be very hard to close, so maybe I’ll just try to share the points that I took from the panel and hope they resonate. I come away with a sense of, it’s going to sound terribly UN, but measured optimism. One, because the potential is tremendous. We are all aware of that. The potential, at least from a UNICEF lens, of really changing outcomes for children in ways we have never been able to do before is huge, and I think that’s something we can all be proud of. And the risks are equally tremendously important, and will potentially be there for decades if we don’t craft and design this right.
To my mind, there are three directions I heard in which we are going the right way. One is that safety by design has to be a must. That’s about age appropriateness, data privacy, child rights at the heart, appropriate content for the right age, and systematic impact measurement. I was struck, Tom, in your session this morning, when you were talking about how, if we have a model that actually gives children the right answer, or an answer, all the time, they might actually lose their sense of curiosity. I never thought about it like that. What a huge loss it would be for humanity if we suddenly had children who are no longer curious, because they can just ask whatever question.
Can we design a model that actually gives the wrong answer on purpose, so that the child actually struggles, because we know that grit is going to be one of the huge skills of tomorrow? So those things are going to be massively important. Redress mechanisms: we don’t talk about these enough, and how we enforce those redress mechanisms when things go wrong matters too. The second layer in my mind would be inclusion by default, coming back to the Baroness’s point about the risk of a monoculture. We know that some of that is already playing out, and hopefully having a summit in India is one of the turning points where we can see this turning around a little bit, where we really have so many more countries beyond the global north shaping what those solutions are, having representation of regions, of languages, of different dialects, but also of children with disabilities, who, as we know, are quite often left out.
And maybe one thing we haven't really talked about is having solutions that work for the unconnected, solutions that work offline. We are at risk of focusing only on urban, connected populations, and that would be terrible for those already left by the wayside. And last but not least is children at the heart. Children at the heart because that is who we want to create this world for, the ancestors we want to be for them, but also because Rahul demonstrated it for us: they are the most effective users of these tools and the ones able to tell us "this works for me, this doesn't work for me". They should not just be consulted; they should be part of the governance of those mechanisms.
That starts with AI literacy in schools, and it starts with helping parents know where their children can get that literacy. Hopefully, if we get all of this right, we have a chance.
Thank you all for joining us. I just want to give the floor back to the MC.
Thank you so much to the panelists as well as the moderators and the audience, also on behalf of Under-Secretary-General Amandeep Gill, United Nations Special Envoy for Digital and Emerging Technologies, who regrets missing the session as he is stranded with the Secretary-General's program. Even the United Nations motorcade cannot make it through Delhi traffic. Could we please welcome Rahul back up to the stage for a very brief reflection on the discussion?
I'll make sure it's brief. First of all, guys, can we have a big clap for them? That was not enough. If you don't realize, these are the main people designing the future for us kids. And for the fact that I got an opportunity to speak here, thank you again, UN, and thank you, AI Summit. And what they said is very true. You know why? Because for us kids specifically, when these AI tools are being built, keeping kids in mind should be the first thought in the policies that are designed, not an afterthought, right? And the fact that that's happening is good, because from LEGO to OpenAI to all these big places, to ma'am, everyone here, they're designing the next world.
And I just want to say a big, big, big thank you, and I want to add one last thing. Thank you so much for mentioning me along the way, but more than that, for listening to us kids; for not just deciding what we need, but for keeping our opinions in mind while building all this. So a big thank you from all the children out there. Thank you.
Excellencies and distinguished guests, thank you for your participation and engagement. We appreciate the insights shared today and look forward to continued discussion on the responsible advancement of AI. The session is now concluded. Thank you, audience. May I request the session officers to please come to the stage, and may I request the audience to exit from the door behind us. Thank you.
"Baroness Joanna Shields warned that governing AI for children will be 'the clearest test yet on whether we are governing this technology responsibly and for the public good'."
The knowledge-base entry titled “Safeguarding Children with Responsible AI: Baroness Joanna Shields” records her framing of AI governance for children as a key test of responsible and public-good regulation, confirming this statement [S1].
“Simulated intimacy can feel real to a child even though it is merely code.”
S3 explicitly notes that a model’s messages are “code” but “for a child, it can feel very real”, directly supporting the claim.
“Children cannot reliably distinguish authentic human connection from artificial intimacy.”
S3 also states that “children are not miniature adults. They cannot reliably distinguish between authentic human connection and artificial intimacy,” confirming the claim.
“The blurring threatens safety, mental health, identity formation and long‑term well‑being, with observed harms such as emotional dependency, manipulation, deep‑fake abuse and devastating loss.”
S30 documents deep‑fake abuse as a concrete harm to vulnerable groups, and S35 discusses hidden psychological risks and “AI psychosis”, providing additional context for emotional dependency and mental‑health impacts.
“Children should first master natural intelligence—basic maths, reading and critical thinking—before relying on AI tools.”
S113 highlights that young people are exposed to sophisticated AI‑generated content and need proper education and verification tools to distinguish real from artificial, reinforcing the argument for foundational learning before AI reliance.
There is strong consensus that AI for children must be governed proactively, with safety-by-design, age verification, inclusion, teacher capacity building, and continuous monitoring. Panelists across roles and regions agreed on the need for child-centered design, preservation of cultural diversity, and mechanisms that protect children's agency and curiosity.
High consensus – the convergence of viewpoints across senior policymakers, industry leaders, educators, and a youth innovator indicates a shared commitment to child‑focused, inclusive, and measurable AI governance, which should accelerate the development of global standards and practical toolkits.
The panel displayed broad consensus on the need to protect children and embed AI literacy, but notable disagreements emerged over how to implement age verification, how deeply to integrate AI in education, which safety risks to prioritise, and how to measure impact. These divergences reflect differing priorities (global standards versus local adaptation, rapid deployment versus cautious pedagogy) and differing risk perceptions (profiling versus content harms).
Moderate to high – while all participants share the overarching goal of child safety and empowerment, the contrasting approaches to governance, risk focus, and evaluation suggest that reaching coordinated policy action will require substantial negotiation and compromise.
The discussion was driven forward by a handful of pivotal remarks that reframed AI from a mere platform to a relational, agency‑shaping technology. Baroness Shields’ opening about simulated intimacy set a high‑stakes context, which Rahul’s personal experience and his Rescue AI tool grounded in everyday challenges. Tom Hall’s warnings about over‑dependence and inclusion, Chris Lehane’s linkage of AI to broader labor dynamics, and Maria Bielikova’s city metaphor together expanded the conversation from child‑centric safety to systemic design, cultural diversity, and empirical oversight. These comments sparked new sub‑topics—age‑verification standards, global‑local regulatory balance, and the moral imperative of shaping a responsible legacy—thereby deepening the analysis and steering the panel toward concrete, actionable pathways for child‑focused AI governance.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.