Responsible AI for Children: Safe, Playful and Empowering Learning

20 Feb 2026 16:00h - 17:00h

Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel convened to examine how AI literacy can be built into children’s education and why it is essential for their future participation in an AI-driven world [5][152]. Tom Hall argued that AI should be taught as a technology rather than a “magic box,” emphasizing that children need to understand underlying concepts such as probability, data sensing and algorithmic bias instead of merely consuming AI outputs [16-21][23-26]. He warned that many young learners treat generative AI as a shortcut, which risks passive consumption and undermines critical thinking, so curricula must move beyond excitement to mastery of fundamentals [15][18]. Atish Joshua Gonsalves described LEGO Education’s new computer-science and AI product, which is built on four values (child agency, safety, transparency, and hands-on collaborative learning) and is designed to run AI features locally to protect privacy [32-34][306-308]. The demo showed students training a pre-trained image classifier to control a robot’s movements, teaching them that AI predictions are probabilistic, improve with more data, and can contain bias [46-50][48-51]. Richa Menke highlighted that AI can enrich play by inspiring imagination, but cautioned that over-reliance on efficiency or personalization may erode children’s creative struggle and long-term agency [97-104][130-138][146-151]. She noted that generative AI’s “hallucinations” might be playful features in games, yet the technology is not yet ready for childhood without careful deliberation about its impact [124-127][115-116]. Saadhna Panday of UNICEF India stressed that AI’s benefits are unevenly distributed, citing the contrast between urban Delhi and rural Jharkhand, and called for evidence-based, equitable solutions that keep teachers and children at the centre [162-165][170-176].
She also pointed out the need for multilingual, low-resource tools and for safeguarding children’s privacy, trust and participation in AI-enhanced learning environments [205-212][306-308]. The panel reached consensus that empowering teachers with clear policies, scaffolding resources and a “5E” instructional model is crucial for scaling AI literacy responsibly [214-218][369-372]. Participants agreed that hands-on, collaborative activities, such as LEGO’s design challenges and the First LEGO League, provide the “magic” of creation while reinforcing technical concepts [263-276][376-378]. Finally, the discussion concluded that AI literacy must be treated as a core modern literacy, integrated with safety, equity and agency, so that children become designers of future AI rather than merely its users [26-27][386-395].


Keypoints


Major discussion points


AI literacy must go beyond “magic-box” usage and teach foundational concepts.


Participants stressed that children should understand how AI works, not just treat it as a mysterious tool. Tom Hall highlighted the need to move from “magic” to a “screwdriver” that lets kids see under the hood [19-24]; Atish echoed this by defining AI literacy as “understanding today’s technology… and the fundamentals” [31-34]; early remarks from Speaker 1 framed AI as an unavoidable, essential skill [3][5].


Hands-on, collaborative play is the preferred vehicle for teaching AI.


LEGO representatives described a learning model that combines physical building with coding to give children agency while keeping safety front and center. Atish detailed the classroom demo, the “AI Dancer,” and the emphasis on active creation [31-34][36-41][46-51]; Richa outlined LEGO’s four guiding values: child agency, safety, hands-on immersion, and foundational knowledge [94-112][306-309]; Tom Hall linked tactile learning to stronger brain engagement and deeper mastery [263-280].


Equity and contextual relevance are critical for scaling AI education.


Saadhna highlighted the stark contrast between urban Delhi and rural Jharkhand, urging solutions that work in multilingual, low-resource settings [152-164][210-214][251-259][381-385]; Atish added that “frugal AI” and age-appropriate, screen-free approaches can bridge gaps in underserved environments [238-250].


Safety, privacy, and ethical safeguards are non-negotiable.


Across the panel, participants agreed that any AI interaction with children must meet high safety standards. Atish listed LEGO’s safety rules (no anthropomorphising, local data processing) [31-34]; Richa reiterated that privacy and safety are foundational and that current LEGO products do not embed AI for this reason [306-309]; Tom Hall warned against “shotgun” adoption without rigorous safety research [344-363]; Saadhna asked how to balance joy with risk [300-304].


Teachers and parents need concrete resources and capacity-building.


The discussion repeatedly called for tools, training, and support structures for educators and families. Atish noted the need to empower teachers before dropping new standards [81-88]; Tom Hall suggested a facilitated AI-policy conversation template for classrooms [214-236]; audience questions from Nikhil and Asha asked for parent-focused curricula and affordable teacher training [323-328][332-340].


Overall purpose / goal


The panel aimed to define a responsible, inclusive roadmap for AI literacy in K-12 education, showcasing how hands-on, play-based learning can demystify AI while simultaneously addressing safety, equity, and the need for teacher and parent support, so that all children can become informed creators rather than passive consumers of AI.


Overall tone


The conversation began with an upbeat, visionary tone, celebrating children’s curiosity and the potential of AI-enhanced play. As the dialogue progressed, the tone shifted to a more cautious, reflective stance, emphasizing ethical safeguards, equity challenges, and the urgency of building teacher capacity. Throughout, the tone remained collaborative and solution-oriented, moving from optimism to a balanced mix of hope and responsibility.


Speakers

Saadhna Panday


Area of expertise: AI literacy, education policy, child protection


Role / Title: Chief of Education, UNICEF India; Panel moderator


Asha Nanavati


Area of expertise: Education leadership, AI adoption in schools


Role / Title: Representative, Alliance Educational Foundation (runs a charitable K-12 school in Kerala) [S4]


Tom Hall


Area of expertise: AI literacy, hands-on learning, educational technology


Role / Title: Vice President and General Manager, LEGO Education [S5]


Nikhil Bawa


Area of expertise: AI and education commentary, parent resources


Role / Title: Writer/Researcher on AI and education (independent) [S7]


Richa Menke


Area of expertise: Interactive play, AI-enabled learning products, safety & privacy


Role / Title: Head of Interactive Play, LEGO Group [S10]


Speaker 4


Area of expertise: (not specified)


Role / Title: (not specified; appears as an audience participant or brief interjector) [S11]


Atish Joshua Gonsalves


Area of expertise: AI-driven educational product design, hands-on classroom implementation


Role / Title: Product lead / presenter for LEGO Education AI & Data curriculum (inferred from presentation) [S14]


Speaker 1


Area of expertise: (not specified; appears to be a student voice)


Role / Title: Student participant / youth representative in the discussion [S16]


Additional speakers:


Steve – referenced by Richa Menke (“Thanks, Steve.”); role/title not provided in the transcript.


Full session report
Comprehensive analysis and detailed insights

Opening framing – Speaker 1 opens the session by likening artificial intelligence (AI) to taxes, arguing that AI is now unavoidable and that children must be equipped to engage with it or risk being left behind [2-6][3][5]. He stresses the need for AI literacy and for young people to have a voice in AI policy because “AI literacy is really important” [2-6][1-6].


Tom Hall – why AI must be taught as technology – Hall expands the opening premise, warning that treating AI as a “magic-box” creates a passive-consumer mindset. He uses a screwdriver metaphor to argue that children should be able to open the box and understand foundational concepts such as probability, data sensing, algorithmic bias and the probabilistic nature of AI predictions [16-27][15][18-21][19-24][25-27]. Hall calls for AI literacy to become a modern core literacy alongside maths and reading, not an elective [26-27]. He also cites the 2014 UK CS GCSE rollout failure, attributing it to a lack of trained teachers and an outdated curriculum [300-312].


Atish Gonsalves – LEGO Education product overview – Atish introduces LEGO Education’s new computer-science and AI offering, grounding it in four guiding values: child agency, safety & well-being, transparency, and hands-on collaborative learning [32-34][94-112]. He outlines concrete safety rules – no anthropomorphising of AI, on-device processing so data never leaves the device, clear data provenance for all models, and universal design to support neuro-diverse learners [31-34][46-51]. The live demo of the “AI Dancer” shows pupils training a pre-trained image classifier with their own pose data, observing how confidence scores shift as they move [46-51]; the demo illustrates that AI is probabilistic, improves with more training data, and can be biased when the training set is not representative [48-50]. Atish also references the 5E instructional model (engage, explore, explain, elaborate, evaluate) used in the LEGO Education Teacher Portal [32-38][369-372] and mentions the First LEGO League as an example of open-ended, collaborative AI projects [340-350].
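The demo’s central point, that a classifier outputs a probability per pose rather than a hard yes/no, and that an event fires only when confidence is high enough, can be sketched in a few lines of Python. Everything here is illustrative (the pose names, the 0.6 threshold, the random stand-in for the real model); it is not LEGO Education’s actual API.

```python
import random

POSES = ["left_hand_up", "right_hand_up", "both_hands_up"]

def classify(frame):
    """Stand-in for a pre-trained pose classifier: returns one
    probability per pose, summing to 1 (random numbers here, since
    the real model is out of scope for this sketch)."""
    weights = [random.random() for _ in POSES]
    total = sum(weights)
    return {pose: w / total for pose, w in zip(POSES, weights)}

def trigger_event(scores, threshold=0.6):
    """Fire the event for the most likely pose only if the model is
    confident enough; otherwise stay idle. This is the contrast with
    0/1 logic the demo highlights: predictions come with an 80, 70,
    or 90 percent chance, not a certainty."""
    pose, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return f"robot: dance for {pose}"
    return "robot: idle (low confidence)"
```

A lesson built on this sketch could loop over camera frames, calling `trigger_event(classify(frame))`, and let pupils experiment with the threshold to feel the trade-off between false triggers and missed poses.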


Lesson “Strike a Pose” – Speaker 1 describes the “Strike a Pose” activity, which combines LEGO bricks, the Coding Canvas, and a custom classifier. Students build a robot, collect pose data, train a classifier, and present their work, thereby moving from users to designers of AI [45-58][40-42][55-73]. The lesson follows the 5E structure and reinforces the three core lessons highlighted in the demo [48-50].
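The training loop the lesson describes (capture labelled pose examples, train a classifier, watch predictions improve as more data comes in) can be illustrated with a toy nearest-centroid model. This is a hypothetical sketch, not the Coding Canvas API: a real system would work on camera pose keypoints, whereas here a “pose” is just two numbers for left- and right-hand height.

```python
import math

def train(examples):
    """examples: list of ((x, y), label) pairs. Averages the examples
    per label into a centroid; more captured poses give steadier
    centroids, echoing the unit's point that data quantity and
    quality improve the model."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the input pose."""
    return min(model, key=lambda label: math.dist(model[label], features))

# Two pupils capture a few poses each (hand heights in [0, 1]):
model = train([((0.9, 0.9), "both_hands_up"),
               ((0.1, 0.1), "hands_down"),
               ((0.8, 0.95), "both_hands_up")])
```

With this toy model, `predict(model, (0.85, 0.9))` returns `"both_hands_up"`; adding more labelled examples nudges the centroids toward the true pose averages, which is the behaviour the follow-up lessons ask students to investigate.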


Atish – teacher support & frugal AI – Atish emphasizes the LEGO Education Teacher Portal, which provides curriculum, lesson plans and scaffolding aligned with the 5E model [32-38][369-372]. He promotes “frugal AI” approaches that teach computational concepts such as loops and probability using bricks alone, without screens or heavy hardware [238-250][243-247].


Richa Menke – SmartPlay & SmartBricks – Richa presents the SmartPlay platform, a screen-free, sensor-driven play system that responds with sounds and motions but currently does not employ generative AI for safety reasons [100-108]. She outlines three tensions that must be balanced when introducing AI to children: efficiency vs. imagination, personalization vs. identity, and assistance vs. agency [120-138]. Richa also reiterates the non-negotiable safety and privacy safeguards, echoing Atish’s design rules [306-314].


Saadhna Panday (UNICEF India) – equity & evidence – Saadhna highlights the stark contrast between AI-enabled education in urban Delhi and the near-absence of such tools for a tribal girl in rural Jharkhand, warning that AI could exacerbate existing inequalities if deployed irresponsibly [152-176][170-176]. She cites AI-driven early detection of pancreatic cancer as a motivating example of AI’s societal impact [160-168]. Saadhna calls for multilingual, low-cost, evidence-based solutions that keep teachers and children at the centre of design [205-212][251-259].


Panel Q&A


* Tom Hall reiterates that LEGO provides a template for classroom AI-policy discussions, encouraging a “pause-and-discuss” approach where teachers and children jointly shape AI policies before tools are introduced [214-236][229-236].


* Atish stresses the importance of frugal, age-appropriate tools and the teacher portal for scaling AI literacy [238-250].


* Richa reinforces the safety-first stance, noting that none of LEGO’s current products use generative AI until safety standards are met [306-314].


* Nikhil Bawa asks for parent-focused resources and guidance on supporting unstructured play [380-388].


* Asha Nanavati queries affordable teacher-training models for charitable schools in India [390-398].


Closing remarks – Saadhna thanks the participants and restates that the responsibility to protect children while delivering equitable, evidence-based AI education rests on all stakeholders. She calls for rapid yet safe empowerment of teachers, learners and parents, and reaffirms the shared commitment to treat AI literacy as a core modern literacy embedded in hands-on, play-based pedagogy, upheld by the highest safety and privacy standards, and accessible to every child regardless of geography or resources [386-395].


Across the session the panel reaches consensus that AI literacy is essential for future participation, must focus on fundamentals rather than black-box perception, benefits from tactile collaborative learning, requires non-negotiable safety, privacy and fairness, depends on teacher empowerment and resource provision, and must be delivered through equitable, localized and frugal approaches to avoid widening the AI divide [1-6][16-23][24-27][31-34][381-385][152-164][238-250].


Session transcript
Complete transcript of the session
Speaker 1

curious how it works and I think that a lot of kids are. I would love to learn how it can be used in everyday life and how it can be used as an accurate source of information. AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind. I definitely want to be a part of solving big problems. We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think. Bye.

Tom Hall

He breaks me every time. These were children that we brought into a school in California in December. No actors in there, just a lot of children with opinions, and the little boy at the end just had a lot to say. He is very wise. But those were the views of just some smart, inspiring young people. They’re not just eager to use AI; I think you can see they’re especially eager to understand and to build things with it. And just as you saw, they have some really clear ideas about how it should and shouldn’t be used in today’s classrooms. But of course, excitement and confidence are not the same as mastery or comprehension.

We do see an unfortunate trend where children do not understand the fundamentals of the systems they’re interacting with. I think you can particularly see that in younger children, who often see generative AI systems as a kind of magic box that they can… into, where you type in a text or a question and out come images and videos and entertaining things, and maybe even the answer to a history essay question. I think we need to be really clear that AI is not magic. It’s not a magic toolbox; it’s a technology system. And foundational AI literacy isn’t about teaching children how to use this magic box. Far more importantly, it’s: how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover? So while supporting children to use AI tools safely, ethically and effectively today is important, I think it’s far more about equipping them with the knowledge, the tools and the confidence to build what is yet to come. So our definition of AI literacy, when we talk about it: it’s about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.

So I think that we have underestimated the role we have to play in preparing children today. We don’t want them to be passive consumers of AI. Instead, we really believe that we should be arming them with the tools and the literacies that are required to lead, to design, to create. And our goal is not about robot-proofing our children against what’s coming at them, but making sure that they are ready to build a better future and have the tools in their hands. So let’s talk about AI literacy as understanding the foundations of AI: the foundational computer science and AI concepts.

Understanding probability, how computers sense the world as data points through sensors, algorithmic bias, and all of the nuances of that. We don’t want that to be an elective or selective choice for just the few. We believe that these concepts have to be elevated to the status of modern literacy, alongside maths and reading, problem solving, creativity and collaboration. And I think it’s best if we show you how we plan to do this in classrooms. So I’m going to hand over to Atish, and we’re going to run a live demo, which is always fun at a conference event.

Atish Joshua Gonsalves

Great, thanks, Tom. And I’m also delighted to introduce AI Dancer, who’s on the table here and who hopefully will do some dancing soon as well. So, yeah, very excited to share a bit more about how we’ve translated some of the principles Tom was talking about into the product. I’m excited to shout about our new computer science and AI product, which is fresh off the press: we announced it in January, and it will hit schools in April. But we need to do all of this very responsibly. We saw the kid earlier in the video say that AI should be safe, fair, transparent, so this is a very wise kid. We really agree, and at LEGO Education we’ve established clear guidelines for how this should work, so let me step you through some of them. AI should be safe: we do not generate any text or any media, and we do not anthropomorphize (I got that right this time), which is just a fancy way of saying we do not make kids think that AI is human; we do not want them forming any unhealthy emotional bonds. We ensure that all our digital products are rooted in the principles of universal design, and we design for kids who have neurodiversity and kids who have different learning needs, so it’s really important that our products are designed in a very fair way. Transparent: all the models that we use should have very clear data provenance, so we should understand where the data that trained those models has come from, and whether the models have been trained on different geographies, on different kinds of kids, on different kinds of adults. Ensuring that these models have clear data provenance is super critical for us. And then finally, privacy: I just want to stress that in all our products, AI features run locally on the devices. Nothing ever leaves the device, nothing ever goes to us at the LEGO Group, nothing goes to third parties, and no login is collected. In terms of
the training, whether the kids are building their own AI models or using pre-existing models, nothing ever leaves. So safety and student well-being is a red line, a non-negotiable for us. Everything we know from decades of education research shows us that kids learn best when they are building, when they’re using their hands and really creating, and we’ve seen this very much at LEGO Education through years of research. Now more than ever, children need to learn, and need to learn together. So much of computer science and AI today is taught with kids sitting in front of a screen, with headphones on, by themselves, and we don’t see this as a vision for learning. For us, kids should be building together, coding together, experimenting together, tinkering together, and sharing together. That is really our vision of how kids should be learning computer science and AI, so that when they tackle these new technologies they also have the cross-cutting skills to deal with them in the real world. So, bringing this all together: at LEGO Education we have four values that govern our approach to AI literacy. We prioritize child agency and engagement to ensure students are active participants in their own learning journeys. We empower students with the foundations of AI that Tom was talking about, which remain relevant as the technology evolves.

We uphold child safety and well-being as non-negotiable for every AI interaction in the class, and we foster hands-on, immersive and collaborative experiences that inspire creativity and shared learning. So those are really the four principles driving all of this. So how do we bring this into a classroom? How do we, with our products, make sure it’s hands-on, understandable and safe for kids? I would encourage you after the session to go to the booth, I think it’s in Hall 3, and actually see these products in person, get hands-on with them, try them out yourself. So we’re really helping students to build real AI literacy by demystifying how AI works.

Through these playful features and lessons, learners explore concepts like computer vision, probabilistic thinking, classification, and machine learning, while seeing their ideas come to life. The result is student agency: kids not just using AI but actually understanding and building with it. So what better way to show you how kids are using it than for me to actually use it. Here we have a lesson about teaching kids about pre-trained classifiers. This is in the last unit: once they’ve gone through some core principles of computer science, and learned about basics and events and loops and data structures, at the end they are looking at AI and data. Here they’re learning how you can use a pre-trained classifier, a model that already exists, to bring their AI Dancer to life.

One thing you’ll notice here, when the code is up, is that the camera they can use is off by default. This is all in line with the principles of AI safety: turning it on is an explicit action the kids are taking. And here, when I hit play now… okay, that’s why I have a video, no worries. Always fun trying to do a live demo; we always have a backup. So you can see that as I’m lifting my hands up and down, you’re seeing the different probabilities changing here. What the kids are learning through this is that with traditional computer science you’ve got zeros and ones, things can be on and off; with AI, what they’re learning is that there’s an 80, 70, 90 percent chance that I’ve lifted my left hand up, or my right hand up, or both hands up, and that’s triggering the different events. They’ve learned about events in earlier lessons, and that’s what’s triggering those.

So they learn that AI is not always right. They’re learning that the more data the model is trained on, the better it gets. And they also learn, from an ethics perspective, that if the AI model is not trained with enough kids’ examples, it will have biases in it as well. So these are very core principles of AI, but taught in a very simple and playful way, making the AI Dancer come to life.

Speaker 1

Ready to excite your students with computer science and AI? This lesson is called Strike a Pose. Students will learn how to customize an AI classifier and program AI-activated events. We’ll kick off with a big question to spark curiosity: how could you train a robot to follow your movements? We will explore the topic through the computer science concepts of AI and data. The question is tied to a real-life example, how AI can be trained to recognize images through data; this makes it more relatable to both students and teachers. In groups of four, each student picks a minifigure, which indicates their role in the collaborative building process. The group will build a robot with movable arms and discuss how it might work.

Then it’s time to get hands-on with coding. Groups will open the LEGO Education Coding Canvas, enter the lesson pin, and connect their hardware. Students create and train their own custom AI classifier by posing in front of the camera and capturing pose data. With simple pre-made code and their classifier, groups explore making the robot mimic their arm poses. Group members take turns so everyone gets hands-on: two students develop the build of the robot while the other two iterate on their code, and later they swap. Students present their robot, talk about their iteration process, and discuss how they created and trained their classifier. At the end of this lesson, students will be able to say: I can create a custom classifier.

I can use pose data to train a custom classifier. I can describe how to create a custom classifier and use data to train it. This is the third of four lessons in the AI and Data unit, where students explore how computers learn from data. In the following lessons, students investigate how data quality and quantity can improve how their AI detects their poses. At the end, they apply what they’ve learned through an open-ended design challenge. All materials for this lesson can be found on the LEGO Education Teacher Portal: lesson plan, ready-to-use classroom presentation, and facilitation notes. No extra prep time needed.

Atish Joshua Gonsalves

So you got to see how the AI model, how the AI Dancer, is really used in the classroom. What you also saw is that kids had meaningful roles in the building process as they were building out the model, and meaningful roles when they were coding and training the AI as well. And none of this can happen without teachers, right? We cannot simply drop new standards and mandates on educators without support for them. You saw the video briefly reference the teacher portal, where teachers get all the resources and support they need to bring computer science and AI to kids.

We know that most teachers who are teaching computer science are actually not computer science teachers themselves; they are teaching math, science, or English, and so they need to be prepared to really scale this up as well. So we really see this not as a challenge of access to tools, but of access to confidence. And with that, I’m very pleased to hand over to Richa, who leads product development on the retail side and is behind the super exciting SmartBricks, if you’ve seen those.

Richa Menke

Thanks, Steve. Hi, everyone, good morning. Thank you for having me. My name is Richa Menke, and I head up interactive play at the LEGO Group. So, we’ve just heard an important call to action in terms of AI literacy: preparing children to understand and navigate an AI-powered world. And this matters enormously. But what I’d like to do is spend a few minutes discussing the other side of this question, which is: how do we prepare AI for kids and imagination? Part of the reason we’re here is that we believe our focus on play and imagination not only unlocks exciting new play experiences, it might just be the unlock to a more inclusive and empowering future of AI.

So, childhood, as we know, is formative. It’s not a market opportunity; it’s a developmental window that closes. What enters that window shapes who we become: our sense of confidence, our curiosity, our relationship with struggle and creation. And very importantly, that shaping can often be invisible. So this is very important to us in what we do in the Creative Play Lab, which is the innovation team at the LEGO Group. What we do is look at how we create more and more relevant play experiences for kids, and how we employ new technologies in service of better play for kids, but always keeping in mind our DNA as the LEGO Group: that hands-on, minds-on play experience that we all love.

So eight years ago, our team asked the question: in a world of digital screens, how could we offer kids more interactivity in their LEGO play experiences, but without a screen? And we were really, really committed to this and spent eight years getting there. We just launched, in January, the SmartPlay platform, which is a new dimension of LEGO play. What this is: as the child is playing with the SmartBricks in their models, the play actually responds with appropriate sounds and behaviors. So imagine you have your Star Wars X-Wing; the way you move it around, if you fly with it, it’ll swoosh, and if you drop it, it’ll make a crash sound.

So, you know, it’s really responsive to the kid. And all of this without a screen. Without a screen. That was very, very important to us. And also without AI; we just didn’t need AI in this solution. But, you know, we’re also not entirely sure if AI is ready for childhood. We really believe that childhood deserves deliberation, and that deliberation might be an unlock, as I mentioned, to the future of AI. So first of all, AI holds tremendous potential when you think about play, when you think of the creative barriers that kids face in play. For example, I’m sitting with my brick bin, I have a ton of bricks, and I don’t know where to start: this fear of the blank canvas.

AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help us better understand a child’s intent so we could offer more relevant, meaningful experiences. And one of my favorite aspects, which I think is super interesting, is that generative AI is probabilistic. In other contexts, like productivity, a hallucination is a bug; but when it comes to play, maybe that hallucination is just a playful feature. So there’s huge potential in what AI could bring to offer better play. But of course, as you know, there are many challenges that need to be addressed, and there are three key tensions that we think are really important when we think about kids and childhood.

So first of all, there’s this tension between efficiency and imagination. If I can get an answer just like this, I don’t have to wait, I don’t have to struggle, I don’t have to develop my imagination. Does that rob kids of the opportunity to really develop their imagination and, more importantly, develop confidence in their own imagination? Then personalization and identity: a child at seven is not the same as who they’re going to be at 17. So if we start personalizing the experience for who they are at seven, are we holding them back? And finally, assistance and agency: are we raising kids for whom it’s very easy to prompt, but who don’t have the ability to really persevere?

If I can get an answer just like this, I don’t have to wait, I don’t have to struggle. These are some of the key tensions that we see. And of course, there’s a lot of opportunity, but we feel the responsibility to ensure that these tensions are addressed. So when we develop new play experiences, we ask ourselves: does this increase or decrease the choices that a child has? So, child agency. Does this expand imagination? I’d encourage you to ask yourself these questions as you develop AI solutions. Does it preserve that healthy developmental friction where you have to actually think? And finally, would I want this shaping my child’s inner voice? That’s a way to really think about what’s right.

And I'd love to leave you with this question that we spend a lot of time thinking about: as we look at AI systems today, what exactly are we optimizing for, and how important that choice is. If we optimize AI systems for engagement, what we're going to get is more attention. But what if we optimize for childhood? Then we're going to optimize for potential. Thank you very much.

Saadhna Panday

All right. Good morning, everybody. I'm Saadhna Panday, and I'm the chief of education at UNICEF India. And it's a pleasure to moderate today's panel discussion on AI literacy and children. So we've heard a lot at the summit about the wonder of tech. It really feels good to talk about the wonder of children and of education. So I want to thank Lego for creating the space for this discussion. We all know that AI has brought a step change in how we live, work, and play. And there's no doubt that it is impacting children's lives and how they experience education. The problem is that it is doing so unevenly.

For a child living in urban Delhi, AI has found its way into their education either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much. Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evidence-based solutions. Two to three decades of digital learning have yielded small-scale wins and modest impact on learning. And yet we've seen the massive impact of AI already on health systems, and that gives us tremendous hope. I keep repeating this example because I'm fascinated with it. In the area of radiology, AI has helped the diagnosis of pancreatic cancer 438 days earlier than would have been normally expected.

We were previously diagnosing pancreatic cancer at stage four. We can now diagnose it at stage one, and AI diagnoses it with greater accuracy than any human ever can, and this without touching a patient. That makes me feel excited. We are looking for that kind of accelerator in education: something that's going to bring efficiency and quality without widening inequality and, as you've said, that remains deeply human-centered, because we know that learning is an inherently social process. We cannot be naive about this. We are walking a tightrope with something that is scaling so fast and evolving so rapidly, but anybody who's worked in the education system knows it's a big ship; it takes a wide berth to turn. Even with that, we are looking for a public good out of AI, because we need it. These are really tough interests to marry, but it has been done for vaccine rollout, and it is being done in countries like Estonia right now within the education space. Through all of this, you got it bang on: we've got to keep teachers, pedagogy and curricula at the center, and more than anything else we need to keep children at the center, matching their right to learn by multiple modes, including tech, with their right to protection, participation and privacy. We need to keep that in mind. But time and again we make the error of underestimating the capacity of children.

They're not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time. So today's conversation is about agency. How do we build AI that empowers children to become creative, critical, independent thinkers, that maximizes their potential, takes the best of AI, but offsets its risks? To help us through that conversation, I have Tom and Richa. Welcome again, Tom and Richa. And we're looking forward to a very robust engagement this morning. Okay. So Tom, we're going to start with you. You talked about AI sometimes feeling magical, that it's abracadabra and voila, something beautiful appears. And we know how children love magic.

They really become enthralled with it and…

Tom Hall

Children do indeed love magic, don't we all? And we all like fast results. And increasingly, we have much shorter attention spans than we had maybe even 10 years ago, and so we're all looking for quick fixes. I think we're overlooking the fact that children have immediate access to data and information now that they trust inherently from the get-go, and they will take an answer and feed it back as if it is the gospel. So there is this real danger that AI is indeed seen as a magic box, particularly generative AI. And I think it's amazing that children have this inherent curiosity, and the Lego group celebrates that curiosity every day.

It's a wonderful thing. But as I said, I think it's a real mistake if we don't teach children to question the magic and actually make magic for themselves. And in order to do that, that's why we are so passionate about these fundamentals of AI literacy: because if we simply hand children a box that promises quick, magical results, I think we are really short-selling them. So I'd much rather we hand over the screwdriver, we hand over the kind of compass, and allow them to take things apart and start to create their own ideas. I'm not sure if I addressed your question there, but the magic is something we really want children to create for themselves. And I don't think we should be under any illusion that they're going to work this out without an education system, and a societal system, that takes this responsibility very, very seriously. And it's not about taking this responsibility in a few months' or a few years' time. The time is now to maybe stop some things and actually start a fundamentally different approach.

Saadhna Panday

…losing the responsibility to protect them.

Richa Menke

Thank you. Thank you for the question. Yes, it's challenging, because kids have access all the time. You can't stop it. As you say, they have a mind of their own. But I think, as we've seen even with social media, we don't always understand the long-term consequences. While I can have an immediate reaction and something that makes me happy in the moment, what is that going to do in the long run? So I think this focus on education as a filter to understand the long term, as a kind of compass for what a better experience is, is incredibly important. So that's our position in terms of how we would employ AI.

Saadhna Panday

Wonderful. So there are two things that we need for empowerment. One is foundational skills: the child needs to have a basic level of literacy to be able to engage with language models. Second, critical web and AI literacy. And the model you put out looks fantastic. Now let's take the model into a real-world classroom. What is it going to look like in rural Rajasthan, where we've got multigrade, multilingual, multilevel classes? How do we make this come alive and have relevance for those types of settings?

Tom Hall

I think that the best thing you can do, and any teachers in this room will know this, is ask the children who are looking at you what type of conversation they want to have. And in the case of AI, we've just produced a template to discuss AI policies with your classes. And children will assess this question in a very, very smart, thoughtful way. If we don't ask them the question, again, we are very guilty of simply publishing something and deciding that it's in their best interest. Of course we need to guide them, and we've got a lot of information that we need to share with them. But let them think their way through this, and the best way to do that is to ask the questions. So, yeah, have a discussion around, you know, where does bias show up in their lives? What might that look like if a technology system leant too heavily on a false set of information?

Teaching them sort of the basics of if-then concepts. I think you can do that in any type of classroom, and you don't need any type of equipment on the table. You need minds to be switched on, and to do that I think you need to ask children the questions, you need to trust that they're going to have some thoughts, and you need to help them guide that policy. So that's something we'd love to see spread widely.

Atish Joshua Gonsalves

Yeah, maybe just coming in here. Prior to Lego, I also worked with the UN Refugee Agency for many years, and saw these applications of ed tech in quite rural or humanitarian contexts as well. So I think there are interesting ways to bring some of these concepts to life even in very constrained settings; I think I heard the phrase frugal AI being used here at the conference. But one of the things, even for us: just because we have access to these powerful models doesn't mean we need to put them directly into the hands of kids. So even as we look at the education progression from kindergarten right up to grade eight and beyond, age-appropriateness is super important.

So even as we're looking at the littlest ones and how they learn about computational concepts and AI, a lot of where we start is actually completely screen-free. They are working on understanding computer science concepts like sequences and loops, and doing this completely with bricks. And you can imagine in some of these contexts it may be bricks, it may be something else, but it isn't even the hardware, or a screen, at all. So you can teach concepts of probability and computational thinking even if you don't have these resources. And this aligns well with how we think of age-appropriate progression. But I really challenge the audience around this need to want to put things into kids' hands directly in any context.

I mean, not just in challenging contexts in rural India, but in other countries as well. Let's not rush for the fastest and the best model, but ask what's actually right for the kids.

Saadhna Panday

Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we have to mediate the fact that smartphone penetration in a country like India is widespread, so access is there. And a school is a microcosm of a local community. So whatever is happening in the whole country, in our homes, is going to reflect in the school, and if it impacts child well-being or if it impacts learning, then the schooling system will have to respond. So Tom, I'm coming back to you again. AI can sometimes feel very passive. You put something in, you get something out. But we know that the best learning happens through engagement.

It’s that journey of discovery that excites the child. So how do we make this thing interactive? What do we need to do to support creativity in the use of AI?

Tom Hall

I'll declare my bias here, which is that I work for the LEGO Group, therefore I'm deeply entrenched in a passion for hands-on learning and a deep belief that when you use your hands, and the science backs this up, you are engaging all the parts of your brain that lead to learning. It leads to deeper engagement and ultimately a deeper mastery of the subject in front of you. We could show, through thousands of research studies that we've done through the LEGO Foundation or with any of our research partners, that spatial awareness skills develop more strongly when children are using their hands. The very basics of mathematics in the primary years develop in a stronger way when you're using manipulatives and you're thinking through things.

So this use of hands and manipulatives is something we believe in deeply. And artificial intelligence is a technology concept; we really believe there's no reason why hands-on learning shouldn't be brought in here. You saw in the video that we designed for collaboration first. So this is not a one-on-one learning experience. We really want children to learn together. Groups of four. One, two, three, four. Whatever number you put around the table. We want them to be looking at each other and challenging each other, working in groups, learning the fundamentals of collaboration. It's not always easy. Things will break. You'll have to start again. You might not like the role you've been given.

That's a great life lesson. So I think AI can sometimes feel like the magic box, but also maybe the dark box. And actually, it's about helping kids understand the technology fundamentals that underlie artificial intelligence and giving them curriculum that means something to them. So we introduced a computer science GCSE in the UK back in 2014. I went to school in the UK; it's where I live. I'm not too proud to say that that was a failure in terms of uptake by students, because there were two mistakes that we made. One was a real lack of teachers, and there was no teacher training, so there was no kind of innovation put into the delivery pipeline. But then there was also a real lack of innovation in the courseware and the curriculum that we designed for that GCSE.

And so children just sat very bored in a computer science class learning very outdated principles. So I think the best thing we can do for interactivity in artificial intelligence education is apply this to things that mean something to today's teenagers and young people. And that means meeting them where they are and helping them apply fundamentals of AI to the life that's going on around them. And I think that applies both to the child in the classroom and also the teacher. So give them curriculum that applies now rather than…

Saadhna Panday

I must say that I've seen the joy of the Lego bricks. I'm South African, and I would travel to the rural areas of KwaZulu-Natal, and there'd be nothing else there except a hut. You go to the back of the hut and you see a child with two things: the workbook given by the South African government and hand-me-down Lego bricks. And you would see that coming alive of head, heart, and mind. And it was beautiful to see. So thank you, Lego, for that. All right. Richa, I'm coming back to you.

We're excited about the tech, but we're also worried about safety. And we're worried about privacy. And our young adolescents in particular, who also make up the child cohort, are worried about privacy and safety. So in all of the issues that a private entity needs to think about when they're designing a digital experience for children, where do safety and privacy stand? And how do you create this joyful, meaningful experience for children while reducing the risk with a tool like AI?

Richa Menke

Thank you. So, as you can imagine, safety and privacy are absolutely foundational and non-negotiable, as we've seen on the LEGO Education side and similarly on ours. And just to be clear, none of our LEGO products actually employ AI. So the smart brick is not using it, for all of these exact same reasons: if you look through the lens of childhood, we have a higher bar that we need to meet. So there is this tension, that obviously there's so much potential for meaningful, incredible, hands-on play developed through AI, but at the same time, until that bar is met, we would not put that in our products.

Saadhna Panday

Excellent. So for our young people of today, who will be consumers of AI, trust, transparency, privacy, sustainability, and voice will be critically important. It's important that we're not just handing something to them; they get to shape it and co-create it with us. At this point in time, we have a couple of minutes, so we're going to take a couple of questions from the audience. Since I'm left-handed, my bias is on the left side; I'm declaring it up front. So I'm going to take three quick questions in the first round, and then I will come across. So I'll take one from the front, one from the back, and then on this side. Right. Okay. Over to you.

Nikhil Bawa

Thank you. Thank you. Fantastic session. My name is Nikhil Bawa. I write about AI and education. I'm just curious what advice you would have for parents, because schools are going to be slow to adapt. Do you have resources for parents in particular? I'm trying to develop an alternate home curriculum for four hours a week outside of school for my kid. Just curious about what you would recommend for parents. You need a combination of both structured and unstructured play, right? I want to know your views on how you're thinking about unstructured play with AI, and then also other things like self-regulation, which becomes very difficult even for a teen to manage.

So that's one question. And the second is, we're doing research on this entire AI adoption at home, which is beyond classrooms. And the initial findings are quite disturbing, because it is getting adopted just because it's becoming like a race, especially in India. So I would also like to know if there are some recommendations on AI play adoption from you guys. Okay, beyond the classroom.

Asha Nanavati

Good morning. Thank you so much. My name is Asha Nanavati. I'm with Alliance Educational Foundation, which runs a small charitable K-12 school in Kerala. They love the Lego products, you know. But I really heard what you said earlier, Richa, about capacity building, about including teachers. We're a charitable school; all profits go back to the meals, to the child. And we don't necessarily have funding for training teachers on AI adoption and safety practices. We have learners from play school up. So is Lego thinking about doing anything in India? We would definitely love to hear more about that. Thank you.

Tom Hall

Can we take a response to those questions? Can I work back? So we have a recommended AI toolkit to take into classrooms, and it's a facilitated conversation with children around, you know, what do you think about AI? What should a policy be for a school and a classroom? To be honest, I think that is as applicable to a group of teachers on a training day as it is to children and a teacher. And I've seen really great examples of schools that I know in the UK following a similar approach. I think maybe there's a theme in all of the questions: don't be afraid of applying the brakes, right? Things are moving incredibly fast, and I wouldn't go along with what can feel like this very fast river or wave or current.

I think it's perfectly okay to apply the brakes and say we need to hit pause and we need to have a conversation. And the conversation needs to be about what we want. And when I say we, I mean the children in the classroom and the teacher. Like, what do we want to get out of this experience? So have the conversation. Have the conversation first, and don't worry too much about the tools or the software that you're worried you might be missing out on using. And as Richa just shared, we're not using generative AI in our products, and that's for a very deliberate reason, because we just don't know enough yet about safety and privacy.

We have conducted research into that, and we’re following that very closely, but we’re not willing to take any risks. And I think this time of childhood is just too precious to make some shotgun choices that we’re going to pay very heavily for in the future. So I think empower the teacher and the child to have some really formative discussions about what do we want to get out of this, and then maybe look at what’s available.

Atish Joshua Gonsalves

Learning and child agency versus some scaffolding. So as we bring these products into core classrooms as part of education strategy now, we do understand the need for teachers to provide scaffolding as they take the students through this learning journey. So at LEGO Education, we follow something called a 5E model: engage, explore, explain, elaborate, and evaluate. But it's just a fancy way of saying, how do you get the kids hooked initially on a big-picture question or a real-life example, while providing the educators and the students a structure as you go through the process of thinking about that question? I think someone had that question yesterday: the distance between a question and an answer, and that space in between, is where magic or inspiration happens, right? So give that space for it to happen.

And then, when they're building, you're providing the structure for them to work in groups and build this out. But towards the end, in the elaboration phase and at the end of every unit, there's something called a design challenge, where the kids are not provided that much instruction. They're given an open-ended prompt, and then they take the concepts and lessons that they've learned and apply them in a more open-ended way. Outside of the LEGO Education computer science and AI product, we also have something called First Lego League, which is the world's largest annual STEM competition. And there, it's so inspiring to see these groups of eight kids building a robotics challenge and then doing a science theme as well. But that's completely open-ended, so they will go beyond what they would do within a 45-minute lesson and have a lot more agency in terms of what they can create beyond what the teacher would take them through in a classroom.

Tom Hall

Nikhil, we have some really great resources actually available online, both on Lego and the Lego Foundation, around facilitated play with your child, starting from very early years through to later years.

Saadhna Panday

So I was going to take two more questions, but we're coming to the end of the session and we need to close. Okay, now I will take one question, but really quickly.

Tom Hall

Well, I think we heard a lot yesterday that we need to make sure that any tools that are made available are done so in languages that mean something to you on the ground. So I think there are many tools out there that can do automated translation. We hope that the quality is going to be really strong in them. We’re currently producing in English language. Of course, there will be localizations in the future.

Saadhna Panday

All right, colleagues, we need to come to a close because people need to move to the next session. We're designing for safety and for equity, and while we provide services, we need to match them with demand. And to match demand, teachers, learners and parents need to be empowered. That responsibility rests with all of us. It's hard to do many things in an education system; empowerment is not one of them. We can do that quickly, we can do that with scale, and we can do that with equity. So I want to say thank you to our panelists for an engaging conversation, and a big thank you to Lego for bringing us together to have a conversation about children, education, and AI.

Thank you so much. The session is closed. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (15)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“AI literacy is essential and children need a voice in AI policy; AI is unavoidable like taxes.”

The knowledge base stresses the importance of empowering young people to engage with AI and participate in its governance, aligning with the claim that AI literacy is crucial and youth should have a voice in policy [S1] and [S89] and [S90] and highlights the broader need for responsible AI for children [S2].

Confirmedhigh

“LEGO Education ensures AI safety by processing data on‑device so data never leaves the device, provides clear data provenance, and avoids anthropomorphising AI.”

LEGO’s child-centric design emphasizes on-device AI processing and privacy-by-design, ensuring data stays local and supporting transparent, safe AI experiences, which corroborates the reported safety rules [S62] and the edge-computing, on-device model approach described in privacy-focused sources [S97] and [S98].

Confirmedmedium

“LEGO Education’s design supports neuro‑diverse learners through universal design principles.”

LEGO’s commitment to inclusive, child-well-being-focused design, including support for neuro-diverse learners, is documented in the knowledge base, confirming the claim of universal design for diverse learners [S62].

External Sources (99)
S1
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we …
S2
Responsible AI for Children Safe Playful and Empowering Learning — – Saadhna Panday- Asha Nanavati – Tom Hall- Saadhna Panday
S3
Responsible AI for Children Safe Playful and Empowering Learning — – Saadhna Panday- Atish Joshua Gonsalves- Asha Nanavati – Tom Hall- Atish Joshua Gonsalves- Speaker 4- Asha Nanavati
S4
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — Good morning. Thank you so much. My name is Asha Nanavati. I’m with Alliance Educational Foundation, which runs a charit…
S5
Safeguarding Children with Responsible AI — -Tom Hall- Vice President and General Manager at Lego Education (works with the National Legal Foundation)
S6
https://dig.watch/event/india-ai-impact-summit-2026/safeguarding-children-with-responsible-ai — Thank you. Can you hear me? Yes? All right, so delighted to be here with you all. I’m one of the two co -moderators, and…
S8
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — So that’s one question and the second is, we’re doing a research on this entire AI adoption at homes which is beyond cla…
S9
https://dig.watch/event/india-ai-impact-summit-2026/responsible-ai-for-children-safe-playful-and-empowering-learning — AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help u…
S10
Responsible AI for Children Safe Playful and Empowering Learning — Thanks, Steve. Hi, everyone, good morning. Thank you for having me. So, my name is Richa Menke. I head up interactive pl…
S11
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S12
S13
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — -Speaker 4: Role/title not mentioned (made a brief interjection during the session)
S14
Responsible AI for Children Safe Playful and Empowering Learning — – Saadhna Panday- Atish Joshua Gonsalves- Asha Nanavati – Tom Hall- Atish Joshua Gonsalves- Richa Menke – Tom Hall- At…
S15
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S16
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S17
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S18
S19
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Lee Rainie:Thank you so much, President Book. It’s a pleasure to be here and to be associated with this really important…
S20
Building Indias Digital and Industrial Future with AI — – Rahul Vatts- Speaker 1 – Speaker 1- Deepak Maheshwari
S21
From principles to practice: Governing advanced AI in action — Sasha Rubel: It’s not an afterthought. I love that. Safety is the foundation and not an afterthought. It’s again one of …
S22
Conversation: 02 — “So that’s why without trust and safety and understanding of what’s happening in your underlying environment, it becomes…
S23
Let’s design the next Global Dialogue on Ai & Metaverses | IGF 2023 Town Hall #25 — In conclusion, the analysis of the data provides insights into AI, misinformation, education, and inclusivity. A balance…
S24
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Incorporating local languages is important for making technology accessible to non-English speakers.
S25
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Yeah, thanks, Steve. Very well covered. If I can add just a few more points. I think one of the challenges we see is cop…
S26
How nonprofits are using AI-based innovations to scale their impact — Right. Yeah, I guess the error rate, the hit ratio and what kind of an impact it has depends on the use case. And if the…
S27
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for …
S28
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Sophie:Yes, I can give a short insight. So we have the Digital Services Act and the European Union, which is going to be…
S29
AI award-winning headless flamingo photo found to be real — A controversialAI-generated photo of a headless flamingohas ignited a heated debate over the ethical implications of AI …
S30
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S31
Open Forum #38 Harnessing AI innovation while respecting privacy rights — Audience: Thank you so much for your presentation. My name is Hasara Tebi. I’m from Mawadda Association for Family Sta…
S32
WS #162 Overregulation: Balance Policy and Innovation in Technology — Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects o…
S33
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities — Apple, Microsoft, and Google arespearheadinga technological revolution with their vision of AI smartphones and computers…
S34
WS #172 Regulating AI and Emerging Risks for Children’s Rights — Nidhi Ramesh: Hello, everyone, and thank you, Leanda, so much for such a kind introduction. I’ll repeat, my name is Ni…
S35
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Moderate to significant disagreements with important implications. The speakers’ different perspectives on AI’s current …
S36
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S37
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Unexpectedly, there was strong consensus across industry, government, and academic perspectives on the need for collabor…
S38
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — UNESCO is providing policy guidance on AI in education, focusing on frameworks that emphasize ethical use of AI in educa…
S39
Leveraging the UN system to advance global AI Governance efforts — Equally, there’s an emphasis placed on the benefits of collaboration and teamwork. The analysis proposes that cooperativ…
S40
Education meets AI — Participants stressed the need for unbiased data to ensure fair and equal treatment. It was acknowledged that mistakes m…
S41
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S42
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — 2. Create Flexible Learning Frameworks: Develop adaptable educational approaches that can be tailored to different conte…
S43
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S44
#IGF2020: Final report — Kids are advised to resist the urge to answer bullies, or alternatively, to block them while seeking help from those they…
S45
Informal Stakeholder Consultation Session — Making Capacity Building Concrete and Funded:Emphasized the need for concrete action on capacity building by providing f…
S46
Opening of the session — Capacity building is essential for political and institutional resource development. There is a need for reflecting cap…
S47
DCAD & DC-OER: Building Barrier-Free Emerging Tech through Open Solutions — The discussion emphasized the importance of a multi-stakeholder approach, involving policymakers, educators, developers,…
S48
Responsible AI for Children Safe Playful and Empowering Learning — “So this use of hands and manipulatives is something we believe in so deeply.”[45]. “You lead to deeper engagement and u…
S49
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Maybe I can take this one. Yes, thank you for the comment, Elcho. Yes, it is a risk and it is an issue …
S50
Lessons learned: Offering our course on AI for the first time — Participants who attended the AI course were frequently motivated by professional needs. Either they had been requested …
S51
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — UNICEF has played a proactive role in the field of AI for children by creating policy guidance on the topic. Importantly…
S52
AI and Magical Realism: When technology blurs the line between wonder and reality — Avoid using magical arguments for practical governance: e.g. framing current policy issues on market, human rights, and kn…
S53
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S54
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Pavan Duggal:We are very clear that the legal frameworks of artificial intelligence have to be an important catalyst in …
S55
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Generative AI and large language models have the potential to significantly enhance conversational systems. These system…
S56
Workshop #1: "Digital infrastructures and services in the AI era: what are the stakes for regulation, security, and data sovereignty?" — Drudeisha Madhub: Madam Chair, I thank you sincerely, because it has so far been brilliant on your …
S57
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers. Large language models (LLMs), powered by vast a…
S58
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S59
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S60
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Additionally, there are worries about the use of unverified information in machine learning processes. These concerns hi…
S61
WS #376 Elevating Childrens Voices in AI Design — Stephen Balkam: Yeah, this feels like deja vu all over again, I was very much involved in the web 1.0 back in the mid 90…
S62
RITEC: Prioritizing Child Well-Being in Digital Design | IGF 2023 Open Forum #52 — By addressing the crisis head-on, LEGO Group demonstrates their commitment to protecting children and building a safer o…
S63
Safeguarding Children with Responsible AI — Tom Hall from LEGO Education highlighted a critical implementation gap: while 80% of teachers recognise AI literacy as f…
S64
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Gbenga Sesan:It’s like you framed my conversation already. I’m glad we’re having a lot of conversations around AI. This …
S65
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — A majority of the world’s education systems lack adequate literacy and numeracy levels. The project also had a positive impa…
S66
Empowering India & the Global South Through AI Literacy — I hope we don’t become artificially polite, but then I’m hoping that some of these things rubs off in the language of te…
S67
Laying the foundations for AI governance — This comment is insightful because it directly contradicts the common assumption that companies oppose regulation. Seafo…
S68
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S69
The AI gold rush where the miners are broke — The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economi…
S70
Responsible AI for Children Safe Playful and Empowering Learning — It’s a wonderful thing. But as I said, I think it’s a real mistake if we don’t teach children to question the magic and …
S71
AI (and) education: Convergences between Chinese and European pedagogical practices — Norman Sze: Thank you for introduction. It’s my honor to join this forum and share insight from perspective of professio…
S72
Safeguarding Children with Responsible AI — This comment shifted the discussion from abstract concerns about AI risks to concrete pedagogical approaches. It influen…
S73
WS #232 Innovative Approaches to Teaching AI Fairness & Governance — Ayaz Karimov: Yeah. I can hear myself. So it means actually you can also hear me. Today, I will talk a little bit about …
S74
Empowering Workers in the Age of AI — Tom Wambeke: Good afternoon. This is the last input before we can go a little bit more interactive. As you see from the …
S75
Scaling Multistakeholder Partnerships: Connectivity and Education — Ms. Erin Chemery:Thanks so much, Karen. And thank you to ITU and GECA for hosting us today. I’m really loving the learni…
S76
Education meets AI — It is argued that employing a wide variety of people to collect data and design algorithms can ensure that no one is lef…
S77
Transforming Health Systems with AI From Lab to Last Mile — Data privacy, security and ethical safeguards
S78
WS #162 Overregulation: Balance Policy and Innovation in Technology — Natalie Tercova: Of course, I’ll try to be very brief. So I very much agree that it very depends on the specific case…
S79
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — Red en Defensa de los Derechos Digitales: It is essential for states and stakeholders to collaborate to strengthen the …
S80
#IGF2020: Final report — Kids are advised to resist the urge to answer bullies, or alternatively, to block them while seeking help from those they…
S81
DCAD & DC-OER: Building Barrier-Free Emerging Tech through Open Solutions — The discussion emphasized the importance of a multi-stakeholder approach, involving policymakers, educators, developers,…
S82
Open Forum #29 Multisectoral action and innovation for child safety — Examples include developing tailored school curricula materials, capacity-building efforts for teachers and parents, and…
S83
WSIS Action Line C7 E-learning — This comment redirected the conversation toward practical implementation challenges and the need for capacity building. …
S84
AN INTRODUCTION TO — A much needed step beyond awareness building and training of youth, parents and educators is capacity building in the ar…
S85
[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation — Mario Nobile: Only one minute if I may. Robert answered all the questions, but three points. The first one, Italy, …
S86
A Digital Future for All (morning sessions) — Amr Talaat: The hope of digital, or is it the fear of digital? Distinguished guests, this is a question that resonates…
S87
OPENING STATEMENTS FROM STAKEHOLDERS — Discussions on artificial intelligence show that technological development is not without risk. Discussions on emerging…
S88
How African knowledge and wisdom can inspire the development and governance of AI — The aim is to safeguard the accurate portrayal and preservation of Africa’s knowledge and cultural heritage, entrusting …
S89
AI for Good Impact Initiative — It is important for young people to feel they can contribute to and influence their future. Young people should be able…
S90
Open Forum #26 High-level review of AI governance from Inter-governmental P — The speaker mentions the perception among youth that governance often comes in to regulate innovative ideas before they …
S91
AI promises, ethics, and human rights: Time to open Pandora’s box — Given the variety, interdependence, and complexity of the issues, multiple approaches need to be combined in order to me…
S92
UN Human Rights Council: High level discussion on AI and human rights — System prompt: You are supposed to answer questions on the impact of AI on human rights with specific reference to the …
S93
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — This comment was prophetic in highlighting how technological disruption (like AI automating coding) can make narrow skil…
S94
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — Andere mentions the mismatch between university education and the skills needed for the future of work, citing his perso…
S95
Key points by session — – -The city of Chicago implemented a successful education reform thanks to clear vision and leadership on the part of ke…
S96
New Colours of Knowledge — Regarding the possibility of attracting and retaining the best individuals in the profession, one of the identified pro…
S97
Trusted Connections_ Ethical AI in Telecom & 6G Networks — He highlights that privacy‑sensitive and latency‑critical workloads are best kept at the edge, where user data never lea…
S98
Google’s AI Edge Gallery boosts privacy with on-device model use — Google has released an experimental app called AI Edge Gallery, allowing Android users to run AI models directly on their…
S99
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — Ramadori criticizes the current approach of trying to fix AI problems after they manifest, arguing that this patching me…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 146 words per minute · 455 words · 186 seconds
Argument 1
AI literacy essential for future participation (Speaker 1)
EXPLANATION
Speaker 1 argues that understanding AI is a prerequisite for staying relevant in a world where the technology is ubiquitous. They stress that without AI literacy individuals will be left behind and that young people should have a voice in shaping AI policies.
EVIDENCE
The speaker expresses curiosity about how AI works and a desire to learn its everyday applications, likens AI to taxes as unavoidable, states a commitment to solving big problems, and calls for a say in AI policies, concluding with gratitude for being asked for input [1-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Empowering India and the Global South through AI literacy and calls for inclusive AI education underline the necessity of AI literacy for future participation [S18], while discussions on education, inclusion, and literacy as must-haves for a positive AI future reinforce this point [S19]. Building India’s digital and industrial future with AI also highlights the central role of AI literacy [S20].
MAJOR DISCUSSION POINT
AI literacy as a societal necessity
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Tom Hall
6 arguments · 167 words per minute · 2191 words · 786 seconds
Argument 1
Teach AI fundamentals, not “magic box” perception (Tom Hall)
EXPLANATION
Tom Hall contends that AI should be presented as a technology rather than a mysterious magic box. He advocates for giving children the conceptual tools—like a screwdriver—to deconstruct and understand AI systems, not just to consume their outputs.
EVIDENCE
He describes how children view generative AI as a magic box that produces answers instantly, then argues that AI literacy must move beyond using the box to understanding its underlying mechanisms, using the screwdriver metaphor to emphasize foundational knowledge [16-23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children emphasizes that AI is a technology system, not a magic box, and stresses teaching fundamentals rather than superficial use [S2]; Safeguarding Children with Responsible AI notes the shift from abstract concerns to concrete curriculum that demystifies AI [S5].
MAJOR DISCUSSION POINT
Demystifying AI for learners
Argument 2
Manipulatives and tactile learning deepen engagement and mastery (Tom Hall)
EXPLANATION
Tom Hall emphasizes that hands‑on, manipulative‑based learning engages multiple brain regions and leads to deeper mastery of concepts. He links tactile interaction with improved spatial awareness and foundational mathematics.
EVIDENCE
He cites research from the LEGO Foundation showing that manipulatives strengthen spatial awareness and basic math skills, and argues that such hands-on approaches should be extended to AI education [263-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The responsible AI discussion highlights the value of hands-on manipulatives for deeper engagement and mastery of concepts [S2]; Safeguarding Children with Responsible AI also cites research linking tactile learning to stronger spatial awareness and mastery [S5].
MAJOR DISCUSSION POINT
Physical interaction as a learning catalyst
AGREED WITH
Atish Joshua Gonsalves, Richa Menke, Saadhna Panday
Argument 3
Emphasize policy discussion and safety as core to AI adoption (Tom Hall)
EXPLANATION
Tom Hall stresses that before deploying AI tools in classrooms, educators must facilitate policy discussions with children about bias, safety, and ethical use. He warns against treating AI as a black‑box solution without critical questioning.
EVIDENCE
He notes children’s tendency to trust AI outputs as gospel, calls for asking children about bias and encouraging them to think through policy questions, and repeats the need for guided discussion rather than simply providing tools [187-194][229-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
From Principles to Practice stresses that safety must be built-in, not an afterthought, for AI adoption [S21]; Conversation notes that trust and safety are prerequisites for effective AI use [S22]; Safeguarding Children with Responsible AI calls for policy dialogue and safety-first approaches in classrooms [S5]; Professional development literature underscores the need for teacher support when introducing AI safely [S27].
MAJOR DISCUSSION POINT
Policy dialogue as a safety measure
AGREED WITH
Richa Menke, Atish Joshua Gonsalves, Saadhna Panday
Argument 4
Localization of AI tools into relevant languages is essential (Tom Hall)
EXPLANATION
Tom Hall argues that AI tools must be available in languages that are meaningful to local learners to ensure equitable access. He mentions ongoing work to translate tools beyond English.
EVIDENCE
He states that tools should be delivered in local languages, notes current production in English, and promises future localizations, highlighting the importance of automated translation quality [381-385].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure research highlights the importance of local language incorporation for accessibility [S24]; Welfare for All discusses challenges of copying regulations without local adaptation, underscoring the need for language-specific localization [S25].
MAJOR DISCUSSION POINT
Language relevance for equitable AI use
AGREED WITH
Saadhna Panday, Atish Joshua Gonsalves
Argument 5
Teachers require dedicated AI training and resources to scale implementation (Tom Hall)
EXPLANATION
Tom Hall points out that teachers need specific AI training, resources, and structured toolkits to effectively bring AI literacy into classrooms. Without this support, scaling AI education will be limited.
EVIDENCE
He describes a recommended AI toolkit for classroom conversations, cites examples of UK schools using the approach, and stresses the need for teacher training days and discussions before tool deployment [344-350].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI points to the necessity of teacher training and structured toolkits for AI literacy [S5]; How nonprofits are using AI-based innovations notes that teacher support is key to scaling impact [S26]; Rethinking Learning stresses professional development and institutional backing for teachers introducing AI [S27].
MAJOR DISCUSSION POINT
Professional development for educators
AGREED WITH
Atish Joshua Gonsalves, Saadhna Panday, Asha Nanavati
Argument 6
Children should be enabled to create their own “magic” rather than consume it passively (Tom Hall)
EXPLANATION
Tom Hall argues that children must move from passive consumption of AI outputs to active creation, using the metaphor of handing them a screwdriver and compass to build their own solutions. This shift fosters deeper understanding and agency.
EVIDENCE
He describes the danger of viewing AI as a magic box, emphasizes giving children tools to deconstruct and create, and calls for an education system that takes this responsibility seriously, asserting that the time to act is now [191-194][229-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children stresses giving children tools to create rather than merely consume, highlighting agency over the “magic box” perception [S2]; Safeguarding Children with Responsible AI references the need for real-world curriculum that empowers children to build their own solutions [S5].
MAJOR DISCUSSION POINT
Empowering children as creators
AGREED WITH
Richa Menke, Saadhna Panday
Richa Menke
3 arguments · 163 words per minute · 1203 words · 441 seconds
Argument 1
Play‑driven imagination can harness AI’s creative potential (Richa Menke)
EXPLANATION
Richa Menke suggests that AI, when integrated with play, can unlock new creative possibilities for children. She highlights that generative AI’s probabilistic “hallucinations” can become playful features rather than bugs.
EVIDENCE
She explains that AI can inspire children facing a blank canvas, support diverse learning methods, and that hallucinations in generative AI could be viewed as playful features, illustrating the potential of AI-enhanced play [118-128].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brainstorming with AI opens new doors for innovation describes AI as a creative partner that can inspire play and imagination [S30]; When Code and Creativity Collide discusses AI’s transformation of creative expression, supporting the idea of AI-enhanced imaginative play [S35].
MAJOR DISCUSSION POINT
AI as a catalyst for imaginative play
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Argument 2
Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
EXPLANATION
Richa stresses that safety and privacy are absolute requirements for child‑focused products, and therefore their current LEGO offerings deliberately do not incorporate AI. This cautious stance reflects a higher standard for childhood products.
EVIDENCE
She states that safety and privacy are foundational, notes that none of LEGO’s current products employ AI, and explains the high bar set for childhood experiences, emphasizing the tension between potential and safety [306-314].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
WS #172 Regulating AI and Emerging Risks for Children’s Rights emphasizes safety and privacy as non-negotiable for child-focused AI [S28]; Open Forum #38 highlights privacy concerns in AI deployments [S31]; Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities discusses the imperative of privacy and data protection [S32]; WS #162 Overregulation stresses privacy and bias as essential safeguards [S33].
MAJOR DISCUSSION POINT
Zero‑tolerance for safety risks
AGREED WITH
Atish Joshua Gonsalves, Saadhna Panday, Tom Hall
Argument 3
Tension between efficiency and imagination; over‑reliance on AI may curb creativity (Richa Menke)
EXPLANATION
Richa identifies a key tension: while AI can deliver fast answers, over‑reliance may diminish children’s imagination and confidence in their own creative abilities. She warns that personalization at a young age could limit future development.
EVIDENCE
She outlines the conflict between efficiency (quick answers) and imagination (struggle, creative development), and raises concerns about early personalization potentially holding children back, linking these points to broader risks for imagination and agency [130-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children directly addresses the tension between quick AI answers and the need for imagination, warning that efficiency can limit creative development [S2]; When Code and Creativity Collide further explores how AI can both aid and hinder creative processes [S35].
MAJOR DISCUSSION POINT
Balancing speed with creative growth
AGREED WITH
Tom Hall, Saadhna Panday
Atish Joshua Gonsalves
4 arguments · 214 words per minute · 2059 words · 575 seconds
Argument 1
LEGO product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves)
EXPLANATION
Atish describes LEGO Education’s AI product as a safe, hands‑on platform that encourages collaboration among students. The product is built on universal design principles and runs AI locally to protect privacy.
EVIDENCE
He outlines guidelines for safety (no anthropomorphising, no text generation), fairness, transparency (clear data provenance), and privacy (on-device processing), and explains how the product supports collaborative, hands-on learning through coding and building activities [31-34][36-41][46-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI outlines safe, hands-on approaches for child AI learning, aligning with LEGO’s collaborative, safety-first design [S5]; Open Forum #38 underscores the importance of trust and privacy in child-focused AI tools [S31].
MAJOR DISCUSSION POINT
Safe, collaborative AI learning tools
AGREED WITH
Tom Hall, Richa Menke, Saadhna Panday
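The probabilistic, data-driven behaviour of the classifier described above can be illustrated with a deliberately tiny sketch. This is an illustrative assumption, not LEGO’s actual model: a nearest-centroid classifier over toy “image feature” vectors that turns distances into per-label probabilities, the kind of uncertain prediction students see when training the robot.

```python
# Hypothetical sketch: a minimal on-device nearest-centroid classifier.
# All data stays in-process ("on-device"); predictions are probabilities,
# not certainties, mirroring the lesson that AI output is probabilistic.
import math

def train(examples):
    # examples: {label: [feature vectors]} -> one averaged centroid per label
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]
    return centroids

def predict(centroids, x):
    # Softmax over negative distances yields a probability per label.
    scores = {label: -math.dist(c, x) for label, c in centroids.items()}
    m = max(scores.values())
    exps = {label: math.exp(s - m) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Two toy gestures ("go" / "stop"), each a 2-feature vector.
gestures = train({
    "go":   [[1.0, 0.0], [0.9, 0.1], [1.1, -0.1]],
    "stop": [[0.0, 1.0], [0.1, 0.9], [-0.1, 1.1]],
})
probs = predict(gestures, [0.8, 0.2])  # a new, slightly noisy "go" sample
```

Because the output is a probability distribution rather than a single answer, a class can discuss why the model is more or less confident and how adding training examples shifts its predictions.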
Argument 2
Built‑in safety, fairness, transparency, and on‑device privacy safeguards (Atish Joshua Gonsalves)
EXPLANATION
Atish emphasizes that the LEGO AI solution embeds safety, fairness, transparency, and privacy by design, ensuring that data never leaves the device and that models have clear provenance.
EVIDENCE
He details that the product does not generate text or media, avoids anthropomorphising, follows universal design for neurodiverse learners, guarantees data provenance, and runs all AI locally with no data transmission to third parties [31-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities stresses privacy-by-design and bias mitigation as essential [S33]; WS #162 Overregulation highlights privacy, bias, and data protection as non-negotiable aspects of AI for children [S32]; WS #172 Regulating AI and Emerging Risks for Children’s Rights reinforces the need for trust, transparency, and safety [S28].
MAJOR DISCUSSION POINT
Privacy‑by‑design in educational AI
AGREED WITH
Richa Menke, Saadhna Panday, Tom Hall
Argument 3
Frugal, screen‑free, age‑appropriate AI concepts enable learning in resource‑limited settings (Atish Joshua Gonsalves)
EXPLANATION
Atish argues that AI concepts can be taught without screens or expensive hardware, using frugal approaches such as brick‑based activities that introduce computational thinking and probability even in low‑resource environments.
EVIDENCE
He references his prior work with UNHCR, mentions “frugal AI”, stresses age-appropriate progression starting with screen-free brick activities that teach sequences, loops, and probability without hardware [238-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure research notes that local language and low-resource accessibility are crucial for equitable AI adoption, supporting frugal, screen-free approaches in underserved contexts [S24]; Welfare for All discusses the challenges of applying standards without local adaptation, underscoring the need for resource-appropriate solutions [S25].
MAJOR DISCUSSION POINT
Low‑cost, screen‑free AI education
AGREED WITH
Tom Hall, Saadhna Panday
Argument 4
Dedicated teacher portal provides curriculum, guides, and scaffolding (Atish Joshua Gonsalves)
EXPLANATION
Atish highlights a teacher portal that supplies lesson plans, coding canvases, and scaffolding resources, enabling educators to support students in building and understanding AI.
EVIDENCE
He mentions the teacher portal that offers curriculum, guides, and scaffolding, allowing teachers to facilitate hands-on AI lessons and supporting them in delivering the product’s learning objectives [32-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI highlights the importance of teacher-focused curricula and guides for AI literacy [S5]; How nonprofits are using AI-based innovations points to the need for teacher support resources to enable effective AI instruction [S26].
MAJOR DISCUSSION POINT
Teacher‑centric support infrastructure
AGREED WITH
Tom Hall, Saadhna Panday, Asha Nanavati
Saadhna Panday
6 arguments · 129 words per minute · 1416 words · 657 seconds
Argument 1
AI must be equitable and keep child agency central (Saadhna Panday)
EXPLANATION
Saadhna stresses that AI should be deployed in ways that are equitable and that preserve children’s agency. She warns that AI can exacerbate existing inequalities if not carefully managed.
EVIDENCE
She notes AI’s uneven impact across urban Delhi versus rural Jharkhand, emphasizes the need for equitable, evidence-based solutions, and highlights children’s agency and capacity to shape technology [160-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children stresses that children are active agents, not passive recipients, aligning with calls for equitable AI that preserves agency [S2]; WS #172 Regulating AI and Emerging Risks for Children’s Rights underscores equity and child agency in AI deployment [S34].
MAJOR DISCUSSION POINT
Equity and agency in AI deployment
AGREED WITH
Speaker 1, Tom Hall, Atish Joshua Gonsalves
Argument 2
Joy of Lego bricks illustrates learning in low‑resource contexts (Saadhna Panday)
EXPLANATION
Saadhna shares a personal anecdote about seeing children in a rural South African hut using hand‑me‑down LEGO bricks alongside a workbook, illustrating how simple, tangible resources can spark learning even in resource‑poor settings.
EVIDENCE
She recounts traveling to rural KwaZulu-Natal, seeing a child with a workbook and LEGO bricks, describing the experience as “beautiful” and a vivid example of head, heart, and mind learning [293-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure research highlights how tangible, low-tech resources can bridge digital divides in low-resource settings, echoing the LEGO brick example [S24].
MAJOR DISCUSSION POINT
Tangible play as a bridge in low‑resource schools
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Richa Menke
Argument 3
Trust, transparency, and privacy must underpin all child‑focused AI tools (Saadhna Panday)
EXPLANATION
Saadhna argues that any AI tool for children must be built on trust, transparency, and robust privacy safeguards, ensuring that children’s rights are protected.
EVIDENCE
She calls for trust, transparency, privacy, and safety as non-negotiable, referencing concerns about privacy and safety raised earlier and emphasizing the need for these principles in all child-focused AI tools [310-314][300-304].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open Forum #38 and Privacy concerns intensify as Big Tech announce new AI-enhanced functionalities both stress trust, transparency, and privacy as foundational for child-focused AI [S31], [S33]; WS #172 Regulating AI and Emerging Risks for Children’s Rights reinforces these principles [S28].
MAJOR DISCUSSION POINT
Rights‑based design for child AI
AGREED WITH
Richa Menke, Atish Joshua Gonsalves, Tom Hall
Argument 4
Urban‑rural AI divide threatens equitable education outcomes (Saadhna Panday)
EXPLANATION
Saadhna highlights the stark contrast between AI exposure in urban schools and its scarcity in remote tribal areas, warning that this divide could widen educational inequities.
EVIDENCE
She contrasts AI presence in urban Delhi homes and schools with its near-absence for a tribal girl in rural Jharkhand, underscoring the uneven rollout of AI in education [162-168].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Digital Public Infrastructure emphasizes the importance of local language and access to mitigate urban-rural gaps [S24]; Welfare for All discusses challenges of applying uniform standards across diverse locales, highlighting the urban-rural divide issue [S25].
MAJOR DISCUSSION POINT
Geographic disparity in AI access
AGREED WITH
Tom Hall, Atish Joshua Gonsalves
Argument 5
Empowering educators is key to delivering effective AI literacy (Saadhna Panday)
EXPLANATION
Saadhna stresses that teachers must be empowered with knowledge, tools, and support to effectively teach AI literacy, positioning educator empowerment as central to successful implementation.
EVIDENCE
She repeatedly invokes the panel’s focus on empowerment and invites audience questions, indicating that teacher capacity is essential for scaling AI literacy [205-210][311-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI calls for teacher empowerment and resources to scale AI literacy [S5]; Rethinking Learning stresses professional development and institutional backing for teachers introducing AI [S27]; How nonprofits are using AI-based innovations notes teacher support as critical for impact [S26].
MAJOR DISCUSSION POINT
Teacher empowerment for AI literacy
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Asha Nanavati
Argument 6
Emphasize child agency, critical thinking, and responsible AI use (Saadhna Panday)
EXPLANATION
Saadhna calls for AI education that foregrounds child agency, critical thinking, and responsible use, arguing that children should be active creators rather than passive consumers.
EVIDENCE
She describes children as not passive recipients but agents who can consume, shape, and eventually lead AI, and frames the conversation around building agency, critical, independent thinkers [175-179].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI for Children highlights child agency and the need for critical, independent thinking in AI education [S2]; WS #172 Regulating AI and Emerging Risks for Children’s Rights underscores responsible AI use centered on children’s rights and agency [S34].
MAJOR DISCUSSION POINT
Agency‑centric AI education
AGREED WITH
Tom Hall, Richa Menke
Asha Nanavati
1 argument · 140 words per minute · 101 words · 43 seconds
Argument 1
Need affordable teacher training and support for AI in Indian charitable schools (Asha Nanavati)
EXPLANATION
Asha points out that charitable schools in India lack funding for teacher training on AI safety and adoption, and asks whether LEGO plans to support such initiatives locally.
EVIDENCE
She describes her charitable K-12 school in Kerala, notes limited resources for AI teacher training, and asks if LEGO is planning any programs in India to address these gaps [332-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI stresses the necessity of teacher training and resources for safe AI adoption in schools [S5]; How nonprofits are using AI-based innovations and Rethinking Learning both underline the importance of affordable professional development for educators in low-resource contexts [S26], [S27].
MAJOR DISCUSSION POINT
Funding and support for teacher capacity in low‑income schools
AGREED WITH
Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
Nikhil Bawa
1 argument · 116 words per minute · 200 words · 102 seconds
Argument 1
Parents need structured/unstructured resources for home AI learning (Nikhil Bawa)
EXPLANATION
Nikhil asks for guidance and resources that parents can use at home to teach AI, emphasizing the need for both structured curricula and unstructured play to support learning outside school.
EVIDENCE
He requests advice for parents, mentions developing an alternate home curriculum of four hours per week, and asks about resources for unstructured play, self-regulation, and broader AI adoption at home [320-328].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Safeguarding Children with Responsible AI mentions the need for both structured curricula and play-based, unstructured learning resources to support AI education beyond the classroom [S5]; How nonprofits are using AI-based innovations notes the role of community channels (e.g., WhatsApp) in extending learning to homes [S26].
MAJOR DISCUSSION POINT
Home‑based AI education support
Speaker 4
1 argument · 1 word per minute · 1 word · 54 seconds
Argument 1
Brief learner acknowledgment of learning moment (Speaker 4)
EXPLANATION
Speaker 4 offers a short interjection that appears to acknowledge a learning moment, though the content is incomplete.
MAJOR DISCUSSION POINT
Learner acknowledgment
Agreements
Agreement Points
AI literacy is essential and should focus on fundamentals rather than a magical perception
Speakers: Speaker 1, Tom Hall, Atish Joshua Gonsalves, Saadhna Panday
AI literacy essential for future participation (Speaker 1) Teach AI fundamentals, not “magic box” (Tom Hall) Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves) AI must be equitable and keep child agency central (Saadhna Panday)
All four speakers stress that understanding AI – its basic concepts, limits and societal impact – is a prerequisite for future participation and for children to have a voice in AI policy, rejecting the view of AI as a mysterious magic box. [1-6][16-23][24-27][31-34][160-178]
POLICY CONTEXT (KNOWLEDGE BASE)
UNICEF’s policy guidance stresses grounding AI literacy in concrete fundamentals rather than hype, echoing the call to avoid “magical” framing of technology [S51][S52].
Hands‑on, tactile, play‑based learning deepens engagement and mastery of AI concepts
Speakers: Tom Hall, Atish Joshua Gonsalves, Richa Menke, Saadhna Panday
Manipulatives and tactile learning deepen engagement and mastery (Tom Hall) Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves) Play‑driven imagination can harness AI’s creative potential (Richa Menke) Joy of Lego bricks illustrates learning in low‑resource contexts (Saadhna Panday)
The speakers agree that learning through physical manipulatives, building, and playful interaction – whether with LEGO bricks or AI-enhanced play – leads to stronger cognitive development and better grasp of AI principles. [263-270][31-34][36-41][118-128][293-298]
POLICY CONTEXT (KNOWLEDGE BASE)
The “Responsible AI for Children” framework highlights that manipulatives and hands-on activities lead to deeper engagement and mastery, a principle also reflected in LEGO Education’s emphasis on giving children a “screwdriver” to explore AI [S48][S63].
Safety, privacy and ethical safeguards are non‑negotiable in child‑focused AI tools
Speakers: Richa Menke, Atish Joshua Gonsalves, Saadhna Panday, Tom Hall
Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke) Built‑in safety, fairness, transparency, and on‑device privacy safeguards (Atish Joshua Gonsalves) Trust, transparency, and privacy must underpin all child‑focused AI tools (Saadhna Panday) Emphasize policy discussion and safety as core to AI adoption (Tom Hall)
All speakers underline that any AI solution for children must guarantee safety, privacy and ethical design, with data never leaving the device and clear governance, making these aspects a hard floor for product development. [306-314][31-34][46-51][310-314][300-304][187-194][229-236]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources underline non-negotiable safety standards for children, including UNICEF’s child-rights-based AI policy, LEGO’s public commitment to child-online safety, and broader calls for safeguards against unverified content [S51][S62][S60][S65].
Teacher empowerment and provision of resources are critical for scaling AI literacy
Speakers: Tom Hall, Atish Joshua Gonsalves, Saadhna Panday, Asha Nanavati
Teachers require dedicated AI training and resources to scale implementation (Tom Hall) Dedicated teacher portal provides curriculum, guides, and scaffolding (Atish Joshua Gonsalves) Empowering educators is key to delivering effective AI literacy (Saadhna Panday) Need affordable teacher training and support for AI in Indian charitable schools (Asha Nanavati)
The panel concurs that without targeted professional development, teacher toolkits and ongoing support, AI literacy cannot be effectively introduced or scaled, especially in low-resource settings. [344-350][32-38][205-210][311-313][332-342]
POLICY CONTEXT (KNOWLEDGE BASE)
LEGO Education identifies a gap where teachers recognise AI literacy but feel unprepared, urging empowerment and resource provision; similar calls appear in global capacity-building discussions for AI education [S63][S66].
Equity and localization are necessary to avoid widening the AI divide
Speakers: Tom Hall, Saadhna Panday, Atish Joshua Gonsalves
Localization of AI tools into relevant languages is essential (Tom Hall) Urban‑rural AI divide threatens equitable education outcomes (Saadhna Panday) Frugal, screen‑free, age‑appropriate AI concepts enable learning in resource‑limited settings (Atish Joshua Gonsalves)
All three speakers stress that AI tools must be adapted to local languages, low-resource contexts and frugal implementations to ensure that children in rural or underserved areas are not left behind. [381-385][162-168][160-178][238-250]
POLICY CONTEXT (KNOWLEDGE BASE)
UNICEF’s child-rights framework and the Inclusive AI dialogue stress that AI initiatives must be localized and equitable to prevent deepening existing digital divides [S65][S54][S66].
AI should be a tool for creation, not passive consumption; avoid the “magic box” trap
Speakers: Tom Hall, Richa Menke, Saadhna Panday
Children should be enabled to create their own “magic” rather than consume it passively (Tom Hall) Tension between efficiency and imagination; over‑reliance on AI may curb creativity (Richa Menke) Emphasize child agency, critical thinking, and responsible AI use (Saadhna Panday)
The speakers agree that children must move from being passive users of AI outputs to active creators, preserving imagination and critical thinking, and that education should give them the tools to build rather than just receive. [191-194][229-236][130-135][175-179]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions warn against framing AI as a mysterious “magic box” and instead promote its use as a creative instrument, aligning with responsible AI for children guidelines [S52][S48].
Similar Viewpoints
Both emphasize that children need to be active participants in shaping AI’s role in society, linking literacy with agency and equitable involvement. [1-6][160-178]
Speakers: Speaker 1, Saadhna Panday
AI literacy essential for future participation (Speaker 1) AI must be equitable and keep child agency central (Saadhna Panday)
Unexpected Consensus
Current LEGO products deliberately do not incorporate generative AI due to safety concerns
Speakers: Richa Menke, Tom Hall
Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke) Emphasize policy discussion and safety as core to AI adoption (Tom Hall) – includes statement that LEGO does not use generative AI in its products
While many panelists promote AI-enabled learning tools, both Richa and Tom unexpectedly agree that LEGO’s existing offerings intentionally omit AI to meet a high safety bar, highlighting a cautious approach despite overall enthusiasm for AI in education. [306-308][361-363]
POLICY CONTEXT (KNOWLEDGE BASE)
LEGO’s public statements on prioritizing child well-being and avoiding generative AI in current offerings illustrate a precautionary approach consistent with industry safety recommendations [S62][S63].
Overall Assessment

The panel shows strong convergence on the need for AI literacy grounded in fundamentals, hands‑on and play‑based pedagogy, rigorous safety and privacy safeguards, and robust teacher support. Consensus also exists on equity, localization and the danger of treating AI as a magical black box. The only notable divergence is the degree of optimism about deploying AI now versus a more cautious stance, yet even that is bridged by shared safety concerns.

Consensus is high across most thematic areas, indicating a shared vision that AI education must be foundational, safe, equitable and teacher‑driven. This consensus suggests that future policy and product development can build on these common principles to advance inclusive AI literacy.

Differences
Different Viewpoints
Contradiction over whether LEGO Education products currently incorporate AI functionality
Speakers: Atish Joshua Gonsalves, Richa Menke
Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves) Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
Atish describes a LEGO Education offering that runs AI locally on devices, providing safe, hands-on collaborative experiences [31-34][36-41]. Richa counters that none of LEGO’s current products actually employ AI, citing safety and privacy as the reason for deliberately omitting it [306-314].
Whether AI should be deployed in classrooms now with safeguards versus being held back until safety is fully assured
Speakers: Tom Hall, Richa Menke
Teach AI fundamentals, not “magic box” perception (Tom Hall) Children should be enabled to create their own “magic” rather than consume it passively (Tom Hall) Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
Tom argues for introducing AI tools in schools, emphasizing that children need to understand fundamentals and be given the means to create their own solutions, while maintaining safety through policy dialogue and teacher support [187-194][229-236]. Richa maintains a cautious stance, stating that LEGO’s current products avoid AI altogether because safety and privacy are non-negotiable, suggesting a pause on AI integration until those concerns are resolved [306-314].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors broader policy tensions: some experts argue for immediate, safeguarded rollout, while others cite hallucination risks and the need for robust safeguards before classroom adoption [S57][S60][S58].
Impact of AI on children’s imagination and creativity – catalyst or constraint
Speakers: Richa Menke, Tom Hall
Tension between efficiency and imagination; over‑reliance on AI may curb creativity (Richa Menke) Children should be enabled to create their own “magic” rather than consume it passively (Tom Hall)
Richa warns that AI’s efficiency (quick answers) can suppress imagination and confidence, arguing that early personalization may limit creative development [130-135]. Tom counters that AI, when paired with hands-on learning, can empower children to build their own “magic” and thus foster creativity rather than diminish it [191-194][229-236].
Unexpected Differences
Direct contradiction about the presence of AI in LEGO’s current product line
Speakers: Atish Joshua Gonsalves, Richa Menke
Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves) Safety and privacy are non‑negotiable; current products avoid AI use (Richa Menke)
While Atish presents a LEGO Education solution that runs AI locally on devices, Richa explicitly states that none of LEGO’s current products employ AI, citing safety and privacy concerns. This stark inconsistency was not anticipated given the shared corporate context.
Differing views on AI hallucinations as a playful feature versus a risk to be demystified
Speakers: Richa Menke, Tom Hall
Play‑driven imagination can harness AI’s creative potential (Richa Menke) Teach AI fundamentals, not “magic box” perception (Tom Hall)
Richa frames AI hallucinations in generative models as potentially playful, enriching imagination, whereas Tom treats the “magic box” metaphor as a risk that must be stripped away through foundational teaching. The contrast between seeing AI’s unpredictability as a feature versus a hazard was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Research on AI hallucinations highlights the danger of presenting fabricated outputs as playful features, urging demystification and trust-building measures [S57].
Overall Assessment

The panel shows strong consensus on the importance of AI literacy, child agency, and safety, but notable disagreements arise around the actual use of AI in LEGO products, the timing of AI integration in classrooms, and the perceived impact of AI on creativity. These disputes centre on technical implementation versus policy caution, and on whether AI can be a creative catalyst or a potential inhibitor.

Disagreement is moderate – while participants share overarching goals (equitable, safe AI education), they diverge on concrete approaches (immediate AI deployment with safeguards vs. postponement, and even on whether AI is present at all). This level of disagreement suggests that future collaborations will need clear alignment on product roadmaps and shared safety standards to avoid mixed messaging.

Partial Agreements
All three agree that safety, privacy, and trust are essential for AI in education, but they differ on how to achieve it: Tom focuses on policy dialogue and teacher‑led discussions [187-194][229-236]; Saadhna stresses rights‑based design principles of trust, transparency, and privacy [310-314][300-304]; Atish emphasizes technical safeguards built into the product (local processing, data provenance) [31-34].
Speakers: Tom Hall, Saadhna Panday, Atish Joshua Gonsalves
Emphasize policy discussion and safety as core to AI adoption (Tom Hall) Trust, transparency, and privacy must underpin all child‑focused AI tools (Saadhna Panday) Built‑in safety, fairness, transparency, and on‑device privacy safeguards (Atish Joshua Gonsalves)
All agree that teachers need support to deliver AI literacy. Tom proposes a concrete AI toolkit and training days for teachers [344-350]; Saadhna calls for broader empowerment of educators with resources and capacity building [205-210]; Asha asks for affordable, possibly funded, teacher‑training programmes for low‑income schools in India [332-342]. The divergence lies in the scale and funding mechanisms suggested.
Speakers: Tom Hall, Saadhna Panday, Asha Nanavati
Teachers require dedicated AI training and resources to scale implementation (Tom Hall) Empowering educators is key to delivering effective AI literacy (Saadhna Panday) Need affordable teacher training and support for AI in Indian charitable schools (Asha Nanavati)
Both champion hands‑on, tactile learning. Tom cites research showing manipulatives improve spatial awareness and math mastery [263-270]; Atish describes a LEGO product that provides collaborative, hands‑on AI activities built on universal design principles [31-34][36-41]. They differ in focus: Tom emphasizes the pedagogical research base, while Atish highlights product‑level implementation.
Speakers: Tom Hall, Atish Joshua Gonsalves
Manipulatives and tactile learning deepen engagement and mastery (Tom Hall) Lego product delivers safe, hands‑on, collaborative AI experiences (Atish Joshua Gonsalves)
Takeaways
Key takeaways
AI literacy is essential for future participation and must focus on fundamentals rather than treating AI as a “magic box”.
Hands‑on, collaborative, tactile learning (e.g., LEGO bricks) deepens engagement and helps children build real AI understanding.
Safety, privacy, fairness, and transparency are non‑negotiable; current LEGO products run AI locally or avoid AI until standards are met.
Equity and access are critical – there is a stark urban‑rural divide and a need for multilingual, low‑cost solutions and teacher support.
Teachers need dedicated professional development, resources, and scaffolding (teacher portal, 5E model) to scale AI literacy.
Balancing imagination, agency, and risk: children should create their own “magic” and retain agency, not be passive consumers.
Policy discussion with children is important; involving them in shaping AI guidelines empowers agency.
Frugal, screen‑free, age‑appropriate approaches can introduce AI concepts where resources are limited.
Resolutions and action items
LEGO Education will launch its new computer‑science and AI product in schools (April rollout) with built‑in safety safeguards.
A teacher portal with curriculum, lesson plans, and scaffolding (5E model) will be provided to support educators.
An AI toolkit/template for classroom policy discussions will be made available for teachers to facilitate dialogue with students.
LEGO will showcase the product hands‑on at the conference booth (Hall 3) for educators to try.
Commitment to keep AI processing on‑device, ensuring no data leaves the device and maintaining privacy.
Future plans include localization of materials into additional languages and continued research on safety/ethics before adding generative AI to products.
Unresolved issues
Concrete strategies for delivering AI literacy in rural, multilingual, low‑resource classrooms (e.g., Rajasthan, Jharkhand) remain undefined.
Funding and scalable teacher‑training programs for charitable schools in India (as raised by Asha Nanavati) were not resolved.
Specific resources or guidelines for parents to support home‑based AI learning were requested but not provided.
Determination of the appropriate age or stage to introduce generative AI versus screen‑free concepts is still open.
An evidence‑generation and evaluation framework before large‑scale rollout was mentioned as needed, but no plan was set.
How to balance efficiency versus imagination in practice (preventing over‑reliance on AI) remains an ongoing tension.
Suggested compromises
Adopt a “pause and discuss” approach: hold policy conversations with children before deploying new AI tools.
Start with screen‑free, brick‑based teaching of computational concepts, then gradually introduce AI features when safety is assured.
Provide low‑tech AI discussion toolkits that do not require heavy hardware, allowing use in resource‑constrained settings.
Combine structured (curriculum‑based) activities with unstructured play to satisfy both learning outcomes and creative exploration.
Thought Provoking Comments
AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind.
Frames AI adoption as an inevitable societal shift, emphasizing urgency for literacy and policy involvement, which sets the stakes for the entire discussion.
Established the central premise that AI literacy is not optional, prompting subsequent speakers to justify why education systems must act now and to propose concrete strategies.
Speaker: Speaker 1
We need to give the child the screwdriver to take that box apart and really understand what’s going on under the cover… we don’t want them to be passive consumers of AI, but to be designers of what is to come.
Uses a vivid metaphor to shift the view of AI from a magical black‑box to a system that can be deconstructed and rebuilt, highlighting the need for deep, hands‑on understanding.
Redirected the conversation from surface‑level tool use to foundational literacy, leading others (e.g., Atish, Richa) to discuss curriculum design, hands‑on learning, and the importance of building rather than just using AI.
Speaker: Tom Hall
Childhood is a developmental window that closes; what enters that window shapes who we become. AI for play should expand imagination, not shortcut it.
Introduces the ethical tension between efficiency and imagination, questioning whether AI might diminish creative struggle that is essential for development.
Created a turning point where the panel moved from describing products to debating the broader developmental implications of AI, prompting follow‑up comments about bias, personalization, and agency.
Speaker: Richa Menke
We have to ask children what kind of conversation they want to have with AI and let them think through policy questions themselves.
Advocates for child‑centered policy co‑creation, turning the discussion toward participatory governance rather than top‑down implementation.
Shifted the tone from product showcase to democratic engagement, influencing Saadhna’s emphasis on equity and prompting audience questions about parental guidance and community involvement.
Speaker: Tom Hall
Even in the most resource‑constrained settings we can teach computational concepts like probability and loops without screens—using bricks or other tangible tools.
Challenges the assumption that AI education requires high‑tech hardware, introducing the concept of “frugal AI” and screen‑free pedagogy.
Opened a new line of discussion on scalability and inclusivity, leading Saadhna to raise concerns about rural implementation and prompting suggestions for low‑cost, teacher‑led approaches.
Speaker: Atish Joshua Gonsalves
If we optimize AI systems for engagement we get more attention; if we optimize for childhood we get potential. What we choose to optimize matters.
Poses a strategic question about the fundamental goals of AI design, reframing the conversation around value alignment rather than technical features.
Deepened the analysis by introducing a policy‑level dilemma, causing other speakers to reflect on safety, privacy, and the long‑term impact of AI on children’s development.
Speaker: Richa Menke
Safety and privacy are non‑negotiable; we deliberately do not embed AI in our current LEGO products until we are sure the bar is met.
Provides a concrete stance that contrasts with the enthusiasm for AI integration, highlighting responsible product development and risk aversion.
Tempered the earlier optimism, prompting Tom and Atish to discuss scaffolding, teacher training, and the timeline for safe AI deployment.
Speaker: Richa Menke
Equity is a core concern: AI is reaching urban children in Delhi but not a tribal girl in rural Jharkhand. We need scalable, evidence‑based solutions that don’t widen inequality.
Brings the social justice dimension into focus, reminding the panel that technological solutions must address systemic disparities.
Steered the conversation toward practical challenges of deployment in diverse contexts, leading to questions about language localization, teacher capacity, and frugal AI approaches.
Speaker: Saadhna Panday
Hands‑on, collaborative learning—using the 5E model and open‑ended design challenges—creates the ‘space between question and answer’ where magic and inspiration happen.
Links pedagogical theory to concrete classroom practice, emphasizing the importance of structured yet open learning experiences for AI literacy.
Provided a practical framework that other speakers referenced when discussing curriculum design, reinforcing the panel’s consensus on experiential learning.
Speaker: Atish Joshua Gonsalves
We must not rush to put the fastest, best model in kids’ hands; we should consider what is right for them, even in well‑resourced settings.
Reiterates caution against premature adoption of advanced AI, highlighting ethical responsibility over technological hype.
Reinforced earlier safety concerns, influencing the final remarks about pausing, reflecting, and ensuring that any AI integration aligns with child‑centric values.
Speaker: Atish Joshua Gonsalves
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a product‑centric showcase to a nuanced debate about ethics, equity, and pedagogy. Early framing of AI as inevitable set a sense of urgency, while Tom Hall’s screwdriver metaphor reframed AI as a tool to be deconstructed, prompting deeper exploration of foundational literacy. Richa Menke’s focus on imagination versus efficiency and the optimization question introduced a strategic, values‑based lens, steering the panel toward considerations of developmental impact and societal goals. Contributions from Atish and Saadhna highlighted practical pathways for inclusive, low‑tech implementation and the stark equity gaps that must be addressed. Repeated emphasis on safety, privacy, and responsible rollout acted as a grounding force, ensuring that enthusiasm for AI did not eclipse caution. Collectively, these thought‑provoking comments redirected the dialogue toward child‑centered, equitable, and ethically sound AI education, shaping a balanced and forward‑looking conclusion.

Follow-up Questions
How can AI literacy be implemented effectively in rural, multilingual, multilevel classrooms such as those in rural Rajasthan?
Seeks strategies to make AI education equitable and relevant for underserved, linguistically diverse settings.
Speaker: Saadhna Panday
What resources and guidance can be provided to parents to support AI learning at home, given schools adapt slowly?
Parents need structured and unstructured play materials and advice to build a home AI curriculum while schools lag behind.
Speaker: Nikhil Bawa
What recommendations exist for responsible AI play adoption beyond the classroom, especially in home environments?
Raises concern about rapid, competitive AI adoption at home and asks for best‑practice guidance.
Speaker: Nikhil Bawa
Is LEGO planning initiatives in India to support teacher training and AI safety practices for charitable schools?
Requests localized support and capacity‑building for schools with limited funding.
Speaker: Asha Nanavati
How can we make AI learning interactive and supportive of creativity in the classroom?
Looks for methods to move beyond passive AI outputs toward engaging, creative experiences for students.
Speaker: Saadhna Panday
What should AI systems be optimized for (e.g., engagement vs childhood development) and how does that choice affect outcomes?
Highlights the need to align AI objectives with child development rather than merely maximizing attention.
Speaker: Richa Menke
What evidence is needed to assess AI’s impact on learning outcomes and equity before scaling deployments?
Calls for rigorous research to avoid unintended harms and ensure equitable benefits.
Speaker: Saadhna Panday
What safety, privacy, and ethical standards should govern AI‑enabled children’s products, especially regarding local vs cloud processing?
Emphasizes the necessity of non‑negotiable safeguards for child wellbeing in AI products.
Speaker: Richa Menke
How effective are current teacher training and professional development programs for integrating AI literacy into curricula?
Points out the existing gap in teacher preparedness and the need to evaluate training models.
Speaker: Tom Hall
How can bias in AI models trained on limited or non‑representative data be identified and mitigated in educational contexts?
Demonstrated in the demo; requires systematic study of bias mitigation techniques.
Speaker: Atish Joshua Gonsalves
What are the feasibility and impact of frugal, screen‑free AI education approaches for low‑resource settings?
Explores age‑appropriate, resource‑light methods for teaching AI concepts without screens.
Speaker: Atish Joshua Gonsalves
What are the long‑term effects of using AI hallucinations as playful features in children’s play?
Considers whether generative AI ‘mistakes’ can be beneficial or harmful in play contexts.
Speaker: Richa Menke
How does hands‑on, manipulatives‑based learning compare to screen‑based methods in fostering AI understanding and retention?
Claims deeper engagement with physical tools; needs empirical validation.
Speaker: Tom Hall
How can multilingual AI tools and automated translation be ensured to be high‑quality for diverse classroom contexts?
Ensures AI resources are accessible and effective across languages.
Speaker: Tom Hall
What is the optimal balance between child agency and scaffolding in AI‑enhanced curricula (e.g., using the 5E model)?
Seeks frameworks that support autonomy while providing necessary guidance.
Speaker: Atish Joshua Gonsalves
How can AI literacy be implemented effectively in humanitarian contexts such as refugee camps?
Draws on experience with UN Refugee Agency; needs adaptable, low‑resource solutions.
Speaker: Atish Joshua Gonsalves

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.