Responsible AI for Children: Safe, Playful and Empowering Learning
20 Feb 2026 16:00h - 17:00h
Summary
The panel convened to examine how AI literacy can be built into children’s education and why it is essential for their future participation in an AI-driven world [5][152]. Tom Hall argued that AI should be taught as a technology rather than a “magic box,” emphasizing that children need to understand underlying concepts such as probability, data sensing and algorithmic bias instead of merely consuming AI outputs [16-21][23-26]. He warned that many young learners treat generative AI as a shortcut, which risks passive consumption and undermines critical thinking, so curricula must move beyond excitement to mastery of fundamentals [15][18]. Atish Joshua Gonsalves described LEGO Education’s new computer-science and AI product, which is built on four values (child agency, safety, transparency and hands-on collaborative learning) and is designed to run AI features locally to protect privacy [32-34][306-308]. The demo showed students customizing a pre-trained image classifier to control a robot’s movements, teaching them that AI predictions are probabilistic, improve with more data, and can contain bias [46-50][48-51]. Richa Menke highlighted that AI can enrich play by inspiring imagination, but cautioned that over-reliance on efficiency or personalization may erode children’s creative struggle and long-term agency [97-104][130-138][146-151]. She noted that generative AI’s “hallucinations” might be playful features in games, yet the technology is not yet ready for childhood without careful deliberation about its impact [124-127][115-116]. Saadhna Panday of UNICEF India stressed that AI’s benefits are unevenly distributed, citing the contrast between urban Delhi and rural Jharkhand, and called for evidence-based, equitable solutions that keep teachers and children at the centre [162-165][170-176].
She also pointed out the need for multilingual, low-resource tools and for safeguarding children’s privacy, trust and participation in AI-enhanced learning environments [205-212][306-308]. The panel reached consensus that empowering teachers with clear policies, scaffolding resources and a “5E” instructional model is crucial for scaling AI literacy responsibly [214-218][369-372]. Participants agreed that hands-on, collaborative activities, such as LEGO’s design challenges and the First LEGO League, provide the “magic” of creation while reinforcing technical concepts [263-276][376-378]. Finally, the discussion concluded that AI literacy must be treated as a core modern literacy, integrated with safety, equity and agency, so that children become designers of future AI rather than merely its users [26-27][386-395].
Key points
Major discussion points
– AI literacy must go beyond “magic-box” usage and teach foundational concepts.
Participants stressed that children should understand how AI works, not just treat it as a mysterious tool. Tom Hall highlighted the need to move from “magic” to a “screwdriver” that lets kids see under the hood [19-24]; Atish echoed this by defining AI literacy as “understanding today’s technology… and the fundamentals” [31-34]; early remarks from Speaker 1 framed AI as an unavoidable, essential skill [3][5].
– Hands-on, collaborative play is the preferred vehicle for teaching AI.
LEGO representatives described a learning model that combines physical building with coding to give children agency while keeping safety front and center. Atish detailed the classroom demo, the “AI Dancer,” and the emphasis on active creation [31-34][36-41][46-51]; Richa outlined LEGO’s four guiding values: child agency, safety, hands-on immersion, and foundational knowledge [94-112][306-309]; Tom Hall linked tactile learning to stronger brain engagement and deeper mastery [263-280].
– Equity and contextual relevance are critical for scaling AI education.
Saadhna highlighted the stark contrast between urban Delhi and rural Jharkhand, urging solutions that work in multilingual, low-resource settings [152-164][210-214][251-259][381-385]; Atish added that “frugal AI” and age-appropriate, screen-free approaches can bridge gaps in underserved environments [238-250].
– Safety, privacy, and ethical safeguards are non-negotiable.
Across the panel, participants agreed that any AI interaction with children must meet high safety standards. Atish listed LEGO’s safety rules (no anthropomorphising, local data processing) [31-34]; Richa reiterated that privacy and safety are foundational and that current LEGO products do not embed AI for this reason [306-309]; Tom Hall warned against “shotgun” adoption without rigorous safety research [344-363]; Saadhna asked how to balance joy with risk [300-304].
– Teachers and parents need concrete resources and capacity-building.
The discussion repeatedly called for tools, training, and support structures for educators and families. Atish noted the need to empower teachers before dropping new standards [81-88]; Tom Hall suggested a facilitated AI-policy conversation template for classrooms [214-236]; audience questions from Nikhil and Asha asked for parent-focused curricula and affordable teacher training [323-328][332-340].
Overall purpose / goal
The panel aimed to define a responsible, inclusive roadmap for AI literacy in K-12 education, showcasing how hands-on, play-based learning can demystify AI while simultaneously addressing safety, equity, and the need for teacher and parent support, so that all children can become informed creators rather than passive consumers of AI.
Overall tone
The conversation began with an upbeat, visionary tone, celebrating children’s curiosity and the potential of AI-enhanced play. As the dialogue progressed, the tone shifted to a more cautious, reflective stance, emphasizing ethical safeguards, equity challenges, and the urgency of building teacher capacity. Throughout, the tone remained collaborative and solution-oriented, moving from optimism to a balanced mix of hope and responsibility.
Speakers
– Saadhna Panday
– Area of expertise: AI literacy, education policy, child protection
– Role / Title: Chief of Education, UNICEF India; Panel moderator
– Asha Nanavati
– Area of expertise: Education leadership, AI adoption in schools
– Role / Title: Representative, Alliance Educational Foundation (runs a charitable K-12 school in Kerala) [S4]
– Tom Hall
– Area of expertise: AI literacy, hands-on learning, educational technology
– Role / Title: Vice President and General Manager, LEGO Education [S5]
– Nikhil Bawa
– Area of expertise: AI and education commentary, parent resources
– Role / Title: Writer/Researcher on AI and education (independent) [S7]
– Richa Menke
– Area of expertise: Interactive play, AI-enabled learning products, safety & privacy
– Role / Title: Head of Interactive Play, LEGO Group [S10]
– Speaker 4
– Area of expertise: (not specified)
– Role / Title: (not specified; appears as an audience participant or brief interjector) [S11]
– Atish Joshua Gonsalves
– Area of expertise: AI-driven educational product design, hands-on classroom implementation
– Role / Title: Product lead / presenter for LEGO Education AI & Data curriculum (inferred from presentation) [S14]
– Speaker 1
– Area of expertise: (not specified; appears to be a student voice)
– Role / Title: Student participant / youth representative in the discussion [S16]
Additional speakers:
– Steve – referenced by Richa Menke (“Thanks, Steve.”); role/title not provided in the transcript.
Opening framing – Speaker 1 opens the session by likening artificial intelligence (AI) to taxes, arguing that AI is now unavoidable and that children must be equipped to engage with it or risk being left behind [2-6][3][5]. He stresses the need for AI literacy and for young people to have a voice in AI policy because “AI literacy is really important” [2-6][1-6].
Tom Hall – why AI must be taught as technology – Hall expands the opening premise, warning that treating AI as a “magic box” creates a passive-consumer mindset. He uses a screwdriver metaphor to argue that children should be able to open the box and understand foundational concepts such as probability, data sensing, algorithmic bias and the probabilistic nature of AI predictions [16-27][15][18-21][19-24][25-27]. Hall calls for AI literacy to become a modern core literacy alongside maths and reading, not an elective [26-27]. He also cites the 2014 UK CS GCSE rollout failure, attributing it to a lack of trained teachers and an outdated curriculum [300-312].
Atish Gonsalves – LEGO Education product overview – Atish introduces LEGO Education’s new computer-science and AI offering, grounding it in four guiding values: child agency, safety & well-being, transparency, and hands-on collaborative learning [32-34][94-112]. He outlines concrete safety rules – no anthropomorphising of AI, on-device processing so data never leaves the device, clear data provenance for all models, and universal design to support neuro-diverse learners [31-34][46-51]. The live demo of the “AI Dancer” shows pupils training a pre-trained image classifier with their own pose data, observing how confidence scores shift as they move [46-51]; the demo illustrates that AI is probabilistic, improves with more training data, and can be biased when the training set is not representative [48-50]. Atish also references the 5E instructional model (engage, explore, explain, elaborate, evaluate) used in the LEGO Education Teacher Portal [32-38][369-372] and mentions the First LEGO League as an example of open-ended, collaborative AI projects [340-350].
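The demo’s central idea, confidence scores rather than hard yes/no answers driving the robot’s behaviour, can be sketched in a few lines. The sketch below is purely illustrative: the function name, pose labels and threshold are assumptions for this summary, not LEGO’s actual API (the Coding Canvas uses its own block-based interface).

```python
# Illustrative sketch: a classifier yields a confidence score per pose
# label, and an event fires only when the top prediction is confident
# enough. This mirrors the "80, 70, 90 percent" idea from the demo,
# in contrast to traditional 0-or-1 program logic.

def pick_event(confidences, threshold=0.7):
    """Return the event name for the most likely pose, or None if unsure."""
    label = max(confidences, key=confidences.get)  # highest-scoring pose
    if confidences[label] >= threshold:
        return f"on_{label}"   # e.g. trigger the robot's matching move
    return None                # not confident enough: do nothing

# A frame where the classifier is fairly sure the left hand is up:
print(pick_event({"left_hand_up": 0.86, "right_hand_up": 0.09, "both_up": 0.05}))
# An ambiguous frame: no event fires.
print(pick_event({"left_hand_up": 0.48, "right_hand_up": 0.41, "both_up": 0.11}))
```

The same sketch also hints at the lesson’s bias point: if the training data underrepresents some children, their poses score below the threshold and their events simply never fire.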
Lesson “Strike a Pose” – Speaker 1 describes the “Strike a Pose” activity, which combines LEGO bricks, the Coding Canvas, and a custom classifier. Students build a robot, collect pose data, train a classifier, and present their work, thereby moving from users to designers of AI [45-58][40-42][55-73]. The lesson follows the 5E structure and reinforces the three core lessons highlighted in the demo [48-50].
Atish – teacher support & frugal AI – Atish emphasizes the LEGO Education Teacher Portal, which provides curriculum, lesson plans and scaffolding aligned with the 5E model [32-38][369-372]. He promotes “frugal AI” approaches that teach computational concepts such as loops and probability using bricks alone, without screens or heavy hardware [238-250][243-247].
Richa Menke – SmartPlay & SmartBricks – Richa presents the SmartPlay platform, a screen-free, sensor-driven play system that responds with sounds and motions but currently does not employ generative AI for safety reasons [100-108]. She outlines three tensions that must be balanced when introducing AI to children: efficiency vs. imagination, personalization vs. identity, and assistance vs. agency [120-138]. Richa also reiterates the non-negotiable safety and privacy safeguards, echoing Atish’s design rules [306-314].
Saadhna Panday (UNICEF India) – equity & evidence – Saadhna highlights the stark contrast between AI-enabled education in urban Delhi and the near-absence of such tools for a tribal girl in rural Jharkhand, warning that AI could exacerbate existing inequalities if deployed irresponsibly [152-176][170-176]. She cites AI-driven early detection of pancreatic cancer as a motivating example of AI’s societal impact [160-168]. Saadhna calls for multilingual, low-cost, evidence-based solutions that keep teachers and children at the centre of design [205-212][251-259].
Panel Q&A –
* Tom Hall reiterates that LEGO provides a template for classroom AI-policy discussions, encouraging a “pause-and-discuss” approach where teachers and children jointly shape AI policies before tools are introduced [214-236][229-236].
* Atish stresses the importance of frugal, age-appropriate tools and the teacher portal for scaling AI literacy [238-250].
* Richa reinforces the safety-first stance, noting that none of LEGO’s current products use generative AI until safety standards are met [306-314].
* Nikhil Bawa asks for parent-focused resources and guidance on supporting unstructured play [380-388].
* Asha Nanavati queries affordable teacher-training models for charitable schools in India [390-398].
Closing remarks – Saadhna thanks the participants and restates that the responsibility to protect children while delivering equitable, evidence-based AI education rests on all stakeholders. She calls for rapid yet safe empowerment of teachers, learners and parents, and reaffirms the shared commitment to treat AI literacy as a core modern literacy embedded in hands-on, play-based pedagogy, upheld by the highest safety and privacy standards, and accessible to every child regardless of geography or resources [386-395].
Across the session the panel reaches consensus that AI literacy is essential for future participation, must focus on fundamentals rather than black-box perception, benefits from tactile collaborative learning, requires non-negotiable safety, privacy and fairness, depends on teacher empowerment and resource provision, and must be delivered through equitable, localized and frugal approaches to avoid widening the AI divide [1-6][16-23][24-27][31-34][381-385][152-164][238-250].
curious how it works and I think that a lot of kids are. I would love to learn how it can be used in everyday life and how it can be used as an accurate source of information. AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind. I definitely want to be a part of solving big problems. We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think. Bye.
He breaks me every time. These were children that we brought into a school in California in December. No actors in there. There’s just a lot of children with opinions, and the little boy at the end, he just had a lot to say. He is very wise. But those were the views of just some smart, inspiring young people. They’re not just eager to use AI; I think you can see they’re especially eager to understand and to build things with it. And just as you saw, they have some really clear ideas about how it should and shouldn’t be used in today’s classrooms. But of course, you know, excitement and confidence are not the same as mastery or comprehension.
We do see an unfortunate trend where children do not understand the fundamentals of the systems they’re interacting with. And I think you can particularly see that in younger children, who often see generative AI systems as a kind of magic box that they can… into, where, you know, you type in a text or a question and out come images and videos and entertaining things, and maybe even the answer to a history essay question. I think we need to be really clear that AI is not magic. It’s not a magic toolbox; it’s a technology system. And foundational AI literacy isn’t about teaching children how to use this magic box. I think far more importantly, it’s: how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover? So while, you know, supporting children to use AI tools safely, ethically and effectively today is important, I think far more it’s about equipping them with the knowledge and the tools, the confidence, to build what is yet to come. So therefore our definition of AI literacy, when we talk about it, it’s about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.
So I think that we have underestimated the role we have to play in preparing children today. We don’t want them to be passive consumers of AI. Instead, we really believe that we should be arming them with the tools, the literacies that are required to lead, to design, to create. And our goal is not about sort of robot-proofing our children for what’s coming at them, but just making sure that they are ready to build a better future and they’ve got the tools in their hands. So let’s talk about AI literacy as understanding the foundations of AI, the foundational concepts of computer science and AI: understanding probability, how computers sense the world as data points through data sensors, algorithmic bias and all of the nuances of that. We don’t want that to be an elective or selective choice for just the few. We believe that these concepts have to be elevated to the status of modern literacy alongside maths and reading, problem solving, creativity and collaboration. And I think it’s best if we show you how we plan to do this in classrooms. So I’m going to hand over to Atish, and we’re going to run a live demo, which is always fun at a conference event.
Great, thanks, Tom. And I’m also delighted to introduce AI Dancer, who’s on the table here, who hopefully will do some dancing soon as well. So, yeah, very excited to share. I’m going to share a bit more about how we’ve translated some of these principles that Tom was talking about into the product. I’m really excited to shout about our new computer science and AI product, which is just fresh off the press: we just announced it in January and it will hit schools in April. But we need to do all of this very responsibly. We saw the kid earlier in the video say that AI should be safe, fair, transparent; this is a very wise kid, right? And we really agree. At LEGO Education we’ve established clear guidelines for how this should work, so let me step you through some of them.

AI should be safe. We do not generate any text or any media, and we do not anthropomorphize (I got that right this time; it’s just a fancy way of saying we do not make kids think that AI is human). We do not want them forming any unhealthy emotional bonds. We ensure that all our digital products are rooted in the principles of universal design: we are designing for kids who have neurodiversity and for kids who have different learning needs, so it’s really important that our products are designed in a very fair way. Transparent: all the models we use should have very clear data provenance, so we understand where the data that trained those models has come from, and whether the models have been trained on different geographies, on different kinds of kids, on different kinds of adults. Ensuring that these models have clear data provenance is super critical for us. And then finally, privacy. I just want to stress that in all our products, AI features run locally on the devices. Nothing ever leaves the device, nothing ever goes to us at the LEGO Group, nothing goes to third parties, and no login is collected. In terms of the training, whether the kids are building their own AI models or using pre-existing models, nothing ever leaves. So safety and student well-being is a red line, a non-negotiable for us.

Everything we do is grounded in decades of education research, and the way we use AI is very important to us. What that research shows is that kids learn best when they are building, when they’re using their hands and really creating; we’ve seen this very much at LEGO Education through years of research. Now more than ever, children need to learn, and need to learn together. So much of computer science and AI today is taught with kids sitting in front of a screen with headphones on, learning by themselves, and we don’t see that as a vision for learning. For us, kids should be building together, coding together, experimenting together, tinkering together and sharing together. That is really our vision of how kids should be learning computer science and AI, so that when they tackle these new technologies they also have the cross-cutting skills to deal with them in the real world. Bringing this all together, at LEGO Education we have these four values that govern our approach to AI literacy. We prioritize child agency and engagement to ensure students are active participants in their own learning journeys. We empower students with the foundations of AI that Tom was talking about, foundations that remain relevant as the technology evolves.
We uphold child safety and well-being as non-negotiable for every AI interaction in the class, and we foster hands-on, immersive and collaborative experiences that inspire creativity and shared learning. So those are really the four principles driving all of this. So how do we bring this into a classroom? How do we, with our products, make sure it’s hands-on, understandable and safe for kids? I would encourage you, after the session, to go to the booth (I think it’s in Hall 3) and actually see these products in person, get hands-on with them, try them out yourself. So we’re really helping students to build real AI literacy by demystifying how AI works.
Through these playful features and lessons, learners explore concepts like computer vision, probabilistic thinking, classification and machine learning, while seeing their ideas come to life. The result is student agency: kids not just using AI but actually understanding and building with it. So what better way to show you how kids are using it than for me to actually make you use it. So here we have a lesson which is about teaching kids about pre-trained classifiers. This is in the last unit: once they’ve gone through some core principles of computer science, and they’ve learned about basics and events and loops and data structures, at the end they are looking at AI and data. Here they’re learning about how you can use a pre-trained classifier, a model that already exists, to bring their AI Dancer to life.
One thing you’ll notice here, when the code is up, is that the camera they can use is off by default. This is all in line with the principles of AI safety: turning it on is an explicit action the kids are taking. And here, when I hit play now… okay, that’s why I have a video. Okay, no worries. It’s always fun trying to do a live demo; we always have a backup. So yeah, you can see that as I’m lifting my hands up and down, you’re seeing the different probabilities changing here. What the kids are learning through this is that with traditional computer science you’ve got zeros and ones, things can be on and off; with AI, what they’re learning is that there’s an 80, 70, 90 percent chance that I’ve lifted my left hand up, or my right hand up, or both hands up, and then that’s triggering the different events. They’ve learned about events in earlier lessons, and that’s what I’m talking about triggering.
So they learn that AI is not always right. They’re learning that the more data that’s trained into the model, the better it gets. And they also learn from an ethics perspective that if the AI model is not trained with enough kids’ examples, it will have biases in it as well. So these are very core principles of AI, but taught in a very simple and playful way and making the AI dancer come to life. So
Ready to excite your students with computer science and AI? This lesson is called Strike a Pose. Students will learn how to customize an AI classifier and program AI-activated events. We’ll kick off with a big question to spark curiosity: how could you train a robot to follow your movements? We will explore the topic through the computer science concepts AI and data. The question is tied to a real-life example, how AI can be trained to recognize images through data. This makes it more relatable to both students and teachers. In groups of four, each student picks a minifigure, which indicates their role in the collaborative building process. The group will build a robot with movable arms and discuss how it might work.
Then it’s time to get hands-on with coding. Groups will open LEGO Education Coding Canvas, enter the lesson PIN, and connect their hardware. Students create and train their own custom AI classifier by posing in front of the camera and capturing pose data. With simple pre-made code and their classifier, groups explore making the robot mimic their arm poses. Group members take turns so everyone gets hands-on: two students develop the build of the robot while the other two iterate on their code, and later they swap. Students present their robot, talk about their iteration process, and discuss how they created and trained their classifier. At the end of this lesson, students will be able to say: I can create a custom classifier.
I can use pose data to train a custom classifier. I can describe how to create a custom classifier and use data to train it. This is the third of four lessons in the AI and Data unit, where students explore how computers learn from data. In the following lessons, students investigate how data quality and quantity can improve how their AI detects their poses. At the end, they apply what they’ve learned through an open-ended design challenge. All materials for this lesson can be found on the LEGO Education Teacher Portal: lesson plan, ready-to-use classroom presentation and facilitation notes. No extra prep time needed.
So you got to see how the AI model, the AI Dancer, is really used in the classroom. What you also saw in the classroom is that the kids had meaningful roles in the building process as they were building out the model, but also meaningful roles when they were coding and training the AI as well. And all of this is for the kids, but none of it can happen without teachers, right? We cannot simply drop new standards and mandates on educators without support for them. You saw the video briefly reference the teacher portal, where teachers get all the resources and the support they need to bring computer science and AI to kids.
We know that most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, they’re teaching science, they’re teaching English, and so they need to be prepared to really scale this up as well. So we really see this not as a challenge of access to tools, but of access to confidence. There are a couple of very nice quotes here on this. With that, I’m very pleased to hand over to Richa, who leads product development on the retail side and is behind the super exciting SmartBricks, if you’ve seen those.
Thanks, Steve. Hi, everyone, good morning. Thank you for having me. So, my name is Richa Menke. I head up interactive play at the LEGO Group. So, we’ve just heard an important call to action in terms of AI literacy. So, preparing children to understand and navigate an AI -powered world. And this matters enormously. But what I’d like to do is spend a few minutes discussing the other side of this question, which is, how do we prepare AI for kids and imagination? And part of the reason we’re here is that we believe our focus on play and imagination not only unlocks exciting new play experiences, it might just be the unlock to a more inclusive and empowering future of AI.
So, childhood, as we know, is formative. It’s not a market opportunity; it’s a developmental window that closes. What enters that window shapes who we become: our sense of confidence, our curiosity, our relationship with struggle and creation. And very importantly, that shaping can often be invisible. So this is very important to us in what we do in the Creative Play Lab, which is the innovation team at the LEGO Group. What we do is look at how we create more and more relevant play experiences for kids, how we employ new technologies in service of better play for kids, but always keeping in mind our DNA as the LEGO Group: that hands-on, minds-on play experience that we all love.
So eight years ago, our team asked the question: in a world of digital screens, how could we offer kids more interactivity in their LEGO play experiences, but without a screen? And we were really, really committed to this and spent eight years getting there. We just launched, in January, the SmartPlay platform, which is a new dimension of LEGO play. What this is: as the child is playing with the SmartBrick in their models, the play actually responds with appropriate sounds and behaviors. So imagine you have your Star Wars X-Wing; the way you move it around, you know, if you fly with it, it’ll swoosh, and if you drop it, it’ll make a crash sound.
So, you know, it’s really responsive to the kid. And all of this without a screen. Without a screen. That was very, very important to us. And also without AI. And we just, we didn’t need AI in this solution. But, you know, also, we’re not entirely sure if AI is ready for childhood. We really believe that childhood deserves deliberation. And that deliberation might be an unlock, as I mentioned, to the future of AI. So first of all, AI holds tremendous potential when you think about play. When you think of the creative barriers that kids face in play. So for example, I’m sitting with my brick bin, I have a ton of bricks. I don’t know where to start, this fear of blank canvas.
AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help us better understand a child’s intent, so we could offer better, more relevant, meaningful experiences. And one of my favorite aspects, which I think is super interesting, is that generative AI is probabilistic. In other contexts, like productivity, a hallucination is a bug. But when it comes to play, maybe that hallucination is just a playful feature. So there’s huge potential in what AI could bring to offer better play. But of course, as you know, there are many challenges that need to be addressed, and there are three… key tensions that we think are really important to address when we think about kids and childhood.
So first of all, it’s this tension between efficiency and imagination. If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination. And does that rob kids of the opportunity to really develop their imagination and, more importantly, develop confidence in their own imagination? Personalization and identity: a child at seven is not the same as who they’re going to be at 17. So if we start personalizing the experience for who they are at seven, are we holding them back? And then finally, assistance and agency: are we raising kids for whom it’s very easy to prompt, but who don’t have the ability to really persevere through?
So if I can get an answer just like this, I don’t have to wait. I don’t have to struggle. These are some of the key tensions that we see. And of course, there’s a lot of opportunity, but we feel the responsibility to ensure that these are addressed. So when we develop new play experiences, we ask ourselves: does this increase or decrease the choices that a child has? So, child agency. Does this expand imagination? I’d encourage you to ask yourself that question as you develop AI solutions. Does it preserve that healthy developmental friction, where you have to actually think? And finally, just: would I want this shaping my child’s inner voice? It’s a way to really think about what’s right.
And I’d love to leave you with this question that we spend a lot of time thinking about: as we look at AI systems today, what exactly are we optimizing for? That choice is so important. If we optimize AI systems for engagement, what we’re going to get is more attention. But what if, what if… if we optimize for childhood, then we’re going to optimize for potential. Thank you very much.
All right. Good morning, everybody. I’m Saadhna Panday, and I’m the chief of education at UNICEF India. It’s a pleasure to moderate today’s panel discussion on AI literacy and children. We’ve heard a lot at the summit about the wonder of tech; it really feels good to talk about the wonder of children and of education. So I want to thank LEGO for creating the space for this discussion. We all know that AI has brought a step change in how we live, work, and play. And there’s no doubt that it is impacting children’s lives and how they experience education. The problem is that it is doing so unevenly.
For a child living in urban Delhi, AI has found its way into their education either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much. Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evidence-based solutions. Two to three decades of digital learning has yielded small-scale wins and modest impact on learning. And yet we’ve seen the massive impact of AI already on health systems, and that gives us tremendous hope. I keep repeating this example because I’m fascinated with it. In the area of radiology, AI has helped the diagnosis of pancreatic cancer 438 days earlier than would have been normally expected.
We were previously diagnosing pancreatic cancer at the fourth stage. We can now diagnose it at stage one, and AI diagnoses it with greater accuracy than any human ever can, and this without touching a patient. That makes me feel excited. We are looking for that kind of accelerator in education: something that’s going to bring efficiency and quality without widening inequality and, as you’ve said, that remains deeply human-centered, because we know that learning is an inherently social process. We cannot be naive about this. We are walking a tightrope with something that is scaling so far and evolving so rapidly, but anybody who’s worked in an education system knows it’s a big ship; it takes a wide berth to turn. Even with that, we are looking for a public good out of AI, because we need it. These are really tough interests to marry, but it has been done for vaccine rollout, and it is being done in countries like Estonia right now within the education space. Through all of this, you got it bang on: we’ve got to keep teachers, pedagogy and curricula at the center, and more than anything else we need to keep children at the center, matching their right to learn by multiple modes, including tech, with their right to protection, participation and privacy. Keep that in mind. But time and again we make the error of underestimating the capacity of children.
They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time. So today’s conversation is about agency. How do we build AI that empowers children to become creative, critical, independent thinkers, that maximizes the best of AI but offsets its risks? To help us through that conversation, I have Tom and Richa. Welcome again, and we’re looking forward to a very robust engagement this morning. Okay. So Tom, we’re going to start with you. You talked about AI sometimes feeling magical, that it’s abracadabra and voila, something beautiful appears. And we know how children love magic.
They really become enthralled with it…
Children do indeed love magic, don’t we all? And we all like fast results. Increasingly, we have much shorter attention spans than we had maybe even 10 years ago, and so we’re all looking for quick fixes. I think we’re overlooking the fact that children now have immediate access to data and information that they trust inherently from the get-go, and they will take an answer and feed it back as if it is gospel. So there is this real danger that AI is indeed seen as a magic box, particularly generative AI. And it’s amazing that children have this inherent curiosity; the LEGO Group celebrates that curiosity every day.
It’s a wonderful thing. But as I said, I think it’s a real mistake if we don’t teach children to question the magic and actually make magic for themselves. That’s why we are so passionate about these fundamentals of AI literacy: if we simply hand children a box that promises quick magical results, I think we are really short-selling them. I’d much rather we hand over the screwdriver, hand over the compass, and allow them to take things apart and start to create their own ideas. I’m not sure if I addressed your question there, but the magic is something we really want children to create for themselves. And I don’t think we should be under any illusion that they’re going to work this out without an education system, and a societal system, that takes this responsibility very, very seriously. It’s not about taking this responsibility in a few months’ or a few years’ time; the time is now to maybe stop some things and actually start a fundamentally different approach.
…losing the responsibility to protect them.
Thank you. Thank you for the question. Yes, it’s challenging because kids have access all the time. You can’t stop it. As you say, they have a mind of their own. But I think, as we’ve seen even with social media, maybe we don’t always understand the long-term consequences. While I can have an immediate reaction and something that makes me happy in the minute, what is that going to do in the long run? So this focus on education as a filter to understand the long term, as a kind of compass for what is a better experience, is incredibly important. That’s our position in terms of how we would employ AI.
Wonderful. So there are two things that we need for empowerment. One is foundational skills: the child needs to have a basic level of literacy to be able to engage with language models. Second, critical web and AI literacy. And the model you put out looks fantastic. Now let’s take the model into a real-world classroom. What is it going to look like in rural Rajasthan, where we’ve got multigrade, multilingual, multilevel classes? How do we make this come alive and have relevance for those types of settings?
I think the best thing you can do, and any teachers in this room will know this, is ask the children who are looking at you the question: what type of conversation do they want to have? In the case of AI, we’ve just produced a template to discuss AI policies with your classes, and children will assess this question in a very, very smart, thoughtful way. If we don’t ask them the question, again, we are very guilty of simply publishing something and deciding that it’s in their best interest. Of course we need to guide them, and we’ve got a lot of information that we need to share with them. But let them think their way through this, and the best way to do that is to ask the questions. So, yeah, take a discussion around, you know, where does bias show up in their lives? What might that look like if a technology system leant too heavily on a false set of information?
Teaching them sort of the basics of if-then concepts. I think you can do that in any type of classroom, and you don’t need any type of equipment on the table. You need minds to be switched on, and to do that I think you need to ask children the questions, you need to trust that they’re going to have some thoughts, and you need to help them guide that policy. So that’s something we’d love to see widely spread.
Yeah, maybe just coming in on that. Prior to LEGO, I also worked with the UN Refugee Agency for many years and saw these applications of ed tech in quite rural or humanitarian contexts as well. So I think there are interesting ways to bring some of these concepts to life; I think I heard the phrase frugal AI being used here at the conference. But one of the things, even for us: just because we have access to these powerful models doesn’t mean we need to put them directly into the hands of kids. So even as we look at the education progression from kindergarten right up to grade eight and beyond, age appropriateness is super important.
So even as we’re looking at the littlest ones and how they learn about computational concepts and AI, a lot of where we start is actually completely screen-free. They are working with computer science concepts like sequences and loops, and doing this completely with bricks. And you can imagine in some of these contexts it may be bricks, it may be something else, but it doesn’t involve the hardware or a screen at all. So you can teach concepts of probability and computational thinking even if you don’t have these resources, and this actually aligns well with an age-appropriate progression. But I really challenge the audience as well on this urge to want to put things into kids’ hands directly in any context.
I mean, not just in challenging contexts in rural India, but in other countries as well. Let’s not rush for the fastest and the best model, but ask what’s actually right for the kids.
Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we have to mediate the fact that smartphone penetration in a country like India is widespread, so access is there. And a school is a microcosm of a local community: whatever is happening in the home and the wider country is going to be reflected in the school, and if it impacts child well-being or learning, then the schooling system will have to respond. So Tom, I’m coming back to you again. AI can sometimes feel very passive. You put something in, you get something out. But we know that the best learning happens through engagement.
It’s that journey of discovery that excites the child. So how do we make this thing interactive? What do we need to do to support creativity in the use of AI?
I’ll declare my bias here, which is that I work for the LEGO Group, therefore I’m deeply entrenched in a passion for hands-on learning and a deep belief that when you use your hands, and the science backs this up, you are engaging all the parts of your brain that lead to learning. That leads to deeper engagement and ultimately a deeper mastery of the subject in front of you. We could show, through thousands of research studies that we’ve done through the LEGO Foundation or with any of our research partners, that spatial awareness skills develop more strongly when children are using their hands. The very basics of mathematics in the primary years develop in a stronger way when you’re using manipulatives and thinking through things.
So this use of hands and manipulatives is something we believe in very deeply, and artificial intelligence is, at heart, a technology; we really believe there’s no reason why hands-on learning shouldn’t be brought in here. You saw in the video that we designed for collaboration first. So this is not a one-on-one learning experience. We really want children to learn together: groups of four, or whatever number you put around the table. We want them to be looking at each other and challenging each other, working in groups, learning the fundamentals of collaboration. It’s not always easy. Things will break. You’ll have to start again. You might not like the role you’ve been given.
That’s a great life lesson. So AI can sometimes feel like the magic box, but also maybe the dark box. It’s about helping kids understand that there are really clear, understandable technology fundamentals that underlie artificial intelligence, and giving them curriculum that means something to them. We introduced a computer science GCSE in the UK back in 2014. I went to school in the UK; it’s where I live. I’m not too proud to say that it was a failure in terms of uptake by students, because there were two mistakes that we made. One was a real lack of teachers, and there was no teacher training, so there was no innovation put into the delivery pipeline. But there was also a real lack of innovation in the courseware and the curriculum that we designed for that GCSE.
And so children just sat very bored in a computer science class learning very outdated principles. So I think the best thing we can do for interactivity in artificial intelligence education is apply it to things that mean something to today’s teenagers and young people. That means meeting them where they are and helping them apply the fundamentals of AI to the life that’s going on around them. And I think that applies both to the child in the classroom and also to the teacher. So give them curriculum that applies now rather than…
I must say that I’ve seen the joy of the LEGO bricks. I’m South African, and I would travel to the rural areas of KwaZulu-Natal where there’d be nothing else there except a hut. You go to the back of the hut and you see a child with two things: the workbook given by the South African government and hand-me-down LEGO bricks. And you would see that coming alive of head, heart, and mind. It was beautiful to see. So thank you, LEGO, for that. All right. Richa, I’m coming back to you.
We’re excited about the tech, but we’re also worried about safety and privacy. And our young adolescents in particular, who also make up the child cohort, are worried about privacy and safety. So in all of the issues that a private entity needs to think about when designing a digital experience for children, where do safety and privacy stand? And how do you create this joyful, meaningful, playful experience for children while reducing the risks of a tool like AI?
Thank you. So, as you can imagine, safety and privacy are absolutely foundational and non-negotiable, as we’ve seen on the LEGO Education side and similarly on ours. And just to be clear, none of our LEGO products actually employ generative AI; the smart brick is not using it, for all of these exact same reasons. If you look through the lens of childhood, we have a higher bar that we need to meet. There is this tension, though: obviously there’s so much potential for meaningful, incredible, hands-on play developed through AI, but at the same time, until that bar is met, we would not put it in our products.
Excellent. So for our young people of today, who will be consumers of AI, trust, transparency, privacy, sustainability, and voice will be critically important. It’s important that we’re not just handing something to them; they get to shape it and co-create it with us. At this point in time, we have a couple of minutes, so we’re going to take a couple of questions from the audience. Since I’m left-handed, my bias is on the left side; I’m declaring it up front. I’m going to take three quick questions in the first round, and then I will come across. So I’ll take one from the front, one from the back, and then on this side. Right. Okay. Over to you.
Thank you. Fantastic session. My name is Nikhil Bawa. I write about AI and education. I’m just curious what advice you would have for parents, because schools are going to be slow to adapt. Do you have resources for parents in particular? I’m trying to develop an alternate home curriculum for four hours a week outside of school for my kid, so I’m curious what you would recommend for parents. You need a combination of structured and unstructured play, right? I want to know your views on unstructured play with AI, and then also other things like self-regulation, which becomes very difficult for even a teen to manage.
So that’s one question, and the second is: we’re doing research on this entire AI adoption at homes, which is beyond classrooms. And the initial findings are quite disturbing, because it is getting adopted just because it’s becoming like a race, especially in India. So I would also like to know if you have some recommendations on AI play adoption beyond the classroom.
Good morning. Thank you so much. My name is Asha Nanavati. I’m with the Alliance Educational Foundation, which runs a small charitable K-12 school in Kerala. They love the LEGO products, you know. But I really heard what you said earlier, Richa, about capacity building, about including teachers. We’re a charitable school; all profits go back to the meals, to the child. And we don’t necessarily have funding for training teachers on AI adoption and safety practices. We have learners from play school up. So is LEGO thinking about doing anything in India? We definitely would love to hear more about that. Thank you.
Can we take a response to those questions? Can I work back? So we have a recommended AI toolkit to take into classrooms. It’s a facilitated conversation with children around, you know, what do you think about AI? What should a policy be for a school and a classroom? To be honest, I think that is as applicable to a group of teachers on a training day as it is to children and a teacher. And I’ve seen really great examples of schools that I know in the UK following a similar approach. I think maybe there’s a theme in all of the questions: maybe don’t be afraid to apply the brakes, right? Things are moving incredibly fast, and I wouldn’t just go along with what can feel like this very fast river or wave or current.
I think it’s perfectly okay to apply the brakes and say we need to hit pause and we need to have a conversation. And the conversation needs to be about what we want. And when I say we, I mean the children in the classroom and the teacher: what do we want to get out of this experience? Have the conversation first, and don’t worry too much about the tools or the software that you’re worried you might be missing out on using. And as Richa just shared, we’re not using generative AI in our products, and that’s for a very deliberate reason, because we just don’t know enough yet about safety and privacy.
We have conducted research into that, and we’re following that very closely, but we’re not willing to take any risks. And I think this time of childhood is just too precious to make some shotgun choices that we’re going to pay very heavily for in the future. So I think empower the teacher and the child to have some really formative discussions about what do we want to get out of this, and then maybe look at what’s available.
On child agency versus scaffolding: as we bring these products into core classrooms as part of education strategy now, we do understand the need for teachers to provide scaffolding as they take students through this learning journey. So at LEGO Education, for example, we follow something called a 5E model: engage, explore, explain, elaborate, and evaluate. It’s just a fancy way of saying: how do you get the kids hooked initially on a big-picture question or a real-life example, while providing the educators and the students a structure as they go through the process of thinking about that question? Someone had that question yesterday: the distance between a question and an answer, and that space in between, is where magic or inspiration happens, right?
And so you give the space for that to happen. When they’re building, you’re providing the structure for them to work in groups and build this out. But towards the end, in the elaboration phase at the end of every unit, there’s something called a design challenge, where the kids are not given much instruction. They’re given an open-ended prompt, and then they take the concepts and lessons they’ve learned and apply them in a more open-ended way. Outside of the LEGO Education computer science and AI product, we also have something called FIRST LEGO League, which is the world’s largest annual STEM competition. It’s so inspiring to see these groups of eight kids building a robotics challenge and then doing a science theme as well. That’s completely open-ended, so they will go beyond what they would do within a 45-minute lesson and have a lot more agency in terms of what they can create, beyond what the teacher would take them through in a classroom.
Nikhil, we have some really great resources available online, both from LEGO and the LEGO Foundation, around facilitated play with your child, starting from the very early years through to later years.
So I was going to take two more questions, but we’re coming to the end of the session and we need to close. Okay, I will take one quick question.
Well, I think we heard a lot yesterday that we need to make sure that any tools that are made available are done so in languages that mean something to you on the ground. So I think there are many tools out there that can do automated translation. We hope that the quality is going to be really strong in them. We’re currently producing in English language. Of course, there will be localizations in the future.
All right, colleagues, we need to come to a close because people need to move to the next session. We’re designing for safety and for equity, and while we provide services, we need to match them with demand. To match demand, teachers, learners and parents need to be empowered, and that responsibility rests with all of us. It’s hard to do many things in an education system; empowerment is not one of them. We can do that quickly, we can do it at scale, and we can do it with equity. So I want to say thank you to our panelists for an engaging conversation today, and a big thank you to LEGO for bringing us together to talk about children, education, and AI.
Thank you so much. The session is closed. Thank you.
“AI literacy is essential and children need a voice in AI policy; AI is unavoidable like taxes.”
The knowledge base stresses the importance of empowering young people to engage with AI and participate in its governance, aligning with the claim that AI literacy is crucial and youth should have a voice in policy [S1][S89][S90], and highlights the broader need for responsible AI for children [S2].
“LEGO Education ensures AI safety by processing data on-device so data never leaves the device, provides clear data provenance, and avoids anthropomorphising AI.”
LEGO’s child-centric design emphasizes on-device AI processing and privacy-by-design, ensuring data stays local and supporting transparent, safe AI experiences, which corroborates the reported safety rules [S62] and the edge-computing, on-device model approach described in privacy-focused sources [S97][S98].
“LEGO Education’s design supports neuro-diverse learners through universal design principles.”
LEGO’s commitment to inclusive, child-well-being-focused design, including support for neuro-diverse learners, is documented in the knowledge base, confirming the claim of universal design for diverse learners [S62].
The panel shows strong convergence on the need for AI literacy grounded in fundamentals, hands‑on and play‑based pedagogy, rigorous safety and privacy safeguards, and robust teacher support. Consensus also exists on equity, localization and the danger of treating AI as a magical black box. The only notable divergence is the degree of optimism about deploying AI now versus a more cautious stance, yet even that is bridged by shared safety concerns.
High consensus across most thematic areas, indicating a shared vision that AI education must be foundational, safe, equitable and teacher‑driven. This consensus suggests that future policy and product development can build on these common principles to advance inclusive AI literacy.
The panel shows strong consensus on the importance of AI literacy, child agency, and safety, but notable disagreements arise around the actual use of AI in LEGO products, the timing of AI integration in classrooms, and the perceived impact of AI on creativity. These disputes centre on technical implementation versus policy caution, and on whether AI can be a creative catalyst or a potential inhibitor.
Moderate – while participants share overarching goals (equitable, safe AI education), they diverge on concrete approaches (immediate AI deployment with safeguards vs. postponement, and even on whether AI is present at all). This level of disagreement suggests that future collaborations will need clear alignment on product roadmaps and shared safety standards to avoid mixed messaging.
The discussion was shaped by a series of pivotal remarks that moved the conversation from a product‑centric showcase to a nuanced debate about ethics, equity, and pedagogy. Early framing of AI as inevitable set a sense of urgency, while Tom Hall’s screwdriver metaphor reframed AI as a tool to be deconstructed, prompting deeper exploration of foundational literacy. Richa Menke’s focus on imagination versus efficiency and the optimization question introduced a strategic, values‑based lens, steering the panel toward considerations of developmental impact and societal goals. Contributions from Atish and Saadhna highlighted practical pathways for inclusive, low‑tech implementation and the stark equity gaps that must be addressed. Repeated emphasis on safety, privacy, and responsible rollout acted as a grounding force, ensuring that enthusiasm for AI did not eclipse caution. Collectively, these thought‑provoking comments redirected the dialogue toward child‑centered, equitable, and ethically sound AI education, shaping a balanced and forward‑looking conclusion.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.