Responsible AI for Children: Safe, Playful and Empowering Learning

20 Feb 2026 16:00h - 17:00h


Session at a glance

Summary

This discussion focused on AI literacy for children and how to prepare young learners to understand and navigate an AI-powered world. The session, moderated by UNICEF India’s Chief of Education Saadhna Panday, featured representatives from LEGO Education discussing their approach to teaching AI concepts to children through hands-on, collaborative learning experiences.


Tom Hall from LEGO Education emphasized that children should not view AI as a “magic box” but should understand the fundamental concepts underlying the technology. He argued for giving children the tools to deconstruct and comprehend AI systems rather than simply teaching them to use AI tools. The company has developed a computer science and AI curriculum that teaches concepts like probability, algorithmic bias, and machine learning through physical building and coding activities that run locally on devices to ensure privacy and safety.


Atish Joshua Gonsalves demonstrated how students can learn about AI classifiers by training models to recognize their poses and movements, which then control LEGO robots. This approach emphasizes collaborative learning in groups of four, where students take turns building, coding, and training AI models. The curriculum is designed to be accessible even in resource-constrained environments, starting with screen-free activities using physical bricks to teach computational thinking concepts.


Richa Menke from LEGO’s Creative Play Lab discussed the tension between AI’s potential benefits and risks for children. She highlighted concerns about efficiency versus imagination, personalization versus identity development, and assistance versus agency. LEGO’s current smart brick products deliberately avoid using AI, maintaining high safety and privacy standards while the technology matures.


The panelists stressed that AI literacy should be treated as a fundamental skill alongside reading and mathematics, with education systems taking immediate responsibility to prepare children not just as AI consumers but as future creators and leaders of the technology.


Key points

Major Discussion Points:

AI Literacy as Fundamental Education: The need to teach children foundational AI concepts (probability, algorithmic bias, data processing) as core literacy skills alongside reading and math, rather than treating AI as a “magic box” or elective subject


Safety and Privacy in AI for Children: Establishing non-negotiable principles for AI products designed for children, including local processing (no data leaving devices), transparency in data provenance, and avoiding anthropomorphization of AI systems


Hands-on, Collaborative Learning Approach: Emphasizing that children learn best through physical manipulation, building together, and social interaction rather than isolated screen-based experiences, with AI education integrated into tactile, group-based activities


Balancing AI Assistance with Child Agency: Addressing the tension between AI’s efficiency and the need to preserve children’s imagination, struggle, and creative development – ensuring AI empowers rather than replaces critical thinking and problem-solving skills


Equity and Accessibility in AI Education: Discussing how to make AI literacy relevant and accessible across diverse contexts, from urban schools with resources to rural, multilingual, multi-level classrooms with limited technology access


Overall Purpose:

The discussion aimed to explore how to responsibly introduce AI literacy to children through educational products and curricula, with LEGO Education presenting their approach to teaching foundational AI concepts through hands-on, collaborative learning experiences while maintaining strict safety and privacy standards.


Overall Tone:

The tone was consistently thoughtful and cautious throughout, with speakers emphasizing responsibility and child welfare over technological advancement. There was an underlying sense of urgency about preparing children for an AI-powered future, balanced with deliberate restraint about rushing to implement AI tools without proper safeguards. The conversation maintained an optimistic but measured approach, celebrating children’s capabilities while acknowledging the serious considerations required when designing AI experiences for young learners.


Speakers

Speakers from the provided list:


Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations


Tom Hall: Works at LEGO Group, involved in AI literacy education and curriculum development


Atish Joshua Gonsalves: Product development at LEGO Education, previously worked with UN Refugee Agency, focuses on computer science and AI education products


Richa Menke: Heads up interactive play at the LEGO Group, leads the Creative Play Lab innovation team, focuses on creating interactive play experiences


Saadhna Panday: Chief of Education at UNICEF India, moderator of the panel discussion on AI literacy and children


Nikhil Bawa: Audience member, writes about AI and education


Asha Nanavati: Works with Alliance Educational Foundation, which runs a charitable K-12 school in Kerala


Speaker 4: Role/title not specified – appears to be an audience member asking a question


Additional speakers:


None identified beyond the provided speaker list.


Full session report

This comprehensive discussion on AI literacy for children, moderated by UNICEF India’s Chief of Education Saadhna Panday, brought together representatives from LEGO Education to explore how to responsibly prepare young learners for an AI-powered world. The session began with an impactful video featuring children’s voices about AI and followed a structured format of presentations and demonstrations before concluding with a panel discussion that revealed a thoughtful, cautious approach prioritising child development over technological advancement.


Opening: Children’s Voices on AI

The session opened with a powerful video featuring children discussing AI and their desire to be included in AI policy conversations. As Tom Hall from LEGO Education said of one particular child in the video, “He breaks me every time,” reflecting the impact of hearing directly from young people about their perspectives on AI. The children in the video called for AI education that is “safe, fair, and transparent,” immediately establishing that this discussion would centre children’s agency rather than treating them as passive recipients of technology.


One child’s statement was particularly striking: “We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think.” This opening set the tone for the entire discussion, emphasising that children are not merely users of AI tools but stakeholders who should contribute to AI development and governance decisions.


Reframing AI Education: From Magic Box to Understanding

Tom Hall from LEGO Education presented a fundamental reframing of AI education philosophy, challenging the conventional approach of simply teaching children to use AI tools. “AI literacy isn’t about teaching children how to use this magic box,” Hall argued. “I think far more importantly it’s like how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover.”


Hall explained that children currently view generative AI systems as magical boxes where “you type in a text or a question and then outcome images and videos and entertaining things and maybe even the answer to a history essay question.” However, he emphasised that foundational AI literacy should focus on understanding concepts such as probability, how computers process data, algorithmic bias, and the nuances of these systems.


This approach aims to transform children from passive consumers into active investigators and future creators. The goal is not merely to prepare children for today’s AI tools, but to equip them with knowledge and confidence to “build what is yet to come” and “be the designer of what is to come.” Hall argued that AI literacy should be elevated to the status of modern literacy alongside mathematics and reading, rather than being treated as an elective subject for a select few.


Hall also referenced the UK’s introduction of computer science GCSE in 2014 as an example of how educational systems can successfully integrate new technological literacies into core curriculum requirements.


Demonstrating AI Literacy: The “Strike a Pose” Lesson

Atish Joshua Gonsalves from LEGO Education provided a practical demonstration of their AI education approach through their “Strike a Pose” lesson. In this hands-on activity, students work in groups of four, taking turns building, coding, and training AI models. Students create custom AI classifiers by posing in front of cameras and training models to recognise their movements, which then control LEGO robots including their “AI Dancer” robot.


This demonstration illustrated how children can understand fundamental AI concepts like probability and classification whilst seeing their ideas come to life through physical construction. Gonsalves emphasised that their curriculum follows a 5E model: engage, explore, explain, elaborate, and evaluate, ensuring structured learning progression while maintaining hands-on engagement.
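The core mechanic described above — a trained classifier emitting a probability per pose, with the most likely pose triggering a robot event — can be sketched in a few lines. This is a hypothetical illustration only: the function names, pose labels, and threshold are invented for clarity and are not LEGO Education’s actual Coding Canvas API.

```python
# Toy sketch of probability-based pose classification triggering events.
# A real model would compute probabilities from camera data; here the
# output is hard-coded so the control flow is easy to follow.

def classify_pose(frame):
    """Stand-in for a trained pose classifier.

    Returns one probability per pose label. In this sketch the model
    is 80% confident the left hand is raised.
    """
    return {"left_hand_up": 0.8, "right_hand_up": 0.15, "both_hands_up": 0.05}

def trigger_event(probabilities, threshold=0.7):
    """Fire the event for the most probable pose, if confident enough."""
    pose, confidence = max(probabilities.items(), key=lambda item: item[1])
    if confidence >= threshold:
        return f"robot: {pose}"   # e.g. the robot raises its left arm
    return "robot: no action"     # low confidence: the AI is not always right

print(trigger_event(classify_pose(None)))  # robot: left_hand_up
```

The threshold step mirrors what the students observe in the lesson: unlike the zeros and ones of traditional programs, the model only ever says "80, 70, 90 percent likely", and the code must decide when that is confident enough to act.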


Importantly, Gonsalves explained that their approach begins with completely screen-free activities, where foundational computer science concepts like sequences, loops, and probability are taught using physical bricks before introducing any digital components. This progression ensures that children develop computational thinking through tactile experiences before engaging with more abstract digital concepts.


The Science of Hands-On Learning

Hall presented research-backed arguments for physical, collaborative learning experiences over isolated digital interactions. “When you use your hands, and the science backs this up, you are engaging all parts of your brains that lead to learning,” he explained. He cited research showing that spatial awareness skills and basic mathematics develop more effectively when children use manipulatives and think through problems physically.


LEGO’s approach deliberately designs for collaboration first, with children working in groups where they share roles and responsibilities. This methodology extends to their First LEGO League, which Hall described as “the world’s largest STEM competition,” demonstrating their commitment to collaborative, hands-on learning at scale.


Safety and Privacy as Non-Negotiable Foundations

A striking aspect of the discussion was LEGO’s unwavering commitment to child safety and privacy. Gonsalves detailed their comprehensive safety guidelines, which include ensuring that all AI features in their educational products run locally on devices with no data ever leaving the device, no collection of login information, and no sharing with third parties. They deliberately avoid anthropomorphising AI systems to prevent children from forming unhealthy emotional bonds with technology.


Richa Menke from LEGO’s Creative Play Lab revealed a surprising decision: their current SmartBrick retail products deliberately avoid using AI altogether. Despite being a company focused on AI education, she explained that “LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications.” This decision reflects their belief that “childhood deserves deliberation” and that rushing to implement AI without proper safeguards could have long-term consequences.


Gonsalves emphasised that “safety and student well-being is a red line, is a non-negotiable for us.” Their AI education tools are designed with explicit user controls, such as cameras that are off by default and require deliberate activation by students, ensuring that children make conscious choices about their technology interactions.


Addressing Critical Tensions in AI and Child Development

Richa Menke introduced three crucial tensions that must be addressed when developing AI for children. First, the tension between efficiency and imagination: “If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination.” This raises concerns about whether AI’s quick responses might rob children of important developmental experiences that build resilience and creativity.


Second, the tension between personalisation and identity development: “A child at seven is not the same as who they’re going to be at 17. So if we start personalising the experience for who they are at seven, are we holding them back?” This highlights concerns about AI systems potentially constraining children’s natural development and identity exploration.


Third, the tension between assistance and agency: whether AI tools might create children who are skilled at prompting but lack the ability to persevere through challenges independently. These tensions frame a fundamental question about what AI development should optimise for. Menke argued that whilst current AI systems often optimise for engagement and attention, “if we optimize for childhood, then we’re going to optimize for potential.”


The Promise and Urgency of AI

Panday illustrated AI’s transformative potential through a compelling example: AI systems that can detect pancreatic cancer 438 days earlier than traditional methods. This example demonstrated why AI literacy is not just about technology education but about preparing children for a world where AI will fundamentally change how we approach problems and solutions.


She also referenced Estonia as an example of a country successfully implementing AI education at a national level, showing that systematic AI literacy programs are not only possible but already being implemented effectively in some contexts.


Equity and Accessibility Challenges

The discussion acknowledged significant equity concerns in AI education implementation. Panday highlighted that “for a child living in urban Delhi, AI has found its way into their education either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much.” This uneven access creates new forms of digital divide that education systems must address.


The challenge extends beyond access to technology to include teacher preparation and support. Gonsalves noted that “most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, they’re teaching science, they’re teaching English.” This reality requires comprehensive support systems, including ready-to-use lesson plans, classroom presentations, and facilitation notes that require no additional preparation time.


Questions from the audience highlighted practical challenges faced by charitable schools that cannot afford AI training for teachers, and the need for resources in local languages. The panellists acknowledged these challenges whilst emphasising that many AI literacy concepts can be taught without expensive technology, starting with discussion-based approaches and physical materials.


Children as Active Agents and Policy Contributors

A recurring theme throughout the discussion was the recognition of children’s agency and capacity to contribute meaningfully to AI development and governance. Panday emphasised that “time and again we make the error that we underestimate the capacity of children. They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time.”


Hall advocated for involving children in AI policy discussions within schools, providing them with templates to discuss AI policies and trusting their thoughtful responses. This approach represents a significant shift from traditional educational technology implementation, where children are typically seen as recipients rather than contributors to policy and design decisions.


Implementation Strategies and Practical Approaches

LEGO Education’s practical approach, with their computer science and AI product announced in January and set to reach schools in April, involves a structured learning progression that begins with foundational computer science concepts before introducing AI elements. Their curriculum balances structured learning with open-ended creativity, with early lessons providing scaffolding and guidance while later units include design challenges where students apply learned concepts more autonomously.


For contexts with limited resources, the panellists emphasised that many fundamental concepts can be taught through discussion and physical manipulation without requiring advanced technology. Hall suggested starting with conversations about bias, if-then concepts, and policy discussions that can be conducted in any classroom setting.


The session also offered opportunities for hands-on experience, with attendees encouraged to visit the booth in Hall 3 to try the products themselves, demonstrating a commitment to experiential learning that extends beyond the formal presentation.


The Call to Action: Acting Now with Scale and Equity

The discussion concluded with a strong emphasis on urgency and action. Rather than waiting for perfect solutions, the panellists advocated for beginning AI literacy education immediately with available resources and approaches. The session established that successful AI literacy education requires treating children as active agents rather than passive consumers, prioritising hands-on collaborative learning over isolated digital experiences, maintaining non-negotiable safety and privacy standards, and addressing equity concerns to ensure universal access.


Panday emphasised that empowerment through AI literacy is something “we can do quickly, with scale and with equity,” rejecting the notion that comprehensive AI education must wait for ideal conditions or resources.


Conclusion: A Child-First Approach to AI Education

The session demonstrated a mature approach to educational technology that prioritises long-term human development over short-term technological capabilities. The discussion reframed AI education from a technology-first to a child-first approach, emphasising that the goal is not to prepare children for AI, but to prepare AI for children whilst empowering young people to become the creators and leaders of future AI development.


Most significantly, the conversation established that responsible AI literacy education requires balancing technological potential with developmental appropriateness, ensuring that children are equipped not just to use AI tools, but to understand, question, and ultimately shape the AI-powered world they will inherit and lead. The philosophical shift represents a deliberate choice to prioritise childhood development and agency over technological advancement, whilst maintaining optimism about children’s capacity to become thoughtful creators and leaders in an AI-powered future.


The recurring message throughout was clear: children deserve to be heard, included, and empowered in discussions about AI, and the time to begin this work is now, with whatever resources and approaches are available, rather than waiting for perfect conditions that may never arrive.


Session transcript

Speaker 1

curious how it works and I think that a lot of kids are. I would love to learn how it can be used in everyday life and how it can be used as an accurate source of information. AI is like taxes, it’s unavoidable and if you don’t learn to evolve with it you’re gonna be left behind. I definitely want to be a part of solving big problems. We need to have a say in AI policies because AI literacy is really important. Thanks for finally asking us what we think. Bye.

Tom Hall

He breaks me every time. These were children that we brought into a school in California in December. No actors in there, just a lot of children with opinions, and the little boy at the end, he just had a lot to say. He is very wise. But those were the views of just some smart, inspiring young people. They’re not just eager to use AI, but I think you can see they’re especially eager to understand and to build things with it. And just as you saw, they have some really clear ideas about how it should and shouldn’t be used in today’s classrooms. But of course, you know, excitement and confidence are not the same as mastery or comprehension.

We do see an unfortunate trend where children do not understand the fundamentals of the systems they’re interacting with. And I think you can particularly see that in younger children, who often see generative AI systems as a kind of magic box that they can… into, where, you know, you type in a text or a question and then out come images and videos and entertaining things, and maybe even the answer to a history essay question. I think we need to be really clear that AI is not magic. It’s not a magic toolbox. It’s a technology system. And foundational AI literacy isn’t about teaching children how to use this magic box. I think far more importantly it’s like, how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover? So while, you know, supporting children to use AI tools safely, ethically and effectively today is important, I think far more it’s about equipping them with the knowledge, the tools and the confidence to build what is yet to come. So therefore our definition of AI literacy, when we talk about it, is about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.

So I think that we have underestimated the role we have to play in preparing children today. We don’t want them to be passive consumers of AI. Instead, we really believe that we should be arming them with the tools, the literacies that are required to lead, to design, to create. And our goal is not about sort of robot-proofing our children for what’s coming at them, but just making sure that they are ready to build a better future and they’ve got the tools in their hands. So let’s talk about AI literacy as understanding the foundations of AI, the foundational computer science and AI concepts:

understanding probability, how computers sense the world as data points and data sensors, recognising algorithmic bias and understanding all of the nuances of that. We don’t want that to be an elective or selective choice for just the few. We believe that these concepts have to be elevated to the status of modern literacy alongside maths and reading, problem solving, creativity and collaboration. And I think it’s best if we show you how we plan to do this in classrooms. So I’m going to hand over to Atish, and we’re going to run a live demo, which is always fun at a conference event.

Atish Joshua Gonsalves

Great, thanks, Tom. And I’m also delighted to introduce AI Dancer, who’s on the table here, who hopefully will do some dancing soon as well. So, yeah, very excited to share a bit more about how we’ve translated some of these principles that Tom was talking about into the product. I’m excited to shout about our new computer science and AI product, which is just fresh off the press: we announced it in January and it will hit schools in April. But we need to do all of this very responsibly. We saw the kid earlier in the video talk about how AI should be safe, fair, transparent. A very wise kid, right? We really agree, and at LEGO Education we’ve established clear guidelines for how this should work, so let me step you through some of them. AI should be safe: we do not generate any text or any media, and we do not anthropomorphize (I got that right this time), which is just a fancy way of saying we do not make kids think that AI is human; we do not want them forming any unhealthy emotional bonds. We ensure that all our digital products are rooted in the principles of universal design: we are designing for kids who have neurodiversity, for kids who have different learning needs, so it’s really important that our products are designed in a very fair way. Transparent: all the models that we use should have very clear data provenance. You should understand where the data that trained those models has come from, and whether the models have been trained on different geographies, on different kinds of kids, on different kinds of adults. Ensuring that these models have clear data provenance is super critical for us. And then finally, privacy: I just want to stress that in all our products, AI features run locally on the devices. Nothing ever leaves the device, nothing ever goes to us at the LEGO Group, nothing goes to third parties, no login is collected. And in terms of
the training, whether the kids are building their own AI models or they’re using pre-existing models, nothing ever leaves. So safety and student well-being is a red line, is a non-negotiable for us. Everything we know from decades of education research informs the way we use AI, and that research shows us that kids learn best when they are building, when they’re using their hands and really creating. We’ve seen this very much at LEGO Education through years of research. So now more than ever, children need to learn, and need to learn together. So much of computer science and AI today is taught with kids sitting in front of the screen with the headphones on, by themselves, and we don’t see this as a vision for learning. For us, kids should be building together, coding together, experimenting together, tinkering together and sharing together. That is really our vision of how kids should be learning computer science and AI, so that when they tackle these new technologies they also have those cross-cutting skills to deal with them in the real world. Bringing this all together, at LEGO Education we have four values that govern our approach to AI literacy. We prioritize child agency and engagement to ensure students are active participants in their own learning journeys. We empower students with the foundations of AI that Tom was talking about, foundations that remain relevant as the technology evolves.

We uphold child safety and well-being as non-negotiable for every AI interaction in the class, and we foster hands-on, immersive and collaborative experiences that inspire creativity and shared learning. So those are really the four principles that are driving all of this. So how do we bring this into a classroom? How do we, with our products, make sure it’s hands-on, it’s understandable and safe for kids? I would encourage you, after the session, to go to the booth, I think it’s in Hall 3, and actually see these products in person, get hands-on with them, try them out yourself. So we’re really helping students to build real AI literacy by demystifying how AI works.

Through these playful features and lessons, learners explore concepts like computer vision, probabilistic thinking, classification and machine learning, while seeing their ideas come to life. The result is student agency: kids not just using AI but actually understanding and building with it. So what better way to show you how kids are using it than for me to try to actually make you use it. So here we have a lesson which is about teaching kids about pre-trained classifiers. This is in the last unit: once they’ve gone through some core principles of computer science, they’ve learned about basics and events and loops and data structures, and at the end they are looking at AI and data. Here they’re learning about how you can use a pre-trained classifier, a model that already exists, to bring their AI Dancer to life.

One thing you’ll notice here, when the code is up, is that the camera they can use is off by default. This is all in line with the principles of AI safety, so it’s an explicit action the kids are taking. And here when I hit play now. Okay, that’s why I have a video. Okay, no worries. Always fun trying to do a live demo; we always have a backup. So, yeah, you can see that as I’m lifting my hands up and down, you’re seeing the different probabilities changing here. And what the kids are learning through this is that with traditional computer science you’ve got zeros and ones, things can be on and off. With AI, what they’re learning here is there’s an 80, 70, 90 percent chance that I’ve lifted my left hand up, or my right hand up, or both hands up, and then that’s triggering the different events. They’ve learned about events in earlier lessons, and that’s what’s triggering those.

So they learn that AI is not always right. They’re learning that the more data that’s trained into the model, the better it gets. And they also learn from an ethics perspective that if the AI model is not trained with enough kids’ examples, it will have biases in it as well. So these are very core principles of AI, but taught in a very simple and playful way and making the AI dancer come to life. So

Speaker 1

Ready to excite your students with computer science and AI? This lesson is called Strike a Pose. Students will learn how to customize an AI classifier and program AI-activated events. We’ll kick off with a big question to spark curiosity: how could you train a robot to follow your movements? We will explore the topic through the computer science concepts AI and data. The question is tied to a real-life example, how AI can be trained to recognize images through data. This makes it more relatable to both students and teachers. In groups of four, each student picks a minifigure, which indicates their roles in the collaborative building process. The group will build a robot with movable arms and discuss how it might work.

Then it’s time to get hands-on with coding. Groups will open LEGO Education Coding Canvas, enter the lesson pin, and connect their hardware. Students create and train their own custom AI classifier by posing in front of the camera and capturing pose data. With simple pre-made code and their classifier, groups explore making the robot mimic their arm poses. Group members take turns so everyone gets hands-on: two students develop the build of the robot while the other two iterate on their code, and later they swap. Students present their robot, talk about their iteration process, and discuss how they created and trained their classifier. At the end of this lesson, students will be able to say: I can create a custom classifier.

I can use pose data to train a custom classifier. I can describe how to create a custom classifier and use data to train it. This is the third of four lessons in the AI and Data unit, where students explore how computers learn from data. In the following lessons, students investigate how data quality and quantity can improve how their AI detects their poses. At the end, they apply what they’ve learned through an open-ended design challenge. All materials for this lesson can be found on the LEGO Education Teacher Portal: lesson plan, ready-to-use classroom presentation, and facilitation notes. No extra prep time needed.

Atish Joshua Gonsalves

So you got to see how the AI model is really used, how the AI Dancer is really used in the classroom. And what you saw in the classroom was that kids had meaningful roles in the building process as they were building out the model, but also meaningful roles when they were coding and training the AI as well. All of this is for the kids, but none of it can happen without teachers, right? We cannot simply drop new standards and mandates on educators without support for them. You saw in the video a brief reference to the teacher portal, where teachers get all the resources and the support they need to bring computer science and AI to kids.

We know that most teachers who are teaching computer science are actually not computer science teachers themselves. They are teaching math, science, or English, and so they need to be prepared to really scale this up as well. So we really see this not as a challenge of access to tools, but of access to confidence. With that, I’m very pleased to hand over to Richa, who leads product development on the retail side and is behind the super exciting SmartBricks, if you’ve seen those.

Richa Menke

Thanks, Steve. Hi, everyone, good morning. Thank you for having me. So, my name is Richa Menke. I head up interactive play at the LEGO Group. We’ve just heard an important call to action on AI literacy: preparing children to understand and navigate an AI-powered world. And this matters enormously. But what I’d like to do is spend a few minutes discussing the other side of this question, which is: how do we prepare AI for kids and imagination? Part of the reason we’re here is that we believe our focus on play and imagination not only unlocks exciting new play experiences, it might just be the unlock to a more inclusive and empowering future of AI.

So, childhood, as we know, is formative. It’s not a market opportunity, it’s a developmental window that closes. What enters that window shapes who we become: our sense of confidence, our curiosity, our relationship with struggle and creation. And very importantly, that shaping can often be invisible. So this is very important to us in what we do in the Creative Play Lab, which is the innovation team at the LEGO Group. What we do is look at how to create more and more relevant play experiences for kids, and how to employ new technologies in service of better play, but always keeping in mind our DNA as the LEGO Group: that hands-on, minds-on play experience that we all love.

So eight years ago, our team asked the question: in a world of digital screens, how could we offer kids more interactivity in their LEGO play experiences, but without a screen? We were really, really committed to this and spent eight years getting there. And we just launched, in January, the SmartPlay platform, which is a new dimension of LEGO play. What this is, is that as the child is playing with the SmartBrick in their models, the play actually responds with appropriate sounds and behaviors. So imagine you have your Star Wars X-Wing: the way you move it around, if you fly with it, it’ll swoosh, and if you drop it, it’ll make a crash sound.

So it’s really responsive to the kid. And all of this without a screen. That was very, very important to us. And also without AI. We just didn’t need AI in this solution. But also, we’re not entirely sure if AI is ready for childhood. We really believe that childhood deserves deliberation, and that deliberation might be an unlock, as I mentioned, to the future of AI. So first of all, AI holds tremendous potential when you think about play, when you think of the creative barriers that kids face in play. For example: I’m sitting with my brick bin, I have a ton of bricks, and I don’t know where to start. This fear of the blank canvas.

AI could easily offer little prompts that inspire me to play. It could support diverse learning methods. AI could help us better understand a child’s intent so we could offer better, more relevant, meaningful experiences. And one of my favorite aspects, which I think is super interesting, is that generative AI is probabilistic. In other contexts, like productivity, a hallucination is a bug. But when it comes to play, maybe that hallucination is just a playful feature. So there’s huge potential in what AI could bring to offer better play. But of course, as you know, there are many challenges that need to be addressed, and there are three key tensions that we think are really important to address when we think about kids and childhood.

First, there’s the tension between efficiency and imagination. If I can get an answer just like this, I don’t have to wait, I don’t have to struggle, I don’t have to develop my imagination. Does that rob kids of the opportunity to really develop their imagination and, more importantly, to develop confidence in their own imagination? Second, personalization and identity. A child at seven is not the same as who they’re going to be at 17. So if we start personalizing the experience for who they are at seven, are we holding them back? And finally, assistance and agency. Are we raising kids for whom it’s very easy to prompt, but who don’t have the ability to really persevere?

These are some of the key tensions that we see. And of course, there are a lot of opportunities, but we feel the responsibility to ensure that these are addressed. So when we develop new play experiences, we ask ourselves: does this increase or decrease the choices that a child has? So, child agency. Does this expand imagination? I’d encourage you to ask yourself these questions as you develop AI solutions. Does it preserve that healthy developmental friction where you have to actually think? And finally, would I want this shaping my child’s inner voice? That’s a way to really think about what’s right.

And I’d love to leave you with this question that we spend a lot of time thinking about: as we look at AI systems today, what exactly are we optimizing for, and how important that choice is. If we optimize AI systems for engagement, what we’re going to get is more attention. But what if we optimize for childhood? Then we’re going to optimize for potential. Thank you very much.

Saadhna Panday

All right. Good morning, everybody. I’m Saadhna Panday, and I’m the chief of education at UNICEF India. It’s a pleasure to moderate today’s panel discussion on AI literacy and children. We’ve heard a lot at the summit about the wonder of tech; it really feels good to talk about the wonder of children and of education. So I want to thank LEGO for creating the space for this discussion. We all know that AI has brought a step change in how we live, work, and play, and there’s no doubt that it is impacting children’s lives and how they experience education. The problem is not that AI has become a tool for education; the problem is that it is doing so unevenly.

For a child living in urban Delhi, AI has found its way into their education, either through the home or the school. But for a poor tribal girl living in rural Jharkhand, perhaps not so much. Education systems are facing massive learning challenges for which governments are seeking equitable, scalable, and evidence-based solutions. Two to three decades of digital learning have yielded small-scale wins and modest impact on learning. And yet we’ve seen the massive impact of AI already on health systems, and that gives us tremendous hope. I keep repeating this example because I’m fascinated with it: in the area of radiology, AI has helped the diagnosis of pancreatic cancer 438 days earlier than would normally have been expected.

We were previously diagnosing pancreatic cancer at stage four; we can now diagnose it at stage one, and AI diagnoses it with greater accuracy than any human ever can, without touching a patient. That makes me feel excited. We are looking for that kind of accelerator in education: something that’s going to bring efficiency and quality without widening inequality, and, as you’ve said, that remains deeply human-centered, because we know that learning is an inherently social process. We cannot be naive about this. We are walking a tightrope with something that is scaling so fast and evolving so rapidly, while anybody who’s worked in an education system knows it’s a big ship: it takes a wide berth to turn. But even so, we are looking for a public good out of AI, because we need it. These are really tough interests to marry, but it has been done for vaccine rollout, and it is being done in countries like Estonia right now within the education space. Through all of this, you got it bang on: we’ve got to keep teachers, pedagogy, and curricula at the center, and more than anything else we need to keep children at the center, matching their right to learn through multiple modes, including tech, with their right to protection, participation, and privacy. But time and again we make the error of underestimating the capacity of children.

They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time. So today’s conversation is about agency: how do we build AI that empowers children to become creative, critical, independent thinkers, taking the best of AI while offsetting its risks? To help us through that conversation, I have Tom and Richa. Welcome again, Tom and Richa. We’re looking forward to a very robust engagement this morning. Okay. So Tom, we’re going to start with you. You talked about AI sometimes feeling magical, that it’s abracadabra and voila, something beautiful appears. And we know how children love magic.

They really become enthralled with it.

Tom Hall

Children do indeed love magic, don’t we all? And we all like fast results. Increasingly, we have much shorter attention spans than we had maybe even 10 years ago, and so we’re all looking for quick fixes. I think we’re overlooking the fact that children now have immediate access to data and information that they trust inherently from the get-go, and they will take a question and feed the answer back as if it is gospel. So there is this real danger that AI is indeed seen as a magic box, particularly generative AI. Children have this inherent curiosity, and the LEGO Group celebrates that curiosity every day.

It’s a wonderful thing. But as I said, I think it’s a real mistake if we don’t teach children to question the magic and actually make magic for themselves. And in order to do that, that’s why we are so passionate about these fundamentals of AI literacy: because if we simply hand children a box that promises quick, magical results, I think we are really short-selling them. I’d much rather we hand over the screwdriver, hand over the compass, and allow them to take things apart and start to create their own ideas. I’m not sure if I addressed your question there, but the magic is something we really want children to create for themselves. And I don’t think we should be under any illusion that they’re going to work this out without an education system, and a societal system, that takes this responsibility very, very seriously. And it’s not about taking this responsibility in a few months’ or a few years’ time. The time is now to maybe stop some things and actually start a fundamentally different approach.

Speaker 1

losing

Saadhna Panday

the responsibility to protect them.

Richa Menke

Thank you. Thank you for the question. Yes, it’s challenging, because kids have access all the time; you can’t stop it. As you say, they have a mind of their own. But I think, as we’ve seen even with social media, we don’t always understand the long-term consequences. While I can have an immediate reaction and something that makes me happy in the minute, what is that going to do in the long run? So this focus on education as a filter to understand the long term, as a kind of compass toward what a better experience is, I think is incredibly important. So that’s our position in terms of how we would employ AI.

Saadhna Panday

Wonderful. So there are two things that we need for empowerment. One is foundational skills: the child needs a basic level of literacy to be able to engage with language models. Second, critical web and AI literacy. And the model you put out looks fantastic. Now let’s take the model into a real-world classroom. What is it going to look like in rural Rajasthan, where we’ve got multigrade, multilingual, multilevel classes? How do we make this come alive and have relevance for those types of settings?

Tom Hall

I think that the best thing you can do, and any teachers in this room will know this, is to ask the children who are looking at you what type of conversation they want to have. And in the case of AI, we’ve just produced a template to discuss AI policies with your classes. Children will assess this question in a very, very smart, thoughtful way. And if we don’t ask them the question, again, we are very guilty of simply publishing something and deciding that it’s in their best interest. Of course we need to guide them, and we’ve got a lot of information that we need to share with them. But let them think their way through this, and the best way to do that is to ask the questions. So, yeah, take a discussion around, you know, where does bias show up in their lives? What might that look like if a technology system leant too heavily on a false set of information?

Teach them the basics of if-then concepts. I think you can do that in any type of classroom, and you don’t need any equipment on the table. You need minds to be switched on, and to do that I think you need to ask children the questions, trust that they’re going to have some thoughts, and help them guide that policy. So that’s something we’d love to see widely spread.
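Those two discussion starters, basic if-then rules and bias from a false or skewed set of information, can even be demonstrated on a single laptop. The sketch below is a hypothetical classroom illustration, not a LEGO Education resource: a one-line if-then rule is "learned" from a deliberately skewed data set and then inherits its bias.

```python
# Toy illustration: an if-then rule learned from skewed data inherits its bias.
# The height data is invented purely for classroom-style discussion.

def learn_threshold(samples):
    """Learn a single if-then rule: 'if height > t then adult'."""
    adults = [h for h, label in samples if label == "adult"]
    children = [h for h, label in samples if label == "child"]
    return (min(adults) + max(children)) / 2  # midpoint between the two groups

# Skewed data set: only very tall adults were ever measured.
skewed = [(120, "child"), (130, "child"), (190, "adult"), (195, "adult")]
t = learn_threshold(skewed)

def classify(height):
    return "adult" if height > t else "child"

print(t)             # 160.0: the threshold sits far above many adult heights
print(classify(158)) # a 158 cm adult is misclassified as "child"
```

Because only very tall adults appear in the data, the learned threshold lands at 160 cm and shorter adults are misclassified; re-running the exercise with more representative samples makes both the bias and the fix concrete for students.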

Atish Joshua Gonsalves

Yeah, maybe just coming in on that. Prior to LEGO, I worked with the UN Refugee Agency for many years and saw these applications of ed tech in quite rural or humanitarian contexts as well. So I think there are interesting ways to bring some of these concepts to life even in very constrained settings; I think I heard the phrase “frugal AI” being used here at the conference. But one of the things for us is that just because we have access to these powerful models doesn’t mean we need to put them directly into the hands of kids. So even as we look at the education progression from kindergarten right up to grade eight and beyond, age appropriateness is super important.

So even as we’re looking at the littlest ones and how they learn about computational concepts and AI, a lot of where we start is actually completely screen-free. They work on understanding computer science concepts like sequences and loops, doing this entirely with bricks. And you can imagine that in some contexts it may be bricks, it may be something else, but it isn’t the hardware or a screen at all. So you can teach concepts of probability and computational thinking even without these resources, if you don’t have them. And this actually aligns well with an age-appropriate progression. But I really challenge the audience as well on this urge to put things directly into kids’ hands in any context.

I mean, not just in challenging contexts in rural India, but in other countries as well. Let’s not rush for the fastest and the best model, but for what’s actually right for the kids.

Saadhna Panday

Absolutely. We need to generate a fair amount of evidence before we rush to scale with something like this. Although we have to contend with the fact that smartphone penetration in a country like India is widespread, so access is there. And a school is a microcosm of a local community: whatever is happening in the country and in our homes is going to be reflected in the school, and if it impacts child well-being or learning, then the schooling system will have to respond. So Tom, I’m coming back to you again. AI can sometimes feel very passive: you put something in, you get something out. But we know that the best learning happens through engagement.

It’s that journey of discovery that excites the child. So how do we make this thing interactive? What do we need to do to support creativity in the use of AI?

Tom Hall

I’ll declare my bias here, which is that I work for the LEGO Group, therefore I’m deeply entrenched in a passion for hands-on learning and a deep belief that when you use your hands, and the science backs this up, you are engaging all the parts of your brain that lead to learning. That leads to deeper engagement and ultimately a deeper mastery of the subject in front of you. We could show, through thousands of research studies that we’ve done through the LEGO Foundation or with any of our research partners, that spatial awareness skills develop more strongly when children are using their hands. The very basics of mathematics in the primary years develop in a stronger way when you’re using manipulatives and thinking through things.

So this use of hands and manipulatives is something we believe in very deeply. And artificial intelligence is a technology concept; we really believe there’s no reason why hands-on learning shouldn’t be brought in here. You saw in the video that we designed for collaboration first. This is not a one-on-one learning experience; we really want children to learn together. Groups of four, or whatever number you put around the table. We want them to be looking at each other and challenging each other, working in groups, learning the fundamentals of collaboration. It’s not always easy. Things will break. You’ll have to start again. You might not like the role you’ve been given.

That’s a great life lesson. So I think AI can sometimes feel like the magic box, but also maybe the dark box. And actually, it’s about helping kids understand the technology fundamentals that underlie artificial intelligence and giving them a curriculum that means something to them. We introduced a computer science GCSE in the UK back in 2014. I went to school in the UK; it’s where I live. I’m not too proud to say that it was a failure in terms of uptake by students, because there were two mistakes that we made. One was a real lack of teachers and no teacher training, so there was no innovation put into the delivery pipeline. But there was also a real lack of innovation in the courseware and the curriculum that we designed for that GCSE.

And so children just sat, very bored, in a computer science class, learning very outdated principles. So I think the best thing we can do for interactivity in AI education is apply it to things that mean something to today’s teenagers and young people. And that means meeting them where they are and helping them apply the fundamentals of AI to the life that’s going on around them. I think that applies both to the child in the classroom and to the teacher. So give them a curriculum that applies now rather than later.

Saadhna Panday

I must say that I’ve seen the joy of the LEGO bricks. I’m South African, and I would travel to the rural areas of KwaZulu-Natal where there’d be nothing else there except a hut. You go to the back of the hut and you see a child with two things: the workbook given by the South African government, and hand-me-down LEGO bricks. And you would see that coming alive of head, heart, and mind. It was beautiful to see. So thank you, LEGO, for that. All right. Richa, I’m coming back to you.

We’re excited about the tech, but we’re also worried about safety, and we’re worried about privacy. Our young adolescents in particular, who also make up the child cohort, are worried about privacy and safety. So in all of the issues that a private entity needs to think about when designing a digital experience for children, where do safety and privacy stand? And how do you create a joyful, meaningful experience for children while reducing the risks of a tool like AI?

Richa Menke

Thank you. So, as you can imagine, safety and privacy are absolutely foundational and non-negotiable, as we’ve seen on the LEGO Education side and similarly on ours. And just to be clear, none of our LEGO products actually employ AI. The SmartBrick is not using it, for exactly these reasons: we have a very high bar, and if you look through the lens of childhood, there is an even higher bar that we need to meet. So there is this tension: obviously there’s so much potential for meaningful, incredible, hands-on play developed through AI, but at the same time, until that bar is met, we would not put it in our products.

Saadhna Panday

Excellent. So for our young people of today, who will be consumers of AI, trust, transparency, privacy, sustainability, and voice will be critically important. It’s important that we’re not just handing something to them; they get to shape it and co-create it with us. At this point in time, we have a couple of minutes, so we’re going to take a couple of questions from the audience. Since I’m left-handed, my bias is on the left side. I’m declaring it up front. So I’m going to take three quick questions in the first round, and then I will come across. So I’ll take one from the front, one from the back, and then on this side. Right. Okay. Over to you.

Nikhil Bawa

Thank you. Thank you. Fantastic session. My name is Nikhil Bawa. I write about AI and education. I’m just curious what advice you would have for parents, because schools are going to be slow to adapt. Do you have resources for parents in particular? I’m trying to develop an alternate home curriculum, four hours a week outside of school, for my kid, so I’m curious what you would recommend for parents. You need a combination of structured and unstructured play, right? I want to know your views on how you’re thinking about unstructured play with AI, and also about other things like self-regulation, which becomes very difficult for even a teen to manage.

So that’s one question, and the second is: we’re doing research on this entire AI adoption at home, beyond classrooms. And the initial findings are quite disturbing, because it is getting adopted just because it’s becoming like a race, especially in India. So I would also like to know if you have some recommendations on AI play adoption beyond the classroom.

Asha Nanavati

Good morning. Thank you so much. My name is Asha Nanavati. I’m with the Alliance Educational Foundation, which runs a small charitable K-12 school in Kerala. They love the LEGO products, you know. But I really heard what you said earlier, Richa, about capacity building, about including teachers. We’re a charitable school; all profits go back to the meals, the child. And we maybe don’t have funding for training teachers on AI adoption and safety practices. We have learners from play school upwards. So is LEGO thinking about doing anything in India? We would definitely love to hear more about that. Thank you.

Tom Hall

Can I take a response to those questions, working back? So we have a recommended AI toolkit to take into classrooms. It’s a facilitated conversation with children around: what do you think about AI? What should a policy be for a school and a classroom? To be honest, I think that is as applicable to a group of teachers on a training day as it is to children and a teacher, and I’ve seen really great examples of schools that I know in the UK following a similar approach. I think maybe there’s a theme in all of the questions: don’t be afraid to apply the brakes, right? Things are moving incredibly fast, and I wouldn’t just go along with what can feel like a very fast river or wave or current.

I think it’s perfectly okay to apply the brakes and say we need to hit pause and have a conversation. And the conversation needs to be about what we want, and when I say we, I mean the children in the classroom and the teacher: what do we want to get out of this experience? Have the conversation first, and don’t worry too much about the tools or the software that you might feel you’re missing out on using. And as Richa just shared, we’re not using generative AI in our products, and that’s for a very deliberate reason: we just don’t know enough yet about safety and privacy.

We have conducted research into that, and we’re following it very closely, but we’re not willing to take any risks. I think this time of childhood is just too precious to make shotgun choices that we’re going to pay for very heavily in the future. So empower the teacher and the child to have some really formative discussions about what we want to get out of this, and then maybe look at what’s available.

Atish Joshua Gonsalves

Learning and child agency versus scaffolding. So as we bring these products into classrooms as part of an education strategy now, we do understand the need for teachers to provide scaffolding as they take students through this learning journey. At LEGO Education, for example, we follow something called a 5E model: engage, explore, explain, elaborate, and evaluate. But that’s just a fancy way of saying: how do you get the kids hooked initially on a big-picture question or a real-life example, while providing the educators and the students a structure as they go through the process of thinking about that question. I think someone had that question yesterday: the distance between a question and an answer, and that space in between is where the magic or inspiration happens, right?

And so give that space for that to happen. Then, when they’re building, you’re providing the structure for them to work in groups and build this out. But towards the end, in the elaboration phase at the end of every unit, there’s something called a design challenge, where the kids are not given much instruction. They’re given an open-ended prompt, and they take the concepts and lessons they’ve learned and apply them in a more open-ended way. Outside of the LEGO Education computer science and AI product, we also have something called FIRST LEGO League, which is the world’s largest annual STEM competition. There, it’s so inspiring to see these groups of eight kids building a robotics challenge and doing a science theme as well, completely open-ended. They will go beyond what they would do within a 45-minute lesson and have a lot more agency in terms of what they can create, beyond what the teacher would take them through in a classroom.

Tom Hall

Nikhil, we have some really great resources actually available online, both from LEGO and the LEGO Foundation, around facilitated play with your child, starting from the very early years through to later years.

Saadhna Panday

So I was going to take two more questions, but we’re coming to the end of the session and we need to close. Okay, now I will take just one question, but really quickly.

Tom Hall

Well, I think we heard a lot yesterday that we need to make sure that any tools made available are offered in languages that mean something to people on the ground. There are many tools out there that can do automated translation, and we hope the quality is going to be really strong. We’re currently producing in English; of course, there will be localizations in the future.

Saadhna Panday

All right, colleagues, we need to come to a close because people need to move to the next session. We’re designing for safety and for equity, and while we provide services, we need to match them with demand. And to match demand, teachers, learners, and parents need to be empowered. That responsibility rests with all of us. It’s hard to do many things in an education system; empowerment is not one of them. We can do that quickly, we can do that at scale, and we can do that with equity. So I want to say thank you to our panelists today for an engaging conversation, and a big thank you to LEGO for bringing us together to have a conversation about children, education, and AI.

Thank you so much. The session is closed. Thank you.


Speaker 1

Speech speed

146 words per minute

Speech length

455 words

Speech time

186 seconds

AI literacy essential for future participation

Explanation

The speaker stresses that understanding AI is a prerequisite for meaningful involvement in policy and society. Without AI literacy, individuals risk being left behind as AI becomes ubiquitous.


Evidence

“We need to have a say in AI policies because AI literacy is really important.” [1].


Major discussion point

AI Literacy as Essential Foundation


Topics

Capacity development | Artificial intelligence | Human rights and the ethical dimensions of the information society



Tom Hall

Speech speed

167 words per minute

Speech length

2191 words

Speech time

786 seconds

AI must be taught as fundamentals, not magic

Explanation

Tom argues that AI should be presented as a technology system rather than a mysterious “magic box”, giving children the tools to deconstruct and rebuild it.


Evidence

“As a kind of magic box that they can… I think we need to be really clear that AI is not magic it’s not a magic toolbox it’s a technology system and foundational AI literacy isn’t about teaching children how to use this magic box I think far more importantly it’s like how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover…” [8].


Major discussion point

AI Literacy as Essential Foundation


Topics

Artificial intelligence | Capacity development


Manipulatives and hands‑on work boost engagement and mastery

Explanation

Tom emphasizes that using physical manipulatives deepens student engagement and leads to stronger mastery of concepts.


Evidence

“So this use of hands and manipulatives is something we believe in so deeply.” [45]. “You lead to deeper engagement and ultimately… ultimately a deeper mastery of the subject in front of you.” [46].


Major discussion point

Hands‑On, Collaborative Learning & Building AI Understanding


Topics

Capacity development | Artificial intelligence | Social and economic development


Need for teacher training, curriculum alignment, and language localisation

Explanation

Tom points out the current lack of teacher preparation and the necessity for localized curricula to effectively deliver AI education.


Evidence

“One was a really lack of teachers and there was no teacher training.” [95]. “Of course, there will be localizations in the future.” [102].


Major discussion point

Teacher Support, Capacity Building, and Implementation in Diverse Contexts


Topics

Capacity development | The enabling environment for digital development


Children should create their own “magic” rather than receive ready‑made results

Explanation

Tom advocates giving children the “screwdriver” to design their own AI solutions instead of handing them pre‑generated outputs.


Evidence

“hand over the screwdriver … create their own ideas” [16].


Major discussion point

Balancing Imagination, Play, and Efficiency in AI‑Enhanced Learning


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society



Atish Joshua Gonsalves

Speech speed

214 words per minute

Speech length

2059 words

Speech time

575 seconds

Children need agency and deep understanding of AI concepts

Explanation

Atish stresses that students should move beyond using AI to actually grasp and build AI systems, fostering agency and deeper comprehension.


Evidence

“So we’re really helping students to build real AI literacy by demystifying how AI works.” [6]. “Kids not just using AI but actually understanding and building with it.” [20].


Major discussion point

AI Literacy as Essential Foundation


Topics

Capacity development | Artificial intelligence


Lego product delivers hands‑on, collaborative AI learning experiences

Explanation

Atish describes LEGO’s approach of combining safety with immersive, collaborative, hands‑on activities that spark creativity.


Evidence

“We uphold child safety and well‑being as it’s non‑negotiable for every AI interaction in the class and we foster hands‑on immersive and collaborative experiences that inspire creativity and shared learning.” [32].


Major discussion point

Hands‑On, Collaborative Learning & Building AI Understanding


Topics

Capacity development | Artificial intelligence | Social and economic development


Structured 5E instructional model scaffolds inquiry and creation

Explanation

Atish notes that LEGO Education follows the 5E model (Engage, Explore, Explain, Elaborate, Evaluate) to structure AI learning.


Evidence

“we follow something called a 5E model of engage, explore, explain, elaborate, and evaluate.” [44].


Major discussion point

Hands‑On, Collaborative Learning & Building AI Understanding


Topics

Capacity development | Artificial intelligence


Teacher portal and resources empower educators

Explanation

He highlights a dedicated teacher portal that supplies resources and support for delivering AI and computer‑science curricula.


Evidence

“You saw in the video briefly referenced the teacher portal where the teachers get all the resources and the support they need to bring computer science and AI to kids.” [90].


Major discussion point

Teacher Support, Capacity Building, and Implementation in Diverse Contexts


Topics

Capacity development | Artificial intelligence



Richa Menke

Speech speed

163 words per minute

Speech length

1203 words

Speech time

441 seconds

Strict safety, fairness, transparency, and privacy guidelines for AI tools

Explanation

Richa outlines LEGO’s comprehensive safeguards—no text generation, universal design, fairness, transparency, and on‑device processing—to protect children.


Evidence

“AI should be safe … we do not generate any text or any media … we ensure that all our digital products are rooted in the principles of universal design …” [30]. “privacy … nothing ever leaves the device” [30].


Major discussion point

Safety, Privacy, Ethics, and Responsible AI Design


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | Building confidence and security in the use of ICTs


Current Lego offerings avoid AI until safety standards are met

Explanation

Richa confirms that LEGO deliberately refrains from embedding AI in its products until robust safety criteria are satisfied.


Evidence

“And just to be clear, none of our LEGO products actually employ AI.” [39].


Major discussion point

Safety, Privacy, Ethics, and Responsible AI Design


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Tension between efficiency and imagination; AI should nurture creativity

Explanation

She identifies a core tension: AI can boost efficiency but must also preserve space for children’s imagination and creative play.


Evidence

“So first of all, it’s this tension between efficiency and imagination.” [118].


Major discussion point

Balancing Imagination, Play, and Efficiency in AI‑Enhanced Learning


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Risks of over‑personalisation and loss of agency in childhood

Explanation

Richa warns that excessive personalization may diminish children’s ability to imagine and develop confidence in their own ideas.


Evidence

“And does that rob kids of the opportunity to really develop their imagination and more importantly, develop the confidence in their own imagination?” [121].


Major discussion point

Balancing Imagination, Play, and Efficiency in AI‑Enhanced Learning


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Child safety and privacy are non‑negotiable foundations

Explanation

She stresses that safety and privacy are absolute, non‑negotiable pillars for any AI‑enabled learning tool.


Evidence

“safety, privacy, these are absolutely foundational and non‑negotiable as we’ve seen on the LEGO education side and similarly in ours.” [73].


Major discussion point

Safety, Privacy, Ethics, and Responsible AI Design


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs



Saadhna Panday

Speech speed

129 words per minute

Speech length

1416 words

Speech time

657 seconds

Child‑centered AI literacy protects rights and promotes empowerment

Explanation

Saadhna links AI literacy to fundamental rights—trust, transparency, privacy, and voice—asserting that children need a basic literacy level to engage responsibly.


Evidence

“The child needs to have a basic level of literacy to be able to engage with language models.” [15]. “So for our young people of today who will be consumers of AI, trust, transparency, privacy, sustainability, and voice would be critically important.” [11].


Major discussion point

AI Literacy as Essential Foundation


Topics

Human rights and the ethical dimensions of the information society | Capacity development


Emphasis on protecting children’s privacy and well‑being in AI deployment

Explanation

She underscores that safety, equity, and privacy are central to designing AI experiences for children.


Evidence

“We’re designing for safety, for equity.” [71]. “And we’re worried about privacy.” [84].


Major discussion point

Safety, Privacy, Ethics, and Responsible AI Design


Topics

Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Evidence‑based, equitable scaling is critical for impact

Explanation

Saadhna calls for scaling AI education solutions that are evidence‑based, equitable, and able to meet massive learning challenges.


Evidence

“Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evidence‑based solutions.” [99]. “We can do that with scale and we can do that with equity.” [105].


Major discussion point

Teacher Support, Capacity Building, and Implementation in Diverse Contexts


Topics

Capacity development | Monitoring and measurement | Social and economic development



Asha Nanavati

Speech speed

140 words per minute

Speech length

101 words

Speech time

43 seconds

Capacity‑building for low‑resource schools and charitable institutions

Explanation

Asha highlights the need to support charitable and low‑resource schools, citing her work with a small K‑12 school in Kerala.


Evidence

“I’m with Alliance Educational Foundation, which runs a charitable small K‑12 school in Kerala.” [104].


Major discussion point

Teacher Support, Capacity Building, and Implementation in Diverse Contexts


Topics

Capacity development | The enabling environment for digital development



Speaker 4

Speech speed

1 word per minute

Speech length

1 word

Speech time

54 seconds

Brief interjection acknowledging interactive learning

Explanation

Speaker 4 notes that AI can provide prompts that inspire playful interaction, underscoring the value of interactive learning.


Evidence

“AI could easily offer little prompts that inspire me to play.” [66].


Major discussion point

Hands‑On, Collaborative Learning & Building AI Understanding


Topics

Artificial intelligence | Capacity development



Nikhil Bawa

Speech speed

116 words per minute

Speech length

200 words

Speech time

102 seconds

Provide parents with structured/unstructured AI play resources and guidance

Explanation

Nikhil calls for resources that help parents integrate AI play at home, balancing structured curricula with open‑ended exploration.


Evidence

“resources for parents in particular about, because they will, I mean, I’m trying to develop an alternate home curriculum for four hours a week outside.” [140].


Major discussion point

Parental and Home Learning Role


Topics

Capacity development | Artificial intelligence | Human rights and the ethical dimensions of the information society


Support home curricula to foster self‑regulation and balanced AI adoption

Explanation

He highlights the challenge of fostering self‑regulation at home when integrating AI tools into learning.


Evidence

“self‑regulation, which becomes very difficult for even a teen to manage.” [136].


Major discussion point

Parental and Home Learning Role


Topics

Capacity development | Artificial intelligence


Agreements

Agreement points

Child safety and privacy are non-negotiable priorities in AI development

Speakers

– Tom Hall
– Atish Joshua Gonsalves
– Richa Menke

Arguments

It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools


AI features should run locally on devices with no data leaving the device, no login collection, and no third-party data sharing


Safety and student well-being are non-negotiable red lines in AI product development


LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications


Summary

All LEGO representatives agree that child safety and privacy must be absolute priorities, even if it means slowing down or avoiding AI implementation until proper safeguards are established


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs


Hands-on, collaborative learning is superior to isolated screen-based learning

Speakers

– Tom Hall
– Atish Joshua Gonsalves
– Richa Menke

Arguments

Learning is most effective when children use their hands and engage in spatial awareness activities supported by research


AI education should be designed for collaboration first, with children working in groups and learning together


Children learn best through building, coding, experimenting, and sharing together rather than isolated screen-based learning


Hands-on, minds-on play experiences should remain central even when incorporating new technologies


Summary

All speakers emphasize the importance of physical, collaborative learning experiences over individual digital interactions, backed by research on spatial awareness and social learning


Topics

Capacity development | Social and economic development


Children should be active agents rather than passive consumers of AI

Speakers

– Tom Hall
– Atish Joshua Gonsalves
– Saadhna Panday
– Speaker 1

Arguments

AI literacy should focus on understanding fundamental concepts rather than just using AI tools as ‘magic boxes’ – children need the ‘screwdriver’ to understand what’s happening under the hood


We should ask children what type of conversations they want to have about AI and trust their thoughtful responses


Children should be active participants in their learning journeys rather than passive consumers of AI


Children are not passive recipients but have tremendous agency to consume, shape, and lead technology


Children want to be part of solving big problems and need to have a say in AI policies because AI literacy is important


Summary

There is strong consensus that children should be empowered as active participants who understand, question, and help shape AI rather than simply using it as consumers


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Teacher support and confidence-building are essential for successful AI education implementation

Speakers

– Tom Hall
– Atish Joshua Gonsalves
– Speaker 4
– Asha Nanavati

Arguments

Teachers need support and confidence-building, not just access to tools, especially since most aren’t computer science specialists


Teachers need comprehensive support including lesson plans, classroom presentations, and facilitation notes with no extra preparation time required


Charitable schools need accessible training and resources for AI adoption and safety practices


Summary

All speakers recognize that teachers need substantial support, training, and ready-to-use resources to effectively implement AI education, particularly since most are not computer science specialists


Topics

Capacity development | Social and economic development


AI education must address equity and accessibility concerns

Speakers

– Saadhna Panday
– Atish Joshua Gonsalves
– Asha Nanavati

Arguments

AI is impacting children’s education unevenly – urban children have access while rural children may not


We need equitable, scalable, and evidence-based solutions that don’t widen inequality


AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials


Charitable schools need accessible training and resources for AI adoption and safety practices


Summary

Speakers agree that AI education initiatives must actively address digital divides and ensure that solutions are accessible to disadvantaged communities and resource-constrained schools


Topics

Closing all digital divides | Social and economic development | Artificial intelligence


Similar viewpoints

Both speakers advocate for deliberate, cautious approaches to AI implementation that prioritize long-term child development over rapid technological adoption

Speakers

– Tom Hall
– Richa Menke

Arguments

It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools


We need to be cautious about long-term consequences of AI on children, similar to lessons learned from social media


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers are concerned about AI potentially interfering with healthy child development, whether through inappropriate emotional attachments or by removing beneficial developmental challenges

Speakers

– Atish Joshua Gonsalves
– Richa Menke

Arguments

AI should not be anthropomorphized or create unhealthy emotional bonds with children


There’s tension between AI efficiency and developing children’s imagination and struggle-through-problems abilities


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Both speakers view AI literacy as a fundamental skill that should be universally accessible rather than optional, requiring both basic and critical thinking components

Speakers

– Tom Hall
– Saadhna Panday

Arguments

AI literacy must be elevated to the status of modern literacy alongside math and reading, not treated as an elective for a few


Children need foundational skills and critical web/AI literacy to engage meaningfully with AI systems


Topics

Artificial intelligence | Capacity development | Social and economic development


Unexpected consensus

Deliberately not using AI in current products despite being an AI education company

Speakers

– Atish Joshua Gonsalves
– Richa Menke

Arguments

Safety and student well-being are non-negotiable red lines in AI product development


LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications


Explanation

It’s unexpected that a company focused on AI education would deliberately avoid using AI in their products, but both speakers from LEGO agree that current AI technology doesn’t meet their safety standards for children


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Starting AI education without screens or advanced technology

Speakers

– Tom Hall
– Atish Joshua Gonsalves

Arguments

We should ask children what type of conversations they want to have about AI and trust their thoughtful responses


AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials


Explanation

Surprisingly, AI education experts advocate for beginning AI literacy through completely analog, discussion-based and physical manipulation methods rather than digital tools


Topics

Artificial intelligence | Capacity development | Closing all digital divides


Children as AI policy contributors rather than just users

Speakers

– Tom Hall
– Saadhna Panday
– Speaker 1

Arguments

We should ask children what type of conversations they want to have about AI and trust their thoughtful responses


Children are not passive recipients but have tremendous agency to consume, shape, and lead technology


Children want to be part of solving big problems and need to have a say in AI policies because AI literacy is important


Explanation

There’s unexpected consensus that children should be involved in AI policy-making and governance discussions, not just be recipients of AI education – treating them as stakeholders in AI development


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Overall assessment

Summary

The speakers demonstrate remarkably high consensus on prioritizing child safety, agency, and holistic development over rapid AI adoption. Key areas of agreement include the need for hands-on collaborative learning, treating children as active agents rather than passive consumers, ensuring teacher support, and addressing equity concerns. Unexpectedly, even AI education advocates emphasize caution and deliberate implementation.


Consensus level

Very high consensus with strong alignment on fundamental principles. The implications are significant for AI education policy – suggesting that successful implementation requires prioritizing child development principles over technological capabilities, ensuring universal access, and involving children as active participants in shaping AI governance rather than just users of AI tools.


Differences

Different viewpoints

Timeline and urgency for AI implementation in education

Speakers

– Tom Hall
– Nikhil Bawa

Arguments

It’s acceptable to apply brakes and have conversations about what we want from AI rather than rushing to adopt tools


Parents need resources for home-based AI education since schools are slow to adapt


Summary

Tom Hall advocates for slowing down and having deliberate conversations before implementing AI tools, while Nikhil Bawa expresses urgency about the need for immediate AI education resources because schools are adapting too slowly


Topics

Artificial intelligence | Capacity development


Current readiness of AI technology for children

Speakers

– Richa Menke
– Speaker 1

Arguments

LEGO products currently don’t employ AI because the safety and privacy bar hasn’t been met for childhood applications


AI is unavoidable like taxes, and people need to evolve with it or be left behind


Summary

Richa Menke believes AI technology is not yet ready for children and maintains high safety standards, while Speaker 1 views AI as inevitable and emphasizes the need to adapt quickly


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Unexpected differences

Role of efficiency in child development

Speakers

– Richa Menke
– Speaker 1

Arguments

There’s tension between AI efficiency and developing children’s imagination and struggle-through-problems abilities


AI is unavoidable like taxes, and people need to evolve with it or be left behind


Explanation

This disagreement is unexpected because it reveals a fundamental philosophical divide about whether AI’s efficiency benefits children or potentially harms their development. Richa questions whether quick AI answers rob children of important developmental struggles, while Speaker 1 sees AI adaptation as necessary evolution


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Overall assessment

Summary

The main areas of disagreement center around the timeline for AI implementation, the current readiness of AI technology for children, and the balance between AI efficiency and child development needs


Disagreement level

The level of disagreement is moderate but philosophically significant. While speakers generally agree on core principles like child safety, agency, and hands-on learning, they differ substantially on implementation approaches and urgency. These disagreements have important implications for AI education policy, as they reflect tensions between innovation advocates who see AI as inevitable and child development experts who prioritize safety and developmental appropriateness over rapid adoption


Partial agreements

All speakers agree on the importance of hands-on, foundational learning approaches, but they differ in their implementation strategies – Tom emphasizes giving children the ‘screwdriver’ to understand AI, Atish focuses on screen-free computational thinking with physical materials, and Richa prioritizes maintaining traditional play experiences

Speakers

– Tom Hall
– Atish Joshua Gonsalves
– Richa Menke

Arguments

AI literacy should focus on understanding fundamental concepts rather than just using AI tools as ‘magic boxes’


AI concepts can be taught without screens or advanced hardware, starting with basic computational thinking using physical materials


Hands-on, minds-on play experiences should remain central even when incorporating new technologies


Topics

Artificial intelligence | Capacity development


Both speakers agree on the importance of child agency and empowerment, but Tom focuses on involving children in AI policy discussions within educational settings, while Saadhna emphasizes children’s broader capacity to lead technological development

Speakers

– Tom Hall
– Saadhna Panday

Arguments

We should ask children what type of conversations they want to have about AI and trust their thoughtful responses


Children are not passive recipients but have tremendous agency to consume, shape, and lead technology


Topics

Human rights and the ethical dimensions of the information society | Capacity development


Both speakers recognize the need for equitable access to AI education, but Saadhna focuses on systemic solutions at the policy level while Asha represents the practical challenges faced by resource-constrained schools

Speakers

– Saadhna Panday
– Asha Nanavati

Arguments

We need equitable, scalable, and evidence-based solutions that don’t widen inequality


Charitable schools need accessible training and resources for AI adoption and safety practices


Topics

Closing all digital divides | Social and economic development | Financial mechanisms


Takeaways

Key takeaways

AI literacy should focus on understanding fundamental concepts rather than just using AI tools – children need to understand what’s happening ‘under the hood’ rather than treating AI as a ‘magic box’


AI literacy must be elevated to the status of modern literacy alongside math and reading, not treated as an optional elective


Safety and privacy are non-negotiable in AI development for children – all AI features should run locally with no data leaving devices


Children should be active participants and co-creators in AI development rather than passive consumers


Hands-on, collaborative learning approaches are most effective for AI education, with children working in groups rather than isolated screen-based learning


AI is creating educational inequity – urban children have access while rural children may not, requiring deliberate efforts to ensure equitable access


Teachers need confidence-building and support, not just access to tools, since most educators teaching computer science are not specialists in the field


It’s acceptable to ‘apply the brakes’ and have conversations about desired outcomes before rushing to adopt AI tools


AI development should optimize for childhood development and potential rather than just engagement and attention


Resolutions and action items

LEGO Education will launch their new computer science and AI product in April with safety guidelines ensuring local processing and no data collection


LEGO has created an AI toolkit for classroom discussions about AI policies that can be used by teachers and parents


LEGO will provide resources through their teacher portal to support educators who are not computer science specialists


LEGO will offer resources online for parents to facilitate AI-related play and learning at home


Future localization of LEGO’s AI education materials into multiple languages is planned


Unresolved issues

How to effectively scale AI literacy education to rural and underserved communities with limited resources


Funding and support mechanisms for charitable schools that cannot afford AI training for teachers


Long-term consequences of AI exposure on child development and learning patterns


Balancing the tension between AI efficiency and children’s need to develop imagination and problem-solving through struggle


Creating age-appropriate AI education progression from early childhood through adolescence


Addressing the rapid pace of AI adoption in homes, particularly in India, which may have ‘disturbing’ implications according to research


Developing evidence-based approaches before scaling AI education solutions widely


Suggested compromises

Start AI education with completely screen-free, hands-on activities using physical materials like bricks to teach computational concepts


Implement a gradual progression approach where younger children learn foundational concepts without direct AI interaction


Use existing tools with automated translation capabilities while working toward proper localization


Focus on facilitated discussions and policy conversations about AI rather than rushing to implement AI tools


Combine structured learning with open-ended design challenges to balance scaffolding with student agency


Apply the same AI discussion toolkit to both children and teachers to build understanding across all stakeholders


Thought provoking comments

AI literacy isn’t about teaching children how to use this magic box. I think far more importantly it’s like how do we give the child the screwdriver to take that box apart and really understand what’s going on under the cover… our definition of AI literacy when we talk about it, it’s about understanding today’s technology, yes, but it’s far more about understanding the fundamental concepts so that you are armed and ready for what is yet to be designed, and actually so that you can be the designer of what is to come.

Speaker

Tom Hall


Reason

This comment reframes the entire discussion by challenging the conventional approach to AI education. Instead of focusing on tool usage, Hall advocates for deep understanding of underlying principles. The metaphor of giving children ‘the screwdriver to take that box apart’ is particularly powerful as it transforms children from passive consumers to active investigators and future creators.


Impact

This comment established the philosophical foundation for the entire discussion, shifting focus from AI as a consumption tool to AI as something children should understand, critique, and ultimately design. It influenced subsequent speakers to emphasize agency, understanding, and creative engagement rather than mere usage.


How do we prepare AI for kids and imagination?… what if we optimize for childhood, then we’re going to optimize for potential.

Speaker

Richa Menke


Reason

This comment introduces a crucial paradigm shift by flipping the typical question. Instead of asking how to prepare children for AI, Menke asks how to prepare AI for children. The distinction between optimizing for engagement versus optimizing for childhood/potential is profound and challenges the tech industry’s typical metrics.


Impact

This reframing elevated the discussion to consider AI development from a child-centric perspective rather than a technology-centric one. It introduced the concept that the design philosophy behind AI systems fundamentally shapes childhood development, leading to deeper conversations about values and long-term impact.


There’s three key tensions that we think are really important to address when we think about kids and childhood… efficiency and imagination. If I can get an answer just like this, I don’t have to wait. I don’t have to struggle. I don’t have to develop my imagination… Personalization and identity. A child at seven is not the same as who they’re going to be at 17… assistance and agency.

Speaker

Richa Menke


Reason

This comment articulates fundamental developmental concerns that are often overlooked in AI discussions. It highlights how AI’s apparent benefits (efficiency, personalization, assistance) might actually undermine crucial aspects of child development (imagination, identity formation, agency).


Impact

These tensions became a framework for evaluating AI applications throughout the discussion. It added nuance to the conversation by showing that seemingly positive AI features could have negative developmental consequences, leading other speakers to address how their approaches navigate these tensions.


In the area of radiology, AI has helped the diagnosis of pancreatic cancer 438 days earlier than would have been normally expected… We are looking for that kind of accelerator in education. Something that’s going to bring efficiency and quality without widening inequality and as you’ve said that remains deeply human centered because we know that learning is an inherently social process.

Speaker

Saadhna Panday


Reason

This comment provides a compelling analogy that raises expectations for AI’s potential in education while acknowledging the unique challenges. The specific example of early cancer detection creates a powerful benchmark for what transformative AI impact could look like in education.


Impact

This comment shifted the discussion toward considering AI’s transformative potential in education while maintaining focus on equity and human-centeredness. It challenged the panelists to think about scalable solutions that could have dramatic positive impact without losing the social nature of learning.


But time and again we make the error that we underestimate the capacity of children. They’re not passive recipients of education. They have tremendous agency. They can consume tech, they can shape it, and no doubt they will lead it in time.

Speaker

Saadhna Panday


Reason

This comment challenges a fundamental assumption in many educational technology discussions: that children are merely recipients rather than active agents. It reframes children as capable partners in shaping technology rather than as subjects to be protected from it.


Impact

This perspective influenced the subsequent discussion to focus more on empowerment and co-creation rather than protection and control. It supported the earlier themes about giving children agency and tools to understand and create rather than just consume AI.


I really challenge the audience as well around this need to want to put things into kids’ hands directly in any context… Let’s not rush for the fastest and the best model, but what’s actually right for the kids as well.

Speaker

Atish Joshua Gonsalves


Reason

This comment provides a crucial counterbalance to the excitement about AI in education by advocating for restraint and age-appropriateness. It challenges the assumption that having access to the most advanced AI tools is necessarily beneficial for children.


Impact

This comment introduced a note of caution that tempered the discussion’s enthusiasm, leading to more nuanced conversations about implementation timelines, age-appropriateness, and the difference between what’s technologically possible and what’s developmentally appropriate.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a child-centric, agency-focused framework for thinking about AI in education. Rather than a typical technology-first approach, the conversation was anchored in developmental psychology, educational philosophy, and children’s rights. The comments created a progression from challenging conventional AI education approaches, to reframing the relationship between AI and childhood development, to establishing practical tensions and considerations for implementation.

This resulted in a sophisticated discussion that balanced technological potential with developmental appropriateness, emphasized empowerment over protection, and prioritized understanding over usage. The overall effect was to elevate the conversation beyond typical ed-tech discussions to a more nuanced exploration of how AI can serve children’s developmental needs while preparing them to be creators rather than just consumers of future technology.


Follow-up questions

How can AI literacy concepts be effectively implemented in multilingual, multilevel, and resource-constrained classroom settings like rural Rajasthan?

Speaker

Saadhna Panday


Explanation

This addresses the critical gap between theoretical AI literacy models and real-world implementation in diverse educational contexts where students have varying levels of foundational literacy and limited resources.


What are the long-term developmental consequences of children’s early exposure to AI systems, particularly regarding imagination and critical thinking skills?

Speaker

Richa Menke


Explanation

This explores the tension between AI’s efficiency and the developmental need for children to struggle, wait, and develop their own imagination and problem-solving capabilities.


How can we develop evidence-based approaches to AI adoption in education before rushing to scale implementation?

Speaker

Saadhna Panday


Explanation

This addresses the need for rigorous research and evidence collection to ensure AI tools are effective and safe before widespread deployment in educational settings.


What specific resources and training programs are needed to support teachers who are not computer science specialists in delivering AI literacy education?

Speaker

Atish Joshua Gonsalves and Asha Nanavati


Explanation

This identifies the critical need for teacher preparation and support systems, especially for educators in under-resourced schools who need to teach AI concepts without specialized backgrounds.


How can parents develop home-based AI literacy curricula when schools are slow to adapt to AI education needs?

Speaker

Nikhil Bawa


Explanation

This addresses the gap between the rapid pace of AI development and the slower adaptation of formal education systems, requiring alternative approaches for children’s AI education.


What are the implications of unregulated AI adoption in homes, particularly in competitive educational environments like India?

Speaker

Nikhil Bawa


Explanation

This explores concerning trends in AI adoption driven by competitive pressures rather than educational best practices, requiring research into safety and effectiveness.


How can AI literacy education be localized and made available in languages that are meaningful to diverse global communities?

Speaker

Audience member (implied)


Explanation

This addresses the need for culturally and linguistically appropriate AI education materials to ensure equitable access across different communities.


What is the optimal balance between structured and unstructured play when incorporating AI into children’s learning experiences?

Speaker

Nikhil Bawa


Explanation

This explores how to maintain the benefits of open-ended play while providing necessary guidance and safety measures in AI-enhanced learning environments.


How can we ensure that AI systems are optimized for childhood development and potential rather than just engagement metrics?

Speaker

Richa Menke


Explanation

This fundamental question challenges current AI development priorities and calls for child-centered design principles that support long-term developmental outcomes.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.