WS #376 Elevating Children's Voices in AI Design

26 Jun 2025 13:30h - 14:30h


Session at a glance

Summary

This workshop, titled “Elevating Children’s Voices in AI Design,” brought together researchers, experts, and young people to discuss the impact of artificial intelligence on children and how to make AI development more child-centric. The session was sponsored by the Lego Group and included participants from the Family Online Safety Institute, the Alan Turing Institute, and the Centre for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI). The discussion began with powerful video messages from young people across the UK, who emphasized that AI should be viewed as a tool to aid rather than replace humans, while highlighting concerns about privacy, environmental impact, and the need for ethical development.


Stephen Balkam from the Family Online Safety Institute presented research showing that, unlike previous technology trends, teens now believe their parents know more about generative AI than they do. The research revealed that while parents use AI mainly for analytical tasks, teens focus on efficiency-boosting activities like proofreading and summarizing. Both groups expressed concerns about job loss and misinformation, though they remained optimistic about AI’s potential for learning and scientific progress. Maria Eira from UNICRI shared findings from a global survey indicating a lack of awareness among parents about how their children use AI for personal purposes, and noted that parents who regularly use AI themselves tend to view its impact on children more positively.


Dr. Mhairi Aitken from the Alan Turing Institute presented research funded by the Lego Group showing that about 22% of children aged 8-12 use generative AI, with significant disparities between private and state-funded schools. The research found that children with additional learning needs were more likely to use AI for communication, and that children showed strong preferences for traditional tactile art materials over AI-generated alternatives. Key concerns raised by children included bias and representation in AI outputs, environmental impacts, and exposure to inappropriate content. The discussion concluded that AI systems are not currently designed with children in mind, echoing patterns from previous technology waves, and emphasized the need for greater transparency, child-centered design principles, and critical AI literacy rather than just technical understanding.


Key points

## Major Discussion Points:


– **Children’s Current AI Usage and Readiness**: Research reveals that children aged 8-12 are already using generative AI (22% reported usage), but AI systems are not designed with children in mind. This creates a fundamental mismatch where children are adapting to adult-designed systems rather than having age-appropriate tools available to them.


– **Parental Awareness and Communication Gaps**: Studies show significant disconnects between parents and children regarding AI use. While parents are aware of academic uses, they often don’t know about more personal uses like AI companions. Parents who regularly use AI themselves tend to view its impact on children more positively, highlighting the importance of parental AI literacy.


– **Equity and Access Concerns**: Research identified stark differences in AI access and education between private and state-funded schools, with children in private schools having significantly more exposure to and understanding of generative AI. This points to growing digital divides that could exacerbate existing educational inequalities.


– **Children’s Rights and Ethical Considerations**: Young people expressed sophisticated concerns about AI bias, environmental impact, and representation in AI outputs. Children of color became upset when not represented in AI-generated images, sometimes choosing not to use the technology as a result. There’s a strong call for children’s voices to be included in AI development and policy decisions.


– **Design and Safety Challenges**: The discussion emphasized that AI systems need to be designed with children’s wellbeing from the start, not retrofitted later. Key concerns include inappropriate content exposure, emotional dependency on AI companions, and the need for transparency about how AI systems work and collect data.


## Overall Purpose:


The workshop aimed to elevate children’s voices in AI design and development by presenting research on how AI impacts children, sharing direct perspectives from young people, and advocating for child-centric approaches to AI development. The session sought to demonstrate that children have valuable insights about AI and should be meaningfully included in decision-making processes about technologies that will significantly impact their lives.


## Overall Tone:


The discussion maintained a consistently serious yet optimistic tone throughout. It began with powerful, articulate messages from young people that set a respectful, non-patronizing approach to children’s perspectives. The research presentations were delivered in an academic but accessible manner, emphasizing both opportunities and concerns. The panel discussion became increasingly collaborative and solution-focused, with participants building on each other’s insights. The presence of young participants (like 17-year-old Ryan) reinforced the workshop’s commitment to including youth voices, and the session concluded on an empowering note with the quote “the goal cannot be the profits, it must be the people,” emphasizing the human-centered approach needed for AI development.


Speakers

**Speakers from the provided list:**


– **Online Participants** – Young people from across the UK sharing their views on generative AI (names not disclosed for safety reasons)


– **Dr. Mhairi Aitken** – Senior Ethics Research Fellow at the Alan Turing Institute, leads the children and AI program


– **Leanda Barrington‑Leach** – Executive Director of the Five Rights Foundation


– **Participant** – Multiple unidentified participants asking questions from the audience


– **Maria Eira** – AI expert at the Center for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI)


– **Adam Ingle** – Representative from the Lego Group, workshop moderator and convener


– **Stephen Balkam** – Founding CEO of the Family Online Safety Institute (FOSI)


– **Mariana Rozo‑Paz** – Representative from DataSphere Initiative


– **Joon Baek** – Representative from Youth for Privacy, a youth NGO focused on digital privacy


– **Co-Moderator** – Online moderator named Lisa


**Additional speakers:**


– **Ryan** – 17-year-old youth ambassador of the OnePAL Foundation in Hong Kong, advocating for digital sustainability and access


– **Elisa** – Representative from the OnePAL Foundation (same organization as Ryan)


– **Grace Thompson** – From CAIDP (asked question online, mentioned by moderator)


– **Katarina** – Law student in the UK studying AI law (asked question online)


Full session report

# Elevating Children’s Voices in AI Design: A Comprehensive Workshop Report


## Executive Summary


The workshop “Elevating Children’s Voices in AI Design,” sponsored by the Lego Group, brought together leading researchers, policy experts, and young people to address the critical gap between children’s experiences with artificial intelligence and their representation in AI development decisions. The session featured participants from the Family Online Safety Institute, the Alan Turing Institute, and UNICRI (United Nations Interregional Crime and Justice Research Institute), alongside direct contributions from young people across the UK and internationally.


The discussion revealed a fundamental challenge: whilst children are already using generative AI at significant rates, AI systems are not designed with children’s needs, safety, or wellbeing in mind. This pattern mirrors previous technology rollouts where child safety considerations were retrofitted rather than built in from the start. The workshop established that children possess sophisticated understanding of AI’s implications and valuable insights for its development, emphasizing the need for meaningful youth participation in AI governance.


## Opening Perspectives: Children’s Voices on AI


The workshop opened with compelling video messages from young people across the UK who articulated sophisticated perspectives on AI’s potential and risks. These participants emphasized that AI should be viewed as a tool to aid rather than replace humans, stating: “AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans.”


The young participants demonstrated remarkable awareness of complex issues surrounding AI development. They highlighted concerns about privacy, describing it as “a basic right, not a luxury,” and showed deep understanding of environmental impacts, noting that “AI training requires massive resources including thousands of litres of water and extensive GPU usage.” They asserted their right to meaningful participation in AI governance: “Young people like me must be part of this conversation. We aren’t just the future, we’re here now.”


Their perspectives on education were particularly nuanced, advocating that “AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills.” This position demonstrated their understanding that prohibition is less effective than education in preparing young people for an AI-integrated world.


The session also referenced the Children’s AI Summit, which produced a “Children’s Manifesto for the Future of AI” featuring contributions from young people including Ethan (16) and Alexander, Ashvika, Eva, and Mustafa (all 11).


## Research Findings: Current State of Children’s AI Use


### Family Online Safety Institute Research


Stephen Balkam from the Family Online Safety Institute (FOSI), a 501(c)(3) charitable organization, presented research that revealed an unusual pattern in technology adoption. For the first time, teenagers reported that their parents knew more about generative AI than they did, primarily because parents were learning AI for workplace purposes.


The research revealed distinct usage patterns between generations. Parents primarily used AI for analytical tasks related to their professional responsibilities, whilst teenagers focused on efficiency-boosting activities such as proofreading and summarizing academic work. However, concerning trends emerged showing that students were increasingly using generative AI to complete their work entirely rather than merely to enhance it.


Both parents and teenagers expressed shared concerns about job displacement and misinformation, though they remained optimistic about AI’s potential for learning and scientific progress. Data transparency emerged as the top priority for both groups when considering AI companies.


Stephen also conducted an interactive demonstration, asking the audience to distinguish real images from AI-generated ones, and pointed to Google’s new Veo video generator as an example of the increasing sophistication of AI-generated content and the challenges this poses for detection.


### UNICRI Global Survey Insights


Maria Eira from UNICRI’s Centre for AI and Robotics shared findings from a survey published three days prior to the workshop, covering 19 countries across Europe, Asia, Africa, and the Americas. The research revealed significant communication gaps between parents and children regarding AI use. While parents demonstrated awareness of their children’s academic AI applications, they often remained unaware of more personal uses, such as AI companions or seeking help for personal problems.


The research identified a crucial correlation: parents who regularly used generative AI themselves felt more positive about its impact on their children’s development. This finding suggested that familiarity with technology shapes attitudes toward children’s use.


Eira’s research also highlighted the need for separate legislative frameworks specifically targeting children’s AI rights, recognizing that children cannot provide the same informed consent as adults and face unique vulnerabilities in AI interactions.


### Alan Turing Institute Children and AI Research


Dr. Mhairi Aitken presented research on children’s direct experiences with AI, funded by the Lego Group. The study found that approximately 22% of children aged 8-12 reported using generative AI, with three out of five teachers incorporating AI into their work. However, the research revealed stark disparities in access and understanding between private and state-funded schools, pointing to emerging equity issues.


The research uncovered particularly significant findings regarding children with additional learning needs, who showed heightened interest in using AI for communication and support. This suggested AI’s potential for inclusive education, though Dr. Aitken emphasized that development must be grounded in understanding actual needs rather than technology-first approaches.


When given choices between AI tools and traditional materials for creative activities, children overwhelmingly chose traditional tactile options. They expressed that “art is actually real” whilst feeling they “couldn’t say that about AI art because the computer did it, not them.” This preference revealed children’s sophisticated understanding of authenticity and creativity.


The research also documented concerning issues with bias and representation in AI outputs. Children of color became upset when not represented in AI-generated images, sometimes choosing not to use the technology as a result. Similarly, children who learned about the environmental impacts of AI models often decided against using them.


## Panel Discussion and Key Themes


### Design and Safety Challenges


The panel discussion revealed that AI systems fundamentally fail to consider children’s needs during development. Stephen Balkam noted that this pattern repeats previous web technologies where safety features were retrofitted rather than built in from the start. Dr. Aitken emphasized that the burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions.


Particular concerns emerged around AI companions and chatbots, with evidence that young children were forming emotional attachments to these systems and using them for therapy-like conversations. This raised questions about potential dependency and isolation from real community connections.


### Educational Impact and Equity


The research revealed troubling equity gaps in AI access and education. Children in private schools demonstrated significantly more exposure to and understanding of generative AI compared to their peers in state-funded schools, suggesting that AI could exacerbate existing educational inequalities.


However, the discussion also highlighted AI’s potential for supporting inclusive education, particularly for children with additional learning needs who showed interest in using AI for communication support.


### Privacy, Transparency, and Rights


Data protection emerged as a fundamental concern across all speakers. The young participants’ assertion that privacy is a basic right was echoed by researchers who emphasized the need for transparency about AI system operations and data collection practices. Stephen Balkam noted the ongoing challenge of balancing safety and privacy, observing that more safety potentially requires less privacy.


## International Youth Participation


The workshop included international youth participation, notably from 17-year-old Ryan, a youth ambassador of the OnePAL Foundation in Hong Kong, who asked specifically about leveraging generative AI for supporting people with disabilities. Elisa, also from the OnePAL Foundation, raised questions about power imbalances between children and AI systems. Zahra Amjed was scheduled to join as a young representative but experienced technical difficulties.


## Areas of Consensus and Ongoing Challenges


Participants agreed on several fundamental principles:


– AI systems must be designed with children’s needs and safety in mind from the outset


– Children must be meaningfully included in AI decision-making processes


– Transparency about data practices and privacy protection are essential requirements


– AI shows significant potential for supporting children with disabilities and additional learning needs


– Environmental responsibility must be considered in AI development


However, several challenges remained unresolved. Maria Eira noted that long-term impacts of AI technology on children remain unclear with contradictory research results. The challenge of creating AI companions that support children without fostering dependency remained unaddressed, and questions about global implementation of AI literacy programs require continued attention.


## Emerging Action Items and Recommendations


The discussion generated several concrete initiatives:


**Immediate Initiatives**: UNICRI announced the launch of AI literacy resources, including a 3D animation movie for adolescents and a parent guide, at the upcoming AI for Good Summit.


**Industry Responsibilities**: Technology companies were called upon to provide transparent explanations of AI decision-making processes, algorithm recommendations, and system limitations.


**Educational Integration**: Rather than banning AI in schools, participants advocated for integration with strong emphasis on critical thinking and fact-checking skills.


**Research and Development**: The discussion highlighted needs for funding research on AI literacy programs and designing AI tools with children’s needs prioritized from the start.


**Legislative Approaches**: Participants called for separate legislation specifically targeting children’s AI rights and protections, recognizing children’s unique vulnerabilities in AI interactions.


## Conclusion


The workshop established that the question is not whether children are ready for AI, but whether AI is ready for children. Current systems fail to meet children’s needs, rights, and developmental requirements, necessitating fundamental changes in design approaches, regulatory frameworks, and industry practices.


As Maria Eira emphasized, echoing the sentiment of young participants: “the goal cannot be the profits, it must be the people.” This principle encapsulates the fundamental shift required in AI development—from technology-first approaches toward human-centered design prioritizing children’s rights, wellbeing, and meaningful participation.


The workshop demonstrated that when children’s voices are genuinely heard and valued, they contribute essential perspectives that benefit not only young people but society as a whole. Moving forward, the emphasis must be on meaningful youth participation in AI governance, transparent and child-friendly AI systems, critical AI literacy education, and regulatory approaches that protect children’s rights while respecting their agency.


Session transcript

Adam Ingle: Hi, everyone. Thank you for joining this panel session workshop called Elevating Children’s Voices in AI Design. Sponsored by the Lego Group and also participating is the Family Online Safety Institute, the Alan Turing Institute, and the Center for AI and Robotics at the United Nations Interregional Crime and Justice Research Institute. We’ve got an excellent workshop for you today, where you’ll hear all about insights from the latest research on the impact of AI on children, and also hear from young people themselves about their experiences and their hopes. So this is just a quick run of the show. We’re going to start with a message from the children about their views on generative AI, and then we’re going to hear some of the latest research from Stephen Balkam, who’s the founding CEO of the Family Online Safety Institute; Maria Eira, who’s an AI expert at the Center for AI and Robotics in the UN; and Mhairi Aitken, a Senior Ethics Research Fellow at the Alan Turing Institute. Then we’ll move on to a panel discussion and questions. Please feel free to ask questions. We want to take them from the audience, both in the room and online. We’ll also have a young person, Zahra Amjed, join us to share her insights and ask the panel questions herself. But without further ado, let’s get underway, and we’re going to start with this video message from young people across the UK. We’re not disclosing names just for safety reasons, but please play the message and the video when you’re ready.


Online Participants: AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans. That’s why we must view AI as a tool to aid us, not to replace us. Right now, students are memorising facts by adaption while AI is outpacing that system entirely. Rather than banning AI in schools, we should teach students how to use it efficiently. Skills like fact-checking, critical thinking, and prompt engineering aren’t optional anymore, they’re essential. We need to prepare students for a world where AI is everywhere, teaching them to use it efficiently while not relying on it. I feel that AI can help humanity in the future, but it also can harm, so it must be used in an ethical manner. I find AI really fun, but sometimes it’s not safe for children because it gives bad advice. Privacy is not a luxury, it’s a basic right. The data that AI collects is valuable, and if it’s not protected, it can be used to hurt the very people it’s supposed to help. The goal cannot be the profit, it must be the people. LLMs consume thousands of litres of water during training, and GPT-3 required over 10,000 GPUs over 15 days. Hundreds of LLMs are being developed, and their environmental impact is immense. Like all powerful tools, AI must be managed responsibly, or its promise may become a problem. The choices that governments and AI developers make today will not just affect the technology, but our lives, our communities, and the world that we leave for our next generation. Young people like me must be part of this conversation. We aren’t just the future, we’re here now. Our voices, our experiences, and our hopes must matter in shaping this technology. I think adults should listen to children more because children have lots of good ideas, as well as adults, with AI. Artificial intelligence is a rising tide, but tides follow the shape of the land, so we must shape that land. We must set the direction, and we must act to decide, together, the kind of world that we want to build. Because if we don’t, that tide may wash away everything that we value most. Fairness, privacy, truth, and even trust. AI holds this incredible promise, but that promise will only be fulfilled if we build it with trust, with care, with respect, and with a clear vision of the kind of world that we want to create, together. Thank you.


Adam Ingle: Well, thank you so much to all the young people there that put together those pretty powerful messages. I mean, from our perspective at the Lego Group, and also I know from all my co-panelists, this is all about elevating children’s voices and not being patronizing to their views, making sure they’re part of decision-making. And it’s great to see such eloquent young people who have real ideas about the future of AI, and we’re here to kind of discuss them more. I’m gonna pass over to Stephen Balkam now to talk about his latest research from the Family Online Safety Institute about the impact of AI on children.


Stephen Balkam: Well, thank you very much, Adam, and thank you for convening us and bringing us here. Really appreciate it. For those of you who are not familiar, FOSI, the Family Online Safety Institute, we are a 501(c)(3) charitable organization based in the United States, but we work globally. Our mission is to make the online world safer for kids and their families. And we work in what we call the three Ps of policy, practices, and parenting. So that’s enlightened public policy, good digital practices, and digital parenting, which is probably the most difficult part of this, where we try to empower parents to confidently navigate the web with their kids. And the web increasingly is AI-infused, shall I say. I want to begin by just saying that two years ago, in 2023, we conducted a three-country study called Generative AI Emerging Habits, Hopes, and Fears. And at the time, we believe it was the first survey done around generative AI, given that ChatGPT had emerged only a few months before. And we talked to parents and teens in the U.S., in Germany, and in Japan, and some of the results surprised us. And you can see in the slide, and I’ll talk to those data points. First thing that we found which surprised us was that teens thought that their parents knew more about generative AI than they did. With previous trends, particularly in the early days of the web, and then web 2.0, and social media, kids were always way ahead of their parents in terms of the technology. But in this case, a large, sizable share of teens in all three countries reported that their parents had a better understanding than they did. And we dug a little deeper and found that, of course, many of the parents were struggling to figure out how to use gen AI at work, or at the very least, try to figure it out before gen AI took over their jobs. But anyway, that was the first interesting trend. Parents, for their part, said that they used it mainly for analytical tasks, such as using gen AI platforms as a search engine and as a language translator. And that’s only increased over the last couple of years. Teens mostly were looking for it for efficiency-boosting tasks, such as proofreading and summarizing long texts to make them shorter and faster to read. And we’ve already seen some interesting developments in those two years where ChatGPT is actually, instead of just being used to proofread and analyze their work, teens and young people are increasingly using Gen AI to do their work for them, their essays, their homework, whatever. In terms of concerns, job loss was the number one concern for both parents and teens, and also the spread of false information, which has only been accelerating since we did that study. Other concerns, loss of critical thinking skills was the parents’ number three, whereas kids were more concerned about new forms of cyberbullying, again, which is something we’ve been seeing since we did that study. There was a lot of excitement, too. I mean, obviously concerns, but parents and teens both shared an optimism that Gen AI will, above all else, help them learn new things. Very excited also for AI’s potential to bring progress in science and healthcare, and to free up time by reducing boring tasks as well as progress in education. But then when we asked them about who was responsible for making sure that teens had a safe Gen AI experience, interestingly enough, parents believed that they had to take the greatest responsibility for ensuring their teen’s safety.
And this was particularly true in the United States where, I’m afraid to say, we have less trust in our government to guide and to pass laws. Other countries were more heavily reliant on their own governments and tech companies. And then we asked the question, what do parents and teens want to learn? And what are the topics that would help them navigate these conversations and address their concerns about Gen AI more broadly? And top of the list was transparency of data practices. And secondly, steps to reveal what’s behind Gen AI and how data is sourced and whether it can be trusted was a key element. Another area they felt that industry should take note of, that data transparency is top of mind for parents and teens, and that companies should take strides to be more forthcoming about how users’ data is being collected and used, which I think is something that we’ll hear more about in the next presentation. And then fast forward to this year, we conducted what we call the Online Safety Survey, now titled Connected and Protected, in the United States at the end of 2024 and into 2025. And this was a survey more about online safety trends in general, but we did include questions about Gen AI in the research. And a basic question, do you think that AI will have a positive or negative impact on each of the following areas? And these areas were creativity, academics, media literacy, online safety, and cyberbullying. And in each of these categories, kids were more likely to be optimistic about AI’s impact on society. Think about that. Kids felt more optimistic than their parents that AI was going to have a positive impact. Now parents weren’t necessarily pessimistic; across the board, about half of parents thought that AI would have a positive impact, but 60% of both parents and kids thought AI would have a negative impact on cyberbullying. And this, of course, is where we see stitched-together videos, a kid’s face put onto all sorts of awful graphic images that are then spread around the school. When it comes to online safety, parents and kids were split down the middle, with just over half of both groups reporting that AI would have a positive impact on online safety. And when comparing data from wave one of the survey with wave two, we saw that parents in the second wave were much more likely to say that their child had used Gen AI for numerous tasks, including help with school projects, image generation, brainstorming, and more. In the first wave of this survey, we asked participants to identify if images were real or AI-generated. Each respondent was presented with three images from a lineup of six to ensure accurate data. Less than half of respondents correctly identified two or more images, and you’re going to see an example of that in a moment. Less than 10% of respondents correctly identified all three images. And we’ll see how well you guys do in a minute. On the bright side, over four in five respondents correctly identified at least one image. And again, this survey was done before Google’s video generator came out, Veo, which is just mind-boggling how fast the developments are in this space. And some of the videos and images that have come out of that video generator are quite astounding. So based on this study, FOSI recommends the following.
That technology companies be much more transparent about AI technology, providing families with a clear explanation of why a Gen AI system produced a certain answer, why an algorithm is recommending certain content, and what the limitations of AI tools like chatbots are. Industry should also learn from past mistakes and design AI tools with children in mind, not as an afterthought. And industry needs to fund research and programs that will help children learn AI literacy so they are better able to discern real content from AI-generated content and make informed decisions based on that knowledge. So now I’m going to test you guys on these three images and have a look and just have a show of hands. I don’t know how we’re going to do this online. But how many of you think that the first image is real? Any takers for real? Okay. How many for AI-generated? All right. More real than AI. Okay. Second one. AI? Real? All right. And the last one, real? Or AI? All right. Well, you guys did pretty well. The first one is a real painting. I’ve got the actual citation for you if you want to find out who the artist was. And yes, the second two were both AI-generated. Interestingly enough, in our study, more men than women thought number two was real. Maybe that was wishful thinking. You can make your own conclusions. I think 85 to 90% of women immediately saw that she was not real. And if you look closely, her earrings don’t match, which again, I didn’t see that. So, anyway, back to you.


Adam Ingle: Thanks, Stephen. I performed poorly on that test, I will admit. So next up we’ve got Maria, and she’s an AI expert at the United Nations… Sorry, it’s a complex acronym. The United Nations Interregional Crime and Justice Research Institute and their Center for AI and Robotics. Maria, please take it away. She’s joining us online.


Maria Eira: Hello, everyone. Can you hear me and see my slides? Everything is working? Yes. Perfect. Thank you so much, Adam. And good afternoon, everyone. First of all, I would like to thank you, Adam, and the Lego Group for the invitation to be part of this very interesting workshop. So I work at the Center for AI and Robotics of UNICRI. Indeed, it’s a complex, long name for a UN research institute that focuses on reducing crime and violence around the world. And the center has a particular mandate to understand how AI contributes to reducing crime and also how it can be used by malicious actors for criminal purposes, for example. And so now I will present you a project that we have together with Walt Disney to promote AI literacy among children and parents. And we focus on AI, but particularly on generative AI. So to start this project, we were trying to understand the parental perspectives on the use and impact of gen AI on adolescents, a little bit as FOSI was doing. So we distributed a survey worldwide. The survey was targeting parents, and we received replies from 19 countries across Europe, Asia, Africa and the Americas. So, we just published this paper three days ago. The paper includes all the conclusions from this survey. It’s free access and you can access it via the QR code, but I already brought you here the main conclusions from this survey. So we had two main conclusions. So the first one, we understood that there is a lack of awareness from parents and low communication between parents and their children on how adolescents are using generative AI tools. And we were targeting parents of adolescents of 13 to 17 years old. And so on the left, we have a graph, I don’t know if you can see it, but I will describe it a little bit. So this graph is parents’ insights on teenagers’ generative AI use across different activities. And so on the first smaller graph we have, the activity is to search or get information about a topic. And so we can see that more than 80% of parents report that their kids are using generative AI to search information about a topic. And they are also using it quite often to help with school assignments. So for more academic purposes, we can see that parents are aware that their kids are using generative AI. However, for more personal uses, such as using generative AI as a companion or to ask for help with personal or health problems, we can see that the most popular reply was “I disagree”. So they feel that their kids never use generative AI for these more personal purposes. And the second most popular reply was “I don’t know”. So, this confirms a little bit, although we were saying right now that parents are becoming more aware, still, we can see that as a worldwide distribution, a lot of parents still don’t know if their children are using generative AI for more personal uses. The second conclusion is what we can see here on the graph on the right. And so, we started by, it’s basically, we understood that parents who use, I’m already giving the conclusion. So, parents who use generative AI tools feel more positive about the impact that this technology can have on their children’s development. And so, we can see on the graph on the right, so we have started by dividing parents according to their familiarity with generative AI tools.
And so, we divided them into regular users, the ones who use generative AI every day or a few times per week; sporadic users, the ones who use generative AI a few times per month or less; and an unfamiliar audience, who never tried or never heard about this technology. And so, we can see that the regular users, so the yellow bars here, feel much more positive on the impact that the technology can have on critical thinking, on their career, on their social life, and also on the general impact that this technology can have on kids’ development. And the unfamiliar parents, so the blue ones here, were negative in all these fields. So, this shows that when parents are familiar with the technology, when they use the technology, they see it differently. And viewing this technology in a positive way also helps children to use it in a more positive way and not fear this technology so much. And so besides engaging with parents, we also engaged with children and we organized a workshop in a high school to collect the perspectives of the adolescents. And I brought here some interesting comments and feedback from children. So when we asked them where did they learn about generative AI, they mentioned friends, they mentioned TikTok, my 20-year-old brother. So we can see that they are not learning how to use these tools in schools or from other trustworthy sources, let’s say. And when we asked them what’s one thing that adults should know about how teenagers are using generative AI, their replies were: they use it to cheat in school, kids use AI to make everything, or adults should know more about it. And I think these were also very interesting to see their feedback. And it also helped us a lot to develop the main outcomes of this project. So we basically produced two AI literacy resources that will be launched in two weeks at the AI for Good Summit. So on the left, we have a 3D animation movie for adolescents that explains what AI is, how generative AI works, and very importantly, that not all the answers can be found in this chat box. And on the right, we have a guide for parents on AI literacy to support them in guiding their children to use this technology in a responsible way. So we focus a lot on the communication, which was something that we concluded from the initial survey, focusing on the communication about the potential risks and also to explore the benefits of this technology together, to make parents engage with children and to learn together, because we are all learning on this. The technology is really advancing at a very fast pace, so we will all need to be on top of this development. So if you’d be interested, both resources will be available online soon, so if you’d like to receive them, just reach out to me. I’ll leave my email here. Also, if you have any other questions, I’m happy to reply. So thank you for your time and attention.


Adam Ingle: Thanks, Maria. And now we have Mhairi Aitken, Senior Ethics Research Fellow at the Alan Turing Institute, to discuss research that the LEGO Group was actually very proud to sponsor.


Dr. Mhairi Aitken: Thank you, Adam, and thank you for the invitation to join this discussion today. I’m really excited to be a part of this really important panel discussion. Yes, as Adam said, I’m a Senior Ethics Fellow at the Alan Turing Institute. The Turing Institute is the UK’s national institute for AI and data science, and at the Turing, I have the great privilege of leading a program of work on the topic of children and AI. The central driver, the central rationale behind all our work in the children and AI team at the Turing is the recognition that children are likely to be the group who will be most impacted by advances in AI technologies, but they’re simultaneously the group that are least represented in decision-making about the ways that those technologies are designed, developed, and deployed, and also in terms of policymaking and regulation relating to AI. We think that’s wrong. We think that needs to change. Children have a right to a say in matters that affect their lives, and AI is clearly a matter that is affecting their lives today and will increasingly do so in the future. So over the last four years, our team, the children and AI team at the Alan Turing Institute, have been working on projects to develop and demonstrate approaches to meaningfully bring children and young people into decision-making processes around the future of AI technologies. So we’ve had a series of projects and a number of different collaborations, including with UNICEF, with the Council of Europe Steering Committee on the Rights of the Child, the Scottish AI Alliance and Children’s Parliament, and most recently with the Lego Group. So I want to share some kind of headline findings from our most recent research, which has looked at the impacts of generative AI use on children and particularly on children’s well-being, and also share some messages from the Children’s AI Summit, which was an event that we held earlier this year. So firstly, from our recent research, and this is a project that was supported by the Lego Group and looked at the impacts of generative AI use on children, particularly children between the ages of 8 and 12. There were two work packages in this project. The first work package was a national survey, so we surveyed around 800 children between the ages of 8 and 12, as well as their parents and carers, and surveyed a thousand teachers across the UK. Now this research revealed that around a quarter of children, 22% of children between the ages of 8 and 12, reported using generative AI technologies, and the majority of teachers, so three out of five teachers, reported using generative AI in their work. But we found really stark differences between uses of AI within private schools and state-funded schools, and this is in the UK context, with children in private schools much more likely both to use generative AI but also to report having information and understanding about generative AI, and this points to potentially really important issues around equity in access to the benefits of these technologies within education. We also found that children with additional learning needs or additional support needs were more likely to report using generative AI for communication and for connection, and also from the teacher survey we find that there was significant interest in using generative AI to support children with additional learning needs. This was also a finding that came out really strongly in work package two of this research.
Work package two was direct engagement with children between the ages of 9 and 11 through a series of workshops in primary schools in Scotland, and throughout these workshops we found that children were really excited about the opportunity to learn about generative AI, and they were really excited about the ways that generative AI could potentially be used to support them in education, and again there was a strong interest particularly in the ways that generative AI could be used to support children with additional learning needs. But we found also that in these workshops, where we invited children to take part in creative activities and we gave them the option of using either generative AI tools or more traditional tactile art materials, we found overwhelmingly that children chose to use traditional tactile hands-on art materials. You’ll see on the quote at the bottom, one of the sentiments that was expressed very often in these workshops was this feeling that art is actually real, and children felt that they couldn’t say that about AI art because the computer did it, not them. And I think this reveals some really important insights into the choices that children make about using digital technologies, and a reminder that those choices are not just about the digital technology, but about the alternative options available and the context and environments in which children are making those choices. Through the research, children also highlighted a number of really important concerns that they had around the impacts of generative AI. And I just want to flag some of these briefly now. One of the major themes that came out through this work was a concern around bias and representation in AI models and the outputs of AI models. Over the course of six full-day workshops in schools in Scotland, we were using generative AI tools, and in this case, it was OpenAI’s ChatGPT and DALL-E, to create a range of different outputs. And we found that each time children wanted an image of a person, it would by default create an image of a person that was white and predominantly a person that was male. Children identified this themselves and they were very concerned about this. They were very upset about this. But particularly for children of colour who were not represented through the outputs of these models, we found that children became very upset when they didn’t feel represented. And in many cases, children who didn’t feel represented by the outputs of models chose not to use generative AI in the future and didn’t want to use generative AI in the future. So it’s not just about the impact on individual children. It’s also about adoption of these tools and how representation feeds into that. Another big area of concern was the environmental impacts of generative AI. And this is something that we found has come out really consistently through all the work we’ve done engaging children and young people in discussions around AI. Where children have awareness or access to information about the environmental impacts of generative AI models, they often choose not to use those models. And we found in these workshops that where children learnt about the environmental impact, particularly the water consumption of generative AI models and the carbon footprint of generative AI models, they chose not to use those models in the future.
And they also pointed to this as an area in which they wanted policy makers and industry to take urgent action to address the environmental impacts of these models, but also to provide transparent, accessible information about the environmental impacts of those models. Finally, there were also big concerns around the ways that generative AI models can produce inappropriate and sometimes potentially harmful outputs. And children felt that they wanted to make sure that there were systems in place to ensure that children had access to age-appropriate models and that wouldn’t risk exposure to harmful or inappropriate content. Now, finally, I just wanted to also share some messages from the Children’s AI Summit, which was an event that we held in February of this year. This was an event that my team at the Alan Turing Institute ran in partnership with Queen Mary University of London, and it was supported by the Lego Group, Elevate Great and EY. The event brought together 150 children and young people between the ages of 8 and 18 from right across the UK for a full day of discussions, exploring their hopes and fears around how AI might be used in the future, and also setting out their messages for what they wanted to see on the agenda at the AI Action Summit in Paris. From the Children’s AI Summit, we produced the Children’s Manifesto for the Future of AI, and I’d really urge, encourage you to look it up and have a read. It’s written entirely in the words of the children and young people who took part, and it sets out their messages for what they want world leaders, policymakers, developers to know when thinking about the future of AI. I just want to finish with a couple of quotes from the children and young people who took part in the Children’s AI Summit, and their message is really for you all here today about what needs to be taken on board when thinking about the role of children in these discussions. So firstly from Ethan, who is 16, and he says, Hear us, engage with us, and remember, AI may be artificial, but the consequences of your choices are all too real. And secondly, we have a quote from Alexander, Ashvika, Eva, and Mustafa, who were all aged 11, and they presented jointly at the Children’s AI Summit. And they said, we don’t want AI to make the world a place where only a few people have everything and everyone else has less. I hope you can make sure that AI is used to help everyone to make a safe, kind, and fair world. And I think that sums up the ethos of the Children’s AI Summit perfectly, and is also a mission that we really all need to get behind and make a reality. Thank you.


Adam Ingle: Thanks, Mhairi, and to Stephen and Maria as well for just some really exciting research findings. We’re going to move towards a panel session now. So we’ll take questions from the audience, both in person and online. So if you’d like to think about some questions, feel free to then ask them. If you’re online, you can ask the online moderator, Lisa, who will ask those questions for you. I’ve got a few myself, though, and we’re actually waiting for Zahra, our young representative, to join. I think there’s been some technical difficulties there, so hopefully she’ll be joining us soon so we can hear directly from her. But to start things off, I think we heard a lot in the research. Kids are already using AI. Children are already using AI across multiple different contexts for multiple different purposes. I think I just want to take a step back and just ask, are children ready for AI, or is AI ready for children? Just as an open question to all the panellists here.


Dr. Mhairi Aitken: I’ll give that one a go. I mean, I think some of the big challenges that we’re finding so far is that these tools, we know that children of all ages are already interacting with AI on a daily basis. And that starts with infants, preschool kids, playing with smart toys and smart devices in the home, through to generative AI technologies and the ways that AI is also used online on social media. And a lot of the problems here is that these tools are being used by children and young people of all ages, but they’re not designed for children and young people. And we know that the ways that children interact with AI systems are often very different from how adults engage with those tools, or digital technologies more generally, and often very different from how the designers or developers of those systems anticipate that those tools might be used. And now I think there’s possibly a risk that we then put the kind of the burden or the expectation on children and young people themselves to kind of police those online interactions, to take approaches to be safe online, whereas actually, the burden has to be on, you know, the developers, the policymakers and the regulators to make sure that those systems are safe, and that there are age-appropriate tools and systems available for children to access and benefit from.


Stephen Balkam: Yeah, this feels like deja vu all over again. I was very much involved in the web 1.0 back in the mid-90s. And it became very clear that the World Wide Web was not designed with kids in mind. And we had to sort of retrofit websites and create parental controls for the first time, but never really caught up. And then web 2.0 came along around 2005, 2006. And sites like MySpace, and then Facebook, again, just took off first in colleges, then in high schools, then all the way down to elementary grade school level. Once again, not with kids in mind. And we’re just repeating that one more time with this AI revolution. And there’s a great deal of concern, particularly around the degree to which kids will trust chatbots, for instance. We’re seeing a lot of emotional attachments of quite young kids talking to chatbots, thinking that they are real, and sort of unloading their own personal thoughts to them. And for older teens and for college-based kids, the fact that they’re using Gen AI for doing their work, doing their homework, doing their projects and essays, means that they’re not developing critical thinking skills, but going straight to Gen AI for results. And that probably is of greater concern.


Adam Ingle: Thank you, Stephen. Maria, do you have any contributions to that question?


Maria Eira: Yeah, I agree, definitely, with everything that was said. Just adding that it’s not just that the AI systems are not ready or the kids are not ready for the AI, but the whole environment. So in terms of AI literacy, most people don’t really understand what AI is or how it works, whether it’s like a type of magic, when at the end of the day it’s actually just computations and statistical models. And so it’s not just the technology that was not developed, but it’s the whole environment, so in terms of AI literacy in schools and so on.


Adam Ingle: Thank you. I’ve definitely got some more questions, but I can see we have someone in the audience that would like to ask a question. So please introduce yourself and ask the panel.


Mariana Rozo‑Paz: Thank you. Hi, everyone. I’m Mariana from the DataSphere Initiative. I hope you can hear me well. Okay. So we are the DataSphere Initiative. We have a youth project that has been engaging young people for a couple of years. And I wanted to thank you all for the amazing presentations and the amazing work that you’re doing. And I think it’s actually very, very important that we have all of these stats, numbers, stories, experiences, and thank you also for starting with a video from children and closing up with quotes. And this introduction is just to say that we’re starting a new phase in our project and we’re starting to focus on influencers: not just kids that are becoming influencers, but children that are sometimes being turned into influencers by their parents, which also has mind-blowing stats, and adults that are becoming influencers and that are directly influencing children, not only to consume and buy their products or other products. We’re also looking into AI agents as influencers in this digital space, and, as I think one of the girls sharing her story was saying, it’s not just that they’re influencing or generally affecting their digital lives, but actually their very concrete lives and the relationships that they have with each other. So I just wanted to ask, and I think Stephen was already mentioning a bit around the influence of other children and maybe even social media: have you done any research on how influencers are shaping the space and how children and youth are experiencing social media in general? And have you started to ask about AI agents and how that is influencing, particularly, the relationships that they have in real life? I think that was a bit of a lot of questions, but thanks again so much.


Stephen Balkam: Yeah, I’ll try to respond to part of what you were saying. I mean, the technology is moving so fast that it’s incredibly hard for the research to keep up, is number one. No, we haven’t yet asked about AI bots being an influencing factor, although we are anecdotally seeing kids, teens, young adults and adults using AI for therapy. I mean, literally talking through, for hours at a time, deep emotional issues that they have and getting responses from ChatGPT and others in a way that is very positive and self-reinforcing, but also potentially extremely dangerous in the sense that an artificial intelligence bot is not human, will not be able to pick up on body cues and all the rest of it, and may not actually be able to challenge you in a way that a real human therapist will. One other point I’ll get to quickly, the whole influencing world: there’s new legislation that’s been popping up, in the United States at least, that will at least compensate kids who’ve been part of a family, you know, vlog all their childhood, a bit like kid movie stars were back in the 30s. So now at least they’re getting compensation and, when they turn 18, a right to delete the videos that they had no true consent to be a part of. But there’s a broader societal question about monetizing our kids. We are not in favor of that, particularly because there’s no way that a 7, 8, 9-year-old can give consent. Yes, please film me every day and post this online so that I can go through college and you don’t have to pay, mom and dad. So anyway, maybe we’ll talk later because you had a lot of different points in there.


Dr. Mhairi Aitken: Maybe I could just pick up on, I guess, how this relates to the growth of AI companions in this context. I suppose influence isn’t necessarily something that we’ve looked at so much in our research, which is mostly focused on 8 to 12-year-olds, not to say that they’re not already being influenced, and many of them, certainly 12-year-olds, are beginning to be on social media. But AI companions, I think, is an area that we really need to urgently get to grips with. There are more and more of these AI companions, AI personas, that are clearly being marketed towards children and young children, and we don’t really yet know what the impacts of that might be. There’s growing research, but we need more, and we need more action to be taken on this, including on AI companions that are marketed as addressing challenges of loneliness but then potentially creating a dependence or a connection to something that is very much outside of society and community, and potentially exacerbating those challenges, which brings a particular set of risks to address. At the Children’s AI Summit, which again involved children between the ages of 8 and 18, there was a lot of interest among teenagers in potentially using AI companions to support children in terms of mental health, and there was a lot of interest in how that could be done. But what would it mean to design and develop these tools in ways that are age-appropriate, that are safe, that have children’s well-being and children’s mental health as part of the design process, as a key element in the design process? At the moment the risk is that these tools are being developed and promoted without children’s well-being and children’s interests in mind in the development process, but they are increasingly being relied on and used for those purposes. So I think, yeah, it’s an area where we’re seeing a lot of interest from children and young people, but with a recognition that this needs to be done responsibly, safely and cautiously. Thanks.


Adam Ingle: Leanda, I see you’ve got a question. Please.


Leanda Barrington-Leach: Hello everyone, I’m Leanda Barrington-Leach, Executive Director of the 5Rights Foundation. Thank you so much for the presentations and for the research you’re doing, which is absolutely fabulous. I could ask lots of things and I could comment on lots of things, but given what you’re saying about the importance of designing AI with children’s rights in mind from the start, I just wanted to take the opportunity to raise awareness that there are regulatory and technical tools out there to do this, in particular the Children and AI Design Code, which the Alan Turing Institute also contributed to. This was work that brought AI experts, children’s rights experts and many others together over a very long period of time to develop a technical protocol for innovation that puts children’s rights at the center. So I just wanted to draw awareness to this: we all agree that it’s so important, but there are actually tools out there to make it happen. Thank you.


Adam Ingle: Thanks, Leanda. Lisa, I think we’ve got an online question.


Co-Moderator: We do indeed. Katarina, who is studying law in the UK, AI law specifically, is asking two questions. First: should AI ethics for children be separated from general AI ethics? Second: do you think there should be state-level legislation or policies for AI systems targeting children specifically? Thank you.


Adam Ingle: Maria, I’ll pass to you first if you want to answer either of those questions.


Maria Eira: Yes, sure. Thank you for your question, it’s very relevant indeed. And definitely, yes, children should have separate legislation. Separate legislation should target children because children don’t have the same capacity to consent, or, let’s say, the same awareness of consent. There are several principles that cannot be carried over directly from adults to children, so we definitely need to have children’s rights in mind when developing this legislation.


Adam Ingle: Thanks, Maria. Stephen or Mhairi, do you want to comment? Just one of you, because we’ve got a few questions and I do want to get to everyone.


Dr. Mhairi Aitken: Yeah, I mean, I would agree that children have particular rights, they have particular needs, unique needs and experiences that should be addressed. I guess one other part of it is that if we design this well for children and if we get the regulatory requirements, policy requirements right for children, this benefits well beyond children as well. An AI system that’s designed well with children in mind is also going to have benefits in terms of other vulnerable users and wider user groups. So I think yes, there are unique perspectives, unique considerations that should be addressed, but the benefits go beyond that.


Adam Ingle: Before I go to other questions in the room, I’d like really quick responses from the panel. Leanda mentioned the Children and AI Design Code, which is a tool to help companies think about how to build AI in a way that supports children’s rights and well-being. We’ve got tools like this, so what are the research gaps? What is the one, to your mind, outstanding research gap that needs to be addressed before we can really be confident that there is a child-centric approach to AI development? Maybe reflect on that as we take some other questions, and I’ll come back to it, because I do want to think about the research gaps and a path forward to really understanding how to do this responsibly. So let’s take a question from this gentleman here.


Joon Baek: Hello, my name is Joon Baek from Youth for Privacy. We are a youth NGO focused on digital privacy, so we want to ask about children’s rights in AI. At least in the context of privacy, there has been some legislation where, under the aim of protecting children’s data or safeguarding children online, there have been concerns about those kinds of laws creating privacy issues of their own. I was wondering whether, under the aim of protecting children when it comes to AI, there could be other kinds of rights that are put in question or violated. Is there anything we should be aware of?


Adam Ingle: So you’re talking about the trade-off between protecting children and other rights issues that might be developing. Yeah. Stephen? Maria?


Stephen Balkam: Pretty much. You know, this goes back to 1995; I mean, we’ve been struggling with the dichotomy between safety and privacy since the beginning of the web. In other words, the more safe you are, perhaps the more you’re giving up in terms of private information. Or the more private you are, maybe you’re not as safe as you could be. So trying to find a way that balances both has been at the core, certainly, of the work of my organization, but of many others too, and it is extremely hard for lawmakers to get that balance right. And then if you come from the U.S., you have this other axis, which is called free expression, which adds another layer of complexity, because you want people to be private, you want people and kids to be safe, but you also want free expression; one of the five rights, by the way, is the right to say what you want to be able to say. So it’s just going to be something which I don’t think we’ll ever completely get right, and we’re going to constantly have to compromise. But I don’t think it’s beyond our ability to reach those compromises.


Adam Ingle: Just noting time, I might move on to this gentleman here.


Participant: Hi, my name is Ryan, I’m 17 years old, and I’m a youth ambassador of the OnePAL Foundation in Hong Kong, where we advocate for digital sustainability and access. Thank you for the wonderful presentations. My question is this: AI for people with learning disabilities was raised as a significant prospect of AI by the children aged 8 to 12 in your research. So how can generative AI be further leveraged for the support and inclusion of people with disabilities? Thank you.


Adam Ingle: Thank you. Mhairi, drawing on your research, would you like to elaborate?


Dr. Mhairi Aitken: Yeah, it’s come out really strongly from all the work we’ve done engaging children and young people that this is an area where they’re really excited about the potential: they want to see AI developed in ways that will support children with additional learning needs, additional support needs and disabilities. And I guess what’s important, particularly in the education context of supporting children with additional learning needs, is that while there’s huge promise here, and teachers and children in our study recognise that, some of the challenges or current limitations are that a lot of edtech tools are being pushed and promoted that are not necessarily beginning with a sound understanding of the challenges they’re seeking to address, or of the needs of children with additional learning needs. I think we need to start developing these technologies from that place: if we want to develop something to support children with additional learning needs, it has to be grounded in a sound understanding of what those needs are and what the challenges are. And then maybe generative AI provides tools that provide a solution, but not always, not necessarily. We have to start by identifying the problems and challenges and then develop those tools responsibly to effectively address those challenges. That requires having expertise from teachers, from children, from specialists in these areas to guide the development of those tools and technologies. But it’s definitely an area where there’s huge promise and where it could be used really effectively and really valuably.


Adam Ingle: Thank you. Great to have a youth representative at the IGF. I mean, my gosh, I was probably playing unsafe video games when I was 17, rather than going to international forums, so incredibly impressive. Lisa, you’ve got a question from online.


Co-Moderator: I do indeed. I have a question from Grace Thompson from CAIDP, who’s asking: how are UNICRI and the other entities represented on the panel working with national government officials on capacity building for school principals, counselling teams, and the entire ecosystem, to prepare adults to protect our children and adolescents?


Adam Ingle: Maria, I think that’s one for you.


Maria Eira: Yeah, sure. Thank you for your question, Grace. So as I was showing before, we are developing AI literacy resources for parents, which we will try to disseminate as much as possible: basically, recommendations for parents to guide their children on the use of this technology. So this is one thing. Then we are also trying to work with governments, and particularly with judges and law enforcement, to promote AI literacy. We do a lot of capacity building for law enforcement officers worldwide to explain what AI is and how to use it in a responsible way, and we have guidelines developed with Interpol. So this is more on the law enforcement side. We would love to reach other representatives from government and to implement AI literacy workshops and programs in schools. We have run such a workshop in a school in the Netherlands, which was partly to collect adolescents’ perspectives, but it also had a component explaining what AI is, what the risks and benefits are, and some best practices for using it well. We would love to scale this up, and we are right now in conversations with the Netherlands and with other countries to understand if we can really develop a full program that can be implemented in schools. But everything is still being developed; the technology is really recent, everyone is trying to be prepared for this, and, yeah, we are still working on that.


Adam Ingle: Thanks, Maria. We’ll take one final question from the room, and then I will do a quick lightning round among the panelists on two questions: what’s one research area we still need to explore to move towards child-centric AI, and what’s one thing companies can do right now to make AI more appropriate for children? Quick answers to those two questions, but first, please, the lady here.


Participant: Hello, my name is Elisa. I’m also from the OnePAL Foundation, just like Ryan. I see a big issue in children communicating with AI about their personal issues, as children are in a much more vulnerable situation and position, and AI is the bigger person in that conversation. So my question is, how can we design AI so that it doesn’t increase that power imbalance between the child and the all-knowing AI?


Adam Ingle: I didn’t quite get the end of that question. Sorry, could you just repeat it?


Participant: My question is, how can we design AI so that the independence of the child is increased and there is no power imbalance between the child and the AI?


Adam Ingle: You want to try that?


Dr. Mhairi Aitken: Yeah, I mean, I think in all these interactions, one thing that’s absolutely crucial is transparency around the nature of the AI system, and also around how data might be collected through those interactions: potentially used to train future models, or collected by the organization or company developing and owning those models. And if I can tie this into your earlier question around what’s needed, because I think it is actually related, it’s that kind of critical AI literacy. We hear a lot about the importance of AI literacy and increasing understanding of AI, but what I think is really important is that critical literacy: improving understanding not just of how these systems work technically, but of the business models behind them, how they affect children’s rights, and the impact that those systems have. So I think that’s where we need more research, but it’s also what’s needed to enable children to make informed choices about how they use those systems.


Adam Ingle: I love that you tied your answer into both questions; that’s already saved us a lot of time. Stephen, 15 seconds.


Stephen Balkam: What she said. That’s easy.


Adam Ingle: Maria, one thing we can do in research, or one thing companies can do right now?


Maria Eira: Yeah, so in research, we are still understanding the long-term impact of this technology. We still don’t know, and the literature reflects this: we have very contradictory results, with some papers saying that AI can improve critical thinking and others saying that AI can actually decrease critical thinking. I think we are still in a period where we are trying to understand exactly what the long-term impact of this technology will be. And then, on what companies should do, I think the girl in the video at the beginning said it exactly: the goal cannot be the profits, it must be the people. So if companies really focus on children when developing these tools, targeting and having children in mind, we can actually develop good tools for everyone.


Adam Ingle: Thanks, Maria. The goal should not be the profit, it should be the people; I think that is a great lesson coming out of this session. That’s all we have time for. Thank you so much for joining us in the room and online, and please, if you’ve got any more questions, feel free to approach Stephen and Mhairi or get in contact with Maria. Thank you to all the young people that engaged with this session, and thank you from the LEGO Group as well. So we’ll end it there and we’ll see you soon. Bye. Thank you.


Online Participants

Speech speed

149 words per minute

Speech length

412 words

Speech time

165 seconds

Young people view AI as advantageous when used correctly but potentially devastating when misused

Explanation

Young people recognize AI as a powerful tool that can provide significant benefits when properly utilized, but they also acknowledge its potential for causing serious harm when misapplied. They emphasize the importance of viewing AI as a tool to aid humans rather than replace them.


Evidence

Students stated ‘AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans. That’s why we must view AI as a tool to aid us, not to replace us.’


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Human rights | Sociocultural


AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills

Explanation

Young people argue that instead of prohibiting AI use in educational settings, schools should integrate AI education that emphasizes essential skills like critical thinking, fact-checking, and responsible usage. They believe students need preparation for an AI-integrated world while learning not to become overly dependent on the technology.


Evidence

Students noted ‘Right now, students are memorising facts by adaption, while AI is outpacing that system entirely. Rather than banning AI in schools, we should teach students how to use it efficiently. Skills like fact-checking, critical thinking, and quantum engineering aren’t optional anymore, they’re essential.’


Major discussion point

Educational Impact and Equity Issues


Topics

Sociocultural | Human rights


Disagreed with

– Dr. Mhairi Aitken

Disagreed on

Approach to AI literacy and education


Privacy is a basic right, not a luxury, and AI data collection must be protected

Explanation

Young people emphasize that privacy should be considered a fundamental right rather than an optional benefit. They express concern about the valuable data that AI systems collect and the potential for this data to be misused to harm the very people it’s supposed to help.


Evidence

Students stated ‘Privacy is not a luxury, it’s a basic right. The data that AI collects is valuable, and if it’s not protected, it can be used to hurt the very people it’s supposed to help.’


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Balkam
– Dr. Mhairi Aitken

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


AI training requires massive resources including thousands of liters of water and extensive GPU usage

Explanation

Young people demonstrate awareness of the significant environmental costs associated with training AI models. They highlight the substantial resource consumption required for AI development, including water usage and computational power.


Evidence

Students noted that training LLMs consumes ‘thousands of litres of water’ and that ‘GPT-3 require over 10,000 GPUs over 15 days. Hundreds of LLMs are being developed, and their environmental impact is immense.’


Major discussion point

Environmental and Ethical Concerns


Topics

Development | Sociocultural


Agreed with

– Dr. Mhairi Aitken

Agreed on

Environmental impacts of AI are significant concerns that influence children’s usage decisions


Young people must be part of AI conversations as they are affected now, not just in the future

Explanation

Young people assert their right to participate in current AI discussions and decision-making processes. They reject the notion that they are only stakeholders for the future, emphasizing that AI impacts their lives today and their voices should matter in shaping the technology.


Evidence

Students stated ‘Young people like me must be part of this conversation. We aren’t just the future, we’re here now. Our voices, our experiences, and our hopes must matter in shaping this technology.’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Dr. Mhairi Aitken
– Adam Ingle

Agreed on

Children must be meaningfully included in AI decision-making processes


Adults should listen to children more because they have valuable ideas about AI development

Explanation

Young people advocate for greater inclusion of children’s perspectives in AI development discussions. They believe that children possess valuable insights and ideas that should be considered alongside adult viewpoints when making decisions about AI technology.


Evidence

Students said ‘I think adults should listen to children more because children have lots of good ideas, as well as adults, with AI.’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Dr. Mhairi Aitken

Speech speed

196 words per minute

Speech length

2780 words

Speech time

847 seconds

Around 22% of children aged 8-12 report using generative AI, with three out of five teachers using it in their work

Explanation

Research findings show that a significant portion of young children are already engaging with generative AI technologies, while the majority of teachers are incorporating these tools into their professional practice. This indicates widespread adoption across educational settings.


Evidence

National survey of around 800 children between ages 8-12, their parents and carers, and 1000 teachers across the UK


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Sociocultural | Human rights


Children are the group most impacted by AI advances but least represented in decision-making about AI development

Explanation

There is a fundamental disconnect between who is most affected by AI technology and who has input into its development. Children, despite being the demographic that will experience the greatest long-term impact from AI advances, have minimal representation in the decision-making processes that shape these technologies.


Evidence

Four years of research projects at the Alan Turing Institute’s children and AI team, including collaborations with UNICEF, the Council of Europe, and the Scottish AI Alliance and Children’s Parliament


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Online Participants
– Adam Ingle

Agreed on

Children must be meaningfully included in AI decision-making processes


Stark differences exist between AI use in private schools versus state-funded schools, pointing to equity issues

Explanation

Research reveals significant disparities in AI access and education between different types of schools. Children in private schools are much more likely to use generative AI and have better understanding of these technologies, creating potential inequalities in access to AI benefits.


Evidence

UK-based research showing children in private schools much more likely to both use generative AI and report having information and understanding about generative AI


Major discussion point

Educational Impact and Equity Issues


Topics

Development | Human rights | Sociocultural


The burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions

Explanation

Rather than placing responsibility on children to navigate AI systems safely, the primary obligation should rest with those who create and regulate these technologies. Children interact with AI systems differently than adults and often in ways not anticipated by developers.


Evidence

Recognition that children interact with AI systems differently from adults and often differently from how designers or developers anticipate those tools might be used


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Balkam
– Adam Ingle

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Children with additional learning needs show particular interest in using AI for communication and support

Explanation

Research indicates that children with additional support needs or learning disabilities are more likely to utilize generative AI for communication purposes and connection. There is significant interest from both children and teachers in leveraging AI to support children with additional learning needs.


Evidence

Survey findings showing children with additional learning needs more likely to report using generative AI for communication and connection, plus teacher interest in using AI to support these children


Major discussion point

Educational Impact and Equity Issues


Topics

Human rights | Sociocultural


Agreed with

– Online Participants

Agreed on

AI has significant potential to support children with additional learning needs and disabilities


AI models consistently produce biased outputs, predominantly showing white and male figures

Explanation

When children used generative AI tools to create images of people, the systems defaulted to producing images of white, predominantly male individuals. This consistent bias in AI outputs was identified and caused concern among the children using these tools.


Evidence

Six full-day workshops in Scottish schools using OpenAI’s ChatGPT and DALL-E, where each time children wanted an image of a person, it would by default create an image of a person that was white and predominantly male


Major discussion point

Bias and Representation Issues


Topics

Human rights | Sociocultural


Children of color become upset and choose not to use AI when they don’t feel represented in outputs

Explanation

When AI systems fail to represent children of color in their outputs, these children experience emotional distress and subsequently choose to avoid using the technology. This lack of representation not only impacts individual children but also affects broader adoption patterns of AI tools.


Evidence

Observations from workshops showing children of colour becoming very upset when not represented, and in many cases choosing not to use generative AI in the future


Major discussion point

Bias and Representation Issues


Topics

Human rights | Sociocultural


Children who learn about environmental impacts of AI models often choose not to use them

Explanation

When children gain awareness of the environmental costs associated with generative AI models, including water consumption and carbon footprint, they frequently make the conscious decision to avoid using these technologies. This pattern has been consistent across multiple research engagements with children and young people.


Evidence

Consistent findings across all work engaging children and young people, where children with awareness of environmental impacts, particularly water consumption and carbon footprint of generative AI models, chose not to use those models


Major discussion point

Environmental and Ethical Concerns


Topics

Development | Human rights


Agreed with

– Online Participants

Agreed on

Environmental impacts of AI are significant concerns that influence children’s usage decisions


AI companions marketed to children raise concerns about dependence and isolation from real community

Explanation

The growing market of AI companions specifically targeted at children presents risks of creating unhealthy dependencies and potentially exacerbating social isolation. While these tools are often marketed as solutions to loneliness, they may actually increase disconnection from real human relationships and community engagement.


Evidence

Growing research on AI companions marketed as addressing challenges of loneliness but potentially creating dependence or connection outside of society and community


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


Transparency about AI system nature and data collection is crucial for child interactions

Explanation

For children to safely interact with AI systems, it is essential that they understand what they are interacting with and how their data might be collected or used. This transparency should include information about the AI system’s capabilities, limitations, and data practices.


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Online Participants
– Stephen Balkam

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding

Explanation

While technical AI literacy is important, children need a deeper understanding that includes the business models behind AI systems and how these technologies affect their rights. This critical approach goes beyond just understanding how AI works to understanding why it works the way it does and who benefits.


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Sociocultural


Disagreed with

– Online Participants

Disagreed on

Approach to AI literacy and education


More research is needed on AI’s role in supporting children with disabilities while ensuring proper understanding of their needs

Explanation

While there is significant promise for AI to support children with additional learning needs and disabilities, current development often lacks proper understanding of the specific challenges and needs these technologies should address. Research and development must be grounded in expertise from teachers, children, and specialists in these areas.


Evidence

Recognition that many edtech tools are being pushed without sound understanding of challenges they seek to address or needs of children with additional learning needs


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Development | Sociocultural


Agreed with

– Online Participants

Agreed on

AI has significant potential to support children with additional learning needs and disabilities


Designing AI well for children benefits other vulnerable users and wider user groups

Explanation

When AI systems are properly designed with children’s needs and rights in mind, the benefits extend beyond just children to other vulnerable populations and the general user base. Child-centric design principles create better, more inclusive AI systems overall.


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


Maria Eira

Speech speed

134 words per minute

Speech length

1688 words

Speech time

750 seconds

Parents who regularly use generative AI feel more positive about its impact on their children’s development

Explanation

Research shows a clear correlation between parents’ familiarity with generative AI technology and their attitudes toward its impact on their children. Parents who use AI regularly view it more positively across multiple areas including critical thinking, career development, and social work, while unfamiliar parents tend to be negative about AI’s impact.


Evidence

Worldwide survey from 19 countries showing regular users (yellow bars) feel much more positive about AI’s impact on critical thinking, career, social work, and general child development compared to unfamiliar parents (blue bars) who were negative in all fields


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Human rights | Sociocultural


There is a lack of awareness from parents and low communication between parents and children about AI use

Explanation

Research reveals significant gaps in parental understanding of how their adolescent children use generative AI, particularly for personal purposes. While parents are aware of academic uses, they often don’t know or disagree that their children use AI for more personal matters like companionship or health advice.


Evidence

Survey targeting parents of adolescents aged 13-17 showing over 80% of parents aware of AI use for information search and school assignments, but for personal uses like AI companions or health advice, most popular responses were ‘I disagree’ or ‘I don’t know’


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Company goals should focus on people rather than profits when developing AI tools for children

Explanation

When developing AI technologies for children, companies should prioritize human welfare and child wellbeing over financial gains. This principle emphasizes the need for ethical development practices that put children’s needs and safety first.


Evidence

Reference to student comment from opening video: ‘the goal cannot be the profits, it must be the people’


Major discussion point

Environmental and Ethical Concerns


Topics

Human rights | Economic


Long-term impacts of AI technology on children remain unclear with contradictory research results

Explanation

Current research on AI’s effects on children shows conflicting findings, making it difficult to draw definitive conclusions about long-term impacts. Some studies suggest AI can improve critical thinking while others indicate it may decrease these skills, highlighting the need for more comprehensive research.


Evidence

Literature review showing contradictory results with some papers saying AI can improve critical thinking while others say AI can decrease critical thinking


Major discussion point

Research Gaps and Future Needs


Topics

Human rights | Sociocultural


Children should have separate AI legislation because they cannot give the same consent as adults

Explanation

Children require distinct legal protections regarding AI because they lack the same capacity for informed consent as adults. Several principles applicable to adults cannot be directly applied to children, necessitating specialized legislation that considers children’s unique vulnerabilities and developmental needs.


Evidence

Recognition that children don’t have the same awareness of consent and several principles cannot be fully applicable from adults to children


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


Stephen Balkam

Speech speed

139 words per minute

Speech length

2192 words

Speech time

941 seconds

Teens thought their parents knew more about generative AI than they did, contrary to previous technology trends

Explanation

Unlike previous technological developments where children typically led adoption, research found that teenagers believed their parents had better understanding of generative AI. This reversal occurred because many parents were learning AI tools for work purposes or to stay relevant in their careers.


Evidence

2023 three-country study (US, Germany, Japan) with parents and teens, showing a sizable share of teens in all three countries reporting that their parents had the better understanding, as parents grappled with using gen AI at work


Major discussion point

Children’s Current Use and Understanding of AI


Topics

Sociocultural | Human rights


AI systems are not designed with children in mind, requiring retrofitting for safety like previous web technologies

Explanation

The development of AI technology is repeating the same pattern as previous internet technologies, where systems are created without considering children’s needs and safety, then require after-the-fact modifications. This pattern occurred with Web 1.0 in the mid-90s and Web 2.0 around 2005-2006, and is now happening again with AI.


Evidence

Historical examples of World Wide Web not designed with kids in mind requiring retrofitted parental controls, and social media sites like Myspace and Facebook expanding from colleges to elementary schools without child-focused design


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Cybersecurity


Agreed with

– Dr. Mhairi Aitken
– Adam Ingle

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Students are increasingly using Gen AI to do their work rather than just proofread it, potentially impacting critical thinking development

Explanation

There has been a concerning shift in how students use generative AI, moving from using it as a tool for proofreading and summarizing to having it complete entire assignments. This trend raises concerns about students not developing essential critical thinking skills.


Evidence

Comparison between initial study findings where teens used AI for ‘proofreading and summarizing long texts’ versus current observations of ‘teens and young people increasingly using Gen AI to do their work for them, their essays, their homework’


Major discussion point

Educational Impact and Equity Issues


Topics

Sociocultural | Human rights


Data transparency is top priority for parents and teens regarding AI companies

Explanation

Research shows that both parents and teenagers prioritize understanding how AI companies collect, use, and source their data. They want companies to be more forthcoming about data practices and to provide clear explanations about how AI systems work and whether the information can be trusted.


Evidence

Survey results showing ‘transparency of data practices’ as top of list for what parents and teens want to learn, and ‘steps to reveal what’s behind Gen AI and how data is sourced and whether it can be trusted’ as key element


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Agreed with

– Online Participants
– Dr. Mhairi Aitken

Agreed on

Data transparency and privacy protection are fundamental concerns for AI systems used by children


There’s an ongoing struggle to balance safety and privacy, with more safety potentially requiring less privacy

Explanation

The relationship between online safety and privacy creates a persistent dilemma where increasing one often means decreasing the other. This challenge has existed since the beginning of the web and becomes more complex when adding considerations like free expression rights.


Evidence

Reference to struggling with ‘the dichotomy between safety and privacy since the beginning of the web’ since 1995, with additional complexity from free expression rights in the US context


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Disagreed with

– Joon Baek

Disagreed on

Balance between safety and privacy in AI regulation


Young children are forming emotional attachments to chatbots and using AI for therapy-like conversations

Explanation

There is growing concern about children, teens, and young adults developing emotional dependencies on AI chatbots, using them for extended therapeutic conversations. While these interactions can feel positive and self-reinforcing, they lack the human elements essential for proper mental health support.


Evidence

Anecdotal observations of kids, teens, young adults and adults using AI for therapy, literally talking through deep emotional issues for hours at a time, with responses from ChatGPT and others


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


Leanda Barrington‑Leach

Speech speed

173 words per minute

Speech length

181 words

Speech time

62 seconds

There are existing regulatory and technical tools like the Children and AI Design Code to implement child-centric AI development

Explanation

Regulatory and technical solutions already exist to address the need for child-focused AI development. The Children and AI Design Code represents a collaborative effort between AI experts, children’s rights experts, and other stakeholders to create practical protocols for innovation that prioritizes children’s rights.


Evidence

Reference to the Children and AI Design Code as work that ‘brought AI experts and children’s rights experts and many others together over a very long period of time to develop a technical protocol for innovation that puts children’s rights at the center’


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


Adam Ingle

Speech speed

169 words per minute

Speech length

1180 words

Speech time

418 seconds

The workshop aims to elevate children’s voices in AI design without being patronizing to their views

Explanation

The session is specifically designed to ensure children are part of decision-making processes regarding AI development. The approach emphasizes treating young people’s perspectives with respect and incorporating their real ideas about the future of AI rather than dismissing them as less valuable than adult opinions.


Evidence

Workshop called ‘Elevating Children’s Voices in AI Design’ with participation from young people sharing experiences and hopes, including video messages and panel participation


Major discussion point

Youth Participation and Rights


Topics

Human rights | Sociocultural


Agreed with

– Online Participants
– Dr. Mhairi Aitken

Agreed on

Children must be meaningfully included in AI decision-making processes


Children are already using AI and the question is whether children are ready for AI or AI is ready for children

Explanation

This fundamental question addresses the current reality that children are actively engaging with AI technologies across multiple contexts and purposes. The framing suggests examining whether the responsibility lies with preparing children for AI or ensuring AI systems are appropriately designed for children.


Evidence

Research findings showing kids are already using AI across multiple different contexts for multiple different purposes


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Agreed with

– Dr. Mhairi Aitken
– Stephen Balkam

Agreed on

AI systems are not designed with children in mind and require child-centric development from the start


Mariana Rozo‑Paz

Speech speed

161 words per minute

Speech length

323 words

Speech time

119 seconds

AI agents as influencers are directly affecting children’s real-life relationships and experiences

Explanation

The emergence of AI agents functioning as influencers presents new challenges beyond traditional human influencers or children becoming influencers themselves. These AI agents are not just affecting children’s digital lives but are having concrete impacts on their real-world relationships and social interactions.


Evidence

DataSphere Initiative youth project research focusing on influencers, including AI agents as influencers in digital spaces affecting children’s concrete lives and relationships


Major discussion point

AI Companions and Emotional Attachment


Topics

Human rights | Sociocultural


There are concerning trends in children being turned into influencers by their parents with mind-blowing statistics

Explanation

Research reveals troubling patterns where parents are converting their children into influencers, raising ethical concerns about consent, exploitation, and the commercialization of childhood. The scale of this phenomenon appears to be significant based on emerging data.


Evidence

DataSphere Initiative research on children being turned into influencers by parents with ‘mind-blowing stats’


Major discussion point

Youth Participation and Rights


Topics

Human rights | Economic


Joon Baek

Speech speed

173 words per minute

Speech length

124 words

Speech time

42 seconds

Privacy protection laws aimed at safeguarding children may inadvertently violate other rights

Explanation

There is concern that legislation designed to protect children’s data and ensure their online safety might create unintended consequences that compromise other fundamental rights. This highlights the complex balance required when creating protective measures for children in the AI context.


Evidence

Experience from Youth for Privacy NGO observing privacy issues in legislation aimed at protecting children’s data and safeguarding children online


Major discussion point

Privacy, Data Protection and Transparency


Topics

Human rights | Legal and regulatory


Disagreed with

– Stephen Balkam

Disagreed on

Balance between safety and privacy in AI regulation


Participant

Speech speed

150 words per minute

Speech length

203 words

Speech time

80 seconds

AI creates a power imbalance between children and AI systems that needs to be addressed through design

Explanation

Children are in a vulnerable position when communicating with AI about personal issues, as the AI appears to be the ‘bigger person’ or authority in the conversation. Design approaches should focus on increasing children’s independence and reducing this inherent power imbalance rather than reinforcing it.


Evidence

Recognition that children are in a more vulnerable situation and position when AI is the bigger person in conversations about personal issues


Major discussion point

AI Design and Child Safety Concerns


Topics

Human rights | Sociocultural


Co-Moderator

Speech speed

127 words per minute

Speech length

107 words

Speech time

50 seconds

There should be separate AI ethics and legislation specifically targeting children rather than applying general frameworks

Explanation

The question of whether AI ethics for children should be distinct from general AI ethics reflects recognition that children have unique needs, vulnerabilities, and rights that may not be adequately addressed by general AI governance frameworks. This suggests the need for specialized approaches to AI regulation and policy for children.


Evidence

Question from law student studying AI law specifically about separating children’s AI ethics from general AI ethics and state-level legislation for AI systems targeting children


Major discussion point

Regulatory and Policy Approaches


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

AI systems are not designed with children in mind and require child-centric development from the start

Speakers

– Dr. Mhairi Aitken
– Stephen Balkam
– Adam Ingle

Arguments

The burden should be on developers and policymakers to make systems safe rather than expecting children to police their interactions


AI systems are not designed with children in mind, requiring retrofitting for safety like previous web technologies


Children are already using AI and the question is whether children are ready for AI or AI is ready for children


Summary

All speakers agree that current AI systems are developed without considering children’s needs and safety, repeating historical patterns from previous web technologies. They emphasize that responsibility should lie with developers and policymakers rather than children themselves.


Topics

Human rights | Legal and regulatory


Children must be meaningfully included in AI decision-making processes

Speakers

– Online Participants
– Dr. Mhairi Aitken
– Adam Ingle

Arguments

Young people must be part of AI conversations as they are affected now, not just in the future


Children are the group most impacted by AI advances but least represented in decision-making about AI development


The workshop aims to elevate children’s voices in AI design without being patronizing to their views


Summary

There is strong consensus that children should have meaningful participation in AI governance and development decisions, as they are currently affected by these technologies and have valuable perspectives to contribute.


Topics

Human rights | Sociocultural


Data transparency and privacy protection are fundamental concerns for AI systems used by children

Speakers

– Online Participants
– Stephen Balkam
– Dr. Mhairi Aitken

Arguments

Privacy is a basic right, not a luxury, and AI data collection must be protected


Data transparency is top priority for parents and teens regarding AI companies


Transparency about AI system nature and data collection is crucial for child interactions


Summary

All speakers emphasize that transparency about data practices and privacy protection are essential requirements for AI systems that children use, viewing privacy as a fundamental right rather than optional feature.


Topics

Human rights | Legal and regulatory


AI has significant potential to support children with additional learning needs and disabilities

Speakers

– Dr. Mhairi Aitken
– Online Participants

Arguments

Children with additional learning needs show particular interest in using AI for communication and support


More research is needed on AI’s role in supporting children with disabilities while ensuring proper understanding of their needs


Summary

There is agreement that AI shows promise for supporting children with additional learning needs, though this must be developed with proper understanding of their specific requirements and challenges.


Topics

Human rights | Sociocultural


Environmental impacts of AI are significant concerns that influence children’s usage decisions

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI training requires massive resources including thousands of liters of water and extensive GPU usage


Children who learn about environmental impacts of AI models often choose not to use them


Summary

Both young people and researchers recognize the substantial environmental costs of AI development and note that awareness of these impacts influences children’s decisions about using AI technologies.


Topics

Development | Human rights


Similar viewpoints

Both speakers advocate for distinct legal and ethical frameworks for children’s AI use, recognizing that children have unique vulnerabilities and cannot provide the same informed consent as adults.

Speakers

– Maria Eira
– Co-Moderator

Arguments

Children should have separate AI legislation because they cannot give the same consent as adults


There should be separate AI ethics and legislation specifically targeting children rather than applying general frameworks


Topics

Human rights | Legal and regulatory


Both experts express concern about children developing unhealthy emotional dependencies on AI systems, particularly AI companions and chatbots used for personal or therapeutic purposes.

Speakers

– Stephen Balkam
– Dr. Mhairi Aitken

Arguments

Young children are forming emotional attachments to chatbots and using AI for therapy-like conversations


AI companions marketed to children raise concerns about dependence and isolation from real community


Topics

Human rights | Sociocultural


Both researchers emphasize the need for deeper understanding of AI’s impacts on children, going beyond technical literacy to include critical analysis of business models and rights implications.

Speakers

– Dr. Mhairi Aitken
– Maria Eira

Arguments

Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding


Long-term impacts of AI technology on children remain unclear with contradictory research results


Topics

Human rights | Sociocultural


Unexpected consensus

Parents’ superior knowledge of AI compared to children

Speakers

– Stephen Balkam
– Maria Eira

Arguments

Teens thought their parents knew more about generative AI than they did, contrary to previous technology trends


Parents who regularly use generative AI feel more positive about its impact on their children’s development


Explanation

This finding is unexpected because historically children have led technology adoption. The reversal occurred because parents were learning AI for work purposes, creating an unusual dynamic where parents had more AI knowledge than their children for the first time in digital technology evolution.


Topics

Sociocultural | Human rights


Children’s preference for traditional materials over AI tools in creative activities

Speakers

– Dr. Mhairi Aitken

Arguments

Children chose to use traditional, tactile, hands-on art materials over generative AI tools, feeling that traditional ‘art is actually real’ while discounting ‘AI art because the computer did it, not them’


Explanation

Despite children’s general interest in AI, when given the choice between AI and traditional creative tools, they overwhelmingly chose traditional methods. This unexpected preference reveals important insights about children’s values regarding authenticity and personal agency in creative expression.


Topics

Human rights | Sociocultural


Equity concerns creating barriers to AI adoption in education

Speakers

– Dr. Mhairi Aitken

Arguments

Stark differences exist between AI use in private schools versus state-funded schools, pointing to equity issues


Explanation

The emergence of AI creating new forms of educational inequality was unexpected, as it suggests that AI could exacerbate existing disparities rather than democratize access to educational tools. This finding highlights how technological advancement can inadvertently increase rather than reduce educational inequities.


Topics

Development | Human rights | Sociocultural


Overall assessment

Summary

There is strong consensus among speakers on fundamental principles: AI systems need child-centric design from the start, children must be included in AI governance decisions, privacy and transparency are essential rights, and AI shows promise for supporting children with additional needs while requiring careful attention to environmental impacts and bias issues.


Consensus level

High level of consensus on core principles with implications for urgent need for coordinated action across policy, industry, and research domains. The agreement suggests a clear path forward requiring collaboration between technologists, policymakers, educators, and children themselves to ensure AI development serves children’s best interests and rights.


Differences

Different viewpoints

Balance between safety and privacy in AI regulation

Speakers

– Stephen Balkam
– Joon Baek

Arguments

There’s an ongoing struggle to balance safety and privacy, with more safety potentially requiring less privacy


Privacy protection laws aimed at safeguarding children may inadvertently violate other rights


Summary

Stephen Balkam presents this as an inevitable trade-off that requires compromise, while Joon Baek raises concerns about unintended rights violations from protective measures, suggesting a more cautious approach to safety-focused legislation


Topics

Human rights | Legal and regulatory


Approach to AI literacy and education

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills


Critical AI literacy focusing on business models and rights impacts is needed beyond technical understanding


Summary

Young people emphasize practical skills like fact-checking and efficient AI use in schools, while Dr. Aitken advocates for deeper critical literacy that includes understanding business models and rights impacts


Topics

Human rights | Sociocultural


Unexpected differences

Children’s preference for traditional materials over AI tools

Speakers

– Online Participants
– Dr. Mhairi Aitken

Arguments

AI should be taught in schools rather than banned, with focus on critical thinking and fact-checking skills


Children who learn about environmental impacts of AI models often choose not to use them


Explanation

While young people in the video advocated for AI integration in education, research findings showed children often chose traditional tactile materials over AI tools and avoided AI when learning about environmental impacts. This reveals a gap between advocacy for AI education and actual usage preferences


Topics

Human rights | Sociocultural | Development


Overall assessment

Summary

The discussion showed remarkable consensus on core principles – that children need protection, representation, and age-appropriate AI design – but revealed nuanced differences in implementation approaches and priorities


Disagreement level

Low to moderate disagreement level with high consensus on fundamental goals. The main tensions were methodological rather than philosophical, focusing on how to achieve shared objectives rather than disagreeing on the objectives themselves. This suggests a mature field where stakeholders agree on problems but are still developing optimal solutions


Takeaways

Key takeaways

Children are already using AI extensively (22% of 8-12 year olds) but AI systems are not designed with children in mind, requiring urgent action to prioritize child-centric development


There is a significant communication gap between parents and children about AI use, particularly for personal applications, with parents who use AI themselves being more positive about its impact


AI literacy education focusing on critical thinking, fact-checking, and understanding business models behind AI systems is essential and should be integrated into schools rather than banning AI


Significant equity issues exist in AI access and education, with stark differences between private and state-funded schools creating potential digital divides


Children show strong concerns about bias and representation in AI outputs, environmental impacts, and inappropriate content, often choosing not to use AI when these issues are present


AI shows particular promise for supporting children with additional learning needs, but development must be grounded in understanding actual needs rather than pushing technology solutions


The burden of ensuring AI safety should fall on developers, policymakers, and regulators rather than on children, who should not be expected to police their own interactions


Children have a fundamental right to participate in AI decision-making processes that affect their lives, as they are the most impacted group but least represented in development decisions


Resolutions and action items

UNICRI and Disney are launching AI literacy resources (a 3D animated film for adolescents and a guide for parents) at the AI for Good Summit in two weeks


Technology companies should provide transparent explanations of AI decision-making, algorithm recommendations, and system limitations


Industry should fund research and programs to help children develop AI literacy and content discernment skills


AI tools should be designed with children in mind from the start, not as an afterthought, learning from past mistakes with web technologies


Companies should focus on people rather than profits when developing AI tools for children


Separate legislation specifically targeting children’s AI rights and protections should be developed, recognizing children’s unique consent and awareness limitations


Unresolved issues

Long-term impacts of AI technology on children remain unclear, with contradictory research results on effects such as critical thinking development


How to effectively balance safety and privacy rights in AI systems for children without compromising either


Addressing the environmental impact of AI models and providing transparent information about resource consumption to users


Developing age-appropriate AI companions that support mental health without creating dependency or isolation from real communities


Scaling AI literacy programs globally and implementing them effectively in school systems across different countries


Addressing the power imbalance between children and AI systems in personal conversations and interactions


How to ensure AI systems designed for children with disabilities are grounded in actual needs rather than technology-first approaches


Preventing the monetization and exploitation of children through AI-powered influencer marketing and family vlogging


Suggested compromises

Accepting that perfect balance between safety, privacy, and free expression may never be achieved, requiring constant compromise and adjustment


Designing AI systems well for children will benefit other vulnerable users and wider user groups, creating broader positive impact


Starting with problem identification and user needs assessment before applying AI solutions, rather than technology-first approaches


Combining transparency about AI system nature and data collection with critical AI literacy education to enable informed choices


Developing AI literacy resources that target both children and parents simultaneously to improve communication and understanding


Thought provoking comments

AI is extremely advantageous when used correctly. But when misused, it can have devastating effects on humans… Young people like me must be part of this conversation. We aren’t just the future, we’re here now.

Speaker

Online Participants (Young people from across the UK)


Reason

This opening statement immediately established the central tension of the discussion – AI as both opportunity and threat – while assertively claiming young people’s right to participate in decision-making. The phrase ‘we aren’t just the future, we’re here now’ powerfully challenges the common dismissal of children’s voices as merely preparatory for future relevance.


Impact

This comment set the entire tone for the workshop, establishing children as active stakeholders rather than passive subjects of protection. It influenced all subsequent speakers to frame their research and recommendations around meaningful youth participation rather than paternalistic approaches.


teens thought that their parents knew more about generative AI than they did. With previous trends, particularly in the early days of the web, and then web 2.0, and social media, kids were always way ahead of their parents in terms of the technology. But in this case, a large, sizable share of teens in all three countries recorded that their parents had a better understanding than they did.

Speaker

Stephen Balkam


Reason

This finding fundamentally challenges the conventional wisdom about digital natives and technology adoption patterns. It suggests a significant shift in how AI technologies are being introduced and adopted, with workplace necessity driving adult adoption ahead of youth exploration.


Impact

This observation reframed the entire discussion about digital literacy and family dynamics around AI. It led to deeper exploration of how AI literacy should be approached differently from previous technology rollouts and influenced subsequent speakers to consider intergenerational learning approaches.


art is actually real… children felt that they couldn’t say that about AI art because the computer did it, not them.

Speaker

Dr. Mhairi Aitken (quoting children from her research)


Reason

This insight reveals children’s sophisticated understanding of authenticity, creativity, and personal agency in relation to AI. It challenges assumptions that children will automatically embrace AI tools and shows their nuanced thinking about what constitutes genuine creative expression.


Impact

This comment shifted the discussion from focusing on AI capabilities to considering children’s values and choices. It introduced the important concept that technology adoption isn’t just about functionality but about meaning and identity, influencing how other panelists discussed the importance of providing alternatives and respecting children’s preferences.


parents who use generative AI tools feel more positive about the impact that this technology can have on their children’s development… when parents are familiar with the technology, when they use the technology, they see it differently.

Speaker

Maria Eira


Reason

This finding reveals a crucial insight about how personal experience with technology shapes attitudes toward children’s use of that technology. It suggests that fear and resistance may stem from unfamiliarity rather than inherent dangers, pointing toward education as a key intervention.


Impact

This observation led to discussion about the importance of adult AI literacy as a prerequisite for supporting children’s safe AI use. It influenced the conversation toward considering family-based approaches to AI education rather than child-focused interventions alone.


children are likely to be the group who will be most impacted by advances in AI technologies, but they’re simultaneously the group that are least represented in decision-making about the ways that those technologies are designed, developed, and deployed

Speaker

Dr. Mhairi Aitken


Reason

This statement crystallizes the fundamental injustice at the heart of current AI development – those most affected have the least voice. It frames the entire discussion in terms of rights and representation rather than just safety or education.


Impact

This comment elevated the discussion from technical considerations to fundamental questions of democracy and rights. It influenced subsequent speakers to consider not just how to protect children from AI, but how to include them in shaping AI’s development.


the goal cannot be the profits, it must be the people

Speaker

Maria Eira (quoting from the children’s video)


Reason

This simple but profound statement cuts to the heart of the tension between commercial AI development and human welfare. Coming from children themselves, it carries particular moral weight and clarity about priorities.


Impact

This comment served as a powerful conclusion that tied together many threads of the discussion. It reinforced the moral imperative for child-centered AI development and provided a clear principle for evaluating AI initiatives.


Should AI ethics for children be separated from general AI ethics?

Speaker

Katarina (online participant studying AI law)


Reason

This question forced the panel to articulate whether children’s needs are fundamentally different from adults’ or simply a subset of universal human needs. It challenged the assumption that child-specific approaches are necessary while opening space to consider the broader implications of child-centered design.


Impact

This question prompted important clarification from panelists about why children need specific consideration while also acknowledging that good design for children benefits everyone. It helped crystallize the argument for child-specific approaches while avoiding segregation of children’s interests from broader human rights.


Overall assessment

These key comments fundamentally shaped the discussion by establishing children as active stakeholders rather than passive subjects, challenging conventional assumptions about technology adoption and digital literacy, and elevating the conversation from technical considerations to questions of rights, representation, and values. The opening statement from young people set a tone of empowerment that influenced all subsequent speakers to frame their research in terms of meaningful participation rather than protection. The research findings about reversed technology adoption patterns and children’s sophisticated value judgments about authenticity added nuance and complexity to common assumptions. The discussion evolved from a focus on safety and education to encompass broader questions of democracy, representation, and the fundamental purposes of AI development. The interplay between research findings and direct youth voices created a rich dialogue that moved beyond typical adult-centric approaches to technology policy.


Follow-up questions

How are influencers (including AI agents as influencers) shaping children’s experiences with AI and social media, and how does this affect their real-life relationships?

Speaker

Mariana Rozo-Paz from DataSphere Initiative


Explanation

This addresses a gap in current research about the influence of AI agents and human influencers on children’s digital experiences and their concrete impact on real-world relationships


What are the long-term impacts of generative AI use on children’s development and well-being?

Speaker

Maria Eira


Explanation

Current research shows contradictory results about whether AI improves or decreases critical thinking skills, indicating a need for longitudinal studies


How can AI companions be designed responsibly to support children’s mental health without creating dependency or exacerbating loneliness?

Speaker

Dr. Mhairi Aitken


Explanation

There’s growing interest from children in using AI companions for mental health support, but current tools aren’t designed with children’s well-being in mind


How can generative AI be further leveraged for the support and inclusion of people with disabilities?

Speaker

Ryan (17-year-old youth ambassador)


Explanation

Children showed strong interest in AI supporting those with additional learning needs, but development needs to be grounded in understanding actual needs and challenges


How can AI be designed to reduce power imbalances between children and AI systems, particularly in personal conversations?

Speaker

Elisa from OnePile Foundation


Explanation

Children are in vulnerable positions when communicating with AI about personal issues, requiring design approaches that maintain child agency and independence


How can we develop critical AI literacy that goes beyond technical understanding to include business models and rights impacts?

Speaker

Dr. Mhairi Aitken


Explanation

Current AI literacy efforts focus on technical aspects, but children need to understand the broader implications including data collection, business models, and rights impacts to make informed choices


What are the impacts of using AI bots for therapy, particularly regarding emotional attachments and potential risks?

Speaker

Stephen Balkam


Explanation

Anecdotal evidence shows children and adults using AI for therapeutic conversations, but research is needed on the safety and effectiveness compared to human therapy


How can we address equity gaps in AI access and education between private and state-funded schools?

Speaker

Dr. Mhairi Aitken


Explanation

Research revealed stark differences in AI access and understanding between private and state schools, pointing to important equity issues that need addressing


How can we better understand and address parental awareness gaps regarding children’s personal use of generative AI?

Speaker

Maria Eira


Explanation

Research showed parents are aware of academic AI use but lack knowledge about personal uses like AI companions or seeking help for personal problems


What regulatory approaches can protect children’s rights in AI without violating other rights like privacy?

Speaker

Joon Baek from Youth for Privacy


Explanation

There are concerns that legislation aimed at protecting children in AI contexts might inadvertently compromise other rights, requiring careful balance


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.