WS #234 AI Governance for Children’s Global Citizenship Education

17 Dec 2024 06:30h - 08:00h


Session at a Glance

Summary

This discussion focused on the role of AI in global citizenship education for young people, exploring both opportunities and challenges. The session featured presentations from researchers, industry professionals, and youth representatives, offering diverse perspectives on the topic.


Speakers highlighted how AI can enhance cross-cultural communication and understanding among youth, with examples like Honda’s HAL robot facilitating interactions between students in different countries. However, concerns were raised about data privacy, bias in AI systems, and the need for child-centered design in AI development.


UNICEF’s work on AI and children’s rights was presented, emphasizing the importance of protecting children’s data and ensuring AI systems are transparent and accountable. The discussion also touched on the digital divide, with calls to make AI more accessible and relevant to children in developing countries and marginalized communities.


Youth perspectives were shared, noting both the benefits of AI in education and daily life, as well as potential risks like over-reliance on AI for academic work. The importance of teaching responsible AI use and critical thinking skills was stressed.


Mental health emerged as a key theme, with examples of AI being used to provide psychological support to refugee youth. However, speakers emphasized the need for sustainable, long-term programs and funding to address mental health issues effectively.


The discussion concluded with calls for greater inclusion of children’s voices in AI policy and development, ensuring that AI systems meet the diverse needs of youth across different contexts and cultures. Overall, the session underscored the potential of AI to enhance global citizenship education while highlighting the need for careful consideration of ethical and practical challenges.


Key points

Major discussion points:


– The need for AI systems to be designed with children’s needs and rights in mind, not just adults


– Concerns about AI’s impact on jobs, misinformation, and children’s privacy/data protection


– The importance of including diverse global perspectives, especially from developing countries, in AI development


– Using AI to support global citizenship education and cross-cultural understanding for youth


– Addressing mental health impacts and providing support through AI, especially for vulnerable populations


Overall purpose/goal:


The discussion aimed to explore the role of AI in global citizenship education for young people, examining both opportunities and risks. It sought to highlight the need for child-centered AI design and policies, as well as greater inclusion of diverse global perspectives in AI development.


Tone:


The tone was largely informative and constructive, with speakers presenting research findings and personal experiences. There was an underlying sense of urgency about addressing challenges, but also optimism about AI’s potential benefits if developed responsibly. The tone became more impassioned when discussing the need to include marginalized communities and address mental health impacts.


Speakers

– Vicky Charisi: Workshop moderator


– Shigemi Satoshi: Researcher at Honda Research Institute


– Steven Vosloo: Digital Policy Specialist at UNICEF Office of Research and Foresight


– Athanasios Mitrou Georgiou: Student preparing for university studies, interested in digital technology and engineering


– Dominic Regester: Representative from Salzburg Global


– Amisa Rashid Ahmed: Counselling psychologist, mediator, advocate for community resilience, founder of Navisha Foundation


– Ariadni Gklotsou: Student with experience in different countries (Singapore, Australia, Greece)


Additional speakers:


– AUDIENCE: Various audience members who asked questions or made comments


Full session report

AI in Global Citizenship Education: Opportunities and Challenges for Youth


This discussion explored the role of artificial intelligence (AI) in global citizenship education for young people, examining both opportunities and challenges. The session featured diverse perspectives from researchers, industry professionals, and youth representatives, highlighting the complex interplay between AI technology and youth development.


Key Themes and Discussion Points


1. Child-Centred AI Design and Development


A central theme of the discussion was the need for AI systems to be designed with children’s rights and developmental needs in mind. Steven Vosloo from UNICEF emphasised the importance of including children’s voices in AI design and policymaking processes. He highlighted UNICEF’s work on AI and children, including the policy guidance it developed and the upcoming research project “Disrupting Harm.” Vosloo also presented statistics showing that 79% of teenagers are concerned about AI’s impact on job prospects.


Amisa Rashid Ahmed stressed that AI tools must prioritise children’s developmental needs, particularly in African contexts where current AI systems often exclude local languages and cultural perspectives. She emphasized the need for decolonization and Africanization of AI to ensure inclusivity and relevance.


Ariadni Gklotsou, a student representative, added that clear rules and boundaries are necessary for the safe use of AI by students. She highlighted the International Baccalaureate Organization’s academic integrity policy as an example of how educational institutions are addressing AI use, offering a balanced approach to regulation without completely restricting its use.


2. AI for Enhancing Global Citizenship Education


The discussion highlighted AI’s potential to enhance cross-cultural communication and understanding among youth. Ariadni Gklotsou, drawing from her experiences as a student in different countries, noted that AI can help teenagers connect with peers globally. Amisa Rashid Ahmed emphasised AI’s capacity to encourage global perspectives in children. Both speakers agreed that AI-driven social media platforms can help youth understand different cultures, fostering a sense of global citizenship.


Shigemi Satoshi from Honda Research Institute presented a project on cross-cultural communication between students in different countries, demonstrating practical applications of AI in promoting global understanding among young people.


3. Challenges and Risks of AI for Youth


While acknowledging AI’s potential benefits, speakers also raised significant concerns about its impact on youth. Steven Vosloo highlighted emerging issues such as AI-generated child sexual abuse material and the potential risks of AI relationships and interactions on children’s wellbeing. He cited cases in the US where families are suing tech companies over alleged harm caused by AI interactions, underscoring the need for robust protections.


Environmental impacts of AI systems were also mentioned as a concern requiring further consideration. Ariadni Gklotsou cautioned against overreliance on AI tools like ChatGPT, which could potentially hinder skill development in students. She also mentioned the Australian government’s decision to ban social media for children under 16, sparking a discussion on appropriate age restrictions for AI and social media use.


4. Inclusive and Ethical AI Development


A recurring theme was the need for more inclusive and ethical AI development. Amisa Rashid Ahmed emphasised the importance of centring African narratives in AI development, pushing for the inclusion of indigenous languages and knowledge within AI systems. She called for more research on AI in African contexts and stressed the need for transparency and accountability in AI governance.


The discussion highlighted the digital divide, with calls to make AI more accessible and relevant to children in developing countries and marginalised communities. This includes addressing the unique needs of children in areas lacking safe infrastructure and education, as raised by an audience member from Sudan.


5. AI Applications for Youth Wellbeing


The potential of AI to support youth mental health emerged as a key discussion point. Amisa Rashid Ahmed shared her work with Sudanese refugees, using AI for mental health support and language translation. Steven Vosloo mentioned the use of AI for emotional support in Japan, while an audience member from Myanmar raised the question of creating sustainable, community-driven mental health programmes using AI for young people in crisis areas.


Thanasis Mitrou Georgiou provided perspective on AI’s impact on education and career paths, contributing to the discussion on how AI is shaping future opportunities for youth.


Agreements and Consensus


There was broad agreement on the importance of involving children in AI development and policymaking processes. Speakers concurred that AI has significant potential to enhance global citizenship education and foster cross-cultural understanding among youth.


A moderate level of consensus emerged regarding the need for child-centred, culturally sensitive approaches to AI development and implementation in education and youth support. However, speakers differed in their specific approaches, with Vosloo focusing on a rights-based approach, Ahmed emphasising cultural and linguistic inclusion, and Gklotsou advocating for clear rules and boundaries.


Differences and Unresolved Issues


While disagreements were relatively minor, there were differing emphases on how to approach AI development for children. The discussion left several issues unresolved, including:


1. Ensuring equitable access to AI-driven educational tools for children in crisis areas or with limited infrastructure


2. Balancing the benefits of AI for learning against risks of overreliance and skill atrophy


3. Addressing the environmental impacts of increased AI system development and use


4. Determining appropriate age restrictions and safety measures for youth engagement with AI and social media


Conclusion and Future Directions


The discussion concluded with calls for greater inclusion of children’s voices in AI policy and development, ensuring that AI systems meet the diverse needs of youth across different contexts and cultures. Key takeaways included the need for AI systems designed with children’s rights and diverse contexts in mind, the potential of AI for global citizenship education, and the importance of addressing risks and challenges associated with AI use by youth.


Suggested action items included pushing for greater inclusion of children’s voices in AI design and policymaking processes, conducting more research on AI applications in African and developing world contexts, and creating sustainable, long-term programmes leveraging AI to support youth mental health and wellbeing.


The session underscored the potential of AI to enhance global citizenship education while highlighting the need for careful consideration of ethical and practical challenges. It emphasised the importance of continued dialogue and research to ensure that AI development aligns with the needs and rights of children worldwide.


Session Transcript

Video presented during the session: focused on accuracy, modularity, and efficiency at the individual level. At HRI, we’re weaving together new ideas and technologies to create a different kind of AI that can navigate human relationships. We’re working with interdisciplinary research teams to develop AI and robotic systems to benefit society as a whole. Our mission is to lead in the new domain of human-centered AI and to establish the AI tools necessary to navigate human relationships for society to flourish.


Shigemi Satoshi: As you saw in the video, we believe that, in a harmonious society, the next big step for AI systems is to evolve from being a tool to becoming a partner. To achieve that, we aim to develop the future and provide long-term support, including psychological care. An aging population and COVID-19 have contributed to a decline in human relationships in the community. To address this issue, we need a system that coexists with humans 24 hours a day, 365 days a year, as a reliable social enabler. In other words, the role of the enabler is to help people lead more fulfilling lives. Currently, Honda products are used only a few hours in a day. I want to change that, and I would like to pursue new technical research so that Honda products can coexist with people 24 hours a day. I would like to introduce the HAL robot, which I am researching as one way of realizing a harmonious hybrid society. HAL is an encouraging mediator. The target is to provide emotional and psychological support through AI technology to students and to facilitate group interaction and relationships. Its value is to analyze and understand social group dynamics for intergenerational and intercultural harmony, to support improving group diversity, and to avoid social conflict and division. It uses reinforcement learning to understand and create social dynamics that provide neutral understanding and team building. We want the HAL robot to be a partner in a harmonious hybrid society, 24 hours a day. To achieve this goal of the transformation from tool to partner, we need to do the following: connect with human beings in a natural way, harmonize individual and social needs, and foster proactive trust between humans and society. The first scenario is to promote group communication and interaction in schools in different countries. With our partners, we have started to experiment with a high school in Japan and one in Australia.
The second focus is on improving the health of children in hospital in Spain. Especially, I want to bring smiles to the children by enabling interaction with the HAL robot. Let me briefly explain how HAL is utilized at school in scenario 1. As a social robot, HAL has served as a facilitator to bridge the cultural gap between students from different countries, and it provides information and lifestyle advice that is unique to each country. For example, he highlights the differences in school lunches and school events such as sports day and cultural festivals. These activities help the children understand and respect diversity. Please show the video of this activity.


Video presented during the session: Over the next five years, HRI is focusing on research in AI that is designed to proactively nurture positive relationships and social cohesion. Our first phase is aimed at supporting inclusive practices and an acceptance of diversity through an embodied AI that acts as an encouraging mediator. Children and the adults they become are the future of a diverse and cohesive society. At HRI, we’re working with various partners and stakeholders to design an encouraging mediator that aims to support appropriate pro-social skills and ways of thinking that bridge cultural differences and support equity, diversity and inclusion through an exchange of ideas, discussion and shared experience with their peers in other countries. The encouraging mediator will adopt UNICEF’s policy guidance on AI for children as an interactive cross-cultural shared experience that creates an enabling environment for its participants. Diversity is about what makes groups different from one another. Community is created out of sharedness and not by sameness. Our encouraging mediator, equipped with human understanding, autonomous behaviour generation and creative learning modules, will facilitate and mediate interaction and communication with children from various institutions from all over the world to form a diverse and shared community through a series of activities that encourage self-expression, open discussion and an understanding of each other’s cultures. Thank you.


Shigemi Satoshi: Okay, next I will explain the development of social robotics for cross-cultural mediation. Before we developed and deployed the robotic system for cross-cultural mediation, we took various steps, not just to ensure the technology is meaningful and makes sense for children, but also towards responsible technology that safeguards children in the process. I will mention three of the most important steps. In the first step, around three years ago, we collaborated with UNICEF to promote the pilot study on AI for children. Specifically, we adopted UNICEF’s policy guidance in designing our robotic systems and the technologies they use. For example, following guidance number four, we built a strict protocol for handling the data and related matters when developing the systems. The kind of research we are working on right now raises many ethical questions. Hence, we need to work within some kind of framework to address these needs along the way. In step number two, we conducted various co-design sessions with partners, including educators, to learn from the children the kind of interaction and content they want the robot to provide. For example, we gave the children some tools to program the robot with the behavior and the kind of language they want to use, and we embedded this in the final systems. In this way, we ensured children’s participation in every technology we designed. For the next step, we gathered a multi-stakeholder team from various fields, encompassing social science, the humanities, and education, among others. This team provides input on the interaction design that is relevant to children’s well-being. We then used this input to develop the workflow for the cross-cultural system for mediation across diverse cultures among children through the use of the robotic systems.
Then I will show the demo video.


Video presented during the session: Thank you. I’m going to eat a donut. I’m going to eat a donut. Kids from Japan, could you tell us why you selected these avatars? I want to eat donuts, so I choose donuts. I would love to eat a donut. I want to eat a donut. I’m going to eat a donut. The band members’ picture. We play music in our school’s culture festival. I play the trombone. Oh, that’s wonderful. I play the trumpet. And I play the flute. That’s nice. The picture shared says school. Can you tell us more about it? This is me on the last day of elementary school. We did a dance party. Amazing. Your picture is so cool. Thank you. Rodriguez, you have shared a picture of your school. Can you tell us more about that? So, I took that book because I like the earth and I love the natural resources that it has. And I also want to learn more about the earth so I can save me from these natural disasters and try to work out more. Nice. Now, Tricus, thanks for sharing it with all of us. A-L-I-C-E. I saw you shared your favorite outfit. Did you wear that outfit a lot? Uh, yeah. In school, mostly. Yeah. I don’t think I could pull it off. Now, let’s practice the word university. I got you. I got you. What would you like to study in university? Aerospace engineering. Aerospace engineering looks so interesting. Nice path to follow. So, our next activity is learning how to write kanji. Molly! Oh, it’s a smiley face. Oh, that makes a lot of sense. Japanese children, any comments about this kanji? It’s so cute. Let’s learn words in each other’s languages. Can you say for us any sign that you know? Even in your language, it’s fine. University. I think my giant green eyes make me unique. That, and my amazing personality. I love you. I love you, too. I see you. Yeah, because I want to see people happy. I want to see people happy all over the world. And not sadness and all these bad things. I enjoyed meeting you guys. It was really cool. And I enjoyed this experience because it’s cool.
That concludes our episode of today. Excited to see you all next time. Bye for now. Sorry. Do it, do it, do it. OK, do it, do it. Do it, do it. We really want to meet you guys in real life. Yeah, it was amazing to meet you. Yeah, it was. Bye. Bye. Bye. Bye. See you next time. Bye. Bye. Bye. It was nice meeting you. Goodbye. Bye. I hope you have fun and learn something. See you soon. Bye. Bye. Thank you. Thank you.


Shigemi Satoshi: Sorry about that. Thank you for your kind attention. Thank you very much.


Vicky Charisi: Thank you, Shigemi-san, for the presentation. We understand that this is an ongoing project. As far as I know, this is not something that is on the market yet. But we see how a company, while developing their product, engages with current policy guidance for children, includes children in their design process, and so on, which is something that we appreciate. And, of course, I understand that this is something that will come later. The connection with global citizenship education in this project, and why we invited this work to be presented in this workshop, is that we understand these cross-cultural settings and cross-cultural interactions between children as quite important for global citizenship education. Often teachers work in their local environments, and we want to explore things that are in different settings, different cultures, different socio-economic statuses, and so on. So, thank you very much for this. And I think we have time for one question, probably, if there is any question from the audience. If you could introduce yourself first. No, we need to give you the microphone. Oh, there. And you can keep your headphones if you want. And if you want to introduce yourself first. No.


AUDIENCE: Hello.


Vicky Charisi: Yes.


AUDIENCE: I am from the small island nation of Samoa. I was wondering. I see that you guys are touching a lot of bases in some of the big nations. Is there any interest or any way that small island nations could be involved in this project? Thank you.


Shigemi Satoshi: Thank you very much for your comment. I think it’s better to expand to different countries to bridge the cultural differences. Thank you.


Vicky Charisi: Thank you. I hand over to Steven, I think. Steven, you can introduce yourself. Thank you. Can you hear me okay?


Steven Vosloo: Okay. Good morning, everyone. Thank you for joining us. I’m Steven Vosloo. I’m a Digital Policy Specialist at UNICEF in the Office of Research and Foresight. Thank you for the opportunity to be here. I will speak a bit later, but now I’m introducing Thanasis Mitrou, who is from the Netherlands. He has just finished secondary school and is preparing for university studies. He’s interested in studying digital technology and engineering. Thanasis uses AI in his daily activities as a student and is also interested in the ethical aspects. We look forward to hearing more.


Athanasios Mitrou Georgiou: Thank you very much. I really appreciate the invitation to this workshop, and I’m glad to explain briefly my thoughts about AI for young people. As all of us know, AI is changing the world in ways that are impossible to ignore. For young people, AI brings both exciting opportunities and big challenges. First I’m going to talk about how AI can shape our education choices and career paths by providing access to information, and secondly how social media keeps us connected with teenagers from all over the world. On the bright side, for those of us who have just finished school like me, AI opens up new areas of study and careers in fields such as data science, machine learning and AI ethics. It provides us tools that make our work faster and more efficient, whether we’re analyzing or solving problems or creating and exploring our own ideas. For example, AI even empowers young entrepreneurs to build their own businesses with fewer resources, making innovation more accessible than ever. But with all this potential comes responsibility. We have to keep learning and adapting. For example, now that I’m preparing for my university studies, I often use ChatGPT to help me code better and give me feedback on my coding, and that would be way more difficult without ChatGPT. However, AI raises important ethical questions, like how to prevent bias in our systems, protect privacy and manage its impact on jobs and society as a whole. It’s up to us to engage thoughtfully with these issues. And that means developing skills AI can’t replace, such as creativity, emotional intelligence and critical thinking. And by staying curious and resilient, we can turn AI into a tool that works for us and not against us. Social media, another huge part of our lives, also shapes the way we see the world, especially how we think about democracy and social issues. Platforms like Instagram, TikTok and X expose us to diverse perspectives.
and connect us to movements we’re interested in. They make it easier than ever to advocate for change and be part of important conversations. But they have their downsides too. Algorithms often trap us in echo chambers, reinforcing what we already believe and making it harder to see other points of view. And on top of that, misinformation spreads fast, and the pressure to react quickly can lead to shallow thinking. To make the most of social media, we have to be critical about what we consume and use it responsibly to amplify positive change, in combination with measures taken by governments and big tech to keep social media safe for all of us. AI and social media could do more to meet the needs of young people. Imagine if AI had features designed especially for us, like learning tools that adapt to our age and interests, or safety modes that are able to block harmful content while encouraging creativity and curiosity. These tools could teach us to evaluate information critically, keeping us safe and informed while we explore and grow. Finally, many of us are privileged to learn how to become global citizens. These opportunities should be given to all children and young people, no matter the place they live in. Global citizenship education in combination with AI could be very useful to prepare young people to shape better societies. AI is a powerful tool, and it’s up to us to decide how we use it. With curiosity, creativity, and a commitment to making a difference, we can ensure that AI shapes a brighter future for all of us. Thank you.


Steven Vosloo: Thank you very much, Thanasis. Okay, let me just get the clicker, please. Thank you. Okay, so, hello again. You can hear okay? Perfect. So, at UNICEF, we are obviously very interested in children’s development and children’s rights and how technology impacts those rights, either to develop and support them or to undermine them, and how we protect and empower children. This is an area of work that I lead, and a few years ago I led a project called AI and children, or AI for children, and we developed guidance, which I’ll touch on in a moment, on how AI can be developed in a way that upholds children’s rights, protects them and empowers them. So we started, well, we engaged young people around the world, and this was a workshop in Sao Paulo in Brazil, to really talk to them about, like Thanasis was saying, how do you use AI? What do you think about it? And this point came through really strongly: even though children, and by that I mean anyone under 18, children and youth are actually the biggest online group of any age group and they use technology probably more than anyone else, the technologies aren’t really designed with them in mind, and that needs to change. So we developed, there we go, this guidance, like I said, on how AI can be more protective and empowering for children, and there’s the link, and I really encourage you to use it. It’s in English, Spanish, French and Arabic, and there are resources for parents and caregivers and even teachers. Let me go to the next slide because, sorry, these are some of the key points that came out in the guidance, and you’ll recognize some of these if you are familiar with the AI world. Things like fairness and non-discrimination, or children’s data and privacy.
These are not new issues in the world of AI, but we really wanted to focus on what it means when we talk about a child’s data, which is different to an adult’s data, or how we provide transparency and explainability and accountability for children differently than for adults. And of course, as we know, children’s data is different, and children’s understanding, as their cognitive development goes on, is different to that of adults. So things like AI explainability, which is difficult even for adults, have to be much simpler for children, and we need much simpler ways to provide AI interactions and experiences that are at the level of children and their caregivers. So we worked closely with eight organizations around the world to pilot the guidance, including the Honda Research Institute and the Joint Research Centre, the JRC, at the European Commission, working with Vicky. So what you heard earlier was one of the projects that piloted the guidance, and we learned from that, and we appreciate that collaboration. But we also worked with companies and with governments. And all of it sits on top of children’s rights, which are basically to protection, to provision, and to participation. So we published that in 2021. We’re at the end of 2024. What has changed since then? I raise this for two reasons. One, we need to constantly be aware of a changing technological landscape and the social landscape around it. And secondly, we are thinking: if we had to write this guidance today, or if we had to release the guidance, what would we do differently? When you do something like this, as you know from ethical AI principles and ethical guidance, you try to make it as future-proof as possible, and in many ways the principles have not changed. Your data still needs to be protected. You still need your privacy. You still need to be included in the process.
But how does that change today, and what are some of the issues that we are thinking about, or should be thinking about? So I’m going to list a few today. I would love to hear from you what you think UNICEF should focus on. I don’t know what’s going forward, sorry. What you think UNICEF should focus on, or others should focus on, if you had to read a guidance, let’s say it comes out next year, 2025. So, just quickly: what’s happened since 2021? Generative AI, ChatGPT. We know about that. Huge investments by governments and companies. The minister yesterday gave a great opening speech about the global divides and those who have AI opportunity and power and those who don’t. Saudi Arabia itself, I was reading, may invest in a $100 billion AI center. AI advances from creating podcasts to homework help, medical diagnoses to climate modeling. So things are moving quickly, and we also see a real focus on governance: the safety summits that have happened in the UK, in Seoul in Korea, and in February in Paris, and a focus on responsible AI. So let me quickly also include some statistics, or some findings from a recent survey done in the US, at the top. This was with a thousand teenagers, 13 to 17. And this is interesting: “I fear I won’t have a job when I’m old enough to work”, age 17. When we consulted children around the world in 2020, none of them spoke about jobs except those in South Africa, where I’m from, where there’s higher youth unemployment. Those in the US, those in Sweden, those in Chile did not speak about jobs. Now we see it coming up in the US. The one on the right is also interesting: “I never know if a picture I’m looking at is AI or not.” And so the issue of trust is something that’s changing for all of us as we experience more AI media. Misinformation comes up: 59% of teens are concerned about this, and almost half of teens use AI tools several times a week or more. Now this is the US, so this is not the same for all countries.
But some of the data that we’re getting is that children even in global South countries, and I’ll talk about that in a moment, are also using AI systems, around 40% or so at least once a week. So it’s not just a rich country or developed world phenomenon. The bottom stat is interesting. It’s from a study done by Fossey last year with teenagers in the US, Germany, and Japan. They asked: what are the top two ways that you would use AI in the future? And half of teenagers in Japan said for emotional support. This is very interesting. Depending on what you think, you may have different views on that, but it’s a very interesting stat: this is how some teenagers will look to AI. So I’ll just quickly run through some of the issues where we think guidance is needed, and engagement is needed with young people, for how we shape AI for children. Skills has come up again and again. This is not new, we covered it in our policy guidance, but the world is changing in terms of what kind of skills you need today and in the future: life skills, skills for work. We don’t know what the workplace of the future looks like, so how do we better anticipate what those skills are and therefore change education systems today? How do we teach responsible use of AI? We can’t debate anymore whether children should use AI or not; it is happening. How do we teach responsible use and provide protections and empowerment as needed? And how do we use AI to support education? The second one: AI-generated child sexual abuse material. This was not on the radar at all in 2020, but it is on the rise, with deepfakes of non-consensual intimate images or videos being created and shared. The numbers are still quite small, but rising quickly.
And it really is a problem for the victims, but it is also a real problem for law enforcement as they try to identify real victims, who are now being mixed up with manipulated images. AI relationships is something that’s been coming up more and more. I’m not saying there’s anything wrong with AI relationships, but we are seeing news stories of AI relationships gone wrong, in a sense. There are two cases in the US now where families are suing tech companies, alleging that these AI interactions either suggested that the children harm themselves or caused them to do so. So it’s something to really watch in terms of what kind of protections we need. Environmental impacts of AI: this is again something that was not on our radar. When we wrote about this in 2020 it was really just to say that AI has an environmental impact, but that AI can also help combat climate change. Now we’re seeing the growth of data centers that consume a lot of energy to build and to maintain, the rare minerals that go into AI systems and servers, and the e-waste that gets produced. This is something we really need to watch in the future. And I raise this because UNICEF’s work on climate change and children really shows that climate change impacts children more than it does adults. Children also have a right to a clean, sustainable environment. And then lastly, this misinformation point, which again we did not look at just three years ago. It came up in the quote earlier, but the use of AI for misinformation or manipulated media that is meant to mislead or cause distrust is something we need to watch. I’ll just do two more quickly. The AI supply chain: again, this is not something that we have focused on in a big way, but it keeps coming up that we need to improve working conditions.
These are digital products, and like all products that children use, we need to look at the supply chain and the labor practices. There are stories of children potentially being used for data labeling and content moderation in very unsafe and unsupported conditions. That obviously has to change. Okay, sorry, let me stop there, but basically just to say thank you for this. I would love to hear from you about how we can create a space where AI is more child-centered. We have some data coming out in the next few months from a project we’re doing called Disrupting Harm, where we’ve asked children in 12 countries, not the usual US and UK, but in Morocco, in Colombia, in Mexico, how they use AI and what they’re worried about. We’re looking forward to sharing that with you. Thank you.


Vicky Charisi: Thank you, Stephen. Amazing. In fact, about the climate change issue you mentioned: I was at a conference last week where the Ban Ki-moon Foundation announced an initiative on climate change and global citizenship education. So I think there is quite a lot of activity on this topic, and I’m very much looking forward to seeing the next steps. But I think we also have time for one question for Stephen. Yes, please.


AUDIENCE: It’s not a question, just a comment. First of all, let me introduce myself: I’m Piu from Myanmar. My comment is that when we are talking about AI and education, especially for children, we should not leave out children from developing countries and vulnerable groups. So my comment would be that it is better to include them, to reach out to schools in developing countries, and to involve them in projects moving forward, for inclusion and diversity, and to think about the future of AI for the whole world.


Vicky Charisi: Thank you so much for raising this so clearly. Stephen, I don’t know if you want to comment on this at all, but I totally agree.


Steven Vosloo: Thank you for that comment, it’s really well appreciated. And you know that as we… oh no. I can hear Stephen. You can hear? Yeah. You can hear? Okay. No, as we know, the challenge with AI is that it’s concentrated in a few countries and a few companies, and we really need to get those opportunities to the developing world and the global South. For Africa, there was an IMF projection that by 2030 there will be 230 million jobs that will need digital skills, which I would think will include AI skills, the way the world is going. So this really is an issue of how do you skill up children in the global South, and also use AI and digital to improve education in really challenging situations. So thank you for that point, that’s really well taken.


Vicky Charisi: Thank you. This also gives me the opportunity to give the floor to Dominic, who is going to introduce our next speaker. Dominic, the floor is yours, and Amisa can probably turn on her camera as well.


Dominic Regester: Great, thank you, Vicky. Good morning, everybody. It’s an enormous pleasure to introduce Amisa. Amisa is a counselling psychologist, a mediator, and an advocate for community resilience, for mental health, and for peaceful social cohesion. She’s the founder and executive director of the youth- and women-led organization the Nivishe Foundation, which is dedicated to fostering community resilience through community-based mental health interventions, innovations, and approaches. We at Salzburg Global had the chance to work with Amisa last year on a project about civic and civil education, which was one of the papers that fed into the design of this session. So I’m very excited to welcome Amisa to the stage and to hear what she’s going to say. After Amisa has finished, there should be time for Q&A with both the people in the room and the online audience. Amisa, it’s lovely to see you again. Thank you very much for doing this, and over to you.


Vicky Charisi: Yeah, just to mention that Amisa is based in Kenya, right? So Amisa, the floor is yours.


Amisa Rashid Ahmed: Okay. Thank you, everyone. I hope you can hear me.


Vicky Charisi: Yes, very clearly.


Amisa Rashid Ahmed: Okay. Yeah, so my name is Amisa, and I’ll start with a story about how I got involved with AI. I served as a board member in an organization that was handling a case involving a big international corporation, I don’t want to say the name, which was being sued in Kenya. The content moderators were suing this company because of exploitation. They worked for this company, but there were no safeguarding protocols, no policies, to protect these young workers. They were content moderators and content annotators, and some of them were building content for AI tools. So it was easy for them to be laid off, and some of them had developed mental health conditions, because as content moderators they were exposed to very brutal images as they worked. These young people suing this big company were fighting a losing battle at that moment, because it is a huge corporation and these are just young Africans, so the exploitation was there. My work was to support their mental health, but also to guide them on how we can come up with better policies in regards to AI and the emerging trends that nobody is looking into. That is really what got me interested in AI, and especially in children. The case with these young people is still continuing, and it is disadvantageous for them because a lot of governments are not supporting them. Of course, money has been poured in, but these are some of the challenges on the continent, right? So when we’re talking about AI within the African continent, one thing that stands out is inequality.
A good example: at the Nivishe Foundation, the organization where we work, we’ve been using generative AI and chatbots to provide therapy, so that young people can access mental health resources and therapy support, or just an online chatbot where they can have a conversation with a virtual therapist. It has been good, but one of the challenges we are facing is that these tools reflect biases in their training data, and they exclude African languages, contexts and perspectives. If you generate an image of a young professional working in a multinational corporation, it won’t bring up the image of somebody like me; it will bring up the image of somebody else, right? That means the people who built the algorithms and trained the data have not considered that there are other demographics who use AI and should be included. And from the last speaker’s point on how AI does not involve children: apart from not involving children, it does not involve individuals from marginalized and underrepresented communities, because if I’m not represented, how will my context and languages be known? And how do we make sure it reaches everybody, that there is accessibility to all of these things? So that is one aspect we are facing when we talk about AI. But now look at it from the lens of children. If there are no policies to support these young people who are suing this big multinational corporation, in regards to their mental health and in regards to protocols and policies, then who safeguards the children using AI, in whichever capacity? And who looks into the safeguarding policies that exist, and makes sure there is a mental health clause that takes care of both the people who are generating AI and the consumers, so that a multinational tech company coming up with AI knows: in case this happens, these are the repercussions in regards to people’s health, safeguarding and wellbeing? The other issue is data privacy and security. If there is no data privacy, if the safeguarding policies do not take care of data policies and security, how do we make sure that AI tools used in education, because now AI is the in thing and we are happy that everybody is using it, are actually used well and not exploited for data for profit? And as we know, we may not be able to get back the data that goes into AI. So that is how we are viewing AI on the continent. But what are the opportunities for citizenship education in Africa? Number one: localize AI and create educational resources in African languages and contexts. Even in Kenya, with around 48 ethnic languages, and if you go to other neighboring countries there are 100, 200 ethnic languages, we may not cover them all, but there are social norms that people can relate to. So when I’m using generative AI, how can it give me information and resources catered to my surroundings, and not give me a Euro-American example that I am not able to relate to? That will actually promote inclusivity. The other opportunity around AI is equitable access to quality education. AI-driven tools can address this. At Nivishe we are using AI, and we know other colleagues who are using different aspects of AI, amazing innovations, just to make sure that we are creating equitable access to education and other opportunities.
One thing that I appreciate about, let’s say, generative AI, and as able-bodied people we may not see this, is that people with disabilities actually use AI a lot to ease their work. Whether it is mental, for example people with ADHD who have issues starting tasks, AI is a great tool for them to use. Or whether it is speech-to-speech, for a person who is blind or has visual impairments and is not able to use technology by sight, there is voice-to-voice or speech-to-speech that they can use. As much as we may say, okay, it is not working, the people with disabilities that we’ve worked with can attest that AI has really worked and has really helped their work, in whatever capacity they are doing it. Also, AI can be used to encourage a global perspective, exposing children to and fostering global citizenship skills and an understanding of different contexts. As much as we are saying that AI should be contextualized, we are not saying that we should not also learn more about other people, cultures and information, which can encourage a global perspective and all those kinds of things. And to finalise my point in regards to ethical AI and governance: we really need child-centred design around AI. AI tools must prioritise, and that is my call to action, children’s developmental needs, particularly within the continent, with its varied social dynamics. And that goes back to research. How much is actually invested in research around AI within the continent? And when I’m talking about the continent, Africa is not homogeneous. Kenya has its own cultural, socio-economic and political issues. Tanzania, the same. If you go to southern Africa, to western Africa, the same.
So how can we invest in the continent? Not just saying “investing in Africa,” but really being intentional about going locally and doing research around AI, so that we can have these child-centred designs and address children’s needs, despite the varied social and cultural dynamics. That is a call to action for most of us, because we know the statistics: Africa has the largest population of youth and young children. If we are not actually working with them to make the future better for them, we cannot say that we are creating a better society for them. How can we also have transparency and accountability when it comes to governance, with clear guidelines on how AI tools function and their impact on children’s learning? We need transparency. There is not a lot of transparency, because it is for profit, and a lot of people profit from the data around AI. And how can our governments, this is also a call for us, monitor AI’s role in education? You cannot say that AI is just there; as governments, not addressing AI and not prioritizing it means that we are leaving children and young people exploited and exposed to harm around AI. And how can we make sure, finally, that we have policy, whether it is public-private partnerships, regulatory frameworks, or funding and investment in AI research on local challenges, coming up with frameworks like data protection acts? In Kenya we have one, an act from 2019, but how do we keep iterating and making it better? Because every day, when you’re talking about AI, it’s changing. So how can we make sure that the data protection acts actually keep up with the times? And finally, I personally, if you go to my LinkedIn profile you’ll see this, really work on the decolonization and Africanization of most of these things around mental health.
And AI is one of the intersectionalities that we keep talking about. One of the things that I’m keen on is how we center African narratives in AI development, be it pushing for languages and indigenous knowledge within AI systems, so that we can have access to this. And how do we make sure that children from marginalized communities, and I come from one, have access? Currently we work with refugees from Sudan, and generative AI has been amazing as a tool for engaging with them. But how do we make sure that we highlight them and make AI work for them? And finally, how do we include mental health and integrate it into whatever AI education we do, whether it is children’s emotional and psychological wellbeing, or having AI chatbots offering mental health support to students? How can we do that? So, that being said, I am happy to take any questions, but my call to action remains the same: how do we also decolonize AI? Because if, let’s say, the languages that are used and all these things are not from us, that means our culture, and that of indigenous and marginalized communities, will never be seen. So how do we make sure that as you are building AI, it is inclusive to the core, not just saying we are inclusive, but intentional inclusivity that can be seen? Thank you, and back to you, Dominic or Vicky.


Dominic Regester: Thank you, Amisa, that was fantastic. We have time for a couple of questions, I think. So either questions from the online audience, feel free to add them in the chat, or from those of you in the room with Vicky.


Vicky Charisi: Yeah, if there is any question or comment from the audience for Amisa. Yeah, Stephen.


Steven Vosloo: So that was really, really interesting. I’m curious, just very briefly, about what you do with the refugees from Sudan and how they use AI, please. Thank you.


Amisa Rashid Ahmed: Okay, so what we do at Nivishe is run a fellowship, the Nivishe Mental Health Fellowship, where we localize, contextualize, and use cultural sensitivity to educate young people about mental health. It’s in its fourth cohort, and in our third cohort we had Sudanese youth who said that they wanted to be involved, because a lot of humanitarian support is going to Sudan, but nobody is talking about their mental wellbeing and the trauma inflicted by the war. That is how we came to set up the fellowship specifically for them. But since most of them are displaced across different countries, we have been able to use AI. The fellowship already had its own curriculum, geared towards Kenyan youth specifically, but because of AI we’ve been able to translate the conversations and the curriculum so that they suit the Sudanese languages. Of course, we have a Sudanese youth advisory board to guide us on that. We also have a chatbot that we are training to speak, or understand, the Sudanese language and context, so that wherever a young person is, if they are not able to access mental health support or a therapist, they can use a chatbot that is culturally sensitive and uses their language. So we are working on voice-to-voice and speech-to-speech, and also feeding it with Sudanese Arabic, so that they can access it while they are doing the fellowship and learning more about mental health. It is also a resource or tool because, since we are not able to offer one-on-one mental health support, as most of them are displaced widely, the majority within the continent, they can use the existing tool to make sure they are accessing the necessary mental health support that they need.


Vicky Charisi: Thank you. Thank you, Amisa. It was great to have you with us today. And I apologise to the audience, the connection sometimes was not very good, but I didn’t want to interrupt her because I think all of us understood, and it was a great contribution. Thank you very much, Amisa. We are going to move on to our last session. This is an informal conversation that I had with another student. For this workshop we thought that inclusivity, not only in terms of geographical or cultural inclusivity, but also inclusivity of youth, is really important. That’s why we had Thanassis with us, and we also have one more girl. She was not able to be with us online either, because she has school obligations, so we videotaped the conversation and we are going to watch the recording now. Can we have the video on the screen? Then we will have some 10 minutes for a discussion among us, and to hear also about your work. Yeah. Yeah, you can play. Hi, thank you for being with us today. As we have discussed, this workshop aims to give us an overview of the role of AI in young people’s global citizenship education, and we would like to hear your opinion about it based on your experiences. In most countries, teenagers grow up in societies that depend more and more on AI-based applications. Can you describe what kind of opportunities and risks you see in the use of AI by young people, and tell us your opinion about possible future directions?


Ariadni Gklotsou: Hi Vicky, thank you for the invitation to be part of this workshop at the IGF 2024. I’m glad to share some of my thoughts about AI and global citizenship education. First, my experience as a student, first in Singapore, then Australia, and now in Greece, has shown me how important it is for young people to understand each other when they grow up in different places. I believe that a global citizen is a person who takes action to make their local communities and global societies a better place for all. For example, people decide on solar panels to support our environment and create alliances and treaties for peaceful collaboration. More recently, developers have created AI technologies that can help teenagers connect with other teenagers all over the world. AI is increasingly becoming a component of our everyday activities, even if we don’t realize it. It ranges from the creation of the Instagram feed you may scroll and extends to tools that help you do your project for school. We are all aware of what a life-saving tool ChatGPT is when you realize your literature essay is due the next morning. AI benefits us by creating specialized and specific recommendations, such as music or movies, and it acts as a more direct search engine. Moreover, the use of AI in social media acts as a tool for young people to stay connected while enriching their understanding of different cultures all around the world. However, AI comes with several ethical challenges when employed to carry out tasks that help us learn and are critical for us while we are developing other skills. An example of this is when ChatGPT is used as a generator rather than a feedback tool.
When we give it the authority to change our words or structure our sentences, this is where AI becomes more complex. Sometimes it even makes us believe that the ideas generated by ChatGPT are our own, but it is important for all students to understand that in different contexts the same tool might have a positive or negative impact for us.


Vicky Charisi: Indeed, that’s so true. Thanks for sharing your thoughts about the current situation. Now can you tell us if you have any suggestions for future directions?


Ariadni Gklotsou: I think that we all need to be safe in an online environment, as we are in our physical environment, and for this we need rules. I would like to talk about the International Baccalaureate Organization, or the IBO, which I am part of. The IB, an international high school diploma program, has issued an academic integrity policy document that covers the use of AI. I agree with it, since they have made it clear that plagiarism is a serious offence that could result in a student being expelled from the IB or, in the worst cases, an institution losing its license to teach the IB. The document states that AI cannot be used as a writing tool, but only to help us improve text with grammatical errors. In this way the IB creates boundaries on the use of AI without restricting it completely. For these reasons I believe it is important to set certain rules, even legislation, to control the use of AI, but not to the extent of preventing it completely. I’m very curious, for example, how the new decision of the Australian government to ban social media for children under 16 will be applied. AI is a development of modern society that can no longer be ignored, and we hope that this workshop at the IGF will help make decisions about AI and its use.


Vicky Charisi: Thank you so much, Ariadne, that is very helpful for all of us. We are very glad to hear your thoughts about the use of AI by young people, and we hope that more young people like you will advise us on how to create a better online environment for all of you. Thank you very much. Right. So, we saw in this some worries, especially, I mean, Ariadne mentioned the life-saving ChatGPT, right? So we see the attitudes that children have. Probably we can stop the video again. I think we hear the video again. This is what I hear in my… Is it only me? No. Okay. Yeah. But I would like now to open the floor to the audience. If you have any comments, or if you want to share something from your work that is relevant to this workshop, we would love to hear it. Yeah. Sure. We need the microphone. Thank you. Can you hear me? Yes. Okay. If you can put the microphone closer. Yeah.


AUDIENCE: Okay. So, Amisa has answered one of my questions, but I will add to it. I’m from Sudan, and children there often lack access to safe infrastructure and education. So how can AI policies and tools be adapted, in more effective ways, to address the unique needs of children in these areas, ensuring they are not left behind? And how can we ensure that they benefit from AI-driven global citizenship education? Do you want the… Yeah.


Steven Vosloo: Thank you. That’s a great question. What we see overall, and what I said in the beginning, just to say it again because it keeps coming up, is that children use AI, but they’re not involved in how AI is designed or how the policies are made. So it was really great to hear very honest reflections from Ariadne on how AI is used, but also on how guidance is useful: not banning it, but setting some of those boundaries. So the simple answer to the question is that policymakers and AI companies should involve children more in the process, because if children’s voices are heard, then AI systems will speak to their unique needs, their developmental stages, their different contexts. As was said earlier, the child in Kenya, in Nairobi or in the rural areas, has a very different perspective to, let’s say, the child in Tokyo or in Sydney. These are all valid points, but at the moment there is often a one-size-fits-all approach to AI. So we need to all work harder and really push this point of including children in the process. Hopefully we can count on the young people among us to help us on this journey. Thank you.


Vicky Charisi: We have one more question. You have a microphone?


AUDIENCE: A comment, actually, not a question. Yeah. I’m originally from Myanmar. We are also facing a crisis at this stage, and there are lots of young people and children suffering a lot because of the civil wars and the political turmoil, and also at the refugee camps there are lots of things happening. The challenge is that we cannot communicate in person directly with those who are inside the refugee camps, because there are restrictions on communicating with outsiders. So when Amisa talked about the mental health fellowship: there is something similar that we are trying to do right now for young people, to build their mental resilience. The program is supposed to be a community-driven program, because we cannot go in person to train them to be aware of their mental resilience; we cannot say we are building their mental wellbeing in the crisis area. So what we are trying to do is engage with them virtually through the program, and provide temporary relief practices that they can use in their daily life for their emotional wellbeing and for coping with their anxiety in some way. But on the other hand, our concern is the sustainability of the program, because it is a totally voluntary-based program. So we have to look for people from the relevant community to get involved in the program and contribute back to their community. I believe that in this Internet governance community, most of the community members are also voluntary-based; we are modeling that concept and trying to practice it in a localized program. That is my comment about mental resilience and young people: we also need to consider the sustainability of such programs when there are no grants or funding.
At least we can find the passionate young people who would like to contribute back to their community.


Vicky Charisi: Yeah, that’s an excellent point. Thank you so much for raising it. I know we have just a few minutes left. For my part, I think what you said about having funding, and Stephen mentioned beforehand that we need more investment, these kinds of programs should be sustainable and long-term, especially if you engage with mental health issues. It cannot be for three months, for example; you need to have a long-term plan. And I hope, especially in this community, from our side, of course, we are going to report back to the IGF and eventually to the UN, and with the means that we have, we are going to raise this issue. Thank you very much for commenting on this. I don’t know if there are other comments on this topic. No. Okay, Dominic online, do you see any questions from online participants, or are we good to close? Dominic, we can’t hear you, probably you’re muted.


Dominic Regester: Not muted at this end.


Vicky Charisi: One moment.


Dominic Regester: There are no questions.


Vicky Charisi: Yeah, yeah, yeah. Okay. We don’t have questions. Okay. So, I would like to thank you all for being here today with us. And if you want to keep in contact, you have our names and contact emails, please keep in touch. Thank you very much. Thank you. Thank you.


Steven Vosloo

Speech speed: 154 words per minute
Speech length: 2349 words
Speech time: 909 seconds

AI systems need to be designed with children’s rights and development in mind

Explanation

Steven Vosloo emphasizes the importance of considering children’s rights and developmental needs when designing AI systems. This approach ensures that AI technologies are protective and empowering for children.


Evidence

UNICEF developed guidance on how AI can be developed in a way that upholds children’s rights, protects and empowers them.


Major Discussion Point

AI and Children’s Rights/Development


Differed with

Amisa Rashid Ahmed


Differed on

Approach to AI development for children


AI-generated child sexual abuse material is a growing concern

Explanation

Vosloo highlights the increasing problem of AI-generated child sexual abuse material. This issue was not on the radar in 2020 but has become a significant concern, posing challenges for law enforcement and victim identification.


Evidence

The number of AI-generated child sexual abuse material cases is still quite small but rising quickly.


Major Discussion Point

Challenges and Risks of AI for Youth


AI relationships and interactions may pose risks to children’s wellbeing

Explanation

Vosloo points out the potential risks associated with AI relationships and interactions for children. He mentions cases where families are suing tech companies over alleged harm caused by AI interactions.


Evidence

Two cases in the US where families are suing tech companies, alleging that AI interactions caused children to harm themselves.


Major Discussion Point

Challenges and Risks of AI for Youth


Environmental impacts of AI systems need to be considered

Explanation

Vosloo raises concerns about the environmental impact of AI systems, including energy consumption by data centers and e-waste production. He emphasizes that climate change impacts children more than adults, making this issue particularly relevant.


Evidence

Growth of data centers consuming a lot of energy, rare minerals used in AI systems, and e-waste production.


Major Discussion Point

Challenges and Risks of AI for Youth


Children’s voices should be included in AI design and policymaking

Explanation

Vosloo argues for the inclusion of children’s voices in AI design and policymaking processes. He suggests that this inclusion would lead to AI systems that better address children’s unique needs and contexts.


Evidence

Current AI systems often have a one-size-fits-all approach that doesn’t account for different perspectives of children in various contexts.


Major Discussion Point

Inclusive and Ethical AI Development


Agreed with

Amisa Rashid Ahmed


Ariadni Gklotsou


Agreed on

Importance of including children in AI development and policymaking


Amisa Rashid Ahmed

Speech speed: 146 words per minute
Speech length: 2223 words
Speech time: 909 seconds

Current AI tools often exclude African languages and contexts

Explanation

Amisa Rashid Ahmed points out that many AI tools do not include African languages or contexts in their training data. This exclusion leads to biases and lack of representation for African users.


Evidence

Example of asking an AI tool to generate an image of a young professional working in a multinational corporate environment and not receiving an image of someone like her.


Major Discussion Point

Inclusive and Ethical AI Development


Differed with

Steven Vosloo


Differed on

Approach to AI development for children


AI can provide equitable access to quality education

Explanation

Ahmed highlights the potential of AI-driven tools to address educational inequalities. She suggests that AI can be used to create more equitable access to quality education resources.


Evidence

Mention of using AI at Nivishe Foundation to create equitable access to education and other opportunities.


Major Discussion Point

AI for Global Citizenship Education


AI tools must prioritize children’s developmental needs

Explanation

Ahmed emphasizes the importance of prioritizing children’s developmental needs in AI tool design. She calls for child-centered design in AI, particularly considering the varied social dynamics in different contexts.


Major Discussion Point

AI and Children’s Rights/Development


Agreed with

Steven Vosloo


Ariadni Gklotsou


Agreed on

Importance of including children in AI development and policymaking


AI can encourage global perspectives in children

Explanation

Ahmed suggests that AI can be used to foster global citizenship skills and understanding of different contexts. She sees AI as a tool for exposing children to diverse perspectives and cultures.


Major Discussion Point

AI for Global Citizenship Education


Agreed with

Ariadni Gklotsou


Agreed on

AI can enhance global citizenship education


More research on AI in African contexts is needed

Explanation

Ahmed calls for increased investment in research on AI within the African continent. She emphasizes the need for localized research to address specific cultural, socio-economic, and political issues in different African countries.


Evidence

Mention of Africa’s diverse contexts and the need for intentional, localized research.


Major Discussion Point

Inclusive and Ethical AI Development


Transparency and accountability in AI governance is crucial

Explanation

Ahmed stresses the importance of transparency and accountability in AI governance. She calls for clear guidelines on how AI tools function and their impact on children’s learning.


Major Discussion Point

Inclusive and Ethical AI Development


Localization and contextualization of AI is important

Explanation

Ahmed emphasizes the need to localize and contextualize AI for different cultural and linguistic contexts. She argues for the inclusion of African narratives, languages, and indigenous knowledge in AI development.


Evidence

Example of using AI to translate conversations and curriculum for Sudanese refugees.


Major Discussion Point

Inclusive and Ethical AI Development


AI chatbots can provide mental health support for youth

Explanation

Ahmed discusses the use of AI chatbots to provide mental health support for youth. She highlights how these tools can be culturally sensitive and language-appropriate.


Evidence

Example of developing a chatbot that understands Sudanese language and context for mental health support.


Major Discussion Point

AI Applications for Youth Wellbeing


AI tools can assist refugees and displaced youth

Explanation

Ahmed describes how AI tools can be used to support refugees and displaced youth. She highlights the use of AI for translation and cultural adaptation of educational materials.


Evidence

Example of using AI to translate and adapt mental health fellowship curriculum for Sudanese refugees.


Major Discussion Point

AI Applications for Youth Wellbeing


Ariadni Gklotsou

Speech speed: 160 words per minute
Speech length: 579 words
Speech time: 216 seconds

AI can help teenagers connect with peers globally

Explanation

Ariadni Gklotsou highlights how AI technologies can facilitate connections between teenagers from different parts of the world. This global connection can enhance understanding between young people from diverse backgrounds.


Evidence

Personal experience as a student in Singapore, Australia, and Greece, emphasizing the importance of understanding each other when growing up in different places.


Major Discussion Point

AI for Global Citizenship Education


Agreed with

Amisa Rashid Ahmed


Agreed on

AI can enhance global citizenship education


Social media powered by AI helps youth understand different cultures

Explanation

Gklotsou points out that AI-powered social media serves as a tool for young people to stay connected and enrich their understanding of different cultures worldwide. This exposure to diverse perspectives contributes to global citizenship education.


Major Discussion Point

AI for Global Citizenship Education


Agreed with

Amisa Rashid Ahmed


Agreed on

AI can enhance global citizenship education


Overreliance on AI tools like ChatGPT can hinder skill development

Explanation

Gklotsou warns about the potential negative impacts of overrelying on AI tools like ChatGPT for academic tasks. She emphasizes the importance of using AI as a feedback tool rather than a generator to avoid hindering skill development.


Evidence

Example of using ChatGPT as a generator rather than a feedback tool for writing essays.


Major Discussion Point

Challenges and Risks of AI for Youth


Rules and boundaries are needed for safe AI use by students

Explanation

Gklotsou advocates for the establishment of rules and boundaries for AI use in education. She supports policies that set clear guidelines on AI use without completely restricting it.


Evidence

Example of the International Baccalaureate Organization’s academic integrity policy that includes guidelines on AI use.


Major Discussion Point

AI and Children’s Rights/Development


Agreed with

Steven Vosloo


Amisa Rashid Ahmed


Agreed on

Importance of including children in AI development and policymaking


AUDIENCE

Speech speed: 126 words per minute
Speech length: 599 words
Speech time: 284 seconds

AI can help build community resilience programs

Explanation

An audience member discusses the potential of AI to support community-driven programs for building mental resilience, particularly in crisis areas. They highlight the use of virtual engagement and temporary relief practices facilitated by AI.


Evidence

Example of a program attempting to engage with young people in refugee camps virtually to provide mental health support and coping strategies.


Major Discussion Point

AI Applications for Youth Wellbeing


Agreements

Agreement Points

Importance of including children in AI development and policymaking

speakers

Steven Vosloo


Amisa Rashid Ahmed


Ariadni Gklotsou


arguments

Children’s voices should be included in AI design and policymaking


AI tools must prioritize children’s developmental needs


Rules and boundaries are needed for safe AI use by students


summary

All speakers emphasize the need to involve children in the development of AI systems and policies to ensure their needs and rights are considered.


AI can enhance global citizenship education

speakers

Amisa Rashid Ahmed


Ariadni Gklotsou


arguments

AI can encourage global perspectives in children


AI can help teenagers connect with peers globally


Social media powered by AI helps youth understand different cultures


summary

Both speakers highlight the potential of AI to foster global understanding and connections among young people from different cultures.


Similar Viewpoints

Both speakers emphasize the need for more research and consideration of AI’s impacts in specific contexts, particularly in developing regions.

speakers

Steven Vosloo


Amisa Rashid Ahmed


arguments

Environmental impacts of AI systems need to be considered


More research on AI in African contexts is needed


Both speakers advocate for adapting AI to specific contexts and establishing clear guidelines for its use, particularly in educational settings.

speakers

Amisa Rashid Ahmed


Ariadni Gklotsou


arguments

Localization and contextualization of AI is important


Rules and boundaries are needed for safe AI use by students


Unexpected Consensus

AI’s role in mental health support for youth

speakers

Amisa Rashid Ahmed


AUDIENCE


arguments

AI chatbots can provide mental health support for youth


AI can help build community resilience programs


explanation

There was an unexpected agreement on the potential of AI to provide mental health support, particularly in crisis situations and for displaced youth. This consensus highlights an emerging application of AI in addressing global challenges.


Overall Assessment

Summary

The main areas of agreement include the importance of involving children in AI development, the potential of AI for global citizenship education, the need for contextualization and localization of AI, and the emerging role of AI in supporting youth mental health.


Consensus level

There is a moderate level of consensus among the speakers on key issues. This consensus suggests a growing recognition of the importance of child-centered, culturally sensitive approaches to AI development and implementation in education and youth support. However, there are still areas where more research and discussion are needed, particularly regarding the specific implementation strategies and potential risks of AI use by youth.


Differences

Different Viewpoints

Approach to AI development for children

speakers

Steven Vosloo


Amisa Rashid Ahmed


arguments

AI systems need to be designed with children’s rights and development in mind


Current AI tools often exclude African languages and contexts


summary

While both speakers emphasize the importance of considering children in AI development, Vosloo focuses on a rights-based approach, while Ahmed highlights the need for cultural and linguistic inclusion, particularly for African contexts.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the specific approaches to developing AI for children and youth, with differences in emphasis on rights-based approaches, cultural inclusion, and regulatory frameworks.


difference_level

The level of disagreement among the speakers is relatively low. Their perspectives are largely complementary, focusing on different aspects of the same overarching goal of creating safe, inclusive, and beneficial AI systems for children and youth. This low level of disagreement suggests a general consensus on the importance of child-centered AI development, which could facilitate more comprehensive and inclusive policies and practices in this area.


Partial Agreements

All speakers agree on the need for child-centered AI development and policies, but they differ in their specific approaches. Vosloo emphasizes including children’s voices in the process, Ahmed focuses on prioritizing developmental needs, and Gklotsou advocates for clear rules and boundaries.

speakers

Steven Vosloo


Amisa Rashid Ahmed


Ariadni Gklotsou


arguments

Children’s voices should be included in AI design and policymaking


AI tools must prioritize children’s developmental needs


Rules and boundaries are needed for safe AI use by students



Takeaways

Key Takeaways

AI systems need to be designed with children’s rights, development, and diverse contexts in mind


AI can provide opportunities for global citizenship education and connecting youth across cultures


There are significant risks and challenges associated with AI use by youth, including privacy/safety concerns and potential skill development issues


More inclusive and ethical AI development is needed, with increased input from children and underrepresented communities


AI has potential applications for supporting youth wellbeing and mental health, particularly for vulnerable populations


Resolutions and Action Items

Push for greater inclusion of children’s voices in AI design and policymaking processes


Conduct more research on AI applications and impacts in African and developing world contexts


Develop clearer guidelines and boundaries for responsible AI use by students


Create sustainable, long-term programs leveraging AI to support youth mental health and wellbeing


Unresolved Issues

How to ensure equitable access to AI-driven educational tools for children in crisis areas or with limited infrastructure


Balancing the benefits of AI for learning against risks of overreliance and skill atrophy


Addressing the environmental impacts of increased AI system development and use


Determining appropriate age restrictions and safety measures for youth engagement with AI and social media


Suggested Compromises

Allow limited, guided use of AI tools like ChatGPT for students while maintaining focus on developing core skills


Implement AI systems with culturally-relevant content and languages while still promoting global perspectives


Balance data collection for AI improvement with strong privacy protections for children’s information


Thought Provoking Comments

AI relationships is something that’s interesting and it’s been coming up more and more and I’m not saying there’s anything wrong with AI relationships but we are seeing news stories of AI relationships gone wrong in a sense and there are two cases in the US now where families are suing tech companies who they’re alleging that these AI interactions caused the children either suggested the children do harm or caused children to do harm themselves to themselves and so it’s something to to really watch in terms of of what kind of protections we need

speaker

Steven Vosloo


reason

This comment introduces a complex and emerging issue around AI and children’s wellbeing that had not been discussed previously. It highlights potential risks and legal implications that need to be considered.


impact

This comment shifted the discussion to consider more serious potential harms of AI beyond just educational uses. It prompted thinking about the need for protections and safeguards for children interacting with AI.


How do we center African narratives in AI development? Be it pushing for languages, indigenous knowledge within the AI system, so that we can be able to have access to this. And how do we make sure marginalized children from marginalized communities, I come from one, ensure that they have access.

speaker

Amisa Rashid Ahmed


reason

This comment highlights the importance of inclusivity and representation in AI development, especially for marginalized communities. It challenges the current Western-centric approach to AI.


impact

This comment deepened the discussion by bringing in perspectives from the Global South and emphasizing the need for cultural sensitivity and inclusivity in AI development. It prompted thinking about how to make AI more accessible and relevant for diverse communities.


I think that we all need to be safe in an online environment as it happens in our physical environment. For this we need rules. I would like to talk about the International Baccalaureate Organization, or the IBO, which I am part of. The IB, an international high school diploma program, has issued an academic integrity policy document that includes the use of AI.

speaker

Ariadni Gklotsou


reason

This comment from a student provides a concrete example of how educational institutions are addressing AI use. It offers a balanced perspective on regulating AI without completely restricting it.


impact

This comment brought the discussion back to practical implementations of AI policies in education. It provided a real-world example of how AI is being addressed in academic settings, adding depth to the conversation about AI governance.


Overall Assessment

These key comments shaped the discussion by broadening its scope from purely educational applications of AI to wider societal impacts, ethical considerations, and governance challenges. They introduced perspectives from diverse stakeholders – policymakers, researchers from the Global South, and students – which enriched the conversation and highlighted the complexity of integrating AI into global citizenship education. The comments prompted deeper reflection on inclusivity, safety, and the need for balanced regulation in AI development and use.


Follow-up Questions

How can small island nations be involved in AI projects for cross-cultural education?

speaker

Audience member from Samoa


explanation

This highlights the need for inclusivity in AI development and implementation, especially for underrepresented regions.


How can AI policies and tools be adapted to address the unique needs of children in areas lacking safe infrastructure and education?

speaker

Audience member from Sudan


explanation

This emphasizes the importance of tailoring AI solutions to different contexts, especially in challenging environments.


How can we ensure children from conflict zones benefit from AI-driven global citizenship education?

speaker

Audience member from Sudan


explanation

This highlights the need to consider access and applicability of AI education tools in diverse and challenging situations.


How can we create sustainable, community-driven mental health programs using AI for young people in crisis areas?

speaker

Audience member from Myanmar


explanation

This addresses the need for long-term, locally-relevant AI solutions for mental health support in challenging environments.


How can we better include children and marginalized communities in AI development and policy-making?

speaker

Steven Vosloo


explanation

This emphasizes the importance of diverse perspectives in shaping AI systems and policies to meet varied needs.


How can we develop child-centered AI design, especially within diverse African contexts?

speaker

Amisa Rashid Ahmed


explanation

This highlights the need for culturally sensitive and developmentally appropriate AI tools for children.


How can we ensure transparency and accountability in AI governance, especially regarding its impact on children’s learning?

speaker

Amisa Rashid Ahmed


explanation

This addresses the need for clear guidelines and monitoring of AI’s role in education.


How can we integrate mental health considerations into AI education tools?

speaker

Amisa Rashid Ahmed


explanation

This emphasizes the importance of considering emotional and psychological well-being in AI-driven educational tools.


How can we decolonize AI and center African narratives in AI development?

speaker

Amisa Rashid Ahmed


explanation

This highlights the need for diverse cultural representation and indigenous knowledge in AI systems.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.