IGF 2024 Global Youth Summit
Session at a Glance
Summary
This discussion focused on the impact of artificial intelligence (AI) on education and the challenges and opportunities it presents. Participants, including policymakers, educators, and youth representatives, explored various aspects of AI in education.
Key points included the need for ethical and trustworthy AI systems, addressing the digital divide, and ensuring equitable access to AI-powered educational tools. Speakers emphasized the importance of involving youth in decision-making processes and policy development related to AI in education. The discussion highlighted concerns about data privacy, algorithm bias, and the potential for AI to exacerbate existing inequalities.
Several speakers stressed the need for global collaboration to develop shared standards and best practices for AI in education. The importance of cultural diversity and localized content in AI-powered educational tools was also emphasized. Participants discussed the role of educators in implementing AI systems and the need for proper training to use these tools effectively.
The discussion touched on the challenges of AI-generated content and its impact on academic integrity. Speakers also addressed the potential of AI to personalize learning experiences and improve accessibility for students with diverse needs. The need for critical thinking skills and digital literacy in the age of AI was emphasized.
Overall, the discussion underscored the complex nature of integrating AI into education systems and the need for a multi-stakeholder approach to address challenges and harness opportunities. The importance of balancing innovation with ethical considerations and human-centered design in AI development for education was a recurring theme throughout the discussion.
Keypoints
Major discussion points:
– The impact of AI on education, including opportunities and challenges
– Ethical considerations and accountability for AI in educational settings
– The digital divide and ensuring equitable access to AI-powered education
– The role of youth voices and participation in shaping AI policies for education
– Addressing biases and ensuring diversity in AI development and implementation
Overall purpose/goal:
The discussion aimed to explore the implications of AI for education from multiple stakeholder perspectives, with a focus on including youth voices and considering both opportunities and challenges. The goal was to identify key issues and potential ways forward for responsibly integrating AI into educational systems.
Tone:
The tone was largely constructive and collaborative, with speakers building on each other’s points. There was a sense of urgency around addressing challenges, balanced with optimism about AI’s potential benefits. The tone shifted slightly towards the end to become more action-oriented, with calls for youth to propose solutions rather than just raising problems.
Speakers
– Li Junhua: UN Under-Secretary-General for Economic and Social Affairs
– Ihita Gangavarapu: Coordinator of India Youth IGF, co-moderator for onsite participants
– Ahmad Khan: Researcher and Development Engineer, Aramco, Saudi Arabia
– Henri Verdier: Ambassador for Digital Affairs, Ministry of Europe and Foreign Affairs, Government of France
– Margaret Nyambura Ndung’u: Minister at the Ministry of Information, Communication, and Digital Economy, Government of Kenya
– Carol Roach: Moderator
– Phyo Thiri Lwin: Active in regional youth initiatives from Myanmar
– Amal El Fallah Seghrouchni: Minister of Digital Transition and Administration Reform of Morocco
– Umut Pajaro Velasquez: Coordinator of Youth LAC IGF and Youth IGF Colombia
Additional speakers:
– Jarrel James: Researcher for internet resiliency
– Lily Edinam Botsyoe: PhD researcher in privacy
– Ahmad Karim: From Year in Women
– Osei Keja: Representative from African Youth IGF
– Dana Cramer: PhD candidate, Youth IGF Canada
– Asfirna Alduri: Part of the Responsible Technology Hub
Full session report
Expanded Summary of AI in Education Discussion
Introduction
This discussion explored the impact of artificial intelligence (AI) on education, examining both challenges and opportunities. The panel included policymakers, educators, youth representatives, and researchers from various countries, offering diverse perspectives on this complex topic.
Key Themes and Discussion Points
1. Historical Context and Current AI Applications in Education
Amal El Fallah Seghrouchni provided historical context, noting that AI-assisted teaching began 40 years ago. Li Junhua shared specific examples of AI use in education across different countries, such as AI-powered teaching assistants in China and personalized learning platforms in the United States.
2. AI’s Impact on Education
Speakers discussed AI’s potential to enhance and personalize education. Phyo Thiri Lwin highlighted AI tools’ ability to help non-native speakers improve language skills. Amal El Fallah Seghrouchni emphasized the importance of voice in generative AI for education, especially in multilingual contexts.
Ahmad Khan categorized AI developments in education into two approaches: instructor-focused (instructionist) and student-focused (constructionist). This distinction helped frame the discussion on AI’s varied applications in educational settings.
3. Ethical Considerations and Accountability
There was broad consensus on the need for ethical considerations and accountability in AI education. Amal El Fallah Seghrouchni stressed the importance of transparency and fairness in AI systems making educational decisions, as well as protecting data privacy and cognitive rights.
Umut Pajaro Velasquez argued for shared accountability among multiple stakeholders, including developers, educators, policymakers, and students. The discussion also touched on the ethical implications of using AI tools for academic work, with panelists and audience members debating the boundaries of acceptable AI assistance in education.
4. Addressing Biases and Inequalities
The discussion revealed significant concerns about biases and inequalities in AI education. Li Junhua pointed out the digital divide, noting that a third of the global population is not connected to the internet. Ahmad Karim highlighted the disparity between the global south, which faces threats and protection concerns regarding AI, and the global north, which tends to focus on opportunities.
Phyo Thiri Lwin elaborated on the challenges of accessing AI education in developing countries, such as infrastructure and funding issues. Asfirna Alduri brought attention to the often-overlooked issue of underprivileged workers, often from the global south, who label AI training data and are frequently excluded from discussions about AI development.
5. Youth Participation in AI Governance and Development
There was strong agreement on the importance of youth involvement in shaping AI policies and implementation in education. Henri Verdier emphasized the need to engage youth in shaping AI policies. Osei Keja advocated for youth involvement in policy-making and governance of AI at regional and national levels.
Asfirna Alduri proposed creating intergenerational spaces where youth can develop AI solutions. An audience member suggested that youth should propose solutions to AI challenges rather than just asking older generations to fix problems, indicating a slight difference in approach to youth involvement.
6. Role of Policymakers and Public Infrastructure
Margaret Nyambura Ndung’u highlighted the role of policymakers in ensuring AI becomes a force multiplier for inclusive and equitable education, addressing the digital divide, and safeguarding equity. Henri Verdier stressed the need for a public infrastructure for educational AI to ensure accessibility and fairness.
7. Concerns and Future Considerations
Ahmad Khan raised concerns about the potential for AI to replace human thinking and creativity. The discussion also touched on the need to validate the accuracy of AI-generated information, especially in STEM fields.
Umut Pajaro Velasquez emphasized the role of academia in researching the impact of AI on education. An audience member suggested the need for universal guidelines on AI use in education.
Conclusion
The discussion underscored the complex nature of integrating AI into education systems and the need for a multi-stakeholder approach to address challenges and harness opportunities. While there was general optimism about AI’s potential to enhance education, speakers emphasized the importance of balancing innovation with ethical considerations, addressing inequalities, and ensuring meaningful youth participation in shaping the future of AI in education.
Moving forward, it is clear that continued international collaboration, inclusive governance structures, and a focus on ethical, user-centric AI development will be crucial in realizing the potential of AI to positively transform education while mitigating associated risks. The historical context provided a valuable perspective on the evolution of AI in education, while the focus on current applications and future challenges highlighted the dynamic nature of this field.
Session Transcript
Li Junhua: It’s very heartening to see decision makers collaborating with all of you and recognizing the critical role of young people in shaping these discussions. I’m truly inspired by the remarkable leadership and spirit of cooperation that you have demonstrated through the IGF youth track over the past months. It has facilitated invaluable cross-regional dialogue and learning alongside the global youth track, or the global youth summit. The youth track sends a very powerful message to the world, namely, that uniting across generations is essential. Only together can we meaningfully address the issues reshaping the foundation of our future. As we come together today to discuss the impact of AI on education, let us remember that, first and foremost, education is one of the fundamental human rights enshrined in the Universal Declaration. It is our shared responsibility to ensure that AI supports this fundamental right throughout the world rather than undermines it. There are a number of good examples around us. For instance, in Morocco, AI is helping to reduce learning disparities in rural areas. In France, AI is helping visually impaired students to read by converting digital information into haptic feedback. In Brazil, AI-powered natural language processing is improving literacy. In India, AI-driven voice-assisted education tools are fostering language inclusiveness. And in the UK, AI is being used to convert complex documents into simpler sentences.
Such tools can render content in easy-to-read formats in over 70 languages, which actually enhances accessibility for learners with diverse needs. Having said that, we have to recognize that there is a digital divide that hampers the potential to build on all those good practices. One very striking figure: a third of the global population is not connected to the internet, not to mention AI accessibility. In this connection, the United Nations has been given a strong mandate to right this wrong. The recently adopted Global Digital Compact calls for actions to foster international partnerships that build AI capacity through education and training and to expand access to open AI models, systems, and training data. So today’s summit provides a platform for multistakeholder, intergenerational dialogue on AI, especially on AI training and AI education. I believe your summit and your discussion will greatly help identify a path towards an ethical digital future where we leverage AI to help guarantee that every individual has open access to quality education regardless of their background. I look forward to hearing more of your ideas and innovative actions. Thank you. Thank you very much.
Ihita Gangavarapu: Thank you, Mr. Li. I know he’s been one of the busiest people at this conference, and we appreciate the support for young people in the internet governance space. With this, I’ll take over from Ms. Carol. I am Ihita Gangavarapu. I’m the coordinator of India Youth IGF and also the co-moderator for onsite participants. Before I begin, I would like to introduce our online moderators: Ms. Ines Hafid from the Tunisia IGF and Arab IGF, and Mr. Keith Andre from Kenya IGF and the African Youth IGF. This is to ensure that we have seamless communication and participation, both virtually and onsite. All right, with this, we now move on to a very interesting intergenerational panel, and it gives me immense pleasure to introduce our panelists for today’s session. We have Mr. Henri Verdier, the Ambassador for Digital Affairs, Ministry of Europe and Foreign Affairs, Government of France. We’re also joined by Ms. Margaret Nyambura Ndung’u, Minister at the Ministry of Information, Communication, and Digital Economy, Government of Kenya. We have Ms. Amal El Fallah Seghrouchni, the Minister of Digital Transition and Administration Reform of Morocco. We have Ms. Phyo Thiri Lwin, who is the Coordinator of Youth Myanmar IGF. We have Mr. Ahmad Khan, Researcher and Development Engineer, Aramco, Saudi Arabia. We have Mr. Umut Pajaro Velasquez, the Coordinator of Youth LAC IGF and Youth IGF Colombia, among other affiliations. We have Mr. Khalid Hadadi, who is the Director of Public Policy at Roblox. With this, I now move on to the very first question, which is directed to our host-country panelist, Mr. Ahmad Khan. Let us start from the host country. What is your experience of how AI innovation is impacting education? How are different countries around the world adopting AI in education, and what opportunities does this space bring to all of us?
Ahmad Khan: Okay, thank you very much. Actually, the thought came to my mind whether I should use ChatGPT to help me with this remark. On the one hand, it would improve my output. On the other hand, it could dilute the thoughts and perspective that I’ll share. I think this is the dilemma we face with AI, right? I’ll get back to this point, but I’d like to touch on some points that answer this topic, and we can expand more in the Q&A if there is interest. In terms of innovations in AI, there are generally two categories of AI developments in education. The first is educator-focused or instructor-focused technologies; these are called instructionist approaches, which focus on supporting and automating grading procedures and feedback for students, for example. The other approach is called the constructionist approach, and the focus is on how students and learners can use technology to construct knowledge themselves. That’s more of a hands-on approach to education. Both really provide value and should be involved in how we go about integrating AI technologies into education. In terms of concerns, there are concerns with data use by technology companies, and there are concerns with the decision-making capabilities of AI tools, which, by the way, is a structural limitation inherent in large language models. But my main concern, looking at what the future of education could be, is the possibility of a future where society would become an end-user of knowledge and not a creator of knowledge, right? As a youth advocate, I’d like to talk more about this and give a few characteristics of what good education of the future could look like and what it should do. First, good education should instill a level of deep thinking and curiosity for knowledge.
And there are some new tools now where large language models use the Socratic method with students: they ask them questions, get them engaged, and let them reach answers on their own, so that they become sharp and critical thinkers. The vision here is that a generation should be able to use AI to support their thinking and not replace their thinking. Second, good educators should be a source of inspiration and guidance for students, and we should really focus on enabling and supporting this direction. The idea here is that someone who is uninspired will use AI to just get the answer, while someone who is inspired will use AI to help find the answer and foster curiosity. Third, good education would foster self-learning and empower lifelong learners who would think collectively and not individually. On this, I think that’s something we have to practice as we teach. One example I would think of is developing learning hubs that we can spread around the globe, where students, policymakers, and technology developers can come together, discuss ideas, get feedback, and see what worked and what didn’t; that helps with integration and really pushing it forward. I’ll give one more idea here, and then I will close: the technology adoption curve. If you’re familiar with it, you have your normal bell curve that starts with your pioneers, your Steve Wozniaks, the magicians of technology. Then you have your early adopters, about 10-15 percent of the population, and it turns out this is the section that really drives the maturity and adoption of a technology. Those are the people who will stand in line for hours to wait for the new iPhone. Those are the people who will tell you how good it is and what needs to improve. Then you have your majority adopters, the practical people who want to think about: how will I use the technology?
How will it help me without really taking much of my time? Then you have the late adopters, those who are either not able or not interested in adopting it early on. In developing these learning hubs, we really want to think about how we can help and facilitate more people to become early adopters, and get them involved in the discussion and engaged early on. Then you can have the effects go from a local to a global level. Thank you very much.
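The adoption-curve segments Khan describes correspond to Rogers’ diffusion-of-innovations model, in which adopter categories are slices of a normal (bell) curve cut at 2, 1, and 0 standard deviations from the mean adoption time. As an illustrative sketch (the boundaries below are the standard Rogers convention, not figures given in the session), the segment shares can be computed from the normal CDF using only the Python standard library:

```python
# Adopter-category shares under Rogers' diffusion-of-innovations model:
# each category is a slice of a standard normal distribution, with cut
# points at -2, -1, 0, and +1 standard deviations. Stdlib-only sketch.
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Category boundaries in standard deviations, earliest adopters first.
boundaries = [float("-inf"), -2.0, -1.0, 0.0, 1.0, float("inf")]
labels = ["innovators", "early adopters", "early majority",
          "late majority", "laggards"]

# Share of the population in each slice = CDF(upper) - CDF(lower).
shares = {
    label: normal_cdf(hi) - normal_cdf(lo)
    for label, lo, hi in zip(labels, boundaries, boundaries[1:])
}

for label, share in shares.items():
    print(f"{label:15s} {share:6.1%}")
```

This yields roughly 2.5% innovators, 13.5% early adopters, 34% each for the early and late majority, and 16% laggards, which matches Khan’s figure of early adopters being about 10-15 percent of the population.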
Ihita Gangavarapu: Thank you so much for your points. They set a lot of context for the upcoming discussions. I now turn to Mr. Henri Verdier, the Ambassador for Digital Affairs from the Government of France. My question to you is this: France has rich initiatives addressing AI. How do you see AI impacting education, and what principles should guide the development and implementation of AI in educational settings?
Henri Verdier: Thank you very much. That’s a question that is impossible to answer in five minutes, but I will try to share some views. First, let’s recall that when we speak about AI and education, we speak about at least three different things. We speak about how to use AI for education, and of course we can dream of a world with, for example, more personalized education. If a model could tell me, “you don’t understand mathematics because you didn’t understand this concept two years ago, and I will fix it, and now you can continue,” that’s, of course, a dream. We also have to think about education about AI. We need skills and literacy; frankly speaking, a human with absolutely no AI literacy won’t be as free as they could be soon. So we need to empower and to prepare. And we need to prepare our children for the world of AI, a very complex world where, if you don’t know how to do your job with AI, you will lose your job, and where we’ll constantly live with small companions that will always obey and serve us. That’s not a good way to become a great human being: to never be opposed, to be surrounded by servile models. So that’s a very different question, and that’s important too. The youth represent a vulnerable group. We have a duty to let them become citizens and free human beings. They have rights, and we have to pay much more attention to them than to most users of AI. I say this because, of course, everything we are doing in the field of AI regulation and governance matters for education. Let’s start with general principles. We need to find trustworthy models. We need to respect the UNESCO ethical principles and strong ethics. We need to avoid bias and to pay attention. To make this possible, we need to conceive, because we don’t have it yet, a way to audit AI models in a democratic way. The problem is not solved by just a few experts coming to me and saying, “I did audit the model, it’s great.”
We need society to be able to have a conversation regarding the models, and for this we need to conceive new strategies. We need to avoid not just bias, but also a lazy confirmation of current inequalities. For example, today, as you know, if I ask an AI to show me a CEO, it will propose a white, 50-year-old man, someone like me. Today the average CEO is like this, but it will change, and the AI has to change too, or to prepare for it. This is not just a question for education, but it is a very important question: if you don’t fix it, you will have trouble in education. Then there are questions that are more specifically educational. We need to preserve a spirit of public service. Education is a fundamental right, so of course the market can help us, companies can help us, we have research, but we have to be sure that it will remain a public service. Let’s imagine if we end up in a world with one giant company teaching every child in the world: we are lost, it’s finished. So we need a diversity of solutions respecting cultural diversity and the needs of every country. That’s very important. We need to be sure that the principles of equity, equal access, and non-discrimination will be preserved. In France, we think that for this we need a kind of public infrastructure at some level; we cannot just rely on self-regulation by an oligopoly of a few companies. So we have to think about what a public service of educational AI would look like. And probably, and I am almost finished because the five minutes are running fast, the international community will have to conceive a framework for knowledge and education. We cannot let it be captured. We will produce a lot of knowledge with AI, with all those data, but can that knowledge be the property of the companies that build it? You know, we live the way we do because there is public science, because there is common knowledge, and you can innovate and create value because there is also a
common knowledge. So what is the common knowledge that we need to share regarding AI? I don’t think we have had a strong conversation on this question yet. And I conclude with this, and that’s not just because we are at the IGF youth summit: we need to engage the youth from the beginning. I think this deeply and frankly, not just because I’m the father of two daughters, 18 and 20, but because we will need new solutions. We need strong innovation and brave innovation. We need ideas coming out of the box. And for this, we need to engage the youth very early. For example, to prepare the Paris AI Summit, we worked with IGF France and other organizations, and we organized some sessions, workshops, and a hackathon. We simply asked young people: what kind of education do you dream about? And I was very interested by the answers. For example, all the ideas involved a personal AI in my pocket that I taught myself, where I know the model even if the model doesn’t know me so well. So those young people who worked with us didn’t want one big central AI model somewhere in the United States. They wanted a personal companion that they teach themselves. And that was interesting because it was instinctive; they didn’t really think about it. For them, a good future is a future where the AI works for me, in my pocket, with my prompts, and not a future where someone somewhere decided about my future. That was very, very interesting for me.
Ihita Gangavarapu: Thank you. Thank you, Mr. Verdier. You mentioned a lot of concerns that, as a community, we need to address, think about, and deliberate on. That brings me to my next question, which is directed to Ms. Margaret. When we talk about cooperation and collaboration, what can policymakers do for AI to support and advance education for everyone? And how can global collaboration be fostered to address the challenges and opportunities in the AI field, particularly in education?
Margaret Nyambura Ndung’u: Thank you, Madam Moderator. Good morning, good afternoon, and good evening to all the distinguished panelists. It’s a great honor to join you at this 2024 IGF, and I’m glad that I’m able to join online. I extend my gratitude to the host country, Saudi Arabia, and to the UN Secretariat for organizing this. Turning to the question: the intersection of artificial intelligence and education presents both profound opportunities and pressing challenges. As we delve into this discussion, I would like to frame my remarks around the two areas you have asked about: what policymakers can do to ensure artificial intelligence supports education for all, and how global collaboration can address the challenges and opportunities of artificial intelligence. Policymakers have a critical role in ensuring that artificial intelligence becomes a force multiplier for inclusive and equitable education, as envisioned in UN Sustainable Development Goal 4. To achieve this, there is a need to focus on accessibility, addressing the digital divide, and safeguarding equity, among others. Artificial intelligence-powered tools must be designed with inclusivity at their core, ensuring they cater to learners with diverse needs, including those who are differently abled and those in underserved communities. Governments should incentivize the development of open-source artificial intelligence tools and platforms that democratize access to quality educational content. This can be achieved by following universal design principles and by using artificial intelligence to create personalized learning experiences that adapt to individuals’ needs, such as text-to-speech capabilities for visually impaired learners, or speech recognition tools for those with hearing disabilities.
We must also focus on multilingual support by equipping artificial intelligence systems with language translation capabilities, especially for local and indigenous languages, to bridge linguistic barriers for learners in underserved communities and across all communities. Again, we are talking about collaborative platforms: promoting the creation of open-source educational platforms that pool resources and expertise globally, making high-quality content accessible to all learners regardless of location or socio-economic status. And finally, we are talking about supporting the development of localized content that is culturally relevant, with context-specific learning materials that resonate with local communities. Distinguished delegates, the second area I would like to focus on is the digital divide. Artificial intelligence’s potential can only be harnessed if all learners have access to the necessary digital infrastructure. Policymakers must prioritize investments in affordable and reliable internet connectivity and digital devices, particularly in rural and marginalized areas. Addressing the digital divide through artificial intelligence in education requires comprehensive strategies that ensure all learners have access to the digital infrastructure and tools necessary to benefit from artificial intelligence-powered solutions. By focusing on infrastructure, affordability, and inclusivity, and by combining efforts across stakeholders, AI can be a transformative tool to overcome the digital divide and provide equitable education opportunities for all learners. I must say that as a country we are doing a lot on infrastructure development, affordability, and capacity building. Distinguished delegates, safeguarding equity is important in leveraging AI to back education for all. We must mitigate the risk of bias in AI algorithms that could exacerbate existing inequalities.
Policymakers should establish regulatory frameworks that ensure transparency and fairness in the design and deployment of AI in education. To safeguard equity in leveraging AI for education for all, several strategies must be adopted to ensure AI supports inclusivity and does not inadvertently perpetuate or exacerbate existing inequalities. We all know that across the continent, and more so in Africa, these inequalities exist. The third area I would like to focus on is fostering global collaboration. Global collaboration is not merely a choice but a necessity to harness AI’s potential in education responsibly. By working together, governments, institutions, and the private sector can ensure that AI contributes to inclusive, high-quality education for all. The challenges and opportunities presented by AI in education are inherently global, and so must be our response. Collaborative efforts are essential in shaping an inclusive digital future, which includes strengthening international partnerships. Governments, education institutions, private sector actors, and civil society must work together to develop shared standards and best practices for AI in education. Multilateral organizations like the UN can provide platforms for dialogue and cooperation. With this, again, we bring in the education institutions, because we are talking of our young people, our youth, to ensure that they are fully integrated. Governments, education institutions, the private sector, and civil society must come together to develop shared standards and best practices that ensure AI’s ethical, equitable, and effective integration into education systems. Sharing knowledge and resources is one key area of fostering global collaboration. The transformative potential of artificial intelligence in addressing global challenges, particularly in education, health, and development, calls for equitable access to AI technologies and expertise.
Countries with advanced AI capabilities bear a responsibility to share their knowledge, resources, and innovations with those that are still developing their AI ecosystems. Some of the key strategies are promoting digital public goods, global research collaborations, and capacity building and knowledge transfer. Once we do that and ensure that we are exchanging knowledge, and particularly when we are talking about digital public goods, advanced AI nations can support the development and dissemination of open-source AI tools, data sets, and platforms as digital public goods, ensuring accessibility for all; collaborative platforms can facilitate the adoption of these tools in under-resourced regions, fostering inclusion. As we discuss this, empowering youth participation is one of the core issues for this forum to consider. And again, this is a youth forum. We are saying that empowering youth through global collaboration is critical to shaping an ethical, inclusive, and forward-looking AI ecosystem. By giving young people a seat at the table and fostering their active engagement, we can ensure that AI policies and practices resonate with the aspirations of the next generation, safeguarding a future where technology serves humanity. And finally, last but not least, ethical AI development is equally critical in fostering global collaboration. By embedding ethical principles into the design and implementation of AI systems, we can build global trust. This includes respecting cultural contexts and safeguarding data privacy and security, especially for vulnerable populations. Global collaboration is critical to embedding ethical principles in the design, deployment, and use of AI systems. By leveraging shared values and diverse perspectives, the international community can ensure that AI development aligns with the principles of fairness, inclusivity, and respect for human rights, fostering global trust.
Thank you, moderator.
Ihita Gangavarapu: Thank you so much for your points. You very well captured a policymaker's perspective on the various concerns that we have, as well as certain ways, with respect to collaboration, in which these concerns can be addressed. Now I'll hand it over to Ms. Carol to take it forward.
Carol Roach: Thank you very much. We had a lot to digest just now, and it came mostly from those of us with the gray hair (honorable hair, of course). So now it's time to hear from the youth, especially based on what we've heard and the desire to really engage the youth, not just as figureheads, but to get you involved and sitting at the table. So here's your question. You come from Myanmar and are very active in regional youth initiatives; I see her online all the time. Not everyone has the same opportunities. What policies are necessary to prevent the digital divide from widening due to AI implementations in education?
Phyo Thiri Lwin: Thank you for the introduction. From the perspective of young people from developing countries, we are trying our best to catch up every day in whatever way we can. I know that there are academies in the private sector that are trying to train the younger generation, and I'm Gen Z myself, to learn more about AI, generative AI, let's say. So even though many things are happening in developing countries, we are still trying our best to catch up and not miss any kind of opportunity. But there are also challenges in accessing education, because, for example, developing AI-related infrastructure is quite expensive, especially for private-sector academies, schools, or universities. They face real challenges with funding and investment for, let's say, setting up AI learning hubs in developing countries. That is one of the challenges that I see. Another challenge is accessible education. Internet access has been a challenge for us in accessing technology in developing countries, perhaps because of geopolitical matters. Without the Internet, I don't think we can keep learning about AI, or empower young people to continue their education. So if we are talking about AI in education, the Internet is also essential for access. As for preventing the digital divide, which is the question: to prevent the digital divide from widening through AI in education, I personally feel we at least need access to the Internet as a fundamental right. Then we can continue to shape the policy.
Even if there are challenges at the policymaker level, we can shape our society and community at the very ground level, at school or university. We can change the policy there. We can at least allow students to use AI. But the educators also need to be open-minded about using AI. What I have experienced is that, even when students want to use AI tools, some educators stay narrow-minded. That is a harsh way to put it, but I feel they are very concerned about cheating on assignments and things like that. I can understand from their perspective why they are concerned about students cheating on assignments using AI technology. But from a learner's perspective, like mine, I have a challenge: I'm not a native English speaker, right? I need an AI assistant to revise my ideas into a better version, something like that. So I feel that at the school or university level, we can at least shape ground-level policy to grant students access to AI tools. One concern might be the assessment system, because students could also cheat using AI technology, right? So maybe we need to change the exam system, and the assessment system as well. That is what we can do, and from the educators' side, I personally feel it is better to change the exam or assessment system along the way. That is what I mean about shaping our society at the ground level by changing policy at the school or university. But at the higher level of policymaking for developing countries, there is always a big gap between developing and developed countries. The thing is that we can share our resources with each other, because we are all human beings.
One of the speakers said we have to think about diversity and inclusion, and we can also share resources, at least by sharing information and sharing opportunities to learn about AI. For example, a speaker mentioned the AI Summit in France, right? So maybe we can give young people the opportunity to go and learn what is happening in developed countries by attending the AI Summit. That is also an opportunity, and it empowers them to bring something back to their own initiatives at the local level. For example, if we invite both an educator and a learner, and both get the chance to attend the AI Summit, that could be very beneficial for them, to share their best practices and learn how to be open-minded about using AI technology. That is a way we can share resources among ourselves and at the global level. We should not leave anyone behind in this era of AI evolution and revolution. We have to bring everyone along as much as we can, by shaping our educational policy at the ground level and at the higher levels as well.
Carol Roach: Thank you, Phyo. I think what we hear repeated here is that we need some kind of corporate responsibility in terms of helping countries to develop their AI, because it is an expensive endeavor and we need them. I totally agree with regard to the change of mindset, especially of the educators, and probably of parents as well, so that the youth have a say and don't just look at AI as a negative, but embrace the positive part. And for us to embrace the positive part, yes, we do need that mindset change. We are going to now hear from Umut online. Can you hear us? Umut? You can hear? Okay. Yeah. Okay, so as we sort out the technology, let's move on to our next question. Minister Amal, this is a big one for you. After hearing all that has been said, these important views both from the youth and from the older generation, what preoccupies your attention as a decision maker on AI's implications for education? It's a big one.
Amal El Fallah Seghrouchni: Thank you very much. Yes, it's a huge question, but I will start with a dream. My dream is to keep young people as far as possible from computers, because I think they already spend a lot of time connected and very close to machines. I think that AI should be used when it has a real added value, and we have to discuss what added value means. For example, if you use AI to simulate a classroom, this is something you cannot do alone as a teacher. So you need a tool to simulate this classroom, to simulate interaction with students, and so on. In that situation, AI can provide some benefits. And I would like to say something just to set the scene. Already in the 1980s, there was education-based AI. We called this enseignement assisté par ordinateur, computer-assisted instruction. It started many, many years ago, some 40 years ago, and there were a lot of advances in mathematics, in science, and so on. Then we scaled up from this basic AI-assisted teaching to serious games, for example. Gaming became something very important in many situations, not only at school but also in companies, because it puts the person in a learning situation. Then we started thinking about more personalized experiences with AI. And we got generative AI very recently, with ChatGPT in 2022, though generative AI started about five years before that. This gen AI has some very interesting features, and I will come back to this, but also some bad ones. For example, plagiarism is something we have encountered; the whole education system has been disturbed by this. With ChatGPT, you start by thinking, maybe ChatGPT can give me the answer. So let's focus on the positive aspects. For example, the voice. Voice in generative AI is very useful for education, and we have started developing a lot of apps, for example for translation from one language to another, using approaches based on speech-to-text or text-to-speech.
And this is very useful, in particular in the global South, because you have to deal with literacy and with multiple languages. If you go to Africa, for example, in one country you may have to deal with 15 or 20 different languages. Generative AI helps us shift very smoothly from one language to another, and this, I think, is the real added value of generative AI in education. Now, listening to you, I think there is a very big problem. Again, if we focus on the global South, it's about connectivity. It's very difficult to have it everywhere. Infrastructure is also a huge problem if you deal with large language models. So we have to find new approaches. For example, in France there is a very nice group working on frugal AI and trustworthy AI, in the sense that we need to certify that the outputs of AI are accurate, and that the systems don't need a huge amount of data or very large models. There is also the question of access to platforms: if you put apps on platforms, people should be able to access them. So I think the crucial thing is the ethical aspects of AI and education. The digital divide we already have should not grow any further. If some people have access to learning with very sophisticated tools while others don't, you have this problem of accessibility and equity. Transparency: there is also a need to maintain clarity about how AI systems function and make decisions. We have moved from the automation of systems to the autonomy of systems with AI. This autonomy allows some educational systems, for example, to make decisions about orientation, about access to university, and so on. And this is related to accountability: we need to explain why a given person gets this kind of access or not. Another very important topic is data privacy and cognitive rights in education.
Because, you know, data is something very important to protect, and cognitive data in particular, which is even more important. There is a possibility to trace all this cognitive data and to manipulate it, to apply nudging to this cognitive data, and to carry out manipulation at large scale. And finally, I would like to mention all the problems related to bias in AI, like bias in data; you know the book Invisible Women. There are a lot of problems related to the use of data, and we rely on data and on algorithms that can be biased as well. So, to summarize: there is the problem of data, the problem of infrastructure, and also the problem of design. How do we make AI trustworthy, in particular in the case of education?
Carol Roach: Thank you very much. We hear a lot of different terms of ethical AI, but I think I like trustworthy AI. So we’ll have to use that a little bit more as well. Is Umut on? Okay, Umut, so here is your question and welcome. Okay. What are the youth in Latin America and the Caribbean thinking when it comes to who should be held accountable for decisions made by AI systems in educational environments? Take it away.
Umut Pajaro Velasquez: Okay. Good day or good evening, everyone, wherever you are. When it comes to deciding who should be held accountable for AI decisions, in Latin America we think this, like many problems related to Internet governance, is a problem that should be addressed by several stakeholders at the same time. Determining the main stakeholders who can be held accountable for decisions made by AI systems in educational environments requires careful consideration of roles and responsibilities. First of all, developers. AI developers have the responsibility to design and develop AI systems that are ethical, unbiased, and transparent. They should ensure that their systems are trained on diverse and representative data sets, especially in a region like Latin America, where we have so many cultural nuances and different languages, and also ensure that AI systems, especially the ones we're using for education, are designed to protect student privacy. Educators also have a role in accountability, and I'm an educator, so we talk about this a lot. Despite many people saying that most educators are resistant to AI, I think it's the opposite. Most educators simply don't know exactly what their responsibility is in this whole process, so they don't know exactly how to use it, and they feel afraid because they don't know, not because they are actually against the use of the technology. Educators have to play a crucial role in implementing AI systems in the classroom. Most educators need to be trained on how to use AI effectively and ethically, and they should be involved in the decision-making process regarding how AI is used in schools.
That means educators should be involved in the implementation and policymaking processes as well, not only as the ones who receive some training to implement those tools, but as the ones who decide how they are going to be implemented and regulated when used inside the classroom. The other stakeholder that is obviously important here is policymakers. Policymakers have a responsibility to society to create the regulations and guidelines that govern the use of AI in education. These policies should address issues such as data privacy, algorithmic bias, and, obviously, accountability. And students, because they are also part of the process: we can't have real accountability without including students in this conversation. Without them, it's impossible to fully address the complexity of AI education tools, or to make anyone accountable for their use, because it's not only developers who are going to be accountable; we need to look at the whole process, not only the design stage, but all the way through deployment and implementation. Students themselves should be empowered to understand how AI is being used in their education and to have a voice in the decision-making processes. They should be educated about the potential benefits and risks of AI and encouraged to critically evaluate the information generated by AI systems. Students need not only proper education on how to use the tools, but also critical thinking about how and when they use them, because most students are already using these tools; we can't avoid that. We have to recognize that accountability is a really complex topic.
We probably don't have enough time in five minutes to cover everything about accountability, but what we can say is that accountability should be shared among all stakeholders. It requires a collaborative approach that prioritizes ethical considerations, transparency, and the well-being of students, because students are the main focus of the education system. Before I forget, another stakeholder that should be taken into account is academia. Academia needs to understand and investigate how AI is affecting education, not only in practice inside the classroom, but also outside of it, and how it is changing the dynamics of how students learn and develop their abilities, in the classroom and in their daily lives. So academia needs to understand the didactic and pedagogical aspects that are being affected by the use of artificial intelligence tools in the classroom. That is another stakeholder that should be taken into account. With all of this, we can have AI in education that is more accountable: transparent, fair, with human oversight, respectful of student privacy, conducive to equity, and child-friendly. That's my approach to it. Thank you.
Carol Roach: Thank you very much. So we can see that the number of critical stakeholders is growing here. We have government, the technical community, and of course academia. After listening to all the talks, I have a question running around in my head, but I have to leave it running for one more speaker, and then I'll put it out to you. So, Mr. Khalid. Sorry. Just go ahead, okay. Right, so I get to put out my running question. I'll be honest: I have not used a single AI tool. And I'll tell you why. It's because, can you still hear me? Yeah, okay. It's because of what was said at the beginning, and I think it was by Mr. Khalid. Hold on, am I going to be enhancing what I do, or diluting it? Is it really going to be me, or a thousand other people? So I'm going to put it out to you who have probably used AI. How do you feel ethically, personally, when you use an AI tool to help you, as Phyo said, to enhance your work? Do you think that you're really enhancing it? What do you do to help your ethical compass? I'm throwing that question out to you in the audience.
Henri Verdier: A very brief comment. Indeed, you do use AI tools a lot. If you take a picture with your iPhone, that's AI. If you receive advertising on the web or within a social network, that's done with AI. If you receive information in your social network feed, that's AI. And that's a wicked aspect of the problem, because you don't always notice that AI is everywhere. And maybe that's the most important point, because we cannot confront and contest and discuss democratically, because decisions are made and we don't even know that there are decisions.
Carol Roach: No, I agree. I agree that sometimes we don't know, but I'm putting it out there in terms of when I do know. When you look at Zoom, there are the little AI apps you can use, and WhatsApp, everywhere you turn. But right now, and I don't know if they're trying to be ethical, you have to click a button to say, yes, I want to use it. So I'm looking at the point where I click and say, yes, I want to use it. But thank you. You're quite right.
Jarrel James: Hi, my name is Jarrel James. I'm a researcher for… internet resiliency. I do a lot of work with AI, and a lot of the concerns that Ahmed has raised, and I've heard him talk about them before, I have as well. I think what you're discussing there is consensual data mining, consensual activity with AI. When I use it myself, my hesitations right now, which is something I would love to hear the panel discuss, are about who is responding to me. It's a large language model, so whose language, whose background, whose perspective is diluting my creativity? Whose background and perspective is diluting my output? Because I really value the fact that I come from East Africa; I am well trained and well educated in all sorts of things in the West, but I am applying that education through this perspective and this lens. And when I use AI tools, I often notice that I almost have to give the AI my philosophy first, and write out logical prompts with if-then statements: if I believe this, then this is the outcome I would like to see. I almost have to deprogram the AI from the language model it currently exists in. So I would love to hear, and Ahmed, I think you are ready to go on this, beyond just who owns how it's implemented regulation-wise: do you see Global South members being the next Steve Jobs of AI, the big innovators in AI, or is it going to follow the same path, where the foreign delegates or the corporations come in and give you these language models, and then you have to decide what's true or not? Thank you.
Ahmad Khan: Maybe first I'll start with the concern that you raised, and this goes to what the ambassador said. Really, I think we blew it with social media. Why is it that someone on a different continent gets to decide what I watch on my phone for two to four hours? And I'm not mentioning any names, Mark Zuckerberg. But for AI specifically: the idea of how the technology works is that it's a large language model, meaning it takes data, learns what the data says, and can predict the next word. That is what it learns. You take a bunch of information from the internet, so what it learns, on average, is the average content you see. Without any post-training, that is what you get: the average response you would find on the internet. But then there is the fine-tuning that happens afterward: be more supportive, give more information, be this and that, and then it learns concepts that let it follow the direction you give it, right? How it happens now is that the different companies control for these things, and they try to ensure the model is trustworthy in a sense; they make their own best judgment about how it should be used. And then we become end users. So now the question, again back to your point, is: how can we push AI so that I can tell it what I want, and it serves me and not the company? That is really what we want to focus on. And I think this is an overlap between the capabilities of the technology itself and what we do with it. I'll leave the floor open if anyone wants to add more to this.
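[The next-word prediction Ahmad Khan describes can be sketched with a toy bigram model. This is a deliberately simplified illustration of "learn from data, then predict the most likely next word," not how production large language models are actually built.]

```python
from collections import Counter, defaultdict

# Toy training data standing in for "a bunch of information from the internet".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn bigram statistics: for each word, count which words follow it.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation: the 'average' answer."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

[The point of the toy model is the same one made above: without any fine-tuning, the prediction is simply the average of whatever data went in.]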
Carol Roach: Next.
AUDIENCE: Can you hear me okay? Brilliant. Thank you so much for that, Ahmed. My experience as a young person utilizing AI, especially in my work as a consultant, where my job is quite professional, is that I usually struggle to get it to give me an objective stance. Whatever prompts I give it, it ends up speaking to what I'm feeding it, which is not necessarily what I want, right? I want this thing to challenge me, to give me some sort of objective truth. So I guess my question to you is: do you think an objective truth exists in AI, or is it always going to be manipulated to a certain extent by its users and the community that utilizes AI technology?
Ahmad Khan: Yeah, I think, again, this is a bigger question of what's right and what's wrong overall, right? But AI will give you the answer that it has learned, and in that sense it is consistent. If you tell it, I want you to challenge me, then it will try to challenge you; that is what it's good at. So if you use it for what it's built for, it's great. But if you try to extend it further than what it should do, then it will fail, and then we say the company is responsible for it. If you use a knife that's supposed to cut things and it cuts your finger, maybe you didn't use it right, or maybe it was too sharp. So we have to really know what the limits of AI are before we try to use it for all intents and purposes, right? In terms of how we can actually use it to get logical and objective answers, there are tools now. Large language models learn an intuition from data; that is what they get. If anyone is familiar with system one and system two thinking, it's the fast thinking process of intuition. The model just learns intuition; it doesn't have a structure for logic. There are now hybrid models being developed that can actually check whether something is logically and reasonably objective, and that's something we can incorporate into developing tools. I think that will take longer, so maybe hold off on asking it what the meaning of life is until we get that answer.
Henri Verdier: A very brief comment regarding your question. The current models were built by companies to sell something, so they try to please. They should not always answer; sometimes they should say: are you sure that this is your question? Do you know why you are asking this question? But there will always be answers. When they don't know, they invent, and when they don't invent, they hallucinate. But they will always agree to answer. And for me, that's my worst concern.
Ihita Gangavarapu: Very well answered, actually. Just quickly before we proceed, I just want to check with our online moderators. Ines and Keith, do we have any questions or comments online? I see that we have a comment or question from Lily. If you’d like to speak, please.
Lily Edinam Botsyoe: Hi, everyone. Good morning, and a very early morning from here in Cincinnati, Ohio; excited to join the conversation. One of the things I wanted to pick up on was Madam Carol mentioning that she had not used AI, which she then clarified. I'll point out that in using our emails and our calendars, there's a subtle use of AI, so much so that it enhances productivity in one way or another. We are all using it. As a youth, answering from the perspective of whether it adds any efficiency or is effective for me: first off, it is. But secondly, I'm a PhD researcher in privacy, so my concerns go toward the idea of privacy, and I share the sentiment of the speaker who took the microphone first, asking whose perspective it may be spotlighting for me, right? In that respect, one of the things we start to look at is what these companies are doing. For example, ChatGPT. I've discussed so much of my dissertation with ChatGPT that when I ask a question, it brings elements of my past work into the response it gives me. One of the things they are doing is that when you start a conversation, you can toggle a button and say, hey, don't train with my information, don't train using my data. As a first step, it is a way for people to say, hey, I'm looking out for my privacy, and maybe I don't want this to be used in training this large model. But beyond what these companies are doing, we are speaking about responsibility. For those of us who want to be private and secure while using AI tools, we should also start thinking about how we know and understand what these tools are, and first off, look out for our own security. Are you uploading your social security number? Are you uploading passwords?
What else are you putting out there that could end up in the training of these models? Remember what the minister said: it is machine learning that these AI tools are using, and natural language processing, with these tools acting like the human brain, which brings in the neural network part of AI. We are all using the tools, but we also have to take the time to learn for ourselves and make sure we are taking proactive approaches to protect ourselves, while the companies, the policies, and everything else also work in parallel. So from my point of view, AI supports me, but I also look out for privacy, because it is huge. If you don't think of it for yourself, the companies will only play a secondary role, and your information may well end up being used to train these models.
Carol Roach: Okay, thank you very much. That was very helpful. Next speaker; we have quite a queue here. So please, we're asking everybody to stick to the two minutes. I think we're down to maybe a one-minute intervention, so that everyone can have a chance to get involved. Thank you.
Ahmad Karim: Hello, everyone. Hello, are you listening? Hello. Go ahead. Thank you so much for the insights. My name is Ahmad Karim, I'm from Year in Women, and I have two questions. The first is that we see a very wide gap between two conversations when it comes to AI. The Global North talks about AI and the opportunity it would bring to the economy and to everyone, and the Global South about the threats and the protection side of AI and technology. How can we guarantee that women and girls have that side of the conversation, where the systems are aware of their concerns and safety measures, without sacrificing the opportunity space where we can have more girls and women shaping the whole industry? The second question relates to bias and AI. We know that there are biases in the data itself. AI has inherited our history, our civilization, tens of thousands of years of biases against women and girls, and this is what we are also receiving in AI output. But there is also algorithmic bias from those who are creating AI, mostly men creating software for other men, with a smaller percentage of women in the AI industry, so the bias is perpetuated in the applications. And the last part is bias in the users, those who already have gender biases and ask the wrong questions. How can we make sure, and who is responsible for, fixing AI so that it works for women and girls? Thank you.
Carol Roach: We'll go to the online speaker, who has a response to the question that was put to us, and then come back to the floor. All right, we'll go online.
AUDIENCE: Hi, I would like to answer the last part of the question, about gender bias in AI, because that's related to my work. One of the things we can say is that we can't really blame AI for being gender biased when we ourselves create the data that feeds the AI systems with those biases. The biases exist in society, so first of all we need to change the cultural background of the entire society in order to have less gender bias in artificial intelligence. We can also improve the language when we are talking about language models, and that's what I'm trying to do with my language, Spanish: to improve those models so that the representation and the outputs that people receive are more equal between men's representation of things and women's representation of things. It's hard when you have languages with such strong gender marking built into them, and a cultural background that is really, really gendered. So it's not going to be easy to tackle gender bias, but we should try, as many people are trying to do at the moment. What I say to people who want to improve models with respect to gender bias in AI is to start feeding the data with more content about women and other genders, how they express themselves and what they do in everyday life. That would help the different AI models give less gendered responses to the prompts they receive. So, yeah, that's what I wanted to say.
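[The speaker's point about rebalancing training data presumes you can first measure the skew. A minimal, hypothetical sketch of one such audit follows; the term lists and sample text are illustrative placeholders, not a real lexicon or methodology.]

```python
# Hypothetical corpus audit: count gendered terms before data is used for training,
# to surface representation imbalance of the kind described above.
FEMININE = {"she", "her", "woman", "women"}
MASCULINE = {"he", "him", "his", "man", "men"}

def gender_counts(text):
    """Return (feminine, masculine) term counts for a whitespace-tokenized text."""
    tokens = text.lower().split()
    fem = sum(t in FEMININE for t in tokens)
    masc = sum(t in MASCULINE for t in tokens)
    return fem, masc

sample = "he said the man and his team fixed it while she watched"
print(gender_counts(sample))  # (1, 3): one feminine term, three masculine
```

[Real audits are far subtler, especially for heavily gendered languages like the Spanish the speaker mentions, where gender is carried by morphology rather than a small set of words.]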
Carol Roach: Thank you very much. Very, very helpful. And you're quite right, it's almost a circular argument with regard to what AI is learning, feeding on what we have vetted and on our biases. So we do need to look at how to address that. Thank you very much. Okay, I'm asking the persons on the floor to really stick to one minute if we're going to get to the end of the line, and I'm going to ask that nobody else join the queue at this time. Thank you. Next is a question from the perspective of global norms.
AUDIENCE: So when you’re asking youth to be critical, I’m not sure we are giving them a good environment for it. We have the UN organizations and everyone working on education and technology, yet we don’t have a universal guideline saying what counts as plagiarism and how much use of these tools is okay. Are we giving youth enough information about that so that they know? No. Even at my German university, they do not really specify where plagiarism begins and how these tools may be used, and yet we are expected to navigate all the other trends on top of that. I think it’s very complicated, and we should really think about how we work on this so that we actually inform young people, not just tell them to be critical. Thank you very much. That’s a very good point,
Carol Roach: and we now have five minutes to wrap up, or four minutes. So let’s make it quick. 30 seconds.
Osei Keja: My name is Osei Keja from Ghana, and I’m also a representative of the African Youth IGF. This year we held our African Youth IGF last November, on the topic of digital governance and emerging technologies: youth participation, amplifying youth voices. One of the recommendations was to establish advisory and participatory platforms to involve youth in policy making and governance at regional and national levels. My question is: what kind of methodologies? Oftentimes the youth are an afterthought in all these conversations. What methodologies or structures should we put in place so that they are inclusive? And a quick second one: who are we benchmarking in terms of these technologies and policies, and who should we be learning from? Thank you very much. Very good point, and I would encourage you
Carol Roach: to join the working group on youth and the IGF. We formed that group to help with exactly the things you mentioned, so that your voice can be heard actively. Thanks.
Dana Cramer: Hello, Dana Cramer, for the record, Youth IGF Canada. I’m curious about how we as students can advocate for AI adoption in our educations. For context, I’m a PhD candidate in Toronto, Canada, and my university now has sweeping regulations on AI usage, which really impacts how youth can become first movers with AI programs and, by being first movers, gain the experience to then take, for example, that seat at the governance table. And the regulations at my university don’t cover just ChatGPT, but also review, synthesis and dissemination programs. So I’m wondering if the panel could speak to strategies for advocating for youth being able to use AI in our educations, so that we can be partners and stakeholders at governance tables too. Thank you. I just want to flip the switch a bit here.
Carol Roach: We’ve been asking, or rather people have been asking, the older generation how to change. You said you’d like to see change, but what are your ideas towards change? How can you assure a lecturer that you can use ChatGPT to produce your paper without them worrying? What are the guardrails you’re suggesting? I’m just throwing the question out.
Asfirna Alduri: The perfect introduction for my question, or actually my comment. My name is Asmin Alduri. I am part of the Responsible Technology Hub, a youth-led non-profit that is working on this question specifically. One thing we do is run spaces that are intergenerational, in the sense that we’re not only giving young people the mic, we let them actually develop AI. So instead of asking them what they want and serving it to them, we ask them to give us a solution, and then we work through the problems they’re actually seeing. That way young people are taken seriously, they feel respected and on the same level, and the discussions we have are way deeper, way more solution-oriented, and way more inclusive of young people, at least for us in Germany. But one aspect I really want to highlight, because I feel it’s missing, and Minister Villeflach actually brought it up in regard to ethical aspects: we do not talk about click workers. AI has to be developed by labeling data, and that data is labeled by young people who are deeply underprivileged, mostly from the Global South, and not paid well enough. So if we talk about including people, and young people, in this, we need to include those who are exploited in developing it. And maybe that’s an open question for later on as well: how can we include these young people? This is the most important part of my work, at least. Thank you.
Carol Roach: Insightful. 30 seconds.
AUDIENCE: Okay, sure. I’m Oliver from the Hong Kong Welfare Foundation. I majored in biology, and I use generative AI for extended learning, for example for accessing undergraduate knowledge and academic essays. What I have found is that generative AI can give misunderstandings about STEM topics. So how can a scientific researcher or a STEM student judge whether a response is correct? And who should actually be responsible for false information given by the AI? I’m sorry to our speakers, I’ve been given the signal to end. I can’t even do a wrap-up.
Carol Roach: So I’m very sorry about that. We cannot take any more speakers. However, we cannot take anybody else again. However, I think we started a very good conversation. Now the point is take it past a conversation. Now we wanna take it to action. I think sometimes for youth is that I’m gonna ask the older generation, this is my problem, how are you gonna fix it? Now we’re gonna flip it around and say, I have a problem with you guys. How are you gonna fix it? So keep that in mind, please. And thank you very much for your participation. Go ahead.
AUDIENCE: Just a second. I think one good way is also: I have a solution, what do you think about it? Instead of “how do you fix my problem,” it’s “this is a solution, what do you think about it?” And this is the idea of the learning hubs: come with a solution and see what the policy makers think about it.
Carol Roach: That’s a good way of putting it, yes. Thank you. Thank you, everybody. Give yourselves a good round of applause. Thank you very much. Thank you, online participants. I like that approach. It’s good to end with it.
Li Junhua
Speech speed
113 words per minute
Speech length
510 words
Speech time
270 seconds
AI can personalize learning experiences and adapt to individual needs
Explanation
AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the learning process by addressing individual strengths and weaknesses.
Major Discussion Point
AI’s Impact on Education
Agreed with
Amal El Fallah Seghrouchni
Phyo Thiri Lwin
Agreed on
AI has potential to personalize and enhance education
AI tools are helping reduce learning disparities in various countries
Explanation
AI is being used to address educational inequalities across different nations. This technology is helping to bridge gaps in access to quality education.
Evidence
Examples include Morocco using AI to reduce learning disparities in rural areas, France using AI to help visually impaired students read, and Brazil using AI-powered natural language processing to improve literacy.
Major Discussion Point
AI’s Impact on Education
Differed with
Ahmad Karim
Differed on
Approach to AI in education between Global North and Global South
There is a digital divide that hampers AI’s potential, with less than a third of the global population connected to the internet
Explanation
The unequal access to internet connectivity globally limits the potential benefits of AI in education. This digital divide creates disparities in who can access and benefit from AI-powered educational tools.
Evidence
The speaker cites that less than a third of the global population is connected to the internet.
Major Discussion Point
Addressing Biases and Inequalities in AI Education
Amal El Fallah Seghrouchni
Speech speed
107 words per minute
Speech length
840 words
Speech time
470 seconds
AI can simulate classroom interactions and provide benefits teachers cannot alone
Explanation
AI technology has the capability to create virtual classroom environments and interactions. This can offer educational experiences that go beyond what a single teacher can provide.
Major Discussion Point
AI’s Impact on Education
Agreed with
Li Junhua
Phyo Thiri Lwin
Agreed on
AI has potential to personalize and enhance education
AI raises concerns about plagiarism and disruption of education systems
Explanation
The introduction of AI in education brings challenges related to academic integrity. There are concerns about how AI might be used to cheat or undermine traditional educational practices.
Major Discussion Point
AI’s Impact on Education
Transparency and fairness are needed in AI systems making educational decisions
Explanation
AI systems involved in educational decision-making processes need to be transparent and fair. This is crucial to ensure that AI-driven decisions in education are ethical and unbiased.
Major Discussion Point
Ethical Considerations and Accountability in AI Education
Agreed with
Ahmad Khan
Umut Pajaro Velasquez
Lily Edinam Botsyoe
Agreed on
Need for ethical considerations and accountability in AI education
Data privacy and cognitive rights need protection when using AI in education
Explanation
The use of AI in education raises concerns about the protection of personal data and cognitive rights. It’s important to establish safeguards to protect students’ privacy and intellectual property.
Major Discussion Point
Ethical Considerations and Accountability in AI Education
Agreed with
Ahmad Khan
Umut Pajaro Velasquez
Lily Edinam Botsyoe
Agreed on
Need for ethical considerations and accountability in AI education
Phyo Thiri Lwin
Speech speed
118 words per minute
Speech length
864 words
Speech time
438 seconds
AI tools can help non-native speakers enhance their language skills
Explanation
AI-powered language tools can assist learners in improving their proficiency in non-native languages. This can be particularly beneficial for students struggling with language barriers in education.
Evidence
The speaker mentions using AI to revise ideas and improve language expression.
Major Discussion Point
AI’s Impact on Education
Agreed with
Li Junhua
Amal El Fallah Seghrouchni
Agreed on
AI has potential to personalize and enhance education
Ahmad Khan
Speech speed
170 words per minute
Speech length
1320 words
Speech time
463 seconds
Companies controlling AI models need to ensure they are trustworthy
Explanation
Organizations developing and managing AI models have a responsibility to ensure their reliability and ethical use. This is crucial for maintaining trust in AI-powered educational tools.
Major Discussion Point
Ethical Considerations and Accountability in AI Education
Agreed with
Amal El Fallah Seghrouchni
Umut Pajaro Velasquez
Lily Edinam Botsyoe
Agreed on
Need for ethical considerations and accountability in AI education
Differed with
Umut Pajaro Velasquez
Differed on
Responsibility for AI accountability in education
Umut Pajaro Velasquez
Speech speed
116 words per minute
Speech length
822 words
Speech time
423 seconds
Multiple stakeholders including developers, educators, policymakers and students share accountability for AI in education
Explanation
The responsibility for ethical and effective use of AI in education is shared among various groups. This includes those who create AI systems, those who implement them in educational settings, those who regulate their use, and those who use them for learning.
Evidence
The speaker mentions specific roles for developers (designing ethical and unbiased systems), educators (implementing AI effectively), policymakers (creating regulations and guidelines), and students (understanding and having a voice in AI use).
Major Discussion Point
Ethical Considerations and Accountability in AI Education
Agreed with
Amal El Fallah Seghrouchni
Ahmad Khan
Lily Edinam Botsyoe
Agreed on
Need for ethical considerations and accountability in AI education
Differed with
Ahmad Khan
Differed on
Responsibility for AI accountability in education
Lily Edinam Botsyoe
Speech speed
200 words per minute
Speech length
527 words
Speech time
157 seconds
Users need to be aware of their own role in protecting privacy when using AI tools
Explanation
Individuals using AI tools have a responsibility to safeguard their personal information. This includes being cautious about what data they input into AI systems and understanding the privacy implications of their actions.
Evidence
The speaker mentions the importance of not uploading sensitive information like social security numbers or passwords when using AI tools.
Major Discussion Point
Ethical Considerations and Accountability in AI Education
Agreed with
Amal El Fallah Seghrouchni
Ahmad Khan
Umut Pajaro Velasquez
Agreed on
Need for ethical considerations and accountability in AI education
Unknown speaker
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
Gender biases in AI stem from societal biases and need to be addressed culturally
Explanation
AI systems often reflect and perpetuate existing gender biases present in society. Addressing these biases requires not just technical solutions, but also cultural changes to promote gender equality.
Major Discussion Point
Addressing Biases and Inequalities in AI Education
Youth should propose solutions to AI challenges rather than just asking older generations to fix problems
Explanation
Young people should take a proactive approach in addressing AI-related issues. Instead of solely relying on older generations to solve problems, youth should develop and present their own solutions.
Evidence
The speaker suggests that youth should come with solutions and ask policymakers what they think about them, rather than asking how to fix problems.
Major Discussion Point
Youth Participation in AI Governance and Development
Ahmad Karim
Speech speed
171 words per minute
Speech length
273 words
Speech time
95 seconds
The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities
Explanation
There is a disparity in how AI is perceived and approached between developed and developing nations. While developed countries often emphasize the potential benefits of AI, developing countries are more concerned with potential risks and protective measures.
Major Discussion Point
Addressing Biases and Inequalities in AI Education
Differed with
Li Junhua
Differed on
Approach to AI in education between Global North and Global South
Asfirna Alduri
Speech speed
172 words per minute
Speech length
267 words
Speech time
92 seconds
Underprivileged workers labeling AI training data, often from the global south, need to be included in discussions
Explanation
The workers who label data for AI training, often from developing countries, are an important but often overlooked part of AI development. Their perspectives and concerns should be included in discussions about AI ethics and governance.
Major Discussion Point
Addressing Biases and Inequalities in AI Education
Intergenerational spaces where youth can develop AI solutions should be created
Explanation
There is a need for collaborative environments where young people can work on AI development alongside older generations. These spaces can foster innovation and ensure that youth perspectives are integrated into AI solutions.
Evidence
The speaker mentions their work with the Responsible Technology Hub, which creates intergenerational spaces for AI development.
Major Discussion Point
Youth Participation in AI Governance and Development
Osei Keja
Speech speed
154 words per minute
Speech length
161 words
Speech time
62 seconds
Youth need to be involved in policy making and governance of AI at regional and national levels
Explanation
Young people should have a voice in shaping AI policies and governance structures. Their participation is crucial for ensuring that AI development and implementation considers the perspectives and needs of younger generations.
Evidence
The speaker mentions a recommendation from the African Youth IGF to establish advisory and participatory platforms for youth involvement in policy making.
Major Discussion Point
Youth Participation in AI Governance and Development
Dana Cramer
Speech speed
175 words per minute
Speech length
143 words
Speech time
48 seconds
Students should advocate for responsible AI adoption in their education to gain experience
Explanation
Students should actively push for the integration of AI in their educational institutions. This advocacy can help them gain practical experience with AI, preparing them for future roles in AI governance and development.
Evidence
The speaker mentions university regulations on AI usage that impact how students can become first movers in AI programs.
Major Discussion Point
Youth Participation in AI Governance and Development
Agreements
Agreement Points
AI has potential to personalize and enhance education
Li Junhua
Amal El Fallah Seghrouchni
Phyo Thiri Lwin
AI can personalize learning experiences and adapt to individual needs
AI can simulate classroom interactions and provide benefits teachers cannot alone
AI tools can help non-native speakers enhance their language skills
Multiple speakers agreed that AI has the potential to improve education by personalizing learning experiences, simulating classroom interactions, and assisting with language skills.
Need for ethical considerations and accountability in AI education
Amal El Fallah Seghrouchni
Ahmad Khan
Umut Pajaro Velasquez
Lily Edinam Botsyoe
Transparency and fairness are needed in AI systems making educational decisions
Data privacy and cognitive rights need protection when using AI in education
Companies controlling AI models need to ensure they are trustworthy
Multiple stakeholders including developers, educators, policymakers and students share accountability for AI in education
Users need to be aware of their own role in protecting privacy when using AI tools
Several speakers emphasized the importance of ethical considerations, transparency, and shared accountability in the development and use of AI in education.
Similar Viewpoints
Both speakers highlight the disparity in AI access and perception between developed and developing nations, emphasizing the need to address the digital divide and consider the unique challenges faced by the global south.
Li Junhua
Ahmad Karim
There is a digital divide that hampers AI’s potential, with less than a third of the global population connected to the internet
The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities
These speakers advocate for increased youth participation in AI governance, development, and implementation, emphasizing the importance of including young people’s perspectives in shaping AI policies and solutions.
Osei Keja
Dana Cramer
Asfirna Alduri
Youth need to be involved in policy making and governance of AI at regional and national levels
Students should advocate for responsible AI adoption in their education to gain experience
Intergenerational spaces where youth can develop AI solutions should be created
Unexpected Consensus
Addressing biases in AI
Unknown speaker
Ahmad Karim
Asfirna Alduri
Gender biases in AI stem from societal biases and need to be addressed culturally
The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities
Underprivileged workers labeling AI training data, often from the global south, need to be included in discussions
There was an unexpected consensus on the need to address various forms of bias in AI, including gender bias, regional disparities, and the inclusion of underprivileged workers. This consensus highlights a growing awareness of the complex social and cultural dimensions of AI development.
Overall Assessment
Summary
The main areas of agreement included the potential of AI to enhance education, the need for ethical considerations and accountability in AI education, the importance of addressing the digital divide and biases in AI, and the necessity of youth involvement in AI governance and development.
Consensus level
There was a moderate level of consensus among the speakers on these key issues. This consensus suggests a growing recognition of both the opportunities and challenges presented by AI in education, as well as the need for inclusive and ethical approaches to AI development and implementation. The implications of this consensus point towards a need for collaborative, multi-stakeholder efforts to harness the benefits of AI in education while addressing potential risks and inequalities.
Differences
Different Viewpoints
Approach to AI in education between Global North and Global South
Ahmad Karim
Li Junhua
The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities
AI tools are helping reduce learning disparities in various countries
While Li Junhua emphasizes the positive impact of AI in reducing learning disparities globally, Ahmad Karim points out a disparity in perception between the Global North and South, with the latter more focused on threats and protection concerns.
Responsibility for AI accountability in education
Ahmad Khan
Umut Pajaro Velasquez
Companies controlling AI models need to ensure they are trustworthy
Multiple stakeholders including developers, educators, policymakers and students share accountability for AI in education
Ahmad Khan emphasizes the responsibility of companies controlling AI models, while Umut Pajaro Velasquez argues for a shared accountability among multiple stakeholders.
Unexpected Differences
Approach to youth involvement in AI development
Asfirna Alduri
Unknown speaker
Intergenerational spaces where youth can develop AI solutions should be created
Youth should propose solutions to AI challenges rather than just asking older generations to fix problems
While both speakers advocate for youth involvement, their approaches differ unexpectedly. Asfirna Alduri suggests creating collaborative intergenerational spaces, while the unknown speaker proposes a more independent approach where youth develop solutions on their own.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to AI in education between Global North and South, responsibility for AI accountability, data privacy protection, and methods of youth involvement in AI development.
Difference level
The level of disagreement among speakers is moderate. While there are differing perspectives on specific issues, there seems to be a general consensus on the importance of AI in education and the need for responsible development and implementation. These differences highlight the complexity of integrating AI into education globally and emphasize the need for collaborative, multi-stakeholder approaches to address challenges and opportunities.
Partial Agreements
Both speakers agree on the importance of data privacy in AI education, but they differ in their approach. Amal El Fallah Seghrouchni emphasizes the need for systemic protection, while Lily Edinam Botsyoe focuses on individual user responsibility.
Amal El Fallah Seghrouchni
Lily Edinam Botsyoe
Data privacy and cognitive rights need protection when using AI in education
Users need to be aware of their own role in protecting privacy when using AI tools
Takeaways
Key Takeaways
AI has significant potential to personalize and enhance education, but also raises ethical concerns around privacy, bias, and accountability
There is a need for global collaboration and inclusive governance to ensure AI benefits education equitably across regions
Youth participation is crucial in shaping AI policies and implementation in education
Addressing biases and the digital divide is essential for AI to truly benefit education globally
Resolutions and Action Items
Establish advisory and participatory platforms to involve youth in AI policy making and governance at regional and national levels
Create intergenerational spaces where youth can develop AI solutions
Improve AI language models to reduce gender biases
Develop ‘learning hubs’ globally for students, policymakers and tech developers to collaborate on AI in education
Unresolved Issues
How to effectively regulate AI use in educational settings without stifling innovation
How to ensure AI enhances rather than replaces critical thinking skills in students
How to address the exploitation of underprivileged workers labeling AI training data
How to validate the accuracy of AI-generated information, especially for STEM topics
Suggested Compromises
Balancing AI assistance in education with preserving human creativity and critical thinking
Finding a middle ground between strict regulations on AI use in education and allowing students to gain experience with AI tools
Developing AI models that serve individual needs while also respecting privacy and data rights
Thought Provoking Comments
My dream is to keep young people as far as possible from computers because I think they spend already a lot of time connected and very close from machines. And I think that AI should be used when it has a real added value.
speaker
Amal El Fallah Seghrouchni
reason
This comment challenges the assumption that more AI and technology in education is always better, introducing an important counterpoint to the discussion.
impact
It shifted the conversation to consider the potential downsides of AI in education and the importance of using it judiciously, rather than just focusing on its benefits.
How can we push it to use AI so that I can tell it what I want, and it serves me and not the company? And this is really what we want to focus on.
speaker
Ahmad Khan
reason
This comment highlights a crucial issue of user agency and control in AI systems, especially in educational contexts.
impact
It sparked further discussion about the ethical implications of AI and the need for user-centric design in AI tools for education.
We need to engage the youth from the beginning. I think this deeply and frankly, not just because I’m the father of two daughters, 18 and 20, but we will need new solutions. We need strong innovation and brave innovation. We need ideas coming out of the box.
speaker
Henri Verdier
reason
This comment emphasizes the importance of youth involvement in shaping AI policies and practices, recognizing their unique perspectives and potential for innovation.
impact
It led to increased focus on youth participation throughout the rest of the discussion, with several subsequent speakers addressing this point.
AI has to be developed by labeling data, and that data is being labeled by young people who are super underprivileged, mostly from the global south, not paid well enough. So if we talk about including people and young people in this aspect, we need to include those who are exploited by developing it.
speaker
Asfirna Alduri
reason
This comment brings attention to an often overlooked aspect of AI development – the labor conditions of those involved in data labeling.
impact
It broadened the scope of the discussion to include ethical considerations in AI development processes, not just in the end product or its use in education.
Overall Assessment
These key comments shaped the discussion by introducing critical perspectives on the ethical implications of AI in education, the importance of user agency, the need for youth involvement in AI policy and development, and the often-overlooked labor issues in AI creation. They helped to deepen the conversation beyond surface-level benefits of AI in education to consider more complex, systemic issues that need to be addressed for responsible AI implementation in educational settings.
Follow-up Questions
How can we push AI to serve individual users rather than companies?
speaker
Jarrel James
explanation
This is important to ensure AI tools enhance individual creativity and perspective rather than diluting them with generic responses.
Do you see Global South members being the next big innovators in AI?
speaker
Jarrel James
explanation
This is crucial for understanding if AI development will continue to be dominated by certain regions or if there will be more diverse representation in the future.
Does an objective truth exist in AI, or is it always manipulated to some extent by its users and the community that utilizes AI technology?
speaker
Audience member
explanation
This question is important for understanding the limitations and potential biases of AI systems in providing information.
How can we guarantee that women and girls have a voice in AI conversations, addressing both safety concerns and opportunities?
speaker
Ahmad Karim
explanation
This is crucial for ensuring gender equity in AI development and implementation.
Who is responsible for fixing AI to work for women and girls?
speaker
Ahmad Karim
explanation
This question is important for addressing gender biases in AI systems and ensuring accountability.
What kind of methodologies or structures should be put in place to ensure youth are included in AI policy-making and governance?
speaker
Osei Keja
explanation
This is important for ensuring meaningful youth participation in shaping AI policies and governance.
Who are we benchmarking in terms of AI technologies and policies?
speaker
Osei Keja
explanation
This question is crucial for understanding best practices and models in AI development and regulation.
How can students advocate for AI adoption in their educations, particularly in universities with strict regulations?
speaker
Dana Cramer
explanation
This is important for enabling students to gain practical experience with AI and become stakeholders in its governance.
How can we include click workers, who are often underprivileged young people from the Global South, in discussions about AI development?
speaker
Asfirna Alduri
explanation
This question addresses the ethical concerns of AI development and the need to include those who are potentially exploited in the process.
How can scientific researchers or STEM students judge if the responses given by AI are correct, especially for complex topics?
speaker
Oliver
explanation
This is crucial for ensuring the reliability and accuracy of AI-generated information in scientific and academic contexts.
Who should be responsible for false information given by AI?
speaker
Oliver
explanation
This question is important for establishing accountability in AI-generated content and misinformation.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online