Pre 8: IGF Youth Track: AI empowering education through dialogue to implementation – Follow-up to the AI Action Summit declaration from youth
12 May 2025 09:00h - 10:15h
Session at a glance
Summary
This discussion focused on AI’s role in education as part of the IGF Youth Track, following up on the AI Action Summit Declaration from Paris. The session brought together youth representatives, government officials, and technology industry leaders to explore how AI can empower education through meaningful dialogue and implementation. The conversation centered on three main recommendations from the Paris declaration: advancing AI-driven education, strengthening multi-stakeholder cooperation, and fostering intergenerational dialogue for sustainable digital development.
Key concerns emerged around ensuring AI serves as a supplement rather than substitute in education, with participants emphasizing the importance of maintaining critical thinking skills while leveraging AI’s benefits. Youth representatives stressed that students will use AI regardless of restrictions, making it crucial for educators to guide proper usage rather than prohibit it entirely. The discussion highlighted the risk of the current digital divide becoming an “AI divide,” particularly affecting developing countries and underserved communities.
Panelists emphasized that successful AI integration in education requires inclusive participation from all stakeholders – students, teachers, policymakers, and technology developers – from the design phase onward. They noted the importance of building trust between educators and students, moving away from adversarial approaches focused on detecting AI-generated content toward collaborative frameworks that enhance learning experiences. The conversation also addressed concerns about misinformation, the need for digital literacy, and the balance between embracing technological advancement while preserving essential human cognitive skills.
The session concluded with agreement that AI represents both opportunities and challenges for education, requiring careful, inclusive governance to ensure it serves the public good globally.
Key points
## Major Discussion Points:
– **Multi-stakeholder cooperation in AI-driven education**: Emphasis on the critical need to include all stakeholders – youth, educators, policymakers, and technology developers – in designing and implementing AI educational tools, with particular focus on meaningful rather than tokenistic youth participation
– **Digital divide and global accessibility**: Concerns about ensuring AI educational tools don’t exacerbate existing inequalities, with specific attention to infrastructure needs, funding mechanisms, and inclusive access for developing countries and underserved communities
– **Teacher-student collaboration vs. opposition**: Discussion of moving away from adversarial approaches (detecting AI-generated work) toward collaborative integration of AI as a learning enhancement tool, emphasizing the importance of training both educators and students
– **Balancing AI assistance with cognitive development**: Debate over the fine line between using AI as a supplement versus substitute, addressing concerns about cognitive outsourcing, critical thinking skills, and the continued importance of independent learning capabilities
– **Misinformation and responsible AI deployment**: Focus on ensuring AI educational tools are safe, transparent, and reliable, with discussions on combating misinformation and maintaining educational integrity while leveraging AI’s benefits
## Overall Purpose:
This session served as a follow-up to the AI Action Summit Declaration, aiming to advance the implementation of AI in education through intergenerational dialogue. The goal was to bridge perspectives between youth representatives, policymakers, and technology leaders to develop practical strategies for responsible AI integration in educational settings globally.
## Overall Tone:
The discussion maintained a constructive and collaborative tone throughout, characterized by cautious optimism. While participants acknowledged significant challenges and risks associated with AI in education, the conversation remained solution-oriented. There was a notable balance between tech industry optimism and more measured perspectives from educators and policymakers, with youth voices providing practical insights from their direct experience with AI tools in learning environments.
Speakers
**Speakers from the provided list:**
– **Moderator (Ramon)** – Remote moderator from the YouthDIG team for the IGF Youth Track session
– **Dorijn Boogaard** – Session moderator
– **Ben Mischeck** – Youth representative from Germany, participating in YouthDIG
– **Laila Lorenzon** – Youth representative from Brazil, joined YouthDIG last year, ISOC ambassador in 2025
– **Anton Aschwanden** – Head of Government Affairs and Public Policy at Google
– **Chengetai Masango** – Representative from the IGF Secretariat
– **Pap Ndiaye** – Ambassador and Permanent Representative of France to the Council of Europe, former French Minister of Education
– **Anja Gengo** – IGF Secretariat colleague (mentioned as being online and available for questions)
– **Audience** – Various audience members who asked questions and participated in discussions
**Additional speakers:**
– **Brahim Balla** – Intern at ACL (Council of Europe) in Strasbourg
– **Jasmine** – Online participant who provided comments via chat
– **June Paris** – Online participant who provided comments via chat
– **Anthony Millennium** – Online participant who provided comments via chat
– **George** – Audience member from Civil Society who asked questions
– **Master’s student** – Student at CU under the CIVICA project, project lead on recognizing AI impact in higher education
Full session report
# IGF Youth Track Discussion: AI’s Role in Education
## Executive Summary
This IGF Youth Track session explored artificial intelligence’s role in education through a multi-stakeholder dialogue featuring youth representatives, government officials, technology industry leaders, and international organisation representatives. The discussion was framed as part of ongoing efforts following international AI summits and declarations, with participants examining both opportunities and challenges in implementing AI educational tools.
The session featured interactive elements including a Mentimeter poll that revealed the audience was “pretty young” and showed mixed views on whether AI will improve education quality. Participants engaged in substantive discussion about balancing AI’s educational benefits with concerns about cognitive development, digital equity, and the need for collaborative rather than prohibitive approaches to AI integration in classrooms.
## Key Participants
**Dorijn Boogaard** served as session moderator, with **Ramon** providing remote moderation support from the YouthDIG team. Youth perspectives were represented by **Ben Mischeck** from Germany and **Laila Lorenzon** from Brazil, who joined YouthDIG last year and is now also an ISOC ambassador in 2025.
Government and policy perspectives came from **Pap Ndiaye**, Ambassador and Permanent Representative of France to the Council of Europe and former French Minister of Education. **Anton Aschwanden**, Head of Government Affairs and Public Policy at Google, represented the technology industry. **Chengetai Masango** and **Anja Gengo** from the IGF Secretariat provided international governance perspectives.
Active audience participation included **Brahim Balla**, an intern at the Council of Europe, a master’s student working on AI impact in higher education, and **George** from civil society, along with online participants.
## Major Discussion Themes
### Multi-Stakeholder Cooperation and Youth Engagement
**Chengetai Masango** opened by noting this is a “year of decision-making and change” with the 20-year WSIS review, emphasizing that the youth track creates opportunities for intergenerational dialogue crucial for sustainable digital governance. **Anja Gengo** reinforced that twenty years of IGF operations have demonstrated multi-stakeholder cooperation as the only effective approach for governing digital technologies.
**Laila Lorenzon** emphasized that meaningful cooperation requires intentional building with adequate resources and capacity building to ensure equal participation. She argued for moving beyond tokenistic consultation to genuine co-creation processes, stating that cooperation “needs to be built intentionally with resources and capacity building.”
**Pap Ndiaye** stressed that multi-stakeholder approaches must include all education stakeholders from the design phase, particularly youth, teachers, and parents, and must encompass all countries, especially those with limited educational access.
### Collaborative Versus Prohibitive Approaches to AI in Education
**Ben Mischeck** argued that initial focus on AI detection in educational settings created mistrust between students and teachers instead of fostering collaboration. He emphasized that “students will use AI anyway” and stressed the importance of teachers actively encouraging proper usage rather than attempting prohibition.
Ben provided a specific example about essay writing, explaining that AI can help students focus on developing arguments rather than getting stuck on mechanical aspects of writing. He argued that teachers need to “take an active role in showing students how to use AI correctly and securely.”
**Laila Lorenzon** supported this perspective, describing Brazil’s “AI in the Classroom” project which emphasizes “sensitive listening” – understanding how students actually use AI rather than imposing restrictions. She noted that including both students and teachers in AI deployment prevents educators from feeling replaced while making classes more engaging for digital-native students.
### Digital Divide and Global Accessibility
**Anton Aschwanden** highlighted that 2.4 billion people remain offline and stressed that the current digital divide must not evolve into an “AI divide.” He mentioned Google’s investments in fiber optic cables and AI hubs as examples of infrastructure development efforts.
**Laila Lorenzon** argued that AI could actually help bridge the digital divide rather than deepen it, provided implementation respects local contexts and languages. She emphasized the need for investment in open, adaptable infrastructure.
**Pap Ndiaye** stressed that international governance frameworks must prioritize including all countries, particularly those with limited educational access, and raised questions about financing mechanisms and understanding the real needs of developing countries.
### Balancing AI Benefits with Cognitive Development
**Pap Ndiaye** argued that students still need spaces to think and write without AI, stating it remains “very important for their cognitive development” to have opportunities for independent thought and expression.
A master’s student raised concerns about “cognitive outsourcing,” arguing that over-reliance on AI threatens critical thinking abilities and academic integrity. This intervention introduced a framework for understanding risks of excessive AI dependence in educational settings.
**Chengetai Masango** offered a balanced perspective, acknowledging that AI is here to stay and must be used with critical thinking skills in a balanced approach combining traditional methods with new technologies.
**Anton Aschwanden** emphasized AI’s potential for personalized learning and language accessibility, mentioning AI tutoring systems that can provide individualized support, while acknowledging the need for responsible development with appropriate safeguards.
### Misinformation and Democratic Values
An audience member raised concerns about teaching youth to deal with AI-influenced social media content and recognize different perspectives to protect democratic values.
**Pap Ndiaye** expressed particular concern about what he termed “powerful adversaries” using AI and social media platforms, noting that students spend more time engaging with AI-powered social media than doing homework, creating competitive challenges for traditional educational institutions.
**Anton Aschwanden** responded that companies have self-interest in maintaining safe platforms and invest significantly in digital literacy programmes globally.
**George** from civil society asked specific questions about preventing AI in education from spreading misinformation and emphasized the need for youth involvement in building trust and accountability mechanisms.
## Key Points of Consensus
Participants agreed on several important principles:
– **Multi-stakeholder cooperation is essential**: All speakers emphasized that effective AI governance in education requires inclusive dialogue involving youth, teachers, policymakers, and technology developers
– **Collaborative approaches over prohibition**: Both youth representatives and several other speakers agreed that prohibiting AI in education is ineffective and counterproductive
– **Need for responsible implementation**: Speakers across stakeholder groups agreed that AI must be implemented with proper safeguards and balanced approaches
– **Preventing AI divides**: Participants agreed that existing digital inequalities must not be replicated in AI access
## Areas of Different Emphasis
While not necessarily disagreements, participants emphasized different priorities:
**Pap Ndiaye** advocated for preserving AI-free spaces in education to ensure proper cognitive development, while **Ben Mischeck** and **Laila Lorenzon** emphasized embracing and properly integrating AI throughout educational processes.
**Anton Aschwanden** presented an optimistic view of AI’s potential benefits, while **Pap Ndiaye** expressed more caution about commercial interests potentially undermining educational goals.
**Brahim Balla** raised concerns that current educational systems may not be ready for AI improvements and require fundamental transformation, shifting focus from the technology itself to institutional readiness.
## Practical Examples and Initiatives
Several concrete examples were shared:
– Brazil’s “AI in the Classroom” project emphasizing “sensitive listening” to understand student AI usage
– Google’s investments in infrastructure including fiber optic cables and AI hubs
– AI tutoring systems providing personalized learning support
– Digital literacy programmes being implemented globally
– The use of AI for language accessibility and translation
## Future Directions
**Dorijn Boogaard** outlined that the IGF youth track will continue with workshops at the African IGF in Tanzania, Asia-Pacific IGF in Nepal, and Latin American IGF, culminating in the IGF 2025 Global Youth Summit during the 20th annual IGF meeting in Lillestrom, Norway.
The session identified needs for:
– Developing recurring spaces for co-creation in international AI strategies
– Building consultation requirements with youth, students, and teachers into national AI education strategies from the beginning
– Continued intergenerational dialogue on AI governance
– Practical implementation strategies that balance innovation with educational integrity
## Conclusion
This session demonstrated the complexity of integrating AI into education while highlighting areas of broad agreement on the need for inclusive, collaborative approaches. While participants brought different perspectives and priorities, they shared common ground on the importance of multi-stakeholder cooperation, responsible implementation, and ensuring that AI serves educational equity rather than exacerbating existing inequalities.
The discussion revealed both significant opportunities and substantial challenges, with the emphasis on intergenerational dialogue and meaningful youth participation offering a promising model for addressing these challenges through inclusive processes. The session’s practical focus on implementation strategies and real-world examples provided concrete foundations for the continued regional workshops and global summit planned for 2025.
Session transcript
Moderator: IGF Youth Track, the AI Empowering Education Through Dialogue to Implementation. It’s a follow-up to the AI Action Summit Declaration of the Youth. I’m Ramon from the YouthDIG team. I will be your remote moderator in this session. And as I already mentioned in the previous session, you can find all information about this session, about the speakers, and about the EuroDIG in general on the EuroDIG wiki. I will share the link again in the chat. We encourage you to raise your hand if you have any questions or if you would like to present something. But if you do ask a question in the Zoom chat, we would like to ask you to write a “Q” in front of your question so that we then can address it to the room. Now, let me share shortly the session rules. If you are entering through a Zoom session, please enter the session with your full name. If you ask a question, again, you can raise your hands using the Zoom function. You then will be unmuted, and the floor will be given to you. When you speak, please switch on the video, state your name, state your affiliation, and do not share any links to the Zoom meeting, not even to your colleagues. Thanks so much. Now I will be handing over to Dorijn, who is our moderator for this session.
Dorijn Boogaard: Thanks. Thank you very much, Ramon. And thank you all for joining this session. Welcome all here today, people here in the room, but also online, of course. I’m very happy to be able to moderate this session today with a wonderful panel sitting next to me. I’ll start on my left. We have Ben Mischeck. He is representing youth from Germany and is joining YouthDIG this year. On my right, we have Laila, Laila Lorenzon. She is representing youth from Brazil, and she joined YouthDIG last year, but is now also an ISOC ambassador in 2025. Next, there is Anton Aschwanden, if I pronounce it correctly. He is the Head of Government Affairs and Public Policy at Google. And next to him, we have Mr. Chengetai Masango from the IGF Secretariat. And next to him, we have Mr. Pap Ndiaye. He’s the Ambassador and Permanent Representative of France to the Council of Europe. Welcome all. I’m very happy that you’re all here and I would like to welcome Mr. Chengetai to start off this session. Go ahead.
Chengetai Masango: The IGF Secretariat, along with the Secretary-General-appointed Multistakeholder Advisory Group and the Leadership Panel, all agree that developing youth capacity is essential. It is vital for the sustainability of digital governance processes and is also a core part of our mandate. The youth track is designed to create meaningful opportunities for the current generation of leaders and experts to engage directly with the next generation. I want to sincerely thank young people from across the globe who continue to participate actively in IGF processes. I also thank senior leaders across sectors and regions for recognizing the importance of listening to youth, engaging with them, and responding to their concerns and ideas. This year’s youth track is particularly significant, as it coincides with the 20-year review of the World Summit on the Information Society (WSIS). I urge all of you, especially young participants, to speak up and contribute throughout this process, as we are in a year of decision-making and change, which also brings opportunity. The thematic focus and structure of the youth track was developed through a bottom-up approach. We work in close collaboration with youth IGFs and international youth initiatives, like the Internet Society’s Ambassadors Program, the Youth IGF Movement, and the Youth Coalition on Internet Governance. The host country also plays a key role in shaping the program, and I thank the Government of Norway for its support and contributions as the 2025 IGF host country. Throughout the year, the youth track features intergenerational dialogues amongst youth and senior leadership. Its implementation is supported by various partners, including the AI Action Summit in Paris, as well as the regional IGFs. Today’s session at EuroDIG is the first of four workshops following the outputs from last year’s IGF in Riyadh, and the youth declaration presented at the AI Action Summit in February.
With others coming up at the African IGF in Tanzania, the Asia-Pacific IGF in Nepal, and the Latin American IGF. The IGF 2025 Global Youth Summit will take place during the 20th annual IGF meeting in Lillestrom, Norway. Across these workshops and summits, we will focus on pressing issues such as AI for education and social media regulation. Youth are also encouraged to engage in other components of the IGF program, including the intersessional work. Everyone here is welcome to be part of the youth track. My colleague, Anja, from the Secretariat, is online, as I mentioned, and available to answer any questions you may have. Her contact details are also shared in the Zoom chat. Thank you, and I look forward to the discussions ahead.
Dorijn Boogaard: Thank you very much. And as this session is a follow-up to the AI Action Summit Declaration from the youth session in Paris, we actually have Laila here, who also joined the session in Paris. So I’m very curious, how do you look back at that session, and what are the critical points that you bring with you from that session?
Laila Lorenzon: Yes. Thank you for the question. And yes, I was there in February, and it was a very interesting discussion. We had a roundtable with, on one side, youth representatives from Asia, Africa, Europe, and Latin America, and on the other side, senior representatives who work in government and the private sector. And it was very nice to see how the perspective of youth was actually heard. Some of the key points that I think are interesting to bring here, and which are also in the declaration that was shared at the event, concern the power of AI to facilitate and improve digital education, especially in constrained and crisis settings where physical education is not that accessible. So a very interesting point is the power of AI to facilitate offline learning, and also remote learning, especially for communities that are hard to reach in terms of internet infrastructure, and also in diverse crisis settings and conflict zones. So it’s really important to understand that AI can be used as a facilitator to bridge the digital divide, and not as something that can even deepen it. That was a point that was raised. We also had a translator from ISOC France, and he raised a very interesting point as well on the use of generative AI and voice assistants for improved learning for persons with disabilities, which is also something very important to look at and consider when we are drafting or idealizing AI applications for education.
I think that the main takeaway that we all agreed on and discussed is the importance of equally involving students and teachers in the process of deploying and co-designing any AI application, to really try to break this barrier where some teachers may feel that AI is there to replace them or to make the process of learning harder. It’s very important that they feel included in the conversation so they can also understand that AI is a tool that can help facilitate, that can make classes more interesting and engaging, and that can help them connect with their students. So I think that was one of the key aspects and main takeaways: the importance of uniting not only youth and students, which is very important, but also teachers and school representatives in this important process. Thank you.
Dorijn Boogaard: Thank you very much. I have read the declaration and there were three main points that I just wanted to repeat here today. So the first recommendation was to advance AI-driven education with a lot of sub-recommendations, of course. The second was to strengthen multi-stakeholder cooperation on AI and education. And the third one was to foster intergenerational dialogue for sustainable digital development. Coming from that, I would like to start with the first question from Mr. Pap Ndiaye. So I would like to ask you, from your perspective, what infrastructure investments and policy frameworks are necessary to ensure AI-driven education is accessible, inclusive, resilient, and particularly in underserved communities?
Pap Ndiaye: Thank you. Good morning. Thank you for your question. I’m very happy to be in this most interesting workshop. As France’s permanent representative to the Council of Europe, I welcome the organization of such an event, which constitutes an essential platform for European dialogue and international internet governance. Such a gathering is all the more essential given the current international situation. Artificial intelligence, a cross-border resource which is used by the greatest number of people, but whose production is held by a handful of key players, requires an unprecedented international governance effort to ensure that it serves the public interest. In terms of education, the international governance of AI must address several major issues. How can AI be used to benefit education and access to education? How can the risks of AI for education be prevented? How can young people be trained in AI? So the first condition for putting AI at the service of education and preventing its harmful effects among the youth is an international, multi-stakeholder, coherent and inclusive coordination. The Paris AI Action Summit, which focused in particular on identifying the needs and fields of action for international AI governance, highlighted several priorities to be implemented. First, the need for a multi-stakeholder approach. Understanding the educational challenges of AI cannot be achieved without the representation of all education stakeholders. This includes governments, companies and international organizations, of course, but also and above all teachers, parent associations, and youth representatives. International events such as today’s EuroDIG or the Internet Governance Forum are key arenas for an inclusive and diversified dialogue between governments, business and civil society. The representation of young people in the design of educational AI governance is an essential criterion for measuring the effectiveness of initiatives.
I would like to salute the IGF Youth Initiatives for their growing commitment. Second, the need to include all countries, especially those where access to education remains limited. In this respect, the role of the United Nations is central in taking into account the voices of countries still isolated or partly isolated from AI, and for whom digital transformation is a major driver of development. Already in 2021, UNESCO published its Recommendation on the Ethics of AI, placing equitable access to AI in education and the development of digital literacy at the heart of its agenda, as well as the need to protect students’ data. Adopted in 2024, the Global Digital Compact aims to develop innovative voluntary financing options for artificial intelligence capacity building. Other initiatives, such as the Global Partnership on AI at the OECD, aim to promote access to educational technologies in developing countries, but are still largely limited to developed economies. The inclusion of developing countries in international governance initiatives is a prerequisite for the spread of AI as an educational tool. The next AI Impact Summit, to be held in India in February of 2026, will devote a significant part of the international discussions to developing countries. It will be a unique occasion to look closer at the educational outcomes of AI, digital access and literacy, and the opportunities for emerging markets to build their own public interest-oriented AI. Third, the affirmation of major AI principles. This is the ambition of the Council of Europe Framework Convention on AI, the first legally binding international instrument in this field. It aims to ensure that activities carried out as part of the life cycle of artificial intelligence systems are fully compatible
with human rights, democracy, and the rule of law, while being conducive to technological progress and innovation. Last, edtech is a highly promising emerging sector serving to improve access to education and digital tools, as well as adapting AI to local cultural particularities. Investments in this sector are increasingly possible thanks to the emergence of mechanisms dedicated to financing innovative startups from emerging countries. The announcement of the Current AI foundation at the Paris Summit, with an initial fund of 420 million euros, serves precisely this purpose and aims to finance any project likely to serve the general interest, thanks to the support of 15 countries and a funding target of 2.5 billion euros by 2030. Discussions on the financing of AI for education in developing countries must take into account the real needs of these countries and be conducted in close cooperation with them. Structures such as the G20 are effective arenas for an open dialogue and the understanding of developing countries’ needs. This year, the South African G20 presidency is placing particular emphasis on reducing inequalities in access to the digital economy by supporting a partnership-based approach to development aid that fosters the emergence of sovereign digital ecosystems, particularly in Africa. Mechanisms such as the Innovation and Development Fund, whose first results are expected in the next few months, are interesting relays for the development of educational AI, but are currently threatened by the withdrawal of US funds from the multilateral scene and from programs such as USAID. The possibility of substituting American funds needs to be discussed by our European partners as part of a more global reflection on how the various initiatives fit together, in light of the emergence of new funding mechanisms dedicated to AI, such as Current AI. Thank you.
Dorijn Boogaard: Thank you very much and some very interesting and important points you are making there. I think we will get back to that in the Q&A. Moving on, you already mentioned the importance of the multi-stakeholder model and in your previous reflection on the session in Paris, you also mentioned the importance of including teachers. So this is the question, what strategies can be employed to strengthen multi-stakeholder cooperation in AI-driven education, ensuring active participation from youth, educators, policymakers and technology developers? Laila.
Laila Lorenzon: Thank you for the question and thank you again for the opportunity to be here. So I think this question is very important because we often talk a lot about the multi-stakeholder approach. We hear this term a lot in internet governance-related events, but I think it’s important to understand that cooperation doesn’t happen by default; it’s something that needs to be built intentionally, with resources, attention and care. And I think it’s very important to understand that youth are currently the most affected by AI and the internet in general, but we need to ask ourselves: who gets to shape these tools, and how are the people who use them most being impacted? And especially with regards to education, how can we ensure that AI enhances rather than replaces the human connection that is so important in the learning process and in the social development of students as well? So to answer more of your question, I think that to ensure a truly multi-stakeholder process, it’s important to make sure that everyone has a seat at the table, but in addition to that, that they also have the tools and the knowledge to contribute to the topic. So in that sense, I think that capacity building is key to make sure that everyone starts off from the same place and has all the technical knowledge needed to actively engage and shape the discussions on AI in education. And I’d like to bring here an example from Brazil, my home country, of a 2022 project called AI in the Classroom, developed by Data Privacy Brazil, a national NGO working with digital rights. The goal of the project was precisely to engage students, teachers, and school leaders to make shared decisions on how to use AI in the classroom.
And something that really stood out for me, in addition to the technical robustness of the research, was something the researchers called sensitive listening, which is basically creating a safe space where participants can feel genuinely heard and not judged, and are able to express concerns, even in non-verbal ways. I thought this idea was very interesting because what ensures meaningful participation is trust that everyone is being equally heard. And during last year's YouthDIG, in which I participated, we talked a lot about how to recognize when youth participation is merely symbolic, when it's used as decoration or a form of tokenism, and how we can move beyond that towards meaningful and genuine engagement. And I think that really connects to the multi-stakeholder approach: ensuring that youth is not only heard, and meaningfully heard, but also equipped with knowledge on complex topics such as AI, so they are able to contribute to shaping the discussion. That can happen through digital literacy engagements before opening the discussion on AI in education, or through capacity building and even critical thinking activities, to ensure that participation is not symbolic but transformative. Something that also happens too often is that youth is only consulted at the end of a process, so they only have a say when most things are already decided. I think that in order to change that, it would be very useful to have structured and recurring spaces for co-creation of national AI strategies. That can happen through youth assemblies or councils, which luckily we have been seeing take place more and more all over the world, but it can also be built into national AI education strategies: every development or design of a tool needs to include consultations with youth, students, and teachers from the beginning, so they are all able to participate as equal partners.
And just to conclude, I'd like to highlight as well that I believe investment in open, adaptable infrastructure that respects local context, language, and traditions is something that cannot be overlooked. It's important that all of these frameworks are open to accountability, and also thought through and designed with the local context and cultural diversity in mind, as well as the diverse needs of youth, including youth with disabilities, and all the specific languages and contexts that need to be taken into account. Because otherwise, I think we risk deepening the very inequalities that we want to solve if we don't ensure that this approach is meaningful and includes youth and school representatives from the beginning. Thank you so much.
Dorijn Boogaard: Thank you very much, Laila. Building on that, multi-stakeholder participation is of course very important, but you also mentioned something about the design of the technologies. So, moving on from that, we are going to Mr. Anton Aschwanden from Google. How can we ensure that AI tools used in education and services are safe, secure, and accessible, especially for students in developing countries?
Anton Aschwanden: Thank you so much. It's working. Good. Perfect. So, yeah, thank you for the invitation. I'm Anton, working with Google, running our public policy in Switzerland and Austria. I have participated in many national IGFs and am now happy to be at the European one, working with the international organizations based here in Europe. So, if we're asking the question about responsibility and accessibility, especially in developing countries, then before talking about AI we need to acknowledge that we're already facing a digital divide. According to the latest ITU statistics, 2.4 billion people are still offline, and I think the real challenge now is making sure that this digital divide does not become an AI divide; we cannot afford that. And in order to tackle this challenge, it's really key that everyone comes together: the private sector, the public sector, obviously civil society, technical experts, and in this case educators, teachers, and students. I'm really happy to be here because, as was said at the very beginning, this is a crucial year for multi-stakeholder governance. We're just six weeks away, if I calculate correctly, from the global IGF in Norway, and we're going to have WSIS+20 in Geneva at the beginning of July. So a really big thank you to all of you for being engaged and showing up, and I can assure you that Google will do so as well; hopefully we'll see some of you in Oslo and in Geneva. So what are we doing as Google to help tackle this challenge? I think it's three things: investing in digital infrastructure, investing in people, and, the third one, using AI smartly to tackle global challenges. Let me quickly go through what this means in the field of education.
So when we're talking about digital infrastructure first, and I mentioned the 2.4 billion people still offline, I think it's key that investments do not only happen in the Global North but that we're really thinking about investments globally. My employer is doing so by investing across the globe: as illustrations, fiber-optic cables connecting places that so far have not been connected. I'm thinking of fiber cables between Latin America and Africa, from Africa directly to Asia-Pacific without going via Europe, and to really remote parts of the Pacific as well. And when we're thinking about infrastructure in the education field, that also means investing in places where people can meet. I'm thinking of the local investments we made in specific AI hubs, in startup campuses, and also in training hubs in all the different countries where we have such activities. Thinking about infrastructure also means thinking about how you open up technology, so that it's not only closed models; think of what is happening at Google with Android, or the open Gemma AI models that are available to developers and researchers. And the second pillar is really investing in people, and I think we're doubling down on our efforts there. I'm not going to do a publicity spot; you can use your favorite search engine and type Google AI skilling, but let me just tell you, this is really one of our biggest priorities right now. There are the AI skilling certificates; we have a whole menu, and if you're interested, I'm happy to share more. I'm doing it myself. I somehow still consider myself young, but I'm already a bit older, and I force myself to do those AI Essentials classes; you can sign up on Coursera, Prompting Essentials, and I think this is really key. And what we're doing as well is pushing new ideas through Google.org, which is our philanthropic arm, to really create this AI opportunity for everyone.
And key is that this is not only happening in a few countries; these are global programs. The third pillar is really how to use AI smartly to tackle global challenges. And if we're really honest, we're probably going to dramatically fail on the SDGs. We're so far behind, at 17 or 18 percent depending on the statistics. So the clock is really ticking, and the question is: can AI help accelerate progress towards those goals? I'm personally optimistic in many fields. We're talking about SDG 4, quality education, now, but if you look at better health, for instance, and at breakthroughs in science like AlphaFold and drug discovery, this makes me really optimistic that hopefully new drugs will be discovered. And the same applies to SDG 4, quality education. If you look at how AI-powered tools can help transmit education and knowledge, personalize learning, improve the whole software, and open access to new languages, including, as was said, the less widely spoken languages, this makes me really optimistic. Again, I'm not going to do the Google publicity spot; I have documents with me, and there are some great illustrations. I'm thinking of Read Along, Quill, the 1,000 Languages Initiative: some great illustrations of where we can really use AI for good and expand educational access in native languages, for example. And I think what is especially key is that the technology can provide support in regions where the ratio between teachers and pupils is perhaps not as good as in some more developed countries. So yeah, I'll leave it here as a starter, and then hopefully we will start the discussion. Thank you so much.
Dorijn Boogaard: Yes, definitely, thank you so much, and we will get back to that, of course. So on to the final question. We’ve heard it quite a few times that it’s important to include young people in this discussion. So Ben, how can intergenerational dialogue and digital commons be leveraged to foster sustainable AI governance and lifelong learning in an evolving technological landscape?
Ben Mischeck: Okay, maybe just to mention this first: because we talked so much about youth participation, I just think what a great chance it is for Laila and me to speak here on behalf of young people, to actually try to bring our points across and integrate our perspectives, because we've heard it's very important, right? And the question you just asked is quite a big question, so maybe let me focus on intergenerational dialogue first and bring a bit more of a practical perspective. I experienced the introduction of AI into students' everyday lives over the past years, and from a very practical point of view, what I experienced, especially in the beginning, and I do see change now, which is very welcome, was that the focus was on what is written by AI and what is written by a student. So it was really about comparing and finding out: is this AI-generated, or did the student do the work himself or herself? And for me, this approach to AI in education is problematic from various points of view. First of all, we know that detecting AI-generated text is quite difficult; it might be biased, and there are technical issues here. But what is even more important for me is that it creates mistrust between students and teachers. It also shows that students and teachers were not working or talking in a collaborative way, but were kind of working against each other. The reason I want to raise this point is that, as we mentioned the multi-stakeholder approach many times, I feel this is a very good example of what we currently lack, of where there is a gap between the generational views of how AI is impacting education. And I really want to emphasize that we should work together on ways to integrate AI.
So it's not about being substituted by AI, because otherwise students will find ways to use AI for what is supposed to be cheating, and teachers will try to work against them. And that's not what we imagine our education to be like. We want our teachers and students to work together. And I think it's really important to find ways to integrate AI to leverage the learning experience, for example by using AI tutoring systems, as already mentioned. I think they have many big advantages. Of course, they do bring some risks that need to be mitigated. But from various points of view, I think AI tutoring systems, also as a digital common, really can leverage lifelong learning, for several reasons. First of all, I think they reduce barriers, assuming we have the infrastructure to actually access those tools. I would say they decrease financial barriers and geographical barriers as well, and for adults, who are also supposed to be part of the lifelong learning journey, they can decrease mental burdens or mental barriers too. I feel that the older someone gets, the higher the barrier to actually learn and interact with something very new. And if you have a safe space online where you can interact with an AI system and get in touch with a new topic, it really helps to explore new topics and develop new skills, and I think that's what the lifelong learning journey really is about. And with the digital commons being developed right now, we see many exciting ways to think of new ways of learning and to develop new skills. Thanks.
Dorijn Boogaard: Thank you very much. Yeah, it's working. So this was the panel, but now we're going into the Q&A, so I'm hoping you have a lot of questions in mind. But before we do that, I wanted to welcome you to the Mentimeter, which I'm going to share right now. This room seems pretty young, but I would like to get a view of what kind of ages we have in the room. So if you could join the Mentimeter through the code 42171593 and submit your age. Should we do it as well? Yeah. And the panel can join, of course; it's an intergenerational dialogue. It's 4217-1593. It seems like we have a lot of young people in the room. Ah, there they come. Yeah. Okay, we have quite a lot of people joining the Mentimeter, so I'm going to put up the first statement. I'm curious what you think, and we will see how the different age groups think, maybe differently or the same, about this statement. So, do you agree or disagree with the statement that AI will improve the quality of education? It's quite similar. Yeah. We see young people that disagree. Yeah. Do we have a young person in the room or online who voted for disagree? Yes, someone all the way in the back. Would you like to share why you voted for disagree?
Audience: Can you hear me? Okay. Hi, I'm Brahim Balla, an intern at ACL here in Strasbourg. I don't quite disagree; mine is not a complete opinion, but considering the current situation, in many countries and in many situations we have an educational system that is still not ready to get along with the improvements that AI might be able to bring. So if we aren't able to face this challenge, I think AI will bring more harm than improvement to the educational system. It's not AI itself that will bring the improvement, but our strength and our capability to transform it into something useful for the educational system itself. Thank you.
Dorijn Boogaard: Thank you very much. Is there someone on the panel who would like to respond to this comment?
Anton Aschwanden: Yeah, go ahead. Happy to do so, because my question also had this component about the responsible use of AI, and I think it's absolutely right to be critical to a certain extent. It probably doesn't surprise you that, working at Google, I'm a tech optimist, so I see the benefits, but of course we need to be aware of all the complexities and risks, and then really think about how to develop such a technology across the whole life cycle: design, testing, deployment, having the safeguards involved. Obviously, this is a big, big topic for us at Google as well. I'm based in one of our largest engineering offices, the Google Zurich office, and our security teams, for example, work on red-teaming efforts: how you can trick the systems, and how you make sure that this is not happening. I think this is obviously a key component, that we're having those debates, along with the feedback mechanisms and all the testing, monitoring, and safeguards, which I think the industry is well aware of. But then again, it really requires a broad dialogue, and I'm happy to have this one here over the next two days, and also at the occasions mentioned later this year.
Dorijn Boogaard: Thank you very much. Do we have someone in the room who voted for agree? Yes, someone from the other generation, maybe, all the way in the back. Go ahead, if you would like to explain why you voted for agree.
Audience: So, well, of course, I'm 30 plus. I voted for agree, but actually, I'm not sure. I see that the potential to improve is there, but I'm not sure if it will work out, and I'm also not sure what kind of education is still needed, and whether we will make the transformation in education. We don't need to create humans that can do things that will be replaced by AI. We need to know what we have to teach people, what is required for them to be able to do a meaningful job in the future. And I see that we might be teaching people skills that they don't need anymore, and that we might use AI in a way that doesn't make them better at learning. Of course, the technology could do a lot of useful things, but we have to learn how to use it in a meaningful way.
Dorijn Boogaard: Very clear. Thank you very much. Maybe Ben can also reflect on that a little bit, because it also touches upon the relationship between teachers and students. So yeah, go ahead.
Ben Mischeck: Happy to do so. And I have to say I really like the question, and I also really like that it's coming from an older generation, because I often hear the view that AI is affecting the ways we work and the way we learn, that we're just losing skills: AI is doing stuff for us, and we're losing the skill to actually write an essay or to reflect on literature. And I have to disagree with that statement. I like the question about what skills we need to learn and how skills are going to develop in the future, and I think it's a very complex question to answer. But what is important when trying to answer it, I think, is to think about the positive ways AI can impact a skill. It's also worth mentioning that a skill is not just one single thing to do. If I'm writing an essay, I have to do a lot of little steps, and I believe some of these steps can be automated with AI. I think that's beneficial, because humans are then able to focus on other aspects: if I'm writing an essay, I might have more time to reflect on my arguments and formulate stronger arguments with more evidence, for example. That's an example of how I think skills will develop in the future. But here, again, intergenerational dialogue is very important, because I think the older generation has a different sense of skills, or of the value of skills, than we do. The young generation might be exposed to the risk of losing skills because we think they're not valuable anymore, when they actually do have some value, maybe in a cultural sense, maybe in a social sense. So I really think that young people can learn from the older generation in this regard, and the older generations need to be open to ways of integrating AI into how we work and learn.
Dorijn Boogaard: Yes, of course, Laila.
Laila Lorenzon: Yes, I just wanted to add that I agree with all the concerns; given the current state of things, it is kind of hard to believe that AI can actually improve education when we see so much about the misuse of AI. But I think it's very important to consider that we are in a whole new generation in terms of socialization. We, and people younger than us, are born into a world that is all connected; very early in their lives they have screens and access to social media and all of that, and that has a huge impact on the social development side. The usual way classes are done, I don't think it answers the needs of the students of this digital age anymore, because they are exposed to screens and content and short videos, and that changes the way they pay attention to things. I think using AI to make gamified sessions, or more interactive and engaging learning, has huge potential to make students more interested and willing to learn, and to see AI as a tool to enhance their creativity and give them new ideas, not only to do something for them. So I think that relates as well to what Ben said about not trying to prohibit AI in classrooms. That's never the way; if you prohibit it, it only makes people use it more. Instead of that, I think we should show how it can be a tool, not to do things for you, but to make you think smarter, be more creative, or tackle a challenge differently. And I think we should try to look more at these aspects of how we can make classes and education more interesting, because it's a whole new world with AI, and also virtual reality and augmented reality, and it can really be used to make people interested in learning again. I think that's something important to explore. Thank you.
Dorijn Boogaard: Thank you very much. And we also have a question online, but after that I will come to you in the room. So first I will give the word to our online moderator.
Moderator: Thank you very much. Yeah, we have a comment from Jasmine. I will just read it out loud and leave it to the debate: the key debate is always on how people leverage AI, meaning what positive and negative impacts are potentially created by the several ways of usage.
Dorijn Boogaard: That's a good comment, I think, and we will bring it into the conversation, or is anyone wanting to respond to that now? Yes, we can now move to the question in the room.
Audience: I'm looking at it from a little bit of a different angle, and it connects to what you were saying about social media; I already had that in mind. I think that the way we all learn today, in some unofficial way, is through what we see on social media, what comes by in videos, et cetera. And that is also presented to us by artificial intelligence, by all sorts of black boxes at Google and other places that we don't know how they work. We simply don't know. We do know how it influences people. And this is where education comes back in, in my opinion: how do we make sure that youths, and for me it's too late, I'll never be in a school class again, I think, but how do we teach youths how to deal with what they see on social media, et cetera, so that they are taught that there is perhaps another view? And I'm very pessimistic about what's happening today, how this undermines our democracy and how it undermines youth. Coincidentally, I read a whole article about this yesterday, and I won't read it out loud, but it's about how the more you hear about how bad things are in your country, the better that country actually is on equality standards, freedom of speech, et cetera. Whereas if you go to Russia or another such country, you can end up in a very special place where they never hear from you again. So in other words, we need to teach how valuable differences of opinion are, and also how important freedom of speech is. And there is a role for education here nowadays, in my opinion. So I'm wondering, what do you think about this, and how could we go about organizing it? Because it's about teaching our teachers as well. Thanks.
Dorijn Boogaard: Thank you very much. I’m looking at Mr. Pap Ndiaye. Could you please reflect on this? You also mentioned the importance of digital literacy, but are there other policy ways to tackle this problem?
Pap Ndiaye: Yeah, thank you. Thank you for your point. I mean, we need to be realistic: we have very powerful adversaries in many ways, namely the conjunction of a number of social media platforms and AI all put together. I was the French Minister of Education, and I'm very aware of the importance social media have in the everyday lives of millions of young people. They spend more time checking TikTok than doing their homework, to put it briefly. And TikTok and others will become more and more efficient by using AI and spending money on AI, which obviously makes them more and more influential and powerful. This is the reality that we face nowadays. So if I get back to the question of whether AI will improve the quality of education: it all depends on us. It could very well have the most detrimental and negative effects on education; AI could very well destroy education as we understand it. That's a possibility. It could also improve the quality of education. It all depends on how we organize ourselves, on the collective will to use AI in an effective way. But at this point, in 2025, we have to acknowledge that all those who are attached to the common good and to this collaborative work between all stakeholders, including the Global South, this whole community, so to speak, is lagging behind the rapid pace of development at a number of companies that just do not care at all about what we mean when we speak of education. Sorry to be a little gloomy here.
Dorijn Boogaard: Thank you very much.
Anton Aschwanden: Perhaps just a quick reaction. We do care, that's my reply. We are really engaged in making our platforms safe and responsible, not only for our users, by the way, but also because we make money with advertisements, and our ads customers don't want their products promoted in an environment where scams or violence are present. So it's also in our self-interest to do so. And then, regarding digital literacy, a topic close to my heart, being a vocational trainer myself, working with young people: we do that in our own work, with our own products, but we also support many initiatives across the world. I only know the Swiss ones in all their details, but we have now worked for eight years on digital literacy trainings with Swiss schools; not Google itself, but Google.org financing the respective work of the largest Swiss youth foundation, Pro Juventute. And perhaps just a quick remark: we talked about the misuse of AI, and I think it's fair to be aware of it, to think about it, and to be critical of it. But personally, I think one should be worried not only about the potential misuse of AI, but also about the missed use of AI: what will happen if you don't use it? And I think this is really a thought that I would love to leave with you, that AI can be a tool for many regions and communities to make a jump. Looking at some countries on the African continent, they're more advanced in mobile payment transactions than we are here in France or in Switzerland, for instance. So it can also be a tool to really make a jump, an opportunity for progress. This is just a thought I would love to leave you with: it's not only the misuse, but the missed use that I'm worried about.
Dorijn Boogaard: Thank you. Thank you very much. I would like to open the floor now for questions here in the room, but also feel free to raise a hand online, of course. Are there any questions in the room? Yes.
Audience: Can you hear me? Yeah. So I'm currently a master's student at CU, and under the CIVICA project I'm the project lead of Recognizing the Impact of AI in Higher Education, a collaborative project with LSE. The core concern shared by the educators is that over-reliance on AI is leading to cognitive outsourcing and threatening the critical thinking abilities of students. There is a very thin line between using AI as a substitute and as a supplement, so I want to know from the panel: how can we help students understand that difference, so that it leads to the ethical use of AI and doesn't affect the academic integrity of their work? Thank you.
Dorijn Boogaard: Thank you very much. Is there anyone who would like to volunteer to answer this question? Yeah, go ahead.
Ben Mischeck: Yeah, happy to elaborate, but I do have to say I'm not involved in research on education, so I may miss some points regarding the best way to teach. But what I think is really important here, and I said this before, is that from the teachers' side, you actively encourage the use of AI, because currently students will use AI anyway. And I think that's the problem: it's very unmanaged; they just use it in whatever way they think is best. And humans in general, not just students, are very comfortable and try to find the easiest solution to a problem. I think teachers in the future need to take on the role of showing students how to use AI in a correct and also safe and secure way. So for example, as I said with essay writing, I think teachers need to encourage processes through which students learn how to write an essay in collaboration with an AI agent, actively and critically interacting with the agent. And we do know, for example, that AI can hallucinate and provide false information, so it's very important for a student to be critical when using AI. But I think you won't be critical without being motivated to do so, and that's where teachers come in: their role is to educate on how to actively use AI in a secure way, so that students are not losing their mental capabilities and AI is not a threat to actual skills but an augmentation of skills. It might be that some parts are going to be missed out, but then we have to ask ourselves: is this part really important to the task itself? Or can we say, okay, AI can take that part, as long as we are able to think critically about the result AI is producing?
Dorijn Boogaard: Thank you.
Laila Lorenzon: Yes, I just want to second everything that Ben said. I think it's a matter of really encouraging the use of AI, because, as you said, students are going to use it anyway. But it's a new challenge arising in education, just like when we had the introduction of the internet itself in the early 2000s. It was very new, and it was a whole process for teachers themselves to learn how to include the internet in classroom activities. Right now with AI it's a similar challenge: teachers need to research and understand it, and I'm sure there are a lot of frameworks and reports on the best usage and the best prompts to make sure that AI enhances the educational process instead of replacing it. Also, some common practices, like checking the sources whenever an AI tool gives you information, and verifying these things, can seem very basic, but sometimes to some students they are not, because they don't know how an AI works: what is its process, where does it retrieve information from? So I think it's also partly on the teachers themselves to try to engage the students, showing how they would approach a task using AI themselves, so they can pass that on to the students, who can then make a more critical usage of AI, not only asking the AI to do things, because it's also something you can ask: how can I think better about this? How can I be more creative? And I think it's really a matter of going back to resources. There are a bunch of resources, as you were mentioning, Google resources as well, that we can tap into to understand how to make AI a medium for connecting teachers and students better, because I often feel there's this feeling of not trusting AI, of not wanting students to use it at all, and of teachers not using it themselves either.
But I think it's like when the internet started: now it's in our daily lives, and it's something we need to get used to, because it can actually make our lives easier if we understand well how to use it. Thank you.
Dorijn Boogaard: Thank you very much. Yes, go ahead.
Pap Ndiaye: Yeah, thank you. I may sound a little conservative, but I still believe that it's very useful for students to write essays on their own, without AI, without the internet, just to focus on the writing and their own ideas. I'm not suggesting that we should get rid of AI; that would be unrealistic and certainly not a good idea. I agree. But we still need to find spaces in schools, and possibly in homework, where students can think without the machines, without AI, on their own. It is very important for their cognitive development.
Dorijn Boogaard: Thank you very much. Thinking without machines is a very important skill, I think. And we also have two comments online, so could you please reflect on them?
Moderator: Of course, thanks. June Paris wrote: older people do get AI, but we see the bad things about it because of experience in life. Those creating AI really need basic understanding prior to development. Do you want to elaborate on this, or should I just continue with the next? All right, and we have a second one from Anthony Millennium: AI has the potential to significantly enhance the quality of education, but this can only be achieved if its development and deployment are equitable, accessible, and inclusive, particularly for young people in the global south. I’ll leave it to you.
Dorijn Boogaard: Yeah, I think some crucial points there in the chat. Do you want to reflect on that, one of you? Or shall we go to the next question in the room? I think I saw a hand. Yes, please go ahead.
Audience: Okay, I hope you can hear me well. I am George from Civil Society. I would like to know, from a policy and government standpoint, how can public-private partnerships be structured to ensure that AI tools developed for education are not only innovative, but also uphold transparency, inclusivity and respect for educational sovereignty in diverse regions? And I have another question related to media, which we talked about: how can we prevent AI in education from spreading misinformation, disinformation or malinformation? And what role can youth play in building trust and accountability in digital learning?
Dorijn Boogaard: Thank you very much. Maybe, Chengetai, could you respond to the first question on the… Anja, are you here online? Maybe you could reflect on the first question on the importance of public-private partnership and the policy perspective. How can you make it actually effective? So, if you are in the room, yes, there you are.
Anja Gengo: Yes, I am. Thank you, I hope you can hear me. First of all, thank you so much for such an interesting and rich discussion. I think it went beyond the aspects that the youth group, which worked for months on preparing this session, was envisioning, and that really speaks to how important it is to discuss this topic in a multi-stakeholder, intergenerational setup. What I can say is that we operate in a multi-stakeholder environment. The IGF has been doing that for the past 20 years, and you can see its evolution: new stakeholders are being attracted to the model and are meaningfully engaging, simply because awareness is growing in the world that digital technologies, as they become more complex and a more integral part of our lives, really require a multi-stakeholder approach. And that extends to the nature of cooperation. We call it public-private partnerships; we call it multi-stakeholder cooperation and collaboration. Through our experience of 20 years, we think that is really the only modus operandi if we want to speak about good governance of digital tech, in other words, if we want digital technologies, including AI and everything that is waiting for us, such as quantum computing, to work for us. But there is an issue that was mentioned several times by several speakers at this session. One problem is awareness: not everyone is aware of this. And then the problem is also, I would say, not necessarily knowledge as much as skills, in terms of how to do that. Countries really differ in terms of having stakeholders who are interested and have the resources to invest in this type of cooperation. And that, I think, is what requires strong cooperation between, primarily, the developed and developing world.
And I don’t mean just economies; I really mean different sectors with various kinds of resources working together. Through these types of IGF processes, for example EuroDIG for Europe and many national IGFs, there is really an opportunity for inclusive platforms to bring together good practices, stakeholders that have resources, and stakeholders that have a demand but lack resources, so that they can work together, exchange practices, and then establish good partnerships. I also think that creating opportunities for bridging the generational gap in knowledge and skills is critically important. That’s why we always build this huge track of intergenerational dialogue, where those who already have experience with leadership, with resources and with deploying resources for good can work with those who tomorrow will be taking the positions where they will have to make decisions for the technologies to serve the greater good, so that they know each other, and that is how we save time. I know the response is not simple, but in a nutshell I would say that fostering dialogue really leads to cooperation and good partnerships, and these types of platforms have, I think, proven to be effective resources for that. Thank you.
Dorijn Boogaard: Thank you very much, Anja. And you also had a second question, which was about tackling misinformation and disinformation. Maybe, Anton, you could quickly share your approach to this problem.
Anton Aschwanden: Yeah, obviously, it’s a key priority for us. Again, you wouldn’t use our product if you couldn’t rely on the quality of the information, so it is a top priority for us to deliver answers that are authoritative. And by the way, our tools also include functionalities to check where the sources are coming from, and we invite you to use them. A great illustration is a tool like Notebook LM, where you can upload large PDFs, for instance reports when preparing speeches or panels, and always have references for where the information is coming from. Those are the kinds of solutions we’re working on in our products. But we also work in partnership with other players on developing standards: SynthID, for the watermarking of AI-generated content, and the Authenticity Coalition are illustrations. So there’s a lot of work happening in this space, and overall I think the approach has to be bold, to really reach those targets, but responsible, and done together, not just in a few places. Coming back to the digital infrastructure and the skilling aspects I mentioned before, we need to do that across the globe and invest in all the different parts of the world. Some of our engineering hubs, in Accra or Nairobi, are an illustration that our engineers, in this case in Africa, are working on solutions also for the African continent, and that we do not pretend there is a one-size-fits-all solution. Thank you.
Dorijn Boogaard: Thank you very much. We are coming to the end of this session, but I would like to ask all the panellists to share one key message, very briefly, starting on the right and then going through all the panellists.
Pap Ndiaye: AI is a political issue, in the best sense of the word, and we need to organise so as to make AI productive for the education of our youth throughout the world.
Chengetai Masango: I think AI is here to stay, and it’s very important to use it, but to use it well, with critical thinking skills. And as the Ambassador was saying, there has to be a balance as well. You can’t just do away with the way we used to do things; it’s a balance. But it’s here to stay.
Anton Aschwanden: I would go for: let’s make sure that the current digital divide does not become an AI divide. And, as I said before, let’s also worry about the missed opportunities and the missed use of AI, and not only about the misuse.
Laila Lorenzon: I would say that AI is an ally and not an enemy and that it can be used to unite students and teachers and just improve the overall educational process, that it’s better and easier for both.
Ben Mischeck: Maybe to finish off, I would like to emphasize the importance of empathy when speaking about AI in education, for all generations and all stakeholders: to really try to understand how the other one perceives AI and what benefits and risks everyone is seeing, because I think all of them are very valid and it’s very worthwhile to make the effort to show empathy.
Dorijn Boogaard: Thank you all very much for joining this panel, and thank you all very much for joining this session. I would like to point out that this is the first of four workshops in the IGF youth track, so thank you for joining, but also feel free to join the other three workshops coming up this year. And now the final words to our online moderator. Thank you very much.
Moderator: Also from our side, thanks to the panelists for your insights, also thank you Dorijn for the moderation, as well thanks to the audience for the attendance. The next session will be Resilience of IoT Ecosystems, Preparing for the Future. We start at 1 p.m., giving you a 45-minute break for lunch and we look forward to seeing you back then. Thank you.
Laila Lorenzon
Speech speed
150 words per minute
Speech length
1935 words
Speech time
769 seconds
AI has power to facilitate digital education in constrained settings and bridge digital divide rather than deepen it
Explanation
AI can facilitate offline learning and remote learning, especially for communities with limited internet infrastructure and in crisis settings and conflict zones. It’s important to understand that AI can be used as a facilitator to bridge the digital divide rather than something that deepens it.
Evidence
Examples of AI facilitating learning in communities that are hard to reach in terms of internet infrastructure, and in diverse crisis settings and conflict zones
Major discussion point
AI’s Impact on Education Quality and Implementation
Topics
Development | Online education
Meaningful cooperation requires intentional building with resources and capacity building to ensure equal participation
Explanation
Cooperation doesn’t happen by default and needs to be built intentionally with resources, attention and care. Capacity building is key to ensure everyone starts from the same place with technical knowledge needed to actively engage in AI education discussions.
Evidence
Example from Brazil’s 2022 ‘AI in the Classroom’ project by Data Privacy Brazil, which used ‘sensitive listening’ to create safe spaces for meaningful participation
Major discussion point
Multi-stakeholder Cooperation and Youth Participation
Topics
Development | Online education
Agreed with
– Pap Ndiaye
– Chengetai Masango
– Anja Gengo
Agreed on
Multi-stakeholder cooperation is essential for effective AI governance in education
Investment in open, adaptable infrastructure respecting local context and languages is essential
Explanation
All frameworks should be open for accountability and designed with local context and cultural diversities in mind, including diverse needs of youth with disabilities and specific languages. Without this approach, there’s risk of deepening inequalities rather than solving them.
Major discussion point
Infrastructure and Accessibility Challenges
Topics
Development | Infrastructure | Multilingualism
Agreed with
– Anton Aschwanden
– Pap Ndiaye
Agreed on
Digital divide must not become an AI divide, requiring global infrastructure investment
Including both students and teachers in AI deployment process prevents teachers from feeling replaced
Explanation
It’s important to break down the perception among some teachers that AI is there to replace them or make learning harder. Teachers should feel included so they understand AI is a tool that can help facilitate learning and make classes more interesting and engaging.
Major discussion point
Teacher-Student Relationship and Educational Transformation
Topics
Online education
Agreed with
– Ben Mischeck
Agreed on
Students will use AI regardless of restrictions, so education should focus on proper usage rather than prohibition
AI can make classes more engaging for digital-native students who are socialized differently
Explanation
The current generation is born connected, with early exposure to screens and social media, which impacts their social development. Traditional classroom methods don’t meet the needs of digital-age students, so AI can be used for gamified sessions and more interactive learning.
Evidence
Students are exposed to screens, content, short videos that change how they pay attention to things
Major discussion point
Teacher-Student Relationship and Educational Transformation
Topics
Online education | Sociocultural
Disagreed with
– Pap Ndiaye
– Ben Mischeck
Disagreed on
Whether students should have spaces to learn without AI versus encouraging AI integration
Audience
Speech speed
124 words per minute
Speech length
835 words
Speech time
400 seconds
Current educational systems are not ready for AI improvements, requiring transformation to make AI useful
Explanation
Many countries have educational systems that are not ready to absorb the improvements that AI might bring. Without the capability to turn AI into something useful, it will bring more harm than improvement to educational systems.
Major discussion point
AI’s Impact on Education Quality and Implementation
Topics
Online education | Development
Disagreed with
– Anton Aschwanden
– Pap Ndiaye
Disagreed on
Optimism versus pessimism about AI’s impact on education quality
Teaching youth to deal with AI-influenced social media content and value different perspectives is crucial for democracy
Explanation
Youth learn through social media content presented by AI algorithms in black boxes that influence people in unknown ways. Education must teach how to deal with these outcomes and recognize different viewpoints, as this undermines democracy when not addressed properly.
Evidence
Reference to article about how hearing about problems in free countries versus authoritarian countries like Russia demonstrates the value of freedom of speech
Major discussion point
Misinformation and Digital Literacy
Topics
Human rights | Online education | Freedom of expression
Public-private partnerships must ensure transparency, inclusivity and educational sovereignty across diverse regions
Explanation
There’s a need to structure public-private partnerships so that AI tools for education are innovative while upholding transparency, inclusivity, and respect for educational sovereignty in diverse regions.
Major discussion point
Policy and Governance Frameworks
Topics
Legal and regulatory | Online education
Anton Aschwanden
Speech speed
135 words per minute
Speech length
1853 words
Speech time
820 seconds
AI can improve education through personalized learning and language accessibility, but requires responsible development with safeguards
Explanation
AI-powered tools can help transmit education and knowledge through personalized learning, improve software, and provide access to new languages including less-spoken ones. However, this requires responsible development with safeguards throughout the lifecycle including design, testing, and deployment.
Evidence
Examples of Google’s Read Along Quill and 1,000 languages initiatives; mention of security teams working on red teaming efforts to prevent system manipulation
Major discussion point
AI’s Impact on Education Quality and Implementation
Topics
Online education | Multilingualism
Agreed with
– Chengetai Masango
– Pap Ndiaye
Agreed on
AI has potential to improve education but requires responsible implementation with safeguards
Disagreed with
– Audience
– Pap Ndiaye
Disagreed on
Optimism versus pessimism about AI’s impact on education quality
Current digital divide of 2.4 billion people offline must not become an AI divide
Explanation
With 2.4 billion people still offline according to ITU statistics, there’s a challenge to ensure the existing digital divide doesn’t become an AI divide. This requires investment in digital infrastructure globally, not just in the global north.
Evidence
ITU statistics showing 2.4 billion people offline; examples of Google’s fiber optic cable investments connecting Latin America to Africa, Africa to Asia Pacific, and remote Pacific areas
Major discussion point
Infrastructure and Accessibility Challenges
Topics
Development | Infrastructure | Digital access
Agreed with
– Pap Ndiaye
– Laila Lorenzon
Agreed on
Digital divide must not become an AI divide, requiring global infrastructure investment
Companies have self-interest in maintaining safe platforms and invest in digital literacy programs globally
Explanation
Companies care about platform safety not only for users but because advertising customers don’t want their products promoted in environments with scams or violence. This creates business incentive for responsible AI development and global digital literacy investments.
Evidence
Google’s 8-year partnership with Swiss youth foundation Pro-Juventude for digital literacy training in Swiss schools; mention of AI skilling certificates and global programs
Major discussion point
Misinformation and Digital Literacy
Topics
Online education | Development
Disagreed with
– Pap Ndiaye
Disagreed on
Corporate responsibility and self-interest in AI safety
Ben Mischeck
Speech speed
161 words per minute
Speech length
1394 words
Speech time
518 seconds
Students will use AI regardless, so teachers should actively encourage proper usage rather than prohibit it
Explanation
Students will use AI anyway in an unmanaged way, trying to find the easiest solution to problems. Teachers should take on the role of showing students how to use AI correctly and securely, such as learning to write essays in collaboration with AI agents while being critical of AI outputs like hallucinations.
Evidence
Example of essay writing where teachers should encourage processes for collaborative AI use and critical interaction with AI agents
Major discussion point
AI’s Impact on Education Quality and Implementation
Topics
Online education
Agreed with
– Laila Lorenzon
Agreed on
Students will use AI regardless of restrictions, so education should focus on proper usage rather than prohibition
Disagreed with
– Pap Ndiaye
– Laila Lorenzon
Disagreed on
Whether students should have spaces to learn without AI versus encouraging AI integration
Initial AI focus on detection created mistrust between students and teachers instead of collaboration
Explanation
Early approaches focused on comparing and detecting AI-generated versus student-written work, which created mistrust and positioned students and teachers against each other rather than working collaboratively. This approach has technical issues with AI detection and creates an adversarial rather than collaborative learning environment.
Evidence
Technical issues with AI detection being difficult and potentially biased
Major discussion point
Teacher-Student Relationship and Educational Transformation
Topics
Online education
Teachers need to take active role in showing students how to use AI correctly and securely
Explanation
Teachers should actively encourage AI usage and educate students on secure, collaborative AI use. Students need to learn to be critical when using AI, understanding issues like hallucinations, but this requires motivation and guidance from teachers.
Evidence
Examples of AI tutoring systems and collaborative essay writing processes
Major discussion point
Teacher-Student Relationship and Educational Transformation
Topics
Online education
Empathy is crucial when discussing AI in education across all generations and stakeholders
Explanation
All generations and stakeholders should try to understand how others perceive AI, including the benefits and risks each group sees. This empathy is important because different perspectives on AI’s value and impact are all valid and worthwhile to understand.
Major discussion point
Multi-stakeholder Cooperation and Youth Participation
Topics
Online education | Interdisciplinary approaches
Pap Ndiaye
Speech speed
112 words per minute
Speech length
1270 words
Speech time
678 seconds
International governance requires including all countries, especially those with limited educational access
Explanation
AI governance must include all countries, particularly those where access to education remains limited. The UN plays a central role in including voices of countries isolated from AI, for whom digital transformation is a major development driver.
Evidence
UNESCO’s 2021 recommendation on AI ethics placing equitable access and digital literacy at the center; Global Digital Compact from 2022; mention of upcoming AI Impact Summit in India focusing on developing countries
Major discussion point
Infrastructure and Accessibility Challenges
Topics
Development | Online education | Legal and regulatory
Agreed with
– Anton Aschwanden
– Laila Lorenzon
Agreed on
Digital divide must not become an AI divide, requiring global infrastructure investment
Multi-stakeholder approach must include all education stakeholders from design phase, especially youth, teachers, and parents
Explanation
Understanding educational challenges of AI requires representation of all stakeholders including governments, companies, international organizations, but especially teachers, parent associations, and youth representatives. Youth representation in AI governance design is essential for measuring initiative effectiveness.
Evidence
Examples of IGF Youth Initiatives and events like EuroDIG and Internet Governance Forum as key arenas for inclusive dialogue
Major discussion point
Multi-stakeholder Cooperation and Youth Participation
Topics
Online education | Legal and regulatory
Agreed with
– Laila Lorenzon
– Chengetai Masango
– Anja Gengo
Agreed on
Multi-stakeholder cooperation is essential for effective AI governance in education
Students still need spaces to think and write without AI for cognitive development
Explanation
While not suggesting getting rid of AI, it’s important to find spaces in schools and homework where students can think without machines or AI, focusing on their own writing and ideas. This is very important for their cognitive development.
Major discussion point
AI’s Impact on Education Quality and Implementation
Topics
Online education
Agreed with
– Anton Aschwanden
– Chengetai Masango
Agreed on
AI has potential to improve education but requires responsible implementation with safeguards
Disagreed with
– Ben Mischeck
– Laila Lorenzon
Disagreed on
Whether students should have spaces to learn without AI versus encouraging AI integration
Powerful adversaries using AI and social media could destroy education if collective action is not taken
Explanation
The conjunction of social media and AI creates very powerful adversaries, with young people spending more time on platforms like TikTok than doing homework. These platforms will become more influential using AI, and without collective will to use AI effectively, it could destroy education as we understand it.
Evidence
Example of students spending more time checking TikTok than doing homework; TikTok and others investing in AI to become more influential
Major discussion point
Misinformation and Digital Literacy
Topics
Online education | Sociocultural
Disagreed with
– Anton Aschwanden
Disagreed on
Corporate responsibility and self-interest in AI safety
Chengetai Masango
Speech speed
115 words per minute
Speech length
492 words
Speech time
255 seconds
Youth track creates opportunities for intergenerational dialogue and is essential for sustainability of digital governance
Explanation
Developing youth capacity is essential for sustainability of digital governance processes and is a core part of the IGF mandate. The youth track creates meaningful opportunities for current generation leaders to engage with the next generation, supported by various partners and featuring intergenerational dialogues.
Evidence
Support from AI Action Summit in Paris, regional IGFs, collaboration with youth IGFs, Internet Society’s Ambassadors Program, Youth IGF Movement, and Youth Coalition on Internet Governance
Major discussion point
Multi-stakeholder Cooperation and Youth Participation
Topics
Online education | Development
Agreed with
– Laila Lorenzon
– Pap Ndiaye
– Anja Gengo
Agreed on
Multi-stakeholder cooperation is essential for effective AI governance in education
2025 is crucial year for decision-making with WSIS 20-year review and multiple international summits
Explanation
This year’s youth track is particularly significant as it coincides with the 20-year review of the World Summit on the Information Society. It’s a year of decision-making and change, bringing opportunity, with the IGF 2025 Global Youth Summit taking place during the 20th annual IGF meeting.
Evidence
Upcoming events including African IGF in Tanzania, Asia-Pacific IGF in Nepal, Latin American IGF, and IGF 2025 Global Youth Summit in Lillestrom, Norway
Major discussion point
Policy and Governance Frameworks
Topics
Legal and regulatory | Online education
AI is here to stay and must be used with critical thinking skills in a balanced approach
Explanation
AI is permanent and important to use, but must be used well with critical thinking skills. There needs to be balance – you can’t just do away with traditional methods, but AI is here to stay and must be integrated thoughtfully.
Major discussion point
AI’s Impact on Education Quality and Implementation
Topics
Online education
Agreed with
– Anton Aschwanden
– Pap Ndiaye
Agreed on
AI has potential to improve education but requires responsible implementation with safeguards
Anja Gengo
Speech speed
154 words per minute
Speech length
546 words
Speech time
212 seconds
IGF’s 20-year experience shows multi-stakeholder cooperation is the only effective approach for digital technology governance
Explanation
The IGF has evolved over 20 years, attracting new stakeholders who meaningfully engage as awareness grows that complex digital technologies require multi-stakeholder approaches. This extends to public-private partnerships and multi-stakeholder collaboration as the only effective way to make digital technologies work for society.
Evidence
IGF’s 20-year track record of multi-stakeholder cooperation and evolution
Major discussion point
Multi-stakeholder Cooperation and Youth Participation
Topics
Legal and regulatory | Online education
Agreed with
– Laila Lorenzon
– Pap Ndiaye
– Chengetai Masango
Agreed on
Multi-stakeholder cooperation is essential for effective AI governance in education
Session provides platform for inclusive dialogue and bridging resource gaps between developed and developing regions
Explanation
IGF processes like EuroDIG and national IGFs create opportunities for inclusive platforms that bring together stakeholders with resources and those with demand but lacking resources. This enables exchange of good practices and establishment of partnerships for cooperation.
Evidence
Examples of EuroDIG for Europe, national IGFs, and intergenerational dialogue opportunities
Major discussion point
Infrastructure and Accessibility Challenges
Topics
Development | Online education
Moderator
Speech speed
182 words per minute
Speech length
443 words
Speech time
145 seconds
Session rules and structure facilitate proper remote and in-person participation
Explanation
Clear guidelines are established for Zoom participation including using full names, raising hands for questions, switching on video when speaking, stating name and affiliation, and not sharing meeting links. Questions in chat should be marked with ‘cue’ for proper addressing.
Evidence
Specific instructions for Zoom functionality, video requirements, and chat protocols
Major discussion point
Policy and Governance Frameworks
Topics
Online education
Dorijn Boogaard
Speech speed
140 words per minute
Speech length
1338 words
Speech time
569 seconds
Session represents first of four IGF youth track workshops building toward global summit
Explanation
This EuroDIG session is the first of four workshops following outputs from last year’s IGF in Riyadh and the youth declaration from the AI Action Summit in February. Additional workshops will occur at regional IGFs leading to the global summit.
Evidence
Reference to three main recommendations from the declaration: advance AI-driven education, strengthen multi-stakeholder cooperation, and foster intergenerational dialogue
Major discussion point
Policy and Governance Frameworks
Topics
Online education | Development
Agreements
Agreement points
Multi-stakeholder cooperation is essential for effective AI governance in education
Speakers
– Laila Lorenzon
– Pap Ndiaye
– Chengetai Masango
– Anja Gengo
Arguments
Meaningful cooperation requires intentional building with resources and capacity building to ensure equal participation
Multi-stakeholder approach must include all education stakeholders from design phase, especially youth, teachers, and parents
Youth track creates opportunities for intergenerational dialogue and is essential for sustainability of digital governance
IGF’s 20-year experience shows multi-stakeholder cooperation is the only effective approach for digital technology governance
Summary
All speakers agree that effective AI governance in education requires intentional multi-stakeholder cooperation involving youth, teachers, policymakers, and technology developers, with proper capacity building and inclusive dialogue mechanisms.
Topics
Online education | Legal and regulatory | Development
Students will use AI regardless of restrictions, so education should focus on proper usage rather than prohibition
Speakers
– Ben Mischeck
– Laila Lorenzon
Arguments
Students will use AI regardless, so teachers should actively encourage proper usage rather than prohibit it
Including both students and teachers in AI deployment process prevents teachers from feeling replaced
Summary
Both speakers agree that prohibition of AI in education is ineffective and counterproductive, emphasizing the need for collaborative approaches that teach proper AI usage while involving both students and teachers in the process.
Topics
Online education
AI has potential to improve education but requires responsible implementation with safeguards
Speakers
– Anton Aschwanden
– Chengetai Masango
– Pap Ndiaye
Arguments
AI can improve education through personalized learning and language accessibility, but requires responsible development with safeguards
AI is here to stay and must be used with critical thinking skills in a balanced approach
Students still need spaces to think and write without AI for cognitive development
Summary
Speakers agree that AI has educational benefits but must be implemented responsibly with proper safeguards, critical thinking, and balanced approaches that don’t completely replace traditional learning methods.
Topics
Online education
Digital divide must not become an AI divide, requiring global infrastructure investment
Speakers
– Anton Aschwanden
– Pap Ndiaye
– Laila Lorenzon
Arguments
Current digital divide of 2.4 billion people offline must not become an AI divide
International governance requires including all countries, especially those with limited educational access
Investment in open, adaptable infrastructure respecting local context and languages is essential
Summary
Speakers agree that existing digital inequalities must not be replicated in AI access, requiring intentional global infrastructure investment and inclusive governance that includes developing countries and respects local contexts.
Topics
Development | Infrastructure | Digital access
Similar viewpoints
Both speakers emphasize the importance of collaborative rather than adversarial approaches to AI in education, recognizing that current students have different learning needs due to their digital socialization.
Speakers
– Ben Mischeck
– Laila Lorenzon
Arguments
Initial AI focus on detection created mistrust between students and teachers instead of collaboration
AI can make classes more engaging for digital-native students who are socialized differently
Topics
Online education | Teacher-Student Relationship
Both speakers acknowledge the significant influence of technology companies and social media platforms on education, though they approach it from different perspectives – one emphasizing the threat and the other the responsibility.
Speakers
– Pap Ndiaye
– Anton Aschwanden
Arguments
Powerful adversaries using AI and social media could destroy education if collective action is not taken
Companies have self-interest in maintaining safe platforms and invest in digital literacy programs globally
Topics
Online education | Misinformation and Digital Literacy
Both express concern about AI’s influence through social media on democratic values and education, emphasizing the need for critical thinking and media literacy education.
Speakers
– Audience
– Pap Ndiaye
Arguments
Teaching youth to deal with AI-influenced social media content and value different perspectives is crucial for democracy
Powerful adversaries using AI and social media could destroy education if collective action is not taken
Topics
Human rights | Online education | Freedom of expression
Unexpected consensus
Need for traditional learning spaces alongside AI integration
Speakers
– Pap Ndiaye
– Ben Mischeck
– Chengetai Masango
Arguments
Students still need spaces to think and write without AI for cognitive development
Teachers need to take active role in showing students how to use AI correctly and securely
AI is here to stay and must be used with critical thinking skills in a balanced approach
Explanation
Despite representing different stakeholder perspectives (government, youth, international organization), there’s unexpected consensus that AI integration shouldn’t completely replace traditional learning methods, but rather complement them in a balanced approach.
Topics
Online education
Business incentives align with educational safety and responsibility
Speakers
– Anton Aschwanden
– Pap Ndiaye
Arguments
Companies have self-interest in maintaining safe platforms and invest in digital literacy programs globally
Multi-stakeholder approach must include all education stakeholders from design phase, especially youth, teachers, and parents
Explanation
Unexpected alignment between private sector representative and government official on the importance of responsible AI development, suggesting that business interests can align with public educational goals.
Topics
Online education | Legal and regulatory
Overall assessment
Summary
Strong consensus emerged around the need for multi-stakeholder cooperation, responsible AI implementation with safeguards, collaborative rather than prohibitive approaches to student AI use, and preventing digital divides from becoming AI divides. Speakers agreed on the importance of including all stakeholders, especially youth and teachers, in AI governance design.
Consensus level
High level of consensus across diverse stakeholders (youth, government, private sector, international organizations) on fundamental principles, though with different emphases. This suggests a mature understanding of AI in education challenges and broad agreement on solution approaches, which bodes well for effective policy development and implementation.
Differences
Different viewpoints
Whether students should have spaces to learn without AI versus encouraging AI integration
Speakers
– Pap Ndiaye
– Ben Mischeck
– Laila Lorenzon
Arguments
Students still need spaces to think and write without AI for cognitive development
Students will use AI regardless, so teachers should actively encourage proper usage rather than prohibit it
AI can make classes more engaging for digital-native students who are socialized differently
Summary
Pap Ndiaye advocates for preserving AI-free spaces in education for cognitive development, while Ben and Laila argue for embracing and properly integrating AI, since students will use it anyway and it can enhance learning for digital natives.
Topics
Online education
Optimism versus pessimism about AI’s impact on education quality
Speakers
– Audience
– Anton Aschwanden
– Pap Ndiaye
Arguments
Current educational systems are not ready for AI improvements, requiring transformation to make AI useful
AI can improve education through personalized learning and language accessibility, but requires responsible development with safeguards
Powerful adversaries using AI and social media could destroy education if collective action is not taken
Summary
There’s a spectrum from pessimistic views that current systems aren’t ready and AI could harm education, to optimistic tech industry perspectives about AI’s benefits, with diplomatic concerns about powerful commercial interests.
Topics
Online education | Development
Corporate responsibility and self-interest in AI safety
Speakers
– Anton Aschwanden
– Pap Ndiaye
Arguments
Companies have self-interest in maintaining safe platforms and invest in digital literacy programs globally
Powerful adversaries using AI and social media could destroy education if collective action is not taken
Summary
Anton argues that companies have business incentives to maintain safe AI platforms, while Pap Ndiaye expresses concern that powerful tech companies don’t care about the educational common good and are outpacing collective governance efforts.
Topics
Online education | Legal and regulatory
Unexpected differences
Generational perspectives on AI skills and learning
Speakers
– Ben Mischeck
– Audience
Arguments
Empathy is crucial when discussing AI in education across all generations and stakeholders
Teaching youth to deal with AI-influenced social media content and value different perspectives is crucial for democracy
Explanation
While both acknowledge generational differences, Ben emphasizes mutual understanding and empathy, while the audience member focuses more on protecting youth from AI manipulation and preserving democratic values, suggesting different priorities in intergenerational dialogue.
Topics
Online education | Human rights
Overall assessment
Summary
The main disagreements center on the balance between AI integration and the preservation of traditional learning methods, the level of trust in corporate versus governmental approaches to AI governance, and optimistic versus cautious perspectives on AI’s educational impact.
Disagreement level
Moderate disagreement with constructive dialogue – speakers share common goals of improving education and ensuring equity, but differ significantly on implementation approaches and the appropriate level of AI integration in educational settings.
Takeaways
Key takeaways
AI is here to stay in education and must be used with critical thinking skills in a balanced approach that combines traditional methods with new technologies
The current digital divide affecting 2.4 billion people offline must not become an AI divide, requiring global infrastructure investment and inclusive access
Multi-stakeholder cooperation involving youth, educators, policymakers, and technology developers is essential and must be intentionally built with resources and capacity building
Students will use AI regardless of restrictions, so teachers should actively encourage proper usage and collaborate with students rather than prohibit AI use
AI can serve as an ally to unite students and teachers, improving educational processes through personalized learning, language accessibility, and enhanced engagement
Meaningful youth participation requires involvement from the design phase, not just consultation at the end of processes, with proper capacity building and digital literacy support
International governance frameworks and legally binding instruments are needed to ensure AI serves educational purposes while maintaining transparency and inclusivity
Intergenerational dialogue with empathy across all stakeholders is crucial for understanding different perspectives on AI benefits and risks in education
Resolutions and action items
Continue the IGF Youth Track with three additional workshops at the African IGF in Tanzania, the Asia-Pacific IGF in Nepal, and the Latin American IGF
Hold the IGF 2025 Global Youth Summit during the 20th annual IGF meeting in Lillestrøm, Norway
Encourage youth engagement in other IGF program components including intersessional work
Implement the three main recommendations from the AI Action Summit Declaration: advance AI-driven education, strengthen multi-stakeholder cooperation, and foster intergenerational dialogue
Develop recurring spaces for co-creation in international AI strategies through youth assemblies or councils
Build consultation requirements with youth, students, and teachers into national AI education strategies from the beginning of development processes
Unresolved issues
How to effectively transform current educational systems that are not ready for AI improvements
What specific skills should be taught to students in an AI-enhanced world and which traditional skills remain essential
How to structure public-private partnerships to ensure transparency, inclusivity and educational sovereignty across diverse regions
How to prevent AI in education from spreading misinformation and disinformation while maintaining innovation
How to balance the need for students to think independently without AI while also preparing them for an AI-integrated future
How to address the resource gaps between developed and developing countries in AI education implementation
How to ensure meaningful rather than tokenistic youth participation in AI governance processes
Suggested compromises
Balance traditional education methods with AI integration – maintaining spaces for students to think and write without AI while also teaching proper AI collaboration
Focus on both preventing AI misuse and avoiding missed opportunities from not using AI effectively
Combine top-down policy frameworks with bottom-up youth-driven approaches in AI education governance
Integrate AI as a tool to enhance rather than replace human connection and creativity in learning processes
Develop AI education solutions that respect local contexts and languages while maintaining global standards and cooperation
Thought provoking comments
I don’t quite disagree, like mine is not a complete opinion, but I think that considering the current situation, we have in many countries, I think, and in many situations, an educational system that still is not ready to get along with the improvements which AI might be able to bring. So if we won’t be able to face this challenge, I think that AI will bring more harm than improvement within the educational system.
Speaker
Brahim Balla (audience member)
Reason
This comment was particularly insightful because it shifted the focus from the technology itself to institutional readiness and capacity. Rather than taking a binary position on AI’s benefits, it introduced the crucial variable of systemic preparedness, highlighting that technology’s impact depends heavily on the context and infrastructure into which it’s deployed.
Impact
This comment fundamentally reframed the discussion from ‘Will AI improve education?’ to ‘Are we ready to make AI improve education?’ It prompted Anton to respond with Google’s perspective on responsible AI development and safeguards, and influenced subsequent discussions about the need for teacher training, infrastructure development, and institutional transformation.
We don’t need to create humans that can do things that will be replaced by AI. And we need to know what we have to teach people, what is required for them to be able to do a meaningful job in the future… we might be teaching people skills that they don’t need anymore, and that we might use AI in a way that doesn’t make them better in learning.
Speaker
Audience member (30+)
Reason
This comment was thought-provoking because it challenged the fundamental assumptions about educational content and purpose in an AI-driven world. It raised existential questions about the future of human skills and the relevance of current curricula, forcing participants to think beyond implementation to the very purpose of education.
Impact
This comment sparked a rich intergenerational dialogue about skills evolution. It led Ben to articulate a nuanced view of how skills are composed of multiple components, some of which can be automated while others become more important. It also prompted discussions about the value of traditional skills and the need for intergenerational dialogue to balance innovation with preservation of valuable human capabilities.
I still believe that it’s very useful for students to write essays on their own without the AI, without the internet, just to focus on the writing and their own ideas… we still need to find spaces in schools possibly in homework where students can think without the machines, without the AI. On their own, it is very important for their cognitive development.
Speaker
Pap Ndiaye
Reason
This comment was particularly insightful because it introduced a counterbalance to the prevailing narrative of AI integration. Coming from a former Minister of Education, it carried significant weight and represented a more conservative but thoughtful perspective on preserving human cognitive development and independent thinking.
Impact
This comment created a productive tension in the discussion, validating concerns about over-reliance on AI while not rejecting it entirely. It reinforced earlier points about the need for balance and helped establish that the discussion wasn’t about wholesale adoption but thoughtful integration that preserves essential human capabilities.
The over-reliance on AI is threatening the cognitive, it’s leading to cognitive outsourcing and also threatening the critical thinking abilities of the students. So I want to know… how can we help students to understand that difference so that it leads to the ethical use of AI and doesn’t lead, and doesn’t affect the academic integrity of the work?
Speaker
Master’s student at CU
Reason
This comment was insightful because it introduced the concept of ‘cognitive outsourcing’ – a precise term that captured a key concern about AI in education. It moved beyond general fears to identify specific cognitive risks and framed the challenge in terms of ethical use and academic integrity.
Impact
This comment shifted the discussion toward practical pedagogy and the role of educators in guiding AI use. It prompted responses from Ben and Laila about the importance of teacher involvement in modeling appropriate AI use, and reinforced the theme that prohibition isn’t effective – education and guidance are needed instead.
I think it’s not only about the misuse, but the misuse that I’m worried [about]… what will happen if you’re not gonna use it? And I think this is really a thought that I would also love to leave with you, that it can be this tool to many regions and community to kind of like make a jump as well.
Speaker
Anton Aschwanden
Reason
This comment was thought-provoking because it reframed the risk discussion entirely. Instead of focusing solely on the dangers of AI misuse, it highlighted the potentially greater danger of not using AI at all – the risk of being left behind. It introduced the concept of ‘leapfrogging’ development through AI adoption.
Impact
This comment provided a crucial counterpoint to the cautionary voices in the discussion and helped balance the conversation. It reinforced themes about digital divides becoming AI divides and influenced the discussion toward considering AI as a tool for equity and development rather than just a source of risk.
We have very powerful adversaries in many ways. That is the conjunction of a number of social media and AI all put together… They spend more time checking TikTok than doing their homework… TikTok and others will be more and more efficient using AI… This is the reality that we face nowadays… AI could very well destroy education as we understand it.
Speaker
Pap Ndiaye
Reason
This comment was particularly impactful because it acknowledged the competitive landscape that educational institutions face. It was brutally honest about the challenge of competing with highly engaging, AI-powered social media platforms for students’ attention, and framed this as an existential threat to traditional education.
Impact
This comment introduced a sense of urgency and realism to the discussion that had been somewhat absent. It prompted Anton to respond with Google’s perspective on responsibility and self-interest in creating safe platforms, and helped ground the theoretical discussion in the practical realities educators face daily.
Overall assessment
These key comments fundamentally shaped the discussion by introducing critical tensions and complexities that prevented it from becoming a simple pro-AI or anti-AI debate. The audience interventions, particularly from younger participants, grounded the discussion in real-world concerns about institutional readiness and cognitive development. The intergenerational dialogue that emerged – with older participants raising concerns about preserving human capabilities and younger participants advocating for thoughtful integration – created a nuanced conversation that acknowledged both opportunities and risks. The comments collectively moved the discussion from abstract benefits of AI to concrete challenges of implementation, from technological capabilities to human needs, and from individual tools to systemic transformation. This created a more sophisticated understanding that AI’s impact on education depends heavily on how it’s implemented, regulated, and integrated into existing educational frameworks.
Follow-up questions
How can we ensure that AI enhances rather than replaces the human connection that is so important in the learning process and in the social development of students?
Speaker
Laila Lorenzon
Explanation
This addresses a fundamental concern about maintaining the social and emotional aspects of education while integrating AI tools
What are the skills that we need to learn, and how are the skills going to develop in the future with AI integration?
Speaker
Audience member (30+)
Explanation
This question highlights the need to understand what human capabilities will remain relevant as AI automates certain tasks
How can we help students understand the difference between using AI as a substitute versus as a supplement to ensure ethical use and maintain academic integrity?
Speaker
Master’s student at CU
Explanation
This addresses the critical challenge of preventing cognitive outsourcing while leveraging AI’s benefits in education
How can public-private partnerships be structured to ensure AI tools for education are innovative while upholding transparency, inclusivity, and educational sovereignty in diverse regions?
Speaker
George from Civil Society
Explanation
This explores governance mechanisms needed to balance innovation with ethical considerations across different cultural contexts
How can we prevent AI in education from spreading misinformation, disinformation, or malinformation, and what role can youth play in building trust and accountability?
Speaker
George from Civil Society
Explanation
This addresses the critical issue of information quality and youth agency in maintaining educational integrity
How do we teach youth to deal with AI-influenced content on social media and recognize different perspectives to protect democratic values?
Speaker
Audience member
Explanation
This highlights the broader challenge of media literacy and critical thinking in an AI-mediated information environment
What specific mechanisms and funding structures are needed to prevent the current digital divide from becoming an AI divide?
Speaker
Anton Aschwanden
Explanation
This addresses the urgent need for equitable access to AI educational tools globally, particularly for developing countries
How can we create structured and recurring spaces for co-creation in international AI strategies that include youth from the beginning of the process?
Speaker
Laila Lorenzon
Explanation
This focuses on moving beyond tokenistic youth participation to meaningful engagement in AI governance
What are the real needs of developing countries regarding AI in education, and how should financing mechanisms be designed in partnership with these countries?
Speaker
Pap Ndiaye
Explanation
This emphasizes the importance of understanding local contexts and needs rather than imposing one-size-fits-all solutions
How can we balance the use of AI tools with preserving spaces for students to think and develop cognitively without technological assistance?
Speaker
Pap Ndiaye
Explanation
This addresses the need to maintain human cognitive development while integrating AI tools in education
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.