WS #110 AI Innovation Responsible Development Ethical Imperatives
26 Jun 2025 14:45h - 15:45h
Session at a glance
Summary
This workshop at the Internet Governance Forum focused on AI innovation, responsible development, and ethical imperatives, co-organized by the Consultative Committee on Information Technology (CCIT) of the China Association for Science and Technology (CAST) and the Internet Society of China (ISC). The discussion centered on how to foster AI innovation while ensuring responsibility, inclusivity, and alignment with global frameworks such as UNESCO’s ethical recommendations and the Global Digital Compact.
Opening speakers emphasized that AI’s transformative power across sectors from agriculture to healthcare must be balanced with addressing challenges such as algorithmic bias, lack of transparency, privacy breaches, and the risk of deepening digital divides. Professor Gong Ke highlighted three core policy dimensions: inclusive development to prevent AI from worsening digital disparities, global governance frameworks that align national policies with international standards, and multi-stakeholder collaboration mechanisms.
UNESCO’s representative, Guilherme Canela de Souza Godoi, stressed that innovation and human rights protection should not be viewed as contradictory goals, emphasizing that good innovation benefits everyone rather than specific groups. He outlined UNESCO’s approach of fostering opportunities, mitigating risks, and prosecuting harms through established international human rights frameworks.
Educational applications of AI received significant attention, with speakers discussing both opportunities for personalized learning and risks including digital poverty, lack of regulation, and the potential reduction of diverse opinions. Professor Ricardo Israel Robles Pelayo from Mexico highlighted concerns about AI implementation in education and justice systems without proper ethical consideration, particularly in contexts with existing structural challenges.
Dr. Daisy Selematsela addressed how academic libraries navigate AI integration, discussing challenges around data protection, technical expertise, and financial constraints, while noting benefits like improved search capabilities and 24/7 user support. The workshop concluded with consensus on three key takeaways: AI must bridge rather than deepen global divides, governance requires harmonized national and international frameworks, and ethical design ensures AI serves humanity effectively.
Keypoints
## Major Discussion Points:
– **Balancing AI Innovation with Ethical Imperatives**: The central theme focused on how to foster AI technological advancement while ensuring responsible development that aligns with human rights, ethical frameworks, and global standards like UNESCO’s AI ethics recommendations.
– **Inclusive AI Development and Digital Divide Concerns**: Multiple speakers emphasized the need to prevent AI from exacerbating existing digital disparities between and within countries, particularly addressing how developing nations can maintain access to AI technologies and benefits.
– **AI Risks and Challenges in Education**: Extensive discussion on how AI impacts higher education, including concerns about academic integrity, digital poverty, lack of regulation, unauthorized content use, and the need for age-appropriate limitations and data ownership policies.
– **Global Governance and Multi-stakeholder Collaboration**: Speakers addressed the need for international cooperation, harmonized national frameworks, and multi-stakeholder approaches to AI governance, drawing lessons from internet governance models while recognizing AI’s unique vertical complexity.
– **Sector-Specific AI Implementation Challenges**: Detailed examination of AI applications in specific sectors like libraries, justice systems, and education, highlighting both opportunities (automation, personalized learning, 24/7 access) and risks (job displacement, bias, over-dependence on algorithms).
## Overall Purpose:
The workshop aimed to explore how to achieve responsible AI development that promotes innovation while ensuring ethical considerations, human rights protection, and inclusive access. The goal was to foster international dialogue and consensus-building around AI governance frameworks that serve the common good.
## Overall Tone:
The discussion maintained a consistently collaborative and constructive tone throughout. Speakers approached the topic with cautious optimism, acknowledging both AI’s transformative potential and its significant risks. The tone was academic and policy-focused, with participants sharing practical experiences and recommendations rather than engaging in debate. There was a strong emphasis on finding common ground and shared values, particularly around human-centric approaches to AI development.
Speakers
**Speakers from the provided list:**
– **Guilherme Canela de Souza Godoi** – Director for Digital Inclusion at UNESCO
– **Daisy Selematsela** – From the University of Witwatersrand Library, South Africa
– **Ricardo Israel Robles Pelayo** – Professor from Mexico
– **Dr Zhang Xiao** – Vice President of CNNIC, IGF MAG member and Executive Deputy Director of China IGF
– **Huang Chengqing** – Vice President of Internet Society of China and Director of China IGF
– **Moderator** – David, Deputy Director of China IGF (from ISC – Internet Society of China)
– **Ke GONG** – Professor, Chair of CCIT (Consultative Committee on Information Technology)
– **Dr. Yik Chan Chin** – Professor from Beijing Normal University (Note: The moderator introduced this speaker as “Professor Xiaofeng Tao” and “Professor Qian,” but the speaker identified themselves as “Dr. Yik Chan Chin”)
**Additional speakers:**
None identified beyond the provided speaker list.
Full session report
# Workshop Report: AI Innovation, Responsible Development, and Ethical Imperatives
## Executive Summary
This workshop at the Internet Governance Forum (Workshop 110) brought together international experts to examine the balance between fostering AI innovation and ensuring responsible development. Co-organised by the Consultative Committee on Information Technology (CCIT) under the China Association for Science and Technology and the Internet Society of China (ISC), the discussion featured representatives from UNESCO, academic institutions, and governance organisations. The session faced time constraints and technical difficulties that affected the flow of presentations, with some speakers’ remarks being cut short due to scheduling limitations.
## Opening Framework
The workshop was moderated by David from ISC, who serves as Deputy Director of China IGF. Professor Gong Ke, participating online, established the foundational framework by identifying three key policy dimensions: inclusive development to prevent AI from exacerbating digital disparities, the need for global governance frameworks aligned with international standards, and the establishment of multi-stakeholder collaboration mechanisms.
Professor Gong emphasised that international collaboration is essential to maximise AI’s potential while minimising negative impacts, highlighting concerns about AI systems’ lack of explainability, transparency, and issues of bias within algorithmic processes.
## UNESCO’s Perspective on Innovation and Human Rights
Guilherme Canela de Souza Godoi, Director of Digital Inclusion at UNESCO, challenged the narrative that positions innovation and human rights protection as contradictory forces. He argued that these should be viewed as complementary objectives, stating that good innovation should benefit everyone rather than privileged groups.
Godoi outlined UNESCO’s three-pronged approach: fostering opportunities through AI development, mitigating risks through established frameworks, and addressing harms when they occur. He noted that UNESCO builds upon existing international human rights frameworks and that the organisation is marking its 80th anniversary this year. He emphasised that capacity building represents the primary demand from UNESCO’s member states regarding AI ethics implementation.
## Chinese Perspectives on AI Governance
Huang Chengqing, Vice President of the Internet Society of China, emphasised that AI development must be human-oriented, following the principle of “intelligence for good.” He argued that while government guidance is important, effective AI implementation requires participation from all sectors of society.
Ms. Zhang Xiao, Vice President of CNNIC, IGF MAG member, and Executive Deputy Director of China IGF, offered three observations: that innovation and governance are complementary rather than opposed, that AI governance can draw on the internet’s multi-stakeholder model while recognising that AI transforms individual sectors vertically and is therefore more complex, and that the way forward lies in shared, human-centric ethical frameworks, awareness raising, capacity building, and interoperable standards.
## Educational Sector Concerns
Dr. Yik Chan Chin from Beijing Normal University (introduced by the moderator as “Professor Xiaofeng Tao” and “Professor Qian”) raised critical questions about rapid AI implementation in educational settings without adequate consideration of the consequences. She identified key risks including the acceleration of digital poverty, gaps in regulatory oversight, and unauthorised use of educational content.
Dr. Chin also warned about the contamination of AI training data by AI-generated content, describing a problematic feedback loop in which models trained on AI-generated output could degrade the reliability of knowledge over time. She further raised questions about age-appropriate limits on AI use and about the ownership and concentration of user-generated commercial data.
## Latin American Perspective
Professor Ricardo Israel Robles Pelayo from Mexico expressed concern about hasty AI incorporation without adequate ethical reflection, particularly in educational institutions and justice systems. He questioned whether it is legitimate to trust algorithms trained with biased data and whether judges should delegate human judgement to machines.
Professor Pelayo emphasised that critical thinking serves as an essential mediator for ensuring fair AI decisions, arguing that innovation must be guided by law, ethics, and critical reflection rather than pursued as an end in itself.
## Academic Libraries Implementation
Dr. Daisy Selematsela from the University of Witwatersrand Library in South Africa outlined both the challenges and the benefits of AI integration in academic libraries. The challenges she identified included the ethical handling of vast amounts of data, interoperability and gaps in technical expertise, budget cuts that fall on libraries first, staff fears of job displacement, the need for high-quality digitised collections, user education and data literacy, and robust data protection. Among the benefits she highlighted were personalised learning, automation of routine cataloguing and indexing, improved data management and analysis, 24/7 chatbot support, enhanced search and discoverability, and AI-assisted repositories. Her presentation was cut short by time constraints before she could complete her remarks on interactive learning.
## Workshop Conclusion
Due to time limitations, the moderator provided brief concluding remarks focusing on three key takeaways: inclusion as the foundation for AI development, the necessity of unity in governance approaches, and ensuring that innovation thrives within ethical guidelines. The session ended with an invitation for continued discussion rather than formal comprehensive conclusions.
## Key Challenges Identified
The workshop highlighted several unresolved issues requiring continued attention:
– Balancing rapid AI innovation with adequate regulatory oversight
– Developing mechanisms for international collaboration in AI governance
– Addressing data ownership and concentration issues
– Establishing age-appropriate guidelines for AI use in education
– Preventing AI from accelerating digital divides
## Conclusion
Despite technical difficulties and time constraints that affected the session’s flow, the workshop demonstrated international interest in addressing AI governance challenges through collaborative approaches. The discussion emphasised human-centric AI development and the need for frameworks that ensure AI serves broader societal benefits while addressing risks and inequalities. The abbreviated nature of the session highlighted the complexity of these issues and the need for continued international dialogue on AI governance.
Session transcript
Moderator: Good afternoon distinguished guests, ladies and gentlemen, speakers and participants both on-site and online. Welcome to workshop number 110. The topic of the workshop is AI Innovation, Responsible Development and Ethical Imperatives. The workshop is co-organized by CAST’s Consultative Committee on Information Technology, CCIT, and the Internet Society of China, ISC. So I’m David. You can just call me David, because it has the same pronunciation in Chinese and English. I’m from ISC and the Deputy Director of China IGF, and I’m honored to moderate today’s session. We stand at a very important moment where AI’s transformative power must align with ethical imperatives. Therefore, the workshop will explore how to foster innovation while ensuring responsibility, inclusivity and alignment with the Global Digital Compact and UNESCO’s ethical frameworks. Okay, let’s begin. First, I would like to introduce today’s speakers: Mr. Huang Chengqing, Vice President of the Internet Society of China and Director of China IGF; Professor Gong Ke, Chair of CCIT; Ms. Zhang Xiao, Vice President of CNNIC, IGF MAG member and Executive Deputy Director of China IGF; Mr. Guilherme Canela de Souza Godoi, Director for Digital Inclusion of UNESCO; Professor Qian Yiqin from Beijing Normal University; Professor Ricardo Israel Robles Pelayo from Mexico; and Dr. Daisy Selematsela from the University of Witwatersrand Library, South Africa. Okay, let’s begin. First, I will invite Professor Gong Ke, Chair of CCIT, to deliver our opening remarks online. Okay, Professor Gong, the floor is yours.
Ke GONG: Thank you. Thank you, David. Ladies and gentlemen, dear colleagues, on behalf of one of the organizers of this workshop, CCIT, I welcome you all to join this very important discussion. CCIT stands for the Consultative Committee on Information Technology under the China Association for Science and Technology. CCIT has participated in all 20 editions in the past 20 years of IGF, sharing the perspectives and the practices of China’s ICT communities with international partners to promote internet governance to be effective, inclusive, and serves to the common interest of all people. Today, the theme of our workshop is AI innovation, responsible development, and ethical imperatives. Artificial intelligence, in short, AI, is making breakthroughs in numerous fields at an accelerated pace, reshaping sectors ranging from agriculture, manufacturing, transportation, to education, healthcare, social services, and governance. Significantly, artificial intelligence is influencing the development and operation of the Internet and other ICT services. Yet challenges and risks persist. Lack of explainability and transparency in big AI models, weak robustness and precision, potential bias and discrimination, and the danger of exacerbating existing digital divides both between and within countries. To maximize AI’s potential for achieving sustainable development while minimizing its negative impacts, international collaboration and international consensus is essential. Through technical innovation to enhance AI’s explainability, transparency, safety, and robustness, and through proper regulation based on global consensus on AI principles, interoperable standards, and rules, this workshop will address three core policy dimensions. First, inclusive development. How can policies safeguard technology access for developing nations and prevent AI from worsening digital disparities? Second, global governance. How can national frameworks align with the United Nations Global Digital Compact and operationalize UNESCO’s recommendation on the ethics of artificial intelligence? Third, multiple stakeholders collaboration. What mechanism models can foster effective cross-sector and cross-border collaboration, especially in today’s geopolitical context? Dear colleagues, we sincerely invite all of you to actively engage in today’s dialogue, share your insights on establishing an effective global governance models, and jointly chart a course for AI development that drives sustainable transformation and delivers a responsible digital future for all. Thank you again for joining the workshop.
Moderator: Okay, thank you. Thank you for Professor Gong’s opening remarks. And next, we welcome Mr. Huang Chengqing, the Vice President of Internet Society of China and the Director of China IGF. For his opening perspective, Mr. Huang, please.
Huang Chengqing: Ladies and gentlemen, good day to you all. It’s a great pleasure to be here with you at the UN IGF to discuss the innovation and development of artificial intelligence. On behalf of the organizers of this workshop, I would like to extend a warm welcome to all the participants. In recent years, the rapid development of AI technology has profoundly changed the way people work and live. However, it has also brought about many challenges, such as algorithmic bias, privacy breaches, disinformation, deepfakes, and information cocoons. How to ensure that AI innovation develops in a human-oriented direction has become a crucial issue that needs to be addressed urgently. This issue involves not only technological and legal aspects, but also ethical and moral considerations. Therefore, clarifying the path to responsible AI innovation and development and its ethical considerations is of great significance for the stable, safe and sustainable development of the global AI industry. At present, the innovation and development of AI is a global affair that requires the participation and cooperation of the international community. Countries should take part in the global governance of AI with a sense of responsibility. The ultimate goal of AI technology should be to promote human well-being and enhance people’s overall happiness and quality of life. An increasing number of international organizations, governments, industry players and civil society groups have joined hands to introduce ethical principles and governance policies for AI. In November 2021, at the 41st session of the UNESCO General Conference, 193 member states unanimously adopted the Recommendation on the Ethics of Artificial Intelligence, which proposed to encourage ethical research and innovation and to ground AI technology in human rights and fundamental freedoms, values and ethical considerations. In October 2023, China released the Global AI Governance Initiative, emphasizing the principles of people-oriented development and intelligence for good, and providing a Chinese solution for global AI governance based on the concept of a community with a shared future for mankind.
At the same time, we must recognize that promoting AI innovation in accordance with the ethical principle of technology for good requires not only the guidance of governments but also the extensive participation of all sectors of society. As a civil society organization of China’s Internet industry, the Internet Society of China has always been committed to promoting self-discipline and social responsibility in the industry, and has released a number of initiative documents and industry conventions. Building on the valuable platform of the Internet Governance Forum, ISC also undertakes the work of the China IGF Secretariat, actively participating in global Internet governance, sharing China’s beneficial practices in data security and algorithm governance, and contributing to the global governance of frontier Internet technologies represented by AI. Ladies and gentlemen, I believe that promoting the responsible development of AI is an important goal of the Internet Governance Forum. The core goal of responsible development is to ensure that AI technology respects basic human rights, promotes fairness and justice, and prevents potential risks. I hope that through this workshop we can bring together the consensus of all parties through exchanges and cooperation, and work together to contribute to the innovation and development of AI. I wish the workshop a complete success. Thank you all.
Moderator: Thank you Mr. Huang for his opening perspective. And now we turn to our first presentation. First speaker is Mr. Guilherme Canela de Souza Godoy, Director for Digital Inclusion at UNESCO. His topic is Shaping Humanistic and Inclusive AI Innovation. Mr. Godoy, you have the floor.
Guilherme Canela de Souza Godoi: Thank you very much. First and foremost, thank you so much for the invitation to be here. And the previous two speakers already did part of my job because they explained better than I could do. What are the key characteristics of the UNESCO recommendations on ethics and AI? So probably they saved me a few seconds on this conversation. So the first important thing here, if you need to take just one element of my five minutes, is this one. We shouldn’t put innovation and protection and promotion of human rights as contradictory goals in this conversation about AI. It’s possible to innovate and at the same time protect and promote fundamental freedoms and human rights and be ethical. This should be actually our aim. We shouldn’t negotiate that. Actually, good innovation is very much related to the fact that we are not leaving anyone behind. Otherwise, it’s an innovation that benefits just a specific group in our society. So that said, the spoiler made, this is my conclusion. Let me just go into some specifics. UNESCO is celebrating its 80th anniversary this year. We were created together with the UN system in 1945. And if you look into the very first paragraph of the UNESCO constitution, you will see there that UNESCO is an organization that should promote the free flow of information and ideas. So from the very start of UNESCO, every technological revolution were brought to UNESCO to discuss, well, how we support this technological revolution and at the same time, we keep this mandate of protecting the free flow of information and ideas, which is broadly connected with the overall protection and promotion of the international human rights law system and the international standards that all UNESCO 194 member states have agreed on the first. So, as you can imagine, it’s easy to say, but it’s not that easy to do. And if we want to summarize in a nutshell, it’s what the previous two speakers already said. At the end of the day, when we are looking into and assessing these technological changes, in this case it’s AI, but a few years ago was another thing, tomorrow will be quantum or whatever, we are looking into three big things. We need to find ways to foster the huge opportunities we have with these technological revolutions and fostering these opportunities to everyone. We need to mitigate the risks and eventually we need to prosecute the harms. And it’s not one thing or the other, it’s one thing and the other. And that’s the basic of the governance system, how we do this, how we enhance the opportunities, mitigate the risks and eventually prosecute the harms. In the view of UNESCO and the United Nations system, we do that implementing the international agreements, the international standards that we have agreed on in the first place. In our case, for example, the Universal Declaration of Human Rights. So the UNESCO recommendation on ethics and AI actually is a translation to the AI sphere of these 80 years of history in dealing with the different technological changes that we had and how we can assess those changes, keeping in mind these original commitments and principles of our societies. Again, in our case, the human rights system and the Human Rights Declaration. So to conclude, practically speaking, what we are doing now is guaranteeing that our member states are capable of using the recommendation on ethics and AI and assess themselves to understand how ready they are to move to the next step. 
So more than 70 UNESCO member states have already implemented the readiness assessment methodology, which is a good self-assessment of what is needed to make the jump. And the second big pillar of this conversation for UNESCO vis-à-vis our member states is capacity building. It’s the first demand we have from our member states: to increase capacity building in these different areas. So very recently, at the beginning of June, we launched in Paris a global alliance of national schools of government and public administration in order to create processes of pre-service and in-service training of civil servants in the public sector on these issues. Then finally, to conclude, I’m also the secretary of an important intergovernmental program in UNESCO called Information for All, of which China is a very active member. And in that program, we are always emphasizing that if what we are doing here is not for everyone, for all, then we are missing something. And therefore we need to look into specific issues such as multilingualism, what we do for people with disabilities, or how we reduce gender gaps, and so on and so forth. So thank you, and a pleasure to participate in this conversation.
Moderator: Thank you, Mr. Godoy. Next, I will invite Professor Xiaofeng Tao from Beijing Normal University. She will address risks and responsible use of AI in higher education. Okay, Professor Qian, please. Okay, great.
Dr. Yik Chan Chin: Thank you. Thank you for giving me the opportunity to address the ethical issues here. I have chosen a particular topic, which is education: the ethics and risks of AI in education. As we know, AI poses some unique ethical risks to society, for example around fairness, transparency and privacy, especially in the education sector. Therefore, in the following presentation I will look at the risks of the application of generative AI in higher education. First of all, what kinds of applications do we use at the moment in higher education? For example, we use generative AI for academic writing and learning support, personalized adaptive learning, pedagogical support, creative education, and so on. And there are opportunities brought by the application of generative AI: more convenient student-centered and institutional opportunities, educational innovation and inclusion, reduced workload for the education sector, and freeing up teachers’ time, allowing them to focus on more excellent or innovative teaching. But at the same time, we also notice that much research has been looking at the risks of generative AI in the education sector, and there are several very salient risks. The first one is the acceleration of digital poverty, which means that data-poor countries will be excluded from the development of the large language models and the algorithms. The second is the lack of national supervision and regulation: the technology moves too fast, and most of the time it is the big companies who control the technology and know how to use and implement it well, while the regulators are lagging behind, so there are gaps between technological development and regulation. The third one is the unauthorized use of content; we know that there are many legal cases going on at the moment, such as intellectual property rights issues. The fourth one is the unexplainability of generative models: because of their black-box nature, it is difficult to understand the reasons why they generate specific content. The fifth one is the contamination of AI-generated content. I think yesterday a paper was published which talked about how ChatGPT, the large language model, is actually changed by the content generated by GPT itself. So there is a cycle: you train it on something which is not reliable, so the outcome is not reliable again, and there is a serious problem about the reliability of knowledge in the long term. Then there is the lack of real-world understanding: as we know, the large language models do not really understand the text or the output they give to you; we know the mechanism of how generative AI works. And there are two other risks which are quite significant when we look at education. If we use large language models without awareness of the risks, there is a big issue about the reduction of the diversity of opinions and the marginalization of minority voices, because the generated opinions reflect the most common or dominant positions. And there is the deepfake issue as well. So there are some policy recommendations, actually based on UNESCO’s report. Of course, every country gave some recommendations, but here I used a recommendation from the U.S. school.
So, for example, we need to educate schools and education institutions to improve their understanding of the potential benefits and the risks of artificial intelligence. They need to understand the potential benefits, but at the same time the risks, in education institutions. Secondly, we need to reflect on the long-term impact of generative AI on education research. This is because, as we noticed, most countries are still at a very early stage of adopting generative AI in education. We have some advanced countries like China, America, even the EU, but a lot of countries are actually still at an early stage of adapting to AI. So there is an urgent need for public debate and policy dialogue on the long-term impact of AI in education, but this kind of debate has to be multi-stakeholder and inclusive. The other one is the definition and implementation of age limitations for generative AI. What is the age limitation for users to use AI? Because most of the large AI models are designed for adults, that is, users who are 18 or at least 13 years old. So should we allow primary school students or middle school students under 13 to use ChatGPT? There is a risk as well, so we need to discuss this. And the last one is about data ownership. There is a huge debate about the concentration of data. Should we allow user-generated commercial data to be owned only by the large companies, which can then manipulate the data? We should have a broad debate about how we should use those data. So actually, we use a case study from China and the UK. I do not have much time left, but you are welcome to do some research about these two countries; I think it is a very interesting case study, because China just announced two guidance documents this year: the guidance for general artificial intelligence education in primary and secondary schools, and the guidance for the use of generative artificial intelligence. They give a very detailed plan for how to use AI in primary and secondary education. For example, they have a particular approach, moving from cognition to application to creation, so they have some pedagogical design, and they also want to promote international standards collaboration. The UK’s approach is slightly different. The UK recently, and last year, published very comprehensive guidance on using AI in education contexts, for example on the safe and effective use of AI. So they look more at the safety issue: safety should be the top priority when deciding whether to use AI in an educational setting. These two countries are very interesting for us to explore further. I think we need an urgent debate on this, especially for those countries that have not yet applied AI in their education. So I will stop here. Thank you very much.
Moderator: Okay, thank you, Professor Chin. Now I will invite our third speaker, Professor Ricardo Israel Robles Pelayo from Mexico, and he will share insights on innovation with responsibility: a legal and ethical perspective. Okay, Professor Pelayo, please.
Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a reflection on a topic that is crucial to our present and, above all, our future. Artificial intelligence stands both as a driver of innovation and as a legal and ethical challenge. AI is transforming all aspects of life, from daily activities to critical sectors such as healthcare, education, justice, and security. While AI offers great innovation potential, it also presents significant legal and ethical challenges. Its widespread use demands responsibility. Therefore, it is essential to establish guiding principles, values, regulations, and public policies to ensure the responsible and sustainable use of this resource. Based on my professional experience in Mexico and Latin America, education and the justice system are two key areas that can significantly shape how societies respond to the impact of AI. Let me first talk about AI as a source of stress for students. As I mentioned at the IGF in Berlin, it is urgent to apply big data in the design of educational policies. However, it is essential not to lose sight of what students experience in their daily lives. Hyperconnectivity and the constant use of AI-based tools can provoke anxiety, distraction, and technological dependence among young people. The pressure to stay updated, the overflow of information, and the algorithms that filter and shape what we consume directly affect their emotional well-being and their ability to develop critical thinking. Despite this, we are witnessing how many educational institutions and even the national education system are hastily incorporating AI without taking the time to analyze its ethical implications. It seems that the urgency to innovate has overtaken reflection on the human consequences this entails. And what about artificial intelligence in the justice system? AI is already being used in areas such as crime prediction, evidence analysis, and even sentencing recommendations. However, in countries like Mexico, where the justice system faces deep structural challenges, we must ask: is it legitimate to trust an algorithm trained with biased data? Can a judge delegate their human judgment to a machine? Recently, Mexico established a system for selecting judges that prioritizes popular election over academic training and technical experience, undermining impartiality and legal quality. This reality, combined with work overload, a lack of specialized training, and political and electoral pressures, may lead to a dangerous trend of using AI as a shortcut to issue rulings, disregarding the ethical principles and critical analysis that justice requires. And what about AI versus ethics, morality, and critical thinking? Artificial intelligence makes decisions based on data but lacks autonomy and ethical judgment, which can lead to bias. Ethics enables deliberation on what is right through philosophical principles, although it can become unjust when imposed authoritatively. Morality, rooted in social norms, may exclude if it is not inclusive. In this context, critical thinking is essential for evaluating and questioning automatisms and identifying risks; it serves as the key mediator to ensure fair and contextualized decisions. No system is infallible on its own. Critical thinking acts as the essential mediator for fair and contextualized decisions. In conclusion, artificial intelligence is already reshaping how we learn, judge, and protect ourselves. However, without ethical guidance, it can threaten human dignity.
Its misuse in education, justice, and cybersecurity highlights the need for strong regulations, inclusive policies, and a critically engaged society. Building a fair and sustainable digital world requires not just innovation, but shared responsibility and a focus on humanity. Innovation must not be an end in itself; it should be guided by law, ethics, and critical reflection. Mexico has the opportunity to build an inclusive, regulated, and people-centered artificial intelligence model. That would be true responsible innovation. So, thank you again for allowing me to share these ideas with you, and I look forward to continuing to collaborate in these important forums.
Moderator: Okay, thank you, Professor Pelayo. Now, I will welcome Dr. Daisy Selematsela from the University of Witwatersrand Library, South Africa, and discussing how education institutions leverage libraries to navigate AI challenges and opportunities. Okay, Dr. Selematsela, please.
Daisy Selematsela: Thank you. I just want to highlight on issues faced by academic libraries when we look at the integration of AI in the work that we do. And as you have heard from the other panelists here, issues around that are impacting on higher education and especially the pedagogy side. But coming from the library background, it’s quite important for us when we deal with the collections from academic libraries and how do we do see the interface between the collections that we have. So I just want to touch based on ethical concerns that we have and you have heard a lot that you heard from the previous speakers, that from ethical concerns for us from the library side, we handle vast data. And it’s quite crucial to navigate these ethical dilemmas responsibly from the side of the library side. When we look at technical challenges, we also are looking at issues such as interoperability of our systems because we then also align ourselves with international databases and e-resources and also a lack of technical expertise also hinder our seamless adoption. The other part I want to touch base on is the issues around financial constraints. And here is that as you know that when budgets are cut in universities, libraries are the first ones to be affected by the budget cuts. And this makes it difficult in investing in advanced AI solutions. The other aspect that’s a challenge is the job displacement fears. Concerns exist among staff about the potential use of AI replacing human jobs and actually and especially where we have tools that we use as librarians for the services that we provide. The other challenge that I want to highlight the issues around content digitization. At this day and age, we ensure that our special collections are digitized and we need to ensure that as part of the effectiveness of AI, we need to ensure high quality digital content is essential for optimal functionality. The other aspect is the users. Our library users, students, academics, and researchers at large. User adaptation is quite key, and education, data literacies, and you have heard about pedagogy, I won’t dwell much, but it’s quite key when we look at adoption of our technologies, especially in AR. The other aspect that we pick up as a challenge is interference, or the interference with traditional teaching and learning, and you have heard a little bit about the pedagogical aspect, it’s quite important. The other aspect is the data at risk, or the fragility of access, and here we’re looking at libraries implementing robust data protection strategies, such as encryption, anonymization, and access control measures, because that’s key for the work that we do. The other aspect is the constant uncertainty that we pick up when we are looking at the integration of AI in academic libraries, and here are some key aspects I want to highlight, data quality and reliability. Here we’re looking at the inconsistent or biased data that can lead to unreliable outcomes, making it challenging for us to maintain the accuracy and reliability of AI-driven services in libraries. The other aspect that’s key is technological dependence, and here we’re looking at issues around cyber attacks that actually compromise library resources and services, as you know that with libraries we deal mostly with international platforms and tools. 
The other aspect is evolving technologies, like I’ve highlighted, that we, for libraries globally, we tend to subscribe to similar tools and databases, and this requires also resource-intensive, which are quite resource-intense, and also impacts on the longevity and stability of our current system, and we need to keep abreast of the evolving technologies. The other aspect is on the issue of ethical and privacy concerns. When we use AI in libraries, we need to ensure that the AI systems that we have are transparent and fair, as an ongoing challenge. The other aspect is the user adaptation, and we can’t overemphasize the involvement of the users of the tools, the databases that we have in libraries, to ensure that equality and access is across the board, as we have heard from a colleague from UNESCO. So, educating users and building their confidence in these technologies, that’s the daily bread that we do in libraries, and it can become difficult. The other aspect is around regulator and policy changes. Libraries must stay informed and compliant with these changes, which can add to the uncertainty that we face. The other aspect that I want to, it’s the benefits that we look into. Under the benefits, the recap in academic libraries, it’s personalized learning. We need to ensure that AI, as the tool, can actually assist us in recommending books, articles, and other resources to ensure that personalized learning of the user. This tailored approach would also enhance the learning experience of our students and researchers. The other aspect is regarding automation of repetitive tasks. We do a lot of cataloging, which are routine work, and indexing and inventory management. And we see the use of AI actually allowing librarians to focus more on complex tasks and intellectually demanding activities, and that’s how we see the movement with the AI tools. Improved data management and analysis. Also, we see AI tools that can help and manage, analyze large data sets, making it easier to derive meaningful insight and support on research activities. And the other aspect, no library can work or be effective without access to its resources 24-7, irrespective of where you are located. And we’re seeing AI-powered chatbots. and virtual assistants can provide around-the-clock support to users answering queries and assisting library services at any given time. The other aspects that AI offers us that we pick as offering numerous benefits to academic libraries is enhanced search and discoverability. We can’t operate without AI-driven algorithms that can analyze vast data sets quickly, improving the accuracy and efficiency of search results. And this helps our researchers and our students to find relevant information faster. Enhanced accessibility, I can’t dwell much on that. It’s about the speech to text, the text-to-speech and other assistive technologies for disabled users and so forth. When we come to resource planning and collection development, we see predictive analytics that can help librarians plan their resources and develop collections that better meet the needs of their users. And this benefit actually highlight the transformative potential of AI in academic libraries, making them much more efficient, user-friendly, capable and supporting advanced research. The other part that we love much in libraries is the issues around repositories. 
And our repositories, whether it’s on data repositories or repositories about our collections, also we see the benefits where AI can serve as agents that can also enhance the functionalities of our repositories. And also here we’re looking at content generation, where this includes generating summaries, translations and even new research articles by analyzing and synthesizing information from the repository. The other aspect would be enhanced search and discoverability. Here we’re looking at AI agents that can improve the search and discover process by understanding user queries better and providing more relevant results. and they can also suggest related material that users might find useful. And that’s how we see the growth in how AI can be used in our repositories. Other aspects would relate to automated metadata creation and this will also assist us in reducing the manual effort required when we catalog our collections in the library. The other aspect will be on personalized recommendations. Here AI agents can offer personalized recommendations for books to a particular user, articles and other resources, enhancing the user’s experience. The other aspect will be on interactive learning. Selematsela, time is up. Okay, I will speak only for 5 minutes. Okay, I’m sorry.
Moderator: Okay, thank you Dr. Selematsela. For our special remarks, I will invite Ms. Zhang Xiao, Vice President of CNNIC, IGF MAG member and Executive Deputy Director of China IGF. Okay, Ms. Zhang, please share your thoughts.
Dr Zhang Xiao: Thank you everyone. I’m glad to be involved in this interesting discussion, and I have three points to share after listening to all the distinguished speakers. The first is why we are still talking about innovation and governance. I have heard some people, especially from industry, say that governance should not come too early because AI is still a young baby at this moment. But as we know, you cannot go on the road in a car without a brake. So actually, I think from Chinese philosophy, this is the beauty of the unity of opposites. Innovation and governance, especially something like ethics, are not opposites; they are aligned with each other in nature, in our philosophy, because without good governance we cannot go further. My second point is what we can learn from internet governance, because we are here at the Internet Governance Forum, and of course we know that internet governance has the beauty of the multi-stakeholder approach. But what is the difference between AI and the internet? As we know, the internet has connected all of us horizontally, but AI is changing each field vertically, so it will be more complex, especially on ethics and safety issues. So what we can learn from internet governance is definitely the multi-stakeholder approach, but we should go further, because it will be more complex. And the third question is what we can do in the future. Definitely, I think for AI governance we should find something in common. Of course, the multi-stakeholder approach is beautiful and we should listen to different people, but still we should go further and find something in common. One thing I think is particularly important is ethical values: there should be a set of ethics frameworks, as has been done by UNESCO. And I think human-centric is something we all recognize. Human-centric, because I think AI should be more about empowering and enabling people than just preventing or controlling the machines; there are humans behind the machines. So human-centric means we should raise the awareness of each person that something is going to happen. It has become too complex, and sometimes it is hard to find something in common, but still, let’s find something in common. I think that is awareness, human-centric values, and capacity building, which we are all doing, especially for AI. And I also think we should have some shared vocabulary, including frameworks of ethics and interoperable standards. So let’s find something in common while still allowing the beauty of multidisciplinary approaches, considering the different cultures and backgrounds. So actually I think it’s very important that we can learn something from internet governance, the multi-stakeholder approach, and also that we should be more enabling and responsible for AI, because it has become more complex and has ethics inside. That’s my very short comment. Thank you. Okay.
Moderator: Thank you, Ms. Zhang. So here we still only have six minutes; in my schedule we have a free discussion, but the time is almost up. Okay, I will make a conclusion. Three critical takeaways emerged. First, inclusion is a foundation: AI must bridge, not deepen, global divides, especially in the global South. Second, governance requires unity: national frameworks must harmonize with international norms like UNESCO’s ethics recommendation. And third, innovation thrives with guiding rails: ethics by design ensures AI serves humanity, not vice versa. So if you have some questions, we can have a discussion, and maybe we can have some more discussion and dialogue after the workshop. We believe that through our shared efforts, AI can become a force for good, advancing economic and social development and creating a better future for all. So if you have questions, you can ask them and our speakers may answer. We have four minutes; if you have a question, you can raise your hand. I’m very glad to meet you in this workshop. On behalf of CCIT, ISC and China IGF, I thank our brilliant speakers, engaged participants and the IGF Secretariat, especially the IGF Secretariat. Thank you, IGF, for giving us an important and equal platform to share our experiences, discuss problems and exchange opinions. We hope that the IGF will continue to be held; maybe that is our shared hope. Okay, the time is up. If you want to collaborate further, we can discuss and have more dialogue after the workshop. Okay. Thank you.
Guilherme Canela de Souza Godoi
Speech speed
130 words per minute
Speech length
794 words
Speech time
363 seconds
Innovation and protection of human rights should not be contradictory goals in AI development
Explanation
Godoi argues that it is possible to innovate while simultaneously protecting and promoting fundamental freedoms and human rights in an ethical manner. He emphasizes that good innovation is related to not leaving anyone behind, as innovation that benefits only specific groups is inadequate.
Evidence
UNESCO’s 80-year history of dealing with technological revolutions while maintaining mandate of protecting free flow of information and ideas; Universal Declaration of Human Rights as foundation
Major discussion point
AI Innovation and Responsible Development Framework
Topics
Human rights principles | Development
Agreed with
– Huang Chengqing
– Dr Zhang Xiao
Agreed on
Human-centric approach as fundamental principle for AI development
UNESCO’s 80-year experience in managing technological revolutions provides a foundation for AI governance
Explanation
Godoi explains that UNESCO was created in 1945 with a mandate to promote free flow of information and ideas, and has dealt with every technological revolution since then. The UNESCO recommendation on ethics and AI translates this 80-year experience to the AI sphere while maintaining original commitments to human rights principles.
Evidence
UNESCO constitution’s first paragraph about promoting free flow of information and ideas; Universal Declaration of Human Rights; UNESCO recommendation on ethics and AI
Major discussion point
Global Governance and International Cooperation
Topics
Legal and regulatory | Human rights principles
Agreed with
– Ke GONG
– Huang Chengqing
– Moderator
Agreed on
Need for international collaboration and unified governance frameworks
Good innovation should benefit everyone, not just specific groups in society
Explanation
Godoi emphasizes that if innovation is not for everyone, then something is missing. He stresses the need to look into specific issues such as multilingualism, accessibility for people with disabilities, and reducing gender gaps to ensure inclusive development.
Evidence
UNESCO’s Information for All program where China is an active member; focus on multilingualism, people with disabilities, and gender gaps
Major discussion point
Inclusive Development and Digital Divide
Topics
Development | Human rights principles
Agreed with
– Ke GONG
– Dr. Yik Chan Chin
– Moderator
Agreed on
AI must address rather than exacerbate digital divides and inequality
Capacity building is the primary demand from UNESCO member states for AI implementation
Explanation
Godoi states that capacity building is the first demand UNESCO receives from member states regarding AI implementation. UNESCO has launched initiatives to address this need through training programs for civil servants and public sector workers.
Evidence
Over 70 UNESCO member states have implemented readiness assessment methodology; launch of global alliance of national schools of governments and public administration in June in Paris
Major discussion point
Inclusive Development and Digital Divide
Topics
Capacity development | Development
Huang Chengqing
Speech speed
83 words per minute
Speech length
704 words
Speech time
503 seconds
AI development must be human-oriented and follow the principle of “intelligence for good”
Explanation
Huang argues that AI technology should ultimately promote human well-being and enhance the overall happiness and quality of life for people. He emphasizes that ensuring AI innovation develops in a human-oriented direction is a crucial issue that needs urgent attention.
Evidence
China’s Global AI Governance Initiative released in October 2023, emphasizing people-oriented principles and intelligence for good based on community with shared future for mankind
Major discussion point
AI Innovation and Responsible Development Framework
Topics
Human rights principles | Development
Agreed with
– Guilherme Canela de Souza Godoi
– Dr Zhang Xiao
Agreed on
Human-centric approach as fundamental principle for AI development
Countries should participate in global AI governance with a sense of responsibility
Explanation
Huang emphasizes that AI innovation and development is a global affair requiring international community participation and cooperation. He argues that countries must engage in global AI governance responsibly to address the challenges and ensure sustainable development.
Evidence
UNESCO’s 193 member states unanimously adopted recommendation on Ethics of Artificial Intelligence in November 2021; China’s Global AI Governance Initiative as example of responsible participation
Major discussion point
Global Governance and International Cooperation
Topics
Legal and regulatory | Development
Agreed with
– Ke GONG
– Guilherme Canela de Souza Godoi
– Moderator
Agreed on
Need for international collaboration and unified governance frameworks
AI implementation requires extensive participation from all sectors of society, not just government guidance
Explanation
Huang argues that promoting AI innovation according to ethical principles requires not only government guidance but also wide participation from all sectors of society. He emphasizes the role of civil society organizations in promoting self-discipline and social responsibility.
Evidence
Internet Society of China’s work in promoting self-discipline and social responsibility in China’s Internet industry; release of initiative documents and industry conventions; China IGF’s participation in global Internet governance
Major discussion point
Inclusive Development and Digital Divide
Topics
Legal and regulatory | Development
Agreed with
– Dr Zhang Xiao
– Guilherme Canela de Souza Godoi
Agreed on
Multi-stakeholder participation essential for effective AI governance
Ke GONG
Speech speed
81 words per minute
Speech length
354 words
Speech time
261 seconds
International collaboration and consensus are essential to maximize AI’s potential while minimizing negative impacts
Explanation
Gong argues that to maximize AI’s potential for sustainable development while minimizing negative impacts, international collaboration and consensus are essential. This includes technical innovation to enhance AI’s capabilities and proper regulation based on global consensus on AI principles and standards.
Evidence
CCIT’s participation in all 20 editions of IGF over 20 years; need for technical innovation to enhance AI’s explainability, transparency, safety, and robustness
Major discussion point
Global Governance and International Cooperation
Topics
Legal and regulatory | Development
Agreed with
– Huang Chengqing
– Guilherme Canela de Souza Godoi
– Moderator
Agreed on
Need for international collaboration and unified governance frameworks
National frameworks must align with UN Global Digital Compact and UNESCO’s ethical recommendations
Explanation
Gong emphasizes that global governance requires national frameworks to align with international standards, specifically mentioning the United Nations Global Digital Compact and UNESCO’s recommendation on the ethics of artificial intelligence. This alignment is crucial for effective AI governance.
Evidence
Reference to UN Global Digital Compact and UNESCO’s recommendation on ethics of artificial intelligence as key international frameworks
Major discussion point
Global Governance and International Cooperation
Topics
Legal and regulatory | Human rights principles
AI systems lack explainability, transparency, and may contain bias and discrimination
Explanation
Gong identifies key challenges and risks in AI systems, including lack of explainability and transparency in big AI models, weak robustness and precision, and potential bias and discrimination. He also warns about the danger of exacerbating existing digital divides between and within countries.
Evidence
Identification of specific technical challenges: lack of explainability and transparency in big AI models, weak robustness and precision, potential bias and discrimination
Major discussion point
Ethical Concerns and Risk Mitigation
Topics
Human rights principles | Legal and regulatory
Policies must safeguard technology access for developing nations and prevent AI from worsening digital disparities
Explanation
Gong addresses the need for inclusive development policies that ensure technology access for developing nations and prevent AI from exacerbating digital divides. This is presented as one of three core policy dimensions that the workshop aims to address.
Evidence
Identification of digital divides both between and within countries as a key challenge; inclusive development as one of three core policy dimensions
Major discussion point
Inclusive Development and Digital Divide
Topics
Development | Digital access
Agreed with
– Guilherme Canela de Souza Godoi
– Dr. Yik Chan Chin
– Moderator
Agreed on
AI must address rather than exacerbate digital divides and inequality
Dr. Yik Chan Chin
Speech speed
143 words per minute
Speech length
1105 words
Speech time
460 seconds
AI poses risks of accelerating digital poverty, lack of supervision, and unauthorized content use in education
Explanation
Dr. Chin identifies several salient risks of generative AI in education, including acceleration of digital poverty where data-poor countries are excluded from AI development, lack of national supervision and regulation due to the fast pace of technology, and unauthorized use of content leading to intellectual property issues.
Evidence
Legal cases regarding intellectual property rights; gap between technological development and regulation; exclusion of data-poor countries from large language model development
Major discussion point
Educational Sector Challenges and Opportunities
Topics
Online education | Legal and regulatory | Development
Agreed with
– Ke GONG
– Guilherme Canela de Souza Godoi
– Moderator
Agreed on
AI must address rather than exacerbate digital divides and inequality
Disagreed with
– Ricardo Israel Robles Pelayo
Disagreed on
Pace and approach to AI implementation in education
There are concerns about reliability of AI-generated content and reduction of diversity of opinions
Explanation
Dr. Chin warns about contamination from AI-generated content, citing how ChatGPT and other large language models can be altered by the content they themselves generate, creating unreliable knowledge cycles. She also expresses concern that AI reduces the diversity of opinions and marginalizes minority voices by favoring dominant positions.
Evidence
Recent paper on how the ChatGPT large language model is changed by GPT-generated content; AI’s tendency to favor the most common or dominant positions over minority voices
Major discussion point
Ethical Concerns and Risk Mitigation
Topics
Human rights principles | Content policy
Ricardo Israel Robles Pelayo
Speech speed
113 words per minute
Speech length
633 words
Speech time
333 seconds
Educational institutions are hastily incorporating AI without analyzing ethical implications
Explanation
Pelayo argues that many educational institutions and national education systems are rapidly incorporating AI without taking time to analyze ethical implications. He suggests that the urgency to innovate has overtaken reflection on human consequences, particularly regarding student stress and technological dependence.
Evidence
Observation of hyperconnectivity and constant AI tool use causing anxiety, distraction, and technological dependence among young people; pressure to stay updated and information overflow affecting emotional well-being
Major discussion point
Educational Sector Challenges and Opportunities
Topics
Online education | Human rights principles
Disagreed with
– Dr. Yik Chan Chin
Disagreed on
Pace and approach to AI implementation in education
Innovation must be guided by law, ethics, and critical reflection rather than being an end in itself
Explanation
Pelayo emphasizes that innovation should not be pursued as an end in itself but must be guided by legal frameworks, ethical considerations, and critical reflection. He argues for building an inclusive, regulated, and people-centered AI model as true responsible innovation.
Evidence
Mexico’s opportunity to build inclusive, regulated, and people-centered AI model; emphasis on shared responsibility and focus on humanity
Major discussion point
AI Innovation and Responsible Development Framework
Topics
Legal and regulatory | Human rights principles
Critical thinking serves as essential mediator for fair and contextualized AI decisions
Explanation
Pelayo argues that AI makes decisions based on data but lacks autonomy and ethical judgment, and that ethics and morality have their own limitations; critical thinking therefore serves as the key mediator to ensure fair and contextualized decisions. He emphasizes that no system is infallible on its own.
Evidence
Analysis of AI’s data-based decision making without ethical judgment; ethics’ potential for injustice when imposed authoritatively; morality’s potential for exclusion if not inclusive
Major discussion point
Ethical Concerns and Risk Mitigation
Topics
Human rights principles | Legal and regulatory
Daisy Selematsela
Speech speed
145 words per minute
Speech length
1298 words
Speech time
533 seconds
Academic libraries face challenges with data management, technical expertise, and financial constraints in AI integration
Explanation
Selematsela outlines multiple challenges academic libraries face in AI integration, including ethical concerns in handling vast amounts of data, technical challenges such as system interoperability and a lack of expertise, and financial constraints, as libraries are often the first to be affected by university budget cuts.
Evidence
Libraries being first affected by university budget cuts; need for alignment with international databases and e-resources; lack of technical expertise hindering adoption
Major discussion point
Educational Sector Challenges and Opportunities
Topics
Online education | Infrastructure | Development
AI can provide personalized learning, automation of tasks, and enhanced accessibility in educational settings
Explanation
Selematsela highlights numerous benefits AI offers to academic libraries, including personalized learning through resource recommendations, automation of repetitive tasks like cataloging and indexing, and enhanced accessibility through assistive technologies. She also mentions 24/7 support through AI-powered chatbots and improved search capabilities.
Evidence
AI-powered chatbots and virtual assistants for round-the-clock support; speech-to-text and text-to-speech technologies for disabled users; predictive analytics for resource planning; automated metadata creation
Major discussion point
Educational Sector Challenges and Opportunities
Topics
Online education | Rights of persons with disabilities | Development
Dr Zhang Xiao
Speech speed
132 words per minute
Speech length
501 words
Speech time
227 seconds
Multi-stakeholder collaboration is essential but AI governance is more complex than internet governance due to vertical impact across sectors
Explanation
Zhang argues that while internet governance’s multi-stakeholder approach is valuable, AI governance is more complex because, unlike the internet, which connects people horizontally, AI transforms each field vertically. This vertical impact across sectors is particularly challenging for ethics and safety issues.
Evidence
Comparison between internet’s horizontal connectivity and AI’s vertical field-specific changes; reference to Internet Governance Forum’s multi-stakeholder approach
Major discussion point
Global Governance and International Cooperation
Topics
Legal and regulatory | Interdisciplinary approaches
Agreed with
– Huang Chengqing
– Guilherme Canela de Souza Godoi
Agreed on
Multi-stakeholder participation essential for effective AI governance
Human-centric approach should be the common framework for AI ethics
Explanation
Zhang emphasizes that human-centric principles should be the common framework for AI ethics, arguing that AI should be more empowering and enabling for people rather than just preventing or controlling machines. She stresses the importance of raising awareness and focusing on the humans behind the machines.
Evidence
Reference to UNESCO’s ethics frameworks; emphasis on AI being empowering and enabling rather than controlling; importance of raising individual awareness
Major discussion point
Ethical Concerns and Risk Mitigation
Topics
Human rights principles | Development
Agreed with
– Guilherme Canela de Souza Godoi
– Huang Chengqing
Agreed on
Human-centric approach as fundamental principle for AI development
Moderator
Speech speed
81 words per minute
Speech length
912 words
Speech time
674 seconds
AI’s transformative power must align with ethical imperatives at this critical moment
Explanation
The moderator emphasizes that we are at an important juncture where AI’s transformative capabilities need to be balanced with ethical considerations. The workshop aims to explore how to foster innovation while ensuring responsibility, inclusivity and alignment with global frameworks.
Evidence
Reference to the Global Digital Compact and UNESCO’s ethical frameworks as guiding principles
Major discussion point
AI Innovation and Responsible Development Framework
Topics
Human rights principles | Development
Three critical takeaways emerged from the discussion on AI governance
Explanation
The moderator summarizes three key conclusions: inclusion as the foundation, meaning AI must bridge rather than deepen global divides; governance requiring unity through harmonized national and international frameworks; and innovation thriving with ethical guidelines. These takeaways represent the core consensus from the workshop discussions.
Evidence
Reference to UNESCO’s ethics recommendations and the need for AI to serve humanity rather than vice versa
Major discussion point
Global Governance and International Cooperation
Topics
Human rights principles | Development | Legal and regulatory
Shared efforts can make AI a force for good advancing economic and social development
Explanation
The moderator concludes that through collaborative efforts, AI can become a positive force that advances both economic and social development while creating a better future for all. This represents an optimistic vision for AI’s potential when properly governed and ethically implemented.
Evidence
Workshop discussions and speaker presentations demonstrating various approaches to responsible AI development
Major discussion point
AI Innovation and Responsible Development Framework
Topics
Development | Human rights principles
Agreements
Agreement points
Human-centric approach as fundamental principle for AI development
Speakers
– Guilherme Canela de Souza Godoi
– Huang Chengqing
– Dr Zhang Xiao
Arguments
Innovation and protection of human rights should not be contradictory goals in AI development
AI development must be human-oriented and follow the principle of “intelligence for good”
Human-centric approach should be the common framework for AI ethics
Summary
All three speakers emphasize that AI development must prioritize human welfare and rights, with AI serving humanity rather than the reverse. They agree that human-centric principles should guide AI innovation and governance.
Topics
Human rights principles | Development
Need for international collaboration and unified governance frameworks
Speakers
– Ke GONG
– Huang Chengqing
– Guilherme Canela de Souza Godoi
– Moderator
Arguments
International collaboration and consensus are essential to maximize AI’s potential while minimizing negative impacts
Countries should participate in global AI governance with a sense of responsibility
UNESCO’s 80-year experience in managing technological revolutions provides a foundation for AI governance
Governance requiring unity through harmonized national and international frameworks
Summary
Speakers unanimously agree that AI governance requires international cooperation and alignment between national frameworks and global standards like UNESCO’s recommendations and the UN Global Digital Compact.
Topics
Legal and regulatory | Development | Human rights principles
Multi-stakeholder participation essential for effective AI governance
Speakers
– Huang Chengqing
– Dr Zhang Xiao
– Guilherme Canela de Souza Godoi
Arguments
AI implementation requires extensive participation from all sectors of society, not just government guidance
Multi-stakeholder collaboration is essential but AI governance is more complex than internet governance due to vertical impact across sectors
Good innovation should benefit everyone, not just specific groups in society
Summary
Speakers agree that effective AI governance cannot rely solely on government action but requires participation from all sectors of society, though they acknowledge AI governance is more complex than previous technological governance models.
Topics
Legal and regulatory | Development | Interdisciplinary approaches
AI must address rather than exacerbate digital divides and inequality
Speakers
– Ke GONG
– Guilherme Canela de Souza Godoi
– Dr. Yik Chan Chin
– Moderator
Arguments
Policies must safeguard technology access for developing nations and prevent AI from worsening digital disparities
Good innovation should benefit everyone, not just specific groups in society
AI poses risks of accelerating digital poverty, lack of supervision, and unauthorized content use in education
AI must bridge rather than deepen global divides
Summary
All speakers recognize that AI development risks widening existing inequalities and agree that inclusive policies are essential to ensure AI benefits all populations, particularly in developing nations and educational contexts.
Topics
Development | Digital access | Human rights principles
Similar viewpoints
Both speakers express concern about the rushed implementation of AI in educational settings without proper consideration of ethical implications and risks to students and educational quality.
Speakers
– Ricardo Israel Robles Pelayo
– Dr. Yik Chan Chin
Arguments
Educational institutions are hastily incorporating AI without analyzing ethical implications
AI poses risks of accelerating digital poverty, lack of supervision, and unauthorized content use in education
Topics
Online education | Human rights principles | Legal and regulatory
Both speakers emphasize the importance of human judgment and critical thinking in AI systems, arguing that technology should empower rather than replace human decision-making capabilities.
Speakers
– Ricardo Israel Robles Pelayo
– Dr Zhang Xiao
Arguments
Critical thinking serves as essential mediator for fair and contextualized AI decisions
Human-centric approach should be the common framework for AI ethics
Topics
Human rights principles | Development
Both speakers highlight the practical challenges educational institutions face in implementing AI, including resource constraints, technical limitations, and the need for proper oversight and regulation.
Speakers
– Daisy Selematsela
– Dr. Yik Chan Chin
Arguments
Academic libraries face challenges with data management, technical expertise, and financial constraints in AI integration
AI poses risks of accelerating digital poverty, lack of supervision, and unauthorized content use in education
Topics
Online education | Development | Infrastructure
Unexpected consensus
Innovation and ethics as complementary rather than competing forces
Speakers
– Guilherme Canela de Souza Godoi
– Ricardo Israel Robles Pelayo
– Moderator
Arguments
Innovation and protection of human rights should not be contradictory goals in AI development
Innovation must be guided by law, ethics, and critical reflection rather than being an end in itself
Innovation thriving with ethical guidelines
Explanation
Unexpectedly, speakers from different backgrounds (UNESCO, academia, moderation) converged on rejecting the common industry narrative that ethics and regulation slow innovation, instead arguing they are mutually reinforcing.
Topics
Human rights principles | Legal and regulatory | Development
AI governance complexity exceeding internet governance challenges
Speakers
– Dr Zhang Xiao
– Dr. Yik Chan Chin
– Daisy Selematsela
Arguments
Multi-stakeholder collaboration is essential but AI governance is more complex than internet governance due to vertical impact across sectors
There are concerns about reliability of AI-generated content and reduction of diversity of opinions
Academic libraries face challenges with data management, technical expertise, and financial constraints in AI integration
Explanation
Despite coming from different sectors (governance, education, libraries), these speakers unexpectedly agreed that AI presents fundamentally different and more complex challenges than previous internet governance models, requiring new approaches.
Topics
Legal and regulatory | Interdisciplinary approaches | Online education
Overall assessment
Summary
The speakers demonstrated remarkable consensus on core principles: human-centric AI development, need for international cooperation, multi-stakeholder participation, and inclusive development that addresses rather than exacerbates digital divides. They also agreed on the complexity of AI governance challenges and the complementary relationship between innovation and ethics.
Consensus level
High level of consensus on fundamental principles with strong implications for AI governance. The agreement across diverse stakeholders (UNESCO, government, academia, civil society) from different regions (China, Mexico, South Africa, Brazil) suggests these principles have broad international support and could form the foundation for global AI governance frameworks. The consensus particularly strengthens the legitimacy of UNESCO’s ethical recommendations and the multi-stakeholder approach pioneered in internet governance, while acknowledging the need for more sophisticated governance mechanisms for AI’s unique challenges.
Differences
Different viewpoints
Pace and approach to AI implementation in education
Speakers
– Dr. Yik Chan Chin
– Ricardo Israel Robles Pelayo
Arguments
AI poses risks of accelerating digital poverty, lack of supervision, and unauthorized content use in education
Educational institutions are hastily incorporating AI without analyzing ethical implications
Summary
Both speakers identify problems with current AI implementation in education, but Dr. Chin focuses on systemic risks like digital poverty and regulatory gaps, while Pelayo emphasizes the rushed adoption without ethical consideration and its psychological impact on students.
Topics
Online education | Human rights principles | Legal and regulatory
Unexpected differences
Complexity of AI governance compared to internet governance
Speakers
– Dr Zhang Xiao
Arguments
Multi-stakeholder collaboration is essential but AI governance is more complex than internet governance due to vertical impact across sectors
Explanation
Zhang uniquely argues that AI governance is fundamentally more complex than internet governance because AI impacts sectors vertically rather than connecting horizontally like the internet. This perspective wasn’t directly addressed by other speakers, creating an implicit disagreement about whether existing internet governance models are sufficient for AI.
Topics
Legal and regulatory | Interdisciplinary approaches
Overall assessment
Summary
The speakers showed remarkable consensus on core principles (human-centric AI, need for ethical frameworks, international cooperation) but differed primarily on implementation approaches and emphasis. The main areas of disagreement were subtle, focusing on methodology rather than fundamental goals.
Disagreement level
Low to moderate disagreement level. The speakers largely agreed on fundamental principles but showed different perspectives on implementation strategies, regulatory approaches, and the complexity of governance challenges. This suggests a mature field where basic principles are established but practical implementation remains contested.
Takeaways
Key takeaways
Innovation and human rights protection should be complementary, not contradictory goals in AI development
AI governance requires a human-centric approach with international collaboration and consensus-building
Multi-stakeholder participation is essential, but AI governance is more complex than internet governance due to its vertical impact across all sectors
Educational institutions are rapidly adopting AI without sufficient consideration of ethical implications and long-term consequences
Three core policy dimensions must be addressed: inclusive development, global governance alignment, and multi-stakeholder collaboration
Critical thinking serves as an essential mediator for ensuring fair and contextualized AI decisions
Capacity building is the primary demand from member states for responsible AI implementation
AI must bridge rather than deepen global digital divides, particularly affecting developing nations
Resolutions and action items
UNESCO member states should implement the readiness assessment methodology for AI ethics (over 70 states have already done so)
Educational institutions need to improve understanding of AI benefits and risks before implementation
Countries should develop national frameworks that align with UN Global Digital Compact and UNESCO ethical recommendations
Academic libraries should implement robust data protection strategies including encryption, anonymization, and access control measures
Establish age limitations and guidelines for generative AI use in educational settings
Promote international collaboration through shared vocabulary, ethical frameworks, and interoperable standards
Unresolved issues
How to effectively balance rapid AI innovation with adequate regulatory oversight
Specific mechanisms for cross-border and cross-sector collaboration in current geopolitical context
Data ownership and concentration issues regarding user-generated commercial data
Age limitations for AI use in primary and secondary education settings
How to address the reliability concerns of AI-generated content contaminating training data
Specific strategies for preventing AI from accelerating digital poverty in developing nations
How to maintain diversity of opinions and prevent marginalization of minority voices in AI systems
Suggested compromises
Find common ground through shared ethical frameworks while allowing for cultural and contextual differences
Balance innovation speed with responsible development by implementing ‘ethical by design’ principles
Combine government guidance with extensive participation from all sectors of society
Learn from internet governance’s multi-stakeholder approach while adapting to AI’s greater complexity
Focus on human-centric values as universal foundation while accommodating different national approaches
Prioritize capacity building and awareness-raising as foundational steps before full AI implementation
Thought provoking comments
We shouldn’t put innovation and protection and promotion of human rights as contradictory goals in this conversation about AI. It’s possible to innovate and at the same time protect and promote fundamental freedoms and human rights and be ethical. This should be actually our aim. We shouldn’t negotiate that. Actually, good innovation is very much related to the fact that we are not leaving anyone behind.
Speaker
Guilherme Canela de Souza Godoi (UNESCO)
Reason
This comment reframes the entire AI governance debate by challenging the false dichotomy between innovation and ethics. It’s particularly insightful because it positions ethical considerations not as barriers to innovation, but as essential components of truly beneficial innovation.
Impact
This comment set the philosophical foundation for the entire discussion, establishing that the workshop would not debate whether to choose between innovation or ethics, but rather how to achieve both simultaneously. It influenced subsequent speakers to focus on practical implementation rather than justifying the need for ethical AI.
The contamination of AI-generated content… there’s a circle, you know, you change it on something which is unreliable so the result is, the outcome is unreliable again, so there’s a serious problem about the reliability of knowledge in the long term
Speaker
Dr. Yik Chan Chin (Beijing Normal University)
Reason
This observation about the recursive degradation of AI-generated content introduces a profound epistemological concern that hadn’t been addressed by previous speakers. It highlights how AI systems trained on AI-generated content could lead to a deterioration of knowledge quality over time.
Impact
This comment shifted the discussion from immediate ethical concerns to long-term systemic risks, introducing a new dimension of complexity that other speakers hadn’t considered. It deepened the conversation by highlighting how current AI practices could have cascading effects on future knowledge systems.
What’s the difference between AI and internet? As we know, the internet has connected all of us horizontally. But for AI, it has changed each field vertically. So they’ll be more complex, especially in ethics and safety issues.
Speaker
Dr Zhang Xiao (China IGF)
Reason
This metaphor brilliantly distinguishes between horizontal connectivity (internet) and vertical transformation (AI), providing a new framework for understanding why AI governance is fundamentally different and more complex than internet governance.
Impact
This comment provided a conceptual breakthrough that helped explain why existing internet governance models, while useful, are insufficient for AI governance. It influenced the discussion’s conclusion by highlighting the need for new approaches that account for AI’s sector-specific vertical impacts.
Despite this, we are witnessing how many educational institutions and even the national education system are hastily incorporating AI without taking the time to analyze its ethical implications. It seems that the urgency to innovate has overtaken the reflection on the human consequences this entails.
Speaker
Ricardo Israel Robles Pelayo (Mexico)
Reason
This observation critically challenges the rush to adopt AI in education and justice systems, highlighting the dangerous gap between technological implementation and ethical reflection. It’s particularly powerful because it connects abstract ethical principles to concrete institutional failures.
Impact
This comment introduced a sense of urgency and critique that hadn’t been present in earlier presentations, shifting the tone from theoretical discussion to practical concern about current harmful practices. It prompted deeper consideration of implementation timelines and the need for ethical frameworks before, not after, AI adoption.
When budgets are cut in universities, libraries are the first ones to be affected by the budget cuts. And this makes it difficult in investing in advanced AI solutions… The other challenge that I want to highlight the issues around content digitization.
Speaker
Daisy Selematsela (University of Witwatersrand)
Reason
This comment grounds the AI ethics discussion in practical resource constraints and infrastructure challenges, particularly highlighting how global inequalities manifest in AI adoption. It brings attention to often-overlooked institutional players (libraries) in AI governance.
Impact
This perspective added a crucial dimension of practical implementation challenges that the previous speakers had not addressed, showing how ethical AI principles must account for resource disparities and institutional constraints, particularly in developing countries.
Overall assessment
These key comments collectively transformed what could have been a theoretical discussion about AI ethics into a nuanced, multi-dimensional conversation that addressed philosophical foundations, practical implementation challenges, and long-term systemic risks. The UNESCO representative’s opening reframing eliminated false dichotomies and set a collaborative tone. Dr. Chin’s insight about recursive content degradation introduced temporal complexity to the discussion. Dr. Zhang’s horizontal/vertical metaphor provided a new conceptual framework for understanding AI governance complexity. Professor Pelayo’s critique of hasty implementation added urgency and practical grounding, while Dr. Selematsela’s focus on resource constraints highlighted global inequality issues. Together, these comments created a comprehensive discussion that moved from abstract principles to concrete challenges, establishing both the philosophical necessity and practical complexity of responsible AI development.
Follow-up questions
How can policies safeguard technology access for developing nations and prevent AI from worsening digital disparities?
Speaker
Ke GONG
Explanation
This addresses the critical issue of inclusive development and ensuring AI benefits reach all countries, not just developed ones
How can national frameworks align with the United Nations Global Digital Compact and operationalize UNESCO’s recommendation on the ethics of artificial intelligence?
Speaker
Ke GONG
Explanation
This focuses on harmonizing global governance approaches and ensuring consistent implementation of ethical AI principles across nations
What mechanism models can foster effective cross-sector and cross-border collaboration, especially in today’s geopolitical context?
Speaker
Ke GONG
Explanation
This addresses the challenge of maintaining international cooperation on AI governance despite current geopolitical tensions
What is the age limitation for users to use AI? Should we allow primary school students, or middle school students under 13, to use ChatGPT?
Speaker
Dr. Yik Chan Chin
Explanation
This raises important questions about age-appropriate AI use in education and the need for age-based restrictions on AI tools
Should we allow user-generated commercial data to be owned only by large companies, which could then manipulate the data?
Speaker
Dr. Yik Chan Chin
Explanation
This addresses critical questions about data ownership, concentration of power, and fair use of user-generated data in AI systems
Is it legitimate to trust an algorithm trained with biased data? Can a judge delegate their human judgment to a machine?
Speaker
Ricardo Israel Robles Pelayo
Explanation
This raises fundamental questions about the role of AI in judicial systems and the limits of algorithmic decision-making in justice
How do we ensure that AI systems in libraries remain transparent and fair, given that this is an ongoing challenge?
Speaker
Daisy Selematsela
Explanation
This addresses the need for ongoing research into maintaining transparency and fairness in AI systems used in academic and library contexts
What can we learn from internet governance for AI governance, considering that AI changes each field vertically while the internet connects people horizontally?
Speaker
Dr Zhang Xiao
Explanation
This suggests the need for research into adapting internet governance models for the more complex, sector-specific challenges of AI governance
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.