Day 0 Event #172 Major challenges and gaps in intelligent society governance

Session at a Glance

Summary

This discussion focused on the development and governance of intelligent societies, exploring various aspects of AI and its impact on global development. The speakers addressed China’s objectives in building an intelligent society, emphasizing the importance of human-centered approaches and ethical considerations. They highlighted the need for international cooperation in addressing challenges such as energy consumption and environmental impacts of AI development.


The discussion explored the paradigm shift in AI governance, noting the transition from material technological subjects to human-like technological subjects, and the importance of flexible, open governance frameworks. Speakers emphasized the role of standardization in addressing opportunities and challenges in building intelligent social culture and civilization.


The potential of AI in addressing global issues like climate change and achieving sustainable development goals was discussed, with a focus on leveraging AI for social progress while maintaining human values and rights. The importance of interdisciplinary approaches and global cooperation in AI governance was stressed, with speakers calling for diverse, multidisciplinary panels to guide AI development.


Speakers also addressed the need for transparency in AI decision-making, the reshaping of knowledge production and social structures by generative AI, and the importance of aligning AI with human values. The discussion highlighted the transformative potential of AI across various sectors, including healthcare, education, and public services, while also acknowledging the need for responsible development and governance.


Overall, the discussion underscored the complex interplay between technological advancement, social impact, and governance challenges in the development of intelligent societies, emphasizing the need for collaborative, human-centered approaches to harness the benefits of AI while mitigating potential risks.


Keypoints

Major discussion points:


– China’s national plans and actions for building an intelligent society


– International governance challenges for AI, especially related to energy use and environmental impacts


– Philosophical and societal implications of generative AI and cognitive computing systems


– Governance transformation and standardization needed for the intelligent society


– Ensuring AI development is human-centered and supports sustainable development goals


Overall purpose:


The purpose of this discussion was to explore various perspectives on the development of intelligent societies powered by AI, examining both the opportunities and challenges from technological, governance, philosophical and global development standpoints. The speakers aimed to provide insights on how to responsibly advance AI while addressing key issues like environmental impacts, ethics, and human values.


Tone:


The overall tone was academic and forward-looking. Speakers presented research findings and policy recommendations in a formal, analytical manner. There was a sense of cautious optimism about AI’s potential balanced with calls for responsible governance and development. The tone remained consistent throughout, with each speaker building on previous points while adding their own area of expertise to the discussion.


Speakers

– Yuming Wei: Professor at Tsinghua University, moderator of the session


– Gong Ke: Former president of the World Federation of Engineering Organizations, executive director of Chinese Institute of New Generation Artificial Intelligence Development Strategies, former president of Nankai University


– Kevin C. Desuoza: Professor of Business, Technology, and Strategy at the School of Business, Queensland University of Technology


– Ru Peng: Professor at the School of Public Policy and Management, Tsinghua University


– Sam Daws: Senior advisor at the Oxford Martin AI Governance Initiative, Oxford University


– Min Jianing: Professor at Harbin Institute of Technology and Editor-in-Chief of the Journal of Public Administration


– Poncelet Ileleji: CEO of Jack Cook Labs, Banjul, Gambia




Full session report

The Development and Governance of Intelligent Societies: A Comprehensive Overview


This discussion, moderated by Professor Yuming Wei of Tsinghua University, brought together experts from various fields to explore the multifaceted aspects of developing and governing intelligent societies powered by artificial intelligence (AI). The session featured a mix of in-person and online presentations, addressing a wide range of topics from national strategies to global governance challenges.


China’s National Strategy for an Intelligent Society


Gong Ke, former president of the World Federation of Engineering Organizations and executive director of the Chinese Institute of New Generation Artificial Intelligence Development Strategies, outlined China’s comprehensive national plan for AI development. He introduced the “1-2-3-4 planning” framework:


1. One national open and collaborative AI technological innovation system


2. Two attributes of AI to be mastered: its technical feature and its social feature


3. Three-in-one promotion: advancing technical R&D, production manufacturing, and industrial nurturing in concert


4. Four areas to be supported by AI development: science and technology innovation, economic growth, social progress, and national security


This approach demonstrates China’s commitment to leveraging AI for societal advancement while addressing potential challenges.


Global Governance and International Collaboration


Sam Daws from Oxford University emphasized the critical importance of global governance and international collaboration in addressing AI challenges. He highlighted several key points:


1. The need for interoperable approaches to sustainable AI development


2. Opportunities for collaboration leading up to COP30 in 2025


3. The role of the UN Technology Envoy in facilitating international cooperation


4. The importance of addressing AI’s environmental impact, including high energy and water consumption


Daws introduced the concept of the Jevons Paradox to explain why AI energy use continues to rise despite efficiency gains, suggesting that increased efficiency may lead to increased overall consumption.


Societal Impact and Ethical Considerations


Min Jianing, Professor at Harbin Institute of Technology, presented ten epistemological questions on generative AI, exploring how large language models are transforming knowledge production, decision-making processes, and social structures. His presentation covered:


1. The intrinsic mechanisms of evolution in knowledge production triggered by AI models


2. Potential reshaping of human society and relations of production


3. The impact of AI on human nature and existing beliefs


4. The possibility of value upgrades through openness and neutral learning


Kevin C. Desuoza from Queensland University of Technology discussed cognitive computing systems and their role in public value creation. He highlighted how AI is transforming approaches to societal challenges, citing examples of technology companies shifting focus from healthcare to “healthiness.” Despite technical difficulties with slide presentation, Desuoza’s insights broadened the conversation to consider how AI might transform entire paradigms of thinking about social issues.


Governance Transformation and Standardization


Ru Peng, Professor at Tsinghua University, stressed the importance of standardization as a tool for AI governance. She argued that standardization is not merely a technical process but a multifaceted approach with strategic, social, and people-oriented dimensions. Peng called for the development of key standards in areas such as:


1. Social applications of generative AI technology


2. Smart healthcare


3. Smart justice


4. Smart grassroots governance


She also mentioned China’s 92 national intelligent social governance experimental bases, highlighting the country’s practical approach to exploring AI governance models.


Sustainable Development and Human-Centered AI


Poncelet Ileleji from Jack Cook Labs in Gambia emphasized AI’s potential role in achieving the UN Sustainable Development Goals. He stressed the importance of a human-centered approach to AI development, referencing the recommendations of the UN High-level Advisory Body on AI. Ileleji’s presentation highlighted the need for AI applications that address global challenges while maintaining a focus on human values and ethics.


Unresolved Issues and Future Directions


The discussion highlighted several unresolved issues and areas for future research and policy development:


1. Specific mechanisms for global interoperable approaches to sustainable AI development


2. Balancing national interests and global collaboration in AI governance


3. Concrete steps to align AI development with human values and ethics across different cultural contexts


4. Methods to effectively measure and mitigate the energy and environmental impacts of AI systems


Conclusion


This comprehensive discussion underscored the complex interplay between technological advancement, social impact, and governance challenges in the development of intelligent societies. The speakers emphasized the need for collaborative, human-centered approaches to harness the benefits of AI while mitigating potential risks. As AI continues to reshape various sectors, including healthcare, education, and public services, the importance of responsible development and governance becomes increasingly apparent.


The moderator, Professor Yuming Wei, concluded by noting the complementary nature of the speakers’ topics, highlighting how the diverse perspectives contributed to a holistic understanding of intelligent societies’ development and governance. Despite occasional technical difficulties, the session provided valuable insights into the multifaceted nature of AI development and governance, emphasizing the need for continued dialogue, research, and international cooperation to address the challenges and opportunities presented by intelligent societies.


Session Transcript

Yuming Wei: Okay, good afternoon distinguished guests, esteemed colleagues and friends, ladies and gentlemen. It’s my great honor to welcome all of you to this session. I am Yuming Wei from Tsinghua University. On behalf of the organizing committee, I would like to express my deep respect and gratitude to all of you for joining us today and contributing to this important discussion. Today we are gathered at a pivotal moment in history. The rapid development of artificial intelligence is driving a huge transformation towards the intelligent society, bolstering new academic frontiers, technological breakthroughs, and innovative models. This transformation brings enormous opportunities for development across all sectors; however, it also introduces a range of complex governance challenges, including ethical concerns, social inequity, privacy, and security risks. This session aims to address these pressing issues from a global perspective, analyzing the latest trends, major challenges, and future opportunities in the development of the intelligent society. Through the lens of governance and international collaboration, we will explore the experimental and adaptive approaches needed to navigate the governance transition of the intelligent society. We are privileged to have an exceptional panel of speakers, each of whom brings unique expertise and a global perspective to this discussion. Now allow me to introduce them. Prof. Gong Ke, former president of the World Federation of Engineering Organizations, executive director of the Chinese Institute of New Generation Artificial Intelligence Development Strategies, and former president of Nankai University. Mr. Sam Daws, senior advisor at the Oxford Martin AI Governance Initiative, Oxford University, and Director of Multilateral AI. Prof. Min Jianing, Professor at Harbin Institute of Technology and Editor-in-Chief of the Journal of Public Administration. Prof. Kevin C. Desuoza, Professor of Business, Technology, and Strategy at the School of Business, Queensland University of Technology. Prof. Ru Peng, Professor at the School of Public Policy and Management, Tsinghua University. And Mr. Poncelet Ileleji, CEO of Jack Cook Labs, Banjul, Gambia, in Africa. To begin our session, it is my distinct pleasure to introduce our first speaker, Professor Gong Ke, who will deliver his keynote address titled, China’s Objectives and Actions in Building an Intelligent Society. Please join me in giving a warm round of applause to welcome Prof. Gong Ke to the stage. Thank you.


Gong Ke: Thank you so much for the introduction. And I will take this opportunity to briefly introduce the Chinese national plan for building an intelligent society. You may already know that the Chinese government has released a top-tier plan for the new generation of artificial intelligence development from 2017 to 2030. It’s a long-term, high-level plan. And this plan is dubbed in China as the 1-2-3-4 planning. So one means to set up one national open and collaborative AI technological innovation system, one nationwide system. Two means to master the two attributes of artificial intelligence. One is its technical feature. Another one is the social feature of artificial intelligence. Three means three-in-one promotion. That means to advance the technical R&D, production manufacturing, and industrial nurturing in a three-in-one manner. The four means four aspects to be supported by AI development, which are science and technology innovation development, economic growth, social progress, and national security. So within this framework, the plan identifies six pivotal tasks, with building an intelligent society being a prominent one, alongside fostering technological innovation systems, nurturing an intelligent economy, and enhancing digital infrastructure. So the goal for building an intelligent society, in one word, is to build a safe and convenient intelligent society. And the plan outlines some objectives. First is to accelerate the penetration of AI to elevate the quality of life and create an omnipresent intelligent environment, significantly augmenting the efficiency of social services and social management. The second is delegating simplistic, repetitive, and hazardous tasks to AI, thereby fostering human creativity and generating high-quality, comfortable employment opportunities.
The third objective is to diversify and enrich on-demand intelligent services to maximize accessibility to high-quality social services and a convenient lifestyle. The fourth objective is elevating the standards of intelligent social governance, rendering societal operations safer and more efficient. So under these goals and objectives, there are some tasks for building an intelligent society highlighted in the plan. The first is developing convenient and efficient intelligent services. That means prioritizing AI innovations to address urgent societal needs, such as education, especially pre-university education, health care, and elder care, to provide tailored, superior-quality services. The second task is to advance intelligent social governance, leveraging AI to tackle administrative, judicial, municipal, and environmental governance challenges, thereby modernizing the social management of China. The third task is to enhance public safety through AI, promoting a profound application of AI in public safety, fostering the construction of an intelligent monitoring, early warning, and control system for public security. The fourth task is promoting trust and interaction in society, utilizing AI to bolster social interactions and nurture trusted communication among citizens. To implement these tasks, China has established a policy framework for building an intelligent society. Based on the national plan, the Chinese government has issued various policies in the past years providing intricate pathways for intelligent society construction, which include, first, science and technology innovation policies,
supporting research endeavors, encouraging enterprises’ R&D investment, and establishing major national R&D projects with joint public and private investment. We also have industrial development policies, cultivating industrial integration and innovation platforms, guided by the document titled Opinions on Accelerating Scenario Innovation to Promote High-Quality AI Applications, issued by the Ministry of Industry and Information Technology. And we also have human resource development policies, fortifying talent cultivation frameworks and intensifying talent recruitment endeavors; AI courses are now already adopted in the fundamental education and higher education systems in China. We have data and infrastructure policies, enabling data sharing and the construction of intelligent infrastructure, and of course also privacy protection; we adopted a new law in China two years ago to protect personal information. Then there are safety and ethical norm policies, establishing an AI safety regulation and social impact evaluation system infused with the ethical guidelines released by the Chinese government three years ago. And finally, regional development policies, that is, to encourage regional collaborations and localized development initiatives. So with these policy frameworks, China has made, I think, notable progress in building an intelligent society. For example, in developing application scenarios in social services: in health care in China, artificially intelligent imaging screening technology has dramatically improved early diagnosis of critical illnesses such as cancer. In education, intelligent systems have optimized resource allocation, fostering equity and equality, so people can widely use AI-assisted learning and teaching systems in China. In governance, intelligent systems applied to pandemic control and public services significantly enhanced efficiency in the campaign against the COVID-19 pandemic. And we have also achieved progress in building an intelligent public services system; the widespread adoption of AI in the transportation, finance, and environmental sectors has given rise to a convenient and efficient public service system. So if you know the city of Hangzhou, it is a very beautiful city near Shanghai; however, there is a big lake in the center of this city, so public transportation in this city is very difficult. Five years ago, this city ranked number three or number four among the most congested cities in China. With the help of an AI system, they reduced the ranking from number four to number 57 among congested cities in China now. So we have also enhanced the regulation and governance mechanisms to ensure the safe use of AI, and China is also promoting green and intelligent synergy, leveraging AI to reduce the carbon footprint of social services and also production. So in summary, China’s endeavors in constructing an intelligent society are steadily transitioning from blueprint to reality; the blueprint is the national plan. This transition not only mirrors technological advancement but also underscores a pathway to refining social governance and augmenting human well-being. So in the future, personally, I believe China should further emphasize constructing an intelligent society centered on people rather than technology. Efforts should empower individuals with technology, ensuring a dedicated balance between technological innovation and ethical operation. Ultimately, the objective is to forge an inclusive, equitable, sustainable, and harmonious society for all by leveraging the potential of artificial intelligence. So that’s my brief introduction of China’s goals and actions in building an intelligent society. Thank you so much.


Yuming Wei: Thank you, Professor, for your excellent presentation, which provided a comprehensive overview of China’s objectives, plans, and actions in building an intelligent society. And now let us welcome Mr. Sam Daws, who will deliver his speech on the topic of international governance of AI and the environment.


Sam Daws: Thank you very much. It’s a great pleasure to be here today with you all. I’ve only got 10 minutes, so I’m going to try and race through this quite promptly. First, what’s the positive contribution AI can make to climate solutions? Well, here’s just a few: new materials research in solar technologies, battery research, biodegradable alternatives to plastics, and atmospheric and climate modeling through digital twins. My colleague Professor Philip Steer at Oxford University has a leading project on this. And AI will be vital to achieve the new UN climate COP energy efficiency goals across all industries. Well, what’s the problem? Well, AI energy and water consumption is high and growing. This has global impact as a contributor to greenhouse gas emissions. Currently, AI accounts for 1 to 2% of global energy use, but it’s set potentially to increase significantly in the future. We need a new Japan’s worth of electricity every year because of AI, but also because of air conditioning and electric vehicles. And data centers use more than four times as much water as Denmark every year. Other factors could increase AI’s energy use. There’s been an emphasis to date on the energy cost of testing and training large language models. But in the future, there’ll be greater emissions from inference and from multimodal search, and it will be particularly important to track the energy use of semi-autonomous AI agents. And generative AI will shift from not just scraping the internet to also relying on real-time IoT data of human behavior and natural processes, involving greater data. So what are the solutions? Well, the recommendations of the UN Environment Programme in Nairobi are a good start. First, focus on the whole life cycle of AI, from the mining of critical materials all the way through to the deployment of AI models. Second, standardize the way we measure AI emissions.
This work’s been taken forward by the ITU, the ISO, the IEC, IEEE, and others, including at the recent New Delhi Standards Summit. Next, incentivize transparency from industry on their energy use and emissions. Incentivize efficiency in hardware design, and make software more efficient, such as through the work of the Green Software Initiative. And data sobriety: more accurate, well-structured data, reducing duplication, and using only the data necessary for the task. And lastly, powering new data centers only with renewable energy and reusing components at end of life. But can we leave the energy optimization of data centers and chip design to industry alone? Well, perhaps, but that might change. I say perhaps because industry has achieved remarkable progress. Data center energy consumption only increased by 6% between 2010 and 2020, but compute workloads increased by 550%. NVIDIA achieved a 100-fold increase in performance per watt from Kepler in 2012 to Hopper in 2023. Google achieved similar efficiencies through their TPUs. But despite these efficiencies, AI energy use and emissions overall continue to rise. So AI is therefore subject to the Jevons Paradox, named after an English economist who, back in 1865, observed that increased efficiency of coal use actually led to increased consumption of coal across a wide range of industries, and AI is following a similar path. So what are the prospects for a global interoperable approach on sustainable AI? Well, to have that, we need to navigate these geopolitical challenges. One, the greater US-China trade and national security competition. We’ve seen export controls, rare-earth export bans, and so on. Secondly, the new US administration is moving away from green regulation. Thirdly, sovereign AI trends may make it harder to shift testing, training, and inference to other countries. And fourthly, the new BRICS-AI alliance, announced this last week by President Putin, may lead to a bifurcation of policy approaches with the West.
I want to end with what are the opportunities in 2025? Well, I think it’s vital we find a way forward that includes China as well as the West. So we have the UN universal tracks that come out of the two UN General Assembly resolutions: Responsible AI, proposed by the US and co-sponsored by China, and AI Capacity Building, proposed by China and co-sponsored by the US. Great initiatives; we can have further cooperation. Then the UN tracks that are emerging from the Global Digital Compact and the HLAB on AI: science convening, policy dialogue, standards, capacity building, all can be used to advance sustainable AI. UNESCO’s Ethical Principles, ITU’s AI for Good Summit, and multi-stakeholder forums such as the IGF. Then there’s national leadership. The Kingdom of Saudi Arabia’s leadership of the Digital Cooperation Organisation: they’re interested in an ethical framework for AI. Perhaps that could be a bridge involving China and the West. Malaysia’s chairing of ASEAN in 2025: they’ve got an interest in Responsible AI to double ASEAN’s digital economy. Singapore has had leadership on greener data centres in humid tropical climates, in software, and in integrating sustainability into its AI Verify and Model GenAI frameworks. The EU and UK work on reducing digital emissions. The International Energy Agency’s report next spring on AI for Energy. France’s AI Action Summit, GPAI, the UK’s International Energy Security Summit, and the Republic of Korea’s hosting of APEC economic leaders. These are all minilateral opportunities to further standardise approaches to measuring the energy cost of AI, but they can’t replace global initiatives that involve both China and the West. The last initiative, a really important one, is COP30 in Belém, Brazil. I wonder, can we have a higher-ambition coalition of middle and smaller powers moving towards COP30 on this issue? Possible national champions include Kenya, Singapore, the UAE, Saudi Arabia, Kazakhstan, Brazil, and France. That’s the end of my time.
Thank you very much.


Yuming Wei: Thank you, Mr. Daws, for your insightful speech. The energy and environmental challenges in AI development are indeed global issues that require collaborative efforts from all countries around the world to address. Now, let’s welcome Professor Min Jianing, who will deliver an online presentation from the Beijing sub-forum. His speech is titled, 10 Epistemological Questions on Generative Artificial Intelligence. Okay, Beijing, is the audio coming through clearly? Okay, let’s welcome him. Hello. This is Feng Huizhang.


Min Jianing: This is from Beijing. Today, I’m going to talk about 10 Epistemological Questions on Generative Artificial Intelligence. The rise of generative artificial intelligence has triggered an unprecedented epistemological revolution. This revolution is profoundly influencing human knowledge production, cognitive patterns, and social structures. To better grasp the full picture of this revolution and explore the future landscape of human-machine symbiosis, we must raise critical questions from an epistemological dimension. So that’s why I want to read the 10 questions, which cover various aspects of generative AI development, from technological innovation to philosophical reflection, from social impact to value reshaping, systematically examined and deeply explored. These questions will provide us with important insights and action guides for understanding and responding to this revolution. Through researching the 10 questions, we can not only better grasp the development context and trends of generative AI, but also more prudently consider the relationship between artificial intelligence and human society, contributing wisdom and strength to achieving human-machine symbiosis and building a better future. So let’s take a look at knowledge production and the reshaping of human nature, the revolution triggered by generative AI, and also the 10 questions. Question number one: what are the intrinsic mechanisms of evolution in knowledge production triggered by generative large-language models? Fundamentally, it is transforming the knowledge production mode, fundamentally changing our understanding of knowledge, truth, and cognition, shaking the epistemological presumptions of subject and object and the autonomy and supremacy of reason that have been established since the Enlightenment. Question number two: does the emergence of generative large-language models signify the end of anthropocentrism, or does it mark a new starting point for reshaping human nature?
If intelligence is no longer the exclusive domain of humans, and creativity can also be simulated and surpassed by machines, then the status of humans as the spirit of all things will face unprecedented challenges. Question number three: will human-machine collaboration form a new paradigm for knowledge exploration? When AI is not only a tool for human cognition but also becomes a subject, and even a partner, in knowledge production, the relationship between humans and machines will inevitably undergo a profound restructuring. Question number four: how will generative large-language models subvert the traditional scientific research paradigm and open up new frontiers for knowledge discovery? The interaction between machine and human sciences to form amplified thinking will promote cross-disciplinary and integrated research, leading to disruptive innovations, as the Nobel Prizes in Medicine and Physics have already shown. And then I will talk about social restructuring and decision-making innovation, the transformation driven by generative AI, especially decision-making innovation. That is relevant to question number five: can generative large-language models help humans break through the limitations of bounded rationality and achieve innovations in decision-making? With the help of the machine’s ability to extract insights from massive amounts of information, individuals will have the opportunity to transcend their own cognitive limitations and obtain a more comprehensive and objective basis for decision-making. Question number six is relevant to how generative large-language models will reshape the structure of human society and the relations of production. The decision-making transformation is shaking the foundation of the traditional social division of labor, because of the non-differentiation of knowledge and skill acquisition triggered by large language models.
Then questions number seven and eight. How can we break down disciplinary barriers and construct a fluid knowledge graph without disciplinary boundaries? That is exactly what question number seven is asking: can generative large-language models break down disciplinary barriers and construct a fluid knowledge graph without disciplinary boundaries? The disciplinary classification system of the industrial era was built on the basis of the specialization and professionalization of knowledge production, and behind it lies the imprint of reductionism and mechanism. And question number eight: how will large language models revolutionize social science research and create a new paradigm with greater explanatory power, predictive power, and guiding power? With the help of large models, the social sciences are expected to establish an integrated research paradigm that is data-driven, human-machine collaborative, and multi-scale linked, which is very different from the traditional way. The last two questions are about the redefinition of intelligence and values, the philosophical reflections triggered by generative AI. Question number nine: how will the intelligence and creativity demonstrated by generative large-language models redefine human cognition? The amazing creativity demonstrated by the large models, based on the knowledge graph formed by training on massive corpora, blurs the boundaries between imitation and innovation, between quantitative and qualitative change. And question number ten is relevant to how artificial intelligence is benchmarked against human intelligence. What are the connotations and extensions of aligning artificial intelligence with human values? What exactly is it aligning to? The question is not very clear yet.
When artificial intelligence systems pose difficult questions from unique perspectives, humans will be forced to re-examine existing beliefs and achieve value upgrades through openness, mutual learning, and evolution. The human value system itself is ever-evolving, so the real question is whether AI should align with its core or its peripheral parts. In summary, generative AI is pushing humanity towards a brand new era. In this era, the speed of knowledge iteration and updating will be greatly accelerated, human-machine collaboration will promote the flourishing of science, machine intelligence will help perfect social governance, and human-machine interaction will enhance human insight. When artificial intelligence becomes the norm, the singularity will no longer be out of reach. In this new era, humanity will bid farewell to a civilization centered on individual intelligence and usher in an era characterized by collective intelligence. Everyone will have their own personalized AI assistant, achieving self-transcendence through human-machine symbiosis. Facing this epistemological revolution led by generative AI, we should embrace the technological transformation with an open, prudent, and responsible attitude. That is why I put forward these 10 questions. Thank you.


Yuming Wei: Okay, thank you, Professor Mi Jianing, for sharing your thought-provoking perspective. I believe everyone has now gained a deeper understanding of the impact that generative AI will have on our cognitive systems. Next, I welcome Professor Kevin C. Desouza, who will deliver his speech titled Governing Cognitive Computing Systems for Public Value.


Kevin C. Desouza: It’s okay, while the slides are being loaded. It’s a pleasure to be here and address all of you. I would like to express my gratitude to the organizers of the event. Just two quick points. While I will give the presentation, I have a large group that helps me on a number of these projects, so this is not just my work but the work of my research group, and the credit should go to them. And these are just my views; they don’t officially reflect any group that we collaborate with. I guess we may not have slides. Okay, so one of the things I thought I would focus on is broadening the discussion around AI. As you see in this image, AI is just a small piece of a larger revolution that’s underway right now in what we call cognitive computing systems. If you look at everything else around here, you will notice three things. Number one, AI is probably the most developed field in this collection. If you look at things like neuropsychology, this is an emerging field; this is where we still have a lot of blue ocean, whereas AI has been around for decades. The reason I’m showing you this image is, number one, it’s very important to put AI in the larger context if we want to talk about building transformative societies. AI will have a role to play, but not the major role; it will work within a large assemblage of other innovations and developments. Now, if you look at everything in this image, you will notice one other thing: we are working at high speed when it comes to technical innovations in all of these areas, yet our governance and our frameworks to regulate and do responsible innovation have large amounts of inertia. So, what I will do in the remaining few minutes is highlight a few key points that will hopefully stimulate some further reflection on your part. So, if you can go to the next slide. Perfect.
So, if you look at the other view of cognitive computing systems, you will see an image that looks like this. When we look at what really drives public value, we are trying to navigate two issues: governing actual behavior with cognitive computing systems, and understanding individuals’ behavioral intentions. If you look at behavioral intentions, you will see things like risk and privacy. When you look at actual behavior, you will see things like trust and social presence. These are areas where, again, work is underway; however, a lot of this work is fairly disconnected from the work I showed you previously. Okay, so if you want to go to the next slide. In the interest of time, I will not go through each of these; I will just highlight one thing at the end. If you look at transparency, an issue that’s plaguing a lot of governments, our research has found that transparency is a very nuanced concept. There is transparency in how government achieves a given outcome, there is transparency in how we use technologies, and then there is transparency in how government uses AI technologies. These three have different implications when it comes to explainable AI and our social license to innovate. Because I know we have two other speakers, I’ll go to the next slide. One of the other areas, if we really want to build a truly global society driven by AI or cognitive computing systems, is that we have to undertake fundamental work on how interdependent our information platforms and our digital algorithms are. Because as recent examples have shown, if we have a single point of failure and it cascades around the ecosystem, we have actually increased the fragility of our societies, not their resilience. Next slide.
The other thing we have to do, if we really want to uncover how to get public value out of all this, is begin tracking where the money is going. We have a long-standing project in which we’ve been looking at where governments around the world have been allocating their resources for the advancement of AI. So if you go to the next slide. With these three, I’ll just highlight one point each. Right now, a lot of the attention is on AI and large language models. To me, that technology is already out of the gate; it’s very hard to regulate and very hard to govern once a technology reaches a given scale. But we do have an opportunity when it comes to things like quantum computing: we need to get ahead of the curve rather than repeat what we’ve done with previous generations of technology. The reason I bring up the Indonesia example is that many countries around the world have forgotten the classical hierarchy of needs. They are deploying large language models for the higher levels of Maslow’s hierarchy when they haven’t yet protected their databases or prevented cyber attacks. So it’s this constant battle. And lastly, one of the points we make in a forthcoming report is that, in order to truly reap the value of cognitive computing systems, we need to rethink how we design problems. A very simple example: most countries are still trying to solve for health care, whereas the leading technology companies are solving for healthiness. They have completely flipped how they look at investments in health care; they are no longer trying to solve for health care, they are trying to build healthier individuals. But for governments to do that, they have to restructure government departments and restructure ecosystems.
And if we don’t do any of that, I believe we will never truly realize the value of these cognitive computing tools to make our societies more robust and innovative. Thank you.


Yuming Wei: Thank you, Professor Desouza, for your enlightening presentation. Your analysis of the public value of cognitive computing systems has provided us with a new perspective for understanding AI and building stronger human-machine trust. Now let us welcome Professor Ru Peng from the School of Public Policy and Management, Tsinghua University, who will deliver an online presentation from the Beijing Sub-Forum. His topic is Governance Transformation and Standardization Development of the Intelligent Society. Please.


Ru Peng: Ladies and gentlemen, friends from Riyadh and Beijing, both online and offline, good afternoon. At present, human society is moving towards an intelligent society, and a new generation of information technology represented by artificial intelligence is having a significant and far-reaching impact on global economic growth, social development, and people’s lives. Chinese AI has formed a development pattern of in-depth technological research and development, huge industrial scale, and diverse application scenarios. It can provide practical experience, leading demonstrations, and frontline application feedback for the development and governance of the global intelligent society, and make exploratory and cutting-edge contributions. In order to use long-term, cross-disciplinary, multidisciplinary empirical methods to record, describe, and predict the ongoing or upcoming changes in the intelligent society, under the leadership of Professor Su Jun, the Dean of the Institute for Intelligent Social Governance at Tsinghua University, I launched an initiative in 2019, in collaboration with domestic and foreign experts and scholars, to conduct artificial intelligence social experiments and explore the path of intelligent social governance, and promoted the building of 92 national intelligent social governance experimental bases in 22 provinces across the country. To our knowledge, this is the largest social experiment on AI technology and its governance on a global scale. After five years of practice, this experimental governance has achieved many important results and is continuously providing ideas, theories, technical standards, and norms for building an intelligent society with a human touch.
For example, the city of Ordos in northern China has built the Duoduo Ping digital community service platform, using small QR codes to cover livelihood services and commercial operations and enhancing the public’s enthusiasm for participating in community affairs. The Hong Kong system and large-scale AI-based mining models have ensured the safety, greenness, and efficiency of coal production, promoting the intelligent transformation of the regional energy sector. In the field of digital governance, China Mobile has provided an intelligent customer service experience and over 100,000 online Q&A services to 31 million people through its government affairs large model, with a positive review rate of 98.7%, creating a most attentive intelligent government assistant. Based on this vivid practice across the vast land of China, we have observed that the technical characteristics of AI are triggering a paradigm shift in its governance model. The humanoid nature, self-learning, adaptability, human-computer interaction, and wide-ranging social impact of generative AI technology have led to a triple change in AI governance. First, the governance object has transformed from a material technological subject to a human-like technological subject, and from static, stable technology to dynamic, self-evolving technology. This requires us to pay close attention to issues such as the values, responsibility mechanisms, and copyright mechanisms of large models, and to adopt a flexible, open, and agile governance framework. Second, the governance interface has shifted from dealing only with the relationships between technical elements to taking human-machine interaction into account. This calls for strengthened governance of issues such as information cocoons, cognitive bias, emotional manipulation, and addiction.
Third, the scope of governance has shifted from focusing solely on the process of technology innovation to emphasizing the macro-system of technology, society, and policy, involving multiple aspects such as ethics, social risks, and social impacts. This requires us to pay attention to the social applicability of technology and promote the responsible development of AI. Facing this shift in governance paradigms, we believe that standardization is the first move to address the opportunities and challenges of the times and to build an intelligent social culture and civilization. Standardization is not only a policy tool with technical attributes but is also strategic, leading, social, and people-oriented. In recent years, the clear international trend in standardization has been a shift from technical standards to governance standards, and from standard refinement to standard prioritization. The main issues of AI standardization have also expanded from traditional topics such as algorithms, data, and network security to comprehensive issues such as privacy, ethics, risk, management systems, and social impact. In recent years, China has actively promoted the standardization of intelligent social governance. The relevant departments are studying and formulating guidelines for the standardization of intelligent social governance in order to build a standard system framework. In addition, the National Standardization Working Group on the Social Application and Evaluation of Intelligent Technology, SASWG35, for which Tsinghua University serves as secretariat and I serve as secretary general, has conducted useful explorations and has promoted the formal establishment of five national standards, covering areas such as the social impact of generative AI, technology application, and artificial intelligence social experiments.
In the near future, we will continue to promote the development of key standards in areas such as the social application of generative AI technology, smart healthcare, smart justice, and smart grassroots governance. Ladies and gentlemen, the future has arrived, and time waits for no one. We need to use standardization to promote the healthy development of intelligent technology, advance good governance of the intelligent society, and serve the happy life of the people. We must adopt a prudent, positive, and optimistic attitude to jointly address the risks and challenges brought by intelligent technology. Let us join hands and promote the development and governance of the intelligent society through the new paradigm of experimental governance, ensuring that all countries and regions can benefit from the waves of the intelligent society, and build a people-centered, humanistic intelligent society. Thank you.


Yuming Wei: Thank you, Professor Ru Peng, for your in-depth analysis of governance transformation and the standardization path for intelligent society governance. Now let us welcome the final speaker, Mr. Poncelet Ileleji. As a highly experienced computer scientist, he will share his insights on leveraging information and communication technology as a tool for sustainable development. Because of a flight delay, Mr. Ileleji will speak online.


Poncelet Ileleji: Thank you very much. Can you hear me? Yes, can you hear me? Okay. Good morning, good afternoon. Thank you all. It’s a great pleasure to be in this session. I just want to say that the previous speakers have basically addressed most of the issues I would have loved to address. I would like to start with the basic principle of how we use information and communication technology, and I’m talking mainly about artificial intelligence: it has to be human-centered. And when it’s human-centered, we are also dealing with issues of trust and respect for human values. Once we put that at the center of anything we do with artificial intelligence, be it the various data models we collect or the governance structures, then we have a good basis for discourse. In talking about this, I would like us to go back to the final report, Governing AI for Humanity, which was released in September 2024 by the UN AI Advisory Body. You should remember that this advisory body, set up by the UN Secretary-General António Guterres in 2023, serves on a volunteer, independent basis, so its members’ views do not reflect whatever organization or entity they belong to. I would like to read from recommendation one, which I think is the basis of this session today. One of the key recommendations from that document, recommendation one, was an international scientific panel on AI. It was recommended that this panel be diverse and multidisciplinary, with experts in various fields. And that is what this session has done: we have discussed issues around quantum technology, and we have discussed using AI to mitigate climate change, which is a big issue in the world today.
But the key things we should look at, if we read recommendation one closely, are annual reports surveying AI-related capabilities, opportunities, and risks where there are uncertainties. And the core thrust of what we do has to remain: does it serve and respect human values? Does it avoid encroaching on human rights? We also have to look at producing the quarterly thematic research that the UN body proposes, which will help AI contribute to achieving the SDGs. Speaking as someone who comes from the Global South, we all know that in six years’ time we will reach the deadline for the United Nations Sustainable Development Goals. If we use AI in whatever we do to help achieve no poverty, health, agriculture, or climate action, by emphasizing Sustainable Development Goal 17, which deals with partnerships and cooperation, we will be able to achieve everything we have talked about here today. So I would like, colleagues, for us to reflect on the human-centric side of AI in what we do, especially with our young people, who are the ones who will be using this technology in everything they do, and who are the biggest agents of social change. Our governments have to understand this. Our companies have to understand this. And we have to start producing evidence-based research on the positive impact AI can make in the world we live in today. Thank you very much.


Yuming Wei: Thank you, Mr. Ileleji, for your wonderful speech. Although the speakers did not coordinate in advance, their topics are highly complementary. President Gong Ke outlined China’s objectives and actions in building an intelligent society, while Professor Ru Peng further explored the governance dimension in this context. Professor Mi Jianing examined the epistemological challenges posed by generative AI, while Professor Kevin C. Desouza proposed a public-value-oriented governance approach for cognitive computing systems. Mr. Sam Daws highlighted the importance of global governance in addressing the energy and environmental problems of AI development, and Mr. Poncelet Ileleji showcased AI’s role in promoting sustainable development. Due to time constraints, we are unable to proceed with further discussion and interaction. I would like to extend my heartfelt thanks to all six speakers for sharing their brilliant and thought-provoking perspectives. Ladies and gentlemen, the future is already here. Let us embrace the intelligent society together. Thank you all, and we look forward to seeing you next year. Thank you.



Gong Ke

Speech speed

78 words per minute

Speech length

1057 words

Speech time

811 seconds

China’s national plan and objectives for AI development

Explanation

Gong Ke outlined China’s comprehensive plan for AI development from 2017 to 2030. The plan focuses on building an intelligent society, fostering technological innovation, nurturing an intelligent economy, and enhancing digital infrastructure.


Evidence

The plan is described as a ‘1-2-3-4’ plan: one national open and collaborative AI technological innovation system; mastering the two attributes of AI (technical and social); three-in-one promotion of R&D, manufacturing, and industry nurturing; and four aspects supported by AI development.


Major Discussion Point

Building an Intelligent Society


Agreed with

Sam Daws


Ru Peng


Poncelet Ileleji


Agreed on

Need for global collaboration in AI governance


Differed with

Sam Daws


Ru Peng


Differed on

Approach to AI governance



Sam Daws

Speech speed

130 words per minute

Speech length

944 words

Speech time

433 seconds

International collaboration on sustainable AI development

Explanation

Sam Daws emphasized the need for global cooperation in addressing the environmental and energy challenges of AI development. He highlighted various international initiatives and opportunities for collaboration in 2025.


Evidence

Mentioned initiatives include the UN Universal Tracts, UNESCO’s Ethical Principles, ITU’s AI for Good Summit, and various national leadership opportunities such as Saudi Arabia’s leadership of the Digital Cooperation Organisation.


Major Discussion Point

Global Governance of AI


Agreed with

Gong Ke


Ru Peng


Poncelet Ileleji


Agreed on

Need for global collaboration in AI governance


Differed with

Gong Ke


Ru Peng


Differed on

Approach to AI governance


AI’s potential contributions to climate solutions

Explanation

Sam Daws discussed the positive contributions AI can make to addressing climate change. He highlighted several areas where AI can be applied to develop climate solutions.


Evidence

Examples include new materials research in solar technologies, battery research, biodegradable alternatives to plastics, atmospheric modeling, and climate modeling through digital twins.


Major Discussion Point

Environmental and Energy Challenges of AI


High energy and water consumption of AI systems

Explanation

Sam Daws pointed out the significant energy and water consumption of AI systems, which contributes to global greenhouse gas emissions. He highlighted the growing concern about the environmental impact of AI.


Evidence

Currently, AI accounts for 1-2% of global energy use, and data centers use more than four times as much water as Denmark every year.


Major Discussion Point

Environmental and Energy Challenges of AI


Need for energy optimization and efficiency in AI

Explanation

Sam Daws emphasized the importance of improving energy efficiency in AI systems. He discussed various solutions and initiatives to address the energy consumption issue in AI development.


Evidence

Recommendations include standardizing the measurement of AI emissions, incentivizing transparency from industry on energy use, making software more efficient, and powering new data centers only using renewable energy.


Major Discussion Point

Environmental and Energy Challenges of AI



Mi Jianing

Speech speed

110 words per minute

Speech length

1002 words

Speech time

542 seconds

AI’s influence on knowledge production and human nature

Explanation

Mi Jianing discussed how generative AI is transforming knowledge production and challenging our understanding of human nature. He raised questions about the impact of AI on anthropocentrism and the reshaping of human nature.


Evidence

He presented 10 epistemological questions on generative AI, including questions about the intrinsic mechanisms of evolution in knowledge production and the potential end of anthropocentrism.


Major Discussion Point

Societal Impact of AI


Agreed with

Poncelet Ileleji


Kevin C. Desouza


Agreed on

Human-centered approach to AI development


Reshaping social structures and decision-making processes

Explanation

Mi Jianing explored how generative AI models could reshape social structures and decision-making processes. He discussed the potential for AI to break through limitations of human rationality and create new paradigms for knowledge exploration.


Evidence

He posed questions about how generative AI models might help humans overcome limitations of bounded rationality and achieve innovations in decision-making.


Major Discussion Point

Societal Impact of AI



Kevin C. Desouza

Speech speed

118 words per minute

Speech length

1058 words

Speech time

534 seconds

Cognitive computing systems and public value

Explanation

Kevin C. Desouza discussed the importance of understanding cognitive computing systems in a broader context beyond just AI. He emphasized the need to focus on public value and the governance of these systems.


Evidence

He presented a framework showing the relationship between behavioral intentions, actual behavior, and various factors like risk, privacy, trust, and social presence in cognitive computing systems.


Major Discussion Point

Global Governance of AI


Agreed with

Poncelet Ileleji


Mi Jianing


Agreed on

Human-centered approach to AI development



Ru Peng

Speech speed

146 words per minute

Speech length

914 words

Speech time

374 seconds

Governance transformation and standardization for intelligent society

Explanation

Ru Peng discussed the need for governance transformation and standardization in the development of an intelligent society. He emphasized the importance of standardization as a tool for addressing the challenges and opportunities presented by AI.


Evidence

He mentioned the establishment of 92 national intelligent social governance experimental bases in 22 provinces across China, and the development of national standards for social impact, generative AI, and artificial intelligence social experiments.


Major Discussion Point

Building an Intelligent Society


Agreed with

Sam Daws


Gong Ke


Poncelet Ileleji


Agreed on

Need for global collaboration in AI governance


Differed with

Gong Ke


Sam Daws


Differed on

Approach to AI governance


Standardization as a tool for AI governance

Explanation

Ru Peng highlighted the importance of standardization in AI governance. He discussed how standardization is shifting from technical standards to governance standards and expanding to cover comprehensive issues.


Evidence

He mentioned China’s efforts in promoting standardization of intelligent social governance, including the formulation of guidelines and the establishment of a national standardization working group.


Major Discussion Point

Global Governance of AI



Poncelet Ileleji

Speech speed

131 words per minute

Speech length

584 words

Speech time

266 seconds

Human-centered approach to AI development

Explanation

Poncelet Ileleji emphasized the importance of a human-centered approach to AI development. He stressed that AI should respect human values and be built on trust.


Evidence

He referenced the final report, Governing AI for Humanity, released in September 2024 by the UN AI Advisory Body.


Major Discussion Point

Building an Intelligent Society


Agreed with

Kevin C. Desouza


Mi Jianing


Agreed on

Human-centered approach to AI development


AI’s role in achieving UN Sustainable Development Goals

Explanation

Poncelet Ileleji discussed the potential of AI in achieving the UN Sustainable Development Goals. He emphasized the importance of using AI to address global challenges and promote sustainable development.


Evidence

He mentioned the need to focus on using AI to achieve goals such as poverty reduction, health improvement, and climate change mitigation.


Major Discussion Point

Global Governance of AI


Agreed with

Sam Daws


Gong Ke


Ru Peng


Agreed on

Need for global collaboration in AI governance


Ethical considerations and human values alignment in AI

Explanation

Poncelet Ileleji stressed the importance of aligning AI development with human values and ethical considerations. He emphasized the need for AI to respect human rights and not encroach on individual freedoms.


Evidence

He referenced the recommendations from the UN AI advisory board, which call for annual reports surveying AI-related capabilities, opportunities, and risks.


Major Discussion Point

Societal Impact of AI


Agreements

Agreement Points

Need for global collaboration in AI governance

speakers

Sam Daws


Gong Ke


Ru Peng


Poncelet Ileleji


arguments

International collaboration on sustainable AI development


China’s national plan and objectives for AI development


Governance transformation and standardization for intelligent society


AI’s role in achieving UN Sustainable Development Goals


summary

Multiple speakers emphasized the importance of international cooperation and standardization in AI governance to address global challenges and promote sustainable development.


Human-centered approach to AI development

speakers

Poncelet Ileleji


Kevin C. Desouza


Mi Jianing


arguments

Human-centered approach to AI development


Cognitive computing systems and public value


AI’s influence on knowledge production and human nature


summary

Speakers agreed on the importance of putting human values and ethics at the center of AI development, considering its impact on society and human nature.


Similar Viewpoints

Both speakers addressed the need for sustainable AI development, with Sam Daws focusing on environmental challenges and Gong Ke mentioning China’s plan for sustainable AI growth.

speakers

Sam Daws


Gong Ke


arguments

High energy and water consumption of AI systems


China’s national plan and objectives for AI development


Both speakers emphasized the importance of governance frameworks and standardization in AI development to ensure public value and address societal challenges.

speakers

Ru Peng


Kevin C. Desouza


arguments

Standardization as a tool for AI governance


Cognitive computing systems and public value


Unexpected Consensus

Interdisciplinary approach to AI development and governance

speakers

Mi Jianing


Kevin C. Desuoza


Ru Peng


arguments

AI’s influence on knowledge production and human nature


Cognitive computing systems and public value


Governance transformation and standardization for intelligent society


explanation

Despite coming from different backgrounds, these speakers all emphasized the need for an interdisciplinary approach to AI development and governance, considering technological, social, and ethical aspects.


Overall Assessment

Summary

The speakers generally agreed on the importance of global collaboration, human-centered approaches, and interdisciplinary perspectives in AI development and governance. There was also consensus on the need for standardization and addressing environmental challenges.


Consensus level

The level of consensus among the speakers was relatively high, with complementary perspectives on key issues. This suggests a growing recognition of the complex, multifaceted nature of AI governance and the need for collaborative, holistic approaches to address global challenges and opportunities in AI development.


Differences

Different Viewpoints

Approach to AI governance

speakers

Gong Ke


Sam Daws


Ru Peng


arguments

China’s national plan and objectives for AI development


International collaboration on sustainable AI development


Governance transformation and standardization for intelligent society


summary

While Gong Ke focused on China’s national plan for AI development, Sam Daws emphasized the need for international collaboration, and Ru Peng stressed the importance of standardization in AI governance. This indicates different approaches to AI governance at national, international, and standardization levels.


Unexpected Differences

Focus on energy consumption of AI

speakers

Sam Daws


Other speakers


arguments

High energy and water consumption of AI systems


Need for energy optimization and efficiency in AI


explanation

Sam Daws was the only speaker to extensively discuss the environmental impact and energy consumption of AI systems. This focus on the ecological aspects of AI development was unexpected given the broader discussion on AI governance and societal impact.


Overall Assessment

summary

The main areas of disagreement centered around the approach to AI governance, the focus of AI applications, and the consideration of AI’s environmental impact.


difference_level

The level of disagreement among the speakers was moderate. While there were different emphases and approaches, there was a general consensus on the importance of responsible AI development and its potential to address global challenges. These differences in perspective can be seen as complementary rather than conflicting, potentially enriching the overall discussion on AI governance and development.


Partial Agreements


Both speakers agreed on AI’s potential to address global challenges, but Sam Daws focused specifically on climate solutions, while Poncelet Ileleji emphasized a broader range of Sustainable Development Goals.

speakers

Sam Daws


Poncelet Ileleji


arguments

AI’s potential contributions to climate solutions


AI’s role in achieving UN Sustainable Development Goals


Similar Viewpoints

Both speakers addressed the need for sustainable AI development, with Sam Daws focusing on environmental challenges and Gong Ke mentioning China’s plan for sustainable AI growth.

speakers

Sam Daws


Gong Ke


arguments

High energy and water consumption of AI systems


China’s national plan and objectives for AI development


Both speakers emphasized the importance of governance frameworks and standardization in AI development to ensure public value and address societal challenges.

speakers

Ru Peng


Kevin C. Desuoza


arguments

Standardization as a tool for AI governance


Cognitive computing systems and public value


Takeaways

Key Takeaways

China has a comprehensive national plan for AI development focused on building an intelligent society, with objectives like improving social services, governance, and public safety


Global governance and international collaboration are crucial for addressing challenges like energy consumption and environmental impact of AI development


AI and cognitive computing systems are reshaping knowledge production, decision-making processes, and social structures, requiring new governance approaches


Standardization is seen as an important tool for governing AI development and its societal impacts


There is a need for human-centered, ethical approaches to AI that align with human values and contribute to sustainable development goals


Resolutions and Action Items

Promote standardization efforts for AI governance, particularly in China


Explore opportunities for international collaboration on sustainable AI development, especially leading up to COP30


Continue research and experimentation on AI social impacts through initiatives like China’s 92 national intelligent social governance experimental bases


Unresolved Issues

Specific mechanisms for global interoperable approaches to sustainable AI development


How to balance national interests and global collaboration in AI governance


Concrete steps to align AI development with human values and ethics across different cultural contexts


Methods to effectively measure and mitigate the energy and environmental impacts of AI systems


Suggested Compromises

Leveraging existing UN frameworks and multi-stakeholder forums to bridge differences between China, the West, and other regions on AI governance


Balancing the pursuit of AI advancement with responsible development practices that consider social impacts and sustainability


Thought Provoking Comments

AI is therefore subject to the Jevons Paradox, named after an English economist who, back in 1865, observed that increased efficiency of coal use actually led to increased consumption of coal across a wide range of industries, and AI is following a similar path.

speaker

Sam Daws


reason

This comment introduces a counterintuitive economic principle to explain why AI energy use continues to rise despite efficiency gains. It challenges the assumption that technological efficiency automatically leads to reduced resource consumption.


impact

This insight shifted the discussion towards the need for more comprehensive approaches to managing AI’s environmental impact beyond just improving efficiency. It added complexity to the conversation about sustainable AI development.
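The arithmetic behind the paradox is easy to sketch. The following toy calculation (all numbers are hypothetical illustrations, not figures from the session) shows how a doubling of efficiency can still leave total energy consumption higher when demand grows faster than efficiency improves:

```python
# Illustrative Jevons Paradox arithmetic: an efficiency gain lowers the
# cost per unit of use, but if demand responds strongly enough, total
# resource consumption still rises. All numbers below are hypothetical.

def total_energy_wh(queries: float, wh_per_query: float) -> float:
    """Total energy consumed, in watt-hours, for a given AI workload."""
    return queries * wh_per_query

# Baseline: one billion queries at a hypothetical 4 Wh each.
baseline = total_energy_wh(1e9, 4.0)   # 4e9 Wh

# Efficiency doubles (2 Wh per query), but cheaper, faster queries
# induce three times the usage.
after = total_energy_wh(3e9, 2.0)      # 6e9 Wh

# Despite a 2x efficiency gain, total consumption grew by 50%.
assert after > baseline
```

The same structure applies to the 1865 coal observation Daws cites: efficiency reduced the coal needed per unit of output, while total coal consumption across industries rose.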


When the artificial intelligence system presents its difficult questions from unique perspectives, humans will be forced to re-examine existing beliefs and achieve value upgrades through openness, neutral learning, and evolution.

speaker

Min Jianing


reason

This comment presents AI not just as a tool, but as an entity capable of challenging human thinking and values. It suggests a more symbiotic relationship between humans and AI in intellectual and ethical development.


impact

This perspective expanded the discussion beyond technical and governance issues to consider the philosophical and ethical implications of AI development. It prompted deeper reflection on the nature of human-AI interaction and co-evolution.


We are still trying to solve for health care in most countries, whereas the leading technology companies are solving for healthiness. They have completely flipped how they look at investments in health care.

speaker

Kevin C. Desuoza


reason

This comment highlights a fundamental shift in problem-framing that AI enables. It demonstrates how AI can lead to reimagining entire sectors and approaches to societal challenges.


impact

This insight broadened the conversation to consider how AI might transform not just processes, but entire paradigms of thinking about social issues. It encouraged participants to think more creatively about AI’s potential impacts across various domains.


Standardization is not only a political tool with technical attributes but also strategic, leading, social, and people-oriented.

speaker

Ru Peng


reason

This comment reframes standardization from a purely technical process to a multifaceted approach for shaping societal development. It emphasizes the broader implications of how we set standards for AI.


impact

This perspective shifted the discussion towards considering standardization as a key lever for responsible AI development and governance. It highlighted the importance of interdisciplinary approaches in AI policy-making.


Overall Assessment

These key comments collectively broadened the scope of the discussion from technical and governance issues to include economic, philosophical, ethical, and societal dimensions of AI development. They challenged participants to think more holistically about the implications of AI, considering both its potential benefits and risks across various domains. The comments also emphasized the need for interdisciplinary approaches and creative problem-solving in addressing the challenges posed by AI. Overall, these insights deepened the complexity of the conversation and encouraged a more nuanced understanding of how AI might shape future societies.


Follow-up Questions

How can we standardize the way we measure AI emissions?

speaker

Sam Daws


explanation

Standardizing AI emissions measurement is crucial for accurately assessing and managing the environmental impact of AI technologies.


How can we incentivize transparency from industry on their energy use and emissions?

speaker

Sam Daws


explanation

Industry transparency is essential for understanding and addressing the true environmental costs of AI development and deployment.


Can we have a higher ambition coalition of middle and smaller powers moving towards COP30 to address AI sustainability issues?

speaker

Sam Daws


explanation

A coalition of nations could drive progress on sustainable AI development and implementation at a global level.


What are the intrinsic mechanisms of evolution in knowledge production triggered by generative large-language models?

speaker

Min Jianing


explanation

Understanding these mechanisms is crucial for grasping the fundamental changes in how knowledge is created and disseminated in the age of AI.


How will generative large-language models reshape the structure of human society and the relations of production?

speaker

Min Jianing


explanation

This question addresses the potential societal and economic impacts of AI, which are critical for preparing for future changes.


How can we break down disciplinary barriers and construct a fluid knowledge graph without disciplinary boundaries using generative large-language models?

speaker

Min Jianing


explanation

This research area could lead to more integrated and holistic approaches to knowledge and problem-solving across various fields.


How will large language models revolutionize social science research and create a new paradigm with greater explanatory power, predictive power, and guiding power?

speaker

Min Jianing


explanation

This question explores the potential for AI to transform research methodologies and enhance our understanding of social phenomena.


What are the connotations and extensions of aligning artificial intelligence with human values?

speaker

Min Jianing


explanation

This question is crucial for ensuring that AI development remains ethical and beneficial to humanity.


How can we rethink problem design to truly reap the value of cognitive computing systems?

speaker

Kevin C. Desuoza


explanation

Redesigning how we approach problems could unlock the full potential of AI and cognitive computing in solving complex issues.


How can we develop key standards in areas such as social application for generative AI technology, smart healthcare, smart justice, and smart grassroots governance?

speaker

Ru Peng


explanation

Developing these standards is crucial for ensuring responsible and effective implementation of AI across various sectors of society.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #184 From Compliance to Excellence in Digital Governments

Session at a Glance

Summary

This discussion focused on digital government excellence and the key factors for improving digital services in the public sector. Dr. Axel Domeyer, a partner at McKinsey, presented a framework for assessing digital government maturity that goes beyond basic compliance to include excellence and impact. He emphasized the importance of having a central digital government agency to drive improvements across the ecosystem of government entities. The discussion highlighted three key elements: compliance with basic standards, excellence in implementing best practices, and measuring impact through key performance indicators (KPIs).

Domeyer presented case studies from the UK, Singapore, and Saudi Arabia to illustrate different approaches to digital government excellence. The UK was noted for its comprehensive functional standard for digital and data, while Singapore was praised for its systematic approach to setting and achieving KPIs. Saudi Arabia was highlighted for its rapid improvement in digital services rankings.

The importance of user satisfaction as a key metric was stressed, along with the need to publish outcomes of digital investments. Challenges in implementing digital excellence were discussed, including the complexity of government ecosystems and the balance between in-house capabilities and external vendors. The discussion touched on the future of KPIs in digital government, emphasizing the need to focus on outcomes rather than specific technologies.

Participants raised questions about the differences between public and private sector digital services, the role of business process management, and the balance between government involvement and private sector innovation in digital development. The discussion concluded with insights on the optimal balance of in-house and outsourced IT capabilities in government entities.

Keypoints

Major discussion points:

– Moving beyond basic compliance to digital excellence and impact in government

– Components of digital government excellence: strategy, processes/operations, technology, organizational resources

– Importance of measuring and publishing KPIs for digital government impact

– Case studies of digital government initiatives in the UK, Singapore, and Saudi Arabia

– Balancing in-house capabilities vs. external vendors for government IT/digital functions

Overall purpose:

The discussion aimed to explore how governments can go beyond basic compliance with digital standards to achieve excellence and measurable impact in their digital transformation efforts. The speaker presented frameworks and case studies to illustrate best practices in this area.

Tone:

The overall tone was informative and professional, with the speaker presenting concepts and examples in an authoritative manner. During the Q&A portion, the tone became more conversational and collaborative as the speaker engaged with audience questions and provided more off-the-cuff insights based on his experience. Throughout, there was an underlying tone of optimism about the potential for governments to improve their digital capabilities and services.

Speakers

– Noura Alsanie: Director of Digital Excellence and Sustainability at DGA (Digital Government Agency)

– Axel Domeyer: Partner at McKinsey and Company, specializes in helping clients with complex technology transformations across government entities

– Audience: Various audience members asking questions (roles/expertise not specified)

Additional speakers:

– Zoran Jordanoski: From UNU-EGOV (role/expertise not specified)

Full session report

Digital Government Excellence: Moving Beyond Compliance

This comprehensive discussion, featuring Dr. Axel Domeyer, an expert in complex technology transformations across government entities, explored the key factors for improving digital services in the public sector. The dialogue centered on how governments can progress beyond basic compliance with digital standards to achieve excellence and measurable impact in their digital transformation efforts.

Framework for Digital Government Excellence

Domeyer presented a framework for assessing digital government maturity that encompasses three key elements:

1. Compliance with basic standards

2. Excellence in implementing best practices

3. Measuring impact through key performance indicators (KPIs)

This framework aims to provide a more holistic approach to digital governance, moving beyond mere adherence to standards. Domeyer emphasized that digital government excellence comprises four main components: strategy, processes/operations, technology, and organizational resources.
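As a rough sketch of how the three elements might be combined into a single maturity score, the snippet below uses made-up field names, scales, and weights; Domeyer did not present a specific formula, so this is purely an assumed illustration:

```python
# Hypothetical maturity score combining the three elements discussed:
# compliance, excellence, and impact. Field names, 0..1 scales, and
# weights are assumptions for illustration, not part of the framework.
from dataclasses import dataclass

@dataclass
class EntityAssessment:
    compliance: float   # share of mandatory standards met, 0..1
    excellence: float   # best-practice rubric score, 0..1
    impact: float       # KPI attainment, e.g. user satisfaction, 0..1

    def maturity(self, weights=(0.3, 0.3, 0.4)) -> float:
        """Weighted score in 0..1; impact is weighted highest here."""
        w_c, w_e, w_i = weights
        return w_c * self.compliance + w_e * self.excellence + w_i * self.impact

# Example: an entity strong on compliance but weaker on outcomes.
ministry = EntityAssessment(compliance=0.87, excellence=0.60, impact=0.50)
print(round(ministry.maturity(), 3))  # 0.641
```

In practice, and in line with the discussion below, such a rubric would more likely be used as a coaching device than published as a single league-table number.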

The Role of Central Digital Government Agencies

A significant point of discussion was the importance of having a central digital government agency to drive improvements across the ecosystem of government entities. Domeyer noted that while these agencies can design, build, and operate common digital solutions, their primary function is to influence the broader ecosystem. This perspective challenges the common perception of central agencies as direct implementers, reframing their role as ecosystem influencers.

Case Studies in Digital Government Excellence

To illustrate different approaches to digital government excellence, Domeyer presented case studies from the UK, Singapore, and Saudi Arabia:

1. The UK was noted for its comprehensive Digital and Data Functional Standard, which covers eight key areas including user needs, data, technology, and security. This standard serves as a best practice example for other nations.

2. Singapore was praised for its systematic approach to setting and achieving KPIs, with 15 specific metrics guiding their digital governance efforts. Singapore’s GovTech, a statutory board that can operate like a private company, was highlighted as a model for in-house capability building.

3. Saudi Arabia was highlighted for its rapid improvement in digital services rankings, now placed fourth in the digital services index. This progress was attributed to a performance improvement mindset and overachieving their Vision 2030 digital government goals.

These case studies demonstrated varying strategies for achieving digital excellence in government services.

Challenges and KPIs in Implementing Digital Excellence

The discussion addressed several challenges in implementing digital excellence and the importance of well-defined KPIs:

1. Public sector services generally lag behind private sector offerings in quality across most countries.

2. Some nations, like Germany, lack centralized technology platforms, which can hinder coordinated digital transformation efforts.

3. Measuring user satisfaction accurately presents difficulties, despite its importance as a key metric.

4. Balancing in-house capabilities with external vendor expertise remains a complex issue for many government entities.

Key points on KPIs included:

1. User satisfaction and cost-effectiveness were identified as crucial metrics.

2. The need for proactive service delivery metrics was emphasized.

3. Challenges in standardizing KPIs across agencies were acknowledged.

4. A focus on outcome-based KPIs rather than technology adoption metrics was recommended.

Domeyer suggested that fundamental KPIs like user satisfaction and cost-effectiveness are likely to remain important over time, rather than metrics tied to specific technologies.

Building Digital Capabilities in Government

The discussion touched on strategies for building digital capabilities within government:

1. Basic digital literacy for all government employees was deemed essential for digital excellence.

2. A balance between internal expertise and external support was discussed, with industry averages showing a 50-50 split between in-house capabilities and outsourced IT services.

3. The importance of business process management in achieving digital excellence was highlighted by an audience member, introducing a more technology-focused perspective.

4. Domeyer emphasized the need for a product management mindset in government service delivery.

Audience Questions and Future Considerations

The discussion concluded with audience questions, raising several points for future consideration:

1. How to effectively measure and standardize user satisfaction metrics across different government agencies.

2. The optimal balance between government involvement and private sector participation in digital governance.

3. Strategies to address the quality gap between public and private sector digital services.

4. Identifying specific technical capabilities that should be prioritized for in-house development in government agencies.

5. The potential impact of emerging technologies like AI on future KPIs for digital government.

In response, Domeyer stressed the importance of having a strong central digital government governance mechanism and the need for a balanced approach in public-private partnerships for digital services.

Conclusion

The discussion provided a comprehensive exploration of digital government excellence, highlighting the need for a balanced approach that considers compliance, excellence, and impact. Key takeaways include the importance of central digital agencies as ecosystem influencers, the value of well-defined KPIs, and the need for a product management mindset in government service delivery. While challenges remain, the dialogue offered valuable insights into strategies for improving public sector digital services and measuring their effectiveness.

Session Transcript

Nora Saneh: Are you trying to test it? Okay, it’s good, right? Yeah. I think there’s different channels. Yeah, test, test. Can you hear me? No? Yeah. Test, test. No, no. Maybe they switched it off. Okay. What? Can you hear me, guys? Can you hear me? Testing. No? It’s working? Okay. Where did it go? Okay, thank you. Perfect. Okay, thank you. I think so. Right. I think we may be at. Yep. Yep. Salaam alaikum. Can you hear me, everyone? Salaam alaikum. Can you hear me, everyone? Salaam alaikum. Good evening and welcome everyone to this workshop and welcome to Riyadh. I’m honored to have you here and to welcome you to the 2024 IGF forum, and for those who are attending online, we’re glad to have you attending as well. My name is Nora Saneh, I’m the Director of Digital Excellence and Sustainability at DGA. Today, I’m honored to welcome Dr. Axel Domeyer, a partner at McKinsey and Company. To introduce Dr. Axel: he specializes in helping clients ensure the value of complex and long-term technology transformations across multiple government entities. This includes the implementation of best practices in different domains, for example, architectural design, software development, program management, and stakeholder management. And speaking of best practices, Dr. Axel is with us today to discuss and explain what lies beyond compliance for government entities, and how we can help government entities move from compliance to excellence. So, join me in welcoming Dr. Axel. The floor is yours.

Axel Domeyer: Thank you very much, Nora, and thank you very much, Digital Government Agency, for having me at this distinguished event. Maybe a quick, small addition to my background. So, I’ve been working with governments around the world, in Germany, which is where I’m coming from, but also in the rest of Europe, very much in the Middle East, and in basically all other continents, for about the last 12 years to support digital transformation. And the type of client that I actually enjoy working the most with are central digital government agencies. So similar to the DGA in Saudi Arabia. And I’ve worked with a couple of those around the world. So such entities, central digital government agencies, they often face the expectation to fix digital government, right? To kind of like finally get it done. And I think it’s important to realize that yes, they can do quite a bit, right? So they can design and build and operate some common digital solutions for the country, which they often do. But at the end of the day, what they really do is they influence the ecosystem, right? So the ecosystem of ministries, of agencies that constitute the government, and they can’t force the ecosystem to become more digitally mature to perform better, but they need to find ways to influence this positively. There’s essentially three mechanisms how that can be done, right? So number one is you can set policies, right? And you can say, you know, like, look, you have to comply with these policies, number one. Number two is capability building, right? So you can teach people in the ministries and the agencies how to implement best practices, how to do the right thing. And then number three, and this is what I want to talk about today, is to put forward an instrument to assess the digital performance, the digital maturity of the ecosystem of individual entities and to basically point out constructive ways of improving, right? So that’s the instrument I want to focus on today. 
And what I want to argue is that it’s a good idea here to have a compliance approach, yes. So… Do entities comply with digital government standards? But then to also add two further elements, right? So number one is excellence, right? So have a more detailed view of what actually constitutes the best practice in managing digital in a government entity and help agencies to achieve this best practice. And then I would also argue that as a third element, you will require an impact approach as well, right? So it’s not just enough to say this is how you should do it, but you should also track if you’re actually delivering results. And so these are the three elements that I believe you should have: compliance, excellence, and impact. So let’s start with compliance, right? And I think Saudi Arabia is actually a great example of having a very mature approach to tracking and managing the compliance with digital government standards and policies, right? That’s called the QIA statistic. The governor of the DGA was actually welcoming the crowd at IGF today, and he shared the latest number for Saudi Arabia in 2024: 87% compliance of all entities with digital government standards. So it’s gone up significantly since 2021. I think that’s good news, right? And clearly it’s very helpful to track if the agency or the entities are complying with the basics, right? So for instance, do you have a cloud computing unit in your agency? You should, right? So let’s tick that box and let’s make sure that everybody ticks that box. As you can see, I mean, we are now approaching kind of like a hundred percent, right? On the basics, right? Not quite, right? But getting there. I think it’s important to also have more, let’s say additional and sophisticated tools to see how you can go beyond the basics, right? This is where I believe the excellence approach comes in. And before I talk about excellence, right? So let’s get some facts about this topic, about this problem space, right?
So that we kind of get context here, right? So fact number one, digital government ecosystems are actually very complex, right? So typically you have like anywhere between 150 and 200 individual entities in the government, which is a whole lot more complex than when you’re running a private sector organization, right? You know, my friends in the private sector in digital McKinsey, they’re always kind of astonished at how complex it is to run digital transformation in the government, right? So you typically have like about 10 sectors and a hundred plus government agencies. So that’s why a lot of governments have now implemented a central digital government agency like the DGA in Saudi Arabia, right? Which is probably the only way to handle this complexity effectively. So that’s fact number one. Fact number two: 10 years ago, right, when I started in the field, I mean, it was kind of new, right, to government, and there were a lot of paper-based things happening in the clients I’ve been working with. That’s no longer the case, right? So I think digital government has reached a certain degree of maturity now, right? So that it’s not enough to kind of like focus on the basics, right? More than 50% of all countries in the world have fully operationalized digital public infrastructure frameworks. The majority of OECD countries are actually already leveraging AI. So if you really want to go beyond the basics, then you need individual entities to do what best practice requires, right? And you need to support them in doing so, right? So just checking the boxes on the basics, it’s not going to be enough. Here’s what this looks like in our example, right? So we said compliance, have a cloud computing unit, right?
Excellence in this particular context would mean, so for instance, you could say you need a cloud transformation and adoption plan, right? So there’s a detailed migration plan, there’s cloud adoption monitoring in place. So you basically define kind of like a few things here, right? That go beyond just kind of like the basic thing of having a cloud unit in place, right? And you can add many more things here, right? So you could say you need a cloud financial management, or FinOps, capability, right? Which enables you to actually deliver the financial savings that you typically hope to realize, right? When you start a cloud program. So that’s what excellence looks like on this particular example, right? But then again, it’s not enough to just look at how you’re doing things, right? Like whether you’re doing it with the right style and looking good, so to speak, following best practice while you do it. I mean, you really want to make sure that you actually track if you also deliver the goods, right? So in this particular example, right? Like if we stick with the cloud example, you would want to track, are you actually delivering cost savings? Are you reducing time to market, right? For innovations, digital innovations that you’re pushing. At the end of the day, as a central digital government agency, you want to support the ecosystem to become more mature, to perform better. These are the three ingredients that will be helpful, right? So basic compliance; excellence, which checks kind of like a more sophisticated view of what the best practice behaviors are for the entity; and then you also track the actual outcomes of investing in digital, right? So are you getting return on investment? Right, so in order to, you know, like also involve the audience here a bit, and, you know, I hope this is going to be a dialogue when we get to the Q&A, but let me ask you this question, right?
So in your country, and I see there’s a variety of countries represented here. In your country, what do you think, what is the public administration emphasizing in terms of fostering the overall digital government ecosystem? Is it more compliance? Is it more excellence? Is it more impact? And please go to the survey, and then we’re going to see a few results on the screen in a moment. All right. So this is quite interesting, right? So it confirms a bit the hypothesis, right, that I had when writing this talk, that in most places, compliance is kind of where you start, right? I mean, it’s the most basic of the three ingredients. I mean, you should have it. But it also, let’s say, only gets you so far, right? So it gets you to a certain point. But at the end of the day, it’s not kind of like the most sophisticated element. But it is kind of like where most countries are today, right? So let’s take a moment to decompose, to unpack digital government excellence a little bit, right? So what could be included if, in your country, your central digital government agency had to develop an assessment rubric? Are the entities, the ministries in your government, performing well on digital government excellence? What would be the dimensions, right, that you could include? And in my view, there’s essentially four dimensions here, right? Strategy, right? The most foundational component. And here you can ask questions such as, is the strategy bold enough, right? So are you actually setting ambitious targets? Does it link in a meaningful sense to the business strategy of the entity, right? Or is it just a cookie-cutter digital strategy that could just as well be true for a chocolate factory, right? So does it link to the business strategy of your entity? Is there a clear business case
in the strategy for the investments that you’re making, right? So are you really putting forward the ROI that you expect from implementing the strategy? And also, does it look into the ecosystem, right? So beyond the entity into the ecosystem, towards the partners, which could be other government entities and also suppliers in the private sector. Second element is processes and operations. So this will cover questions about, you know, do you have the right governance framework in place, right? So do you have the right roles? Do you have the right processes defined? Do you use agile ways of working? Does your funding cycle actually support agile ways of working? And that’s one of the challenges I often see in government entities, right? That yes, we want to do agile, but the way we actually govern ourselves, in particular how we govern funding, doesn’t actually support agile ways of working, right? So in your assessment rubric, that could be a question you could ask. Third category is technology, right? So that would include questions about your architecture. Do you have a modern kind of platform architecture that clearly distinguishes between things that everybody should use in the same way and products, right, that you build on top of the architecture? So are you following modern architecture paradigms here? Are you using cloud in the right way? Do you follow cybersecurity standards? So that would be the technology component. And lastly, we have organizational resources, right? So the most important question in that category is, do you have the right capabilities in-house, right? And do you work with outside vendors in an effective way, right? So that in sum, right, between
And different governments have done this in different ways. But this could give you a basic overview of what you could potentially include here. Once you have your assessment framework in place, the next question becomes: how do you actually use it? Is it a scoring framework, where you go out to each entity and score it each year, everybody gets a score, the score is published on the website, and then it becomes a little bit like going to school and taking an exam? There’s a certain risk there that entities kind of overstate their digital performance, because they don’t want to look bad. So from my point of view — and some of the governments I’ve worked with that have such an assessment framework in place would rather use it not as a schooling device but as a coaching device — you have an assessment framework that entities can use to self-evaluate, to see how they are doing against concrete things that they could be doing better. And then the central digital government agency would coach them, guide them, so to speak, on how to improve on these dimensions. But the results don’t necessarily get published, because saying, oh, you know, you did badly this year — in my experience, that’s not particularly constructive. If you’re a digital government agency and you want a good working relationship with the entities, it is much better to use it as a coaching device. Impact, on the other hand, is a different story. On impact, I’m firmly convinced that you should have a set of KPIs that you’re measuring, and then you publish them. So if you say, I want to save 500 million US dollars, or whatever currency you are in, through using digital, then you should measure if you actually get there, right?
And this is something that you should inform the public about, in order to make it a firm commitment that is then much more likely to be followed through with. All right, next audience question. I talked a little bit about KPIs for digital government impact, right? So let’s see what KPIs you think are constructive to measure and publish nationally in order to track the progress of digital transformation in government. Yes, I think these are a few good ones. And my sense is you probably only get one word to type, so it’s a little bit hard to formulate an actual KPI, but I like these concepts. Satisfaction, in my view — that’s the ultimate KPI. Digital government is all about making life easier for citizens and making it easier to run a business so the economy can grow. And how do you see if you achieve these targets? You ask people and businesses how they are doing, and this is for me the major KPI that everybody should measure. Interestingly, there are some governments that actually do this, but overall, let’s say the enthusiasm to publish these national KPIs on how you are actually doing on service delivery is somewhat limited in most places. And I think the places that do best — Saudi, for instance, has made a huge jump in the digital services index on the EGDI — and I think one of the reasons why they’ve made this big jump is because they actually publish the outcomes of their digital government investments, and they hold the ecosystem accountable to actually deliver. So in my view, that’s really key. Satisfaction is great; experience is related to this; usage is another very important KPI. So in Germany, where I come from, we have a lot of digital government solutions online.
I would say the digital adoption, the usage rate, is not particularly high, right? And if this was actually published on a regular basis for every service that’s online, my sense is we would probably be doing a little bit better. So this is also a great one to publish. But yeah, thank you so much for your contributions. I see you are thinking about this very much along the same lines that I was, and thanks for the engagement. So now, I’ve given you a little bit of theory and a framework — let’s look at some real case studies, to show that this is not just a theory I came up with and want to propose, but something that’s actually happening. And if we look at excellence, one of the countries or case examples that I like the best is the United Kingdom, which in 2020 published a digital and data functional standard — which is actually not a scoring framework. So this is very much what I described as a coaching and teaching device, which basically sets forth a number of best practices that government entities can follow in the space of digital and data. It’s actually part of a wider web of functional standards in the government: there’s one for HR, there’s one for finance, there’s one for project delivery, there’s one for property management. So the UK government basically has such a standard for all the functions that are common across the government — things that are not unique to a particular agency, but that are common across entities. And what they cover in the standard are not four areas, but eight areas. They very clearly prescribe: these are the roles that you should have in your entity. You should have a chief digital officer.
You should have a chief data officer. You should have some person that’s accountable for all of digital within the wider government ecosystem — one neck to choke, so to speak, for digital performance. So that’s the governance section. There’s also something in there about the processes. What I find interesting in the UK is that they’re very focused on assuring the value of the projects and the investments they make. So in the governance section of the standard, they have some very clear guidelines on how you should do this. Some parts of it are mandatory, by the way — there is language in there where it says you must do this; it’s not optional. And then there’s a lot of language where it says, depending on your situation in the entity, you should consider a particular way. So in that sense, it’s very much not a sloppy checklist device. The other things that are included are service management, which is very much about how you should deliver services in the government — how do you make them user-friendly, and what’s the setup you should have in place to manage a service? There’s technology management, which covers architecture and IT operations management. And there are some standards referenced in there: for specific technical topics, the digital and data standard would reference some of the detailed standards, such as the cloud policy and standard, or the cybersecurity standard. So in this way, if you are an IT or digital officer in an entity, you really know exactly what to do. And in this way, I think what the Central Digital and Data Office — the central digital and data agency in the UK — seeks to accomplish is to really uplift the digital and data profession in the UK government in a standardized way.
So that everybody rises to the same level, and people can interact with each other — they’re all speaking the same language and following the same standards. So if I’m the chief digital officer and you are the chief digital officer, we know what each other’s responsibility is, and in that way, it becomes much easier to collaborate across the government in a professional way. So that’s how they think about this. As far as I can see, this is probably the most comprehensive effort to drive digital government excellence that I’m aware of which is currently in place. I mean, I haven’t studied all 180 countries, so there might be others — and I’m looking forward to the Q&A as well, to hear what you think about your countries or other countries that are relevant here. But the UK is, in my view, the most mature in terms of driving an excellence perspective. The second example I want to highlight is Singapore, and I think Singapore is a great example of how to act in a very systematic way. Their digital government strategy, which is called the Digital Government Blueprint — I think the cycle just finished. They came up with it in 2018, it was to be implemented by 2023, and I suppose they’re working on the next cycle now. In that blueprint, they put forward 15 KPIs — not 120 KPIs, 15 KPIs. And then they got very serious about making sure they deliver on all these KPIs. So: 70% satisfaction with resident and business services. 100% online payment — no entity allowed not to have online payment for a service. Every civil servant with at least basic digital literacy — and that’s actually been checked, so people actually go through the training and the certification to make sure that this is the case. And some more technical things,
which would be equally important. An example of this: 90 to 100% of data fields included in government IT systems machine-readable and accessible by an API. If I think about the governments I know, that’s an incredibly ambitious goal. I would say the European governments I work with would probably be much lower than this. But Singapore set this target, and by and large, they got there. They didn’t hit every KPI in 2023, but by and large, they got there, and over the five years, they saw some very significant improvements on many of these dimensions. And they published this, and they held themselves accountable to it and stayed honest, so to speak, on the strategy they wanted to deliver. And then, yeah, let me close with Saudi Arabia, our host today — thank you very much for having all of us. So we’ve talked about Qiyas and your very systematic approach to ensuring compliance, which I think is very inspiring and mature. I also think — in terms of KPIs, I have no other clients in my line of work where the people I work with are more enthusiastic about KPIs. Saudi Arabia, for me, is the land of KPIs. And as I said, I think this is the reason why you have made these amazing strides over the past few years, and you are now number four — world class — in the digital services index on the EGDI. So on impact, you’re also doing very well. And I understand you are working on how to address this in the future. So from my perspective, with all of these ingredients in place, we can probably see more progress in digital government. We’re excited about what Saudi Arabia is going to do in the next couple of years in digital government — keep inspiring us. And thanks very much for having me, having us today at IGF. Thank you.
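The four-dimension assessment rubric and its use as a coaching device, as described in the talk, can be illustrated with a small sketch. This is purely illustrative: the dimensions and questions are paraphrased from the talk, while the 1-to-5 scoring scale, the `coaching_summary` helper, and the example scores are assumptions, not any government's actual framework.

```python
# Illustrative self-assessment rubric sketch (assumed design, not a real
# framework): four dimensions, each with a few questions answered 1-5.

RUBRIC = {
    "Strategy": [
        "Is the strategy bold enough (ambitious targets)?",
        "Does it link to the entity's business strategy?",
        "Is there a clear business case / expected ROI?",
        "Does it look beyond the entity into the ecosystem?",
    ],
    "Processes and operations": [
        "Are governance roles and processes defined?",
        "Do funding cycles support agile ways of working?",
    ],
    "Technology": [
        "Is there a modern platform architecture?",
        "Are cloud usage and cybersecurity standards followed?",
    ],
    "Organizational resources": [
        "Are the right capabilities available in-house?",
        "Are external vendors engaged effectively?",
    ],
}

def coaching_summary(scores):
    """Average the 1-5 answers per dimension; flag weak areas, weakest first."""
    averages = {dim: sum(ans) / len(ans) for dim, ans in scores.items()}
    focus = [dim for dim, avg in sorted(averages.items(), key=lambda x: x[1])
             if avg < 3.0]
    return averages, focus

# Hypothetical self-assessment by one entity (one answer per question above)
scores = {
    "Strategy": [4, 3, 2, 3],
    "Processes and operations": [2, 1],
    "Technology": [4, 4],
    "Organizational resources": [3, 2],
}
averages, focus = coaching_summary(scores)
print(focus)  # → ['Processes and operations', 'Organizational resources']
```

Used as a coaching device, the output stays a private list of the weakest dimensions for the entity to work on with the central agency, rather than a published score.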

Nora Saneh: Simple questions, but allow me to ask first: you have covered several dimensions across the government entity. So what do you think would be the main key areas, the critical areas, that entities need to focus on, and what would be the challenges, from Saudi Arabia’s perspective?

Axel Domeyer: So I think the four areas I talked about are probably a good starting point: strategy, organizational resources, technology management, and services and operations. I think the main challenge here is: how granular do you get? When you set up an assessment framework for individual entities, there are a lot of things that people can be doing in the right way or the wrong way. But you can’t cover everything — the UK digital and data standard has 40 pages, and when you read it, it actually still feels sometimes a little bit high-level. And then they refer to individual substandards, such as the cloud standard and the cyber standard. So I think the main challenge is to hit the right level of abstraction, so that it’s still digestible for the entities you’re working with. If you hit them with a 500-page manual and try to regulate every single thing they’re supposed to be doing, people are not going to enjoy this. These are independent professionals — they know how to run their IT and their digital function. So you need to find a level of abstraction that’s informative enough that people actually learn something from it, but you’re not overdoing it, yeah? That, I would say, is the main challenge when you address this.

Audience: Okay, sorry. Should we say that compliance should always be related to the regulatory framework? That is, if we judge compliance, somehow we should anchor it in the legal aspects, so that the regulatory framework really is the framework that we measure against.

Axel Domeyer: Yeah, I think that’s a nice way of thinking about it, right? Because there are some things where you don’t want to coach people to do it — you want to make people do it, right? And if they don’t, then you have a problem. But again, this shouldn’t be a catalog of 500 pages where you spell out to the last detail what the regulation requires. You want to give people some freedom to run their digital function in a way that’s suitable to their organization. But then there are some things you want to put down in a regulation. For instance, cybersecurity, I think, is one of those areas where you want to be very precise and prescriptive about how people should approach it, and you don’t want to give too much space for interpretation about what exactly should be done. Then, in my view, it should be a regulation, and you should have a hundred percent compliance — you’re not going for 80 or 90 percent, you’re going for a hundred percent on those. Everything else, where it’s more of a “this would be good, this would be professional, it would be nice if you had this” — I would put into the excellence category. Oh, a lot of questions.

Audience: Okay, I have the mic, so — yeah, thank you. My name is Zoran Jordanoski from UNU-EGOV. We’ve been dealing with all these online services for approximately more than 20 years. And my first key question is: we have online services provided by the private sector and the public sector, and the public sector services are, in general, not even 50 to 60 percent of the quality of the private sector services. So my first question is, what is the piece that is missing in the public sector services? And let’s break the dilemma — I won’t accept the argument that governments lack money or modernization in public administration, because we know that even some of the high-income countries — I will take the example of Germany — can afford the latest new technology, can afford to modernize the public administration. And yet in Berlin, to register a newborn will take you two weeks just to get into the registry. So what is the piece that is missing for public services to have the same quality as private sector services? And my second one, here it comes: what do you think is the role of soft regulations, like the standards you mentioned? One of the rules of the UK service standard is “understand your users’ needs.” Do you think that governments understand what users want and what users need?

Axel Domeyer: Great. So it’s kind of the 30-billion-euro question in Germany, right? So 30 billion is roughly how much we spend on public sector IT — a huge amount of money. And I think it’s a very legitimate question to ask what we are actually buying, and you gave a good example of why we would want to be skeptical about what we’re buying with these 30 billion or so. The answer to that question is of course not simple, given there are many reasons why this isn’t working as well as we would hope. And by the way, this is not an uncommon phenomenon. A few years back we studied, I want to say, around 10 countries across all world regions, and there was not a single country where the public sector did better than the private sector in terms of service quality. So this was a common finding everywhere. There might be exceptions — I recently read something about a digital government ranking here in the Middle East where it was actually quite close, where private sector and public sector were doing about equally well. So I think in general, it’s possible to reach that state. How do you get there? If you ask me with regard to Germany, the main challenge I would say we have is on the technological platform side, because we don’t have one. We have a very complex digital government ecosystem with hundreds of agencies — the states, like Berlin, the municipalities — and everyone’s working on their own technology. So of the 30 billion euros that we’re spending, for each individual service and each individual entity it actually isn’t that much money; it’s just when you add all of it up that it becomes a lot. But as a result of the subcritical spend and the low maturity at the level of individual entities, the outcomes are what they are. So I think that’s one of the main reasons I would highlight.
So you need some form of technology platform governance at the national level. Once you have that, I think the next important thing is to think about how everyone in the ecosystem can work according to best practices — develop their services, manage their services, maintain their data, and so on and so forth, in a best-practice way. And I think that’s where a digital government maturity assessment or excellence framework comes in quite handy. But it’s the second most important thing — important, but second most important, in my view. Do we know the needs of the users in government? I would say on average, probably less so than in the private sector, because if the private sector doesn’t do it, they go out of business — a slightly stronger incentive to look after the user. But I don’t think there’s a structural obstacle to doing this. There are many government services around the world which are fantastic and really speak to the needs of their users. So I don’t think it’s structurally impossible. But I think empirically speaking, you’re right that it’s not the case as much as we would like it to be.

Audience: I’d like to ask about the future of KPIs in government, you know, with technology moving at a really rapid pace. Until now, I think they’re a little bit stagnant — digital maturity was pegged against 100% of services digitized end-to-end, UX, UI, platformization. What is your sense? What is the next batch of KPIs for digital government, given that AI is there? I mean, it’s a big buzzword, but what are we really talking about in the future of digital government?

Axel Domeyer: Yeah, that’s a great question. I think in some sense, the good things in life stay stable over time — they don’t change that much. So I would expect the ones that we have right now to continue to be important: user satisfaction, the cost of investing in digital and what you get out of it. I think these will remain important KPIs. I think there’s a certain temptation for governments to measure the adoption of specific technologies. A while ago it was, you know, how many blockchain projects do we have in government — which I happen to think is not a super important KPI. So I wouldn’t go for measuring the adoption of individual technologies. What I would always try to do is to measure some actual outcome that the ultimate recipients — people, businesses, government entities themselves — actually care about. One thing that I’ve seen becoming more prominent is the question of how many services we have moved to a completely proactive mode of delivery, where you have zero-touch delivery: you just get the service when you’re entitled to it. I think that’s a great KPI. And then of course, if you want to make sure that you stay innovative, you can measure things like how many projects are we doing in AI, how many people have been skilled in AI. These are temporary things that for a certain period might be useful to measure. But for the real KPIs that you put at the heart of your strategy, I would say the evergreens are a good start, and then maybe every five to ten years you get a new one. Yes.

Audience: So I think one of the most important things affected by digital or technological advancements in government entities and organizations are business processes. So what’s the role of business process management in digital excellence?

Axel Domeyer: I think it’s actually huge. Business process management is not the most exciting or innovative term, and it has been around for a long time. But in the public sector entities I have worked with, this is an area where they can typically improve. And how can they improve? They often have a process map, but it’s not really focused on what outcomes these processes are delivering for the constituents — for residents, for businesses, for other government entities. So the best entities that I’ve seen have a business strategy where they say: this is what we actually want and need to deliver. And it goes beyond “we need to implement the rules that apply to us” — I mean, that’s not a strategy; you should be doing that, yes, but beyond this, you should have a view of what you are actually delivering for the community. And then you link your processes to these outcomes. So I work a lot with labor agencies, for example — the process of matching job seekers with job opportunities. This is a process, or a product, and you can break down how it works, and you should be doing this. And then you should measure how well this process is actually delivering on these KPIs, and you should codify what the process looks like today, because a lot of the entities I have seen kind of know how they do it, but in this location they do it this way, and in another location they do it another way.
So the next level is to really standardize how you’re doing it, and then to continually improve it — to have what I would call a product management mindset. In the private sector, you would say product management, not process management. But service delivery in public entities is usually process-driven: you start somewhere, there’s a transaction that’s initiated by the citizen, for instance, then it goes somewhere in the agency and then goes back to the citizen; there’s a little bit of back and forth; there’s some checking against the rules that apply — there are often very clear rules that apply to a service or a process. And you should be very well aware of how you model what you’re actually doing, and then you should always be on the lookout for ways to improve this. So systematic business process management, understood in the right way, is a strategic exercise — not, you know, a way to employ a very large number of consultants; I can say this a little bit self-critically about our industry. Business process management as a strategic exercise is, I think, key, and should be part of a digital government excellence standard.
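The measurement idea here — codify the process, then track how well it delivers against an outcome KPI — can be sketched in a few lines. The case log, the field layout, and the 10-day target are hypothetical assumptions for illustration, not an actual agency system.

```python
# Minimal sketch (assumed data model): once a process such as job matching
# is codified, log each case and measure an outcome KPI like end-to-end
# processing time and the share of cases meeting a service target.
from datetime import date

# Hypothetical case log: (case_id, date_started, date_closed)
cases = [
    ("A-1", date(2024, 1, 2), date(2024, 1, 9)),
    ("A-2", date(2024, 1, 3), date(2024, 1, 24)),
    ("A-3", date(2024, 1, 5), date(2024, 1, 12)),
]

def cycle_time_kpi(cases, target_days=10):
    """Return average cycle time in days and the share of cases on target."""
    durations = [(closed - started).days for _, started, closed in cases]
    avg = sum(durations) / len(durations)
    on_target = sum(d <= target_days for d in durations) / len(durations)
    return avg, on_target

avg, on_target = cycle_time_kpi(cases)
# With the sample log: durations of 7, 21, and 7 days
```

Comparing this KPI across locations is what exposes the "here they do it this way, there another way" variation the speaker describes, and gives the continual-improvement loop something concrete to work on.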

Audience: Okay, thank you for your sharing. It’s very valuable and helpful for us. I have a question here. As I saw from the example of the Kingdom of — not the Kingdom of the UK, the Kingdom of Saudi Arabia — the EGDI ranking is top-ranking. I think it is a very good ranking, a very good result. So my question is: for the other countries who want to improve their ranking, what are the main areas they should work on? That’s the first question. The second one: I saw some of the data is about user experience and satisfaction, and I think it’s very difficult to measure such a KPI. So do you have some good examples of practice — how governments can measure user experience and satisfaction? This is what I want to ask.

Axel Domeyer: Sure. So, I mean, Saudi is amazing, right? And I don’t know if you can replicate going up 67 ranks in a single two-year cycle on the digital services index. So this might be kind of a historical singularity, in a way. So what did they do, and what can other countries do to get to a similar level? It will depend a little bit on where you are as an individual country. Some countries — Germany, for instance — are really lacking, I think, on the technology platform side, which is what I mentioned in response to an earlier question. Other countries might have different issues. But I think what the Saudis did really well over the last couple of years was this performance improvement mindset. They set themselves a target as part of Vision 2030: we want to be in a certain place by 2030. And I think in digital government, they overachieved. And they didn’t just set themselves the target — they measured it in many, many different dimensions, including the tough ones: user satisfaction, the effectiveness of the money that’s actually being spent, digital adoption. A lot of the clients that I have worked with say, a little bit like what you just said: oh, isn’t that hard to measure? And isn’t it different in this agency from that agency, and how can we have a single national KPI? And how do we get these agencies to report the KPI in the first place, and, you know, let’s not overdo it with a performance mindset. Saudi is not a small country — it’s almost 40 million people, a very complex and large government — and they didn’t follow these precepts of “oh, it’s hard, let’s not do it.”
It’s like the ministries won’t go along, and so on. So I think once you have a strong central digital government governance mechanism in place, then you can follow through on this. And you can say: look, measuring satisfaction is not that hard. Almost every company does it, and many government entities do it, and there are established methods of doing it — some use NPS, some use CSAT — and you just need to align on which one you’re using, and then you need to make the entities do it. That’s usually the hard part. The tricky thing that I have observed with the clients or the contacts that I have worked with is that, yes, you have somebody who is centrally responsible for driving digital government maturity, but they don’t actually have the competences, the powers, to make those decisions and make all of the entities contribute in a certain way. So I would say — last year there was an amazing report, I think it was called the Digital Leaders report, and I think one of the key findings of that report was that all the countries that actually do reasonably well on digital government have a central digital agency in place, and they give the central digital agency enough power to move the ecosystem, instead of just running around and telling people obvious things. So the things that I’ve been talking about today are not particularly complicated or sophisticated — they’re kind of obvious. The challenge is to actually implement them in a complex government ecosystem. Hello.
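The two established satisfaction metrics mentioned here are simple to compute once entities report survey data on an agreed scale. A minimal sketch with made-up survey responses, using the standard conventions: NPS on a 0-10 scale, with promoters at 9-10 and detractors at 0-6; CSAT as the share of 4s and 5s on a 1-5 scale.

```python
# Standard satisfaction metrics; the survey responses below are made up.

def nps(scores_0_to_10):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores_0_to_10)
    detractors = sum(s <= 6 for s in scores_0_to_10)
    return 100 * (promoters - detractors) / len(scores_0_to_10)

def csat(scores_1_to_5):
    """CSAT: share of respondents answering 4 or 5 on a 1-5 scale."""
    return 100 * sum(s >= 4 for s in scores_1_to_5) / len(scores_1_to_5)

nps_value = nps([10, 9, 8, 7, 6, 3])   # 2 promoters, 2 detractors of 6 → 0.0
csat_value = csat([5, 4, 4, 3, 2])     # 3 of 5 satisfied → 60.0
```

The methodological alignment the speaker calls for is exactly the choice between these (and the response scale): once every entity reports on the same metric, a single national KPI is just an aggregation.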

Audience: Thank you. Can you hear me? Yes. So in your presentation, you mentioned capabilities as an important supporting factor. What are some of the digital or technical capabilities that we should focus on to propel excellence, and which should we build in-house?

Axel Domeyer: Right, right. So I think the approach that Singapore has — everybody should have basic literacy — is very important. Everybody should have it, in particular people in leadership positions. So having a solid curriculum covering the basics: how do you do an IT project, why is it useful to do certain things in the cloud and others not. A certain level of digital literacy, I think everybody should have. I think the trickier question is how much real technical capability you build in-house versus how much you rely on outside vendors. There are people who say, look, governments are too dependent on system integrators and outside vendors — and that’s probably true, but you also shouldn’t go overboard in terms of trying to do everything in-house, because nobody does that. Even the most successful organizations work with vendors. I think the typical share of external and internal spend is on average 50-50; some have 60-40 and others have 40-60. But there’s always a very big share of outside capabilities that you’re accessing. That’s logical — government entities, and also most businesses, are not in the business of having the most up-to-date digital capabilities on everything; they’re in the business of their business. So they should access external vendors to a certain degree. But I also think that every company or organization is a digital organization today.
So there is a certain degree of in-house capability that’s helpful to have. And I think, again, I mean, Singapore is for me kind of like they’re the KPI champions and they are the capability champions, right? So another thing that I really like about Singapore is GovTech Singapore, right? Which is their kind of IT delivery arm in the government. And GovTech Singapore has actually been set up, I think they call it a statutory board, which means it’s not a typical public sector agency, right? So they’re not bound by kind of like the same restrictions as a typical public sector entity, but they can essentially operate like a private company. And they operate like a digital company, right? So like a Google or Accenture or what have you, right? So they would hire kind of like the same level of talent. They pay them the same amount of money, but they also manage them in the same way, right? Like how a Google would manage kind of like their star architect, right? So if you deliver your projects, great, right? You continue in your job. If you don’t, maybe it’s time for you to look for a position somewhere else, right? So GovTech Singapore is a very interesting case: it’s a public sector in-house capability organization, but it’s run like a private sector organization, and it performs much, much better than all the other IT delivery organizations I know of. So having that level of capability is really crucial. But you won’t get there the traditional way, let me put it that way. All right. I’m getting some signs that I need to kind of like wrap up from the back. I don’t know. Maybe you can do one more question. Yeah.

Audience: Okay. Thank you. Thank you for sharing. From your talk we got some idea about the economy. You know that I’m from China, and we can think of the economy in two categories: one is the market economy, and the other is the command economy. You also mentioned digital governance, and we know there is a balance between technology development and the economy. If the government is involved too much, maybe technical development will slow down. So my question is: what do you think the KPIs of digital governance should be, and why? Sorry, can you repeat the question? I’m not sure I quite caught this. Yeah. My question is, what do you think about the KPIs of digital governance on the management side? Because, as we see it, there are two categories: the market economy and the command economy. Yeah. So what’s kind of the right balance between the government and the private sector? Yes. Yeah. It’s kind of a complicated question, right? I mean, my sense is

Axel Domeyer: that what most government entities do like in their core mission, I mean, they’re running a fairly simple business, right? I mean, they’re getting some highly structured and regulated services to the end user. And I don’t think you need to kind of, I mean, it’s not like you’re building a spaceships, right? So my sense is as long as you have kind of like a reasonable level of in-house architecture capability, you have like some software engineers, right? You can kind of like build this and direct kind of the private sector contractors that you work with. Then you’re in good shape, right? So you don’t need to go like a hundred percent, right? Like in-house capability. But you also shouldn’t go kind of like 5% in-house capability and 95% outsourced capability, which is stick, right? But it’s something that I’ve seen in many places, right? So kind of like a very kind of lopsided balance between government and private sector in terms of delivering IT, right? For public sector entities. But my, you know, like in terms of, you know, like if you really want to get a number, I think 50-50 is actually quite good, right? Because I mean, that’s kind of like the industry average, right? If you look at kind of like Gartner IT spend data, that’s kind of like where you typically end up with, right? Across kind of like all industries, 50% is kind of like your own people and like investments that you make directly. And the rest is kind of like working with whoever is your preferred private sector IT provider. So I think government is not kind of like structurally different from this, right? I mean, they’re not kind of like, they don’t need to be kind of like super innovative, like Google, right? And they’re also not like in a place where you could say, oh, you actually don’t need anyone in house, right? So I would say, you know, just go for the industry average, which would be like roundabout 50-50. Great, great discussion and thanks for having me. Thank you. 
I think I might still be on, right? Did you help me get unplugged? Thank you. Thank you. Thank you.



Axel Domeyer

Speech speed

164 words per minute

Speech length

7858 words

Speech time

2872 seconds

Three key elements: compliance, excellence, and impact

Explanation

Axel Domeyer proposes a framework for digital government excellence consisting of three key elements: compliance, excellence, and impact. He argues that these elements are crucial for improving digital government performance and maturity.

Evidence

Examples of compliance (QIA statistic in Saudi Arabia), excellence (UK’s Digital and Data Functional Standard), and impact (Singapore’s KPIs) are provided.

Major Discussion Point

Digital Government Excellence Framework

Differed with

Audience

Differed on

Approach to digital government excellence

UK’s Digital and Data Functional Standard as best practice example

Explanation

Axel Domeyer presents the UK’s Digital and Data Functional Standard as a best practice example for digital government excellence. He highlights its comprehensive approach covering various aspects of digital governance.

Evidence

The standard covers eight areas including roles, processes, service management, and technology management.

Major Discussion Point

Digital Government Excellence Framework

Singapore’s systematic approach with 15 KPIs

Explanation

Axel Domeyer praises Singapore’s systematic approach to digital government strategy, which includes 15 key performance indicators (KPIs). He emphasizes the importance of setting clear targets and holding the ecosystem accountable.

Evidence

Examples of Singapore’s KPIs include 70% satisfaction with resident and business services, 100% online payment, and 90-100% of data fields being machine-readable and accessible by API.

Major Discussion Point

Digital Government Excellence Framework

Agreed with

Audience

Agreed on

Importance of KPIs in digital government

Saudi Arabia’s progress in digital government rankings

Explanation

Axel Domeyer highlights Saudi Arabia’s significant progress in digital government rankings. He attributes this success to their systematic approach and focus on key performance indicators.

Evidence

Saudi Arabia is now ranked fourth in the digital services index of the UN E-Government Development Index (EGDI).

Major Discussion Point

Digital Government Excellence Framework

Agreed with

Audience

Agreed on

Importance of KPIs in digital government

Lack of centralized technology platform in some countries

Explanation

Axel Domeyer identifies the lack of a centralized technology platform as a major challenge in some countries’ digital government implementation. He argues that this leads to inefficient spending and suboptimal outcomes.

Evidence

Example of Germany spending 30 billion euros on public sector IT without a centralized platform.

Major Discussion Point

Challenges in Digital Government Implementation

Importance of user satisfaction and cost-effectiveness

Explanation

Axel Domeyer emphasizes the importance of user satisfaction and cost-effectiveness as key performance indicators for digital government. He argues that these ‘evergreen’ KPIs should be at the heart of any digital government strategy.

Major Discussion Point

Key Performance Indicators (KPIs) for Digital Government

Agreed with

Audience

Agreed on

Importance of KPIs in digital government

Need for proactive service delivery metrics

Explanation

Axel Domeyer suggests the need for proactive service delivery metrics as a new KPI for digital government. He argues that measuring the number of services moved to a completely proactive mode of delivery is a valuable indicator of progress.

Major Discussion Point

Key Performance Indicators (KPIs) for Digital Government

Focus on outcome-based KPIs rather than technology adoption

Explanation

Axel Domeyer advises focusing on outcome-based KPIs rather than technology adoption metrics. He argues that measuring actual outcomes that matter to citizens, businesses, and government entities is more important than tracking the adoption of specific technologies.

Major Discussion Point

Key Performance Indicators (KPIs) for Digital Government

Agreed with

Audience

Agreed on

Importance of KPIs in digital government

Importance of basic digital literacy for all government employees

Explanation

Axel Domeyer stresses the importance of basic digital literacy for all government employees, especially those in leadership positions. He argues that this is crucial for building digital capabilities in government.

Evidence

Singapore’s approach of ensuring basic digital literacy for all civil servants is mentioned as an example.

Major Discussion Point

Building Digital Capabilities in Government

Singapore’s GovTech as model for in-house capability building

Explanation

Axel Domeyer presents Singapore’s GovTech as a model for building in-house digital capabilities in government. He highlights its unique structure and management approach that allows it to operate like a private sector organization.

Evidence

GovTech Singapore is set up as a statutory board, allowing it to operate like a private company and attract top talent.

Major Discussion Point

Building Digital Capabilities in Government

Need for balance between internal and external capabilities

Explanation

Axel Domeyer argues for a balance between internal and external capabilities in government IT. He suggests that while some outsourcing is necessary, governments should maintain a significant level of in-house capability.

Evidence

Industry average of 50-50 split between internal and external IT spend is mentioned as a benchmark.

Major Discussion Point

Building Digital Capabilities in Government

Agreed with

Audience

Agreed on

Need for balance between internal and external capabilities


Audience

Speech speed

142 words per minute

Speech length

776 words

Speech time

327 seconds

Public sector services lag behind private sector in quality

Explanation

An audience member points out that public sector digital services are generally of lower quality compared to private sector services. They question why this disparity exists, especially in high-income countries that can afford the latest technology.

Evidence

Example of Berlin, where registering a newborn can take two weeks.

Major Discussion Point

Challenges in Digital Government Implementation

Differed with

Axel Domeyer

Differed on

Approach to digital government excellence

Difficulty in measuring user satisfaction

Explanation

An audience member raises the issue of difficulty in measuring user satisfaction and experience for government services. They ask for examples of good practices in this area.

Major Discussion Point

Key Performance Indicators (KPIs) for Digital Government

Challenges in standardizing KPIs across agencies

Explanation

An audience member highlights the challenges in standardizing KPIs across different government agencies. They note that different agencies may have different needs and contexts, making it difficult to apply a single set of KPIs.

Major Discussion Point

Key Performance Indicators (KPIs) for Digital Government

Importance of business process management in digital excellence

Explanation

An audience member emphasizes the importance of business process management in achieving digital excellence. They ask about the role of business process management in the context of digital government excellence.

Major Discussion Point

Building Digital Capabilities in Government

Agreed with

Axel Domeyer

Agreed on

Need for balance between internal and external capabilities

Agreements

Agreement Points

Importance of KPIs in digital government

Axel Domeyer

Audience

Singapore’s systematic approach with 15 KPIs

Saudi Arabia’s progress in digital government rankings

Importance of user satisfaction and cost-effectiveness

Focus on outcome-based KPIs rather than technology adoption

Both Axel Domeyer and audience members emphasized the importance of well-defined KPIs in measuring and improving digital government performance.

Need for balance between internal and external capabilities

Axel Domeyer

Audience

Need for balance between internal and external capabilities

Importance of business process management in digital excellence

There was agreement on the need for a balanced approach to building digital capabilities in government, combining internal expertise with external support.

Similar Viewpoints

Both Axel Domeyer and audience members recognized the challenges faced by public sector digital services, particularly in countries lacking centralized technology platforms.

Axel Domeyer

Audience

Lack of centralized technology platform in some countries

Public sector services lag behind private sector in quality

Unexpected Consensus

Difficulty in measuring user satisfaction

Axel Domeyer

Audience

Importance of user satisfaction and cost-effectiveness

Difficulty in measuring user satisfaction

While an audience member raised concerns about the difficulty of measuring user satisfaction, Axel Domeyer unexpectedly agreed by emphasizing its importance as a key performance indicator, suggesting a shared recognition of both the challenge and necessity of this metric.

Overall Assessment

Summary

The main areas of agreement centered around the importance of KPIs, the need for a balanced approach to digital capabilities, and the recognition of challenges in public sector digital services.

Consensus level

There was a moderate level of consensus among speakers, particularly on the importance of measuring and improving digital government performance. This consensus suggests a shared understanding of key challenges and potential solutions in digital government implementation, which could facilitate more targeted and effective strategies for improvement.

Differences

Different Viewpoints

Approach to digital government excellence

Axel Domeyer

Audience

Three key elements: compliance, excellence, and impact

Public sector services lag behind private sector in quality

While Axel Domeyer proposes a framework for digital government excellence, an audience member points out that public sector services still lag behind private sector in quality, suggesting that the proposed framework may not be sufficient to address the quality gap.

Unexpected Differences

Role of technology adoption in digital government excellence

Axel Domeyer

Audience

Focus on outcome-based KPIs rather than technology adoption

Importance of business process management in digital excellence

While Axel Domeyer emphasizes focusing on outcome-based KPIs rather than technology adoption, an audience member unexpectedly highlights the importance of business process management, which could be seen as a more technology-focused approach. This difference in perspective on the role of technology in digital government excellence was not explicitly addressed in the main arguments.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to achieving digital government excellence, the feasibility of measuring user satisfaction, and the role of technology adoption versus outcome-based metrics.

Difference Level

The level of disagreement appears to be moderate. While there are some differences in perspective, they do not fundamentally contradict the overall goal of improving digital government services. These disagreements highlight the complexity of implementing digital government excellence and suggest that a multifaceted approach, considering various viewpoints, may be necessary for successful implementation.

Partial Agreements

Partial Agreements

Both Axel Domeyer and the audience member agree on the importance of user satisfaction as a key performance indicator for digital government. However, they disagree on the feasibility of measuring it, with the audience member highlighting the difficulties in measurement.

Axel Domeyer

Audience

Importance of user satisfaction and cost-effectiveness

Difficulty in measuring user satisfaction

Similar Viewpoints

Both Axel Domeyer and audience members recognized the challenges faced by public sector digital services, particularly in countries lacking centralized technology platforms.

Axel Domeyer

Audience

Lack of centralized technology platform in some countries

Public sector services lag behind private sector in quality

Takeaways

Key Takeaways

A comprehensive digital government excellence framework should include compliance, excellence, and impact elements

The UK’s Digital and Data Functional Standard and Singapore’s systematic KPI approach are considered best practices

Saudi Arabia has made significant progress in digital government rankings through a performance improvement mindset

Public sector digital services generally lag behind private sector in quality across most countries

Centralized technology platforms and governance are crucial for successful digital government implementation

There should be a balance between building in-house digital capabilities and leveraging external vendors

Outcome-based KPIs focused on user satisfaction and cost-effectiveness are more valuable than technology adoption metrics

Basic digital literacy for all government employees is essential for digital excellence

Resolutions and Action Items

None identified

Unresolved Issues

How to effectively measure and standardize user satisfaction metrics across different government agencies

The optimal balance between government involvement and private sector participation in digital governance

How to address the gap in quality between public and private sector digital services

The specific technical capabilities that should be prioritized for in-house development in government agencies

Suggested Compromises

Aim for a 50-50 split between in-house capabilities and outsourced IT services in government agencies, as this aligns with industry averages

Thought Provoking Comments

I think it’s important to realize that yes, they can do quite a bit, right? So they can design and build and operate some common digital solutions for the country, which they often do. But at the end of the day, what they really do is they influence the ecosystem, right?

speaker

Axel Domeyer

reason

This comment reframes the role of central digital government agencies from direct implementers to ecosystem influencers, challenging the common perception of their function.

impact

It set the tone for the rest of the discussion by emphasizing the importance of influence and ecosystem management in digital governance rather than just direct implementation.

And what I want to argue is that it’s a good idea here to have a compliant approach, yes. So… Do entities comply with digital government standards? But then to also add several, so two further elements, right? So number one is excellence, right? So have a more detailed view of what actually constitutes the best practice in managing digital in a government entity and help agencies to achieve this best practice. And then I would also argue that as a third element, you will require an impact approach as well, right?

speaker

Axel Domeyer

reason

This comment introduces a comprehensive framework for assessing digital governance, moving beyond simple compliance to include excellence and impact.

impact

It structured the subsequent discussion around these three key elements – compliance, excellence, and impact – providing a framework for analyzing digital governance initiatives.

And the public sector services are not, in general, even 50 to 60 percent of the quality of the private sector services. So my first question is: what is the piece that is missing in the public sector services?

speaker

Audience member (Zoran Jordanoski)

reason

This question challenges the status quo and prompts a critical examination of public sector digital services compared to private sector offerings.

impact

It shifted the discussion towards a more critical analysis of public sector digital services and prompted Axel to discuss structural challenges in government digital transformation.

I think in some sense, I mean, like the good things in life, right, they kind of like stay stable over time, right? I mean, they don’t change that much. So I would expect the ones that we have right now continue to be important, right? So user satisfaction, cost, right, of investing in digital and what do you get out of it. So I think these will remain important KPIs.

speaker

Axel Domeyer

reason

This comment provides a perspective on the enduring nature of certain KPIs in digital governance, emphasizing fundamental metrics over trendy technological measures.

impact

It refocused the discussion on core, user-centric metrics rather than getting caught up in measuring adoption of specific technologies, providing a long-term view on digital governance assessment.

Overall Assessment

These key comments shaped the discussion by moving it from a basic understanding of digital governance to a more nuanced, ecosystem-focused approach. They introduced a comprehensive framework for assessment, prompted critical examination of public sector digital services, and emphasized the importance of enduring, user-centric metrics. The discussion evolved from describing digital governance to analyzing its complexities and challenges, ultimately providing a more holistic view of the subject.

Follow-up Questions

What is the piece that is missing for public services to have the same quality as private sector services?

speaker

Zoran Jordanoski

explanation

This question addresses the persistent quality gap between public and private sector digital services, which is crucial for improving government service delivery.

What is the role of soft regulations, like standards, in improving public services?

speaker

Zoran Jordanoski

explanation

Understanding the impact of non-binding guidelines could provide insights into effective ways to improve government digital services.

Do governments truly understand what users want and need?

speaker

Zoran Jordanoski

explanation

This question highlights the importance of user-centric design in government services and the potential gap between service providers and users.

What are the next batch of KPIs for digital government, given the advent of AI and other emerging technologies?

speaker

Audience member

explanation

This explores how to measure digital government progress in the context of rapidly evolving technologies, which is crucial for future planning and assessment.

How can governments effectively measure user experience and satisfaction?

speaker

Audience member

explanation

This addresses the challenge of quantifying qualitative aspects of service delivery, which is essential for improving government digital services.

What are the key digital or technical capabilities that should be focused on to propel excellence and build in-house expertise?

speaker

Audience member

explanation

This question seeks to identify the most critical skills and knowledge areas for governments to develop internally to improve their digital services.

What is the appropriate balance between government involvement and private sector participation in digital governance?

speaker

Audience member

explanation

This explores the optimal mix of public and private sector roles in digital governance, which is important for effective and efficient service delivery.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG


Session at a Glance

Summary

This discussion focused on the development of the Hamburg Declaration, an initiative aimed at promoting responsible AI for achieving the Sustainable Development Goals (SDGs). The conversation involved representatives from UNDP and the German government, along with various stakeholders. The Hamburg Declaration is part of the annual Hamburg Sustainability Conference, which seeks to bridge the gap between AI and development communities.

Key points included the need to align AI applications with SDG principles while addressing potential risks such as exclusion, environmental impact, and data privacy. The declaration aims to gather voluntary commitments from multiple stakeholders, including governments, private sector, and civil society. Participants emphasized the importance of avoiding duplication with existing AI governance efforts while focusing specifically on development contexts.

The discussion highlighted several crucial areas for consideration, including digital sovereignty, infrastructure development in Global South countries, and the need for AI governance structures in developing nations. Participants stressed the importance of including voices from affected communities and showcasing local initiatives that demonstrate responsible AI use for sustainable development.

Human rights-based approaches were suggested as a framework to address various concerns comprehensively. The organizers welcomed input from diverse perspectives and encouraged ongoing engagement through various channels, including online platforms and future consultations. The process aims to create a pragmatic, action-oriented declaration that aligns efforts to use AI responsibly in support of the SDGs.

Keypoints

Major discussion points:

– Developing a Hamburg Declaration on responsible AI for sustainable development goals (SDGs)

– Aligning AI development and use with SDG principles while addressing risks

– Engaging diverse stakeholders including governments, private sector, civil society in the process

– Focusing on concrete, implementable commitments rather than high-level principles

– Considering environmental sustainability impacts of AI infrastructure and development

Overall purpose:

The goal of the discussion was to introduce and gather input on the process of developing the Hamburg Declaration, which aims to promote responsible use of AI to advance the UN Sustainable Development Goals. The organizers sought to engage diverse stakeholders in shaping the declaration’s content and commitments.

Tone:

The tone was collaborative and open, with the organizers actively seeking input and ideas from participants. There was a sense of enthusiasm about the potential of AI for development, balanced with awareness of risks and challenges. The tone remained constructive throughout, with participants offering suggestions and the organizers expressing appreciation for the feedback.

Speakers

– YU PING CHAN: Moderator

– ROBERT OPP: Chief Digital Officer of UNDP

– NOÉMIE BÜRKL: Head of the Digitalization Unit at the Federal Ministry for Economic Cooperation and Development (BMZ) of Germany

– YASMIN AL-DOURI: Co-founder of the Responsible Technology Hub

– MAI DO: Responsible AI manager at ABUS in Hamburg

– THIAGO MORAES: Works at the Brazilian Data Protection Authority and PhD researcher at University of Brussels

Additional speakers:

– KASSIA: UK delegation to United Nations in New York

– CLAIRE: No role/title mentioned

– TRINE: Representative of the government of Denmark, based in Geneva working on human rights

Full session report

The Hamburg Declaration on Responsible AI for Sustainable Development Goals

Introduction:

This discussion focused on the development of the Hamburg Declaration, an initiative aimed at promoting responsible AI for achieving the Sustainable Development Goals (SDGs). The conversation involved representatives from UNDP, the German government, and various stakeholders, as part of the annual Hamburg Sustainability Conference’s AI track. The declaration seeks to bridge the gap between AI and development communities while gathering voluntary commitments from multiple stakeholders.

Purpose and Scope:

The primary purpose of the Hamburg Declaration is to address the intersection of AI and sustainable development, focusing on responsible use of AI for development outcomes and embedding this approach in development practice. Robert Opp, Chief Digital Officer of UNDP, emphasised the need to bridge the gap between AI and development communities. Noémie Bürkl, from the German Federal Ministry for Economic Cooperation and Development, stressed that the declaration should align with the Global Digital Compact while being more concrete and action-oriented.

A key point of agreement among speakers was the importance of not duplicating existing AI governance processes. Instead, the declaration aims to fill a specific gap in the AI for SDGs space. The scope includes addressing potential risks such as exclusion, environmental impact, and data privacy while promoting responsible AI use for sustainable development.

Process and Stakeholder Engagement:

The development of the Hamburg Declaration is designed as a voluntary, non-negotiated process open to multiple stakeholders. This approach was supported by both Opp and Bürkl, although they proposed slightly different methods for engagement. Opp emphasised the voluntary nature of the process, while Bürkl focused on utilising existing conferences and bilateral discussions for input.

There was strong agreement on the importance of including voices from developing countries and local innovators. Thiago Moraes, from the Brazilian Data Protection Authority, particularly stressed the need to showcase local sustainable technology initiatives. The organisers expressed their commitment to engaging diverse perspectives through various channels, including an online website for submitting inputs, future consultations, and existing conferences.

Key Issues to Address and Thought-Provoking Comments:

Several crucial areas for consideration were highlighted during the discussion:

1. Environmental Sustainability: Thiago Moraes raised concerns about the environmental impact of AI infrastructure, particularly in developing countries.

2. Access to AI Infrastructure: Yasmin Al-Douri, co-founder of the Responsible Technology Hub, emphasised the need to address access to AI infrastructure and hardware in developing countries, particularly in the Global South.

3. AI Governance Structures: Al-Douri stressed the importance of developing AI governance structures, especially in developing countries.

4. Responsible AI Training and Education: Al-Douri emphasised the importance of responsible AI training and education.

5. Human Rights-Based Approach: A representative from the Danish government suggested incorporating a human rights-based approach to address various concerns comprehensively.

6. Multi-stakeholder Approaches: The need for inclusive, multi-stakeholder approaches that involve those directly affected by SDGs was highlighted.

7. Showcasing Local Initiatives: Moraes suggested bringing in people from the innovation ecosystem and showcasing local sustainable technology initiatives.

8. Balancing AI for Good and Responsible AI: Al-Douri questioned whether the focus was on AI for good or on ensuring AI itself is responsible in achieving the SDGs.

9. Alignment with Global Digital Compact: A representative from the UK delegation asked about aligning the declaration with the Global Digital Compact and clarifying the level of commitments sought.

10. Holistic Approach to SDGs: An audience member emphasised the need for a holistic approach considering potential conflicts between different SDGs.

Implementation and Accountability:

The discussion emphasised the need for concrete, implementable commitments. Opp stressed the importance of balancing ambition with feasibility in making these commitments. Bürkl suggested reviewing progress at future Hamburg Sustainability Conferences. The organisers mentioned the AI-SDG compendium as an opportunity to feature initiatives and track progress.

Conclusion and Next Steps:

The organisers plan to launch a public call for inputs, put the draft declaration online for comments, and present the Hamburg Declaration at IGF 2025 in Norway. Unresolved issues include specific accountability measures and effectively including perspectives from the Global South.

The Hamburg Declaration on Responsible AI for SDGs represents an ambitious effort to promote responsible AI use in sustainable development. While there is general consensus on its importance, the discussion highlighted the need for careful consideration of various perspectives and potential challenges in its implementation.

QR codes were shared for signing up to the email list and accessing the AI-SDG compendium, providing additional resources for engagement and information.

Session Transcript

YU PING CHAN: session, where we’ll explain the process towards the Hamburg Declaration, we’ll also talk about the goals and the aims of the Hamburg Sustainability Conference, and really, this collective effort that we hope all of you will join us in, in really realizing the potential of responsible AI for the Sustainable Development Goals. I’ll start by first calling to the stage, Mr. Robert Opp, Chief Digital Officer of the UNDP, the virtual stage, I think. Robert, please.

ROBERT OPP: Okay. Hello, everyone. This is a strange way of doing a workshop with everyone on headphones. I feel a little weird. I see a few people without, okay, good. All right, just making sure everyone has headsets. Well, thank you, Yu Ping, and welcome, everyone. On behalf of UNDP, as well as our government of Germany representatives who are online, I wanted to just give a little bit of an overview of the way that we as UNDP are seeing the current, I would say, interest and expectations, in a sense, around artificial intelligence, and what we do when it comes to our work to support countries and their national development. And I think it’s fair to say that a lot of us see tremendous potential in AI for supporting the achievement of the SDGs. We already see a proliferation of experimental and sometimes scaling approaches around artificial intelligence and leveraging these technologies in support of different development work: for example, in the health space, screening for different kinds of conditions like tuberculosis and other things; helping small farmers access information on subsidies and other programs that are available, using verbal-to-text kinds of interaction with chatbots; weather risk modeling; building damage assessment; and so on and so forth. So many examples of application of AI going on right now. But of course, we all know that AI also brings with it a number of risks, whether it be the proliferation of misinformation, or issues like data privacy and protection. But there’s another risk that we also see in the space of applying AI for development. And that is the risk of exclusion, the risk of lack of representation, of bias and/or inaccuracy in systems, as well as some of the sustainability aspects around the environmental impact of these technologies.
So when it comes to the way that we as development actors, and by development actors I’m referring to international organizations like us, national donor governments, national governments themselves who are implementing their national development programs, civil society actors, and other NGOs, etc., so that whole set of players that are involved in development, we have been looking at how it is that we can improve the alignment that we have around the direction of artificial intelligence application. What I mean is, if we profess to be working toward the SDGs, we have to be mindful of certain things like the risk of exclusion, like the potential negative environmental impacts of promoting technologies that consume an enormous amount of energy, for example. And so how do we as a community come together and really align ourselves and work toward a sort of set of directions or a set of commitments that we can make together as a community moving forward? So the Hamburg Sustainability Conference was an event that was sponsored last October by the government of Germany, and featured a lot of discussions around the practice and future directions of sustainable development. My colleague from BMZ in Germany will be speaking in just a second, and she’ll also address the kind of background of the conference and things like that. But in the Hamburg Sustainability Conference, there was a track that was set aside for AI and digital. And it really looked at the different aspects of responsible application of artificial intelligence and digitalization in the development space. And we looked at a number of things, including specifically the environmental impacts and some of the other aspects. But we also took the opportunity to start convening a panel that was focused on the principles that we want to work toward for alignment around using AI for development.
And we focused those principles around the five P’s, the five principles that are part of the Agenda 2030: people, prosperity, planet, peace and partnerships. We had a very good discussion. There was a lot of interest from stakeholders. Generally in Hamburg last year, there was a high level of participation, and a very multi-stakeholder participation as well. And now we are moving forward to thinking about what it is that we can move toward in next year’s Hamburg Sustainability Conference in this space of artificial intelligence and the SDGs. And we want to continue convening these discussions around what are the areas of commitments we can make, as well as how we are collecting and gathering information on what’s out there. And to that end, one thing I just forgot to mention is that we launched at last year’s Hamburg Sustainability Conference an SDG compendium, or AI compendium, that starts to gather AI examples that have been used in the development space; in other words, trying to pull together a sense of where the practice is out there as well. The final thing I would say before turning back to Yu Ping is that we do see all of this as a direct part of the follow-up to the Global Digital Compact. Paragraphs 53 and 54 of the Global Digital Compact talk about the application of AI to the Sustainable Development Goals and the promise that they put forward. And so this effort is really seen as taking the next step in terms of collective commitments we can make that are more granular instead of high level, saying: what are the things we can align around to really work together as a global community when it comes to pursuing the SDGs with artificial intelligence? With that, I’ll turn back to you, Yu Ping.

YU PING CHAN: Thank you, Rob. And as Robert said, we’re really looking to this session to be a little bit more of an engagement with you, to really think through what we could have as part of the work towards responsible AI for the SDGs. So not just for the Hamburg Declaration, but also around convening these types of discussions at the Hamburg Sustainability Conference, which, as Rob says, will be a unique opportunity to make sure that development practitioners who are gathering at Hamburg will also take into account what perhaps technologists and internet governance experts such as yourselves, convening here for a technology conference, really need to keep in mind. How do we bring these two communities together? So now, before we really go to hearing from you, we would like to invite our second speaker who is online, Ms. Noémie Bürkl, who is head of the Digitalization Unit at the Federal Ministry for Economic Cooperation and Development, the BMZ of Germany, to speak. Naomi, please.

NOÉMIE BÜRKL: Yes, thank you so much. Thank you, Yu Ping and Rob, and welcome to all of you from Germany. It’s a pity I can’t be with you today, but the technical tools we have today allow me to be with you, so let’s use them like that. A lot of what I wanted to mention has already been said, and for the sake of time, I don’t want to repeat it, but I will confirm that from a German government perspective, we do see artificial intelligence as a driver for achieving the SDGs, or at least for accelerating the implementation of the SDGs. Among the sectors that we could look at, agriculture shows that AI can enable analysis of climate and crop data to adapt to climate change more effectively. Health has already been mentioned by Rob, where AI can distribute health information during epidemics, for example, and in education, of course, we see that AI can help personalize learning. This is why the Ministry for Development Cooperation has been engaged since 2019, actually, as a partner to support the use of AI in this respect. It has potential, but also the risks that have already been mentioned, talking about, for example, water and electricity consumption, discrimination or disinformation. This is why we want to really focus on how to use AI in a responsible way to ensure that AI serves people and planet. On the HSC, the Hamburg Sustainability Conference, maybe just to say very broadly: it is an initiative that facilitates an exchange based on mutual trust and partnership between leading international minds from politics, international organizations, the private sector, academia and civil society on those structural issues that we see. And this is why I think it is very good to know it is not just a one-off conference; it is an all-year and multi-year process.
And we really want to take the time to discuss with you what we can do to underscore this need of commitment that has been mentioned, linking also to implementing the efforts underlined in the Global Digital Compact as well. So yes, the SDG Compendium has already been mentioned. A warm welcome to you to participate in that, to look at that, to contribute into that process. We have discussed principles to see what we mean when we talk about a responsible use of AI. And we really want this to be an inclusive and collaborative effort. And so thank you very much UNDP to convene all the minds that can contribute to that. And I’m really looking forward to your support, to your engagement and to using AI in a responsible way to have a boost for the SDGs. Thank you.

YU PING CHAN: Thank you so much, Naomi. And I really want to welcome new colleagues that just came into the room and say that we’re looking forward to having an engagement with you, not just through this particular meeting, but also throughout the entire process towards the Hamburg Declaration. And to that end, I’ve actually circulated some documents, a copy of some of the background around the Hamburg Sustainability Conference that encapsulates what Rob and Naomi have just briefed, but we’re also happy to provide more copies, so if you have any questions, please feel free to reach out to me. I’ve also sent around an email list, and so if you could leave your email addresses on it if you’d like to stay and engage with this process as we develop the Hamburg Declaration. We also have a couple of links online that I think I’ve put there, but if not, I’m happy to repeat the links later so that you can sign up online as well. So basically, starting with this workshop and moving towards the next Hamburg Sustainability Conference, which will be in early June 2025, we will be convening a number of both online as well as in-person consultations on the content of the Hamburg Declaration. It will be, as Rob has explained, really thinking about what we together as the global community can come together to think about: what are commitments or areas of action that we think need to be committed to or agreed on so that we can realize responsible AI for the SDGs? And it’s really a very iterative process. We don’t have a draft in mind, we don’t really have areas that we want to focus on beyond the guiding principles of the Agenda 2030, and so it really would be shaped by your contributions and inputs as well. We’ll also have an online website where you can submit such inputs in writing if you or your organization would like to contribute something towards the thinking process, as well as possibly even convene consultations of your own.
So we’ll have some background material that you can use to also convene these types of informal conversations around the content of responsible AI for the SDGs. We also want to emphasize it’s not really a one-off, right? It’s not that we would necessarily come up with something at the Hamburg Conference in June of next year and end there, because Hamburg, as Naomi has explained, will be an annual meeting. There will be an opportunity to continuously reflect back on these areas of responsible AI for the SDGs. We do think that it will be the start of a continuing conversation that will be multi-stakeholder in nature, hoping to bring in these commitments not just from the private sector but also from donor institutions, governments and development actors as well, towards AI for the SDGs. So having started with that point, I think what I’ll do first is maybe open up the floor to any questions that colleagues might have around Hamburg, the background, the Declaration, before we dive right into the content. Would there be any questions, both online and offline? I also want to welcome, I think, about 20 colleagues online as well. I see a couple of questions in there about the content of the Declaration and maybe some of the specific areas that we had discussed, so I’ll leave those for a little bit later, but I’ll just start with any questions around the process, the Hamburg Sustainability Conference that happened just this October, before we open up into the substance itself. Ah, please, come to the standing mic and introduce yourself.

YASMIN AL-DOURI: Okay, now I can hear myself as well. Hi, my name is Yasmin Al-Douri, I’m the co-founder of the Responsible Technology Hub, and I have maybe a very general question. When we talk about responsible AI for the SDGs, are we talking about AI for good? Are we talking about AI that is used to get to the SDGs, or are we talking about AI that needs to be responsible to get to the SDGs? They are two different things. So this is what I was wondering earlier also in the talks.

ROBERT OPP: Noemi might have a take on this as well, but if I’m understanding where you’re coming from, this is about responsible use of AI, which for me includes ensuring that AI systems themselves are responsible. This is the distinction I think we’re making: you could utilize AI for a development end or output of some kind, but, let’s take a concrete example, let’s say that we think that AI could revolutionize the education platforms in certain countries, and so we invest a lot in creating a lot of compute power and extending AI systems to students, etc. But if we’re not mindful of the sustainability footprint of those AI systems, we’re actually creating another problem while we’re trying to fix one. And so it’s actually about both of those. It’s about utilizing AI for development outcomes, but doing that in a responsible way. So I hope that answers the question.

YU PING CHAN: Noemi, if you might want to come in on that as well.

NOÉMIE BÜRKL: No, I would support that. And I would add the dimension, though this is actually what you just said, Rob, that when we do believe that AI can actually contribute to a positive development outcome in one area, we have to make sure that we also see at the same time the potential risks and be mindful of those as well. On why we bring this topic into the HSC, maybe to explain a little bit overall: it is not per se an AI conference. It is a conference on how to promote the achievement of the SDGs by 2030, which is a very difficult task. And this is where we see the role of AI and digital, both in the positive and negative sense. So you can say AI for good in a way, but yes, I think it is more holistic than that. Thank you. Always focused on development aspects, though.

AUDIENCE: Hi, colleagues. Kassia, from the UK delegation to the United Nations in New York. From the perspective of a GDC negotiator, have you thought about aligning the language of the declaration with the GDC more clearly, just for consistency? I’m just asking this question to maybe stir the pot a bit. And the second question is: do you know what level of commitments you want to achieve in the end? What’s the ultimate goal in terms of commitment?

YU PING CHAN: Pardon the passing of mics over here. Before I give the floor back to Rob and Naomi to respond to Kassia’s question, I think this is linked to another question that I’m seeing in the chat from Monica, which is, how is the Hamburg annual conference linked to GDC AI follow-up processes? Does it stand apart? And how is it linked to the IGF dynamic coalition on data and AI? I think very quickly before I turn it over to them, I would say we would very much welcome a link between the IGF dynamic coalition on data and AI, if the dynamic coalition wanted to think about possible inputs that they could actually submit towards this process, as well as to perhaps, we would very much value the network and the coalition being part of the consultation forward. So over to Rob and Naomi.

ROBERT OPP: Okay. Well, thanks for the questions on the link with the GDC, and I was just pulling up my copy of the GDC right now. If I understand your question correctly, we see this process as contributing overall to the implementation of the GDC, or as being in the spirit of the GDC. For sure, I expect elements that are mentioned in the GDC to come up in this process. The GDC talks about the importance of capacity building, and it talks about a lot of other aspects around technology in general, and then the AI pieces themselves. But in terms of direct linkages with, for example, the scientific panel that is proposed around AI and some of those things as well, we don’t know yet, because those mechanisms are not in place. I suspect that there could eventually be a link. There’s also the proposal in the GDC around the global annual dialogue on AI governance. Again, that may form a part, but we’re not trying to address the issue of international AI governance with this process, as Noemi also just reminded us. What we see is that there’s a whole set of summits going on globally around AI safety and AI governance and so on: there was Bletchley Park, there was one in Korea, there will be another one in Paris, there will be one in Kigali talking about AI in Africa in April, and so on and so forth. The Hamburg Sustainability Conference is not trying to be one of those. The Hamburg Sustainability Conference is a conference on development, which has an AI track. And so we’re trying to ensure that we don’t just keep this discussion of AI and technology in those AI- and technology-focused conferences, but actually we’re embedding it in our development practice, because, to be frank, that’s where the big money flows when it comes to overseas development assistance and many other forms of bilateral and multilateral cooperation.
And so we want to be sure that the practice of development is infused with this responsible AI utilization and application for the SDGs. So I hope that answers it.

NOÉMIE BÜRKL: I think Rob said it very nicely, and maybe also because I just saw another question in the chat: why do we need another declaration? We want to make sure that there is no duplication with other processes. When we came upon these issues, what really stands out for us is that we have, on the one side, and this was just mentioned, the SDG community discussing, and we have the AI community discussing on the other side. And what we really want to focus on is the implementation aspects of those paragraphs of the GDC that were mentioned. But also to go a step further, because what we do bring together here are the players from the private sector, from academia, civil society, et cetera. And what we see is that we do not have those formalized discussions, which is good, because we want to move forward on really this particular issue and see how far it takes us. And I think that some actors are really quite willing to participate in this process, and we really need them as well. So yes, we are mindful not to be duplicating other efforts.

ROBERT OPP: Noemi, you just also reminded me of a key aspect of this declaration, which is that it is not intended to be a negotiated process, meaning we’re not trying to get universal adoption here as such. We hope that everyone will come and commit to it, but it’s going to be a voluntary kind of thing, not a negotiated process like the GDC or some of these other intergovernmental or universal kinds of agreements.

YU PING CHAN: And again, on that particular point, even though the AI space is very crowded and the UN space is very crowded, we do think that there is a gap when it comes to this idea of AI for the SDGs, particularly from a development perspective. So that is the gap that we’re looking to fill. And as a United Nations development agency practicing in this field, we have noticed that there needs to be that coming together of these types of communities. I saw a comment or a question over there, please.

MAI DO: Hello, can you hear me? Yeah, thank you. So, Mai Do, I’m a responsible AI manager at Airbus in Hamburg. So I work for the civil part of Airbus. And I was very interested in the aspect of the involvement of the different stakeholders. As you mentioned before, it’s a voluntary commitment, right? So how do you envision bringing those different stakeholders together, from the private sector to civil society, to ensure building a resilient infrastructure, which is one of the goals you’re trying to promote? How do you envision it, and how do you go about it? And lastly, how do you think of holding these different stakeholders accountable to ensure these goals? Thank you.

YU PING CHAN: Thank you for the question. I think maybe given that this is a German stakeholder and you’re coming from the perspective of Hamburg, I could reverse the order and ask if Naomi could take that question first, followed by Rob.

NOÉMIE BÜRKL: Well, thank you very much. What we really want to do is, we’re kicking off the process, right? So we really want to go with those who feel that they can really make a difference in that. So we do want to involve all those interested. And in this room, and also online, we have those minds that I mentioned before, and I think that’s really important. That is exactly what we’re trying to aim for. It will be a process where we will also use those conferences mentioned before, in Kigali and in Paris, et cetera, to involve all those partners. But we will also have discussions on a bilateral basis, for example with firms that have already shown their interest, SAP, et cetera, and others in the private sector, because we really think that there is a huge potential there. And of course, civil society and academia. I know that, being involved in those GDC discussions, there may be some hesitancy about why we need this. But we really want to become more concrete in terms of how we can really have a major boost for these SDGs. This is actually a very narrow approach, and we will not be able to tackle all the SDGs in all the areas. We are looking at five principles, and we will also be looking at the ideas that we have, and we hope to be as ambitious as we can. But we will have to see how the process goes in the coming months to see what we can agree on. After that, it will depend on how we ensure accountability: we will keep looking back, at the next HSCs after that, at how far we got with these commitments, so that it is beyond mere agenda setting, and at what comes out of that. We do believe that those who are part of the process will also be those who are convinced and willing to participate. So I’m still very hopeful that this is a very good approach. And I want to underline again, we want to be very concrete as well. Thank you.

ROBERT OPP: I don’t have a lot to add to that, except to say that at the Hamburg Sustainability Conference, I’m not sure if you were there or not, but at the discussion that kicked all of this off, we had the head of sustainability from SAP there, and in side discussions with him, he was saying this would be of interest to his company to align with, if those principles and the eventual commitments make sense and so on. And we’re still kind of designing what this actually looks like. We very much welcome the private sector to sign on to the commitments as well. And there will likely be a number of ways to do that. So more to come on that.

YU PING CHAN: And so moving on to really the concrete part of it, if we could move on maybe from the questions around process, though we welcome them subsequently as well, and get into the meat of what the declaration should look like. And here again, we’re just really looking for ideas and inputs as to what you think are critical issues in the AI for SDGs space. As a development practitioner, what do you think about when you think about AI? What does it mean to use AI responsibly? And conversely, from a technologist or AI scientist perspective, if you’re looking at how AI is being used in the development space right now, what are your concerns? What are your thoughts on the risks? What do you see as the opportunities? So let’s have a little bit more of that open discussion that Rob was speaking to earlier on, in this thought that if you could really curate a conversation or create an opportunity for the private sector, the multi-stakeholders, the governments to come together around these issues, what do you think should be top of mind for them? Anyone in the room would like to take a first stab at this? If not, while you are ruminating on that, I actually have a question from the online chat already, and I will direct this at both Rob and Naomi. The question here is about whether the Hamburg Declaration will be dealing with issues around artificial intelligence-based weapons, where there has been a concern over the use of such weapons in the ongoing situation in Gaza. And so what could be the role of the UN IGF and the United Nations in the fight against the weaponization of artificial intelligence? I would turn that over to Rob and Naomi.

ROBERT OPP: Okay, so the issue of weaponization of AI will not be tackled directly by this process, we don’t foresee. There are elements of the peace principle that we need to respect in terms of the way that AI is applied, so that it doesn’t promote divisions among people and so on. But because there are other parts of the United Nations multilateral system, like disarmament affairs and things like that, that are dealing with some of the weaponization issues, we don’t feel that this is the process best placed to actually do that. So I think we need to acknowledge it in some way, but this is likely not meant to be the platform to really address those issues, which are being taken up elsewhere.

NOÉMIE BÜRKL: Yes, I agree, absolutely. Because I think there is already so much to do on these issues that I mentioned before. Also, people, prosperity, planet, etc. We don’t want to duplicate processes that are being discussed elsewhere. And I think there’s another question on presenting the declaration at the IGF 2025 in Norway. I don’t see why we shouldn’t do that. That’s also interesting to look at, because we really want to link the different processes. But I think it’s good to get all the ideas here in this room and to shape the process together. This is what this is really about.

YU PING CHAN: Naomi, can I also ask you to answer the second part of Dennis’ question online in the chat, which is if the conference is by invitation only?

NOÉMIE BÜRKL: Yes, it is by invitation, but we of course have a say in who is invited. I think what we will do is to look at the process, who is involved and who would like to participate. I think we can look at that flexibly in the coming months.

YU PING CHAN: Any other questions from here? Yes. Please introduce yourself.

THIAGO MORAES: Thanks. My name is Thiago Moraes. I work at the Brazilian Data Protection Authority and also as a PhD researcher at the University of Brussels. Responsible innovation in AI and other emerging technologies has been part of my research topic. The more I look into that, the more I sometimes see a bit of a dissonance. Wearing a bit of the government hat: we’re discussing a lot these days about digital sovereignty and how we have to raise infrastructure capacity, especially in global majority countries that usually face more challenges there. We just participated this year as the host of the G20 in Brazil, where the Digital Economy Working Group was discussing a lot the importance of raising the capacity level. But then, and here now is more the academic hat that I want to bring in, I miss a bit of the part about what is actually being done on the sustainability side. When we are so concerned about digital sovereignty and actually creating more infrastructure, better data centers, just having more data power, we don’t always think of the other side of the balance and how we are actually promoting that. For sure, green energy and green data centers, if that concept really exists, are definitely part of it, but so are the broader environmental impacts. Brazil, for example, uses a lot of energy from water sources, which is supposed to be cleaner, for sure, but there are still a lot of environmental impacts that we sometimes create, and this should be part of the discussion. So if several countries are now working to build better data centers and they don’t add this to the equation, we’ll have a lot of trouble in the upcoming years. I know it’s a voluntary declaration, but this somehow should be embedded in the discussion. I think that’s my suggestion.

YU PING CHAN: Thank you, and I think that’s an important suggestion, and we will definitely look to take that up under the planet part of the declaration. But really, indeed, as you say, this environmental sustainability, as we look to build out compute, data and the AI revolution, is really critical. Responses, Naomi, Rob?

ROBERT OPP: No, just to say absolutely yes. One of the challenges we are going to need to address over the next few months as we do this is what those commitments can look like, and that’s also why we welcome the participation of many voices, to help us understand what would actually be feasible and implementable ways of putting those kinds of commitments in. Because I don’t think it’s quite as simple as just saying, as you mentioned, okay, data centers have to be carbon neutral or something, right? There are other aspects of the issue that we need to explore, and then eventually balance the perfect with the feasible, what can actually be implemented and what people can commit to. So I completely agree. And since you’re both a PhD researcher and at a government authority, we would definitely welcome both sides: the challenge from academia together with the implementation necessity, or what governments would actually be able to do.

YU PING CHAN: Yes there was a question over there and then I think another one over here.

AUDIENCE: I hope it's working. Good. This is less of a question and more of an answer to the question you asked. I work in the field of responsible AI. I used to work on it for big corporations, big tech, before I became one of the co-founders of the Responsible Tech Hub in Munich, where we focus on these topics specifically from a youth perspective. There are a couple of things I deem super important when we talk about the SDGs and using AI to harness them. The first is: which SDGs are you focusing on? Only after you really define which SDGs you focus on can you go into the question of who has access to the infrastructure and who has access to the hardware. As long as these questions are not answered, there's no way we can include, for example, the global south, or sub-Saharan Africa if we focus on Africa. That's the one thing. The other thing, if we talk about responsible AI, is that training is the number one aspect, not only in Germany right now but generally in the EU. Yes, we have the EU AI Act, for example, but there is a lot of governance still lacking. I was just at a session where we discussed AI governance structures for the Middle East, which barely even exist, so there's still a lot of room to talk about AI governance in different countries, specifically in developing countries. This has to be set as well, and then there needs to be training for those who are actually developing the AI and those who are deploying it. There are a lot of resources out there, a lot of institutes, like the Alan Turing Institute and the Tom think tank, which I also represent in some ways. There are a lot of academic resources on AI impact assessments, for example.
So that already exists, but it's super important to keep in mind that if we talk about the SDGs, we always have to include those who are directly affected by them, and those are mostly the ones who don't have access to AI, to the training, and to the base, which is AI governance. So these multi-stakeholder approaches have to actually happen first, I believe, before we can even set up trainings for them.

YU PING CHAN: More comments in the room? These are incredibly helpful.

AUDIENCE: Hi, I'm Claire, and I wanted to ask a question which goes in a similar direction. The point I found most interesting is that point of conflict between SDGs, maybe. If I look at the document, you ask for input on specific areas, but I don't see the representation of a more general point: if I want to promote a certain goal, I also have to incorporate others. Especially if you look at AI, most of the use cases are obviously based on data, and that is probably the area where we are lacking most in responsibility and in humanity. So my question is whether it will also be part of the process to look at the development point from a more holistic point of view and incorporate those SDGs as well.

YU PING CHAN: I think that's a great point. And yes, for convenience's sake, we did split into those five Ps, but we do expect there will have to be some kind of chapeau, as you say, looking at AI more generally and maybe touching on data, which would then cut across all of this. So thank you for that. I also want to say, really, we appreciate those of you who are looking at this from your practitioner, but also expert, perspective. So if you could make sure to pass me your contact details and card, or sign up online to the website and the email list, we really want to stay in touch with you and have you as part of the process. Rob, Noémie, are there any other questions in the room? Over here, please. I think after this and one more comment, we'll turn it back to the panellists very quickly and then come back to comments.

AUDIENCE: Hi, and apologies in advance, I'm not quite sure if this will turn into a comment or a question. My name is Trine, I'm a representative of the government of Denmark, but I'm normally based in Geneva working on human rights. So I'm actually neither a development nor an AI practitioner. But I was inspired by your talk about not duplicating, not reinventing the wheel, but actually making sure there's complementarity. Of course, in the human rights field, we also work a lot on AI and mostly the negative consequences of it. I think a human rights based approach to whatever you are doing, be it AI or the SDGs, is very helpful in that sense, because if you have a human rights based approach, you don't only cover the very obvious elements such as discrimination and AI bias, if you indeed incorporate it in the design, development and deployment phases, so when you talk to the actors as they develop the products, and I think that's also partly where the education comes in, those communities talking to each other. You also have the right to health, and the right to a clean, healthy and sustainable environment. So the human rights framework is not only respected and accepted by all states, it's actually also very well developed and well versed. So I think this probably turned into a comment: that would be something to look at. Thanks.

YU PING CHAN: More comments in the room, or from our online colleagues and participants? Again, it doesn't have to be focused, per se, on the Hamburg process or the declaration. Maybe just: what do you worry about the most when you look at the use of AI today? We're particularly interested in perspectives from developing countries and the global majority. A follow-up.

THIAGO MORAES: Yeah, OK. It's always nicer when we have more perspectives. Just one addition: another thing that I think could be really interesting for any conference, not only this one, when we want more concrete results. Of course, for a policy-oriented conference, in the end you try to come out with a statement. It's the first time I'm hearing about the conference, so maybe you already do this, but one thing I miss in conferences in general, like the IGF, for example, is having more showcases of local initiatives that are actually making this kind of difference. For example, why not bring in some people who are actually using a certain type of technology in a smart, clean, sustainable way and achieving results, even if just at a local level, in some places? Also bring in more people from the innovation ecosystem. Nowadays we see a lot of different initiatives, like sandboxes, innovation hubs and experimentation facilities, that are also trying to bring more of this discussion of responsibility in AI development, or sustainability, et cetera. Usually these are small use cases, and for them to become scalable, we have to look at them more closely. So if this is not already done, maybe it's something that could be nice to have during the conference.

YU PING CHAN: We think that's a great idea, and we look to our German colleagues, who are actually supporting the organization of the conference. I would also say, I think we mentioned that we launched the AI-SDG compendium, which could be an opportunity to feature these initiatives. And from the UNDP perspective, because we are present in so many countries and, as Sergio said, are really looking for globally scalable initiatives in this area that can have an impact across countries, please don't hesitate to reach out to us as well.

ROBERT OPP: Yeah, maybe I can just add to that by saying that, as Yu Ping said, we have representation in all countries across Africa and many others, 170 countries in total. In many of those places we are actually tied into the local innovation ecosystem. There have been a couple of instances recently where we have been forming networks of African innovators in particular and featuring their participation in global conferences and discussions, which is very interesting, because you need those voices around the table. Also, at the Hamburg conference last year, we brought an indigenous activist from Chad to discuss, from her perspective, what is actually happening on the ground, what is touching people there, or indeed where the rollout of technology and AI is not representing them well. So, just to say, I completely agree with your point. Conferences, especially those based in the global north, often have trouble getting the representation we need to have the right discussion around the table. But that's also another reason why the IGF and some of these multi-stakeholder forums are important to have as part of our consultation process.

YU PING CHAN: We're running out of time; I see a reminder. I just really want to give an opportunity to anybody who still has comments or suggestions. Again, recognizing this is the first time we are opening this up, we welcome future contributions. If any of the dynamic coalitions or the youth or regional IGFs want to take up the Hamburg Declaration, or even the conversation around responsible AI for the SDGs, please let us know. Reach out to us; we'd be more than delighted. Any last comments from those online, offline, here in the room? Comments, observations, and so on. I'll also ask my colleague Marie to share a QR code for you to scan to stay updated with us at the very end. So maybe Noémie first, then Rob. Noémie?

NOÉMIE BÜRKL: Well, thank you very much to all of you for your good questions, good ideas. This is exactly what we were going for. We hope you will be or stay engaged. We’re very excited about the future months to come. Please do also use the online possibilities to reach out, as Yuping just mentioned, and yes, looking forward to your ideas. Thank you.

ROBERT OPP: Similarly, from my side, just to thank you for participation. Thank you for the comments. It is exactly what we had hoped that we would get out of this session. It is a good reminder about the kind of importance of collective brain power. So we want to have something good. We want to pressure test it from a lot of angles. It’s not going to be perfect. We know that from the beginning. But what we want is something pragmatic because the practice is evolving so quickly out there that we want to try to stay ahead as much as we can and start to align our actions, our commitments, so that we really are making sure that AI is used in the right direction and for the actual support of people, putting people and their rights at the centre. So that’s just a thanks from our side for all these good ideas and comments.

YU PING CHAN: Share screen very quickly. Marie, you should have permission; these are the QR codes for you to be able to sign up to the email list. I think, Marie, you need to go full screen. There we go. So that's www.bmz-digital.global.en. And here we also have the AI-SDG compendium that I mentioned, where we also welcome initiatives, especially at the global level, where you feel they fit this idea of responsible AI for the SDGs. We really look forward to keeping in touch with all of you and taking into account your views and perspectives. We'll also have a public call for inputs later on, and we'll put the declaration online so that we can take comments. We're really looking forward to this being an engaging process, and to thinking about how the IGF can contribute not just here but also to other global processes. We look forward to working with all of you, and thank you again for sharing your time with us today.

R

ROBERT OPP

Speech speed

142 words per minute

Speech length

2303 words

Speech time

971 seconds

Addressing the gap between AI and development communities

Explanation

Robert Opp highlights the need to bridge the gap between AI and development communities. He emphasizes the importance of embedding AI discussions in development practice rather than keeping them isolated in technology-focused conferences.

Evidence

Hamburg Sustainability Conference is not trying to be one of those AI-focused conferences. It is a conference on development, which has an AI track.

Major Discussion Point

Purpose and Scope of the Hamburg Declaration on Responsible AI for SDGs

Differed with

NOÉMIE BÜRKL

Differed on

Scope of the Hamburg Declaration

Focusing on responsible use of AI for development outcomes

Explanation

Opp stresses the importance of using AI responsibly for development outcomes. He emphasizes the need to consider both the potential benefits and risks of AI in development contexts.

Evidence

Example of AI revolutionizing education platforms while being mindful of the sustainability footprint of those AI systems.

Major Discussion Point

Purpose and Scope of the Hamburg Declaration on Responsible AI for SDGs

Agreed with

NOÉMIE BÜRKL

Agreed on

Focusing on responsible use of AI for development outcomes

Voluntary, non-negotiated process open to multiple stakeholders

Explanation

Opp explains that the Hamburg Declaration is intended to be a voluntary commitment rather than a negotiated process. The aim is to encourage broad participation from various stakeholders without requiring universal adoption.

Major Discussion Point

Process and Stakeholder Engagement for the Declaration

Agreed with

NOÉMIE BÜRKL

THIAGO MORAES

Agreed on

Importance of multi-stakeholder engagement

Making concrete, implementable commitments

Explanation

Opp emphasizes the need for concrete, implementable commitments in the Hamburg Declaration. He stresses the importance of balancing ambition with feasibility in the commitments made.

Major Discussion Point

Implementation and Accountability

Balancing ambition with feasibility

Explanation

Opp highlights the importance of finding a balance between ambitious goals and what is realistically achievable. He suggests that the commitments in the declaration need to be both impactful and implementable.

Major Discussion Point

Implementation and Accountability

N

NOÉMIE BÜRKL

Speech speed

139 words per minute

Speech length

1409 words

Speech time

606 seconds

Aligning with the Global Digital Compact while being more concrete

Explanation

Bürkl emphasizes that the Hamburg Declaration aims to align with the Global Digital Compact while providing more concrete actions. The focus is on implementation aspects of the GDC paragraphs related to AI and SDGs.

Major Discussion Point

Purpose and Scope of the Hamburg Declaration on Responsible AI for SDGs

Agreed with

ROBERT OPP

Agreed on

Focusing on responsible use of AI for development outcomes

Differed with

ROBERT OPP

Differed on

Scope of the Hamburg Declaration

Not duplicating other AI governance processes

Explanation

Bürkl stresses that the Hamburg Declaration is not intended to duplicate existing AI governance processes. Instead, it aims to fill a gap by focusing specifically on AI for SDGs from a development perspective.

Major Discussion Point

Purpose and Scope of the Hamburg Declaration on Responsible AI for SDGs

Agreed with

ROBERT OPP

Agreed on

Not duplicating existing AI governance processes

Utilizing existing conferences and bilateral discussions for input

Explanation

Bürkl outlines the strategy for gathering input for the Hamburg Declaration. This includes leveraging existing conferences and engaging in bilateral discussions with interested parties.

Evidence

Mentions using conferences in Kigali and Paris, as well as bilateral discussions with firms like SAP.

Major Discussion Point

Process and Stakeholder Engagement for the Declaration

Agreed with

ROBERT OPP

THIAGO MORAES

Agreed on

Importance of multi-stakeholder engagement

Reviewing progress at future Hamburg Sustainability Conferences

Explanation

Bürkl explains that future Hamburg Sustainability Conferences will be used to review progress on the commitments made in the declaration. This approach aims to ensure ongoing accountability and progress beyond mere agenda-setting.

Major Discussion Point

Implementation and Accountability

T

THIAGO MORAES

Speech speed

128 words per minute

Speech length

611 words

Speech time

284 seconds

Environmental sustainability of AI infrastructure

Explanation

Moraes highlights the importance of considering the environmental impact of AI infrastructure. He emphasizes the need to balance digital sovereignty with sustainability concerns.

Evidence

Mentions the environmental impacts of building data centers, even when using renewable energy sources.

Major Discussion Point

Key Issues to Address in the Declaration

Balancing digital sovereignty with sustainability

Explanation

Moraes points out the potential conflict between efforts to build digital sovereignty and environmental sustainability. He suggests that this balance should be a key consideration in the declaration.

Evidence

Refers to discussions in the Digital Economy Working Group about raising capacity levels in global majority countries.

Major Discussion Point

Key Issues to Address in the Declaration

Including voices from developing countries and local innovators

Explanation

Moraes suggests including more perspectives from developing countries and local innovators in the conference. He emphasizes the importance of showcasing local initiatives that are making a difference.

Major Discussion Point

Process and Stakeholder Engagement for the Declaration

Agreed with

ROBERT OPP

NOÉMIE BÜRKL

Agreed on

Importance of multi-stakeholder engagement

Showcasing local sustainable technology initiatives

Explanation

Moraes proposes showcasing local initiatives that demonstrate sustainable use of technology. He suggests this could help identify scalable solutions and bring more voices from the innovation ecosystem into the discussion.

Evidence

Mentions examples like sandboxes, innovation hubs, and experimentation facilities.

Major Discussion Point

Process and Stakeholder Engagement for the Declaration

Y

YASMIN AL-DOURI

Speech speed

174 words per minute

Speech length

93 words

Speech time

31 seconds

Access to AI infrastructure and hardware in developing countries

Explanation

Al-Douri emphasizes the importance of addressing access to AI infrastructure and hardware in developing countries. She suggests this is a crucial first step before other aspects of responsible AI can be addressed.

Major Discussion Point

Key Issues to Address in the Declaration

AI governance structures, especially in developing countries

Explanation

Al-Douri highlights the need for AI governance structures, particularly in developing countries. She notes that this is an area where there is still significant work to be done.

Evidence

Mentions recent discussions on AI governance structures for the Middle East.

Major Discussion Point

Key Issues to Address in the Declaration

Responsible AI training and education

Explanation

Al-Douri stresses the importance of training for those developing and deploying AI. She suggests that this is a critical component of responsible AI implementation.

Evidence

Mentions existing resources from institutions like the Alan Turing Institute and the Tom think tank.

Major Discussion Point

Key Issues to Address in the Declaration

M

MAI DO

Speech speed

145 words per minute

Speech length

124 words

Speech time

51 seconds

Building a resilient multi-stakeholder infrastructure

Explanation

Do inquires about the strategy for bringing together different stakeholders and ensuring their commitment to building a resilient infrastructure. She emphasizes the importance of accountability in this process.

Major Discussion Point

Implementation and Accountability

U

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Incorporating a human rights-based approach

Explanation

The speaker suggests incorporating a human rights-based approach in the Hamburg Declaration. They argue that this approach can help address various issues including discrimination, AI bias, and environmental concerns.

Evidence

Mentions that the human rights framework is well-developed, accepted by all states, and covers various relevant rights such as the right to health and the right to a clean environment.

Major Discussion Point

Purpose and Scope of the Hamburg Declaration on Responsible AI for SDGs

Agreements

Agreement Points

Focusing on responsible use of AI for development outcomes

ROBERT OPP

NOÉMIE BÜRKL

Focusing on responsible use of AI for development outcomes

Aligning with the Global Digital Compact while being more concrete

Both speakers emphasize the importance of using AI responsibly for development outcomes, aligning with existing frameworks while providing more concrete actions.

Not duplicating existing AI governance processes

ROBERT OPP

NOÉMIE BÜRKL

Voluntary, non-negotiated process open to multiple stakeholders

Not duplicating other AI governance processes

Both speakers stress that the Hamburg Declaration is not intended to duplicate existing AI governance processes, but rather to fill a gap in the AI for SDGs space.

Importance of multi-stakeholder engagement

ROBERT OPP

NOÉMIE BÜRKL

THIAGO MORAES

Voluntary, non-negotiated process open to multiple stakeholders

Utilizing existing conferences and bilateral discussions for input

Including voices from developing countries and local innovators

All three speakers emphasize the importance of engaging various stakeholders in the process of developing the Hamburg Declaration.

Similar Viewpoints

Both speakers highlight the importance of considering infrastructure issues in developing countries, including environmental sustainability and access to AI hardware.

THIAGO MORAES

YASMIN AL-DOURI

Environmental sustainability of AI infrastructure

Access to AI infrastructure and hardware in developing countries

Both speakers emphasize the need for concrete, implementable commitments and ongoing review of progress in future conferences.

ROBERT OPP

NOÉMIE BÜRKL

Making concrete, implementable commitments

Reviewing progress at future Hamburg Sustainability Conferences

Unexpected Consensus

Importance of local initiatives and voices

THIAGO MORAES

ROBERT OPP

Showcasing local sustainable technology initiatives

Voluntary, non-negotiated process open to multiple stakeholders

Despite coming from different perspectives, both speakers unexpectedly agree on the importance of including local initiatives and voices in the process, particularly from developing countries.

Overall Assessment

Summary

The main areas of agreement include the need for responsible AI use in development, avoiding duplication of existing processes, multi-stakeholder engagement, and the importance of concrete, implementable commitments.

Consensus level

There is a moderate to high level of consensus among the speakers on the overall approach and key principles of the Hamburg Declaration. This consensus suggests a strong foundation for developing a meaningful and impactful declaration on responsible AI for SDGs. However, there are still areas where more detailed discussions and alignment may be needed, particularly on specific implementation strategies and addressing the unique challenges faced by developing countries.

Differences

Different Viewpoints

Scope of the Hamburg Declaration

ROBERT OPP

NOÉMIE BÜRKL

Addressing the gap between AI and development communities

Aligning with the Global Digital Compact while being more concrete

While both speakers agree on the need for the Hamburg Declaration, they emphasize different aspects of its scope. Opp focuses on bridging the gap between AI and development communities, while Bürkl stresses alignment with the Global Digital Compact and providing more concrete actions.

Unexpected Differences

Environmental sustainability of AI infrastructure

THIAGO MORAES

ROBERT OPP

NOÉMIE BÜRKL

Environmental sustainability of AI infrastructure

Focusing on responsible use of AI for development outcomes

Not duplicating other AI governance processes

While Moraes raises concerns about the environmental impact of AI infrastructure, this issue is not explicitly addressed by Opp or Bürkl in their main arguments. This unexpected difference highlights a potential gap in the current focus of the Hamburg Declaration.

Overall Assessment

Summary

The main areas of disagreement revolve around the specific focus and scope of the Hamburg Declaration, the methods of stakeholder engagement, and the extent to which environmental sustainability should be addressed.

Difference level

The level of disagreement among the speakers is relatively low, with most differences being more about emphasis and approach rather than fundamental disagreements. This suggests that there is a general consensus on the importance of responsible AI for SDGs, but some refinement may be needed in defining the specific goals and methods of the Hamburg Declaration.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of stakeholder engagement, but they propose different approaches. Opp emphasizes a voluntary, non-negotiated process, while Bürkl focuses on utilizing existing conferences and bilateral discussions for input.

ROBERT OPP

NOÉMIE BÜRKL

Voluntary, non-negotiated process open to multiple stakeholders

Utilizing existing conferences and bilateral discussions for input

Similar Viewpoints

Both speakers highlight the importance of considering infrastructure issues in developing countries, including environmental sustainability and access to AI hardware.

THIAGO MORAES

YASMIN AL-DOURI

Environmental sustainability of AI infrastructure

Access to AI infrastructure and hardware in developing countries

Both speakers emphasize the need for concrete, implementable commitments and ongoing review of progress in future conferences.

ROBERT OPP

NOÉMIE BÜRKL

Making concrete, implementable commitments

Reviewing progress at future Hamburg Sustainability Conferences

Takeaways

Key Takeaways

The Hamburg Declaration aims to address the gap between AI and development communities, focusing on responsible use of AI for SDGs

It will be a voluntary, non-negotiated process open to multiple stakeholders

The declaration will align with the Global Digital Compact while being more concrete and implementation-focused

Key issues to address include environmental sustainability of AI, access to AI infrastructure in developing countries, and AI governance structures

The process will incorporate voices from developing countries and showcase local sustainable technology initiatives

Resolutions and Action Items

Launch a public call for inputs on the declaration

Put the draft declaration online for comments

Utilize existing conferences and bilateral discussions to gather input

Create an AI-SDG compendium to feature initiatives and use cases

Present the Hamburg Declaration at IGF 2025 in Norway

Unresolved Issues

Specific commitments and accountability measures for stakeholders

How to balance digital sovereignty with sustainability concerns

Extent of addressing AI weaponization within the declaration

How to effectively include perspectives from the Global South in the process

Suggested Compromises

Balance ambition with feasibility in making commitments

Incorporate human rights-based approaches while focusing on development outcomes

Address environmental sustainability without duplicating other AI governance processes

Thought Provoking Comments

When we talk about responsible AI for the SDGs, are we talking about AI for good? Are we talking about AI that is used to get to the SDGs or are we talking about AI that needs to be responsible to get to the SDGs? They are two different things.

speaker

Yasmin Al-Douri

reason

This question cuts to the heart of defining the scope and goals of the initiative, highlighting an important distinction between using AI as a tool for development versus ensuring AI itself is developed responsibly.

impact

It prompted clarification from the organizers that the initiative aims to address both aspects – using AI for development outcomes while ensuring the AI systems themselves are responsible and sustainable. This helped frame the subsequent discussion.

From the perspective of GDC negotiator, have you thought about aligning the lines of declaration with GDC more like clearly, just for consistency? … And the second question is, do you know what the level of commitments that you want to achieve in the end? What’s the ultimate goal in terms of commitment?

speaker

Kassia (UK delegation)

reason

These questions probe important aspects of how the Hamburg initiative relates to existing processes and what concrete outcomes it aims to achieve.

impact

It led to clarification that the Hamburg process is meant to complement rather than duplicate other AI governance efforts, focusing specifically on development applications. It also highlighted that the declaration will be voluntary rather than a negotiated agreement.

I miss the part about what is actually being done on the sustainability side, because when we are so concerned about digital sovereignty and actually creating more infrastructure for better data centers, just having more data power, if we don't think of the other side of the balance and how we are actually promoting that... Green energy, like green data centers, if that concept really exists, is definitely part of it, but so are environmental impacts, because it's not only about energy use. Brazil uses a lot of energy from water sources, which is supposed to be cleaner, for sure, but there are still a lot of environmental impacts that we sometimes create, and this should be part of the discussion.

speaker

Thiago Moraes

reason

This comment highlights the tension between digital development goals and environmental sustainability, bringing attention to often overlooked environmental impacts.

impact

It broadened the discussion to include more emphasis on environmental considerations in AI development, which the organizers acknowledged as an important point to address under the ‘planet’ aspect of the declaration.

So I don’t see the representation of somewhat of a general point of looking at, well, if I want to promote a certain point, I also have to incorporate others, as well as, especially if you look at AI and that most of the use cases are based on the data, obviously. And I mean, that is probably the area where we are lacking most in responsibility and in humanity, and whether this will also be part of it to look at the development point from a more holistic point of view and incorporating those SDGs, as well.

speaker

Claire

reason

This comment emphasizes the need for a holistic approach that considers potential conflicts between different SDGs and highlights data as a critical area for responsible development.

impact

It prompted acknowledgment from the organizers that they would need to consider a more overarching framework beyond the five P’s to address these interconnections and data issues.

And I think a human rights based approach to whatever you are doing, be it AI or the SDGs, actually, is very helpful in that sense, because if you make sure that you have a human rights based approach, you don’t only cover those very obvious elements such as discrimination, AI bias, all of that, if you indeed incorporate it in the design, development and deployment phases.

Speaker: Trine (Danish government representative)

Reason: This comment introduces the importance of incorporating a human rights-based approach into AI development for SDGs, providing a framework that can address multiple concerns.

Impact: While not directly addressed by the organizers, this suggestion added a new perspective to the discussion on how to ensure responsible AI development across multiple dimensions.

Overall Assessment

These key comments shaped the discussion by clarifying the scope and goals of the Hamburg initiative, highlighting important tensions and considerations in AI for development (such as environmental impacts and potential conflicts between SDGs), and introducing frameworks like human rights that could guide responsible AI development. They pushed the organizers to think more holistically about the initiative and consider how to address complex, interconnected issues in the declaration and conference.

Follow-up Questions

How to balance digital sovereignty and infrastructure development with environmental sustainability?

Speaker: Thiago Moraes

Explanation: It is important to consider environmental impacts when building data centers and digital infrastructure for AI development, especially in developing countries.

How to ensure access to AI infrastructure and hardware in developing countries, particularly the global south?

Speaker: Audience member (unnamed)

Explanation: Critical for inclusive AI development and addressing SDGs in underserved regions.

How to develop AI governance structures in developing countries, particularly in the Middle East?

Speaker: Audience member (unnamed)

Explanation: Necessary for responsible AI implementation and addressing potential risks.

How to incorporate a human rights-based approach in AI development for SDGs?

Speaker: Trine (Danish government representative)

Explanation: Ensures comprehensive coverage of issues like discrimination, bias, health, and environmental sustainability.

How to showcase local initiatives and small-scale use cases of sustainable AI applications?

Speaker: Thiago Moraes

Explanation: Provides concrete examples and potential for scaling up successful projects.

How to ensure representation of voices from the Global South and indigenous communities in AI and SDG discussions?

Speaker: Robert Opp

Explanation: Critical for understanding on-the-ground realities and ensuring inclusive technology development.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Day 0 Event #140 African Library Internet Governance Ambassadors Program

Session at a Glance

Summary

This discussion focused on the role of libraries in the digital age and their potential integration into the Internet Governance Forum (IGF) ecosystem. The conversation was led by representatives from Library Aid Africa, who presented their organization’s efforts to transform libraries into innovative digital hubs. They outlined programs aimed at empowering librarians with digital skills and engaging them in internet governance discussions.


Participants explored strategies for positioning libraries as digital inclusion centers within communities, emphasizing the importance of infrastructure, accessibility, and community engagement. The discussion highlighted the need for libraries to upskill staff, provide digital tools, and collaborate with partners to meet community needs effectively.


The conversation also addressed ways for libraries to engage with the IGF and influence digital policymaking. Suggestions included increasing librarian participation in IGF events, sharing case studies, and leveraging libraries’ unique position as information hubs to contribute to policy discussions.


Sustainability and partnerships were identified as crucial elements for libraries’ digital transformation. Participants proposed collaborations with educational institutions, private companies, and local startups to secure resources and support. The importance of adopting a multi-stakeholder approach and exploring intersections between libraries and various sectors was emphasized.


The discussion concluded by acknowledging the changing media landscape and the need for libraries to adapt their outreach methods to engage youth and diverse communities. Participants stressed the importance of encouraging responsible use of digital tools and highlighted libraries’ potential to bridge digital divides and promote equitable access to information and resources.


Keypoints

Major discussion points:


– Overview of Library Aid Africa’s programs and initiatives to transform libraries into digital hubs


– How libraries can serve as digital inclusion centers in communities


– Ways for libraries to engage with and influence internet governance forums


– Potential partnerships and collaborative models to support libraries’ digital transformation


The overall purpose of the discussion was to explore how libraries in Africa can evolve to become digital hubs and engage more actively in internet governance, while serving their communities’ needs for digital access and skills.


The tone of the discussion was collaborative and forward-looking. Participants shared ideas enthusiastically and built on each other’s comments. There was a sense of optimism about the potential for libraries to play an important role in digital inclusion and internet governance, balanced with pragmatism about the challenges involved. The tone remained consistent throughout, with participants offering constructive suggestions and insights.


Speakers

– DAMILARE OYEDELE: Facilitator/moderator of the discussion


– SHAGUN: Presenter from Library Aid Africa


– SARAH KADDU: Librarian


– GABRIEL KARSAN: Online participant


– MARIA: Participant with expertise in policy and libraries


Additional speakers:


– Unnamed male participant: Works with capacity building for regulators in Sub-Saharan Africa


– Unnamed female participant


Full session report

Revised Summary: Libraries in the Digital Age and Internet Governance


Introduction:


This discussion, facilitated by Damilare Oyedele, explored the evolving role of libraries in the digital age and their potential integration into the Internet Governance Forum (IGF) ecosystem. The conversation featured insights from Library Aid Africa representatives, librarians, and participants with expertise in policy and capacity building.


Damilare Oyedele opened by introducing Library Aid Africa’s programs, including the African Library and Information Associations and Institutions (AfLIA) Leadership Academy and their digital literacy products. He posed key questions to guide the discussion:


1. How can libraries transform into digital inclusion hubs?


2. How can libraries engage with and influence internet governance forums?


3. What partnerships are needed for libraries’ sustainability and digital transformation?


Libraries as Digital Inclusion Hubs and Their Role in Internet Governance:


The discussion emphasized the transformation of libraries from traditional book repositories to innovative digital hubs. Damilare Oyedele highlighted this shift, describing libraries evolving from “quiet, boring space of books alone to a more innovative environment.” Sarah Kaddu stressed the need for libraries to stock digital tools and upskill staff to manage them effectively. Maria underscored the fundamental requirements of infrastructure and internet access for libraries to serve as digital inclusion centers.


Speakers agreed that libraries must evolve to meet community needs in contextualized ways, not only providing access to digital resources but also actively engaging with communities. Sarah Kaddu suggested that libraries should reach out to communities, inviting and encouraging all categories of people to access and learn to use digital tools.


The importance of libraries engaging with and influencing internet governance forums was highlighted. Damilare Oyedele and Sarah Kaddu agreed that librarians need competence and upskilling to effectively participate in Internet Governance Forums (IGFs). Sarah Kaddu emphasized that libraries should participate in IGFs to understand and relate to their work.


Gabriel Karsan drew a parallel between libraries and internet governance principles, stating, “When we think about the internet and its principles of openness, accessibility, and providing a user the chance to have end-to-end access to resources, it’s the same thing as the abstraction of the library.” This comparison highlighted the shared values of openness and accessibility.


Maria suggested that libraries should use case studies to demonstrate their ecosystem to policymakers, proposing that libraries “use their own case studies at a national level and perhaps bring them into these spaces to talk about these success stories with other policymakers.”


Damilare Oyedele mentioned the African Library Internet Governance Ambassadors Program, which aims to build capacity for librarians to engage in internet governance discussions.


Partnerships and Sustainability for Libraries:


The speakers unanimously agreed on the importance of collaboration and partnerships for libraries’ sustainability and digital transformation. Damilare Oyedele emphasized the significance of collaboration with schools, colleges, and tech companies. Sarah Kaddu advocated for a multi-stakeholder approach to attract funding, stating, “We need to work in a model which is multi-stakeholder, take on a multi-stakeholder approach, so that we can contribute… to so many of them: health, we can come in to contribute; education; agriculture. We don’t have to work alone anymore.”


The discussion also explored the potential for libraries to serve as competence centers for both education and professional upskilling. Maria suggested that libraries should explore intersections with themes like democracy and peacebuilding, broadening their societal impact.


Sarah Kaddu emphasized the importance of encouraging responsible use of digital tools and AI, suggesting that libraries should embrace rather than resist these technologies.


Conclusion:


The discussion concluded with Damilare Oyedele inviting participants to check out Library Aid Africa’s website and scan a QR code for attendance sheets, demonstrating the organization’s own use of digital tools.


Session Transcript

DAMILARE OYEDELE: …in African countries to be able to navigate the IGF ecosystem. To help us progress today, I have three speakers joining me online, including Shagun, who will be walking us through a few of the things we do at Library Aid Africa. All right, great. So I’ll walk you through a couple of things today. We’re going to discuss what we do as an organization, and from there we’re going to dive into the insight behind today’s session. We’ll talk about what comes to mind when you hear the word library, then dive into the program proper and a question-and-answer session. I’m also going to give you a guide to today’s consultation, and of course we’ll discuss the construction of the program before we move into questions and remarks and end the session. It’s going to be interactive with our online participants and those present with us here today. So without further ado, I’m going to invite my colleague Shagun to begin the presentation. Shagun, over to you.


SHAGUN: Thank you, Damilare. Good day, everyone; it’s nice to have you join us. I’ll be taking us through what we do as an organization and who we are. Next slide, please. At Library Aid Africa, we collaborate with partners, using digital transformative tools and community engagement to see how we can transform libraries into viable spaces. Can we move to the next slide? Within five years of establishment, we’ve been able to work with partners across nine African countries, and we’ve made an impact across 22 African countries. We’ll be sharing that impact with you, so you’ll get to know more about what we’ve done. Some of that impact, as I mentioned earlier, comes from the Young African Library Leaders Fellowship (YALF), and we also have the Community Library Center, the Library Selfie Series, and the Mini Library Project. We run all of those programs to see how we can make libraries a viable space. In YALF, for example, we train young library leaders using digital tools, and at the end of the program they carry out a capstone project within their local communities where they can make an impact. We started YALF about four years ago, and in that time we’ve trained about a hundred young library leaders, cutting across 19 African countries, and more than 50 of them have carried out capstones that have had a real, positive impact on their communities. Their capstones have helped make their libraries and their communities more viable. So we’re going to listen to some testimonies from those participants about how they benefited from the programs we put them through.


DAMILARE OYEDELE: So the video will be played very soon, just a few seconds to play the video. Thank you.


SHAGUN: Sorry for the delay. The video will play soon. It’s just a short video, about one minute, showing testimonies of YALF participants who have benefited. They carried out a lot of projects that turned out to be very impactful to their communities, in terms of building libraries in their communities and equipping those libraries, and these are as a result of-


DAMILARE OYEDELE: Can you check whether the video is playing already, please? I’m sorry.


SHAGUN: All right. For those of us online, we can’t really see the video from our end.


DAMILARE OYEDELE: Yeah, apologies for that, but we have to progress. The next part of the agenda, so you can go ahead with your presentation, Shagun.


SHAGUN: Okay. All right, so based on the programs that we have, we are developing and currently working on some products that can help us carry out those programs seamlessly and also make libraries viable spaces. One of the products we’re currently working on is a platform for YALF, and the platform is to upskill young library leaders. We also have other platforms and products in the works. One of them is the Library Tracker. What that product does is connect library users to libraries: users can find libraries around them, view resources about a library, and, the good part, even borrow books from those libraries using the app we’re currently building. Another product we’re working on is Library X Africa, which equips and upskills librarians with digital skills to transform them beyond just normal bookkeeping, so they can bring technical skills into the library ecosystem. Another product is the Library Volunteer Corps, which seeks to connect as many digital talents as possible with libraries, so that they can volunteer their talents to improve those libraries, and in doing so the volunteers also gain experience while serving them. And then these are some of the programs we currently have running. One is the Library Policy Fellowship, which is geared toward empowering librarians with the knowledge they need to influence and change library policies and legislation around the ecosystem. We also have the Library Internet Governance Ambassadors Program, which you will get to know more about and which is part of why we’re here today. And we have the Community Library Center Project, which is geared toward setting up libraries in our local communities and making those communities hubs with access to learning resources. So these are just some of the programs we have, and some of the things you need to know about Library Aid Africa. I believe you now have an overview of what we do at Library Aid Africa. Thank you so much. Damilare, over to you.


DAMILARE OYEDELE: Thank you so much, Shagun, for that presentation. Diving further into what we’ll be doing today: this conversation is for us to understand how we can work with libraries much better to understand the legislative process and the internet governance ecosystem as a whole. We’re going to inquire today about what libraries can do to be part of the IGF ecosystem, how libraries are essential partners in addressing digital public goods, and how libraries are key partners toward achieving digital futures in African countries. And of course we’ll also dive into ideas about what libraries can do better. What can we do? How can we improve things? Who should we collaborate with, and how can we upscale our work in the library ecosystem as we progress? But first, I have a question for you: what comes to mind when you hear the word libraries? Anybody? A lot of books. Okay, you’re correct. You? Books? Okay. All right. We have different responses to what comes to mind when you hear the word libraries. Let me check the chat box to see if we have a response from an online participant. What comes to mind when you hear the word library? All right. You’re all correct. However, libraries are transforming over time from the quiet, boring space of books alone to a more innovative environment, and that’s why empowering librarians with digital skills is important: to upskill themselves so they can make libraries a much more vibrant and innovative ecosystem. Diving further into this conversation, we’ll now present what the program is all about, the African Library Internet Governance Ambassadors Program, and what we aim to achieve with it. So what are we doing with this particular intervention?
We’re working toward empowering library leaders in Africa to integrate libraries into the discourse and actions of the IGF at the national level, ensuring that active participation across the board is guaranteed, and ensuring that we’re able to shape internet governance policies and advance digital inclusion strategies for libraries in African countries. And why are we doing this? We are doing this specifically to address the capacity gap among librarians to engage meaningfully in the internet governance ecosystem, and to contribute toward shaping policies around digital inclusion for libraries. So what approach are we deploying for this intervention? Number one, an annual ambassadors program, where we engage and train librarians to build practical skills and knowledge about the internet governance ecosystem and collaborate with existing IGF and ICANN ambassadors in various countries. The second approach is to explore collaborative engagement: to initiate and build collaboration between libraries and the IGF ecosystem in African countries at national and regional levels across the board. The third is community building: we need a vibrant community of librarians and internet governance ecosystem partners to leverage expertise and network activities and programs that will drive value and impact for library engagement at the IGF level in Africa. And what do we aim to achieve with this? We’re looking at building a very vibrant team of librarians across African countries, fostering collaboration between libraries and IGF ecosystem partners, and leading to increased representation of libraries in IGF discussions. The second aim is to enhance the ability to advocate and lead engagement activities with libraries, resulting in greater involvement of libraries at the country level of the IGFs.
And the third is to increase awareness of libraries’ importance in the IGF ecosystem, and to support dissemination efforts through evidence-based activities in collaboration with IGF partners in various African countries. So that’s an overview of the program. I don’t know if you have questions, online or offline, where you need further clarity on this part of the presentation. Questions, comments? Okay, please pass me the mic. Thank you very much. Thank you for the presentation. Interesting topic; I have not thought much about this. I’m working on capacity building for regulators in Sub-Saharan Africa. So I assume the library you are talking about is a kind of hub for information, in a virtual or physical space, where connectivity is a presumption, with some kind of processing going on and some structure. Can you say anything about the realities of the hubs you have described in your project? Where are the libraries today, and what would you like them to be? Sure, thank you so much for your question. I would say that libraries are transforming from boring, quiet spaces to more innovative environments, and access to the internet is essential in that ecosystem for libraries to transform into the digital hubs I want them to be. For instance, if a library in a community has internet connectivity, the community gets to benefit from the digital goods and capacity that the library has to provide digital access to them. So libraries are not just places to access books; libraries are transforming into access points for opportunities, digital tech hubs, and environments for people to thrive, create ideas, and innovate. That is why this particular program is important: to see how libraries can scale from what we knew them to be to what they really are becoming.
Libraries have moved from the book space alone to a more innovative environment that thrives on and supports innovation. Great. Any further questions? All participants, please feel free to drop your questions in the chat box as we progress. All right, great. It appears there are no further questions, so I’m going to dive further into our conversation for today. For the next 20 minutes, we’re going to have a couple of exercises, both online and offline, and we’ll be conversing together to respond to some very key questions based on the program we presented to you just now: what can we do to make this a more grounded, contextual approach? We have three themed questions for you to respond to, and the questions will be interactive, so you get to come together, share ideas on how things can be better, and document your thoughts and share them with us. The three questions have been designed to help us work together much more closely with the IGF ecosystem and create ideas, and they fall across three major categories. Category number one is digital inclusion in libraries, and the question is: how can libraries be better positioned to serve as digital inclusion hubs within their communities, and what strategies can ensure equitable access to digital resources? The second question is about integration into the IGF: what innovative approaches can libraries adopt to effectively engage with the Internet Governance Forum and influence digital policymaking at local, national, and regional levels? And the last question is about sustainability and partnerships: what collaborative models or partnerships can libraries leverage to secure resources and support sustainable digital transformation? So these are the questions we have.
But I see there’s a small crowd in person, so I think we can interact with these questions and share our ideas. We have 20 minutes to discuss. On the first question, for those in the room and those online: how can libraries be better positioned to serve as digital inclusion hubs within their communities, and what strategies can ensure equitable access to digital resources and tools? It could be ideas from your work experience, what you’ve seen in other ecosystems, and how libraries can potentially align toward those priorities. Please feel free to speak. Okay, thank you. Availability. I mean, to be able to have access. I assume that’s one factor, probably among others. Availability and access, even if the hubs are virtual or physical. So I assume there should be a low threshold to enter: people need to be aware of the hubs and have the resources, so there is really a low barrier to access. That was one contribution. Great, insightful. You want to go ahead? Nothing from my side; he made a good point, and I have nothing to add. Awesome.


SARAH KADDU: Okay, thank you so much, Damilare. On how libraries can be better positioned to serve digital inclusion, especially if I look at our communities that are underserved: first of all, the libraries need to stock the tools and also upskill themselves to be well conversant with managing these tools. Then they should work with the communities, reaching out to them rather than only waiting for people to come: invite and encourage all categories of people to come and access these tools, and maybe also train and upskill them, so as to be able to meet their needs. Because if libraries are not able to meet people’s needs, they’re going to work in isolation, and the communities will not see these libraries as something important, or as a space that is important for them to visit and access. And maybe the strategy is that they need to work with other partners within the ecosystem and the infosphere, so that they know exactly what is offered by others and can come in to assist.


DAMILARE OYEDELE: Interestingly, you mentioned availability and access, and you emphasized community engagement and integration. Brilliant. Well, yeah, I would also complement Sarah’s idea, and what he mentioned: the first step for me would be the infrastructure. Really know whether the libraries can have access to the internet and what kind of access they have, but also operate in a more contextualized way, to make sure they understand the needs of their communities, because those needs can vary and be quite different. I also found very interesting what you mentioned about the library tracker, so that libraries can connect to each other; I found that a great solution too. Perhaps some libraries that have fewer resources or less access will know that there are other libraries around in the region that they can rely on, and perhaps even complement the resources they have, or share questions and challenges. Interestingly, our pointers are all interconnected: availability and access, community, and infrastructure. These are essential components to bring together to make libraries thrive digitally. All right, diving into the next point of the conversation, which is about integration into internet governance. The question here is: what innovative approaches can libraries adopt to effectively engage with the Internet Governance Forum and influence digital policy at local and national levels? Is it a more technical question? Sure, go ahead. Yes, thank you. The question is not obvious to me, probably because I don’t know enough about the IGF’s different forums, but I assume it has to be the possibility to advance usage. At the same time, you have to have, as was underscored previously, the competence, capacity, and ability to integrate, assuming that you have access and connectivity that is stable, and also that the places are secure. I mean, you showed Abuja; Nigeria is maybe the case in point. These should be secure spaces where you can really get into this.
And I assume there could then be ambassadors for usage, to advance the digital transformation where the internet is a tool for doing: education, training, upskilling, and other things. That means the internet becomes an integrated part of the library as an information hub. Maybe that is a way of integrating. But as I said, I don’t know enough about internet governance forums; I assume competence, upskilling, and making the point of advancing usage could be one avenue, but there are probably others. Great. Thank you so much for that comment. You made an interesting point about competence and upskilling, and that’s why this program is important: for us to be able to empower librarians in African countries to understand how to engage in the internet governance ecosystem. A few points we try to work on here are to train librarians on what internet governance is all about, engage them with local IGFs in diverse communities, engage them in conversations, and prioritize library connectivity in their societies. Through those kinds of dialogues and conversations, we see a more collaborative effort to drive capacity and competence, as you mentioned, and to lead these libraries to become more vibrant information hubs, leveraging technology. Thank you so much for that. Madam Sarah, do you have any comments to add?


SARAH KADDU: Thank you so much. I think, to begin with, librarians and libraries need to know that they are hardly mentioned in the IGF. So they should start by participating in the IGF and seeing what goes on there, so that they can relate it to their work, and then be able to innovatively practice and deliver services that speak to what the IGF is talking about, and also participate in policymaking activities at national and regional levels.


DAMILARE OYEDELE: Thank you so much for that comment. Before I go to you, Maria, we have a hand up online: Gabriel Karsan. Over to you. Gabriel, are you able to unmute yourself to speak? Excuse me, someone wants to speak online. Gabriel wants to speak online.


GABRIEL KARSAN: Thank you, Damilare. I hope I am audible. Sure, go ahead. Great. First of all, quite an interesting conversation. I just wanted to jump in and share a little bit of context on how we could integrate internet governance and the library community. When we think about the internet and its principles of openness, accessibility, and providing a user the chance to have end-to-end access to resources, it’s the same thing as the abstraction of the library, because the library is an open center where records from different sources all over the world build on diversity and inclusion for the purpose of preserving knowledge, but also making information more utilized and accessible without any barrier. So these things already go hand in hand, as we see in today’s synergy. When we think back to how the internet itself got built, it was largely the academic departments, academia, that had a lot of library initiatives in collecting the data, but also processing and preserving it, so that the next generation could take over and build. And I think that is the case we have now. So I think a program that equips librarians and internet governors together is first a reminder of the role of academia, but also of the role of librarians as the guardians of a collective, not just physical but also virtual, space for the preservation of knowledge, history, and the integration of diversity, because they still stand on the same principles of openness and centralization, with end-to-end delivery of resources. So the first thing we could do is take up the case of literacy, because the internet is just a technology; it is a medium. Now, a library can be anything: a library of code, a library of books, a library of different intellectual property. And when you think of it, as the source code, a human being is the original library, because you store particular forms of information within yourself.


But now the internet has established an infrastructure where we can store information, preserve it, and make it usable to each and every one of us. And when that infrastructure is based on the principles of a library, community-driven and always accessible, as most of the speakers have mentioned, it is something that can push for further integration within our purpose. Thank you.


DAMILARE OYEDELE: Thank you so much for those insightful comments there, Gabriel. And I should mention that the internet itself is a combination of a lot of libraries. Librarians, I would say, played a very vital role in creating the internet itself: the records and information from across the ages were all uploaded online. And I hope that through this conversation we’re having now, we will see more and more librarians come on board to contribute meaningfully and actively to the internet governance ecosystem. Maria, do you have any pointers you want to add on this particular question?


MARIA: Yeah, just a quick point. Well, I think everyone has already mentioned more or less everything. But perhaps another thing I would add, which I think is also important, is that policymakers should understand how the library ecosystem operates at a national level, because it is usually quite unique and very contextual; it depends on the country, and it can be very different. So something I have also observed that can be interesting is for libraries to use their own case studies at a national level and bring them into these spaces to talk about their success stories with policymakers, so that policymakers can have a better understanding of how the library ecosystem is built in a certain country and how they can collaborate with libraries on their projects.


DAMILARE OYEDELE: Interesting point that you mentioned about storytelling and communicating impact. That shows the importance of upskilling librarians on how to document impact, communicate advocacy, and engage with the wider society. And the last point that we have here is around sustainability and partnerships, right? This is very important because libraries can’t do it alone. We need to work with partners, collaborators, and other ecosystem players to see how we can cross-pollinate ideas and innovation. So what collaborative models or partnerships can libraries leverage to secure resources and support sustainable digital transformation initiatives in our societies?


AUDIENCE MEMBER: Thank you. It’s a very good initiative. I have been following the full session, and I would like to offer some points for this particular section, because we are also working on digital literacy. Libraries can collaborate with schools and colleges, so students can learn many things from the library. The next part is public-private companies, which can support this program. And among technology companies, one major avenue is local startup companies, which can develop this initiative and grow it. That’s all from my side.


DAMILARE OYEDELE: Sure, thank you so much for that. Your point spoke a lot about public-private partnerships and how libraries can engage key partners in the private sector, technology companies, NGOs, and others, getting these partners to scale impact and spread the word about what our roles are. Thank you so much. Going forward, are there any comments to add to that question: what collaborative models or partnerships can libraries leverage to secure resources and support for sustainable digital transformation initiatives?
For those online, if you’re willing to speak, kindly raise your hands, or drop your responses in the chat box and I will read them on your behalf.


AUDIENCE MEMBER: Sure. I think it could be in the education sector, or in training. So it’s a combination: one part on the school side, and the other on the professional side, I mean work-related upskilling, using the libraries as a hub or central competence center, if that could be possible. So both the school side, and then professional companies, public institutions, and others that would like to see advancement. And particularly, talking about digital transformation, a number of the skills would be soft skills of various kinds. So those are two possibilities.


DAMILARE OYEDELE: Interesting, thank you so much. Sarah?


SARAH KADDU: you so much personally I think there is a lot to offer but we can’t over it as people from the library sector so we need to work in a model which is mild stakeholder take on a mild stakeholder approach so that we can contribute I know we can contribute to so many of them health we can come in to contribute education agriculture we don’t have to work alone anymore but we have to work as a mild stakeholder group to be able to attract funding to be able to win resources and then be able to take on the sustainability part of it and also the digital transformation that you want to see great points Maria yeah just a


MARIA: Yeah, just a final point, adding to what Sarah said. I would also say a model could be for libraries to dare to venture a bit beyond the library and information field, or rather ecosystem, because in recent years libraries have also engaged in a lot of different themes: they engage a lot in democracy-related themes, peacebuilding, support of digital skills, like you said. So I think it is also interesting to explore these intersections that libraries have, and to see if there is an opportunity to engage with stakeholders that libraries are sometimes not used to engaging with in these spaces. Thank you.


DAMILARE OYEDELE: An emphasis on the intersection of our work in the library space with work related to health, education, no poverty, zero hunger, connecting those dots to engage partners and speak the language they understand. Great. Thank you so much for your insights and your responses. My colleagues are documenting the thoughts you shared with us, and that will further inform our decisions going forward on how we shape this possible intervention. And as we round off today’s chat, we’d love to hear any ideas or thoughts you have in mind that we have not mentioned in this conversation, that you think are worth exploring as we progress: how libraries can become more digitally savvy, how we can build strong capacity for libraries and librarians to engage in digital economy opportunities, and how libraries can best be positioned as hubs for digital empowerment. So if there are comments or general remarks we have not captured that you think are essential as we progress in this conversation and move toward implementation, please don’t be quiet; speak up.


AUDIENCE MEMBER: Sure. I’m just thinking of how you reach the youth, or the people, your customers, or say your clients, the public. I assume that’s an issue that could be challenging depending on context, social setting, and capacity. So I assume access will be about how you reach out, basically. And the media landscape is also changing, with different formats and shorter communication from youth, etc. So the ability to use various information channels to reach people will matter for getting positive development from this. But otherwise, it was very interesting to hear this, because it was not something I had been thinking so much about, but it makes a lot of sense. You’re doing a very good job.


DAMILARE OYEDELE: Thank you so much for your comments. I will take that into consideration as we progress.


SARAH KADDU: Thank you so much, Damilare. I found your presentation quite interesting. From a librarian’s perspective, I think we are not going to fight the digital tools, we are not going to fight AI; rather, we work to ensure that we encourage everyone to use these tools responsibly, perhaps in governance and information access. We can also navigate this space in an ethical way so that we can reap the benefits that these tools come along with. Otherwise, if we try to fight the digital tools, we are likely to lose the war; we cannot afford to do that anymore. And we should start with our youth, the kids at home. We should encourage them to use the digital tools rather than telling them how bad they are, and simply guide them: use them responsibly and meaningfully, engage in this space because you want ABCD, and don’t engage in this space because of ABC, so that they are able to benefit. Otherwise, we are going to be left behind, and yet we are already saying no one should be left behind. To the elderly, we should also show how beautiful digital inclusion is in their activities and communities, because if they are farmers, they need to get the best prices for their crop produce, and if they are not digitally literate, they are not able to find the best prices for their commodities. To the learners, we need to tell them that they can pass their coursework and exams well if they engage in the digital space and do a lot of preparation: seminars, group work, and the like. And of course, the traders will do the same. I thank you.


DAMILARE OYEDELE: I love how practical you are with your interventions. We are taking note of these comments, and we will see how to integrate them going forward and improve on them as we dive into implementing this particular intervention. Yes, Maria.


MARIA: Well, no, no final comments really, just to congratulate you on everything you’re doing. I think it’s really great, because you’re doing not just capacity building; you’re also keeping the field stronger by keeping the libraries interconnected. And that is what makes the field stronger in the long term. So congratulations on everything.


DAMILARE OYEDELE: Great, thank you so much for the feedback, the interaction, and the conversation. I believe we’ve been able to dive into a couple of things today which will inform our decisions going forward, I must say, with some clear pointers on what we want to do. Gabriel Karsan, do you have any comments you want to add as we round off this session this afternoon? Gabriel, any closing comments?


GABRIEL KARSAN: Yes, I hope I’m audible.


DAMILARE OYEDELE: Sure, you’re audible, we can hear you.


GABRIEL KARSAN: First of all, congratulations. I think we have received quite insightful comments that remain pivotal to how we move forward in the agenda of more inclusion, more openness, and the ability of access, but access in a very localized manner. When we think about libraries, libraries are birthed from community. And I think we can build the community from there, using the internet, which is now a collective of digital intelligence with the emerging technologies. It is quite pivotal that we connect the principles of libraries to the internet as we evolve and progress, to make it more equitable and interoperable, so that every person can have access to intelligence and information to further their own lives. I think this will build a lot of cohesion in our policy element, where the policymakers could also understand, and a good marriage with the tech community and the whole multi-stakeholder approach, especially now when we are at the IGF. Those are my few comments, and I’m looking forward to how we unravel this. Thank you all for the insightful comments.


DAMILARE OYEDELE: Great, thank you so much for your comments. Please kindly scan the QR code for the attendance sheet. We will connect with you to share the report of this conversation and to show the progress we make as an organization. We appreciate your time and the conversation we’ve had today. Kindly check out our website to learn more about what we do as an organization. We’re happy to connect with you further to see how we can collaborate, cross-pollinate ideas, and engage more with other stakeholders to make libraries a more vibrant ecosystem that caters for all aspects of our lives. On that note, thank you so much for your time. I appreciate your commitment and the conversation, and we will engage with you further as the conference goes on. Do enjoy the rest of the afternoon. Thank you so much, and bye for now.


D

DAMILARE OYEDELE

Speech speed

137 words per minute

Speech length

3682 words

Speech time

1611 seconds

Availability and low-barrier access are key factors

Explanation

Damilare emphasizes that availability and low-barrier access are crucial for libraries to serve as digital inclusion hubs. This implies that libraries need to ensure their digital resources are easily accessible to community members.


Major Discussion Point

Libraries as Digital Inclusion Hubs


Agreed with

SARAH KADDU


MARIA


Agreed on

Libraries need to transform into digital inclusion hubs


Librarians need competence and upskilling to engage with Internet Governance Forums

Explanation

Damilare highlights the importance of training librarians to understand and engage with the Internet governance ecosystem. This includes empowering them to participate in local IGFs and prioritize library connectivity in their communities.


Evidence

The program aims to train librarians on Internet governance and engage them with local IGFs in diverse communities.


Major Discussion Point

Integration of Libraries into Internet Governance


Agreed with

SARAH KADDU


Agreed on

Libraries should actively engage with Internet Governance Forums


Collaboration with schools, colleges and tech companies is important

Explanation

Damilare emphasizes the importance of public-private partnerships for libraries. He suggests engaging key partners in the private sector, technology companies, NGOs, and other partners to scale impact and spread awareness of libraries’ roles.


Major Discussion Point

Partnerships and Sustainability for Libraries


Agreed with

SARAH KADDU


MARIA


Agreed on

Collaboration and partnerships are crucial for libraries’ sustainability


Libraries can serve as competence centers for both education and professional upskilling

Explanation

Damilare suggests that libraries can function as hubs or central competence centers for both educational and professional development. This approach combines support for school-related learning and work-related upskilling.


Major Discussion Point

Partnerships and Sustainability for Libraries


Reaching youth and adapting to changing media landscapes is crucial

Explanation

Damilare points out the challenge of reaching young people and adapting to changing media landscapes. He emphasizes the need for libraries to consider different formats and shorter communication styles preferred by youth.


Major Discussion Point

Future Directions for Libraries


S

SARAH KADDU

Speech speed

139 words per minute

Speech length

695 words

Speech time

297 seconds

Libraries need to stock digital tools and upskill staff to manage them

Explanation

Sarah emphasizes the need for libraries to equip themselves with digital tools and train their staff to manage these tools effectively. She also stresses the importance of community outreach and meeting the needs of diverse user groups.


Evidence

Sarah suggests libraries should reach out to communities, invite and encourage all categories of people to access the tools, and train them to meet their needs.


Major Discussion Point

Libraries as Digital Inclusion Hubs


Agreed with

DAMILARE OYEDELE


MARIA


Agreed on

Libraries need to transform into digital inclusion hubs


Libraries should participate in IGFs to understand and relate to their work

Explanation

Sarah suggests that librarians need to actively participate in Internet Governance Forums to understand the discussions and relate them to their work. This participation can help libraries innovate their practices and contribute to policymaking activities.


Major Discussion Point

Integration of Libraries into Internet Governance


Agreed with

DAMILARE OYEDELE


Agreed on

Libraries should actively engage with Internet Governance Forums


Libraries should take a multi-stakeholder approach to attract funding

Explanation

Sarah advocates for a multi-stakeholder approach in the library sector. She suggests that libraries can contribute to various sectors like health, education, and agriculture by working collaboratively, which can help attract funding and resources.


Major Discussion Point

Partnerships and Sustainability for Libraries


Agreed with

DAMILARE OYEDELE


MARIA


Agreed on

Collaboration and partnerships are crucial for libraries’ sustainability


Libraries should encourage responsible use of digital tools and AI

Explanation

Sarah emphasizes that libraries should not fight against digital tools and AI, but rather encourage their responsible use. She suggests that libraries should guide users, especially youth and the elderly, on how to benefit from digital tools while using them ethically and meaningfully.


Evidence

Sarah provides examples of how digital literacy can benefit different groups: farmers finding best prices for crops, students preparing for exams, and traders improving their businesses.


Major Discussion Point

Future Directions for Libraries


M

MARIA

Speech speed

162 words per minute

Speech length

312 words

Speech time

115 seconds

Infrastructure and internet access are fundamental requirements

Explanation

Maria emphasizes that infrastructure, particularly internet access, is a crucial first step for libraries to transform into digital hubs. This underscores the importance of basic technological resources for libraries to function in the digital age.


Major Discussion Point

Libraries as Digital Inclusion Hubs


Agreed with

DAMILARE OYEDELE


SARAH KADDU


Agreed on

Libraries need to transform into digital inclusion hubs


Libraries should operate in contextualized ways to meet community needs

Explanation

Maria suggests that libraries need to understand and address the specific needs of their communities. This approach ensures that library services are relevant and beneficial to the local context.


Major Discussion Point

Libraries as Digital Inclusion Hubs


Libraries should use case studies to demonstrate their ecosystem to policymakers

Explanation

Maria recommends that libraries use their own case studies at a national level to illustrate their ecosystem to policymakers. This can help policymakers better understand how the library ecosystem operates in different contexts.


Evidence

Maria suggests bringing success stories to these spaces to talk about them with policymakers.


Major Discussion Point

Integration of Libraries into Internet Governance


Libraries should explore intersections with themes like democracy and peacebuilding

Explanation

Maria suggests that libraries should venture beyond traditional library and information themes. She encourages exploring intersections with topics like democracy and peacebuilding, which libraries have been engaging with in recent years.


Major Discussion Point

Partnerships and Sustainability for Libraries


Agreed with

DAMILARE OYEDELE


SARAH KADDU


Agreed on

Collaboration and partnerships are crucial for libraries’ sustainability


Keeping libraries interconnected strengthens the field long-term

Explanation

Maria commends the efforts to keep libraries interconnected, stating that this approach strengthens the field in the long term. This suggests that collaboration and networking among libraries contribute to the overall resilience and effectiveness of the library sector.


Major Discussion Point

Future Directions for Libraries


G

GABRIEL KARSAN

Speech speed

163 words per minute

Speech length

592 words

Speech time

217 seconds

Libraries and internet governance share principles of openness and accessibility

Explanation

Gabriel draws parallels between libraries and internet governance, highlighting their shared principles of openness and accessibility. He emphasizes that both aim to provide users with end-to-end access to resources and preserve knowledge for future generations.


Evidence

Gabriel mentions that academia and libraries played a crucial role in collecting, processing, and preserving data that contributed to the development of the internet.


Major Discussion Point

Integration of Libraries into Internet Governance


Libraries can build community and make intelligence more accessible through the internet

Explanation

Gabriel emphasizes that libraries are born from community needs and can use the internet to build and connect communities. He suggests that libraries can leverage emerging technologies to make intelligence and information more equitable and accessible to everyone.


Major Discussion Point

Future Directions for Libraries


Agreements

Agreement Points

Libraries need to transform into digital inclusion hubs

speakers

DAMILARE OYEDELE


SARAH KADDU


MARIA


arguments

Availability and low-barrier access are key factors


Libraries need to stock digital tools and upskill staff to manage them


Infrastructure and internet access are fundamental requirements


summary

All speakers agree that libraries need to evolve into digital hubs by ensuring availability of digital resources, providing low-barrier access, and developing necessary infrastructure and skills.


Libraries should actively engage with Internet Governance Forums

speakers

DAMILARE OYEDELE


SARAH KADDU


arguments

Librarians need competence and upskilling to engage with Internet Governance Forums


Libraries should participate in IGFs to understand and relate to their work


summary

Both speakers emphasize the importance of librarians participating in and engaging with Internet Governance Forums to better understand and contribute to digital policy discussions.


Collaboration and partnerships are crucial for libraries’ sustainability

speakers

DAMILARE OYEDELE


SARAH KADDU


MARIA


arguments

Collaboration with schools, colleges and tech companies is important


Libraries should take a multi-stakeholder approach to attract funding


Libraries should explore intersections with themes like democracy and peacebuilding


summary

All speakers agree that libraries need to form partnerships and collaborations with various stakeholders to ensure sustainability and expand their impact.


Similar Viewpoints

Both speakers view libraries as centers for digital literacy and skills development, emphasizing their role in educating and upskilling community members in the responsible use of digital tools.

speakers

DAMILARE OYEDELE


SARAH KADDU


arguments

Libraries can serve as competence centers for both education and professional upskilling


Libraries should encourage responsible use of digital tools and AI


Unexpected Consensus

Libraries’ role in broader societal issues

speakers

MARIA


GABRIEL KARSAN


arguments

Libraries should explore intersections with themes like democracy and peacebuilding


Libraries can build community and make intelligence more accessible through the internet


explanation

Both speakers unexpectedly agree on libraries’ potential to address broader societal issues beyond traditional roles, suggesting a more expansive and transformative vision for libraries in the digital age.


Overall Assessment

Summary

The speakers generally agree on the need for libraries to transform into digital hubs, engage with Internet governance, form partnerships, and expand their roles in society. There is a strong consensus on the importance of digital inclusion, capacity building, and collaboration.


Consensus level

High level of consensus among speakers, implying a shared vision for the future of libraries in the digital age. This agreement suggests potential for coordinated efforts in transforming libraries and integrating them into the broader digital ecosystem and policy discussions.


Differences

Different Viewpoints

Unexpected Differences

Overall Assessment

summary

No significant areas of disagreement were identified among the speakers.


difference_level

The level of disagreement was minimal to nonexistent. The speakers generally agreed with and built upon each other’s points regarding the role of libraries in digital inclusion, internet governance, and sustainable partnerships. This alignment suggests a shared vision for the future of libraries in the digital age, which could facilitate smoother implementation of proposed initiatives.


Partial Agreements


Takeaways

Key Takeaways

Libraries are transforming from quiet book spaces to innovative digital hubs


Availability, access, and infrastructure are crucial for libraries to serve as digital inclusion centers


Libraries need to upskill staff and stock digital tools to meet community needs


Librarians should actively participate in Internet Governance Forums to shape policies


Multi-stakeholder partnerships are important for libraries to secure resources and support digital transformation


Libraries should encourage responsible use of digital tools and AI rather than resist them


Resolutions and Action Items

Train librarians on Internet governance and engage them with local IGFs


Prioritize library connectivity in communities


Document and communicate library impact stories to policymakers


Explore intersections between libraries and themes like democracy and peacebuilding


Unresolved Issues

Specific strategies for reaching youth and adapting to changing media landscapes


Detailed plans for implementing digital transformation in resource-constrained libraries


Methods to measure success of library integration into Internet governance ecosystem


Suggested Compromises

Balance traditional library services with new digital offerings


Collaborate with both public and private sector partners to leverage diverse resources


Thought Provoking Comments

Libraries are transforming from the boring quiet space to a more innovative environment. And access to the internet is very important in that particular ecosystem for libraries to transform into the digital hubs I want them to be. For instance, if a library in a community has internet connectivity, the community gets to benefit from the digital goods and capacity that the library has to provide digital access to them.

speaker

Damilare Oyedele


reason

This comment reframes libraries as digital innovation hubs rather than just repositories of books, highlighting their potential for community impact.


impact

It shifted the conversation to focus on libraries as centers of digital access and innovation, leading to discussion of specific strategies for digital inclusion.


First of all, the libraries need to stock the tools and also upskill themselves to be well conversant with management of these tools and then work with the communities, reach out to the communities and not only to wait for the people to come to them, but also go to the communities, invite, encourage all categories of people to come and access these tools and maybe also train them, upskill them and be able to meet their needs.

speaker

Sarah Kaddu


reason

This comment provides concrete suggestions for how libraries can become digital inclusion hubs, emphasizing proactive community engagement.


impact

It deepened the discussion by moving from abstract concepts to specific actionable strategies libraries can implement.


When we think about the internet and its principles of openness, accessibility, and providing a user the chance to have end-to-end access to resources, it’s the same thing as the abstraction of the library, because the library is an open center where records from different sources all over the world build on diversity and inclusion for the purpose of preserving knowledge, but also making information more utilized and also accessible without any barrier.

speaker

Gabriel Karsan


reason

This comment draws an insightful parallel between the principles of the internet and libraries, highlighting their shared values and goals.


impact

It elevated the conversation by connecting libraries to broader internet governance principles, leading to discussion of how libraries can engage more deeply with internet governance forums.


I think it’s something that I also observed that can be interesting is that libraries use their own case studies at a national level and perhaps bring them into these spaces to talk about these success stories with other policymakers. And also so that they can have a better understanding on how the library ecosystem is perhaps built in a certain country and how can they collaborate with them for their projects.

speaker

Maria


reason

This comment introduces the important idea of libraries sharing their success stories to influence policy and build partnerships.


impact

It shifted the discussion towards practical ways libraries can engage with policymakers and other stakeholders, emphasizing the importance of storytelling and demonstrating impact.


We need to work in a model which is multi-stakeholder, take on a multi-stakeholder approach, so that we can contribute. I know we can contribute to so many of them: health, we can come in to contribute, education, agriculture. We don’t have to work alone anymore, but we have to work as a multi-stakeholder group to be able to attract funding, to be able to win resources, and then be able to take on the sustainability part of it and also the digital transformation that you want to see.

speaker

Sarah Kaddu


reason

This comment emphasizes the importance of multi-stakeholder collaboration for libraries to achieve digital transformation and sustainability.


impact

It broadened the discussion to consider how libraries can engage with diverse sectors and stakeholders to secure resources and support.


Overall Assessment

These key comments shaped the discussion by progressively expanding the vision of libraries from traditional book repositories to digital innovation hubs, community engagement centers, and key players in internet governance. The conversation evolved from identifying the need for digital transformation to exploring specific strategies for implementation, stakeholder engagement, and policy influence. The comments collectively emphasized the importance of proactive community outreach, multi-stakeholder collaboration, and sharing success stories to drive the digital transformation of libraries and increase their impact in the internet governance ecosystem.


Follow-up Questions

What are the realities of the library hubs described in the project? Where are the libraries today and what would you like them to be?

speaker

Unnamed participant


explanation

This question seeks to understand the current state of libraries and the vision for their future role, which is important for contextualizing the project’s goals and challenges.


How can libraries ensure availability and low-barrier access to digital resources?

speaker

Unnamed participant


explanation

This area of inquiry is crucial for understanding how to make libraries effective digital inclusion hubs within communities.


How can libraries work with communities to meet their specific needs?

speaker

Sarah Kaddu


explanation

This question addresses the importance of tailoring library services to community requirements, which is essential for their relevance and effectiveness.


How can libraries collaborate with other partners within the ecosystem and infosphere?

speaker

Sarah Kaddu


explanation

This area of research is important for understanding how libraries can integrate with and complement other information services.


What kind of internet access do libraries have, and how can it be improved?

speaker

Maria


explanation

This question is crucial for addressing the infrastructure needs of libraries to function as digital hubs.


How can libraries operate in a more contextualized way to understand the needs of their communities?

speaker

Maria


explanation

This area of inquiry is important for ensuring that library services are relevant and effective for their specific user base.


How can libraries become secure spaces for digital access and learning?

speaker

Unnamed participant


explanation

This question addresses an important aspect of making libraries viable and trusted digital hubs in their communities.


How can libraries effectively participate in and contribute to Internet Governance Forum discussions?

speaker

Sarah Kaddu


explanation

This area of research is crucial for increasing library representation and influence in internet governance.


How can policymakers better understand the library ecosystem at a national level?

speaker

Maria


explanation

This question is important for fostering collaboration between libraries and policymakers in digital governance.


What public-private partnerships can libraries engage in to scale impact?

speaker

Unnamed participant


explanation

This area of inquiry is crucial for identifying sustainable models for library digital transformation.


How can libraries effectively reach out to youth and adapt to changing media landscapes?

speaker

Unnamed participant


explanation

This question addresses the challenge of keeping libraries relevant and accessible to younger generations.


How can libraries encourage responsible use of digital tools and AI across different age groups?

speaker

Sarah Kaddu


explanation

This area of research is important for promoting digital literacy and ethical use of technology through libraries.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

IGF 2024 Global Youth Summit

Session at a Glance

Summary

This discussion focused on the impact of artificial intelligence (AI) on education and the challenges and opportunities it presents. Participants, including policymakers, educators, and youth representatives, explored various aspects of AI in education.

Key points included the need for ethical and trustworthy AI systems, addressing the digital divide, and ensuring equitable access to AI-powered educational tools. Speakers emphasized the importance of involving youth in decision-making processes and policy development related to AI in education. The discussion highlighted concerns about data privacy, algorithm bias, and the potential for AI to exacerbate existing inequalities.

Several speakers stressed the need for global collaboration to develop shared standards and best practices for AI in education. The importance of cultural diversity and localized content in AI-powered educational tools was also emphasized. Participants discussed the role of educators in implementing AI systems and the need for proper training to use these tools effectively.

The discussion touched on the challenges of AI-generated content and its impact on academic integrity. Speakers also addressed the potential of AI to personalize learning experiences and improve accessibility for students with diverse needs. The need for critical thinking skills and digital literacy in the age of AI was emphasized.

Overall, the discussion underscored the complex nature of integrating AI into education systems and the need for a multi-stakeholder approach to address challenges and harness opportunities. The importance of balancing innovation with ethical considerations and human-centered design in AI development for education was a recurring theme throughout the discussion.

Keypoints

Major discussion points:

– The impact of AI on education, including opportunities and challenges

– Ethical considerations and accountability for AI in educational settings

– The digital divide and ensuring equitable access to AI-powered education

– The role of youth voices and participation in shaping AI policies for education

– Addressing biases and ensuring diversity in AI development and implementation

Overall purpose/goal:

The discussion aimed to explore the implications of AI for education from multiple stakeholder perspectives, with a focus on including youth voices and considering both opportunities and challenges. The goal was to identify key issues and potential ways forward for responsibly integrating AI into educational systems.

Tone:

The tone was largely constructive and collaborative, with speakers building on each other’s points. There was a sense of urgency around addressing challenges, balanced with optimism about AI’s potential benefits. The tone shifted slightly towards the end to become more action-oriented, with calls for youth to propose solutions rather than just raising problems.

Speakers

– Li Junhua: UN Under-Secretary-General for Economic and Social Affairs

– Ihita Gangavarapu: Coordinator of India Youth IGF, co-moderator for onsite participants

– Ahmad Khan: Researcher and Development Engineer, Aramco, Saudi Arabia

– Henri Verdier: Ambassador for Digital Affairs, Ministry of Europe and Foreign Affairs, Government of France

– Margaret Nyambura Ndung’u: Minister at the Ministry of Information, Communication, and Digital Economy, Government of Kenya

– Carol Roach: Moderator

– Phyo Thiri Lwin: Active in regional youth initiatives from Myanmar

– Amal El Fallah Seghrouchni: Minister of Digital Transition and Administration Reform of Morocco

– Umut Pajaro Velasquez: Coordinator of Youth LAC IGF and Youth IGF Colombia

Additional speakers:

– Jarrel James: Researcher for internet resiliency

– Lily Edinam Botsyoe: PhD researcher in privacy

– Ahmad Karim: From Year in Women

– Osei Keja: Representative from African Youth IGF

– Dana Cramer: PhD candidate, Youth IGF Canada

– Asfirna Alduri: Part of the Responsible Technology Hub

Full session report

Expanded Summary of AI in Education Discussion

Introduction

This discussion explored the impact of artificial intelligence (AI) on education, examining both challenges and opportunities. The panel included policymakers, educators, youth representatives, and researchers from various countries, offering diverse perspectives on this complex topic.

Key Themes and Discussion Points

1. Historical Context and Current AI Applications in Education

Amal El Fallah Seghrouchni provided historical context, noting that AI-assisted teaching began 40 years ago. Li Junhua shared specific examples of AI use in education across different countries, such as AI-powered teaching assistants in China and personalized learning platforms in the United States.

2. AI’s Impact on Education

Speakers discussed AI’s potential to enhance and personalize education. Phyo Thiri Lwin highlighted AI tools’ ability to help non-native speakers improve language skills. Amal El Fallah Seghrouchni emphasized the importance of voice in generative AI for education, especially in multilingual contexts.

Ahmad Khan categorized AI developments in education into two approaches: instructor-focused (instructionist) and student-focused (constructionist). This distinction helped frame the discussion on AI’s varied applications in educational settings.

3. Ethical Considerations and Accountability

There was broad consensus on the need for ethical considerations and accountability in AI education. Amal El Fallah Seghrouchni stressed the importance of transparency and fairness in AI systems making educational decisions, as well as protecting data privacy and cognitive rights.

Umut Pajaro Velasquez argued for shared accountability among multiple stakeholders, including developers, educators, policymakers, and students. The discussion also touched on the ethical implications of using AI tools for academic work, with panelists and audience members debating the boundaries of acceptable AI assistance in education.

4. Addressing Biases and Inequalities

The discussion revealed significant concerns about biases and inequalities in AI education. Li Junhua pointed out the digital divide, noting that about a third of the global population is still not connected to the internet. Ahmad Karim highlighted the disparity between the global south, which faces threats and protection concerns regarding AI, and the global north, which tends to focus on opportunities.

Phyo Thiri Lwin elaborated on the challenges of accessing AI education in developing countries, such as infrastructure and funding issues. Asfirna Alduri brought attention to the often-overlooked issue of underprivileged workers, often from the global south, who label AI training data and are frequently excluded from discussions about AI development.

5. Youth Participation in AI Governance and Development

There was strong agreement on the importance of youth involvement in shaping AI policies and implementation in education. Henri Verdier emphasized the need to engage youth in shaping AI policies. Osei Keja advocated for youth involvement in policy-making and governance of AI at regional and national levels.

Asfirna Alduri proposed creating intergenerational spaces where youth can develop AI solutions. An audience member suggested that youth should propose solutions to AI challenges rather than just asking older generations to fix problems, indicating a slight difference in approach to youth involvement.

6. Role of Policymakers and Public Infrastructure

Margaret Nyambura Ndung’u highlighted the role of policymakers in ensuring AI becomes a force multiplier for inclusive and equitable education, addressing the digital divide, and safeguarding equity. Henri Verdier stressed the need for a public infrastructure for educational AI to ensure accessibility and fairness.

7. Concerns and Future Considerations

Ahmad Khan raised concerns about the potential for AI to replace human thinking and creativity. The discussion also touched on the need to validate the accuracy of AI-generated information, especially in STEM fields.

Umut Pajaro Velasquez emphasized the role of academia in researching the impact of AI on education. An audience member suggested the need for universal guidelines on AI use in education.

Conclusion

The discussion underscored the complex nature of integrating AI into education systems and the need for a multi-stakeholder approach to address challenges and harness opportunities. While there was general optimism about AI’s potential to enhance education, speakers emphasized the importance of balancing innovation with ethical considerations, addressing inequalities, and ensuring meaningful youth participation in shaping the future of AI in education.

Moving forward, it is clear that continued international collaboration, inclusive governance structures, and a focus on ethical, user-centric AI development will be crucial in realizing the potential of AI to positively transform education while mitigating associated risks. The historical context provided a valuable perspective on the evolution of AI in education, while the focus on current applications and future challenges highlighted the dynamic nature of this field.

Session Transcript

Li Junhua: It’s very heartening to see the decision makers collaborating with all of you and recognizing the critical role of young people in shaping these discussions. I’m truly inspired by the remarkable leadership and the spirit of cooperation that you have demonstrated through the IGF youth track over the past months. It has facilitated invaluable cross-regional dialogue and learning along the global youth track, or the global youth summit. The youth track sends a very powerful message to the world, namely, that uniting across the generations is essential. Only together can we meaningfully address the issues reshaping the foundation for our future. As we come together today to discuss the impact of AI on education, let us remember that, first and foremost, education is one of the fundamental human rights enshrined in the Universal Declaration. It is our shared responsibility to ensure that AI supports this fundamental right throughout the world, rather than undermining it. There are a number of good examples around us. For instance, in Morocco, AI is helping to reduce learning disparities in rural areas. In France, AI is helping visually impaired students to read by converting digital information into haptic feedback. In Brazil, AI-powered natural language processing is improving literacy. In India, AI-driven voice-assisted education tools are fostering language inclusiveness. And in the UK, AI is being used to convert complex documents into easy-to-read formats in over 70 languages, which has actually enhanced accessibility for learners with diverse needs.
Having said that, we have to recognize that there is a digital divide that actually hampers the potential to build on all those good practices. One very striking figure: a third of the global population is still not connected to the internet, not to mention AI accessibility. In this connection, the United Nations has been given a strong mandate to right this wrong. The recently adopted Global Digital Compact calls for action to forge international partnerships that build AI capacity through education and training, and to expand access to open AI models, systems, and training data. So today’s summit provides a platform for multistakeholder, intergenerational dialogue on AI, especially on AI training and AI education. I believe your summit and your discussion will greatly help identify a path towards an ethical digital future where we leverage AI to help guarantee that every individual has open access to quality education regardless of their background. So I look forward to hearing more of your ideas and innovative actions. Thank you. Thank you very much.

Ihita Gangavarapu: Thank you, Mr. Li. I know he’s been one of the busiest at this conference, and we appreciate the support towards young people in the internet governance space. With this, I think I’ll take over from Ms. Carol Roach. I am Ihita Gangavarapu. I’m the coordinator of India Youth IGF and also the co-moderator for onsite participants. But before I begin, I would also like to introduce our online moderators. We have Ms. Ines Hafid from the Tunisia IGF and Arab IGF, as well as Mr. Keith Andre from Kenya IGF and the African Youth IGF. This is to ensure that we have seamless communication and participation, both virtually and onsite. All right, with this, we now move on to a very interesting intergenerational panel, and it gives me immense pleasure to introduce you to our panelists for today’s session. We have Mr. Henri Verdier, the Ambassador for Digital Affairs, Ministry of Europe and Foreign Affairs, Government of France. We’re also joined by Ms. Margaret Nyambura Ndung’u, Minister at the Ministry of Information, Communication, and Digital Economy, Government of Kenya. We have Ms. Amal El Fallah Seghrouchni, the Minister of Digital Transition and Administration Reform of Morocco. We have Ms. Phyo Thiri Lwin, who is the Coordinator of Youth Myanmar IGF. We have Mr. Ahmad Khan, Researcher and Development Engineer, Aramco, Saudi Arabia. We have Mr. Umut Pajaro Velasquez, the Coordinator of Youth LAC IGF and Youth IGF Colombia, among other affiliations. We have Mr. Khalid Hadadi, who’s the Director of Public Policy, Roblox. With this, I now move on to the very first question, which is directed to our host, Mr. Ahmad Khan. Let us start with the host country. What is your experience of how AI innovation is impacting education? How are different countries in the world adapting AI in education? And what are the different opportunities that this space brings to all of us?

Ahmad Khan: Okay, thank you very much. Actually, the thought came to my mind whether I should use ChatGPT to help me with this remark. On the one hand, it would improve my output; on the other hand, it could dilute the thoughts and perspective that I’ll share. I think this is the dilemma we face with AI, right? And I’ll get back to this point, but I’d like to touch on some points that answer this topic, and we can expand more in the Q&A as there is interest. So, in terms of innovations in AI, there are generally two categories of AI developments in education. The first one is educator-focused or instructor-focused technologies; these are called instructionist approaches, which focus on supporting and automating grading procedures and feedback for students, for example. The other approach is called the constructionist approach, and the focus there is on how students and learners can use technology to construct knowledge themselves. That’s more of a hands-on approach to education. Both really provide value and should be involved in how we go about integrating AI technologies into education. In terms of concerns, there are concerns with data use by technology companies, and there are concerns with the decision-making capabilities of AI tools, which, by the way, is a structural limitation inherent in large language models. But my main concern, and this is looking at the future of what education could be, is the possibility of a future where society would become an end-user of knowledge and not a creator of knowledge, right? And as a youth advocate, I’d like to talk more about this and give a few characteristics of what good education of the future could look like and what it should do. So first, good education should instill a level of deep thinking and curiosity for knowledge.
And there are some new tools now where large language models use the Socratic method with students: they ask them questions, get them engaged, and let them reach answers on their own, so their thinking stays sharp and critical. The value we can see from that, and the vision here, is that this generation should be able to use AI to support their thinking and not replace their thinking. Second, good educators should be a source of inspiration and guidance for students, and we should really focus on enabling and supporting this direction. The idea here is that someone who is uninspired will use AI to just get the answer, while someone who’s inspired will use AI to help find the answer and foster curiosity. Third, good education would foster self-learning and empower lifelong learners who would think collectively and not individually. On this, I think that’s something we have to practice as we teach. One example that I would think of is developing learning hubs that we can spread around the globe, where students, policymakers, and technology developers can come together, discuss ideas, try them out, get feedback on what worked and what didn’t work, and then that helps with integration and really pushing it forward. I’ll give one idea here, and with that I will close: the idea of the technology adoption curve. If you’re familiar with it, you have your normal bell curve that starts with your pioneers, your Steve Wozniaks, the magicians of technology. Then you have your early adopters, which is about 10-15 percent of the population, and it turns out this is the section that really drives the maturity and adoption of technology. Those are the people who will stand in line for hours to wait for the new iPhone. Those are the people who will tell you how good it is and what needs to improve. Then you have your average adopters, the majority adopters; those are the practical people who want to think about, how will I use the technology?
How will it help me without really taking much of my time? Then you have the late adopters, those who are either not able or not interested in adopting it early on. In developing these learning hubs, we really want to think about how we can facilitate more people becoming early adopters, and get them involved in the discussion and engaged early on. Then you can have the effects go from a local to a global level. Thank you very much.

Ihita Gangavarapu: Thank you so much for your points. They set a lot of context for the upcoming discussions. I now turn to Mr. Henri Verdier, the Ambassador for Digital Affairs from the Government of France. My question to you is: France has rich initiatives to address AI. How do you see AI impacting education? And what principles should guide the development and implementation of AI in educational settings?

Henri Verdier: Thank you very much. That’s a question that’s impossible to answer in five minutes, but I will try to share some views. But first, let’s recall that when we speak about AI and education, we speak about at least three different things. We speak about how to use AI for education. And of course, we can dream of a world with, for example, more personalized education. If a model could tell me, you don’t understand mathematics because you didn’t understand this two years ago, and I will fix it, and now you can continue — that’s, of course, a dream. We also have to think about education about AI. So we need skills and literacy, and, frankly speaking, a human with absolutely no AI literacy won’t be as free as they could be soon. So we need to empower a bit and to prepare. And we need to prepare our children for the world of AI, a very complex world, where if you don’t know how to do your job with AI, you will lose your job, and where we’ll constantly live with small companions that will always obey and serve us. But that’s not a good way to become a great human being, to never be opposed, to be surrounded by servile models. So that’s a very different question. And that’s important, too. The youth represent a vulnerable group. We have a duty to let them become citizens and free human beings. They have rights, and we have to pay much more attention to them than to most users of AI. I say this because, of course, everything we are doing in the field of AI regulation and governance matters for education. And let’s start with general principles. We need to find some trusted models. You need to respect the UNESCO ethical principles and strong ethics. You need to avoid bias and to pay attention. In order to do this, we need to conceive, because we don’t have it, a way to audit AI models in a democratic way. The problem is not just a few experts coming to me and saying, no, I did audit the model, it’s great.
We need society to be able to have a conversation regarding the models. And for this, we need to conceive new strategies. We need to avoid not only bias, but also a lazy confirmation of current inequalities. So, for example, today, as you know, if I ask the AI to show me a CEO, it will propose a white, 50-year-old man, like me. Because today the average CEO is like this, but it will change, and the AI has to change too, or to prepare. So this is not just for education, but these are very important questions: if you don’t fix them, you will have trouble for education. Then maybe you have questions that are more educational. We need to save a spirit of public service. Education is a fundamental right, so of course the market can help us, companies can help us, we have research, but we have to be sure that it will remain a public service. Let’s imagine if we end up in a world with one giant company teaching every child of the world; we are lost, that’s finished. So we need a diversity of solutions respecting cultural diversities and the needs of every country. That’s very important. We need to be sure that the principles of equity, equal access, and non-discrimination will be preserved. In France, we think that for this we need a kind of public infrastructure at some level; we cannot just rely on self-regulation from an oligopoly of a few companies. So we have to think about what a public service of educational AI would be. That’s one of our questions, probably, and I am almost finished, because the five minutes are running fast. Probably the international community will have to conceive a framework for knowledge and education; we cannot let it be captured. We’ll produce a lot of knowledge with AI, with all those data, but can it be the property of the companies that will build this knowledge? You know, we live in a world that works because there is public science, because there is a
common knowledge, and you can innovate and create value because there is also a common knowledge. So what is the common knowledge that we need to share regarding AI? I don’t think that we have a strong conversation regarding this question. And I conclude with this, and that’s not just because we are at the IGF Youth. We need to engage the youth from the beginning. I believe this deeply and frankly, not just because I’m the father of two daughters, 18 and 20, but because we will need new solutions. We need strong innovation and brave innovation. We need ideas coming out of the box. And for this, we need to engage the youth very early. For example, to prepare the Paris AI Summit, we worked with IGF France and other organizations, and we organized some sessions, workshops, and a hackathon. We simply asked the young: which kind of education do you dream about? And I was very interested by the answers. For example, all the ideas involved a personal AI in my pocket that I taught myself, where I know the model and the model doesn’t know me so well. Those young people who worked with us didn’t want one big central AI model somewhere in the United States. They wanted their personal companion that they teach themselves. And that was interesting because it was instinctive; they didn’t really think about it. But for them, a good future is a future where the AI works for me, in my pocket, with my prompts, and not a future where someone somewhere decided about my future. So that was very, very interesting for me.

Ihita Gangavarapu: Thank you. Thank you, Mr. Verdier. You mentioned a lot of concerns that, as a community, we need to address, think about, and also deliberate on. That brings me to my next question, which is directed to Ms. Margaret. When we talk about cooperation and collaboration, what can policymakers do for AI to support and push education for everyone? And how can global collaboration be fostered to address the challenges and opportunities in the AI field, particularly in education?

Margaret Nyambura Ndung’u: Thank you, Madam Moderator. Good morning, good afternoon, and good evening to all of the distinguished panelists. It’s a great honor to join you at this 2024 IGF, and I’m glad that I’m able to join you online. I extend my gratitude to the host country, Saudi Arabia, and the UN Secretariat for organizing this. Going into the question, the intersection of artificial intelligence and education presents both profound opportunities and pressing challenges. As we delve into this discussion, I would like to frame my remarks around the two key areas that you have asked about: what policymakers can do to ensure artificial intelligence supports education for all, and how global collaboration can address the challenges and opportunities of artificial intelligence. Policymakers have a critical role in ensuring that artificial intelligence becomes a force multiplier for inclusive and equitable education, as envisioned in UN Sustainable Development Goal 4. To achieve this, there is a need to focus on accessibility, addressing the digital divide, and safeguarding equity, among others. Artificial intelligence-powered tools must be designed with inclusivity at their core, ensuring they cater to learners with diverse needs, including those who are differently abled and those in underserved communities. Governments should incentivize the development of open-source artificial intelligence tools and platforms that democratize access to quality educational content. This can be achieved through following universal design principles, and also by using artificial intelligence to create personalized learning experiences that adapt to individuals’ needs, such as text-to-speech capabilities for visually impaired learners, or speech recognition tools for those with hearing disabilities.
We also must focus on multilingual support, equipping artificial intelligence systems with language translation capabilities, especially for local and indigenous languages, to bridge linguistic barriers for learners in underserved communities and across all communities. Again, we are talking about collaborative platforms: promoting the creation of open-source educational platforms that pool resources and expertise globally, making high-quality content accessible to all learners regardless of location or socio-economic status. And finally, we are talking about supporting the development of localized content that is culturally relevant and context-specific learning materials that resonate with local communities. Distinguished delegates, the second area I would like to focus on is the issue of the digital divide. Artificial intelligence’s potential can only be harnessed if all learners have access to the necessary digital infrastructure. Policymakers must prioritize investments in affordable and reliable internet connectivity and digital devices, particularly in rural and marginalized areas. Addressing the digital divide through artificial intelligence in education requires comprehensive strategies that ensure all learners have access to the digital infrastructure and tools necessary to benefit from artificial intelligence-powered solutions. By focusing on infrastructure, affordability, and inclusivity, and combining efforts across stakeholders, AI can be a transformative tool to overcome the digital divide and provide equitable education opportunities for all learners. I must say we are doing a lot of infrastructure development, looking at affordability, and looking at capacity building as a country. Distinguished delegates, safeguarding equity is important in leveraging AI to advance education for all. We must mitigate the risk of bias in AI algorithms that could exacerbate existing inequalities.
Policymakers should establish regulatory frameworks that ensure transparency and fairness in the design and deployment of AI in education. To safeguard equity in leveraging AI for education for all, several strategies must be adopted to ensure AI supports inclusivity and does not inadvertently perpetuate or exacerbate existing inequalities. I know we all know that these inequalities exist across the continent, and more so in Africa. The third area that I would like to focus on is the element of fostering global collaboration. Global collaboration is not merely a choice, but a necessity to harness AI’s potential in education responsibly. By working together, governments, institutions, and the private sector can ensure that AI contributes to inclusive, high-quality education for all. The challenges and opportunities presented by AI in education are inherently global, and so must be our response. Collaborative efforts are essential in shaping an inclusive digital future, which includes strengthening international partnerships. Governments, education institutions, private sector actors, and civil society must work together to develop shared standards and best practices for AI in education. Multilateral organizations like the UN can provide platforms for dialogue and cooperation. With these, again, we bring in the education institutions, because we are talking of our young people, our youth, to ensure that they are fully integrated. Governments, education institutions, the private sector, and civil society must come together to develop shared standards and best practices that ensure AI’s ethical, equitable, and effective integration into education systems. Sharing knowledge and resources is one key area of fostering global collaboration. And the transformative potential of artificial intelligence in addressing global challenges, particularly in education, health, and development, calls for equitable access to AI technologies and expertise. 
Countries with advanced AI capabilities bear a responsibility to share their knowledge, resources, and innovations with those that are still developing their AI ecosystems. Some of the key strategies are promoting digital public goods, global research collaborations, and capacity building and knowledge transfer. Once we do that and ensure that we are exchanging knowledge, and particularly when we are talking about digital public goods, advanced AI nations can support the development and dissemination of open-source AI tools, data sets, and platforms as digital public goods, ensuring accessibility for all, and collaborative platforms can facilitate the adoption of these tools in under-resourced regions, fostering inclusion. As we discuss this, empowering youth participation is one of the core issues for this forum to consider. And again, this is a youth forum. And we are saying that empowering youth through global collaboration is critical to shaping an ethical, inclusive, and forward-looking AI ecosystem. By giving young people a seat at the table and fostering their active engagement, we can ensure that AI policies and practices resonate well with the aspirations of the next generation, safeguarding a future where technology serves humanity. And finally, the last but not least area is ethical AI development, which is equally critical in fostering global collaboration. By embedding ethical principles into the design and implementation of AI systems, we can build global trust. This includes respecting cultural context and safeguarding data privacy and security, especially for vulnerable populations. Global collaboration is critical to embedding ethical principles in the design, deployment, and use of AI systems. By leveraging shared values and diverse perspectives, the international community can ensure that AI development aligns with the principles of fairness, inclusivity, and respect for human rights, fostering global trust. 
Thank you, moderator.

Ihita Gangavarapu: Thank you so much for your points. I think that very well captured a policymaker’s perspective on the various concerns that we have, as well as certain ways in which these concerns can be addressed through collaboration. Now I’ll hand it over to Ms. Carolyn to take it forward.

Carol Roach: Thank you very much. We had a lot to digest just now, and it’s basically from a lot of us with the gray hair — honorable hair, I should say. So now it’s time to hear from the youth, especially based on what we’ve heard and the desire to really engage the youth, not just as figureheads, but to really get you involved and sitting at the table. So here’s your question, sorry. You come from Myanmar and are very active in the regional youth initiatives. I see her online all the time. Not everyone has the same opportunities. What policies are necessary to prevent the digital divide from widening due to AI implementations in education?

Phyo Thiri Lwin: Thank you for introducing me a bit. I feel, from the perspective of young people from a developing country, that we are also trying our best to catch up every day in whatever way we can. I know that there are academies in the private sector which are trying to train young people — the younger generation; I’m Gen Z anyway — trying to educate and train them to learn more about AI, generative AI, let’s say. That highlights that even though many things are happening in many developing countries, we are still trying our best to catch up and not to miss any kind of opportunity. But there are also challenges in accessing education, because, for example, developing the infrastructure related to AI is quite expensive, especially for private sector academies or schools or universities. They find it quite challenging with the funding and also the investment related to, let’s say, setting up an AI learning hub in a developing country. That is one of the challenges that I see. Another challenge is related to accessible education. The Internet has been a challenge for us in accessing technology in developing countries. Maybe it is related to geopolitical matters. And without the Internet, I don’t think we can keep learning continuously about AI, or empower young people to continue their education. So if we are talking about AI in education, the Internet is also important for us to have access to. So for preventing the digital divide — that’s the question — if you want to prevent the digital divide in learning from AI education, I personally feel that at least we need access to the Internet as a fundamental right. Then we can continue to shape the policy. 
Even if there is a challenge at the policymaker level, we can shape our society and community at the very ground level, like at school or university. We can change the policy. We can allow students to use AI, at least. But the educators also need to be open-minded about using AI. What I experienced is that when it comes to accessing AI, even though the students want to use AI tools, some of the educators stay narrow-minded. It’s a very derogatory way to say it, but I feel like they are very concerned about cheating on assignments or something like that. I can feel from their perspective why they are concerned about cheating on assignments using AI technology. But from the learner’s perspective, like me, I have a challenge. I’m not a native speaker of English, right? I need an AI assistant to revise my idea into a better version, let’s say, something like this. I feel that at the school or university level, we can shape the ground policy, at least to grant access to students to use AI tools. But one concern might be the assessment system, because many students can also cheat using AI technology, right? Maybe we need to change the exam system, and maybe the assessment system as well. That is what we can do, and from the educators’ side as well, I personally feel it’s better to change the exam or assessment system along the way. I’m mentioning what we can do at the ground level to shape our society by changing the policy at the school or university, right? But there is also the higher level of policymaking for developing countries, and there is always a big gap between developing countries and developed countries. But the thing is that we can share our resources with each other, because we are all human beings. 
One of the speakers said we have to think about diversity and inclusion, and we can also share resources, at least sharing information and also sharing the opportunity to learn about AI. Let’s say a speaker mentioned the AI Summit in France, right? So maybe we can give young people the opportunity to go and learn what is happening in a developed country by attending the AI Summit. In that way, this is also an opportunity to empower them to do something back in their own initiative at the local level. For example, if we invite an educator or a learner, and both of them get a chance to attend the AI Summit, that might probably be very beneficial for them to share their best practices and how to be open-minded about using AI technology. Yeah, that is a way we can share resources among us and also at the global level. We should not leave anyone behind in this era of AI evolution and revolution. We have to bring everyone along as much as we can by shaping our educational policy at the ground level and also at the higher levels.

Carol Roach: Thank you, Phyo. I think what we hear repeating here is that… Very good, thank you. We hear repeating here that we need some kind of corporate responsibility in terms of helping countries to develop their AI, because it is an expensive endeavor and we need them. I totally agree with regards to the change of mindset, especially of the educators, and probably of parents as well, so that the youth have a say and don’t just look at AI as a negative, but embrace the positive part. And for us to embrace the positive part, yes, we do need that mindset change. We are going to now hear from Umut online. Can you hear us? Umut? You can hear? Okay. Yeah. Okay, so as we sort out the technology, let’s move on to our next question. Minister Amal, this is a big one for you. After hearing all of what’s been said of these important views, both from the youth and from others of the older generation, what preoccupies your attention as a decision maker on AI implications for education? It’s a big one.

Amal El Fallah Seghrouchni: Thank you very much. Yes, it’s a huge question, but I will start with a dream. My dream is to keep young people as far as possible from computers, because I think they already spend a lot of time connected and very close to machines. And I think that AI should be used when it has a real added value. And we have to discuss what added value means. For example, if you use AI to simulate a classroom, this is something you cannot do alone as a teacher. So you need a tool to simulate this classroom, to simulate interaction with students, etc. And in this situation, AI can provide some benefits. And I would like to say something just to set the scene. In the 1980s, there was already education based on AI. We call this enseignement assisté par ordinateur — computer-assisted instruction. And it started many, many years ago, like 40 years ago. And there were a lot of advances in mathematics, in science, etc. And then we scaled up from this basic AI-assisted teaching to serious games, for example. And gaming became something very important in many, many situations, not only at school but also in companies, etc., because it puts the person in a situation of learning. And then we started thinking about more personalized experiences with AI. And we got this generative AI very recently — ChatGPT, in 2022 — but generative AI started like five years before. And this gen AI can have very interesting features, and I will go back to this, but also some bad features. For example, plagiarism is something we met; the whole education system is disturbed by this. With ChatGPT, you start by saying maybe ChatGPT can give me the answer. So let’s focus on the positive aspects. For example, the voice. The voice in generative AI is very useful for education. And we started developing a lot of apps, for example, for translation from one language to another, using approaches based on speech-to-text or text-to-speech. 
And this is very useful in particular in the Global South, because you have to face literacy, and you have to face multilingualism. If you go to Africa, for example, in one country you have to deal with like 15 or 20 different languages. And generative AI helps us to shift from one language very smoothly to another one. And this is, I think, the real added value of generative AI in education. Now, just from listening to you, I think there is a very huge problem. Again, if we focus on the Global South, it’s about connection, connectivity. It’s something very difficult to have everywhere. The infrastructure is also a huge problem if you deal with large language models. So we have to find some new approaches. For example, in France, there is a very nice group working on frugal AI and trustworthy AI, in the sense that we need to certify that the outputs of AI are very accurate, and also that they don’t need a huge amount of data or very large models. And also, the access to platforms: if you put the apps on platforms, people should be able to access them. So I think maybe the thing that is crucial is the ethical aspects of AI and education. We already have a digital divide, and it should not grow further. I mean, if you have people that have access to learning with very sophisticated tools while others don’t have access, you have this problem of accessibility and equity. Transparency: there is also a need to maintain clarity about how AI systems function and make decisions. We have now moved from the automatization of systems to the autonomy of systems with AI. And this autonomy allows some systems, educational systems, for example, to make decisions about orientation, to make decisions about access to the university, et cetera. And this is related also to accountability. We need to explain why this person will get this kind of access or not. Another topic, very important, is data privacy and cognitive rights in education. 
Because, you know, data is something very important to protect, and in particular if you deal with cognitive data, it’s much more important. There is a possibility to trace all the cognitive data and to manipulate it, to apply some nudging on this cognitive data, to go ahead with a lot of manipulation at large scale. And finally, I would like to mention all the problems related to AI, like bias in data — you know, the book Invisible Women. There are a lot of problems related to the use of data, and we rely on data, and on biased algorithms also. So just to summarize, there is the problem of data, there is the problem of infrastructure, and also there is the problem of design. How do we make AI trustworthy, in particular in the case of education?

Carol Roach: Thank you very much. We hear a lot of different terms of ethical AI, but I think I like trustworthy AI. So we’ll have to use that a little bit more as well. Is Umut on? Okay, Umut, so here is your question and welcome. Okay. What are the youth in Latin America and the Caribbean thinking when it comes to who should be held accountable for decisions made by AI systems in educational environments? Take it away.

Umut Pajaro Velasquez: Okay. Good day or good evening, everyone, wherever you are. When it comes to decisions about who is going to be held accountable for AI decisions — well, actually, in Latin America we think that this has many problems related to internet governance. It’s a problem that should be addressed by several stakeholders at the same time. Identifying the main stakeholders that can be held accountable for decisions made by AI systems in educational environments requires careful consideration of the roles and responsibilities of, first of all, developers. AI developers have the responsibility to design and develop AI systems that are ethical, unbiased, and transparent. They should ensure that their systems are trained on diverse and representative data sets, especially in a region like Latin America where we have so many cultural nuances and different languages and all of that, and also ensure that AI systems, especially the ones that we’re using for education, are designed to protect student privacy. Educators have a role in this aspect of accountability also — and I’m an educator, so we talk about this a lot — and despite a lot of people saying that most educators have some resistance to AI, I think it’s the opposite. Probably most educators don’t know exactly what their responsibility is in all this process, so they feel more afraid because they don’t know — not because they are actually against the use of the technology. So educators here have to play a crucial role in implementing AI systems in the classroom. Most educators need to be trained on how to use AI effectively and ethically, and they should be involved in the decision-making process regarding how AI is used in schools. 
That means that educators should be involved also in the implementation and the policymaking processes — not only being the ones that receive some education in order to implement those tools, but also the ones that decide how those tools are going to be implemented and regulated when they are used inside the classroom. And the other stakeholder that can be considered important here is obviously the policymakers. Policymakers have a responsibility to all of society to create regulations and guidelines that govern the use of AI in education. These policies should address issues such as data privacy, algorithmic bias, and obviously accountability. And students, because they are also part of the process — we can’t have real accountability without including students in this conversation. Without them, it’s impossible to fully address the complexity of having AI education tools and of making them accountable for the use of the tools, because it’s not only developers that are going to be accountable for it; we need to look at the whole process, not only the design stage, but through to the deployment and implementation. Students themselves should be empowered to understand how AI is being used in their education and to have a voice in the decision-making processes. They should be educated about the potential benefits and risks of AI and encouraged to critically evaluate the information generated by AI systems. Students, in this case, need not only proper education on how to use the tools, but also some critical thinking about how and when they start using the tools, because most of the students are already using the tools, so we can’t avoid that. We have to think that accountability is a really complex topic to talk about. 
Probably with five minutes we don’t have enough time to cover everything about accountability, but what we can say is that accountability should be shared among all stakeholders. It requires a collaborative approach that prioritises ethical considerations, transparency, and the wellbeing of the students, because the students are the main focus of the education system. Before I forget, another stakeholder that should be taken into account is academia. Academia actually needs to understand and investigate how AI is affecting education, not only in the practices inside the classroom, but outside the classroom, and how it’s changing the dynamics of how a student can learn and improve their abilities, inside the classroom or in their daily lives. So academia needs to understand the didactical and pedagogical aspects that are being affected by the use of artificial intelligence tools in the classroom. So that is another stakeholder that should be taken into account. That way we can have AI in education that is more accountable — that is actually transparent and fair, has human oversight, respects the privacy of students, promotes equity, and is child-friendly. So that’s my approach to it. Thank you.

Carol Roach: Thank you very much. So we can see that the number of critical stakeholders is growing here. We have government, the technical community, and of course, academia. After listening to all the talks, I have a question running around in my head, but I have to leave it running for one more speaker, and then I’ll put it out to you. So Mr. Khalid. Sorry. Just go ahead, okay. Right, so I get to put out my running question. So I’ll be honest, I have not used one single AI tool. And I’ll tell you why. It’s because — you can still hear me? Yeah, okay — it’s because of what was said at the beginning, and I think it’s from Mr. Khalid. Hold on. Am I going to be enhancing what I do, or diluting it? Is it really going to be me, or is it a thousand other people? So I’m going to put it out to you who’ve probably used AI. How do you feel ethically, personally, when you use an AI tool to help you — as Phyo said, to enhance? Do you think that you’re really enhancing? What do you do to help your ethical compass? So I’m throwing that question out to you in the audience.

Henri Verdier: Very brief comment. Indeed, you do use AI tools a lot. If you take a picture with your iPhone, that’s AI. If you receive some advertising on the web or within a social network, that’s done with AI. If you receive information in your social network, the feed, that’s AI. And that’s a wicked aspect of the problem, because you don’t always notice that AI is everywhere. And maybe that’s the most important thing, because we cannot confront and contest and discuss democratically, because decisions are made and we don’t even know that there are decisions.

Carol Roach: No, I agree. I agree that sometimes we don’t know, but I’m throwing it out there in terms of what I do know. So when you look at Zoom, you have the little AI apps you can use; WhatsApp; everywhere you turn. But right up until now — I don’t know if they’re trying to be ethical — you have to click a button to say, yes, I want to use it. So I’m looking at the point where I click and say, yes, I want to use it. But thank you. You’re quite right.

Jarrel James: Hi, my name is Jarrel James. I’m a researcher on internet resiliency. And I do a lot of work with AI, and a lot of the concerns that Ahmad has raised — I’ve heard him talk about them before — I have as well. And so I think what you’re discussing there is consensual data mining, consensual activity with AI. And when I use it myself, my hesitations right now around using it — which is something I would love to hear the panel discuss — are about who is responding to me. It’s a large language model. So whose language, whose background, whose perspective is diluting my creativity? Whose background and perspective is diluting my output? Because I really value the fact that I come from East Africa; I am well-trained and well-educated in all sorts of things in the West, but I am applying that education through this perspective and this lens. And when I use AI tools, I often notice that I almost have to give the AI my philosophy first, and I have to write out logical prompts that give if-then statements — like, if I believe this, then this is the outcome I would like to see. And I almost have to deprogram the AI from the language model that it exists in currently. And so I would love to hear — yeah, Ahmad, I think you are ready to go on this — but I’d love to hear more about, instead of just who owns how it’s implemented regulation-wise, who are the people in this? Do you see Global South members being the next Steve Jobs of AI, these big innovators in AI? Or is it going to follow the same path, where the foreign delegates or the corporations come in and they give you these language models, and then you have to decide what’s true or not? Thank you.

Ahmad Khan: So maybe first I’ll start with the concern that you raised, and this goes to what the ambassador said. Really, I think we blew it with social media. Why is it that someone on a different continent gets to decide what I watch on my phone for two to four hours? And I’m not mentioning any names — Mark Zuckerberg. But for AI specifically: the idea of how the technology works, basically, is that it’s a large language model, as in, it takes data and it learns what the data says, and it can predict the next word. This is what it learns. So you take a bunch of information from the internet, and what it learns, on average, is the average content you see. So without any post-training, this is what you get: the average response you would get on the internet. But then there is the fine-tuning that happens after — be more supportive, give more information, be this and that — and then it learns some concepts so that it can follow the direction that you give it, right? So how it happens now is that the different companies control for these things, and they ensure that it’s trustworthy in a sense. They try to make their own best judgment in terms of how to use it. And then we become end users. So now the question is, again, back to the point: how can we push it so that I can tell the AI what I want, and it serves me and not the company? And this is really what we want to focus on. And I think this really is an overlap between the capabilities of the technology itself and then what we do with it. And I’ll leave the floor if there’s any more comments, if anyone wants to add more to this.
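[Editor’s note: the next-word prediction Ahmad Khan describes can be illustrated with a deliberately tiny sketch — a bigram word counter rather than a real large language model. The one-sentence corpus and the function names here are made up purely for illustration.]

```python
# Toy illustration of "learning to predict the next word":
# a bigram model "trained" by counting word pairs in a tiny,
# hypothetical corpus. Real LLMs use neural networks over
# vastly more data, but the objective is the same idea.
corpus = "ai can help students learn and ai can help teachers teach".split()

# "Training": count how often each word follows another.
counts = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

# Without any fine-tuning, the model can only reproduce the average
# pattern of whatever data it was fed:
print(predict_next("ai"))    # "can" — the only word ever seen after "ai"
```

This also makes his point about averages concrete: the model’s answers are exactly the statistics of its training data, and post-training (fine-tuning) is what steers those statistics toward the behavior the company wants.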

Carol Roach: Next.

AUDIENCE: Can you hear me okay? Brilliant. So thank you so much for that, Ahmad. I think my experience as a young person utilizing AI — especially since I’m a consultant, and my job is quite professional — is that I usually struggle to get it to give me an objective stance. Whatever prompts I give it, it ends up speaking to what I’m feeding it, which is not necessarily what I want, right? I want this thing to challenge me, to give me some sort of objective truth. So I guess my question to you is: do you think an objective truth exists in the sense of AI, or is it always going to be manipulated to a certain extent by its users and the community that utilizes AI technology?

Ahmad Khan: Yeah, I think, again, this is a bigger question of what’s right and what’s wrong overall, right? But AI will give you the answer that it has learned. And in that sense, it’s always objective. And if you tell it, I want you to challenge me, then it will try to challenge you. And this is what it’s good at. So if you use it for what it’s built for, it’s great. But if you try to extend it further than what it should do, then it will fail. And then we say the company is responsible for it. If you use a knife that’s supposed to cut things and it cuts your finger, maybe you didn’t use it right. So we have to really know what the limits of AI are before we try to use it for all intents and purposes, right? In terms of how we can actually use it to get logical and objective answers, there are tools now. So large language models learn an intuition from data. This is what they get. If anyone is familiar with System 1 and System 2: it’s the fast thinking process of intuition. The model just learns intuition, but it doesn’t have a structure for logic. There are now hybrid models being developed that can actually check that something is logically, reasonably, objectively making sense. And that’s something we can incorporate into developing tools. I think that will take longer. So maybe hold off on asking it what the meaning of life is until we get that answer.

Henri Verdier: A very brief comment regarding your question. The current models were built by companies to sell something. So they try to please. Socrates and the others didn’t always answer; sometimes they would ask, are you sure that this is your question? Do you know why you are asking this question? But with these models, there will always be answers. And when they don’t know, they invent. And when they don’t invent, they hallucinate. But they will always agree to answer. And for me, that’s my worst concern.

Ihita Gangavarapu: Very well answered, actually. Just quickly before we proceed, I just want to check with our online moderators. Ines and Keith, do we have any questions or comments online? I see that we have a comment or question from Lily. If you’d like to speak, please.

Lily Edinam Botsyoe: Hi, everyone. Good morning — a very early morning from here, as I’m in Cincinnati, Ohio — and I’m excited to join the conversation. So one of the things I wanted to say early on was about the fact that Madam Carol had mentioned that she had not used AI, and then she actually clarified it. So I’m going to point out that in using our emails and our calendars, there’s a subtle use of AI, so much so that it enhances productivity in one way or the other. We all are using it. And for somebody who’s a youth, coming from the angle of answering whether it adds any efficiency or is effective for me: first off, it is. But secondly, I’m a PhD researcher in privacy. So my concerns actually go towards the idea of privacy, and I share the sentiment of the speaker who took the microphone the first time, asking whose perspective it may be spotlighting for me, right? And so, in that aspect, one of the things that we start to look at is what these companies are doing. For example, ChatGPT. I’ve discussed so much about my dissertation with ChatGPT that when I ask a question, it brings elements of my past work into the response it gives to me. One of the things that they’re doing is that when you start a conversation, you can toggle a button and say, hey, don’t train with my information, or don’t train using my data. As a first step, it is a way for people to say, hey, I’m looking into my privacy, and maybe I don’t want this to be used in training this large model for others to be a part of. But one of the conversations is that, aside from what these companies are doing, we are speaking about responsibility. And for us as people who are probably looking to be private and secure while using AI tools, we should also start thinking about how we know and understand what these tools are, and first off look out for our own security. Are you uploading your social security numbers? Are you uploading passwords? 
What else are you doing out there that can probably land in the training of these models? Remember, like the minister said, these AI tools are using machine learning; there’s natural language processing that they’re using; and these tools even act like the human brain, which brings in the neural network part of AI. In that sense, we all are using the tools, but we also have to take the time to learn for ourselves and make sure we are taking proactive approaches to protect ourselves, while these companies and policies and everything else also work in place. So from my point of view, AI supports me, but I also look out for privacy, because it is huge. And if you don’t think of it for yourself, the companies will only play a secondary role, and your information probably may be used in training these models.

Carol Roach: Okay, thank you very much. That was very helpful. The next speaker online, we have quite a queue here. So please, we’re asking everybody to stick to the two minutes. I think we’re down to maybe a one minute intervention so that persons can have a chance to get involved. Thank you.

Ahmad Karim: Hello, everyone. Hello. Are you listening? Hello, I'm not getting… Go ahead. Hello, thank you so much for the insights. My name is Ahmad Karim, I'm from Year in Women, and I have two questions. The first is that we see a very wide gap between two conversations when it comes to AI. The Global North talks about AI and the opportunity it would bring to everyone's economy; the Global South talks about the threats and the protection side of AI and technology. How can we guarantee that women and girls have both sides in the conversation, so that the systems are aware of their safety concerns and measures, without closing off the opportunity space where more girls and women can shape the whole industry? The second question relates to bias and AI. We know there are biases in the data itself: AI inherited our history, our civilization, tens of thousands of years of biases against women and girls, and this is what we receive as AI input. There is also algorithmic bias from those who create AI, mostly men creating software for other men, with a smaller percentage of women in the AI industry, so the bias is perpetuated in the application. And the last part is bias in the users, those who already hold gender biases and ask the wrong questions. How can we make sure, and who is responsible for, fixing AI so that it works for women and girls? Thank you.

Carol Roach: We'll go back to the online speaker after we have a response to the question that was put to us. All right, we'll go to the online speaker.

AUDIENCE: Hi, I would like to answer the last part of the question, about gender bias in AI, because that relates to my work. One of the things to recognize is that we can't really blame AI for being gender biased when we ourselves create the data that feed the AI system with those biases. The biases exist in society, so first of all we need to change the cultural background of the entire society in order to have less gender bias in artificial intelligence. We can also improve the language side when we are talking about language models. That is what I'm trying to do with my language, which is Spanish: to improve those models so that the representation and the outputs people receive, when it comes to gender, are more equal between men's representation of things and women's representation of things. It's hard when you have languages that carry such strong gender marking, and a cultural background that is really, really gendered. So it's not going to be easy to tackle gender bias, but we should try, as many people are trying to do at the moment. What I say to people who want to improve models with respect to gender bias in AI is to start feeding the data with more material about women and other genders, how they express themselves and what they do in everyday things. That would help the different AI models give less gendered responses to the prompts they receive. So, yeah, that's what I wanted to say.
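The data-side remedy this speaker describes, feeding models a more balanced representation of genders, is often implemented as counterfactual data augmentation. The Python sketch below is a deliberately tiny illustration with an assumed word list; real systems must handle morphology, casing, names, and grammatically gendered languages such as Spanish far more carefully.

```python
# Assumed, illustrative swap list; not a complete or language-aware mapping.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def gender_swap(sentence: str) -> str:
    """Swap gendered words in a whitespace-tokenized sentence."""
    return " ".join(SWAPS.get(w, w) for w in sentence.split())

def augment(corpus):
    """Return the corpus plus a gender-swapped copy of every sentence,
    so each gendered pattern appears in both variants equally often."""
    return [s for sent in corpus for s in (sent, gender_swap(sent))]

print(augment(["the doctor said he was busy"]))
# prints: ['the doctor said he was busy', 'the doctor said she was busy']
```

Training on the augmented corpus exposes the model to both variants of each gendered association, which is one simple way to reduce the skew the speaker points to.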

Carol Roach: Thank you very much. Very, very helpful. And you're quite right, it's almost a circular argument with regard to what AI is learning, feeding from what we have vetted and from our biases. So we do need to look at how to address that. Thank you very much. I'm going to ask the persons on the floor to really stick to one minute if we're going to get to the end of the line, and I'm going to ask that nobody else join the queue at this time. Thank you. I'm going to go back to the perspective of global norms, or the other.

AUDIENCE: So, when you're asking youth to be critical, I'm not sure we are giving them a good environment for it. We have the UN organizations and everyone working on education and technology; however, we don't have a universal guideline saying what is plagiarism or not, and how much use is okay or not. Are we giving enough information about that so that youth know? No. Even at my German university, they do not really specify where plagiarism begins or how to use these tools, and yet youth are expected to navigate all the other rules on top of that. I think it's very complicated, and I think we should really consider how we work on this so that we actually inform youth, and not just tell them to be responsible. Thank you very much. That's a very good point,

Carol Roach: and we now have five minutes to wrap up, or four minutes. So let’s make it quick. 30 seconds.

Osei Keja: My name is Osei Keja, from Ghana, and also a rep from the African Youth IGF. This year we had our African Youth IGF last November, and the topic was digital governance and emerging technologies: youth participation, amplifying youth voices. One of the recommendations was to establish advisory and participatory platforms to involve youth in policy making and governance at regional and national levels. My question is: what kind of methodologies or structures should we put in place so that these are all-inclusive? Often the youth are an afterthought in these conversations. And also, a quick one: who are we benchmarking against in terms of all these technologies and policies, and who should we learn from? Those are my questions. Thank you very much. Very good point, and I would encourage you

Carol Roach: to join the working group on youth and the IGF. We formed that group to try to help with some of the things that you said, so your voice can be heard actively. Thanks.

Dana Cramer: Hello, Dana Cramer, for the record, Youth IGF Canada. I'm curious about how we as students can advocate for AI adoption in our educations. For context, I'm a PhD candidate in Toronto, Canada, and my university now has sweeping regulations on AI usage, which really impacts how youth can become first movers with AI programs and then, as first movers, have the experience to take, for example, that seat at the governance table. And the regulations at my university don't cover just ChatGPT, but also review, synthesis, and dissemination programs. So I'm wondering if the panel could speak to strategies for advocating for youth to be able to use AI in our educations, so that we can be partners and stakeholders at governance tables too. Thank you. I just want to flip the switch a bit here.

Carol Roach: We've been asking, or persons have been asking, the older generation how to change. You said you'd like to see a change, but what are your ideas towards change? How can you ensure, as a student to a lecturer, that I can use ChatGPT to produce my paper without you worrying? What are the guardrails you're suggesting? I'm just throwing the question out.

Asfirna Alduri: The perfect introduction for my question, or actually my comment. My name is Asmin Alduri. I am part of the Responsible Technology Hub, a youth-led non-profit that is working on this question specifically. One aspect of what we do is create spaces that are intergenerational, in the sense that we're not only giving young people the mic, we let them actually develop AI. So instead of asking them what they want and serving it to them, we ask them to give us a solution, and then we work through the problems they are actually seeing. That way young people are taken seriously, they feel respected and on the same level, and the discussions we have are deeper, more solution-oriented, and more inclusive of young people, at least for us in Germany. But one aspect I really wanted to highlight, because I feel it's missing, and Minister Villeflach actually brought it up in regard to ethical aspects: we do not talk about click workers. AI has to be developed by labeling data, and that data is being labeled by young people who are severely underprivileged, mostly from the Global South, and not paid well enough. So if we talk about including people, and young people, in this, we need to include those who are exploited in developing it. Maybe that's an open question for later on as well: how can we include these young people? This is the most important part for my work, at least. Thank you.

Carol Roach: Insightful. 30 seconds.

AUDIENCE: Okay, sure. I'm Oliver from the Hong Kong Welfare Foundation, and I'm a youth investor there. Personally, I majored in biology, and I use generative AI for extended learning, for example for accessing undergraduate knowledge and academic essays. What has been shown is that generative AI can give us misunderstandings about STEM topics. So how can a scientific researcher or a STEM student make sure, or judge, that the response is correct? And who should actually be responsible for the false information given by the AI? I'm sorry to my two speakers, I've been given the signal to end. I can't even do a wrap-up.

Carol Roach: So I'm very sorry about that. We cannot take any more speakers. However, I think we started a very good conversation; now the point is to take it past a conversation and into action. I think sometimes for youth it is: I'm going to ask the older generation, this is my problem, how are you going to fix it? Now we're going to flip it around and say: I have a problem with you guys, how are you going to fix it? So keep that in mind, please. And thank you very much for your participation. Go ahead.

AUDIENCE: Just a second. I think one good way is also: I have a solution, what do you think about it? Instead of asking how you will fix my problem, it's: this is a solution, what do you think about it? And this is the idea of the learning hubs: come with a solution and see what the policy makers think about it.

Carol Roach: That's a good way of putting it, yes. Thank you. Thank you, everybody. Give yourselves a good round of applause. Thank you very much. Thank you, online participants. Thank you very much.


Li Junhua

Speech speed

113 words per minute

Speech length

510 words

Speech time

270 seconds

AI can personalize learning experiences and adapt to individual needs

Explanation

AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the learning process by addressing individual strengths and weaknesses.

Major Discussion Point

AI’s Impact on Education

Agreed with

Amal El Fallah Seghrouchni

Phyo Thiri Lwin

Agreed on

AI has potential to personalize and enhance education

AI tools are helping reduce learning disparities in various countries

Explanation

AI is being used to address educational inequalities across different nations. This technology is helping to bridge gaps in access to quality education.

Evidence

Examples include Morocco using AI to reduce learning disparities in rural areas, France using AI to help visually impaired students read, and Brazil using AI-powered natural language processing to improve literacy.

Major Discussion Point

AI’s Impact on Education

Differed with

Ahmad Karim

Differed on

Approach to AI in education between Global North and Global South

There is a digital divide that hampers AI’s potential, with less than a third of the global population connected to the internet

Explanation

The unequal access to internet connectivity globally limits the potential benefits of AI in education. This digital divide creates disparities in who can access and benefit from AI-powered educational tools.

Evidence

The speaker cites that less than a third of the global population is connected to the internet.

Major Discussion Point

Addressing Biases and Inequalities in AI Education


Amal El Fallah Seghrouchni

Speech speed

107 words per minute

Speech length

840 words

Speech time

470 seconds

AI can simulate classroom interactions and provide benefits teachers cannot alone

Explanation

AI technology has the capability to create virtual classroom environments and interactions. This can offer educational experiences that go beyond what a single teacher can provide.

Major Discussion Point

AI’s Impact on Education

Agreed with

Li Junhua

Phyo Thiri Lwin

Agreed on

AI has potential to personalize and enhance education

AI raises concerns about plagiarism and disruption of education systems

Explanation

The introduction of AI in education brings challenges related to academic integrity. There are concerns about how AI might be used to cheat or undermine traditional educational practices.

Major Discussion Point

AI’s Impact on Education

Transparency and fairness are needed in AI systems making educational decisions

Explanation

AI systems involved in educational decision-making processes need to be transparent and fair. This is crucial to ensure that AI-driven decisions in education are ethical and unbiased.

Major Discussion Point

Ethical Considerations and Accountability in AI Education

Agreed with

Ahmad Khan

Umut Pajaro Velasquez

Lily Edinam Botsyoe

Agreed on

Need for ethical considerations and accountability in AI education

Data privacy and cognitive rights need protection when using AI in education

Explanation

The use of AI in education raises concerns about the protection of personal data and cognitive rights. It’s important to establish safeguards to protect students’ privacy and intellectual property.

Major Discussion Point

Ethical Considerations and Accountability in AI Education

Agreed with

Ahmad Khan

Umut Pajaro Velasquez

Lily Edinam Botsyoe

Agreed on

Need for ethical considerations and accountability in AI education


Phyo Thiri Lwin

Speech speed

118 words per minute

Speech length

864 words

Speech time

438 seconds

AI tools can help non-native speakers enhance their language skills

Explanation

AI-powered language tools can assist learners in improving their proficiency in non-native languages. This can be particularly beneficial for students struggling with language barriers in education.

Evidence

The speaker mentions using AI to revise ideas and improve language expression.

Major Discussion Point

AI’s Impact on Education

Agreed with

Li Junhua

Amal El Fallah Seghrouchni

Agreed on

AI has potential to personalize and enhance education


Ahmad Khan

Speech speed

170 words per minute

Speech length

1320 words

Speech time

463 seconds

Companies controlling AI models need to ensure they are trustworthy

Explanation

Organizations developing and managing AI models have a responsibility to ensure their reliability and ethical use. This is crucial for maintaining trust in AI-powered educational tools.

Major Discussion Point

Ethical Considerations and Accountability in AI Education

Agreed with

Amal El Fallah Seghrouchni

Umut Pajaro Velasquez

Lily Edinam Botsyoe

Agreed on

Need for ethical considerations and accountability in AI education

Differed with

Umut Pajaro Velasquez

Differed on

Responsibility for AI accountability in education


Umut Pajaro Velasquez

Speech speed

116 words per minute

Speech length

822 words

Speech time

423 seconds

Multiple stakeholders including developers, educators, policymakers and students share accountability for AI in education

Explanation

The responsibility for ethical and effective use of AI in education is shared among various groups. This includes those who create AI systems, those who implement them in educational settings, those who regulate their use, and those who use them for learning.

Evidence

The speaker mentions specific roles for developers (designing ethical and unbiased systems), educators (implementing AI effectively), policymakers (creating regulations and guidelines), and students (understanding and having a voice in AI use).

Major Discussion Point

Ethical Considerations and Accountability in AI Education

Agreed with

Amal El Fallah Seghrouchni

Ahmad Khan

Lily Edinam Botsyoe

Agreed on

Need for ethical considerations and accountability in AI education

Differed with

Ahmad Khan

Differed on

Responsibility for AI accountability in education


Lily Edinam Botsyoe

Speech speed

200 words per minute

Speech length

527 words

Speech time

157 seconds

Users need to be aware of their own role in protecting privacy when using AI tools

Explanation

Individuals using AI tools have a responsibility to safeguard their personal information. This includes being cautious about what data they input into AI systems and understanding the privacy implications of their actions.

Evidence

The speaker mentions the importance of not uploading sensitive information like social security numbers or passwords when using AI tools.

Major Discussion Point

Ethical Considerations and Accountability in AI Education

Agreed with

Amal El Fallah Seghrouchni

Ahmad Khan

Umut Pajaro Velasquez

Agreed on

Need for ethical considerations and accountability in AI education


Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Gender biases in AI stem from societal biases and need to be addressed culturally

Explanation

AI systems often reflect and perpetuate existing gender biases present in society. Addressing these biases requires not just technical solutions, but also cultural changes to promote gender equality.

Major Discussion Point

Addressing Biases and Inequalities in AI Education

Youth should propose solutions to AI challenges rather than just asking older generations to fix problems

Explanation

Young people should take a proactive approach in addressing AI-related issues. Instead of solely relying on older generations to solve problems, youth should develop and present their own solutions.

Evidence

The speaker suggests that youth should come with solutions and ask policymakers what they think about them, rather than asking how to fix problems.

Major Discussion Point

Youth Participation in AI Governance and Development


Ahmad Karim

Speech speed

171 words per minute

Speech length

273 words

Speech time

95 seconds

The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities

Explanation

There is a disparity in how AI is perceived and approached between developed and developing nations. While developed countries often emphasize the potential benefits of AI, developing countries are more concerned with potential risks and protective measures.

Major Discussion Point

Addressing Biases and Inequalities in AI Education

Differed with

Li Junhua

Differed on

Approach to AI in education between Global North and Global South


Asfirna Alduri

Speech speed

172 words per minute

Speech length

267 words

Speech time

92 seconds

Underprivileged workers labeling AI training data, often from the global south, need to be included in discussions

Explanation

The workers who label data for AI training, often from developing countries, are an important but often overlooked part of AI development. Their perspectives and concerns should be included in discussions about AI ethics and governance.

Major Discussion Point

Addressing Biases and Inequalities in AI Education

Intergenerational spaces where youth can develop AI solutions should be created

Explanation

There is a need for collaborative environments where young people can work on AI development alongside older generations. These spaces can foster innovation and ensure that youth perspectives are integrated into AI solutions.

Evidence

The speaker mentions their work with the Responsible Technology Hub, which creates intergenerational spaces for AI development.

Major Discussion Point

Youth Participation in AI Governance and Development


Osei Keja

Speech speed

154 words per minute

Speech length

161 words

Speech time

62 seconds

Youth need to be involved in policy making and governance of AI at regional and national levels

Explanation

Young people should have a voice in shaping AI policies and governance structures. Their participation is crucial for ensuring that AI development and implementation considers the perspectives and needs of younger generations.

Evidence

The speaker mentions a recommendation from the African Youth IGF to establish advisory and participatory platforms for youth involvement in policy making.

Major Discussion Point

Youth Participation in AI Governance and Development


Dana Cramer

Speech speed

175 words per minute

Speech length

143 words

Speech time

48 seconds

Students should advocate for responsible AI adoption in their education to gain experience

Explanation

Students should actively push for the integration of AI in their educational institutions. This advocacy can help them gain practical experience with AI, preparing them for future roles in AI governance and development.

Evidence

The speaker mentions university regulations on AI usage that impact how students can become first movers in AI programs.

Major Discussion Point

Youth Participation in AI Governance and Development

Agreements

Agreement Points

AI has potential to personalize and enhance education

Li Junhua

Amal El Fallah Seghrouchni

Phyo Thiri Lwin

AI can personalize learning experiences and adapt to individual needs

AI can simulate classroom interactions and provide benefits teachers cannot alone

AI tools can help non-native speakers enhance their language skills

Multiple speakers agreed that AI has the potential to improve education by personalizing learning experiences, simulating classroom interactions, and assisting with language skills.

Need for ethical considerations and accountability in AI education

Amal El Fallah Seghrouchni

Ahmad Khan

Umut Pajaro Velasquez

Lily Edinam Botsyoe

Transparency and fairness are needed in AI systems making educational decisions

Data privacy and cognitive rights need protection when using AI in education

Companies controlling AI models need to ensure they are trustworthy

Multiple stakeholders including developers, educators, policymakers and students share accountability for AI in education

Users need to be aware of their own role in protecting privacy when using AI tools

Several speakers emphasized the importance of ethical considerations, transparency, and shared accountability in the development and use of AI in education.

Similar Viewpoints

Both speakers highlight the disparity in AI access and perception between developed and developing nations, emphasizing the need to address the digital divide and consider the unique challenges faced by the global south.

Li Junhua

Ahmad Karim

There is a digital divide that hampers AI’s potential, with less than a third of the global population connected to the internet

The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities

These speakers advocate for increased youth participation in AI governance, development, and implementation, emphasizing the importance of including young people’s perspectives in shaping AI policies and solutions.

Osei Keja

Dana Cramer

Asfirna Alduri

Youth need to be involved in policy making and governance of AI at regional and national levels

Students should advocate for responsible AI adoption in their education to gain experience

Intergenerational spaces where youth can develop AI solutions should be created

Unexpected Consensus

Addressing biases in AI

Unknown speaker

Ahmad Karim

Asfirna Alduri

Gender biases in AI stem from societal biases and need to be addressed culturally

The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities

Underprivileged workers labeling AI training data, often from the global south, need to be included in discussions

There was an unexpected consensus on the need to address various forms of bias in AI, including gender bias, regional disparities, and the inclusion of underprivileged workers. This consensus highlights a growing awareness of the complex social and cultural dimensions of AI development.

Overall Assessment

Summary

The main areas of agreement included the potential of AI to enhance education, the need for ethical considerations and accountability in AI education, the importance of addressing the digital divide and biases in AI, and the necessity of youth involvement in AI governance and development.

Consensus level

There was a moderate level of consensus among the speakers on these key issues. This consensus suggests a growing recognition of both the opportunities and challenges presented by AI in education, as well as the need for inclusive and ethical approaches to AI development and implementation. The implications of this consensus point towards a need for collaborative, multi-stakeholder efforts to harness the benefits of AI in education while addressing potential risks and inequalities.

Differences

Different Viewpoints

Approach to AI in education between Global North and Global South

Ahmad Karim

Li Junhua

The global south faces threats and protection concerns regarding AI, while the global north focuses on opportunities

AI tools are helping reduce learning disparities in various countries

While Li Junhua emphasizes the positive impact of AI in reducing learning disparities globally, Ahmad Karim points out a disparity in perception between the Global North and South, with the latter more focused on threats and protection concerns.

Responsibility for AI accountability in education

Ahmad Khan

Umut Pajaro Velasquez

Companies controlling AI models need to ensure they are trustworthy

Multiple stakeholders including developers, educators, policymakers and students share accountability for AI in education

Ahmad Khan emphasizes the responsibility of companies controlling AI models, while Umut Pajaro Velasquez argues for a shared accountability among multiple stakeholders.

Unexpected Differences

Approach to youth involvement in AI development

Asfirna Alduri

Unknown speaker

Intergenerational spaces where youth can develop AI solutions should be created

Youth should propose solutions to AI challenges rather than just asking older generations to fix problems

While both speakers advocate for youth involvement, their approaches differ unexpectedly. Asfirna Alduri suggests creating collaborative intergenerational spaces, while the unknown speaker proposes a more independent approach where youth develop solutions on their own.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI in education between Global North and South, responsibility for AI accountability, data privacy protection, and methods of youth involvement in AI development.

Difference level

The level of disagreement among speakers is moderate. While there are differing perspectives on specific issues, there seems to be a general consensus on the importance of AI in education and the need for responsible development and implementation. These differences highlight the complexity of integrating AI into education globally and emphasize the need for collaborative, multi-stakeholder approaches to address challenges and opportunities.

Partial Agreements


Both speakers agree on the importance of data privacy in AI education, but they differ in their approach. Amal El Fallah Seghrouchni emphasizes the need for systemic protection, while Lily Edinam Botsyoe focuses on individual user responsibility.

Amal El Fallah Seghrouchni

Lily Edinam Botsyoe

Data privacy and cognitive rights need protection when using AI in education

Users need to be aware of their own role in protecting privacy when using AI tools


Takeaways

Key Takeaways

AI has significant potential to personalize and enhance education, but also raises ethical concerns around privacy, bias, and accountability

There is a need for global collaboration and inclusive governance to ensure AI benefits education equitably across regions

Youth participation is crucial in shaping AI policies and implementation in education

Addressing biases and the digital divide is essential for AI to truly benefit education globally

Resolutions and Action Items

Establish advisory and participatory platforms to involve youth in AI policy making and governance at regional and national levels

Create intergenerational spaces where youth can develop AI solutions

Improve AI language models to reduce gender biases

Develop ‘learning hubs’ globally for students, policymakers and tech developers to collaborate on AI in education

Unresolved Issues

How to effectively regulate AI use in educational settings without stifling innovation

How to ensure AI enhances rather than replaces critical thinking skills in students

How to address the exploitation of underprivileged workers labeling AI training data

How to validate the accuracy of AI-generated information, especially for STEM topics

Suggested Compromises

Balancing AI assistance in education with preserving human creativity and critical thinking

Finding a middle ground between strict regulations on AI use in education and allowing students to gain experience with AI tools

Developing AI models that serve individual needs while also respecting privacy and data rights

Thought Provoking Comments

My dream is to keep young people as far as possible from computers because I think they spend already a lot of time connected and very close from machines. And I think that AI should be used when it has a real added value.

speaker

Amal El Fallah Seghrouchni

reason

This comment challenges the assumption that more AI and technology in education is always better, introducing an important counterpoint to the discussion.

impact

It shifted the conversation to consider the potential downsides of AI in education and the importance of using it judiciously, rather than just focusing on its benefits.

How can we push it to use AI so that I can tell it what I want, and it serves me and not the company? And this is really what we want to focus on.

speaker

Ahmad Khan

reason

This comment highlights a crucial issue of user agency and control in AI systems, especially in educational contexts.

impact

It sparked further discussion about the ethical implications of AI and the need for user-centric design in AI tools for education.

We need to engage the youth from the beginning. I say this deeply and frankly, not just because I’m the father of two daughters, 18 and 20, but because we will need new solutions. We need strong innovation and brave innovation. We need ideas coming out of the box.

speaker

Henri Verdier

reason

This comment emphasizes the importance of youth involvement in shaping AI policies and practices, recognizing their unique perspectives and potential for innovation.

impact

It led to increased focus on youth participation throughout the rest of the discussion, with several subsequent speakers addressing this point.

AI has to be developed by labeling data, and that data is being labeled by young people who are super underprivileged, mostly from the Global South, and not paid well enough. So if we talk about including people and young people in this aspect, we need to include those who are exploited in developing it.

speaker

Asfirna Alduri

reason

This comment brings attention to an often overlooked aspect of AI development – the labor conditions of those involved in data labeling.

impact

It broadened the scope of the discussion to include ethical considerations in AI development processes, not just in the end product or its use in education.

Overall Assessment

These key comments shaped the discussion by introducing critical perspectives on the ethical implications of AI in education, the importance of user agency, the need for youth involvement in AI policy and development, and the often-overlooked labor issues in AI creation. They deepened the conversation beyond the surface-level benefits of AI in education to consider more complex, systemic issues that must be addressed for responsible AI implementation in educational settings.

Follow-up Questions

How can we push AI to serve individual users rather than companies?

speaker

Jarrel James

explanation

This is important to ensure AI tools enhance individual creativity and perspective rather than diluting them with generic responses.

Do you see Global South members being the next big innovators in AI?

speaker

Jarrel James

explanation

This is crucial for understanding if AI development will continue to be dominated by certain regions or if there will be more diverse representation in the future.

Does an objective truth exist in AI, or is it always manipulated to some extent by its users and the community that utilizes AI technology?

speaker

Audience member

explanation

This question is important for understanding the limitations and potential biases of AI systems in providing information.

How can we guarantee that women and girls have a voice in AI conversations, addressing both safety concerns and opportunities?

speaker

Ahmad Karim

explanation

This is crucial for ensuring gender equity in AI development and implementation.

Who is responsible for fixing AI to work for women and girls?

speaker

Ahmad Karim

explanation

This question is important for addressing gender biases in AI systems and ensuring accountability.

What kind of methodologies or structures should be put in place to ensure youth are included in AI policy-making and governance?

speaker

Osei Keja

explanation

This is important for ensuring meaningful youth participation in shaping AI policies and governance.

Who are we benchmarking in terms of AI technologies and policies?

speaker

Osei Keja

explanation

This question is crucial for understanding best practices and models in AI development and regulation.

How can students advocate for AI adoption in their education, particularly in universities with strict regulations?

speaker

Dana Cramer

explanation

This is important for enabling students to gain practical experience with AI and become stakeholders in its governance.

How can we include click workers, who are often underprivileged young people from the Global South, in discussions about AI development?

speaker

Asfirna Alduri

explanation

This question addresses the ethical concerns of AI development and the need to include those who are potentially exploited in the process.

How can scientific researchers or STEM students judge if the responses given by AI are correct, especially for complex topics?

speaker

Oliver

explanation

This is crucial for ensuring the reliability and accuracy of AI-generated information in scientific and academic contexts.

Who should be responsible for false information given by AI?

speaker

Oliver

explanation

This question is important for establishing accountability in AI-generated content and misinformation.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.