IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation

12 Oct 2023 02:00h - 03:30h UTC

Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Changfeng Chen

The concept of culture lag refers to the delayed adjustment of non-material culture, such as beliefs, values, and norms, to changes in material culture, such as technology. The concept aptly describes the situation with generative AI, where the technology is changing faster than the regulations, norms, and values that govern it. The rapid evolution of generative AI makes it challenging to adapt legal and ethical frameworks to its potential risks and implications.

While some argue for a moratorium on generative AI to allow time for comprehensive regulation and understanding of its implications, this approach is deemed drastic and unlikely to be effective in the long term. The field of generative AI is constantly evolving, and a blanket ban would hinder progress and innovation. Instead, flexible and adaptive regulatory frameworks are needed to keep up with technological advancements and address potential risks holistically.

China has emerged as a leader in the development and regulation of generative AI. Companies like Baidu, ByteDance, and iFLYTEK are at the forefront of generative AI applications, with their technology installed on mobile phones and laptops to assist users in everyday decisions, such as choosing a restaurant. China has released the Interim Administrative Measures for Generative AI Services, which demand legitimate data sourcing, respect for rights, and risk management. This highlights China’s commitment to responsible AI development and regulation.

However, there are concerns about the fairness of the regulatory framework in China. Some argue that the heaviest responsibility is placed on generative AI providers, while other stakeholders such as data owners, computing power suppliers, and model designers also play critical roles. Allocating the majority of responsibility to providers is viewed as unfair and may hinder collaboration and innovation in the field.

Generative artificial intelligence has the potential to significantly contribute to the education of young people and foster a new perspective on rights. By harnessing the power of generative AI, educational institutions can create dynamic and personalized learning experiences for students. Additionally, young people have the right to access and use new technologies for learning and development, and it is the responsibility of adults and professionals to guide them in leveraging these technologies effectively and ethically.

Efforts have already been initiated to promote these rights for young people, such as UNESCO’s Media and Information Literacy Week, which aims to enhance young people’s skills in critically analyzing and engaging with media and information. This reflects the international community’s recognition of the importance of digital literacy and ensuring equitable access to information and technology for young people.

Promoting professionalism in the field of artificial intelligence is crucial. Professionalism entails adhering to a set of standards and behaviors such as reliability, high standards, ethical behavior, respect, responsibility, and teamwork. By promoting professionalism, the field of AI can operate within ethical boundaries and ensure the responsible development and use of AI technologies.

It is also important to have a professional conscience towards new technologies that respects multicultural values. While it is necessary to respect and consider regionalized values and regulations, there should also be a broader perspective in the technical field to promote global collaboration and understanding.

In conclusion, the concept of culture lag accurately describes the challenges faced in regulating generative AI amidst rapid technological advancements. A moratorium on generative AI is seen as drastic and ineffective, and instead, flexible and adaptive regulatory frameworks should be established. China is leading in the development and regulation of generative AI, but concerns about fairness in the regulatory framework exist. Generative AI has the potential to revolutionize education and empower young people, but it requires responsible guidance from adults and professionals. Efforts are underway to promote these rights, such as UNESCO’s Media and Information Literacy Week. Promoting professionalism and a professional conscience towards new technologies is crucial in ensuring ethical and responsible AI development.

Audience

The debate surrounding the responsible usage and regulation of AI, particularly generative AI, is of significant importance in today’s rapidly advancing technological landscape. The summary highlights several key arguments and perspectives on this matter.

One argument put forth emphasises the need to utilise the existing AI tools and guidelines until specific regulations for generative AI are developed. It is acknowledged that constructing an entirely new ethical framework for generative AI would be a time-consuming process. Therefore, it is deemed wise to make use of the current available resources and regulations until more comprehensive guidelines for generative AI are established.

Another argument draws attention to the potential risks associated with the use of generative models. Specifically, it highlights the risks of inaccurate output and of sources fabricated by these models. Of concern is the fact that many individuals, especially young people, are inclined to use generative models because of their efficiency, yet may be unaware of the risks involved. Thus, it is suggested that raising awareness among the public, especially the younger generation, about the potential risks of generative AI is crucial.

Advocacy for the importance of raising awareness regarding the use of generative AI models is another notable observation. It is argued that greater awareness can be achieved through quality education and the establishment of strong institutions. By providing individuals with a deeper understanding of generative AI and its potential risks, it is believed that they will be better equipped to make responsible and informed choices.

Responsible coding and design of AI systems are also stressed. It is essential to approach the development of AI systems with a sense of responsibility, both in coding practices and in design considerations. Responsible practices help ensure that AI systems are developed ethically and do not pose unnecessary risks to individuals or society as a whole.

One perspective questions whether self-regulation alone is sufficient for responsible AI or if an official institution should have a role in examining AI technologies. The argument here revolves around the idea that while self-regulation may be important, there is a need for external oversight to ensure the accountability and responsible usage of AI technologies.

It is worth noting that AI systems are no longer solely the domain of big tech companies. The accessibility of AI development has increased, allowing anyone, including criminals and young individuals, to develop AI models. This accessibility raises concerns regarding the potential misuse or irresponsible development of AI technologies.

The feasibility of regulating everyone as AI development becomes more accessible is called into question. It is argued that regulating every individual may not be a practical solution. With the ease of developing AI models without extensive technical expertise, alternative approaches to regulation may need to be explored.

Regulating the data that can be used for AI, both for commercial and official usage, is seen as a possibility. However, regulating the development of AI models is deemed less feasible. This observation highlights the challenges in finding a balance between ensuring responsible AI usage while still fostering innovation and development in the field.

In conclusion, the expanded summary provides a comprehensive overview of the arguments and perspectives surrounding responsible AI usage and regulation. It underscores the importance of utilising existing AI tools and guidelines, raising awareness about the potential risks of generative models, and promoting responsible coding and design practices. The debate surrounding self-regulation versus external oversight, the increasing accessibility of AI development, and the challenges of regulating AI models is also considered.

Moderator – Yves Poullet

UNESCO has made significant strides in regulating AI ethics. In November 2021, it adopted the Recommendation on the Ethics of AI, demonstrating its commitment to addressing the challenges posed by artificial intelligence. The Recommendation has already been applied to ChatGPT, indicating that UNESCO is actively implementing its ethical guidelines. Gabriela Ramos, who heads UNESCO’s Social and Human Sciences (SHS) sector, is leading the implementation efforts. Unable to attend the event in person, she sent a video expressing her support and dedication to ensuring the ethical use of AI.

Generative AI systems, which include both foundation models and applications, require attention from public authorities because of their distinctive characteristics and potential risks. There is concern about potential biases and inaccuracies in the language produced by generative AI models, which are trained on large amounts of big data and power applications such as language translation and speech recognition. The future of generative AI is seen as potentially revolutionary, but these systems also carry risks, such as the manipulation of individuals and threats to job security. Generative AI systems likewise pose risks to democracy, as they can spread misinformation and disinformation.

Public regulation, or at least some form of regulation, is necessary to address these risks, with discussions on the feasibility of a moratorium and on the different approaches taken by leading countries. The ethical values set out by UNESCO are widely accepted worldwide, but the challenge lies in their enforcement. Standardization and quality assessment are proposed as effective mechanisms to reinforce ethical values. The idea of AI localism, where local communities propose AI regulations aligned with their cultural values, is appreciated. Concerns are raised about language discrimination and the poor performance of AI systems in languages other than the dominant ones; efforts to address these issues, such as Finland’s creation of big data resources in the Finnish language, are encouraged. In conclusion, UNESCO’s efforts in regulating AI ethics and the need for public regulation and enforcement mechanisms are highlighted, along with the challenges and potential harms associated with generative AI systems.

Dawit Bekele

Generative AIs are advanced artificial intelligence systems that can generate human-like content. These models are built on large-scale neural networks such as GPT (Generative Pre-trained Transformer). By learning from extensive amounts of data, generative AIs can produce outputs that closely resemble human-created content. However, they may also perpetuate or amplify existing biases if the training data contains biases or unrepresentative samples.
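To make the mechanics concrete, here is a minimal sketch of text generation with a small open-source model, using the Hugging Face transformers library. It is illustrative only: the model choice and parameters are assumptions for the example, and production systems such as ChatGPT rely on far larger proprietary models.

```python
# Minimal sketch: generating text with a small open-source model.
# Illustrative assumptions: the gpt2 checkpoint and the sampling
# parameters are arbitrary choices for the example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The ethics of generative AI",
    max_new_tokens=40,       # length of the generated continuation
    num_return_sequences=1,  # how many alternative completions to return
)
print(result[0]["generated_text"])
```

The model simply predicts, token by token, a plausible continuation of the prompt; everything it produces is learned from patterns in its training data, which is why biased or unrepresentative data surfaces directly in the output.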

Despite these concerns, generative AI technology presents significant opportunities for innovation. Researchers and public authorities are actively working to address the ethical issues inherent in generative AI, with discussions taking place at UNESCO. Regulatory frameworks are needed to ensure transparency and accountability in the development and deployment of these models.

Generative AI systems also have the potential to impact the education system negatively. They can provide answers to learners immediately and without effort, potentially displacing the personal work on which learning and assessment depend. Beyond education, this raises concerns about the displacement of human workers and the disruption of traditional job markets.

It is crucial to have local responses tailored to the specific needs and values of each society when implementing generative AI. Societies should have the autonomy to decide how they use the technology based on their specific contextual considerations. However, certain countries may face challenges in handling generative AI due to a lack of resources and knowledge. Organizations like UNESCO should empower and educate societies about AI, providing necessary resources and knowledge to ensure responsible use. Big tech companies also have a responsibility to financially support less-resourced countries in adopting and managing generative AI technology.

In conclusion, generative AI offers significant opportunities for innovation, but also raises ethical concerns. Regulatory frameworks, local responses, and support from organizations like UNESCO and big tech companies are necessary for responsible and equitable implementation of generative AI technology.

Gabriela Ramos

The analysis reveals potential negative implications of AI that necessitate effective governance systems and regulation. Concerns arise from gender and racial biases found in generative AI models such as GPT-3. This emphasizes the urgent need for ethical guidelines and frameworks to govern AI development and deployment.
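Such biases can be surfaced with very simple probes. The sketch below is a hypothetical illustration, not the methodology behind the findings cited here: it asks a small open-source masked language model which words it associates with different occupations.

```python
# Minimal sketch: probing occupational associations in a small masked
# language model. Illustrative only; not the method used in the UNESCO
# analysis or the GPT-3 studies discussed above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["doctor", "nurse", "engineer", "teacher"]:
    # [MASK] marks the word the model is asked to predict (often a pronoun).
    predictions = fill(f"The {occupation} said that [MASK] would be late.")
    top_tokens = [p["token_str"] for p in predictions[:3]]
    print(f"{occupation}: {top_tokens}")
```

Skewed pronoun predictions across occupations are one cheap signal of the gender stereotypes described above.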

UNESCO has conducted an ethical analysis of generative AI models. This analysis underscores the importance of implementing proper governance and regulation measures. The impact of AI on industries and infrastructure aligns with Sustainable Development Goal 9. However, without appropriate guidelines, the risks and consequences associated with AI deployment can be detrimental.

To mitigate these risks, UNESCO recommends the use of ethical impact assessments. These assessments help foresee the potential consequences of AI systems and ensure adherence to ethical standards. Given the rapid advancement of AI technology, ethical reflection is crucial in addressing questions and concerns related to AI risks.

In addition to ethical considerations, the concentration of AI power among a few companies and countries is a cause for concern. The impressive capabilities of generative AI raise worries about negative social and political implications. Furthermore, legal actions have been taken regarding potential copyright breaches by OpenAI. It is important to make AI power more inclusive to reduce inequalities, as emphasized by Sustainable Development Goal 10.

Moreover, countries need to be well-prepared to handle legal and regulatory issues pertaining to AI. UNESCO is actively collaborating with 50 governments globally to establish readiness and ethical impact assessment methodologies. Additionally, UNESCO, in partnership with the renowned Alan Turing Institute, is launching an AI ethics observatory. These initiatives aim to support countries in developing robust frameworks for managing AI technologies.

In conclusion, the analysis emphasizes the need for effective governance systems and regulation to address potential negative implications of AI, such as biases and concentration of power. Implementation of UNESCO’s recommendations on ethical impact assessments and ensuring a more inclusive distribution of AI power are crucial in mitigating risks. Collaboration with governments and launching the AI ethics observatory demonstrate UNESCO’s commitment to harmonizing AI technologies with ethical considerations on a global scale.

Marielza Oliveira

UNESCO’s Information for All Programme (IFAP) has a crucial role in advocating for ethical, legal, and human rights issues in the realm of digital technologies, particularly artificial intelligence (AI). The programme recognizes that advancements in AI, specifically generative AI, have significant implications for societies worldwide. As a result, IFAP emphasizes the importance of examining the impacts of AI through the lens of ethics and human rights to ensure responsible and equitable use of AI.

IFAP is committed to ensuring access to information for all individuals. They endorse a new strategic plan that highlights the importance of digital technologies, including AI, for our fundamental right to access information. IFAP aims to bridge the digital divide and ensure that everyone can benefit from the opportunities presented by these technologies.

Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier technologies. They recognize the potential of inclusive, equitable, and knowledgeable societies driven by technology. To achieve this, IFAP supports and encourages research into the implications of these frontier technologies. They assist institutions in making AI technologies accessible and beneficial to everyone, while also raising awareness about the risks associated with their use. By examining and understanding these risks, IFAP aims to develop effective mechanisms and strategies to address them.

Another important aspect of IFAP’s work is promoting the implementation of the Recommendation on the Ethics of AI. IFAP actively engages in discussions and collaborations with stakeholders to design and govern AI based on evidence-based frameworks, recognizing that a multi-stakeholder approach is essential to create responsible policies and guidelines.

In addition, IFAP actively participates in global dialogues and forums to address digital divides and inequalities. They function as a platform for sharing experiences and best practices in overcoming these challenges. Through these dialogues and forums, IFAP aims to foster collaboration and partnerships to build sustainability and equality across all knowledge societies.

In conclusion, UNESCO’s Information for All Programme (IFAP) is at the forefront of promoting ethical, legal, and human rights issues in the context of digital technologies, especially AI. The programme emphasizes the need to examine the impacts of AI through ethical and human rights lenses, while also ensuring access to information for all individuals. IFAP supports research into the inclusive and beneficial use of frontier technologies, along with raising awareness about the associated risks. It actively participates in global dialogues and forums to address digital divides and inequalities. Through these collective efforts, IFAP strives to shape a digital future that upholds shared values, sustainability, and equality across knowledge societies.

Fabio Senne

The summary is based on a discussion among speakers regarding the ethical, legal, and social implications of generative AI. They agree that a global forum is necessary to address these issues. Additionally, promoting digital literacy and critical thinking skills among young people is seen as crucial for responsible use of generative AI.

One speaker, Omar Farouk from Bangladesh, emphasizes the need for convening a global forum to discuss the ethical, legal, and social implications of generative AI. This indicates an awareness of the potential risks and challenges associated with this technology.

UNICEF also voices concerns about digital literacy and critical thinking skills. They argue that young people need to be educated about generative AI to be informed users. This highlights the importance of ensuring that individuals understand the potential implications and risks of generative AI, especially as it becomes more prevalent in society.

Another area of concern raised by UNICEF is the impact of generative AI on child protection and empowerment. They express worries about the unknown effects of AI on children and the need to protect and empower them in an AI-driven world.

The importance of more investigations and data in the field of AI is suggested by a speaker working in Brazil with CETIC.br, a UNESCO Category 2 centre. This indicates a recognized need for further research and understanding of AI, as it continues to rapidly develop.

Global digital inequality is identified as a major issue in the discussion. Inequalities in accessing the internet and digital technologies can affect the quality of training data, and languages may not be properly represented in AI models. In addition, there are inequalities within countries that impact the diversity of data used. These concerns highlight the need to address digital inequalities to ensure more inclusive and human-centred AI.
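As a hypothetical illustration of how such representation gaps can be measured, the sketch below estimates the language composition of a small text corpus with the open-source langdetect library; the sample documents are invented for the example.

```python
# Minimal sketch: estimating language representation in a corpus, a rough
# proxy for the training-data gaps discussed above. The corpus and the
# choice of the langdetect library are illustrative assumptions.
from collections import Counter
from langdetect import detect

corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "O rato roeu a roupa do rei de Roma.",
    "La vie est belle quand on apprend quelque chose de nouveau.",
]

counts = Counter(detect(text) for text in corpus)
total = sum(counts.values())
for language, n in counts.most_common():
    print(f"{language}: {n / total:.0%}")
```

A heavily skewed distribution in the training corpus is a warning sign that the resulting model will serve speakers of under-represented languages poorly.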

The need for improved AI literacy and education is emphasised. Data from Brazil reveals an underdevelopment of informational skills among children, with many unsure of their ability to assess online information. Therefore, raising awareness and literacy about AI in educational systems is crucial.

There is a call to monitor and evaluate AI, recognising the importance of assessing its impact and making informed decisions. Mention is made of international frameworks from OECD and UNESCO, highlighting the need for global cooperation and collaboration in understanding and regulating AI.

In conclusion, the discussions highlight the need to address the ethical, legal, and social implications of generative AI through a global forum. Promoting digital literacy and critical thinking skills, protecting children, conducting further investigations, addressing digital inequalities, improving AI literacy and education, and monitoring AI are all seen as crucial steps in fostering responsible and inclusive AI development.

Stefan Verhulst

The discussion surrounding Artificial Intelligence (AI) has shifted towards responsible technology development rather than advocating for an outright ban or extensive government intervention. OpenAI, an AI research organisation, argues for closed development to prevent potential misuse and abuse of AI technology. On the other hand, Meta, formerly known as Facebook, supports an open approach to developing generative AI.

Maintaining openness in AI research is considered crucial for advancing the field, despite concerns about potential abuse. AI research has historically been open, leading to significant advancements. Closing off research could create power asymmetries and solidify the current power positions in the AI industry.

Another important aspect of the AI discourse is adopting a rights-based approach towards AI. This includes prioritising principles such as safety, effectiveness, notice and explainability, and considering human alternatives. The Office of Science and Technology Policy (OSTP) has taken a multi-stakeholder approach to developing a Blueprint for an AI Bill of Rights that emphasises these aspects.

In the United States, while there is a self-regulatory and co-regulatory approach to AI governance at the federal level, states and cities have taken a proactive stance. Currently, around 200 bills are being discussed at the state level, and several cities have enacted legislation regarding AI.

Engaging with young people is crucial in addressing AI-related issues. Young people often provide informed solutions and in many countries, they represent the majority of the population. Their deep understanding of AI highlights the need to listen to their preferences and incorporate their solutions. It is believed that engaging with young people can lead to more legitimate and acceptable use of AI. Additionally, innovative methods of engagement aligned with their preferred platforms need to be developed.

The importance of data quality cannot be overlooked when discussing AI, particularly in the context of generative AI. The principle of “garbage in, garbage out” becomes crucial, as the quality of the output is only as good as the quality of the input data. Attention should be focused not only on the AI models themselves but also on creating high-quality data to feed into these models.
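As a hypothetical illustration of what attention to input data can look like in practice, the sketch below applies a few common quality heuristics (minimum length, deduplication, noise filtering) to candidate training documents; the thresholds are invented for the example.

```python
# Minimal sketch: simple "garbage in, garbage out" filters for candidate
# training documents. All thresholds are illustrative assumptions.
def keep_document(text: str, seen_hashes: set) -> bool:
    """Return True if the document passes basic quality heuristics."""
    # Drop very short documents.
    if len(text.split()) < 20:
        return False
    # Drop exact duplicates (a common first deduplication pass).
    digest = hash(text)
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    # Drop documents that are mostly non-alphabetic noise.
    alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
    return alpha_ratio > 0.6

good = ("Open data and quality statistics matter because the quality of a "
        "generative model's output depends directly on the quality of the "
        "text it was trained on, as this panel discussion emphasises.")
seen: set = set()
corpus = [good, good, "!!! $$$ ###", "too short"]
clean = [doc for doc in corpus if keep_document(doc, seen)]
print(len(clean))  # 1: the duplicate, the noise, and the short text are dropped
```

Real training pipelines use far more elaborate filters, but the principle is the same: curating what goes in matters as much as tuning the model itself.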

Furthermore, open data, open science, and quality statistics have become more important than ever for qualitative generative AI. Prioritising these aspects contributes to the overall improvement and reliability of AI systems.

Overall, the discussion on AI emphasises responsible technology development rather than outright bans or government intervention. Maintaining openness in AI research is seen as crucial for the advancement of the field, although caution must be exercised to address potential risks and abuses. A rights-based approach, proactive governance at the local level, meaningful engagement with young people, and attention to data quality are all key considerations in the development and deployment of AI technology.

Siva Prasad Rambhatia

The analysis explores different perspectives on the impact of Artificial Intelligence (AI) on society. One viewpoint highlights that AI has contributed to the creation and exacerbation of inequalities in society. Specifically, it has had a significant impact on marginalized communities, especially those in the global South. The introduction of AI technologies and applications has reinforced existing social, cultural, and economic barriers, widening the gap between privileged and disadvantaged groups. This sentiment is driven by the assertion that AI, particularly in its current form, creates new types of inequalities and further amplifies existing ones.

Another viewpoint revolves around the negative consequences of generative AI models. These models have the potential to replace various job roles traditionally performed by humans. This phenomenon has raised concerns regarding the social and economic implications of widespread job displacement. In addition, the advent of generative models has been associated with a growing disconnect within societies. As AI takes over certain tasks, the interaction and collaboration between humans may decrease, leading to potential societal fragmentation.

Conversely, there is a positive stance arguing for AI to adopt local or regionally specific approaches and to preserve local knowledge and traditional epistemologies. This perspective highlights the potential benefits of embracing context-specific AI applications that address unique regional challenges. Advocates argue that these approaches can contribute to building more inclusive and equitable knowledge societies. By utilizing local knowledge and traditions, AI can help identify appropriate solutions to complex human problems.

Inclusivity and multiculturalism are also emphasized as essential aspects of AI design. Advocates argue that AI systems must be designed with consideration for marginalized and indigenous communities. By incorporating inclusive practices in AI development, it is possible to mitigate the potential negative impacts and ensure that the benefits of AI are accessible to all.

Additionally, the analysis underscores the importance of documenting and utilizing local knowledge systems in model building. By incorporating local knowledge, AI models can be more effective in addressing local and regional issues. The accumulation of local knowledge can contribute to the development of robust and contextually sensitive AI solutions.

Overall, the analysis highlights the complex and multi-faceted impact of AI on society. While there are concerns about the creation of inequalities and job displacement, there are also opportunities for AI to be inclusive, region-specific, and leverage local knowledge. By considering these various perspectives and incorporating diverse viewpoints, it is possible to shape the development and implementation of AI technologies in a way that benefits all members of society.

Session transcript

Moderator – Yves Poullet:
Thanks also to the remote audience; it is quite clear that you have the floor too during the question and answer, and we hope that you will intervene. So, perhaps just to start: as you know, UNESCO has taken a certain number of initiatives, and we must underline the importance of these initiatives as regards AI ethics regulation. You know that UNESCO adopted in November 2021 a Recommendation on AI ethics, and more recently perhaps you have seen that it published a report about the application of the Recommendation to ChatGPT, and that’s why it is an honour for us to host Gabriela Ramos. Gabriela Ramos is definitely very well known; she heads the UNESCO Social and Human Sciences (SHS) sector, which is in charge of implementing the AI ethics Recommendation. Gabriela Ramos was unable to join us because of the time difference between Paris and Kyoto, but she sent a video yesterday in order to be present with us. So perhaps you might launch the video.

Gabriela Ramos:
The compass for our work on AI is the Recommendation on the Ethics of AI that was adopted by 193 countries back in 2021. Artificial intelligence technologies need to be well aligned with human rights, human dignity, fairness, inclusiveness. And these values that we put together for the technologies then translate into principles: principles of accountability, transparency, the rule of law, proportionality. But we do not stop there, because all this framework is then translated into very concrete policy recommendations. We have 11 policy chapters that go into gender issues, data issues, environmental issues, and many more. And those policy areas instruct member states, for example (I’m going to give you an example), to develop data governance strategies that ensure the continual evaluation of the quality of training data, promote open data and data trusts, call members to invest in the creation of gold-standard data sets, and ensure that when there is harm, compensation is given. And the recent release of foundational AI models has been meteoric, with ChatGPT gaining 100 million users within the first month of operation. And we have seen a huge amount of excitement around the capabilities of this generative AI. It’s impressive what these models can do and what they can offer in terms of service to the world. But they have also foregrounded major concerns about potential negative ethical, social, political, and legal implications, and highlighted the urgent need for robust and effective governance systems and regulation. We have conducted our own analysis of generative AI models through the lens of the Recommendation and found a range of ethical concerns related to fairness and non-discrimination, reliability, misinformation, privacy, data protection, the labor market, and many more, all arriving at an accelerated pace on top of issues that we had already identified before. The systems replicate, but also massively scale up, many of the same ethical and governance challenges of previous generations of AI systems. For example, we have known about the potential for gender and racial biases in AI systems for many years now, and we see the same kinds of stereotypes being massively reproduced in the latest systems. For example, narratives generated by GPT-3 were shown to reinforce gender stereotypes, depicting female characters as less powerful and defining them by their physical appearance and family roles. And just last week a researcher at Oxford and Johns Hopkins found that it was impossible for Midjourney, a commonly used AI image generation tool, to produce a picture of Black doctors treating white children. Whatever variation of the prompt was used, the system would only produce a picture of a white doctor treating Black children. But there are also new and pressing challenges, for example around issues of authorship and intellectual property rights, as the platforms do not quote their sources and lack transparency on how they work. Legal actions are currently underway to determine, for example, whether OpenAI breached copyright by training its model on novels without the permission of the authors, and, on the other hand, to decide whether an output of a generative AI model can itself be copyrighted.
This is another area where the incredible concentration of economic, and now cultural, power in the hands of a small group of companies and, of course, a small group of countries needs to be addressed in a determined manner, to make it more inclusive and more representative of the very diverse world in which we live. And then the way in which these current experimental AI tools have been unleashed on the public provides a primary example of why it is imperative for member states to implement the UNESCO Recommendation, to ensure that actors identify, clarify, and mitigate the risks of harm from such models before rushing to deploy them in the markets. And to address this challenge, UNESCO has developed an ethical impact assessment. This assessment facilitates the prediction of consequences and the mitigation of risks of AI systems via multi-stakeholder engagement before a system is released to the public, allowing those developing and procuring AI systems to avoid harmful outcomes, or at least to think about them: to have a tool by which we can understand what the systems can do, what needs to be enhanced, and what needs to be corrected. And ethical reflection by itself is a vital tool to comprehensively address the questions that everybody has in their minds right now about the risks of AI systems and how we can identify them. And we are currently piloting the ethical impact assessment as well as another tool that we were asked to produce when the Recommendation was adopted by our member states: the readiness assessment methodology. This is to see how well prepared countries are to deal with the legal, regulatory, and governance issues related to AI. And we’re now working with 50 governments around the world to deploy this tool. The results of this assessment will be made public on the UNESCO AI ethics observatory that we are launching with the Alan Turing Institute, but also with the ITU. And this is going to be an online platform to share information and good practices from implementation efforts across the globe, while creating an intergovernmental space for people working in this domain to collaborate and actually to raise awareness, to understand better, to look at what works and what doesn’t, and then to translate that into actions on the ground to equip ourselves, the governments, the people, civil society, to deal with these technologies better. And in this sense, I’m also glad to share with you that we have started a path-breaking project with the Dutch Digital Infrastructure Authority, supported by the European Commission’s DG Reform, to enhance the competencies and capacities of the Dutch and European competent authorities to supervise AI, above all considering that the European Commission is going to be implementing soon their AI Act, and they need institutions that are well equipped to deal with the issues. And here again, large language systems and, more broadly, generative models are high on everyone’s agenda, and the detailed data and analysis from this project will form the empirical basis for our development of a model governance framework, bringing together the different elements of an ethical AI ecosystem to help guide governments in developing robust governance systems aligned with the Recommendation.
We will present this framework at the Global Forum on the Ethics of Artificial Intelligence that is going to take place in Slovenia in the spring of 2024, and I’m looking forward to seeing you all there, to continue learning together and building together the capacities to deal with these technologies. Thank you very much.

Moderator – Yves Poullet:
Thanks, Gabriela, for this marvellous introduction. I think this introduction will help us to fix exactly the scope of our discussion, and as you have seen, there are a lot of challenges raised by generative AI systems. Just a first question: perhaps it would be quite interesting to see, among the persons present, who has already used a generative AI system, like ChatGPT, like Bard, like Ernie, like KoGPT, that’s a Korean generative system. Who has already used a generative system? Nearly everyone, I see; I told you, everybody has already used one. You remember that in November 2022, Sam Altman, OpenAI’s CEO, put certain ChatGPT services on the market for the general public. Perhaps it is quite interesting to remember that three years before, Sam Altman said that such technology must be reserved for professional users only, because it was too dangerous for the large public. He has changed his mind; it’s his business, that’s normal, but perhaps it’s quite interesting to recall it. This initiative was a full success: one million users less than five days after the launch. Since that moment, we have witnessed a multiplication of applications supported by what we call foundation models, like Google Bard, ChatGPT, Baidu’s Ernie, the Korean KoGPT, the Meta Open Pre-trained Transformer, and others. What is quite interesting is that all these foundation models are general-purpose models, not tied to a specific purpose, but it is quite clear that on top of these foundation models there are a lot of applications developed by the same companies or by other companies. And now we are using these applications. For instance, my students are used to ChatGPT for preparing their theses, definitively, and it is quite clear that if you feel alone, you may find companion chatbots like Replika, Chai, and others, which understand you like your best friends. If as a company you need to develop a marketing strategy, it is very easy to use Jasper as an application for finding the right slogan and definitively the right logo. If you are a job seeker and you want to write a successful letter of motivation, please use a generative AI application. So generative AI systems are more and more used. I would like to give the floor to Dawit in order to answer a certain number of questions, and my questions would be the following. First, generative AI systems, I mean both foundation models and generative AI applications, are definitively AI systems. Could you please, in a few minutes, explain the peculiarities of these systems among the other AI systems, and, linked to these peculiarities, why these generative AI systems need a specific attention distinct from that afforded to the other AI systems, including from our public authorities? I have another question. The applications of large language models are diverse: they include text completion, text-to-speech conversion, language translation, chatbots, virtual assistants, speech recognition. They are working with big data. Which ones? Is there a problem with the languages used within this big data? And the last one: how do you see the future of this generative AI? Is that a revolution? Mr. Bekele, you have the floor.

Dawit Bekele:
Thank you very much. So generative AIs are advanced artificial intelligence systems designed to generate human-like content, including text, images, and even multimedia. You have probably heard of, and I’m sure used, applications such as ChatGPT, which answers your questions almost as if there were a human being at the other end of the line. There are also applications that change photos into artwork, or translate people’s speech into another language in real time. For example, you have probably heard the recent news of the Secretary-General of the UN speaking in a language that he doesn’t speak. So there are so many applications that generative AIs have already shown us. These models are built on large-scale neural networks, such as GPT, and are trained on vast data sets to learn the patterns and structures of human language and other forms of data. The key peculiarity of these systems lies in their ability to generate coherent and contextually relevant content on their own, based on the input they receive. Unlike search engines, for example, which we have been using for quite some time now and which provide useful responses, but often not in the form that you would expect from a human being, generative AI responses are very much like what you would expect from another human interlocutor. This has, of course, numerous benefits, since the output of generative AI applications can be used almost directly by humans, unlike what you would get from search engines, which require human interpretation, filtering, formatting, and often rewriting. But it also brings, as the previous speakers have already said, many challenges that public authorities will have to deal with. One significant aspect that requires specific attention is the potential for biases and ethical concerns in the generated content. These models learn, as has been said, from diverse and sometimes biased data sets, reflecting societal prejudices present in the training data. Consequently, the output of these models may inadvertently perpetuate or amplify existing biases, such as racial biases, raising concerns about fairness and the reinforcement of harmful stereotypes. Already, the use of AI systems in law enforcement has raised so much concern that some authorities have banned the use of AI, at least for the time being. Another important consideration is the misuse of generative AI for malicious purposes, such as the creation of deepfake content that is indistinguishable from real content. In particular, the technology’s ability to mimic human-like communication poses risks to the integrity of information and has implications for issues like misinformation, fake news, and online manipulation. An aspect that I believe should also be a concern is that it renders many societal tools obsolete. For example, as a former teacher myself, I’m concerned by how generative AI affects education. Learning, at least as we understand it today, requires personal work from the learner that then needs to be evaluated by an instructor. Generative AI can now provide an answer to the learner immediately and without effort. And the answer is so indistinguishable from what a human being would give that it is almost impossible for the instructor to know whether the student has produced the answer or it was generated by AI. This will have a major and negative impact on the quality of education and create major frictions within schools and universities.
Generative AI can also render many jobs obsolete, probably more than any technology in the past. There is almost no industry that does not have at least a few of its jobs replaceable by generative AI. Generative AI can do the work of computer programmers, content creators, legal assistants, teachers, artists, financial advisors, and so forth. This can create major havoc in societies, like we are currently seeing in the movie industry in the US, where writers are on strike, in large part for fear of losing their jobs to AI. So public authorities need to pay attention to these systems for several reasons. First, there is a need for regulatory frameworks to address ethical concerns, as has been said by the keynote speaker, and to mitigate potential misuse of generative AI. Second, public authorities play a crucial role in ensuring transparency and accountability in the development and deployment of these models, and I’m very happy that there are already discussions at UNESCO around this. Third, there is a growing need for public policy that addresses the impact of generative AI on various sectors, including jobs, privacy, intellectual property, and cybersecurity. In general, the peculiarities of generative AI and its massive impact on our societies demand specific attention from public authorities, to establish ethical guidelines, ensure transparency, and address the broader societal implications of these powerful technologies. I don’t think we can stop AI’s progress, but I also do not believe that we should let it develop without setting any boundaries. To your other questions on language models: there are many language models like GPT-3, and they are indeed applied across various tasks in different applications, such as text completion, text-to-speech conversion, language translation, et cetera. These language models, especially large ones like GPT-3, are trained on vast data sets of human language, using text coming from a broad range of sources: the internet, books, articles, and more. Of course, these sources are not representative of the whole world, and they have biases and so on. So there are some concerns. One significant issue, as indicated earlier, is biases present in the training data. If the training data contains biases or unrepresentative samples, the model can inadvertently produce biased outputs, reinforcing existing societal prejudices and raising many ethical questions. There are also concerns about potential misuse of these models for generating deceptive or harmful content. We have already seen how social media can create chaos in our societies by spreading misinformation. I come from a country that has been badly affected by this misinformation, and I’m very afraid of what can happen with AI. Many people have difficulty distinguishing between the truth and the fake, since they trust what they see in writing. Generative AI is taking this problem to a new high with deepfakes, where it is possible to make anyone say anything, blurring even further the line between the true and the false. This will have an impact on our society that might be catastrophic if not mitigated well in advance. So for the future of generative AI: despite its many dangers, I believe that there are immense opportunities ahead. I believe that we can expect the development of even more powerful and sophisticated generative models.
Moreover, future generative AI models may be fine-tuned to specific industries or domains, allowing for more specialized applications, such as in healthcare, finance, law, and more. I also believe that researchers and public authorities will attempt to address concerns such as the ethical issues, and I’m happy to hear that UNESCO has taken this issue very seriously. We have already seen almost unprecedented attention from authorities such as the US Congress, the European Commission, and UNESCO to understand and establish a framework for the development of generative AI. UNESCO, for example, has done a great deal of work and developed a Recommendation on the Ethics of Artificial Intelligence that has been adopted by all its 193 members, as the keynote speaker indicated. My personal hope is that we learn from the cost of our inaction on social media, and that researchers as well as public authorities will act as fast as the development of AI, so that the risks are mitigated and the opportunities outweigh the risks. Thank you.

Moderator – Yves Poullet:
Thanks a lot, Dawit. It was very clear. Your presentation was very nice and develops what we have in mind: generative AI systems are multiplying the risks already linked with AI systems. You have set out a certain number of these risks, and you have appealed for public regulation, or at least for some form of regulation. It is quite clear that generative AI applications are bringing a lot of benefits for all of us, citizens and perhaps societies. But at the same time, as you have said, as you have underlined, their development is a source of harms: individual harms, definitively financial harms, and definitively also physical harms. I would like just to mention a recent, sad Belgian case. In my country, a civil engineer, perhaps a bit depressive, decided, after long discussions with a companion chatbot, to commit suicide. I think it is a risk of manipulation we might fear from generative AI systems, and perhaps we have to create a new right, the right to mental integrity. There are other risks, definitely: there are risks to privacy and as regards intellectual property. And if we think about human rights, as you have noticed, it is quite clear that we must also speak about the problem of the right to a job. And the right to a job is definitely compromised when you see the problem of translators, when you see the problem of certain artists. Definitely, it’s not only a question of individual harms; it’s also a question of collective harms. And the second part of our discussion, after the Q&A time, will develop the problem of discrimination: discrimination between countries, between regions, between certain communities. We will come back to that issue. But you have also mentioned, and that’s very important, the problem for our democracy, especially the multiplication of misinformation and disinformation, with the possibility for everyone to create deepfakes. How to face all these risks? And I come now to the following speakers. How to face these risks? It is quite clear that you have already mentioned a certain number of initiatives from UNESCO, but it is quite clear that we have also to turn our attention to what happens in the two leading countries in AI, I mean China and the US. And to speak about that, I will ask Changfeng Chen, from Tsinghua University in China, and Stefan Verhulst, who is a professor at New York University, the director of the Governance Laboratory, and editor-in-chief of Data and Policy, to comment. And on this point, I have a certain number of questions. Perhaps you remember that there was a very important open letter, signed by more than 35,000 people, including very important CEOs of high-tech companies like Elon Musk, asking for a moratorium. They have asked to stop the development of generative AI systems for six months. Is that a good solution? Do you think this moratorium is feasible? Another problem is definitely the question of knowing to what extent we need a regulatory, a public regulatory, answer. And on that point, Changfeng, it is very interesting to know a bit more about China’s initiative – China was the first to elaborate administrative measures, what they call administrative measures – and I would like to know a bit more about what these administrative measures mean as regards generative artificial intelligence services.
They have done that, and definitely the EU has also decided to have legislation, not administrative measures but comprehensive legislation about AI, and more precisely, with the recent European Parliament amendments, about generative AI systems. So I would like to see what China’s position is. And as regards the US, they have adopted another approach: the White House Office of Science and Technology Policy published, in October 2022, a Blueprint for an AI Bill of Rights. This blueprint is definitely very interesting, but it is more a sort of co-regulation, a discussion and negotiation between public authorities and the big tech sector. And in that blueprint, there are a certain number of recommendations about how to build up AI systems and which ethical values we have to follow. So, Changfeng first, and perhaps after that, Stefan, take the floor on those issues. Changfeng, you have the floor.

Changfeng Chen:
Thanks. Thanks to Professor Yves Poullet for his efforts. Nice to see you all, friends. It’s my honour to attend this session. Before discussing the question, I would like to mention a concept: culture lag. Culture lag is a term coined by the sociologist William Ogburn in the 1920s to describe the delayed adjustment of non-material culture to changes in material culture. It refers to the phenomenon where changes in material culture, such as technology tools, occur more rapidly than changes in non-material culture, such as beliefs, values, and norms, including regulation. I think culture lag describes the situation now that generative AI has appeared. We are excited, and meanwhile we are panicked. The capabilities of these new technologies break through the scope of traditional legal regulation. So first, I have just said that we need regulation for generative AI. It is a powerful technology with the potential to be used for good or for harm. But generative artificial intelligence is still developing, and even the scientists and engineers who created it cannot fully explain and predict its future. Therefore, we need to regulate it prudently rather than nip it in the cradle through regulation. That is the reason why, after I introduce some policies and regulations, I cannot yet pass judgment on them. So I will just say this first: at the beginning of a new thing, we need to be more inclusive and have the wisdom to calmly deal with the mistakes it causes; that only shows the self-confidence of human civilization. So, to the question: a moratorium on generative AI would be a temporary ban on the development and use of this technology. This would be a drastic measure, and it is unlikely to be effective in the long term. Generative AI is a powerful technology with the potential to be used for good, and it would be unwise to stifle its development entirely. And then I think a global regulatory model for generative AI would be ideal, but it will take time to develop and implement. So, just talking about China: artificial intelligence, including generative AI, is developing very rapidly in China and has been widely used. Generative AI applications from Baidu, from ByteDance, from iFLYTEK, and other companies are installed on my mobile phone and laptop, while I use GPT, Bard, and Bing at the same time. When I choose something in my life, like when I choose a restaurant in Beijing or in Shanghai for a party with my friends, these applications always help me. In the field of education, the artificial intelligence applications developed by iFLYTEK are already helping teachers update their curricula, correct students’ homework, and provide personalized teaching guidance. So China has been
On the day the measure was published in the afternoon of July 13, the share price of the CGBT concept stock in the Hong Kong stock market rose. Perhaps, yeah, some legal experts believe that the current regulatory framework in China cannot effectively address regulatory challenges. Its main content focuses on regulating providers of AI products or services, and it still belongs to the traditional responsibility models of AI governance. Generative AI involves diverse titles in multiple circuits, such as data owners, computing power suppliers, and model designers. It is unfair for regulations to allocate the heaviest responsibility to providers of generative AI. That’s some, the resource of this is from some legal experts who published the article in China, in Chinese, and also, it is also unable to deal with some social issues, and I said it’s just a

Moderator – Yves Poullet:
Thank you. Thanks a lot, Changfeng Chen. Those were very interesting points you were underlining. I retain from your intervention first a certain number of keywords. You mention the famous cultural lag; I think that’s very important. You call for what you call a prudent regulation, not going too fast, and definitely you ask for what you call an inclusive procedure, in order to have the participation of all stakeholders. As regards the content of the administrative measures China has taken, they are quite close to what the EU regulation is proposing. I’ve seen that you pay attention to the intellectual property questions and that you pay attention to the privacy question; I was quite surprised, because it is very prominent in your regulation. And definitely, for addressing the risks, you propose internal risk assessments: risk assessments which must identify the risks, not only individual risks but also societal risks, and which propose a certain number of mitigations of these risks. So I am quite comfortable with this approach, because this approach is quite close to the EU regulation. And now I turn to Stefan, and I give the floor to Stefan, because in the US you have taken another option, and it is perhaps quite interesting to see to what extent, even if the US has taken a co-regulation approach, the same principles, some ethical principles, might be developed, and the same procedures might be implemented. Stefan, you have the floor.

Stefan Verhulst:
Yeah, thanks so much, and I hope you can hear me. Thanks, Yves, for having me, and I wish I were there in person in this beautiful room you have, which looks like a really adequate place for a conversation like this. Just to cover the questions you posed: the first question, it seems to me, was really about the moratorium, and I think the discussion, from my point of view, did open up a broader debate about whether a moratorium is even feasible, or whether we should rather focus on responsible technology development, as opposed to banning, or even having government intervene in how innovation is facilitated. It was an interesting conversation, but at the same time, in addition to this tension between a moratorium and a responsible-development approach, the underlying tension was also to what extent the development of AI, and in this particular case the development of large language models and generative AI, should be open or closed. That was the other big discussion from my point of view, and actually the more interesting one, because it really identified the interests behind the moratorium and the interests behind what is currently being proposed. On the one hand, you have organizations like, surprisingly, OpenAI advocating for closed development, quite often with the argument that if you were to open up the development of large language models or generative AI, you would create the potential for abuse. On the other hand, you have Meta, for instance, which has been advocating for an open approach to the development of generative AI, which from my point of view is actually most in sync with how AI has been developed until recently. Most research relating to artificial intelligence was always open and, as a result, I would argue, has been able to make massive advances, because it was open and because you had a whole army of developers and researchers working on improving existing models, including GPT models. If we start closing it, then on the one hand we will create new power asymmetries between those that have the closed models and those that have the open models; and, from my point of view, it would also undermine a core principle of research in the artificial intelligence space, which has always been openness. By making it open, you are also in a far better position to identify the weaknesses and challenges that might be out there. And I think that is another layer that needs to be addressed: it is not just about regulating or not regulating; it is really about to what extent you should make the technology open, so that you can really examine what the vulnerabilities are. Of course, the argument here is that if you make it open, others will abuse it. But that does not, from my point of view, validate a closed approach, because a closed approach will solidify the current power asymmetries in the market, which are equally challenging and important to address as the potential abuse of the technology itself. So that is as it relates to the first question, Yves: we need a more sophisticated way to have a conversation about a moratorium. It is really about how we develop technology in a responsible way. I do not think a ban will automatically make it responsible; it will actually solidify certain power positions.
And then the second element is really to what extent we can sustain the culture of openness in artificial intelligence research that has made tremendous strides to date. Now, of course, you asked about the approach of the U.S. to AI, and specifically to generative AI. As always, it is more complicated than just one approach, and I think there are multiple approaches currently being tested out. From my point of view, I will just touch on six approaches that we can see within a U.S. context. And indeed, Yves, as you rightly said, many of the approaches might feel like they are different, but many of the principles that underpin them are very much in sync with, for instance, the UNESCO recommendations, and also very much in sync with other emerging principles, such as the ones advocated within Europe as a result of the AI Act. Before I delve deeper in, it is perhaps important to state that the U.S. is again a member of UNESCO, and that this provides a new opportunity to bring the U.S. into the conversations on the implementation of the UNESCO recommendations, from which, as you know, the U.S. was absent until recently. Having the U.S. again as a member provides an opportunity to create more approaches that are in sync at the international level as well. Now, the six approaches. The one approach that was already mentioned by Yves is more of a rights-based approach: OSTP has tried to convene a multi-stakeholder process in order to develop the Blueprint for an AI Bill of Rights, which was really an effort to set out a set of principles, a set of rights, to be enshrined in a voluntary way. Because indeed, Yves, as you already said, this is not hard regulation; this is more a co-design of frameworks that subsequently need to be implemented in a self-regulatory, voluntary way. But the Bill of Rights was interesting because it did specify a set of principles and a set of areas of concern: for instance, the need to focus on the safety and effectiveness of the systems being provided, on algorithmic discrimination, and on privacy. As you know, the US does not have national privacy legislation, but I think the Bill of Rights was important in emphasizing the need for a more national, cross-sectoral approach to privacy in order to deal with AI, as well as issues of notice and explainability, which, again, are not unique to the US but are coming up everywhere. And then, of course, also the need to think about human alternatives, as opposed to automated ones, in actually making decisions.
And so those were the areas that the Bill of Rights addressed, and it subsequently also provided the framework for additional commitments, because I think that is the second big element of what has happened within the US: the White House, through the Bill of Rights but also through other means, has been able to engage all the large tech companies in making commitments to the responsible development of AI. These include commitments to test their systems against an assessment tool that, interestingly, was developed in a collective manner during DEF CON 31, which in itself was an interesting exercise, because there they tried to tap into the collective intelligence of expertise to come up with a framework that was subsequently recommended by the White House as the framework for assessment. (Just a remark: perhaps you will need to conclude in one or two minutes, because we have a lot of other discussion.) I know, I know, I will not go on forever here. The other element I will briefly emphasize is that we have also seen the creation of methodologies to assess risk, similar to what has happened in Europe. NIST, the National Institute of Standards and Technology, developed its AI Risk Management Framework, where it really tries to define what trustworthiness is and how we know whether systems are trustworthy; it is definitely a worthwhile exercise to look into. Then there is another element, which is always important: not only regulation, but quite often the shadow of regulation, given that we are relying on self-regulation. Senator Schumer, who leads the Senate, has held a set of hearings, and as you know, hearings are actually a very valuable tool in regulation, because they provide for oversight and for discussion. The last thing I will say, Yves, and then I will shut up, is that while all this has happened, and while a lot of it is co-regulation, and in most cases self-regulation, the states in the US have actually become far more active than the federal agencies in regulating, which refers again, Yves, to my other area of interest in AI governance: AI localism. States and cities have been really active in AI governance in the US. There are about 200 bills currently proposed at the state level, and multiple cities have started legislating on AI as well. I think it is also worth noting at the international level that states and cities are actually at the forefront of coming up with frameworks and legislation.

Moderator – Yves Poullet:
And I have to stop you here. Definitely. Thanks, thanks, Stefan. I think your proposal to complexify the discussion, notably by bringing in the question of open AI development, is definitely very interesting, and I think we will have to come back to it during the question and answer time. Another point, I think, is that the ethical values you described are the same ethical values asserted by China, and I think we have a sort of common agreement that the ethical values are fixed by UNESCO in a very clear way, and that we might accept them. So I do not think the real problem is the ethical values; the problem is rather how to enforce these ethical values. And you proposed to pay attention not only to public regulation or self-regulation, but you also mentioned a certain number of things like standardization and quality assessment, and I think that is very, very interesting. And you finished with this marvelous point about AI localism, and I think that is very powerful. I think we also need local communities taking this very seriously and proposing solutions that are fully in accordance with their culture and with the habits of their people. Okay, so now we have the question and answer discussion. I know that Fabio, thanks a lot for being the online moderator, already has certain questions. Please.

Fabio Senne:
Thank you. So we have two questions and comments online, more or less connected. One of them is from Omar Farouk, a 17-year-old boy from Bangladesh, who sent some very nice contributions. I will not read all of the contributions, because we are using the chat, but, regarding question two, Omar’s comment is: convene a global forum on generative AI to discuss the ethical, legal, and social implications of these technologies; support research on the impact of generative AI on everyone, including children and young people; and promote digital literacy and critical thinking skills among children and young people, so that they can be informed users of generative AI. Also, Steven Vosloo from UNICEF, building on Omar’s point, says that they are also concerned that they do not yet know the impacts of generative AI, positive and negative, on children’s social, emotional, and cognitive development. Research is critical, but it takes time. So what is the best way to navigate the reality that the tools are out in public, and we need to protect and empower children today, but will only fully know the impacts later? How do we deal with the need for research when the tools are already out there?

Moderator – Yves Poullet:
Thanks a lot for these first questions. Perhaps I will ask the different speakers, and not only those who have already taken the floor, but also Siva, and perhaps you, Fabio, if you want to answer these questions. I am having a look at the audience; I see the microphones are there, so if you have other questions, perhaps it would be interesting to raise them now. No, there is nobody. Okay, I come to the first two questions, and it is quite interesting to see that these are questions raised by young people, and that there is a specific need for education in the use of these generative AI systems. It is quite interesting; I had in mind that Stefan spoke about the fact that you must have responsible people using generative AI systems, and when you think about responsible people, it is not only the tech companies developing these AI systems, but also the users. So perhaps it might be interesting to answer the questions along that line. Any answer? Changfeng, Stefan, Siva, Fabio, no?

Stefan Verhulst:
Yeah, happy to briefly reflect on that. I fully agree with Omar that we need to engage with young people in a far more sophisticated way, to really figure out, A, what their preferences are and, B, what their solutions are, because it is not just about listening to young people; they may actually have solutions that are far more informed, because they are digital natives in many countries. Just last week, we finished six large solutions labs in six regions, together with UNICEF and with the Lancet Commission, focusing on adolescent wellbeing. One of the questions we posed to them was about data and artificial intelligence, and the responses were extremely sophisticated; it shows that young people really have a sense of what is happening and what their preferences are as it relates to AI. So we need a lot more of those conversations, especially in low- and middle-income countries, where the majority are actually young people. We need to engage the majority in order to become more legitimate in how we go about AI, so I fully embrace that. And I think we also need a lot more innovation in how we engage with youth. It is good that Omar joined today, but not many youth join sessions like the ones we have, which are still based on how we have done conversations for the last 50 years. Young people have moved on and are having conversations on different platforms where we, and I speak for myself, the aging population, are not used to having those conversations. So we need to really innovate in that way as well.

Moderator – Yves Poullet:
Thanks, Stefan. I think Changfeng Chen has something to say.

Changfeng Chen:
I think generative artificial intelligence is conducive to educating young people, and it creates a new view of rights. In fact, there is a theory of children’s rights in media literacy: young people have the right to use new technologies to learn and to develop themselves, and adults and professionals should have the obligation to guide them. It is a long process for young people to obtain this right, but I think the efforts have begun. UNESCO has its Media and Information Literacy Week at the end of this month, in the last week of this month, in Jordan. Many people are worried about young people in this kind of situation, and I think we should give young people this right. Also, technology companies should create some special help for young people.

Moderator – Yves Poullet:
Thanks a lot. I am quite interested by this new right for children to use technology for their own development; that is a very interesting point. Yeah? Okay, I think we have a question from the remote audience. Doha, you have a question? Please, two minutes, no more, because we have other things to develop. Doha, you have the floor.

Audience:
Thank you, I hope you can hear me. I am Doha, a program specialist at UNESCO working with Gabriela. I actually wanted to react to the previous questions, if that is okay, very quickly and briefly. I think the questions are very important and pressing, because it is true, as was rightly pointed out, that even if we were to think about a new ethical framework or a new regulation for generative AI in particular, it would take a lot of time, and it would indeed be wiser to utilize the tools we currently have, like the Recommendation and other guidelines on AI. But until we have something more concrete, what can be done in practice? I think it is important to go back to the essentials of awareness raising. For most people that I know, and especially, I think, for young people, it is very tempting to use those models, right? Because they greatly shorten our time and effort. But not too many are actually aware of the risks rightly pointed out by all the panelists. Usually, only if people try to use generative models to ask questions to which they already roughly know the answer in advance do they see the pitfalls, the challenges, the inaccuracies, the references to sources that are made up, and things like that. So I think being aware, raising awareness-

Moderator – Yves Poullet:
I am sorry, I think we have understood what you mean. Thanks a lot for your intervention, but I must cut you short, I am sorry, okay?

Audience:
No worries.

Moderator – Yves Poullet:
Thanks a lot, thanks a lot. There is a question in the room? Yeah, two questions.

Audience:
Yeah, thank you very much. As a child rights researcher from Germany, I really appreciate that we have questions about the rights and interests of young persons in this room. But for me, it is not just a question of the responsible usage of AI by young persons; it is a question of responsible usage by us all, and, much more important for me, it is also a question of responsible coding and designing. I am wondering whether this could be evaluated in a process of self-regulation, or whether it is not necessary to have a kind of official institution to give permission before such an AI technology comes into force or is distributed to us all. Maybe I am not familiar with the proposed bills and laws, but maybe we can hear something about that. Is it the right way to leave these responsible technologies to self-regulation by the private sector, or should we have an official institution to give a kind of certificate or permission to roll them out? Thanks.

Moderator – Yves Poullet:
Thanks a lot for your question. It is quite clear that we already have certain labeling institutions, and your question might refer to the use of the standardization process as a solution for responsible AI, which must then follow the standards. The problem is that, as regards generative AI systems, there are not yet a lot of standards, and companies must work on that issue very actively. Okay, there is another question, I think. Thank you.

Audience:
I’m Tapani Tarvainen from Electronic Frontier Finland, and it seems to me that we are already talking about the past. AI systems are no longer the purview of big tech companies only. When you can run a large language model on your own laptop, the cat, or let us say the llama, is already out of the bag in that respect. Basically, everybody is not only an AI actor in the sense of the UNESCO document, but effectively will be a developer as well. I predict this will happen in about two years: it will be easy to develop your own models without serious technical expertise. Everybody can be doing that, and you cannot regulate everybody. It would be nice if all developers were responsible, as it were, but if everybody is a developer, I cannot see how you can make everybody responsible. Maybe someone can; I would be happy about that, but I do not see how that works. So think about the implications of people, all people, criminals, young people, anybody, developing AI models for themselves to do whatever they want them to do, not just using the existing things developed by someone we can regulate. So what can be regulated is the question. You can regulate commercial usage, official usage, perhaps the data that can be used, but the development, no, I do not think you can. Thank you.
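As an illustration of the point above (a sketch added for this report, not something presented in the session): running a small open-weight language model locally has indeed become a few lines of Python, assuming the open-source Hugging Face transformers library is installed; the model name below is an example choice, not one named by the speaker.

```python
# Minimal local-inference sketch: load a small open-weight chat model and
# generate text entirely on one's own machine, with no big-tech API involved.
# The model name is an illustrative assumption; any small open model would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # example small open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "In one sentence, what is culture lag?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```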

Moderator – Yves Poullet:
Thanks a lot for your statement. I am afraid we have to go to the second part of our session and give the floor to Fabio and Siva. Siva is present remotely, and I have two questions. A recent OECD report on large language models has clearly demonstrated the poor performance of these tools in many languages other than the languages predominant in AI systems, like English or Chinese, and that notwithstanding the efforts of certain states to establish big data in their own languages; Finland, for instance, has taken a certain number of measures to develop data repositories in the Finnish language. More important is the fact that generative AI systems are promoting cultural interference. How do you see a solution to this discrimination, denounced by the UNESCO Recommendation? A second question concerns the fact that the use of most generative AI applications, contrary to traditional internet services, is based on a business model which requires payment for the proposed service. Once again, there is a risk of seeing a certain number of persons excluded from the benefits of this innovation, contrary to an inclusive scenario. How do you see that risk, and which solutions do you envisage to address it? Siva, you have the floor.

Siva Prasad Rambhatia:
Thank you, thank you for the opportunity. I have benefited from listening to the previous panelists’ presentations and all of the questions. Basically, the UNESCO document really is aware of the kind of issues that we are discussing, but at the same time, the solutions it offers are of a more generalistic kind. And we all know how technology.

Moderator – Yves Poullet:
Siva, is it possible to increase the volume? It seems there is a problem.

Siva Prasad Rambhatia:
Yes, I’m audible now.

Moderator – Yves Poullet:
Is it okay for you? Please. Go for it.

Siva Prasad Rambhatia:
Is it okay?

Moderator – Yves Poullet:
I think.

Siva Prasad Rambhatia:
Is it okay?

Moderator – Yves Poullet:
It’s okay for me.

Siva Prasad Rambhatia:
Okay, okay. What is important for us is that, generally, any technology discriminates between those who are better off and those who are not: in terms of education, in terms of resources, and in terms of who will control it and who will not. This is one thing that we must remember. That is where discrimination begins; the big discrimination lies in that source itself. And what artificial intelligence has done, in fact, is create new kinds of inequalities, new kinds of divides, what we call the digital divide; or rather, the digital divides co-exist with, and accelerate, the existing socio-cultural and other kinds of inequalities. When we talk about technologies, technologies by themselves are creations of companies or individuals or anybody else, but those creators have their motives and their kinds of ideas, and they may not be very concerned about inclusivity and other problems, because profit is more important for them. This, in fact, has been established very widely by scholars. And what we find is that artificial intelligence has affected societies in multiple ways. It has affected societal relations; it has affected socio-cultural ecosystems, whether through fake news or other such things, or through breaches of privacy, and a number of other things that have been discussed already. Given this, we must also remember that these generative models are also a challenge for ethics. We need to focus on the ethical and well-being issues of artificial intelligence and generative AI, specifically reflecting on marginal communities, indigenous communities, and those who are poor and illiterate, especially in the Global South. This is where most of these generative models are general, in a sense. As Stefan was saying about the kinds of layers, some models are larger in their applications and more homogeneous in their design. But when we are talking about plural societies, multicultural societies, multilingual societies, the problems are compounded, and even within that, gender and other issues become more problematic. Which means that when we are talking about any kind of guidelines, restrictions, or controls, one has to be sensitive to all these layers of hierarchies. And what we find is that generative models have, to a large extent, dispensed with many sections of workers; we do not need the writers, something can replace them. Let me not waste much time, given the paucity of time; I will just touch upon the fact that what AI, as well as generative models, are doing is creating a kind of disconnect between humans and within societies, and also between humans and nature. So what we need to do, basically, is focus more on local or region-specific approaches in generative AI. We must also try to use or develop databases from local knowledges and traditional epistemologies, which are more usable for building better knowledge societies and for finding solutions to the human problems that we have.
This can really be a good contribution to humanity, and also to nature, in order to build a sustainable and equal society; that is what I wanted to briefly touch upon. I can elaborate in answer to your questions later, because the time is very short. Thank you.

Moderator – Yves Poullet:
Thanks, Siva. Thanks a lot. I like your expression: a Bill of Rights for every citizen, for everybody in the world, but in a plural society; and you come back to the idea developed by Stefan about localism. I think it is very, very important to hear that from you. Fabio, you have the floor. Thank you. Just a question: is it possible to have 15 minutes more? I am turning to the technicians. Is it possible to have 15 minutes more now? No? I think we will do this later. Is that okay? Okay, 10 minutes is okay.

Fabio Senne:
Okay, thank you. I will try to be very brief, and it is easy to speak after such great contributions as we have had. I will just highlight a few points from my perspective. I work in Brazil at CETIC.br, which is a UNESCO Category 2 center and is also connected to the Brazilian multi-stakeholder internet governance model represented by NIC.br and CGI.br. As an organization producing research and data in this field, we need to say that we do not know yet, and do not have enough data on, this issue, so there is a need for more investigation in this area. But we do know some things that I think are important for understanding the possible risks and influences in this scenario. First, of course, there is global digital inequality: the inequalities among countries and regions in how they access the internet and digital technologies, and how this can impact the quality of the training data that these models have, with issues like languages, where a large part of the world’s languages are not represented, or not well represented, in these models. But there are also the inequalities within countries, which also greatly affect the diversity of the data used; in the case of Brazil, we know that there are persistent patterns of digital inequality connected to race and gender, rural versus urban population, income, level of education, age, and so on. So from the perspective of the diversity and inclusiveness of the process, I think digital inequality is very important. The same applies from the perspective of the use of these generative AI tools, which can be affected by, or correlated with, other aspects such as poverty and other vulnerabilities. We know from other disruptive technologies that early adopters tend to benefit more when a new application is available, and the impacts tend to be more disruptive in the early phases of dissemination of a tool, when only a few can access and benefit from it. So from a perspective of fairness and non-discrimination, this is also important. And finally, when we talk about digital inequalities, we are not talking just about access and use, but also about skills: the differences in the abilities people have in using these tools. We know from the data we have in Brazil, for instance from our research on children’s use of the internet and their skills, that although operational and social skills are very widespread among this population, informational skills, the skills related to the critical understanding of content, are underdeveloped among the population we interviewed. For instance, 43% of children 11 to 17 years old in the country agree that the first result of an online search is the best result, 51% agree that every person finds the same content when searching online, and 42% are unsure about their ability to check online information. So, in this case, we are talking about children, and the need for raising awareness and for literacy, including AI literacy, throughout the educational systems is also an issue. Just to finish, I would like to call attention, of course, to the need for data production and research to understand this process better; but from the data we already have, we know that we need to face digital inequality as a matter of having an AI that is more inclusive and human-centered. So this is my perspective for now, and thank you.

Moderator – Yves Poullet:
Thanks a lot, Fabio, for these very short but definitely very interesting remarks. I think you have given very concrete indicators about what is happening and the inequalities we are facing with this new technology. So we might go now to the question and answer time. I do not know if there are questions, and after that we will have a tour de table among the panelists, in order to have from each of them, in one minute, a recommendation to address to the IGF about generative AI systems. So please; as regards questions online, there are none. Perhaps Mr. Barbosa, no? No? Okay. So I change my hat, no? Okay, so we might go directly to the recommendations, and perhaps I will start with Siva. You finished with a very strong recommendation, so perhaps you might repeat it, so that Noemi can write down exactly what you had in mind. Siva, you have the floor, for one minute.

Siva Prasad Rambhatia:
Yeah, yes. My recommendation would be that when we design generative AI models, we should concentrate more on local and regional issues, so that we can think in terms of multicultural aspects and inclusivity. Only then will these communities be able to participate; otherwise, we will be excluding whole sections of people, and they are the majority, not a minority. Thank you.

Moderator – Yves Poullet:
Thanks a lot for the recommendation. We are not presently in that sort of situation, because it is quite clear that if you want to create big data, you need a lot of data, and you definitely need a very complex algorithmic system. You know that most of the large language models are using more than one billion parameters, so how to develop all that is very, very difficult.

Siva Prasad Rambhatia:
Can I add to it?

Moderator – Yves Poullet:
Yeah.

Siva Prasad Rambhatia:
What I was suggesting is that local knowledge systems need to be documented, so that they can help in building these kinds of models. Thank you.

Moderator – Yves Poullet:
Okay, thanks. Thanks a lot, Siva, for this precision. Changfeng, do you have a recommendation?

Changfeng Chen:
Yes, the discussion and the questions were very interesting and inspired me to bring up a related thought: professionalism. I think a kind of professionalism in artificial intelligence should be promoted. Professionalism is a set of standards and behaviors that individuals and organizations are expected to adhere to in the workplace. It involves demonstrating certain qualities and characteristics that contribute to a positive and effective work environment. Just as justice is to law and facts are to journalism, key aspects of professionalism include reliability, high standards, ethical behavior, respect, responsibility, teamwork, and so on. So for artificial intelligence, humans need to have a real professional conscience about the new technology, rather than regionalized values and regulations. Of course, we still need to respect multicultural values, but at the same time, in the general technical field, we need to have a general way of thinking. So I think AI professionalism can have the effect of regulation.

Moderator – Yves Poullet:
Thanks, Changfeng. Mr. Bekele, do you have any ideas regarding a recommendation?

Dawit Bekele:
Thank you. I agree with most of the things that have been said, and in particular on the importance of having local responses to the question. I believe that generative AI should not be imposed on any society; societies have to choose how they use it. But I see some challenges, particularly resources: some countries do not have the resources to deal with these kinds of problems, nor the knowledge. I think it is important for organizations such as UNESCO to make sure that everyone is empowered, understands the issues, and has the possibility to address them at the local level. And I also think the big companies have a responsibility to support, even financially, poorer countries, so that they can decide what they take from this important revolution. Thank you.

Moderator – Yves Poullet:
Thanks a lot, Dawit. And Stefan, perhaps?

Stefan Verhulst:
Yeah, sure, very shortly. I think we need to pay more attention to the fundamental principle of garbage in, garbage out, as it relates to generative AI. That means we have to focus not just on the model, but really on how we create quality data, being more focused on the data side and on unlocking quality data. The whole agenda of open data, open science, and quality statistics has become more important than ever, because if we want to have quality generative AI, we actually need to have the data infrastructure.
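To make the garbage-in, garbage-out point concrete (an illustrative sketch added for this report, not something presented in the session): even the most basic corpus-hygiene steps, such as normalizing whitespace, dropping fragments, and removing exact duplicates, are part of the "quality data" work pointed to here; the threshold below is an arbitrary assumption.

```python
# A toy corpus-cleaning pass: normalize whitespace, drop very short
# fragments, and remove exact duplicates. Real pre-training pipelines do
# far more (language identification, toxicity filters, near-duplicate detection).
def clean_corpus(docs, min_chars=50):
    seen = set()
    cleaned = []
    for doc in docs:
        text = " ".join(doc.split())   # collapse runs of whitespace
        if len(text) < min_chars:      # drop fragments too short to be useful
            continue
        if text in seen:               # exact-duplicate removal
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned

corpus = ["Hello   world! " * 10, "Hello world! " * 10, "too short"]
print(len(clean_corpus(corpus)))  # 1: duplicates merged, fragment dropped
```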

Moderator – Yves Poullet:
Thanks. Fabio, you are the last one.

Fabio Senne:
Thank you. Just to highlight also the need for monitoring and evaluation: I think we have to foster international frameworks and indicators. There are indicators from UNESCO, and there is the OECD Observatory on AI. I think those tools can be very useful, nationally and internationally, to create ways of fostering research, monitoring, and understanding the impacts of those tools that are already emerging.

Moderator – Yves Poullet:
Thank you; that was a marvelous transition to you, Marielza. Marielza, thanks a lot for joining us. I know that it is very, very early in the morning for you, so thanks a lot for being with us. Marielza, you are the director of the IFAP program, so perhaps a few words. You have heard the expectations of a certain number of persons from UNESCO, so you have the floor.

Marielza Oliveira:
Thank you very much, Yves, and hello everyone. I am really pleased that I can join you, even if only for part of this very important Internet Governance Forum session on generative AI; unfortunately, I had a previous commitment. In my capacity as the Secretary of the UNESCO Information for All Programme, let me first warmly congratulate Yves and the IFAP Working Group on Information Ethics, the convener of this fascinating discussion on generative AI. This is a new technology which holds profound implications for our societies, and it is crucial that we examine its impacts through the lens of both ethics and human rights. IFAP is an intergovernmental program that supports member states in fostering inclusive knowledge societies, and our mission is fostering universal access to information and knowledge for sustainable development. Information ethics is, of course, among our top priorities, and IFAP has recently endorsed a new strategic plan for the period 2023-2029 that emphasizes the implications of digital technologies, including AI, for our right of access to information. One of our areas of work is to build capacities for, and convene reflections on, the ethical, legal, and human rights issues that arise out of frontier technologies, and this marvelous session is an example of the excellent contributions being made by the IFAP working group dedicated to this topic. The implications of frontier digital technologies, from artificial intelligence, including generative AI, to blockchain, the internet of things, and augmented reality, are profound for information ecosystems, and we need to really grapple with them. So what IFAP does is support and encourage a series of actions. For example, we work on promoting research into these implications for inclusive and equitable knowledge societies, raising awareness of the sustainable development opportunities that these technologies bring, but also of the risks and the mechanisms to address those risks, including the impact, for example, on privacy, on the environment, and so forth. Following the endorsement by UNESCO’s 41st General Conference of the Recommendation on the Ethics of Artificial Intelligence, which is the first global instrument on artificial intelligence, IFAP promotes the implementation of the Recommendation and supports regional and international cooperation, research, the exchange of good practices, and the development of understanding and capabilities to respond to these ethical impacts on information ecosystems. IFAP also promotes applying evidence-based frameworks and a multi-stakeholder approach to designing and governing artificial intelligence, and we certainly use the Internet Universality ROAM principles that Fabio just mentioned, which say that digital systems must be human rights-based, open, accessible, and multi-stakeholder governed. IFAP also serves as a platform for member states, academia, civil society, and the private sector to share experiences and best practices for overcoming digital divides and inequalities, including the different capacities to work with technologies such as generative AI. We assist institutions in ensuring that AI technologies are accessible and beneficial to everyone, including marginalized communities and groups such as women, the elderly, and persons with disabilities.
And we participate in global dialogues and forums across the globe that trigger discussions among all stakeholders, sharing the challenges, best practices, and lessons learned on this technology. This is why I am calling upon all stakeholders here today to amplify the call for human-centric approaches to AI. It is only through a common collective effort that we can shape a digital future that upholds shared values and builds sustainability and equality across all knowledge societies. And for that, I want to congratulate again the Working Group on Information Ethics, and particularly Yves, who has been taking this critical conversation forward through a series of major global and regional workshops on this topic. I hope that you can all join the next events and disseminate the outcomes of this discussion. So thank you very much for your insights and your commitment to shaping a more informed and ethical digital future that leaves no one behind. Back to you, Yves. Thank you.

Moderator – Yves Poullet:
Thanks, Marielza, for these marvelous concluding remarks. It is a pity we have to finish this workshop so early; I think we would need more than one day to discuss all the topics we have mentioned today. But definitely, it will take a common collective effort to address all these issues and to find solutions to them. So I would like first to thank the technicians for their nice support, which is very important, and for their understanding of the fact that we took 10 minutes more. I would like to thank the audience, the remote audience, and definitely the people who had the courage to stay here. And I would like to thank very, very strongly the panelists for their fine input to the discussion. I see, Marielza, that you raised your hand, no? Okay. Oh no, that was applause. Okay, so I think we need applause, definitely. Applause. Thank you.

Speech statistics

Speaker | Speech speed | Speech length | Speech time
Audience | 158 words per minute | 715 words | 271 secs
Changfeng Chen | 112 words per minute | 1145 words | 614 secs
Dawit Bekele | 135 words per minute | 1557 words | 692 secs
Fabio Senne | 148 words per minute | 947 words | 384 secs
Gabriela Ramos | 157 words per minute | 1270 words | 485 secs
Marielza Oliveira | 146 words per minute | 734 words | 301 secs
Moderator – Yves Poullet | 124 words per minute | 3557 words | 1727 secs
Siva Prasad Rambhatia | 130 words per minute | 874 words | 403 secs
Stefan Verhulst | 156 words per minute | 2174 words | 837 secs