Bottom-up AI and the right to be humanly imperfect | IGF 2023

8 Oct 2023 02:15h - 03:45h UTC


Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.

Full session report

Sorina Teleanu

The analysis reveals a range of sentiments regarding the application of Artificial Intelligence (AI) in domains such as negotiations, decision-making, education, foreign affairs, and the particular challenges faced by smaller and developing nations.

A positive aspect of AI is highlighted in its capacity to support complex decision-making and foster critical thinking within educational environments. The effectiveness of AI in enhancing decision-making and negotiation was showcased in the global digital compact simulation, where an AI advisor was used to refine arguments and language, having been trained to offer details on digital policy and internet governance. Further, in the realm of education, dismissing the use of AI in schools is argued to be counter-productive: AI's role in stimulating critical thinking and in helping students understand intricate policy matters underscores its contribution to quality education and innovation.

However, the sentiment isn’t unequivocally positive. The analysis also uncovers AI’s limitations, stressing the importance of its critical application. Instances where AI hallucinates and doesn’t always deliver perfect results have been pointed out, demonstrating that although AI could be a valuable tool, it must not be relied upon blindly.

The evaluation also delves into the struggles of small and developing nations, particularly in digital governance and diplomacy. The overwhelming volume of information and tasks, combined with limited resources and time, poses significant challenges for these countries, making AI a valuable aid for effective decision-making and negotiation.

AI’s significance in foreign affairs emerges as it economises time and provides diplomats with a foundation for negotiations. Ministries of Foreign Affairs are encouraged to develop their own AI systems to retain control over data, relying on their knowledge base and experience. The concept of ‘bottom-up AI’ is proposed, arguing that it could allow a more controlled and tailored use of AI, and return AI back to users.

The potential of AI to support underserved communities and mitigate inequalities in representation is also explored. The development of bottom-up AI based on knowledge from these communities bolsters this argument, as does the observation that AI can enable more meaningful engagement for smaller countries.

Nevertheless, despite the proposed benefits, the need for transparency and accountability in AI systems is underscored, with apprehensions raised about the non-explainability of neural networks. There is significant criticism of uncritically accepting statements from large AI systems and of a general tendency toward blind trust.

The evaluation concludes by emphasising the importance of addressing current AI issues, such as regulation, before getting consumed with future challenges. Large firms are depicted as demanding future AI regulation whilst disregarding existing issues, prompting a call for allocating resources to counter today’s challenges before concerning ourselves with future ordeals.

In harmony with Sustainable Development Goals (SDGs) 4, 9, 10, 16, and 17, the overall analysis accentuates the potential of AI in driving innovation, assisting in quality education, reducing inequalities, aiding in institution-building, and fostering partnerships. Nevertheless, the pivotal importance of careful, regulated, and transparent usage of AI is underscored.

Audience

The discourse raised a number of critical points spanning numerous subjects. A significant challenge was identified in Brazil with regard to technology: a substantial number of NGOs struggle to integrate technological approaches due to a lack of tech literacy. This issue hampers these organisations from fully capitalising on technology in their operations, suggesting the need for dedicated digital literacy programmes.

Interestingly, the proposition was raised that augmenting participation and representation in tech-related matters could bolster the advocacy of local perspectives. This argument was underpinned by the desire to categorise knowledge in a manner that respects and supports local viewpoints, shining a spotlight on an essential consideration in the democratisation of technology and inclusivity.

The discussion then turned to concerns about the economic ramifications of automation. The use of technological tools such as chatbots in Brazil's service sector has soared, stirring anxieties about potential structural unemployment and raising the prospect of diminished economic opportunities and job security. In view of this, there was agreement on the need for a paradigm shift towards creating dignified, rewarding economic opportunities.

The discourse additionally exhibited a robust belief in innovation and its prospective benefits. Participants conveyed strong support for a bottom-up Artificial Intelligence (AI) approach and open-source methods for managing knowledge at scale. The capacity of these methods to organise and categorise knowledge with sensitivity to local perspectives was seen as promising.

However, feedback and constructive criticism were deemed essential for the amelioration of larger systems. Questions were raised about whether insights from these systems were being considered and whether prevailing systemic problems required addressing, indicating a need for rigorous examination and rectification of these systems.

A particularly thought-provoking point in the discourse was the expression of concern regarding the rapid displacement of families due to the expanding influence of modern technology. This issue particularly afflicts rural areas of Brazil, leading to depopulation of the countryside and growth of cities. The resulting erosion of culture and knowledge is significant, especially in small communities.

A suggestion was forwarded in response to these challenges to utilise AI to preserve and cultivate the history and culture of small communities. This would involve AI assisting in updating and uploading knowledge about these areas, spanning physical practices, agricultural practices, stories, and mythologies.

A more neutral point concerned AI's design and adaptability, specifically with regard to individuals with disabilities. Current AI systems are often trained on 'perfect' data, potentially making them less adaptable to human error, whereas humans are able to learn from their mistakes. Consequently, developers should cultivate more adaptable AI that can accommodate humanlike errors.

In a related argument, it was posited that AI should be enhanced to aid persons with disabilities rather than marginalising them. There is apprehension that current AI protocols might inadvertently set a standard of 'perfection' that could be exclusionary, particularly for individuals with disabilities. Ensuring that AI is a tool for inclusivity rather than exclusion thus presents an opportunity.

In sum, these insights prompt a reassessment of how technology, specifically AI, is utilised and incorporated into diverse sectors of society. The call is widespread for more tech literacy programmes, adaptable AI, and active involvement in technology decision-making. These transformations would contribute significantly to striking a healthy balance between swift technological progression and preserving crucial aspects of our cultural heritage and humanity.

Jovan Kurbalija

Jovan Kurbalija, Director of the Diplo Foundation, explores the intersection of philosophy, technology, and artificial intelligence (AI), particularly concerning education, cultural context, governance, and ethics. He promotes a profound understanding of technological advancements without becoming engrossed in their complexities, thereby maintaining a steadfast focus on broader societal and philosophical effects.

At the heart of Kurbalija’s argument is the Diplo Foundation’s innovative development of a hybrid system. This unique construct, merging artificial intelligence with human intelligence for reporting, has been cultivated based on the Foundation’s extensive experience and session management. The potential capabilities of this system in promoting dynamic learning environments and stimulating intellectual engagement were also highlighted.

Adding a fresh perspective to the discourse, Kurbalija proposed that AI models should harmonise with each community’s distinct traditions and practices. He believes this would contribute to a more authentic, bottom-up AI model that does not limit itself to predominantly European philosophical traditions. In a similar vein, he emphasised the urgent need for high-quality data in developing diverse, flexible open-source AI models.

However, he stressed the importance of preserving individual and community-based knowledge rights, protecting against its potential commodification by AI. Kurbalija highlighted concerns regarding transparency and explainability within AI applications, allied with apprehensions about AI’s misuse in creating disinformation.

Certain aspects of AI’s current governance invoked criticism, notably the sidelining of smaller entities by larger corporations. A call was made for increased corporate responsibility due to the extant challenges related to AI usage. Despite AI’s potential in preserving small communities’ heritage and culture, a significant gap was recognised concerning the lack of initiatives that leverage AI to safeguard cultural diversity.

While acknowledging AI’s potential in aiding individuals with disabilities, caution was raised about anthropomorphising AI, reinforcing that AI should serve as a tool, not as a master. The uniqueness and imperfection of human traits were lauded as invaluable characteristics and were claimed to be essential considerations in the development of AI.

In conclusion, Kurbalija’s discussions presented a potent outlook on AI’s broad societal impacts, issuing an urgent summons for more inclusive and ethical AI development, whilst highlighting concerns regarding transparency, accountability, and the conservation of local cultures and individual rights.

Session transcript

Jovan Kurbalija:
My name is Jovan Kurbalija, I'm Director of Diplo Foundation and Head of Geneva Internet Platform. Together with me is Sorina Teleanu, who is Director of Knowledge at Diplo Foundation and a person who is involved extensively in AI developments. And now, while we were preparing for today's session, we thought of having two ways to approach it, and we will be guided by your questions and comments about this session. We want to develop it genuinely as a dialogue. We have a lot to offer in terms of ideas, concepts and the overall approach of Diplo to artificial intelligence, but I'm sure there is a lot of expertise in the room. And this is basically the key, therefore let me suggest a few practicalities. We will talk, but whenever you have a question or comment, raise your hand and don't feel intimidated. The only stupid question is the question which is not asked. There are a few exceptions to this rule, but that's basically our approach. I always think, when we gather for a meeting or for a course, because we are teaching a lot, how we can really maximize this hour, this valuable time for all of us. Generally we sometimes underestimate the importance of the moment, the importance of being there. And I think in Kyoto, with Zen Buddhism and other Asian religious traditions, we can learn more about being there, being in the moment, and trying to grasp, trying to really find this unique energy. And unique because this is the moment, this very second, this very second of our life and our existence and our interaction. Therefore let's maximize on that. Now, Sorina, shall I monopolize the microphone, or you're so gently nice? I started with philosophy, and probably this is one of the possible entry points.
Because artificial intelligence for the first time pushed us to think about the questions of why we do it, or the question of why for our existence, the question of our dignity, the question of purpose, the question of efficiency, many core questions that civilization has to face. Therefore, if you see our leaflet about the Humanism Project, you can see that we approach it through technology, through diplomacy, through governance, and through philosophy, linguistics and art. You can take any entry point. I suggested this philosophy entry point, and you will see why it is important. Now, I'm sure you will be using a lot of cameras. Unfortunately, these days we don't use this. We just brought with my wife a Nikon from Europe, a very heavy Nikon, and she told me in Tokyo, she said, why do I need to carry this heavy Nikon with the lenses, you know? Zoom out, zoom in, when an iPhone, a good iPhone camera, is basically doing a lot. Now, we won't get into this discussion. I'm sure that there will be passionate Nikonists or Canonists, you know, these two tribes, who will say, no, no, no, no, you still do it with Nikon. But the idea is to zoom in, zoom out. We zoom in on philosophy, we zoom out on questions, zoom in on technology, or zoom out on philosophy. Therefore, try to use that optics within the next hour. What is unique about Diplo is that, whatever we do in digital governance, since the very beginning of the organization, we needed to touch technology. Therefore, we did the TCP/IP programming, we did the DNS, we did everything in order to know how it functions. We wanted to see what is under the bonnet. One problem, and I'm noticing, I was at the first IGF, at the Working Group on Internet Governance, which is ancient history, a long, long time ago. But I noticed that sometimes we discuss things without understanding them. We don't need to be techies, mind you. These issues are sometimes even philosophical. But you have to have a basic understanding of what's going on and how it functions.
This again, when we need to have the scale, you know, to understand technology, but not to become techies. Because if you are only a techie, you basically won't see the forest for the trees. Everything will be just neural networks these days. Or yesterday, crypto or blockchain, or the day before, TCP/IP, and that's then basically a problem. So it's a tricky exercise. We have all of these entry points. And what I suggest, which is also in the title of the session, is that there is another aspect we should keep in mind: the walk-the-talk approach works in such a way that the whole IGF will be reported by our hybrid system combining artificial intelligence and human intelligence. So if you go to IGF 2023, it's Digwatch 2023, you can also download the iPhone or Android app, and you will be having the reports from the sessions coming from a mix of artificial intelligence and human intelligence. Now, how does it work? We have been reporting from IGF for decades, summarizing long sessions humanly, basically. Now we said, okay, let's codify our reports and create an AI system. Therefore we can have something which could be called IGF GPT, or IGF AI. But basically we trained the AI on our reporting and our sessions. It is now deployed by our AI team. Poor guys have to wake up early in the morning, they are based in Belgrade. They are now doing reporting by the AI system, doing everything automatically, from transcribing, with special language support for transcribing IG and AI and cyber terminology, to making it into the report which you can visit here for each session. Now, as you will see from the reports and you will see from our work, I think this session is, we just have to note that it's GMT time, because I was confused this morning. I said, what, 2 o'clock in the morning? You will have, after the session, I don't know, about 20 minutes or half an hour, I don't know exactly what will be the timing.
You will have reports from this discussion. Therefore, again, we think we have to walk the talk. It's enough to talk about AI, how important AI is, how it's changing the world, ethics. AI will eat us for breakfast, or we may survive, we may not survive. That's another discussion which I'm very critical and skeptical about. But let's use AI. Only by using it can we see how it works and how dangerous it is. We are not naive about dangers. There are risks. But many risks are here and now. If you just go to the risks in the future, it could be a bit tricky. Because whenever the future was brought up in a negative way in discussions, it was often around certain ideologies. And the message is, forget today, forget now, we discuss the future. And once we come to the bright future, we'll be happy. But what happens in the meantime with our lives? You know the references, I won't make references to the historical experiences, but it's a very tricky argument about the future. Therefore, there is something that you can use now. But let me again zoom out. And go to basically, if I manage to close this, oh, I managed to find it, great. We call it that we had a winter of excitement: ChatGPT came into force, everything is changing, it can write a master thesis instead of you, blog posts, you know the whole story, let's say, December, January, February. Although AI is much, much older, as all of you know. Then there was a spring of metaphors. People suddenly realized, wow, it's coming, let's do something with this. Metaphors of danger, apocalyptic, an Armageddon risk for society, or nice ones, it will help us. Then you have the summer of reflections. And we call this the autumn of clarity. Think about four seasons, not the hotel, but four seasons in AI: winter of excitement, spring of metaphors, summer of reflections, autumn of clarity. Now, during the summer of reflections, what I did, I said, OK, let's see what happened.
Two things we did, Sorina and myself, and she will explain the other thing, what she did at the course. We said, OK, let's recycle ideas. What were the ideas of the ancient Greeks in the axial age? What can Socrates teach us about AI and prompting? What about the journey of zero from Indian civilization via Al-Khwarizmi and Tunis to Fibonacci? What about the ancient Greeks? What about the three big Chinese philosophers and AI? What would these people tell us? About knowledge, about ethics, about individuals and communities, about the Renaissance with Voltaire and Rousseau, great thinking of that period, Holbein's painting, it's a bit of my niche interest, the Vienna thinkers. When you really think about today's era, and under this there is a text, you can see that you have five thinkers who lived in Vienna between the two world wars who basically set the stage for AI, and in Geneva and Vienna, and I'll show the Geneva thinkers. Hayek on knowledge, Freud on human psychology, and possibly the person who most inspired thinking about AI is Ludwig Wittgenstein, who basically moved to probability theory and language as a key element of philosophy. Then we said, OK, those are the Vienna thinkers. You have then the Ubuntu thinkers in Africa, again another rich civilization, with thinking not written in texts but basically codified in practices. And in parallel, what we did during this summer of reflection, Sorina went to deliver a course with the College of Europe for a group of students from Germany. She wrote a blog post. Whatever we do, we codify, because we believe in creative commons and in enriching the discussion on, in this case, AI. Sorina, you may tell us a few words, I will scroll, but what did you do during the course, what was the purpose of using AI, and how did we use it?

Sorina Teleanu:
Thank you, Jovan. Hello, everyone. We won’t spend more time talking, but just quickly because the title of this session includes this whole idea of bottom-up AI, and we hope to hear from you what you understand by that. But what we did at the summer school is just one example of bottom-up AI. So quickly explaining what happened there, we had a group of 25 students. And for about 10 days, we simulated the negotiations of a global digital compact. You’re at the IGF. I’m pretty sure you know what’s with the whole GDC and the discussions around it, so I won’t go into that. We split the team into, well, the group into a few teams. Technically representing some of the biggest countries and groups, we had China, the US, Brazil, and a few others, also civil society and technical community. And the task was to prepare and then negotiate how they would see a global digital compact looking like. But to help them, and also because many of them were newcomers to the whole idea of digital governance, what our team in Belgrade, Serbia did was to prepare this AI advisor. How it worked, we fed it. A lot of documents on internet governance and digital policy, and also with the contributions that stakeholder made to the global digital compact process. And then each of these five teams had their own advisor. What you see on the screen is the advisor of Brazil, right? The idea was for students to engage with the AI, to see how it works, to use it in the process of them preparing their arguments for negotiation, but also to discover the bad of the technology or the challenges. And that, I found the most beautiful part of it all. At the end, we sat a bit and talked about how they actually used the advisor, what they found useful and what they found challenging, and the discussion was really good. 
They were able to say, okay, we used it to fine-tune our language, to be better at negotiating for our position, to find things we might not know about our own country or our own stakeholder group. But we also understood that we cannot just rely on what the AI is telling us, but take it critically, assess it, and actually use our minds. Another reason why we did this is, as you probably know, some of the schools around the world have taken this very, very knee-jerk reaction, saying, okay, we're going to ban the use of artificial intelligence in schools, which we think is not a good approach to take. So the idea at the summer school was to expose students to the use of AI for them to be able to develop this critical thinking as to how you can use it, why it's good, and where you shouldn't actually rely on it, because, again, it's just technology, and sometimes it does hallucinate. But this was just one example of bottom-up AI and how we're trying to build this from the bottom. And I think we can turn to the audience, Jovan, and ask what everyone here actually understands by bottom-up AI before we actually go into more of what we're doing. So I'm going to move around. Do we have a roving mic? Ah, there is a mic there, right? A question to you all in the room, because we promised we're going to have more of a discussion and not the two of us speaking for 90 minutes, which kind of defeats the whole purpose. What do you understand by bottom-up AI? Or if that doesn't sound like an interesting question, why did you join this session? What did you expect from it?
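The "AI advisor" described above, fed with internet governance documents and stakeholder contributions, resembles a retrieval-grounded setup: relevant passages from a corpus are found and used to ground answers. The following is a minimal, illustrative sketch only, assuming a toy word-overlap retriever; it is not Diplo's actual implementation, and all class and document names are invented for the example.

```python
# Illustrative sketch of a document-grounded "AI advisor".
# The Advisor class, the corpus, and the scoring are assumptions,
# not the system described in the session.
from collections import Counter

def tokenize(text):
    return [w.strip(".,?!").lower() for w in text.split()]

class Advisor:
    def __init__(self, documents):
        # The advisor is "fed" a corpus of policy documents.
        self.documents = documents

    def retrieve(self, question, k=2):
        # Rank documents by simple word overlap with the question.
        q = Counter(tokenize(question))
        scored = sorted(
            self.documents,
            key=lambda d: sum((q & Counter(tokenize(d))).values()),
            reverse=True,
        )
        return scored[:k]

    def answer(self, question):
        # A real system would pass the retrieved passages to a language
        # model; here we simply return them as grounding context.
        return {"question": question, "context": self.retrieve(question)}

advisor = Advisor([
    "Brazil's contribution to the Global Digital Compact stresses capacity building.",
    "The technical community emphasises interoperability and open standards.",
    "Civil society submissions focus on human rights online.",
])
print(advisor.answer("What does Brazil say about capacity building?"))
```

In a production system the overlap scoring would be replaced by embeddings and the retrieved context passed to a language model, but the bottom-up point is the same: the answers are grounded in the community's own documents.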

Jovan Kurbalija:
Please. Is it before or after coffee? Thank you so much.

Audience:
The reason I’m here is because I want to know what you think of it. Help us wrestle the idea to the ground, and we’ll probably help you back.

Jovan Kurbalija:
Thank you. The AI, ChatGPT, won't reply in this way, you know, therefore this is really smart. Thank you. Thank you. But, as we move to the next step, basically Sorina explained the practical use on the critical issue of universities worldwide banning the use of AI, ChatGPT. They tried with anti-plagiarism software; it doesn't work. Even OpenAI stopped using its anti-plagiarism software, therefore this is not an option. Therefore our message, and it was successfully accepted, there are some anecdotes about how some professors reacted to it, but we won't mention the names. But the academic community reacted: no, we are in charge. Forget AI. We said, no, AI can be an interlocutor, it can sharpen your thinking. As Sorina proved practically, and students loved that, because it can sharpen your thinking. Then we comment on the questions the AI asks and the answers it provides, and say this is good, this is stupid, this works well, this works badly. Therefore that element is critical. Now it is going to change the educational system profoundly. We are of a similar generation, let's put it this way, depending on the educational traditions, but there was a lot of learning by heart. There was a lot of listening to ex-cathedra professors in my educational process. And the few professors who basically acted like ChatGPT, who questioned and answered and engaged with my stupid answers and stupid questions, are still the people whom I can remember. Therefore that element of conversation AI can help us with. And now the argument is: don't kill the messenger, don't put your head in the sand. Let's see how AI can help us achieve the critical elements of any educational system. It can improve critical thinking, it can improve creativity. Therefore our argument, and we can substantiate it practically, whatever we are mentioning today can be substantiated practically, is that AI can be a great help for real education.
I'm sorry, not for the Bologna style, taking the assignments and the number of credits and this and that. That's another story we can discuss. But for, I would say, ancient Greek or Roman education about inquiry, creativity, questioning, and considering yourself a dignified thinker who can engage in the thinking process. Now let me, this is for example about the Ubuntu ethos on these philosophical issues, where you can find really powerful thinking from Africa that can enhance artificial intelligence and basically should be codified, especially if companies or, hopefully, African actors deploy AI in their context. And this is the first building block for bottom-up AI, which was the title of the session. We have to codify local traditions, practices, and ideas that deal with questions of family, of universal and individual creativity, knowledge, happiness, and whatever we ask ChatGPT today, or an even more advanced system in the future. It cannot be designed only by the European philosophical and thinking tradition. This is the first point on genuine bottom-up AI. The second important aspect, which we have been doing at Diplo, and I have so many windows open, I will hope that is to develop, I'm sorry, to, okay. We argue that there are two points of relevance for bottom-up AI. First, it is ethically desirable because it lets us preserve our knowledge. It's not anymore just about data. It's about our knowledge. This is what defines us as individuals, as humans, as a civilization, as a culture, as a family. And we are speaking about an ultimately critical discussion for the future of our society and each of us individually. And what we did, we basically said, okay, what can we do? And first we went for open source.
And as you can see, there is a very critical discussion about big systems bringing fear and danger as a risk for society, mainly by a few big companies, OpenAI, Google, a few companies, you know, usually Sam Altman and these people were touring the Congress and places all over the world, which is a bit of a paradoxical situation. They created something and are telling us, hey, guys, it's very dangerous. I said, okay, but stop investing in it if it is too dangerous. Of course, there is a bottom-up competition argument, but there is something strange on a very logical level in these things. And most of them are very nervous about open-source AI, except, and if somebody had told me that he would become one of my heroes, I would have been very surprised, Mark Zuckerberg. Meta created Llama, and they made it for their own reasons, competition with Microsoft, with Google, and other actors. But Llama is doing quite well, and there are developments like Falcon in the United Arab Emirates. There are now developments with quite powerful models, which brings things to a relatively simple issue, and we can now discuss it. It is not that much innovation; neural networks were an innovation when they were introduced. But now you basically need a lot of hardware, to be friends with NVIDIA, and basically to have processing GPUs and a lot of hardware to process. If you can invest in that, you can train big models. That's another issue which makes me personally nervous. It's forget the garage, forget the bottom-up scenario. Except, for the time being, there are pushbacks. And there will be dynamics in this way. Therefore, the first element is the open-source approach. The second is that you need high-quality data. And that will be an interesting story, because most of these companies more or less processed trillions of, I don't know, whatever, books. I got a bit lost when it comes to this number, over the billion, but trillion something.
And now they have come to the point that they cannot get any more high-quality data. Therefore, they are doing so-called annotation, or data labeling, mainly. You know the Kenya case with OpenAI. There was the strike of people who were working on OpenAI data. But basically, they sit next to each other and they annotate, saying this is a bird, this is a cat, or this text is useful, this text is bad, and other things. I will show you how we do it at Diplo, as a sort of annotation. But this is, I would say, the key diagram, because the quantity of data is limited by definition. You know, there is this idea of AI creating data itself, but I'm not sure that it will go too far. And you have the quantity and quality of data. Therefore, quality of data will be critical. And then, even with small data, if you have high quality, you can create AI. That's basically what is going to happen in the coming years. And this is the reason why companies are very nervous. They are rushing to get quality data, in order to capture that future competition. Now, what we do, and you can read the blog post, but what we do, we have a system which basically annotates any text. Therefore, when Sorina and I read a text, we annotate the text. And our teaching system is based on annotations. Therefore, by teaching, by doing research, we are creating high-quality data. It's integrated into the work. And I will show you practically how it works. Sorina, if you don't mind. Okay, for example, for example, for example. You are following, obviously, developments in the Middle East. You are on Al Jazeera. And you are reading the text. And you will say, I'm not now, I'm just inventing the argument. You will basically, I will use the highlight. Let me see. If I'm in Google, you will use highlight. You know how it works. It usually, often, does not work when it's needed. When you try to show it. Ah, it's public. Okay. Okay. Or you can open.
You annotate. And I write in an annotation: Sorina, what do you think about this argument? Sorina will answer this. In this case, it's public. She will answer this. She will receive the annotation. And the two of us are adding a new layer of thinking on the text on Al Jazeera. Now, we have been using this as a teaching method for the last 20 years. Those of you who are Diplo alumni know that I designed this method based on the metaphor that I like to highlight the text and write something in annotations on the side, or have a sticker. We developed this system 20 years ago. But this is now the critical system of adding layers of quality on the text. Now, when AI comes and sees this text, ChatGPT will just process it. But in our case, if there is a discussion, we say, aha, this paragraph is important. Jovan asks Sorina, Sorina answers. And then Sorina and I are developing our, basically, our very local bilateral AI, built around knowledge graphs. Therefore, we can then share it with the rest of humanity, or keep it for ourselves, or share it with Diplo, or share it with you, share it with others. Therefore, our idea is that we can bring AI back to individuals. And then develop big systems. Ultimately, why should I send it to the big system? Well, I can do it. We can keep it for ourselves. And then share, as our human right, our right as citizens of society, with the rest of society. Now, this is the key concept behind bottom-up AI. Now, has it triggered some ideas for questions or comments? How does it work, practicalities, anything else? It's a bit intimidating where you have to stand and walk next to the mic. But if you can shout also, I'm fine with any question or comment. So far? So far? No. Therefore, this is the basic idea. Return, let us preserve, our knowledge. Why this knowledge that Sorina and I will create around a discussion? She can comment then on what's going on today in Israel and Palestine.
Why should we share it with somebody else? Why don't we preserve it and then share it as our knowledge? It can become much more complex when you annotate complex texts, philosophical books, other texts. This belongs to us. Then we, at Diplo, share it. You see, it's public. We share it because we think everything should be Creative Commons. But we are very nervous if, because of technical facilities, we have to contribute it to OpenAI or to Google or to Baidu, whoever is providing the system. What happened with Google 10 years ago, or Facebook and others, when they basically commodified our data and our use of the internet, is now starting at a much higher level, with knowledge. And that's basically idea two: bottom-up. One thing is that we talk, and I explain it to people who have become interested in this. The other question is whether we can prove it in practice. And this is different: whether you have a system that can prove in practice that it can work. That's basically what we have been doing with bottom-up AI, returning AI back to people, with all their strengths and weaknesses. Sorina?
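To make the workflow described above concrete, here is a minimal Python sketch of how an annotation thread could be flattened into labelled records for a local, bottom-up AI. The class and field names are invented for illustration; this is not Diplo's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One layer of human judgement added on top of a source text."""
    author: str
    highlight: str          # the passage the reader marked
    comment: str            # what the reader said about it
    replies: list = field(default_factory=list)

def to_training_records(source_url, annotations):
    """Flatten an annotation thread into (passage, judgement) pairs
    that a small local model could later learn from."""
    records = []
    for ann in annotations:
        records.append({
            "source": source_url,
            "passage": ann.highlight,
            "judgement": f"{ann.author}: {ann.comment}",
        })
        for reply in ann.replies:
            records.append({
                "source": source_url,
                "passage": ann.highlight,
                "judgement": f"{reply.author}: {reply.comment}",
            })
    return records
```

In this toy version, a two-person exchange over one highlighted paragraph yields two records, each tied back to its source, which is exactly the kind of small, high-quality dataset the speaker argues will matter more than sheer quantity.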

Sorina Telenau:
Maybe we can give one more close-to-practical example of how this could be implemented. We're having these discussions in Geneva. A part of our work is to support the engagement of small and developing countries in digital governance and digital diplomacy, in all these big organizations in Geneva, but also beyond. We're interacting a lot with missions in Geneva, and we hear a lot, especially from the smaller ones, how they cannot follow everything and anything, because, well, there's a lot. And also how sometimes they don't have enough time to research what they have done before, to actually come up with a position to present at some organization or some negotiation. So in discussions with them about this whole idea of bottom-up AI and how we can or cannot use technology, this idea also came up. Can a Ministry of Foreign Affairs develop its own AI system to use for its own purposes, instead of putting data into ChatGPT or Bard or whatever else, and actually rely on the wealth of knowledge it has developed over the years? The simple answer is yes. And should they do it? Again, the simple answer would be yes, because you don't give your data to a bigger system out there, and you don't rely on all the other information that might be coming from different sources; you rely on what your Ministry of Foreign Affairs has developed over the years: policy papers, documents, and whatever else. And the question would obviously come here as well: can you rely completely on AI alone to come up with a position that your diplomat will negotiate in an intergovernmental process? No. But you can use it as a starting point to save time, because you don't have that much time to actually come up with something. If you have a starting point, and then you bring your own expertise and your own abilities, that would help. So this would be one example of how we see bottom-up AI happening and helping, in this specific example, smaller countries.
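A minimal sketch of the "rely on your own archive first" idea, assuming plain-text policy documents and simple word-overlap scoring. A real ministry system would use embeddings and a language model; the function names here are illustrative only.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase words with basic punctuation stripped."""
    return [w.strip(".,;:").lower() for w in text.split()]

def score(query, doc):
    """Cosine similarity between query and document word counts."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def draft_starting_point(question, archive):
    """Return the ministry's own most relevant document, never an
    external source: the diplomat then edits from there."""
    return max(archive, key=lambda doc: score(question, doc))
```

The point of the design is in `draft_starting_point`: the candidate set is only the institution's own archive, so the output is a time-saving starting draft grounded in its own positions, not an external system's answer.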

Jovan Kurbalija:
And here is, we may just display it again if you don't mind, the conclusion from the discussion over the last week, well, 10 days, at the General Assembly. We processed all the statements delivered: President Biden, heads of state, basically saying what they want to do and what their views are on different issues, from climate change to the Ukraine war to digital. And we asked: what did they say about digital? We processed that, and we got a very interesting report. On artificial intelligence, for example, line by line, by relevance: what Barbados said, what Ethiopia, India, Malta, Andorra, Somalia said, in bullet points. And then you also have an in-depth report with the statements of each country, the transcript of the session, and the summary. On Albania, say, you can see how many words, the speech length. What is a knowledge graph? I mentioned already that the knowledge graph is critical. You can do a knowledge graph on anything. We'll be having knowledge graphs about all sessions at the IGF. It shows proximity of thinking. Could we have, Sorina and myself, a knowledge graph about today's session? What were the stances? What was Albania arguing? What are the arguments? What is the speech itself? What is the summary of the session that was hosted by Albania? And then, what was interesting, we also asked AI, based on all the statements, putting together all the knowledge in the General Assembly, or in the IGF, where we'll do a similar thing: what should we do to combine action on climate change and gender? I hope they're not testing the system, because now they're shifting. Let's see; I hope it will work. The system gives the answer to the question based on all the speeches delivered. We won't read it now, but that's basically what is delivered.
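The "proximity of thinking" idea behind these knowledge graphs can be sketched as follows, assuming topics have already been extracted per statement. This is a toy version; a real pipeline would extract entities and topics from the transcripts automatically.

```python
from itertools import combinations

def shared_topics(statements):
    """statements: {speaker: set_of_topics}. Returns graph edges,
    keyed by speaker pair, labelled with the topics they share."""
    edges = {}
    for (a, topics_a), (b, topics_b) in combinations(statements.items(), 2):
        common = topics_a & topics_b
        if common:
            edges[(a, b)] = common
    return edges
```

Run over a whole General Assembly or IGF, such a graph is what surfaces the unexpected link between, say, Bangladesh and Slovakia on climate change and digital commerce.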
When there was a session in the Security Council, we did the same thing. For each session, as you know how it works with the multi-stakeholder advisory group, you have at the beginning the key question, then the answers, but also the parts of the speeches from which AI generated the text. Unlike ChatGPT, which will just give you the answer, we said no: we want to ask AI to tell us which parts it used. For example, this answer was generated from parts of the speeches of the professor from King's College, mainly his speech. Some other answer was generated from, okay, Malta's speech. Or you can go through basically 360 questions that are based on the transcript and generated around an idea: what is the answer on climate change from Bangladesh, Nepal, Slovakia? And you suddenly realize that Bangladesh and Slovakia have something in common when the discussion question is about climate change and digital commerce. You basically discover a completely different event. And this will happen with the IGF. Maybe, at a session on AI, there was somebody else discussing bottom-up AI that I'm not aware of, maybe not calling it bottom-up AI, maybe calling it organic AI, or something like that. And you suddenly say: aha, here is a knowledge graph between Jovan, Sorina, John, Pietro, Mohamed, and the other sessions, and, okay, I didn't know that we are doing the same thing. I'm just giving you very concrete examples. As Sorina said, small states got really excited about it. Djibouti, say, has three diplomats in Geneva. They don't have a chance to follow all the sessions in Geneva on health, on migration, on human rights. But if they have this system, they will receive an alert: hey, by the way, Djibouti, at working group 70 or 100, I don't know how many working groups there are at the ITU or WHO, there was a discussion of relevance for your maritime security.
Because they are a big port, they're very interested in maritime security. By the way, follow that discussion. Suddenly, you have an equalizing aspect of AI: it lets small states take care of their specific interests. We just highlighted a few options, and we'll probably close with this. We started with philosophy; this is ultimately a philosophical issue. But we gave you a few concrete applications in education, in diplomacy, in the IGF itself. You can follow the IGF itself, and it would be interesting to hear your reflections on the quality of the reports and the ideas around them. And then about these practicalities: how it can improve inclusion in global governance, for small countries and small organizations to follow what's going on around their interests. The ultimate message is: let's return AI to citizens. Let's make it bottom-up. Let's build around it. And let's find practical uses. It's enough of the big talks about ethics and AI; here are practical uses. And a last point, which is important; it was part of the title of this session. Let's preserve human imperfection, because we cannot compete with the machine. Some people were critical about my title for this session: that we should let AI hallucinate, as we sometimes hallucinate. If you think about the major breakthroughs in the history of humanity, they're usually related to times when some people had a chance to be lazy: in ancient Greece, or in the time of the British Empire, when all the major sports were invented, from soccer to tennis, because these people had a lot of time; others were working for them. I won't go into that. But we should leave a bit of room for imperfection. There is one blog post, which I cannot find, about the need for human imperfection; we should facilitate that. We won't win the battle with the machine on optimization. This is not possible.
But we should preserve spaces for imperfection, for being lazy, for having time to reflect, for developing arts, for making mistakes. And this is the reason why I went to the flea market in Belgrade to search for the new Turing test. Flea market traders, as you know, are masters of human psychology. And I said, they're completely imperfect, always on the edge of the criminal milieu, and so on. I was going through the market and approached one of the traders, a legitimate one, and asked him: okay, tell me, what do you think? In search of the ultimate limits of artificial intelligence, with a colleague of mine, Misko, we have been trying to see if AI can replace an experienced trader at the flea market. The AI told us: approaching a seller at a flea market can be a great way to find unique items at a reasonable price, but it's important to be aware of the potential risks of being ripped off. It's usually best to avoid revealing too much about your level of experience. Don't be confrontational or aggressive, as this can put the seller on the defensive and make negotiations more difficult. Approach negotiations with confidence and a clear idea of what you're looking for. You know, it's like: okay, I'm going one more round. Well, I got confused. Trader Misko and AI gave us very similar answers. But, and it's a big but: AI can explain what to do. However, AI cannot yet act as a flea market trader. For the time being, flea markets remain a refuge for human uniqueness. In my search for human imperfection, I go to flea markets and other places to see what is going to be our niche. Because we cannot compete with machines; they will always be more optimized than us. But we have a right, and I would say a duty, to preserve the core humanity which has been passed to us from previous generations in all cultures, from Ubuntu to Zen to Shintoism to ancient Hinduism to Christianity to ancient Greece.
The underlying element is that humans are in charge. And that is one thought which I would like to leave you with: in this battle, we will be having a tough time. But we can do it, and we showed practically how it can be done with bottom-up AI. I'm getting some sign, but my human imperfection is. No, I'm looking at the room.
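The alert mechanism for small missions described above, matching discussions to a mission's declared interests, could be sketched like this. The forum and working-group names are hypothetical.

```python
def alerts_for(mission_interests, sessions):
    """mission_interests: set of topics a small mission cares about.
    sessions: iterable of (forum, working_group, topics) tuples.
    Yields a human-readable alert for each relevant session, so a
    three-person mission only reads what matters to it."""
    for forum, group, topics in sessions:
        hits = mission_interests & topics
        if hits:
            yield (f"{forum} {group}: relevant to your interest in "
                   f"{', '.join(sorted(hits))}")
```

For example, a mission whose profile contains only "maritime security" would be alerted to a hypothetical ITU working group discussing it, and would hear nothing about the WHO session it can safely skip.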

Sorina Telenau:
I’m hoping now we can have a bit of a dialogue. So please, questions, comments, your own thoughts about your interactions with AI, how we preserve our humanity in all this, how we build bottom-up AI, how we rely on it for whatever your work is. Yes, please.

Audience:
Hello, everyone. My name is Emanuela. I'm from Brazil, and I represent Instituto Alana, which is an organization focused on defending children's rights on the internet and in the environment, and focused on social justice as well. I have a few questions for you. One thing that I thought was really interesting about the diplomatic view and the advocacy view is that this approach you presented could be really good for advocacy organizations, because you have a knowledge-management-system approach that I think could be very helpful and contextual. But my question is very practical: how do we incorporate this, considering that in Brazil, especially, I see that a lot of NGOs and organizations are not very tech-savvy? You talked about open source; I want to know the practical side. How can we benefit from this kind of technology? I have a second question. Another issue that we face is how to increase voices and participation in such a big world. How can we increase participation on these matters about tech, and do you think that this bottom-up approach could be used to organize different participation approaches from different places and, you know, categorize knowledge in a way that is sensitive to local perspectives, but with more data analysis? And the last question, sorry, but just to fill up the debate. Sorry? You compensate for other people. One thing that worries me a lot is the structural unemployment that we are seeing in the service sector. This is a sector that employs a lot of people in Brazil, and we see the increasing use of chatbots and automation. So I was wondering: what are the economic perspectives of the bottom-up AI that you are presenting? How do we create economic opportunities for people that are rewarding, that signify, yeah, dignity?
Because we see a lot of unemployment and we don’t see a lot of – anyway, I think you guys understood, like, the basic approach. Thank you a lot.

Jovan Kurbalija:
Thank you for excellent questions, all inspiring. Let's probably start with the third one. This is exactly what I mentioned when I said that instead of discussing what may happen with artificial general intelligence basically killing us, which you can hear from Sam Altman and his gurus, there are things that are happening now. People are losing jobs. And there is a risk that a whole generation, if I can use the slang, could basically be thrown under the bus. Not only blue-collar jobs anymore, but white-collar jobs: lawyers, accountants, I would say many of us in this room. That's a big, big problem. And how do we deal with this, now and here? I hope that at the IGF we can report, with AI, on what the IGF will say about that, but it's a huge problem. Our argument, and a strong argument, is that a job is not only about universal income. It's a question of dignity. It's a question of realizing your potential. It cannot be reduced to: oh, you will get the money at the end of the day and go fishing, or do whatever makes you happy. No. Throughout civilization, a job has been the way of realizing our potential and appreciating our core human dignity. It's a big issue. This is why this is a social contract discussion of utmost relevance. And, for example, the Ubuntu civilization and African traditions are interesting: you are because I am. There are different ways of seeing it, not just optimization, optimization, optimization. I don't have an answer, but I would say that this should be at the top of the agenda for whoever discusses policy and other issues. Do we always need to optimize? In some cases, we may step back. It will be counterintuitive; it will be difficult to promote. But we should introduce this human right: the right to be imperfect. We have that right, because it defines us as humans. Sorina, if you want to add anything on that.

Sorina Telenau:
No, no, no. Shall we take the other two questions? We had the one on how bottom-up AI might be able to help better representation of underserved communities. I guess there are multiple ways. First of all, as Jovan was saying earlier, making sure that we do use knowledge from these communities when developing these AI systems. And then, as in the examples we were giving of small missions and similar smaller entities, that would be a way to help them be better represented in the discussion. What I didn't understand from your question was whether you're talking about representation in governance discussions or representation in the development of AI. There is the example we were giving of following the reporting, for instance, from the UNGA, which would be able to alert the smaller countries: okay, this is something that might be of interest to you; this is a country you might want to build an alliance with. In this way, it can help foster more meaningful engagement where these countries cannot follow everything and anything. And then there is the other example we were giving of how it can help build a position, to get to that meaningful engagement. As we usually say, if you're not at the table, you're on the menu. AI, in these examples, can help avoid that very unpleasant situation, especially for smaller countries that can't afford to follow everything because of limited resources. So we do see these issues, and it's not only us; countries are seeing it themselves. We have had quite a few discussions in Geneva with smaller missions.

Jovan Kurbalija:
What Sorina said about small countries, let's say in Geneva, applies equally to small NGOs or civil society coming from Brazil or any other country. You don't have the human resources. I mean, the Diplo delegation in this place is three of us in the room, and Anastasia will come. Compared to other delegations, it's basically a statistical error. But we will contribute to the public good through this reporting. Now, practically, what can be done? And this is the most important part: we are starting a project, supported by the European Union, where we will try to push some of these ideas on the engagement and inclusion of civil society. How would it work? Your organization deals with jobs or? Child rights, okay. You will make your map, say a knowledge graph, based on your documents, your Zoom meetings, whatever you want to put in; it will be your knowledge graph. You will then apply it to the whole analysis of the IGF, and you will say: aha, here is a similar problem that people face in Uganda, or in Romania, or wherever. Suddenly, out of the transcript, you will get hints on how to do it, or how to frame the discussion next time, for the next IGF, to be more persuasive. Because you realize that this argument on child protection didn't fly at this IGF; people just brushed it aside, you know how it works. But somebody's rhetorical approach worked wonders, and we will get really deep insights into this. And, what is beautiful, through the process you develop the AI. Because by commenting on what worked and what didn't work, you have reinforcement learning, and your system gets stronger at every stage. Therefore, in two or three IGFs, even with a delegation of two people, you can sometimes have the impact of an organization of 200. Because you know what your focus is, what your strengths are, what sessions you will follow, and what you will do practically. That's powerful.
Now, how to do it? The best way is that Paulina, my colleague, can brief you later on, or you can exchange details about this project that is starting in January, which will have as one of its elements how to use AI to enhance the participation of local communities and other actors. And, as Sorina said, by developing your knowledge graph, you will capture the specificities of Brazil. It won't be the generic child safety or child rights developed by a big system; no, it will be specific to Brazil, or even to local communities. I don't know Rio, I don't know Brazil very well, but to the specific problems that exist in communities. Therefore, from the problem of the future of work and jobs, which is a big issue, to what Sorina explained about developing the system, to practicalities: contact Pavlina, and you can join some activities or the project. I think we have a partner from Brazil as well, and it could work that way, practically. And it's very important that we are practical on AI; otherwise the discussion becomes too theoretical. Let's see if this inspires some other questions or comments. Critical ones, challenges; we need some. Or you're just playing with your hair, you know. Good. No questions, everything is clear? Or extremely
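The feedback loop just described, commenting on what worked and what didn't so the system gets stronger at every stage, could be sketched as a simple weight update. This is a toy stand-in for reinforcement learning, not Diplo's actual method; the learning-rate value and argument names are invented.

```python
import math

def update_weights(weights, feedback, lr=0.5):
    """weights: {argument: score}, defaulting to 1.0 for new arguments.
    feedback: {argument: +1 if it persuaded the room, -1 if it fell flat}.
    Multiplicative exponential update, so framings that keep working
    steadily gain weight over successive IGFs."""
    for argument, signal in feedback.items():
        weights[argument] = weights.get(argument, 1.0) * math.exp(lr * signal)
    return weights
```

After a few events, the weights encode the organization's accumulated sense of which framings land, which is how a two-person delegation could gradually punch above its weight.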

Audience:
I’m just wondering what you’re learning about the bigger systems, so that, are there ways in which you are giving them feedback or ways in which you are noticing sort of systemic problems that really ought to be addressed in the models themselves?

Jovan Kurbalija:
Well, bigger systems are big, and they're big not only in the amount of data they process or the money they attract; they also basically don't listen to small guys like us. They have important things to finish, going to the US Congress or the EU Parliament or the Chinese bodies, wherever they discuss these issues. So there is a bit of arrogance, an element of hubris, I would say, which could be dangerous, because it's not only their business; it's also our business, about our future and our knowledge. In any technology you have magic. I still remember when I first used a mobile phone; it was magic. Technology is a bit magical, the internet and other things. We are now typing, but when you think about it, there is an element of magic. Now, AI brings magic on steroids, and Sam Altman can go, I'm mentioning him very often because I'm very critical about this, and say: oh guys, AI will eat us for breakfast. And I say: okay, but why, how, when? Give us something. We cannot trust you just on these words. First, let's discuss jobs today. Let's discuss disinformation. Let's discuss the destruction of public spaces, online spaces, to which AI contributes; it's not only AI. We found that a problematic discussion, and especially the non-explainability, or partial explainability, of neural networks adds to the magic. We put something in, AI does something, and you get something out. This is why we always insist on having the source of the answer to the question. Yes, here is the source; this is the first step. We don't know how AI got this answer, but we know, and ChatGPT, and Bard, and Baidu, and the others can know, what the sources for that answer were. This is already the first step.
Therefore, we see a lot of lack of transparency and confusion, and I'm afraid that it will be fertile ground for conspiracy theories. Because when you are just saying, well, trust us, we want to regulate you, don't ask questions, just trust what we are telling you, then, for me personally, I have a problem with that. I don't think that things cannot be explained, at least the source of your conclusion. I know a neural network is not easy to explain technically. I have a colleague who is into AI, and he said: listen, be careful when you go to these IGFs of the UN; if you introduce explainability of neural networks, half of us will be in jail. And I said, okay, there are realistic concerns, but there are things that can be done. That's my criticism of the big, big systems.
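The "always show the source" principle argued for above can be sketched as a thin wrapper around any generator. The `retrieve` and `generate` callables are placeholders for whatever retrieval step and language model one actually uses; the shape of the transcript records is invented for this example.

```python
def answer_with_sources(question, transcript_parts, retrieve, generate):
    """Refuse to return an unattributed answer: every answer carries
    the list of speakers whose speeches it was built from."""
    support = retrieve(question, transcript_parts)
    if not support:
        # No supporting speeches found: better no answer than an
        # answer that cannot be traced back to a source.
        return {"answer": None, "sources": []}
    return {
        "answer": generate(question, support),
        "sources": [part["speaker"] for part in support],
    }
```

The design choice is that attribution is enforced at the wrapper level: whatever the underlying model does internally, nothing reaches the reader without the first explainability step the speaker asks for, namely the sources.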

Sorina Telenau:
If I may just reinforce one of your points: in all these discussions about AI governance, you've probably followed Sam Altman and a few of the other big guys saying, yes, it's a huge mess that we've created. Well, they don't really say we've created it, but: AI is coming with all these challenges, and it's going to break the world, and destroy us, and this and that, and we need to regulate. But if you look carefully at this discussion on we-need-to-regulate, what they're saying is: we need to regulate future AI. Not the AI we as big companies have developed, but future AI. So: let us do our thing, we'll continue doing our best, and you should worry about the future. And I think that's problematic. I think we should hold them more to account for what's happening right now. As Jovan was saying, we have problems with AI right now that we should be solving before looking at the future. Not that we shouldn't worry about the future and what might happen, but maybe put more resources into what's happening right now and how we address today's challenges. And that would be it.

Jovan Kurbalija:
I’m looking for one presentation which we may share later on, where we are basically, ah, here it is. I was recently in Brussels, obviously they’re preparing the new regulation, and we said okay, let’s see what does it mean to regulate AI. You regulate hardware, you regulate data, you regulate algorithms, and you are the first to see it publicly. We didn’t show because there was some problem with PowerPoint during that session. And we regulate apps. What does it mean practically? What do you regulate? For example, as Serena said, you can’t hear some Altman saying regulate apps, or even data. Why they are not showing sources? Obviously, if you find the book which is copyrighted as a source, there will be a problem. As you know, there are already court cases in the United States against open AI. Or hardware computing power, where things are happening with the NVIDIA and the GPUs. What do you regulate? Read carefully. Next time when you hear, listen to some Altman, you can’t find except, oh, regulate AI capabilities. What does it mean? Basically, we created these capabilities, let us stop the other developments, and basically, I’m now a bit cynical. Let’s have monopoly on this. I said, no, that’s against competitive market. It’s against creativity. It’s, again, other issues. But there are problems that we have to deal with. How apps can be misused? How people can be thrown out of the jobs? How this information can be generated? You know the whole story. It’s part of public discussion. But where do we regulate? You can’t hear companies talking about data. That’s non-existent. They are already concentrated on this blue one, which is basically vague. They avoid apps, red one, because this is very concrete, you know. And hardware, it’s more geopolitical discussion these days between US, China, and these big players who is going to have hardware capability to process data. 
We'll soon be publishing an article on this, to bring clarity to what I started calling the winter of excitement, spring of metaphors, summer of reflections, autumn of clarity. There could be disagreements. But let us not misuse the magic of technology, of AI. Magic is important; it can inspire. But let's not misuse it. Let's keep the magic of technology while discussing governance issues where they are. That's it. Looking again at the room. There is always this sort of tension in the air. We want questions. No, we don't want to force you to ask questions. Do we have 10 minutes more? No, we have 25 more minutes. Okay. Let's listen. Let's chat in the corridors if there are no other questions or comments. Oh, you have two questions on this side. I hope it is not a forced question, because we are asking for questions. No? No, no, go ahead.

Audience:
I was thinking, sorry, let me introduce myself. I'm Julia. I am a youth from Brazil; I am here with my delegation. I was thinking, when you were talking about Ubuntu and other societal aspects of the philosophy behind AI, or what could be the philosophy behind AI, and it got me wondering if there is any initiative to use AI as a means to preserve and develop small communities' history and culture, and have them not be lost in the transition that we are experiencing: losing practical and physical knowledge and ways of sharing knowledge, as families are being estranged by recent modern changes, moving too much, being displaced by technology and job opportunities, and so on. Is there an initiative or a group or an entity working towards preserving small cultures, or at least, not small cultures, but trying to give small cities and small communities access, so they can update and upload their knowledge? Because the knowledge of villages and small cities can comprise physical practices, agricultural practices, stories, mythology, and so on. That's also a personal question for me, because I think about how much we are losing. I'm from Brazil: how much we're losing from being away from the countryside, with the cities expanding and the countryside shrinking, although the countryside is the majority of our landmass.

Jovan Kurbalija:
Great. I think it's an excellent question. The short answer is yes. Let me give you the example of Diplo; I always try to start with myself. We are a small organization. Imagine we are a small community somewhere in Amazonia, where we live on the river, and we had a culture. We had to deal with the questions that all humans have to deal with: questions of family, love, the purpose of life, what you do after you die, what you do with your kids, and these things. This is knowledge. This is very valuable knowledge, maybe not codified in the books of the big philosophers, but it is core knowledge. Can it be saved? Yes. Should it be saved? Yes. Are there initiatives to save it? No. Why is that the case? I can't tell you, but it's very sad, because we are losing this diversity of humanity. And I don't think there is a hierarchy of knowledge and experience. Maybe money and power are not equally distributed, but the human capability to innovate is distributed. And that's basically how it can be done. Now, is there an initiative? No. Can it be done with open-source tools? Yes. Is it easy to do technically? Yes. Organizationally? No, because you have to change habits and quite a few other things, but it is not undoable. Is there interest in supporting it? No. You will hear a lot here about inclusion and cultural diversity, but when it comes to concrete things, there is no action. And I think countries like Brazil, especially the new government, which I think is keen on diversity, should push organizations like UNESCO to do something to preserve this knowledge by using AI. And, what is your name? Julia. It could be Julia's initiative. We have a question from a colleague here. Could you just, well, the process is that you have to stand next to the mic. Please.

Audience:
Yes, thank you. My name is Nicodemus Nyakundi. I'm from Kenya. I've come under the Dynamic Coalition on Accessibility and Disability. I work in KICTANet on digital accessibility, more specifically for persons with disabilities. There's something that has been disturbing my mind, and I really need to understand it when it comes to AI. AI has not deviated so much from the normal approach to machines and computers: it is based on input and output models. So we have AI that is mostly trained on perfect data. I call it perfect because it is predetermined data, and that is considered to be normal. But we want that AI to work with the imperfect human, a human who makes errors. The good thing is that we make mistakes, but then, as humans, we go back and correct the mistakes. So my question is: what approach should we take to ensure that AI is as human as us, that it can work with persons with disabilities and contribute to the basic life needs of persons with disabilities, so that it does not create more marginalization? Because it will come to the point of, say, defining another form of perfect, which not all of us are. Thank you.

Jovan Kurbalija:
Sorina? OK. Let me unpack a few issues, the first about people with disabilities. AI offers serious possibilities here. We are seeing it with transcription and with other tools for people with disabilities. Yet people with disabilities are not prominent in AI debates. And here again, small communities could ask actors like the UN to check their accessibility, how people with disabilities can access them. We recently did a study, and we are going to check diplomatic websites for how disability-friendly they are. That push has to come strongly from bottom-up communities and other actors. This is the first question. Should we make AI look like us? That’s a philosophical issue, and I’m not sure. I would preserve AI as a tool in our mindset. It will be a powerful tool, but always a tool, which Sorina used during the course this summer to enhance learning. To have it always as a good tool: not as a master, but as our servant. That’s very important mentally. It will be a powerful servant, one that may revolt and may say, OK, I want to have some power. But that’s basically where I would keep it. Obviously, we will try to mimic humans; it excites us. If you read Frankenstein by Mary Shelley, this is the best example. Dr. Frankenstein wanted to create the perfect creature. And that creature, if you recall the book, was created to be good. Then it went out of the lab, and people were afraid. People became aggressive. The creature reacted and started getting nasty, which is basically how we now perceive Frankenstein’s creature. This is where I am, for example, very uneasy with anthropomorphizing AI, presenting it as human. Because it is exciting. You can have a nice event, people are excited: oh, Sophia, or whatever the names of all these robots are. Fortunately, I don’t see any Sophia at the IGF. Oh, Sophia can answer your question. And they said, no.
What we do instead is have a coffee machine as AI. It had its first session: for those of you who were at IGF Berlin 2019, it was a participant in one session. You can search for the IGF coffee machine. That’s something we have to be very careful about. Otherwise, we will end up like the creature of Dr. Frankenstein, because we will think that that creature is creating problems for us. But let me make one suggestion here. If the IGF gives you a chance to be a bit imperfect, and I share it here on the screen, you can go to the Philosopher’s Path here in Kyoto. I heard it’s a nice walk. Run away. Be a bit imperfect. Don’t attend all the sessions, except, thank you for coming to our session. Here is the leading Japanese philosopher who used to walk along this Philosopher’s Path. And you can see that he was reflecting on society, on purpose, on happiness, on other issues. I don’t know if you are going to have somebody from the philosophy department at Kyoto University, which was one of the best in Japan, but that would be an interesting discussion. Back to the tradition of Ubuntu, coming from Kenya. OK, it’s more towards the south. But there is that whole tradition of us belonging to a collectivity and being empowered by the collective, by family, by our surroundings. That’s the answer; again, the practical answer: if the weather is nice, go there. We don’t have cherry blossoms. I will criticize the IGF organizers for why it is not in April. But we can come back to Kyoto for that. The Philosopher’s Path is an interesting place where these thinkers walked, like Kant used to walk in what is now Kaliningrad, at that time a Prussian city. The famous Immanuel Kant walked the same route every day. He was late only one day, and why he was late remains a mystery, a pettiness of philosophical discussion. But I had forgotten the name of this Japanese philosopher. Oh, Nishida Kitaro, basically one of the best Japanese philosophers.
I plan to read him more carefully and see what we can learn from him about AI, and basically develop this discussion further. And my call for imperfection: try to discover this lovely city. You will have Diplo’s reporting anyway, so you can read what was happening. But should this be official at this point? No. I’ll get in trouble with the secretariat and these things. And thank you for coming. Let’s walk the talk, enjoy the corridors and chats, and basically continue this interesting debate about bottom-up AI and our right to be humanly imperfect. Thank you. Thank you, Sorina.

Audience

Speech speed: 148 words per minute
Speech length: 1035 words
Speech time: 420 secs

Jovan Kurbalija

Speech speed: 150 words per minute
Speech length: 8952 words
Speech time: 3590 secs

Sorina Telenau

Speech speed: 207 words per minute
Speech length: 1649 words
Speech time: 477 secs