WS #78 Intelligent machines and society: An open-ended conversation
Session at a Glance
Summary
This discussion focused on philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers, Jovan Kurbalija and Sorina Teleanu from the Diplo Foundation, emphasized the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world. They raised concerns about the centralization of knowledge by large tech companies and advocated for bottom-up AI development to preserve diverse knowledge sources. The speakers questioned how AI might affect human communication and creativity, and whether humans should compete with machines for efficiency. They introduced the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimization. The discussion touched on the anthropomorphization of AI and the need to consider other forms of intelligence beyond human-like AI. Practical examples of AI tools for knowledge management and analysis were presented, demonstrating how AI can be used responsibly with proper attribution. Audience questions addressed topics such as AI ethics education, the potential personhood of advanced AI, and open-source approaches to AI development. The speakers concluded by proposing further philosophical discussions on AI’s impact across various cultural traditions, emphasizing the importance of examining what it means to be human in an AI era.
Keypoints
Major discussion points:
– The need to focus on immediate and practical impacts of AI rather than long-term hypotheticals
– Concerns about AI’s impact on human knowledge, communication, and identity
– The importance of maintaining human agency and imperfection in an AI-driven world
– Questions about the nature of AI intelligence compared to human intelligence
– The need for more philosophical and ethical considerations in AI governance discussions
Overall purpose:
The goal was to raise deeper philosophical questions about AI’s impact on humanity and encourage more critical thinking about AI beyond surface-level discussions of ethics and bias. The speakers aimed to challenge common AI narratives and highlight overlooked issues.
Tone:
The tone was thoughtful and somewhat skeptical of current AI hype and narratives. The speakers took a critical stance toward simplistic AI discussions while maintaining curiosity and openness to AI’s potential. There was an underlying sense of concern about AI’s societal impacts, balanced with calls for practical engagement rather than alarmism. The tone became more interactive and solution-oriented during the Q&A portion.
Speakers
– Jovan Kurbalija: Director of the Diplo Foundation and Head of Geneva Internet Platform
– Sorina Teleanu: Director of Knowledge at Diplo Foundation
– Mohammad Abdul Haque Anu: Secretary-General of Bangladesh Internet Governance Forum
– Tapani Tarvainen: Electronic Frontier Finland, Natta Foundation
– Henri-Jean Pollet: ISPA in Belgium
Additional speakers:
– Andrej Skrinjaric: Head of linguists at Diplo Foundation
Full session report
The Philosophical and Ethical Implications of AI: A Critical Examination
This discussion, featuring experts from the Diplo Foundation, delved into the profound philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers emphasised the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world.
Key Themes and Arguments:
1. Introduction and Framing
Jovan Kurbalija, Director of the Diplo Foundation, opened the discussion by stressing the importance of understanding basic AI concepts to engage in meaningful discussions beyond dominant narratives of bias and ethics. He argued for a more critical examination of AI’s impact on human knowledge and identity, proposing the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimisation.
Sorina Teleanu, Director of Knowledge at Diplo Foundation, presented a series of thought-provoking questions to frame the discussion. She questioned the anthropomorphisation of AI and the tendency to assign human attributes to machines. Teleanu raised concerns about how AI might affect human communication and creativity, encouraging consideration of other forms of intelligence beyond human-like AI.
2. Philosophical Considerations of AI
The discussion touched on various philosophical aspects of AI. Kurbalija introduced the concept of a “right to be humanly imperfect,” arguing for the preservation of human agency and imperfection in an AI-driven world. This idea resonated with other speakers, who expressed concern about the potential loss of human elements in pursuit of AI-driven efficiency.
Teleanu expanded on her concerns regarding the anthropomorphisation of AI, highlighting the potential risks of attributing human characteristics to machines. She also raised important questions about the interplay between AI and neurotechnology, emphasising the lack of privacy policies for brain data processing.
A thought-provoking perspective on the potential personhood of advanced AI was introduced. The idea that if Artificial General Intelligence (AGI) becomes indistinguishable from humans in capability, it might deserve human rights, challenged conventional notions of humanity and consciousness.
3. AI Governance and Development
The speakers agreed on the need to focus on immediate and practical impacts of AI rather than long-term hypotheticals. Kurbalija criticised ideological narratives that postpone addressing current issues in education, jobs, and daily life. He advocated for bottom-up AI development to preserve diverse knowledge sources and prevent the centralisation of knowledge by large tech companies.
Kurbalija also stressed the importance of defining accountability in AI development and deployment, arguing that legal principles regarding AI responsibility are fundamentally simple and should be applied accordingly.
Henri-Jean Pollet emphasised the importance of open-source models and data licensing for AI development. He proposed systems to validate and test AI outputs, similar to human education processes, to ensure reliability and prevent “hallucinations” in AI-generated content.
4. Human-AI Interaction and Ethics
The discussion touched on various aspects of human-AI interaction. Teleanu raised questions about the impact of AI on human-to-human communication, wondering how increased reliance on AI-generated text might change how humans interact with each other in the future. This point highlighted the need to consider the long-term sociocultural implications of AI integration.
Mohammad Abdul Haque Anu, Secretary-General of Bangladesh Internet Governance Forum, expressed concerns about AI ethics education and implementation. He questioned who should be responsible for teaching AI ethics and how to ensure adherence to ethical guidelines in AI development and deployment, particularly in the context of developing countries.
5. Diplo Foundation’s Approach to AI Development
Towards the end of the discussion, Kurbalija elaborated on Diplo Foundation’s approach to AI development. He explained their focus on creating AI tools that preserve and enhance human knowledge, particularly in the field of diplomacy. These tools aim to assist diplomats and policymakers by providing quick access to relevant information and analysis, while maintaining human oversight and decision-making.
Conclusion and Practical Demonstration:
The discussion concluded with a practical demonstration of AI tools developed by the Diplo Foundation. Kurbalija showcased how these tools can be used to analyse complex diplomatic texts and generate summaries, emphasising the potential of AI to augment human capabilities in specialised fields.
The speakers emphasised the importance of continuing these philosophical discussions to examine what it means to be human in an AI era. Key unresolved issues included the effective implementation of AI ethics education, the long-term impacts of AI on human identity and interaction, and the ethical implications of AGI potentially becoming indistinguishable from humans.
This thought-provoking discussion challenged common AI narratives and highlighted overlooked issues, encouraging a more critical and philosophical approach to understanding AI’s role in shaping the future of humanity. The session ended with an invitation for continued dialogue and exploration of these complex issues.
Session Transcript
Jovan Kurbalija: No? No. Solution is very binary. Yes or no? Good. Well, welcome to our session, and sorry for a slight delay for our colleagues online and those in the room. You can hear me? You made some expression, no? Waiting in and out. Okay. But we have to, because of people outside, we have to speak. No? I know, I can tell you, I have been attending IGF since the first IGF. The biggest challenge, the two biggest challenges, are food sometimes, and the second is sound. Therefore, it seems the sound works now? Yes. It's okay. Good. Thank you for coming today. I'm Jovan Kurbalija, Director of the Diplo Foundation and Head of the Geneva Internet Platform. My colleague Sorina Teleanu is Director of Knowledge at Diplo. Therefore, Sorina is working a lot on the intersection between artificial and human intelligence. Human, artificial, artificial, human. One minute, one minute and a half, two? Yeah, the screen is gone. Okay. I think you go ahead a little bit. I have permission. I think audience's permission. Where are you coming from? I'm from Bangladesh. Bangladesh. Okay. You guys in the Indian subcontinent, you invented the number zero and put us in trouble. Otherwise, we were using Roman numbers and it was so easy. But you invented one and zero and the digital world, and all the trouble starts, you know, including sound in this room. I'm always teasing my colleagues and friends from India and Bangladesh, because the number zero, Shunya, was invented on the subcontinent, and it came through Al-Khwarizmi, who invented the algorithm, first to Arab traders, and then to North Africa and Mr. Fibonacci. Music is coming. It's better than me. Me. Speaking of digital challenges. Those are challenges. Oh, there was a proposal that I start singing, which is a terrible idea, I can tell you. Okay. Some people are giving up. I'm sorry. I'm sorry. Okay. I've been working on AI myself personally since 1992, when I did my master thesis on international law and expert systems.
At that time, AI was about expert systems, rule-based AI. You basically try to cover, in this case, international law: if there is a breach of this, then that, and the other thing. Now it's been five years since neural networks took off, and especially after the introduction of ChatGPT, we have been focusing on AI, but in a slightly different way than other institutions. We have been developing AI, and in addition we have been trying to see what the philosophical, governance, and political aspects of AI are. The principle is that you don't need to be a programmer, although we have quite a few programmers, but you have to understand basic concepts in order to discuss AI; otherwise the discussion ends up with the unfortunately dominant narratives that we have here at IGF, and not only at IGF but at many meetings: bias, ethics, and we can generate typical AI speech. A mix of bias, a mix of ethics. And what we notice is that there is a narration which is not going into the critical issues. What we basically did in this context was the following. I'm sorry. It should work. We were discussing: are we holding the globe together, or are we fighting, machines and people? And in most of our discussions, especially when the hype came in 2023 with ChatGPT, we were really concerned because, being involved in AI, we were concerned that there was… an almost surrealistic discussion. And those of you who are in this field, you know that there was a long-termism movement, of effective altruism, basically sending the message, you are, no? Like this? Upside, upside. Upside? Yes. Okay. So many details. That was basically the long-termism movement, which was saying: don't discuss AI. I'm simplifying. Don't discuss AI currently. Let's see what will happen in 20 or 30 years, and let's prevent AI destroying humanity. I'm caricaturing a bit, but that was the narrative. You can recall the letter signed by leading scientists, stop AI, and these things.
One worrying signal, which I noticed, was that it became a bit of an ideological narrative. And when there is ideology, then there is something fishy. I was born in a socialist country, and I can detect an ideological narrative in no time. And it was an ideological narrative, more or less similar to the communist narrative, which says: don't ask any questions now, just enjoy, just follow the orders of the party, and in 30 or 40 years we will be in the ideal society. If you complain, you go to the gulag and you get in trouble. Now, that was a bit of the initial narrative, and we said, okay, it's a bit tricky. We have to have a more serious discussion about it. Therefore, we started pushing for attention to the immediate risks of AI in education, in jobs, in day-to-day life, in the use of AI by content moderation platforms, by Google, in any walk of life. And we were particularly concerned about one aspect, which you can consider a mid-term aspect, which was the massive codification of human knowledge by a few platforms, mainly platforms in the United States and China. Therefore, when you interact with ChatGPT, you basically, as we know, you're training the model. And whether it is ChatGPT, or Google, or Alibaba, it doesn't matter where it is coming from, but the idea that knowledge is centralized and captured was very problematic. Therefore, we started developing bottom-up AI, trying to show that with relatively little resources you can develop your own AI, and you can preserve your knowledge. And we say that it is technically feasible, financially affordable, and ethically desirable. Because knowledge, which is codified by AI, defines our identity, our dignity. And it's extremely important that we know what is happening with our knowledge, or the knowledge of the generations before us. This is more or less the context in which we have been doing this, but I'll ask Sorina now to build more on these discussions.
Sorina Teleanu: Let's see if I can do this right. Good afternoon, everyone. Thank you for joining our session. I think what we wanted to do today is to have a bit of a dialogue around some of the more philosophical issues surrounding AI, because at least I personally feel we talk a lot about, you know, challenges and opportunities of AI, and how we need to govern it, because how do we deal with the challenges, and how do we leverage the opportunities? But what about the more human side of all of this? And I do have a few questions that I'm hoping we can go through quickly, and I'll just, yeah, no, I'll actually do it myself, if we can just maybe switch. I do like slides quite a lot, and at Diplo we do have quite a few very nice illustrations that I couldn't miss an opportunity to share with you. So I'm going to do a bit of that, bear with me. And then hoping to have more questions from you as well. What you will see on the slides is mostly questions. Things that I hope we would feature more in our discussions on AI governance, on the impact of AI on economy, society, humanity and whatnot, and which I feel we miss more often than not. So starting with this: we talk a lot about large language models, right? And generative AI and ChatGPT and all of these things. But what are some of the challenges in knowledge creation? How do we look at large language models and at generative AI tools? Are they a tool? Are they a shortcut? Are they our assistant, a new coworker? How do we relate ourselves to these tools? Are we even making conscious choices when we interact with them? Are we exercising our agency as humans, and to what extent? And yeah, the broader question: what roles do we imagine large language models playing vis-a-vis us humans as we interact with them? Are we missing the forest for the trees?
We talk a lot about generative AI, but are there other forms of intelligent machines, agents, whatever you want to call them, that we might need to focus on a bit more in our discussions on governance, policy, and their implications? And what does it all mean? And if we do need to do that, how do we actually get there? And then, something that really, really bothers me is this way in which we assign human attributes to AI. We talk a lot about AI understanding, AI reasoning, using these sorts of words that are much more adequate for human intelligence, right? But does AI actually reason? Does AI actually understand? And when we use these words, do we actually understand what we mean by them? And yeah, there is a bit of hype around anthropomorphizing AI. I spoke too much today. I'll just give you one example. I'm not sure how many of you here might have attended the big event in Geneva, which happens on a yearly basis: the AI for Good Global Summit. Yes. I see at least one person who's around Geneva, but he's busy typing. Thank you for that. So last year, there was a lot of focus on humanoid robots. You would walk to the conference venue and see a human-like looking robot here, another one there, another one there. But what I didn't see was people actually questioning: okay, what does that robot actually mean? Does it understand? Does it reason? Does it think? Or is it just another way for us to hype technology in some way? You have them here as well. Yeah, we have robots here as well. And then a bit more on the interplay between AI and other technologies that tend to also join the hype a bit. Another example from exactly the same conference: last year, the focus was on neurotechnologies and how the interplay between AI and neurotechnology might be impacting the way we relate to these technologies in the future.
There were many companies coming to the summit and presenting their neurotechnology devices, applications, and whatnot, and we had this random curiosity: okay, let's see what kind of privacy policies these companies have when it comes to their processing of brain data. You know, if you use a neurodevice, there is some processing of brain data. What do you think we found out? Any guess?
Jovan Kurbalija: Out of how many, eight or nine?
Sorina Teleanu: I think eight or nine. Those are the companies that we looked at. One had a line in the privacy policy saying, well, we might be processing your brain data, but because you agreed to use the service, you also agreed to us processing the data. And for all the others, their privacy policies were mainly related to cookies and how you interact with the website, but nothing about the technology itself. And then the question is, if you as an international organization invite these companies to speak about how amazing neurotechnology can actually be at the end of the day and, I don't know, help solve whatever problems, shouldn't you also be a bit more careful about how they deal with the human implications of this, with human rights and whatnot? I think sometimes we talk, but we don't also walk the talk in the policy space, and I'm hoping we will see a bit more of that going forward. More questions. When words lose their meaning: how many of you here use tools like ChatGPT? At least, okay, I'm seeing a lot of hands. I'm a bit worried, to be honest. Jovan and I are also teaching at the College of Europe, and you know how it is. When you have an assignment which is to write an essay, you just go to ChatGPT, you put it there, and you get your essay. We're also seeing this in the policy space quite a lot. There was a funny anecdote from the head of an international organization in Geneva. Would you like to tell that shortly, about going to a conference and hearing the same?
Jovan Kurbalija: Yes, they went to a conference, and they were hearing all the same speeches in all the opening statements, and that was it. And we had one organization that created their strategy on 120 pages, and we initially, it's an important organization, and we said, let us read it, and then somebody didn't even dare to remove the ChatGPT references, and then we said, oh my god, what's going on. That was the funny anecdote, with eight or nine speeches basically. We then analyzed them, and we found the patterns that were basically generated by AI. They didn't even make an effort to go to Gemini or other platforms; everything was generated by ChatGPT, okay. Okay, just for colleagues online, there was a comment, which is a good comment, and we often discuss that. We always ask how AI should be perfect, but at the same time we are who we are, and speeches, as you know, even written by humans, are not that exciting, so it was a good point. But what worried us was this huge document on AI strategy, and we were thinking, many countries will read it as the strategy for AI of that organization. First, can they read a hundred and ten or twenty pages? And second, is it really an expression of the policy interests of that organization? It's not. It's on a very common-sense level, and that's it. Sorina.
Sorina Teleanu: Thank you, Jovan. And speaking more of concerns, I think the first question right there is the one best expressing what we have been discussing for quite a while, at least at Diplo, but I don't see it so much in the broader space. If we rely so much on AI tools to write our texts and communicate in our emails or whatever else, right now it's kind of easy to spot what is AI-generated text and what at least has some sort of human intervention. But if we end up relying so much on ChatGPT and like tools, will we still sound like humans five, ten, fifteen, twenty years from now? And also, what happens with all this, how do I call it, self-perpetuation of AI? AI comes up with new text based on data already available now, but five years from now all of the new data will itself be AI-generated. What does that mean for broader issues of human knowledge, and also for how we as humans actually relate to each other at the end of the day, and how we communicate with each other? This is one of my favorite books. I'm not sure if someone in the room has actually read it. I think we also have this kind of obsession as humans to try to develop AI which is really like us. We want artificial general intelligence to be as good as humans at every single task, because, I don't know, we want that to happen. But what about other forms of intelligence out there? Can we develop intelligent machines that act more like octopuses, which we have discovered recently are quite smart and intelligent, right? More like fungi, more like forests? What about other forms of intelligence around us that we might want to borrow a bit from as we develop whatever we mean by intelligent machines? We tend to be so focused on us humans.
We are at the center of the earth, we have the best, we know the best, but maybe it's not exactly that, also as we look into developing technology. Yes, and we're having more trouble with technology, because why not. And I'll end with a few more questions. This is probably the overarching one: what does it mean to still be human in an increasingly AI era? This is more on the interplay I was mentioning earlier between neurotechnology, more invasive technologies, and AI, and Jovan can cover that a bit later as well. Another example of this interplay, what we have been trying to advocate for in the policy space in Geneva, is this right to be humanly imperfect. Jovan, would you like to reflect on this a bit?
Jovan Kurbalija: Well, the idea is counterfactual, and when I go to the Human Rights Council and the human rights community, they look at me as if I am from another planet. But I have been arguing for quite some time, three to five years, and I even proposed a workshop at the IGF, but probably the MAG dismissed that. I have been arguing for the idea that we have a right to be imperfect. But our civilization, centered on optimizing efficiency, is basically making it unthinkable that you have a right to be lazy. You have a right to be lazy, you have a right to make mistakes, you have a right to ask, you have a right to this. But if you really consider it carefully, humanity made its breakthroughs when we were lazy. In ancient Greece, these people had plenty of time to walk through the parks and to think about philosophy. Or in Britain, all this, tennis, football, was invented in the British time, obviously. Some other people were working for the elite, but that's another story. But they were lazy and they were inventing things. Therefore, my argument is that we have to fight for the right to be imperfect: the right to remain natural, not to be hacked biologically, the right to disconnect, the right to be anonymous, and the right to be employed over machines. I have my bet that in five years' time, I will be already retired, but some of you are younger, we will have at least one workshop at IGF asking: do we have a right to be imperfect? And I can offer it as a bet that we do it. But it's a very serious question going beyond, let's say, a catchy title. It goes into what Sorina said, the critical question: what does it mean to be human today? And what will our humanity be in relation to machines in the coming years?
Sorina Teleanu: Thank you. And it's not so much about robots coming and taking over, this Terminator business that used to be in focus for quite a while. It's more about this very human-to-human interaction, and how AI comes into play in this human-to-human interaction. We'll end with a few more questions, and then we're hoping you will be adding more questions at the bottom of our slide. So, trying to wrap up: what do we actually want from AI? There is, yeah, maybe I shouldn't, there is a quote from a company, maybe I shouldn't name the company at least, developing artificial… It's a prominent company. Yeah. It's developing, or trying to develop, artificial general intelligence, again, the type of AI that would be at least as good as humans at doing everything and anything. So the quote, I'm probably going to paraphrase it, goes a bit like this: our aim as a company is to develop artificial general intelligence, to figure out how to make it safe, and then to figure out its benefits. So when I saw that statement, I was thinking, okay, but shouldn't it be a bit the other way around? Like, first figure out what the benefits of AGI are, then see how to make it safe, and then actually develop it? Isn't it a bit of a wrong narrative there? And I think in our policy discussions at the IGF and elsewhere, we should be questioning these companies a bit more. I feel they're just going around the place saying, hey, AI is going to solve all of our problems and we're going to sleep happily every night because AI will do the work for us. Okay, but have we actually thought carefully about this? And again, it's not about Terminator and these kinds of robots killing humanity, but what does it mean to still be human? When you said sleep, we are sleepwalking. We kind of are, exactly. Because again, we don't see many of these questions, unfortunately, in the policy debates.
And we're coming from Geneva, where every other day, at least, you do have a discussion on AI governance. You can confirm. Thank you for nodding. How many of these questions do you actually see in those debates? Yeah. Yeah. So I'm hoping we'll be seeing more of them. Again, how do we interact with AI? To what extent are we even aware of these interactions? And how much of these interactions involve informed choices? How about our human agency? As I was saying, is AI having an impact on how we interact and how we relate to each other as human beings? Is AI making choices for us? Should it be making choices for us? Again, the notion of human agency. I wanted to go back to Jovan's point about the right to be humanly imperfect and this focus on efficiency. In a world driven so much by economic growth and GDP and this way of measuring progress, can humans compete with machines? Should humans compete with machines? Should we just do what we're good at and let machines do the repetitive tasks? And finally, is there a line to be drawn? Can we still draw this line at this point? Is it too late? Can we be asking more questions? Over to you, I would say. And I'm hoping you will have some reflections on some of these questions, and ideally more questions, because I think questions are important and we should be asking more of them.
Jovan Kurbalija: Internally, I'm enthusiastic on the AI side because I have the geeky approach, and Sorina is not so enthusiastic. And for those of you who have consulted Sorina's book, I will probably repeat myself: she wrote a book on the Global Digital Compact. It was adopted on the 27th of September at the UN, and Sorina had been following the Compact for the last 18 months. Her slides are shared by many governments. And I said, Sorina, let's use AI and convert your slides into the book. And she said, okay, you know how Sorina is kind. She said, okay, maybe. And the next day I see Sorina typing. I said, Sorina, come on, let's put in the slides and we convert them into the book and you have a book. Sorina wrote the book herself in 47 days, on 200 pages. Here is the book. And I lost my battle that AI can help us in writing the… This is a book which was written, with very solid analysis, in 47 days. Therefore, we have an internal battle, which is a healthy debate, with me being more optimistic and Sorina being more careful and pessimistic. But we do reporting from the IGF through the website with the use of AI. And we are going to ask at the end of the IGF, in a whole analysis of the IGF, how many, in our view, realistic questions about AI were asked over the five days, and how many were discussions on ethics, biases, and these things. Mind you, that's an important discussion. But we are very often not seeing the forest for the trees: critical issues about the future of knowledge. Therefore, we start with the questions or comments. Introduce yourself, please.
Mohammad Abdul Haque Anu: My name is Anu. I'm Secretary-General of the Bangladesh Internet Governance Forum. My question is: which institution teaches AI ethics? Everybody knows, everybody says, that we should follow AI ethics. But who delivers the teaching process, the curriculum for AI ethics that we should follow? That is my question.
Jovan Kurbalija: Okay. First, I'm a bit cautious about AI ethics. I think there is human ethics, and I'm cautious about biases. For example, I'm full of biases. You are full of biases. We are biased, historically, culturally. Therefore, there is a very important discussion about how to organize that and how to make a common-sense curriculum, not too hyped. And I'm afraid that a lot of energy in AI debates is going to, let's say, ethics. You have now more than 1,200 guidelines and standards on AI and ethics; that is losing any context now. I would be careful with that. I wouldn't focus on AI and ethics; I would focus on how AI functions and what the implications are for society, direct implications. We run an AI apprenticeship: people create their chatbot, they interact, and we tell them, this makes sense, this doesn't make sense. Is it going to offend somebody? What are the biases that can be tolerated? What are the biases which are illegal or can harm people? So you take the general discussion on ethics, and, I love philosophy, but philosophy should be practical, you know, you bring it to the very practical level. This is missing, and that should be, in our view, developed through apprenticeship programs. Yep.
Mohammad Abdul Haque Anu: Absolutely, I agree with you. Already we are suffering from misinformation, disinformation, and malinformation thrown around by social media through so many channels. There is no ethics; nobody follows the ethics. Now, in the coming days, we will face the same with AI ethics. How will we suffer, and how will we manage this kind of thing? Nobody maintains the rules, nobody maintains the ethics, but everybody says that we should follow the ethics.
Jovan Kurbalija: Let me say it's very simple. Diplo has AI systems. I'm the director of Diplo. If you go to our website, ask a question, and feel insulted — you won't, but if you do — I'm responsible. Law has been very simple since Hammurabi, who invented the first legal rules. You start a business or a non-profit — you start something, nobody forced you to start it — you use it, and you are responsible if damage is created. I'm sorry to say, but you have laws on AI running close to 200 pages, when the principles are legal and legal principles are very simple; I'm a lawyer by training. Diplo builds a chatbot. Okay. Has anyone forced us to do it? No. Is the chatbot harming somebody? So far no, but if it harms, I'm responsible, and that's the end of the story. There are legal rules that apply to it. Frankly speaking, I've been in this AI field for many years, but I'm always amazed when I go to these AI events and they go into algorithms, models, "the machine will take over" — when these issues are, at their core, rather simple, and the law is a codification of ethics. Don't kill somebody. Don't insult somebody. Don't steal somebody's property. It's simple. We are trying to simplify the discussion, but not oversimplify, because we have found that a lot of energy, including — I'm sorry — in this space, is focused on issues which are nice to talk about but which are sometimes not even good philosophy. If we discuss good philosophy, that's great. But sometimes it is basically repeating notions of ethics and bias, and that is a bit of my concern. Therefore: practical apprenticeship, responsibility, legal responsibility, focus on concrete issues, and train people to understand them. That would be my advice. We have a question over there. You were here last year as well — you're one of our followers.
Tapani Tarvainen: Okay, I'm Tapani Tarvainen from Electronic Frontier Finland, the Effi association. Now I want to jump straight to the deep end of the philosophical questions here. Assuming that AGI becomes actually possible, which I'm not sure of, and it's as good as people in every way — why isn't it human at that point? You could argue that those machines, if they are our peers in every way, are basically our offspring, and they should have human rights at that point as well. How do I know that you're not a machine, and why should I care in the end? Because I don't know what goes on inside your brain; we don't understand how we think, for that matter. Now, I don't think we are LLMs, these large language models, nor any other type of present artificial intelligence. There are other kinds — AlphaGo or Stockfish, as somebody here might know. They are not LLMs, but they are still artificial intelligence. But perhaps we can come up with true AGI. I also have to make a historical point: neural networks have been studied since the 1980s, by Teuvo Kohonen among others, as you presumably know. So it's not a new thing; only now has the available data made it practical. But the philosophical point — and here I'm reading from the screen — is there a line to be drawn, and by whom? At what point does it become acceptable to treat robots as if they were human? I suggest that as soon as they start to behave human-like enough that we can't tell them apart — talking to them, pinching them, whatever — then they will be fighting for their human rights.
Jovan Kurbalija: Fantastic question. Let me say how I see it, and then we'll hear from Sorina. The way I see it: we are powerful, and we decided to run this world — not the chimpanzees, not fungi, not octopuses — and we say, okay, we're in charge. It's a human-centred world; everything else, natural or robotic, is going to follow our rules. I'm simplifying now, probably oversimplifying. You raise many points, and I will make a few links. There is Ibn Sina, the Arab philosopher, with his "flying man" thought experiment, where he basically discusses what you said: are we real? He argued about a flying man — floating, maybe in this room; it's a philosophical exercise — removed from the body and from sensory experiences: is there a free will, a free consciousness, of the flying man? For me it is still one of the most fascinating lines on the virtuality of thinking. You raised many issues, but that's one. The other issue you raised echoes into "so what?" — why should we be worried? My colleague Andrej, the head of our linguists — raise your hand, Andrej — recently raised an interesting aspect, which he can narrate later on. There was a discussion of this type in Abu Dhabi at one event, and he asked: why are we afraid of AI? Because it is the first machine which does not do exactly what we expect. When you press the light switch, you get light. When you press the accelerator in your car, you accelerate. When you write text in the word processor, you get the text you wrote. The reaction is always expected, predictable. For the first time, we have a machine which hallucinates and which looks like us. Therefore, we started being worried about AI, because it is no longer a precise machine like everything technology invented before. Suddenly, it's like us. It hallucinates.
It has the right to be imperfect. Therefore, it was an interesting insight by Andrej, and he may elaborate later on. We have a question from you. Introduce yourself.
Henri-Jean Pollet: I'm Henri Pollet from ISPA in Belgium. I wanted to jump on what you said there, because then we should submit AI to exams and graduation — asking it smart questions, as we do with humans, to select the good ones. Sorry, I don't know if this microphone works. Okay. I say that because AI can generate so much; there are a lot of topics right now, but just jumping on that answer: if you don't know what it will generate, then you need a system to validate what this AI engine is doing. Just as a human goes through education and, at the end of it, a graduation process, the AI should also be testable, to make sure that what it says is not hallucination. That's what I wanted to say about that. And if I may, two other points. How do you see AI coping with information coming from so many sources? Is there a way to conceive an AI system where you would have an open — I wouldn't say an open interface to the data that populates these AI processing engines, but a common way of accessing it, so that models can integrate? Otherwise you have a single model built on the topic map of some group of people, but not integrated. It would be great to have a common effort so everybody could work towards an AI of value, because this kind of — I wouldn't say standard, but model — could be integrated afterwards. Today it's like a big vacuum cleaner sucking up all the possible data for different purposes, sometimes commercial purposes more than intelligent purposes. That is something we should tackle.
And the second aspect, my last comment: what about using a model like open source? Whose data is it — the one who produces it, or the one who collects it? Open-source software could offer some interesting aspects here, because you have different kinds of licenses. Some licenses say: my data is public domain, do whatever you want with it. Others keep it proprietary: if I give it to your model, you need to give me something in exchange. Or: I give it to you free, but then you must give it away free. That's the difference between all these license models in open source. Could it be interesting to treat data like software in this way? That was my question. Thank you.
Jovan Kurbalija: An excellent question. While we are preparing the answer — which will be very concrete, an answer with practical tools, not just narration, but how it can be done — Sorina, you can reflect a bit on the evaluation and validation of knowledge.
Sorina Teleanu: Unfortunately, I kind of missed your question, so I’m not sure I’m in a good position to try to answer. So we’ll probably have to wait for Jovan to show us the practical things.
Jovan Kurbalija: Okay. Well, I was fast in finding the answer to your questions, which are critical questions. Yes, knowledge can be developed bottom-up, by us, and here is a very concrete example. Everything we use is open source, from the language models to all the applications. We are now transcribing and reporting from the IGF — our session will be transcribed. And here is the key point. Take yesterday's sessions: you can click on any session, and you have a session at a glance, a report, a full transcript. What is very critical: you also see who the speakers were, and their knowledge and their input into the AI model. You have the speech length, and you have a knowledge graph for the overall session. You have in-depth analysis: did people agree? Today we had many agreements — not disagreements, but different views, partial agreements, and other elements. You also have it for the whole day: here is a knowledge graph of everything discussed during the whole day, and it's very busy. You can find your discussion and how it relates to all the others. Now the key question: okay, we created an AI chatbot, and we can ask it a question. Let me ask one: should there be a human right to be imperfect? Probably nobody referred to it, but let's see — maybe. We basically process all the transcripts and get the answer. That part is common knowledge — but here is what is key: the AI identifies on what basis the answer was given. It makes a reference to what you said; if you spoke at a session and said this, it should be attributed to you, not to some abstract chatbot. Let me ask another, more common question: what should AI governance be? Everybody is talking about this nowadays, so I'm sure there will be some answer. This is just to explain.
Therefore, we transcribe the public sessions, the transcripts are analysed by the AI system, and we get an answer. The answer is based on sources: you can see exactly who said it, at what time, and what the pointers were. This is our principle: whenever we generate any answer, it must be attributed to somebody or something, for the sake of precision and for the sake of fairness — whether it is a book written by somebody or an article. This is the major problem, and your question pointed in this direction: can we do it? Yes, it's doable. And the big companies — OpenAI, Google, and others — can do it. Why don't they? That is another question, but they cannot tell us that it is technically impossible. It is technically possible. That is, I would say, critical for the future of serious discussion about AI. You also have video summaries; you can go to our website, everything is public. Here you have the answer, with details based on the logic and knowledge delivered yesterday and today in this space. You find the answer, and here are the sources: the specific session, and the speakers whose contributions the AI decided to rely on — Wojci Aida, Abdulla Alshmarani, Ala Zahir, Rupeng. The AI chose their paragraphs, and that's critical. When you click, you go to the website, to the page with the transcript from that session, and you can go through it. So, to your point: it is technically possible, financially affordable, and ethically desirable. I have tried to answer with a concrete example of what we are experiencing today in this building. We got an indication that we have five minutes left. What I suggest: this session started with questions, and there are many questions Sorina listed which we still have to discuss. You can drop us an email; we are creating a small group of philosophers.
We are proposing one idea to the Norwegian hosts of the IGF: to have a Sophie's World for AI — those of you who have read Sophie's World — and to hold a session with the author of Sophie's World in Oslo, asking him how he would go about writing Sophie's World today. In the process, we plan to engage a discussion on Arab philosophy, Asian philosophy, Confucianism, Buddhism, Christian traditions, Ubuntu and African philosophy. Those are traditions that have to feed into this serious discussion. Ethics is an important part, but I would add knowledge, education, what it means to be human, what it means to interact with each other — will that be redefined, or remain the same? Those are critical issues, in addition to ethics, bias, and other things. Now, without risking becoming persona non grata with our generous hosts, I would like to thank you for your patience and your excellent questions. We will leave the cards; if you're interested in this possible Sophie's World — I don't know if the Norwegians are going to click on that — I would like to invite you, with or without Sophie's World, to continue this discussion and see how far we can move and provide more in-depth answers to your excellent questions. Thank you very much. Thank you. Thanks. Thanks for having me. Nice to meet you, Eric.
Jovan Kurbalija
Speech speed
136 words per minute
Speech length
3755 words
Speech time
1653 seconds
AI’s impact on human knowledge and identity
Explanation
Kurbalija expresses concern about the centralization and capture of knowledge by a few platforms through AI. He emphasizes the importance of preserving knowledge and developing bottom-up AI to maintain control over our knowledge and identity.
Evidence
Diplo Foundation’s efforts to develop bottom-up AI and show that it is technically feasible, financially affordable, and ethically desirable.
Major Discussion Point
Philosophical and ethical implications of AI
Agreed with
Sorina Teleanu
Agreed on
Need for critical examination of AI’s impact on society
Focus on immediate risks of AI rather than long-term hypotheticals
Explanation
Kurbalija argues for focusing on the immediate risks of AI in education, jobs, and day-to-day life, rather than long-term hypothetical scenarios. He criticizes the ideological narrative surrounding AI that postpones addressing current issues.
Evidence
Comparison to communist narrative that promised an ideal society in the future while ignoring present concerns.
Major Discussion Point
AI governance and development
Right to be humanly imperfect in an AI-driven world
Explanation
Kurbalija proposes the idea of a right to be imperfect in an increasingly efficiency-driven world. He argues that human breakthroughs often come from periods of leisure and imperfection, which may be threatened by AI-driven optimization.
Evidence
Historical examples of inventions and philosophical developments during periods of leisure in ancient Greece and Britain.
Major Discussion Point
Human-AI interaction
Agreed with
Sorina Teleanu
Agreed on
Importance of preserving human agency and identity in AI development
Potential for bottom-up AI development to preserve human knowledge
Explanation
Kurbalija demonstrates the possibility of developing AI systems that preserve and attribute knowledge to its original sources. He argues that this approach is technically possible and ethically desirable for maintaining the integrity of human knowledge.
Evidence
Demonstration of Diplo Foundation’s AI system that transcribes IGF sessions, analyzes content, and provides attributed answers based on the discussions.
Major Discussion Point
Human-AI interaction
Sorina Teleanu
Speech speed
180 words per minute
Speech length
2129 words
Speech time
705 seconds
Need to question anthropomorphizing AI and assigning human attributes
Explanation
Teleanu expresses concern about the tendency to assign human attributes to AI, such as understanding and reasoning. She argues that this anthropomorphization may lead to misunderstandings about AI’s capabilities and nature.
Evidence
Example of humanoid robots at the Global AI for Good Summit and the lack of critical questioning about their actual capabilities.
Major Discussion Point
Philosophical and ethical implications of AI
Agreed with
Jovan Kurbalija
Agreed on
Need for critical examination of AI’s impact on society
Lack of privacy policies for brain data processing in neurotechnology
Explanation
Teleanu highlights the lack of adequate privacy policies for brain data processing in neurotechnology companies. She argues that this raises concerns about human rights and data protection in the context of AI and neurotechnology integration.
Evidence
Survey of privacy policies of neurotechnology companies presenting at the Global AI for Good Summit, finding only one with a relevant mention of brain data processing.
Major Discussion Point
AI governance and development
Impact of AI on human-to-human communication
Explanation
Teleanu raises concerns about the potential impact of AI on human communication and relationships. She questions whether reliance on AI-generated text will change how humans sound and interact with each other in the future.
Major Discussion Point
Human-AI interaction
Agreed with
Jovan Kurbalija
Agreed on
Importance of preserving human agency and identity in AI development
Mohammad Abdul Haque Anu
Speech speed
115 words per minute
Speech length
131 words
Speech time
68 seconds
Concerns about AI ethics education and implementation
Explanation
Anu questions who is responsible for teaching AI ethics and how it should be implemented. He expresses concern about the lack of adherence to ethical guidelines in current technologies and worries about similar issues arising with AI.
Evidence
Reference to existing problems with misinformation and disinformation in social media as an example of ethical challenges in technology.
Major Discussion Point
Human-AI interaction
Tapani Tarvainen
Speech speed
179 words per minute
Speech length
304 words
Speech time
101 seconds
Possibility of AGI becoming indistinguishable from humans
Explanation
Tarvainen raises philosophical questions about the nature of AGI if it becomes as capable as humans in every way. He suggests that at some point, highly advanced AI might be considered human and deserve human rights.
Major Discussion Point
Philosophical and ethical implications of AI
Henri-Jean Pollet
Speech speed
170 words per minute
Speech length
457 words
Speech time
161 seconds
Importance of validating AI outputs through testing
Explanation
Pollet suggests that AI systems should undergo testing and validation processes similar to human education and graduation. He argues that this would help ensure the reliability and accuracy of AI-generated outputs.
Major Discussion Point
AI governance and development
Importance of open source models and data licensing for AI development
Explanation
Pollet proposes the use of open source models and various data licensing approaches for AI development. He suggests that this could help address issues of data ownership and promote more collaborative and transparent AI development.
Evidence
Reference to different types of open source software licenses as a potential model for AI data licensing.
Major Discussion Point
AI governance and development
Agreements
Agreement Points
Need for critical examination of AI’s impact on society
Jovan Kurbalija
Sorina Teleanu
AI’s impact on human knowledge and identity
Need to question anthropomorphizing AI and assigning human attributes
Both speakers emphasize the importance of critically examining AI’s impact on human knowledge, identity, and societal interactions, rather than accepting prevalent narratives.
Importance of preserving human agency and identity in AI development
Jovan Kurbalija
Sorina Teleanu
Right to be humanly imperfect in an AI-driven world
Impact of AI on human-to-human communication
The speakers agree on the need to preserve human agency, imperfection, and authentic communication in the face of increasing AI integration in society.
Similar Viewpoints
Both argue for more transparent and collaborative approaches to AI development that preserve human knowledge and promote open access.
Jovan Kurbalija
Henri-Jean Pollet
Potential for bottom-up AI development to preserve human knowledge
Importance of open source models and data licensing for AI development
Unexpected Consensus
Concern about oversimplification of AI ethics discussions
Jovan Kurbalija
Mohammad Abdul Haque Anu
Focus on immediate risks of AI rather than long-term hypotheticals
Concerns about AI ethics education and implementation
Despite coming from different perspectives, both speakers express concern about the current state of AI ethics discussions, suggesting a need for more practical and immediate approaches.
Overall Assessment
Summary
The main areas of agreement revolve around the need for critical examination of AI’s societal impact, preservation of human agency and knowledge, and more practical approaches to AI ethics and development.
Consensus level
Moderate consensus on the importance of addressing immediate AI challenges and preserving human elements in AI development. This implies a shared recognition of the need for more nuanced and practical approaches to AI governance and ethics.
Differences
Different Viewpoints
Approach to AI ethics and governance
Jovan Kurbalija
Mohammad Abdul Haque Anu
Kurbalija expresses concern about the centralization and capture of knowledge by a few platforms through AI. He emphasizes the importance of preserving knowledge and developing bottom-up AI to maintain control over our knowledge and identity.
Anu questions who is responsible for teaching AI ethics and how it should be implemented. He expresses concern about the lack of adherence to ethical guidelines in current technologies and worries about similar issues arising with AI.
While both speakers are concerned about AI ethics, Kurbalija focuses on bottom-up AI development and knowledge preservation, while Anu emphasizes the need for clear ethical guidelines and implementation.
Unexpected Differences
Anthropomorphization of AI
Sorina Teleanu
Tapani Tarvainen
Teleanu expresses concern about the tendency to assign human attributes to AI, such as understanding and reasoning. She argues that this anthropomorphization may lead to misunderstandings about AI’s capabilities and nature.
Tarvainen raises philosophical questions about the nature of AGI if it becomes as capable as humans in every way. He suggests that at some point, highly advanced AI might be considered human and deserve human rights.
While Teleanu cautions against anthropomorphizing AI, Tarvainen unexpectedly considers the possibility of highly advanced AI being indistinguishable from humans and potentially deserving human rights. This difference highlights the complexity of defining the boundaries between human and artificial intelligence.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to AI ethics and governance, the focus on immediate vs. long-term AI impacts, and the philosophical implications of advanced AI.
Difference Level
The level of disagreement among the speakers is moderate. While there are differing perspectives on specific aspects of AI development and its implications, there is a general consensus on the importance of addressing AI’s impact on society. These differences highlight the complexity of AI governance and the need for multifaceted approaches to address various concerns.
Partial Agreements
Both speakers agree on the importance of addressing immediate AI impacts, but Kurbalija focuses more on practical risks in various sectors, while Teleanu emphasizes the potential changes in human communication and relationships.
Jovan Kurbalija
Sorina Teleanu
Kurbalija argues for focusing on the immediate risks of AI in education, jobs, and day-to-day life, rather than long-term hypothetical scenarios. He criticizes the ideological narrative surrounding AI that postpones addressing current issues.
Teleanu raises concerns about the potential impact of AI on human communication and relationships. She questions whether reliance on AI-generated text will change how humans sound and interact with each other in the future.
Takeaways
Key Takeaways
There is a need for more philosophical and ethical discussions around AI beyond just bias and ethics
Current AI development and governance discussions often lack critical questioning of long-term implications
Bottom-up AI development is technically feasible and ethically desirable to preserve human knowledge
AI’s impact on human-to-human interaction and communication needs more consideration
There are concerns about anthropomorphizing AI and assigning human attributes to it inappropriately
Legal and ethical responsibility for AI systems needs to be clearly defined
Resolutions and Action Items
Proposal to create a ‘Sophie’s World for AI’ session at the next IGF in Norway to discuss AI from various philosophical traditions
Plan to continue the discussion on philosophical implications of AI among interested participants
Unresolved Issues
How to effectively implement AI ethics education and guidelines
The potential long-term impacts of AI on human identity and interaction
How to balance AI efficiency with preserving human imperfection and agency
The ethical implications of AGI potentially becoming indistinguishable from humans
How to ensure transparency and attribution in AI-generated content and knowledge
Suggested Compromises
Developing AI through apprenticeship programs to balance efficiency with human oversight
Using open source models and varied data licensing to allow for more inclusive AI development
Implementing systems to validate and test AI outputs similar to human education processes
Thought Provoking Comments
We have been developing AI, and in addition we have been trying to see what are the philosophical, governance, political aspects of AI. The principle is that you don’t need to be a programmer, although we have quite a few programmers, but you have to understand basic concepts in order to discuss AI, otherwise the discussion ends up with unfortunately dominant narratives that we have here at IGF, and not only IGF, many meetings, bias, ethics, and we can generate typical AI speech.
speaker
Jovan Kurbalija
reason
This comment challenges the typical AI discourse and emphasizes the need to understand fundamental concepts beyond just technical aspects.
impact
It set the tone for a more philosophical and governance-focused discussion, moving away from purely technical or ethical considerations.
What roles do we imagine large language models playing vis-a-vis us humans as we interact with them? Are we missing the forest for the trees? We talk a lot about generative AI, but are there other forms of intelligent machines, agents, whatever you want to call them, that we might need to focus a bit more in our discussions on governance, policy, gain implications, and what does it all mean?
speaker
Sorina Teleanu
reason
This series of questions broadens the scope of the AI discussion beyond just large language models and encourages thinking about diverse forms of AI.
impact
It prompted participants to consider a wider range of AI applications and their implications, moving the conversation beyond popular topics like ChatGPT.
Therefore, my argument is that we have to fight to the right to be imperfect, to refuse to remain natural, not to be hacked biologically, the right to disconnect, the right to be anonymous, and the right to be employed over machines.
speaker
Jovan Kurbalija
reason
This comment introduces a novel concept of human rights in the AI era, challenging the focus on efficiency and perfection.
impact
It sparked a new line of thinking about human values and rights in an AI-dominated world, shifting the discussion towards more philosophical considerations.
Assuming that AGI becomes actually possible, which I’m not sure of, and it’s as good as people in every way, so why isn’t it human at that point? You could argue that those machines, if they are our peers in every way, they are our offspring, basically, and they should have human rights at that point as well.
speaker
Tapani Tarvainen
reason
This comment raises profound questions about the nature of intelligence, consciousness, and rights in relation to AGI.
impact
It deepened the philosophical aspect of the discussion, prompting consideration of the ethical and legal implications of highly advanced AI.
How do you see AI coping with information coming from so many sources? Is there a way to conceive an AI system where you would have an open — I wouldn't say an open interface to the data that populates these AI processing engines, but a common way of accessing it, so that models can integrate? Otherwise you have a single model built on the topic map of some group of people, but not integrated.
speaker
Henri-Jean Pollet
reason
This comment raises important questions about AI data sources, integration, and the potential for more open and collaborative AI development.
impact
It shifted the discussion towards practical considerations of AI development and data management, prompting thoughts on open-source approaches to AI.
Overall Assessment
These key comments shaped the discussion by moving it beyond surface-level considerations of AI ethics and bias, delving into deeper philosophical questions about the nature of intelligence, human rights in an AI era, and the practical challenges of AI development and governance. The conversation evolved from initial critiques of typical AI narratives to exploring novel concepts like the right to be imperfect and the potential personhood of advanced AI. This progression led to a rich, multifaceted dialogue that touched on technical, ethical, philosophical, and practical aspects of AI’s impact on society and human identity.
Follow-up Questions
How can we develop bottom-up AI to preserve knowledge and prevent centralization?
speaker
Jovan Kurbalija
explanation
This is important to address concerns about knowledge being centralized and captured by a few platforms, potentially impacting identity and dignity.
How will our communication and relationship with each other as humans change due to increased reliance on AI-generated text?
speaker
Sorina Teleanu
explanation
This explores the long-term implications of AI on human-to-human interactions and communication styles.
Can we develop intelligent machines that mimic other forms of intelligence found in nature, like octopuses or fungi?
speaker
Sorina Teleanu
explanation
This questions our human-centric approach to AI and suggests exploring alternative models of intelligence.
What does it mean to still be human in an increasingly AI-driven era?
speaker
Sorina Teleanu
explanation
This philosophical question is crucial for understanding the evolving relationship between humans and AI.
How can we implement a ‘right to be humanly imperfect’ in an efficiency-driven world?
speaker
Jovan Kurbalija
explanation
This explores the tension between human nature and the push for AI-driven efficiency.
Who should be responsible for teaching AI ethics?
speaker
Mohammad Abdul Haque Anu
explanation
This addresses the need for clear guidelines and responsibility in AI development and deployment.
At what point should we consider highly advanced AI as deserving of human rights?
speaker
Tapani Tarvainen
explanation
This philosophical question challenges our definitions of humanity and rights in relation to AI.
How can we develop a system to validate and test AI outputs to ensure they are not hallucinations?
speaker
Henri-Jean Pollet
explanation
This addresses the need for quality control and reliability in AI-generated content.
How can we create a common, integrated data model for AI that allows for collaboration while respecting data ownership?
speaker
Henri-Jean Pollet
explanation
This explores the potential for standardization and open collaboration in AI development.
Could open-source software licensing models be applied to AI training data to address issues of ownership and usage rights?
speaker
Henri-Jean Pollet
explanation
This proposes a potential solution to data ownership and usage concerns in AI development.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.