Ethical AI: Keeping Humanity in the Loop While Innovating
20 Feb 2026 14:00h - 15:00h
Session at a glance
Summary
This UNESCO-sponsored panel discussion focused on “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI,” featuring experts from academia, government, industry, and international organizations. The moderator, Dr. Maria Grazia, challenged the premise that innovation and ethics are in tension, arguing instead that they should be complementary forces in AI development. Dr. Tawfik Jelassi from UNESCO emphasized that ethical AI systems are more trustworthy and widely adopted, advocating for “ethical by design” approaches rather than post-hoc corrections. He highlighted UNESCO’s 2021 AI ethics recommendation, adopted by 193 member states, which calls for human oversight, non-discrimination, and cultural diversity.
Debjani Ghosh from NITI Aayog stressed that accountability lies with humans rather than technology, arguing for oversight built into the entire development process from design to commercialization. She emphasized the need to move beyond treating regulation as an afterthought to technology development. Brando Benifei discussed the EU’s risk-based AI Act, which identifies high-risk applications requiring oversight while prohibiting certain uses such as predictive policing and emotional recognition in workplaces. Professor Virginia Dignum criticized current approaches to both innovation and regulation, arguing that AI development has been too Western-centric and calling for more inclusive design that incorporates diverse cultural perspectives.
Paula Goldman from Salesforce highlighted practical implementation challenges, noting that companies need transparency and trust to scale AI adoption successfully. The discussion emphasized the importance of collective intelligence over individual artificial general intelligence, with participants agreeing that the most inclusive AI systems tend to be the most commercially successful and technically superior.
Key points
Major Discussion Points:
– Challenging the Innovation vs. Ethics Trade-off: Multiple panelists rejected the premise that innovation and ethics are in opposition, arguing instead that ethical AI design actually enhances innovation by creating more trustworthy, inclusive, and ultimately successful products. The real tension was identified as being between innovation and regulation, not ethics.
– Moving from Principles to Practice: A central theme focused on the gap between having ethical AI principles (like UNESCO’s 193-country recommendation) and actually implementing them in real-world applications. Panelists emphasized the need for “ethics by design” rather than post-hoc solutions, with concrete mechanisms like oversight at every development stage.
– Human-Centered AI Development and Collective Intelligence: The discussion emphasized putting humans at the center of AI development, not just “in the loop.” This included ensuring diverse perspectives in AI design teams, addressing the exclusion of those who have experienced real-world problems (like power cuts), and reconceptualizing AGI as collective human intelligence rather than corporate-controlled superintelligence.
– Risk-Based Regulation and Global Cooperation: The conversation explored the EU’s risk-based regulatory approach, which identifies high-risk AI applications (healthcare, justice, workforce) for oversight while prohibiting certain uses entirely (predictive policing, emotional recognition in workplaces). Panelists stressed the need for global cooperation on issues like military AI use and existential risks.
– Inclusive Design as Superior Technology: A key insight emerged that the most inclusively designed AI systems are not only more ethical but also more commercially successful and technically superior, as they work across diverse populations, accents, abilities, and contexts.
Overall Purpose:
This UNESCO-sponsored panel aimed to explore how to balance AI innovation with ethical considerations, moving beyond theoretical principles to practical implementation strategies. The goal was to demonstrate that ethical AI development enhances rather than hinders innovation, while addressing the challenges of translating global ethical frameworks into actionable policies and practices across different cultural and economic contexts.
Overall Tone:
The discussion maintained an energetic and collaborative tone throughout, with panelists frequently building on each other’s points rather than disagreeing. The moderator deliberately kept the pace dynamic and engaging, noting it was “after lunch on Friday” after a long week. While there were some “controversial” or challenging statements (particularly from Virginia Dignum about doing both innovation and ethics “wrong”), these were delivered constructively. The tone became increasingly optimistic toward the end, culminating in a literal collective moment with a group photo, embodying the “collective intelligence” theme that emerged as a key solution.
Speakers
Speakers from the provided list:
– Tim Curtis – UNESCO Regional Director for South Asia
– Maria Grazia – Chief of the Executive Office of UNESCO’s Social and Human Sciences sector, moderator, microeconomist specializing in innovation and new technologies
– Dr. Tawfik Jelassi – Assistant Director General for Communication and Information at UNESCO
– Virginia Dignum – Professor and Director of the AI Policy Lab at Umeå University, member of UNESCO’s AI Ethics Experts Without Borders
– Debjani Ghosh – Distinguished Fellow at NITI Aayog, former role with NASSCOM
– Brando Benifei – Member of the European Parliament
– Paula Goldman – Chief Ethical and Humane Use Officer at Salesforce
– Audience – General audience member asking questions
– Rita Soni – Audience member asking questions
Additional speakers:
– Dr. Tawfiq Jilasi – Assistant Director General for Communication and Information (mentioned by Tim Curtis in introduction but appears to be the same person as Dr. Tawfik Jelassi)
– Professor Virginia Dignam – (Same person as Virginia Dignum, referenced with title in the transcript)
– Brado Benefai – (Appears to be the same person as Brando Benifei, mentioned in introduction)
– Rajan – CEO and founder of a startup, from Business Club TV
Full session report
This UNESCO-sponsored panel discussion, “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI,” brought together leading voices from academia, government, industry, and international organisations to challenge fundamental assumptions about AI development and governance. The afternoon session, moderated by Dr Maria Grazia from UNESCO’s Social and Human Sciences sector, featured Dr Tawfik Jelassi (UNESCO Assistant Director General), Professor Virginia Dignum (AI Policy Lab, Umeå University), Paula Goldman (Salesforce Chief Ethical and Humane Use Officer), Debjani Ghosh (NITI Aayog Distinguished Fellow), and Brando Benifei (European Parliament member).
Reframing the Innovation-Ethics Relationship
Dr Maria Grazia opened with a provocative challenge to the panel’s own title, arguing that the perceived trade-off between innovation and ethics represents a false dichotomy. Drawing on her background as a microeconomist, she noted that “to my knowledge, but that can be my ignorance, I have never seen one single study” demonstrating that regulatory frameworks have hindered innovation, productivity, or profitability in the pharmaceutical industry, despite extensive regulation. This analogy proved particularly apt given AI’s increasing pervasiveness in daily life.
Debjani Ghosh delivered perhaps the most transformative reframing of the entire discussion: “I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is between do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse.” This perspective fundamentally shifted the conversation from regulatory constraints to purposeful development.
Dr Jelassi reinforced this view from UNESCO’s global perspective, clarifying that the real tension is “not innovation versus ethics, but innovation versus regulation.” He emphasised UNESCO’s principle of “ethical by design,” arguing that ethical AI systems are inherently more trustworthy and widely adopted, creating a virtuous cycle that enhances rather than hinders innovation.
From Principles to Practice: Implementation Challenges
A central theme focused on the persistent gap between having ethical AI principles and implementing them in real-world applications. UNESCO’s 2021 AI Ethics Recommendation, adopted by 193 member states (though discussions began in 2019), provides a comprehensive framework calling for human oversight, non-discrimination, respect for cultural diversity, and environmental sustainability.
Debjani Ghosh addressed this implementation gap by emphasising that “oversight has to be built into the entire development process from design to commercialisation. And it has to be built with the right flag-offs at every part of the design and development process.” She highlighted India’s approach through seven working groups focusing on different aspects of AI implementation, from healthcare to agriculture, and mentioned the AI Impact Commons, which is available online and features impact stories from over 30 countries.
Paula Goldman provided practical insights from the corporate perspective, explaining that companies need concrete answers about performance monitoring, control mechanisms, and responsibility distribution. She emphasised that successful AI scaling requires companies to “put the people at the centre of the transformation,” giving employees voice in determining what applications actually work in practice.
Risk-Based Regulation and Global Governance
Brando Benifei, despite experiencing voice difficulties during the session, offered crucial insights into the European Union’s AI Act and its risk-based methodology. Drawing lessons from the social media era, he argued that waiting for problems to emerge before implementing regulation can lead to “unmodifiable consequences.” The EU’s approach identifies specific high-risk applications—such as AI use in healthcare, administration of justice, and workforce management—that require enhanced oversight.
Importantly, the framework prohibits certain applications entirely, including predictive policing, emotional recognition in workplaces and educational settings, and manipulative subliminal techniques. This represents a middle path between blanket prohibition and unrestricted development, focusing on ensuring data quality, cybersecurity, proper governance, and human control in high-risk applications.
The discussion highlighted the necessity of global cooperation for addressing transnational challenges. Benifei noted that whilst domestic regulations can address many AI applications, issues like military use of AI and existential risks from advanced AI systems require internationally coordinated responses that current frameworks cannot adequately address.
Cultural Perspectives and Democratising Development
Professor Virginia Dignum delivered some of the most provocative comments, challenging fundamental assumptions about AI development. She argued that current approaches use AI as a “hammer” to address any available problem rather than engaging in genuine innovation. More significantly, she highlighted cultural limitations: “AI has been developed extremely on the Western tradition, the Cartesian tradition. We think, therefore we are.” She contrasted this with the African Ubuntu tradition that emphasises “we are, therefore I am,” suggesting that AI developed from this collective perspective would be fundamentally different.
An audience member, Rita Soni, raised critical questions about representation in AI development, highlighting the exclusion of “impact workers”—the half million people worldwide who contribute to AI development but are often marginalised in discussions about the technology’s future. She questioned whether current developers, who may lack experience with infrastructure challenges like power cuts affecting billions globally, can adequately address diverse needs.
Debjani Ghosh responded by highlighting India’s efforts to democratise technology creation through initiatives like Startup India, noting that startup growth rates are actually higher in Tier 2, 3, and 4 cities than in major metropolitan areas.
Redefining AGI as Collective Intelligence
Virginia Dignum offered a provocative reconceptualisation of Artificial General Intelligence: “we already have AGI. We always had AGI. It’s called collective intelligence.” She argued that true AGI emerges when “we work together, we can do more than each one of us,” with AI technology supporting rather than supplanting human collaboration.
This redefinition challenged the dominant narrative of AGI as corporate-controlled superintelligence, providing a democratic alternative focused on augmenting human capabilities rather than replacing them. Debjani Ghosh reinforced this perspective by questioning why current AGI narratives focus on control rather than augmentation, arguing that sustainable businesses must keep humans at the centre of their operations.
Practical Applications and Real-World Impact
Despite philosophical debates, the discussion highlighted concrete examples of AI’s positive impact. Dr Jelassi shared examples from his recent visit to remote communities in southern Africa, where UNESCO’s interventions providing community radio, mobile connectivity, and early warning systems transformed previously isolated villages.
Paula Goldman provided examples from accessibility applications, describing AI tools that correct non-accessible code in real-time and browser extensions that fix usability issues for people with disabilities. These examples demonstrated how inclusive design principles lead to superior products that work better for all users.
Education and Implementation Challenges
Virginia Dignum highlighted significant gaps in current educational approaches, arguing that engineers need stronger grounding in humanities to understand problem contexts, while humanities scholars need greater precision in discussing AI technology. She noted that current engineering education focuses on solving problems without asking fundamental questions about why something is a problem, who has it, and what alternatives exist.
Paula Goldman emphasised that inclusive design approaches are not just ethically superior but commercially advantageous, noting that “the most inclusively designed technology is going to be the one that’s most successful.” However, Debjani Ghosh expressed scepticism about current industry practice: “I’m not sure if industry today is really putting human at the centre of the loop, but I think they need to.”
Unresolved Challenges and Future Directions
The discussion identified several critical unresolved challenges. As Debjani Ghosh noted, achieving ethical AI is complicated because “we humans don’t align to the same ethical values,” meaning there will always be both good and bad actors using AI technology. This suggests that technical solutions alone cannot address AI ethics challenges.
Global governance mechanisms for addressing transnational AI challenges, particularly military applications and existential risks, remain underdeveloped. While frameworks like UNESCO’s recommendation provide principles, translating these into effective international cooperation requires political will and institutional innovation beyond current capabilities.
Conclusion: Towards Human-Centred AI
The discussion revealed remarkable consensus across stakeholder groups on key principles: innovation and ethics are complementary rather than competing forces; human-centred approaches are essential for both ethical and commercial success; inclusive design produces superior outcomes; and proactive governance frameworks are necessary to prevent harmful applications while enabling beneficial ones.
The panel’s emphasis on collective intelligence over individual artificial intelligence provides a democratic alternative to corporate-controlled AI development, suggesting pathways for ensuring that AI truly serves humanity’s collective interests. The conversation demonstrated that effective AI governance requires moving beyond abstract principles to practical implementation strategies that embed diverse perspectives and lived experiences into development processes.
The session concluded with the panelists gathering for a group photograph, literally embodying the “collective intelligence” theme that had emerged as a central vision for AI’s future—one that prioritises human collaboration augmented by technology rather than technology replacing human agency.
Session transcript
Welcome this afternoon to this UNESCO-sponsored event. My name is Tim Curtis, I’m the UNESCO Regional Director for South Asia, and I’m very happy to have you all here for today’s event, Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI. Of course, we’re grateful to the Government of India for its collaboration on this session, which we at UNESCO believe goes to the heart of our engagement with the ethics of artificial intelligence: namely, how to ensure an ethical and human-centred deployment of AI whilst also encouraging development and innovation in a technology that can offer so many benefits to humanity, including and in particular to the Global South.
So it gives me great pleasure to present today’s panellists and moderator. We have Dr. Tawfiq Jilasi, Assistant Director General for Communication and Information, who has really been a pivotal figure in UNESCO’s work on AI ethics. Professor Virginia Dignam, Director of the AI Policy Lab at Umeå University; she is also a member of UNESCO’s AI Ethics Experts Without Borders and has been supporting UNESCO’s readiness assessment methodology in multiple countries. We are also privileged to have Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, which is a member of UNESCO’s Business Council; she has really been leading by example on responsible AI in the private sector. And Debjani Ghosh, a Distinguished Fellow at NITI Aayog, who needs no introduction here in India, a household name for her role in building and leading India’s AI ecosystem.
Thank you for coming. And finally, it is a great pleasure to welcome Brado Benefai, a member of the European Parliament, who will share his insights on the EU AI Act and how they have been able to navigate balancing innovation and ethics. And of course, our moderator, Dr Maria Grazia, Chief of the Executive Office of UNESCO’s Social and Human Sciences sector. Please, Maria Grazia, over to you.
Hello, good afternoon. So we’ll try to keep this session very dynamic, because it’s after lunch, it’s Friday, after five very interesting days, a long week. So let me start by challenging the very title of this meeting, that is, Balancing Innovation and Ethics in the Age of AI. I’m a microeconomician, which is a very complicated word, which looks like a rude word, but it’s not: it’s mathematics applied to economics, and especially applied to understanding the dynamics of innovation and new technologies. Why am I saying that? Because of course the question of innovation, what drives innovation, how can we get more innovation, is something that we always ask when we study what drives productivity growth, what drives welfare and well-being.
And then at times we also hear this: that having constraints or having frameworks will actually hinder these dynamics. And the position of UNESCO has been very clear. The position is: this is not true. The member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence already in 2021, which means that all countries, including India, had been discussing these issues since 2019 to get to an agreement. What it means is how we can put technologies at the service of humanity, and not let anything that is technologically feasible go ahead if that technological feasibility actually hurts people, hurts humanity. And so for us at UNESCO, ethics of AI means something very concrete.
It means AI, technologies, and here I would like to invite you to think that it’s technologies, not one single element, it’s a lot of things, that actually abide by three simple things that too often we take for granted, whereas perhaps we want to think about them more, and these are human rights, human dignity, and fundamental freedoms. And if we are able to develop, deploy, and use technologies in a way that abides by these three components, then for sure we do have technologies that serve humanity. And why am I challenging the very topic? Because too often the narrative that is used out there puts innovation and ethics, or ethical AI, which actually means an AI that is ethical throughout the life cycle, as trade-offs.
So if we innovate, it cannot be ethical, because by the time it’s gone out, we don’t have the time to check on these things. Well, think of a parallel, and then we take it from there on the concrete dynamics of AI. If you were to think about one sector that is very much regulated, perhaps what comes to mind is pharma, pharmaceuticals. Now, to my knowledge, but that can be my ignorance, I have never seen one single study being able to prove that the regulation in that sector has actually hindered the innovativeness, or actually the productivity, or even the remuneration of the sector. So by the same token, the pervasiveness of AI to some extent leads us to think of the pervasiveness of the paracetamol, for instance, that we use every day by the time we have a headache, like I think some of you this afternoon might have, and after listening to me, perhaps even more.
But, you know, it’s really the pervasiveness of technology that touches our life, each and every day, in many ways. And this is what I think is important to discuss from different perspectives. And allow me to start with my ADG, ADG Jelassi. As I mentioned, from UNESCO we give this global perspective, because the recommendation was adopted by 193 member states. Now, very often, what is very challenging is to go from principles to practice. That is, sometimes we know what we need to do, but then the question becomes: how do we translate it into practice? So, ADG Jelassi, where do you see the biggest gaps between the principles and what instead is happening on the ground?
Thank you, Maria Grazia. Maybe before I briefly answer your question, let me say that you used the words innovation and ethics. I don’t personally see an issue, a contradiction, between the two; I see it more between innovation and regulation. Because, say, to be creative, innovative, you should free up the mind of the people, you should not constrain them, you should not tie their hands. I used to be chair of a telecom operator board, and there, of course, with telecom and mobile phones and access to the private data of consumers, the issue of regulation is paramount. But we don’t want regulation that hinders innovation. So I don’t see ethics and innovation being in contradiction; to the contrary, I think they reinforce each other. How is that?
Because clearly, if you integrate ethical reflection in the design of AI systems, those AI systems will be more respected, more trustworthy, more used, and therefore more broadly deployed across society. So I see ethics and innovation really reinforcing each other, and quite often at UNESCO we say AI systems have to be ethical by design. It should be done ex ante, not ex post, not when we see mistakes and hazards and risks and harmful impacts of AI and say: wait a minute, let’s go back to see what went wrong in those models, in the data sets, are there some biases, etc. So I think it has to be done from the very early stage, and therefore innovation has to be human-centric and has to be contextualized. There is no one-size-fits-all, we know that. What you can provide is an overarching framework, a broad set of guidelines and principles, as you said, Maria Grazia, and this is what the UNESCO Recommendation on the Ethics of AI is about. You know that this recommendation has been, so far, the only global recommendation of its kind.
It was adopted back in 2021 by 193 member states of UNESCO, and it calls for human oversight, non-discrimination, respect for cultural diversity, and respect for environmental sustainability. These are the principles that need to be translated into action and that need to be operationalized within a certain context.
Thank you very much, ADG Jelassi. Let’s go to Debjani, because I would like to go further into this operationalization question. So, from your work at NITI Aayog, and also your experience with NASSCOM, what are the mechanisms that can really help embed ethical reflection into the everyday life of both companies and sectors?
Thank you. Thank you for having me here. So, first of all, I’ll just go back to the topic, if I may, for a second, right? Because I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is between: do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse. So I think the choice is that. And therefore the biggest challenge we have, and I hate applying the label of ethics to technology, is: can we, with all the wisdom in this room, say that we will be successful in aligning every single human on this planet to the same ethical values?
The answer is no. No, we’re not going to be able to do that. And we know we’re not going to be able to do that. So as long as we humans don’t align to the same ethical values, you will always have good actors and you will always have bad actors, and you know that technology is going to be used in ways that are non-ethical. So the accountability, you’ve talked about humanity in the loop, the accountability comes back to us. I think it’s very important to understand that, because in all our dialogues on technology, we somehow delegate the accountability to technology. I don’t think we can as yet. Maybe in another 10 years, when cognitive reasoning becomes a thing, maybe then, but not as yet, because as somebody who actually builds code and builds agents, I know they’re not that intelligent as yet.
So I think the accountability on humans is what we have to focus on. And going back to your question about how industry ensures this: one of the things I’m very clear about is that regulation is usually an afterthought. You develop the technology and then you say, okay, how do we now regulate it to ensure that it’s used right? And I think that has to fundamentally change. Oversight has to be built into the entire development process, from design to commercialization. And it has to be built with the right flag-offs at every part of the design and development process. If you do that, and you’re able to, you know, red-tape the product that you are developing at every single stage against certain standards that have been developed, and then hopefully, after the entire development phase, there is also a sandbox where you test out the impact.
You will get to a stage where ethics becomes by design versus an afterthought. And I think that’s what we have to move towards.
Thank you. I’d like to change the order of the speakers a bit, because you brought in the argument of the regulators, and you have one next to you that I’m going to refer to. How do you see this relationship? Because we know that fundamentally the regulation that has been pushed in Europe is risk-based. So what was the logic, and how does this relate to what she was discussing, the human oversight or even the redress mechanisms that we might want to put in place in order to have AI that is ethical?
Well, first of all, excuse me for the voice, but that’s it. Exactly, but thanks to technology, you can hear me anyway. So I think that I can also adhere to the point that innovation and ethics are not one against the other. In fact, this summit, which is concentrating on impact, on action, on diffusion, is not separate from keeping track of reflection, of safety, of how to protect human rights, of how to make AI human-centric; the things are intertwined. The point is how we regulate effectively and how we find a good balance. But I want to bring maybe a controversial point to the table, because I have a strong conviction on this. We have made choices globally, including in Europe, which has often been at the forefront of regulating. In one of those rooms just now, I was with her on another panel, there was Anu Bradford, professor at Columbia University, who has written the book The Brussels Effect; so in fact the EU has often opened the way for many regulatory pathways. But even Europe chose, when looking at social media, not to regulate. We let social media diffuse without regulation, and today we are discussing limits for minors, we heard about that also in the inaugural session; we are discussing misinformation and the labelling of deepfakes, even Prime Minister Modi talked about that in the inaugural session. But we are doing it all now, after a lot of things have happened, and my point, that’s my opinion, is that we already have unmodifiable consequences. So I think that when we talk about when we should regulate, the question is whether we should regulate ex ante or let the innovation flow and act only ex post.
Sometimes we might be wrong and risk unchangeable effects. So we need to build a balance that doesn’t hinder innovation, but also identifies human rights challenges. The AI Act tried to build a risk-based approach, identifying areas where we need AI to be overseen: workforce use of AI, healthcare use of AI, administration-of-justice use of AI. We want to be sure, when we deal with those, that the data used for training is quality data, that cybersecurity is sufficient, that the governance of the data is solid, and that there is human control. These are examples of what we have identified. And in fact, we even chose to prohibit a few use cases, for example predictive policing, emotional recognition in workplaces and in study places, and manipulative subliminal techniques.
I don’t think it’s a taboo to choose that some use cases of AI we don’t want in our society, and we just keep them out. So I think this approach based on risk, you can look at whether you like it this way, whether you want to modify it, but it’s an interesting perspective, because you can choose what you think is in need of a certain regulation, and you can also promote transparency, which I think is crucial to build trust. Without trust, especially in democratic contexts, it’s impossible to accelerate adoption of AI, which is still a big challenge in both the global north and the global south. The numbers tell us that a lot of companies or public administrations that could benefit from an ethical and correct use of AI are not using it, because they don’t know what could
You put forward a very important point, Brando: perhaps we might not be able, or might not want, to decide what the technology should do for us. But for sure we might want to discuss and agree on what we do not want the technology to do for us, because these are unacceptable uses and deployments. And in this case, this also highlights the importance of awareness, of the centrality of people, of having this human-centered approach. And here I would like to invite Virginia into the conversation, because of course you, as an educator, as part of this beautiful world of educators, as a professor, you have this constant contact and the ability to interact with and nurture humankind.
So what do we have to do to avoid people being just consumers or, you know, merely exposed to it, instead of steering the technology toward where we want to go?
Sure. Thank you very much. Thank you for inviting me to be here. Again, like all my previous colleagues, I want to go back to the title. And I’m not going to talk about the balancing part. I’m just going to claim, and to be controversial, and to wake us all up: we are doing both the innovation side and the ethics and regulation side all wrong. We are doing it not in the way that it needs to be done. On the innovation side, we are doing it wrong because we are somehow understanding innovation as the capacity of using this hammer that we found a couple of years ago, of Gen AI or whatever. And now we want to use the hammer on any nail that we find.
Innovation is much more than that. Innovation is really challenging ourselves to go further. And I want to go back to a sentence that has stayed with me and is the main thing I’m taking from this summit today. A couple of sessions ago, where I spoke, someone was saying: most people developing AI never experienced power cuts, never experienced broken roads. I would like to go further. AI, and I have been working in AI for 40 years, across all the different types of AI that existed before, has been developed very much in the Western tradition, the Cartesian tradition. We think, therefore we are. I think, therefore I am. First, it is individualistic, and then it equates intelligence with cognition.
Human intelligence is much more than cognition. If you were to think about AI developed, for instance, in the African Ubuntu tradition, which says, we are, therefore I am, it would be a completely different type of AI. So we do need to challenge ourselves not to go with this hammer that is there already and try to find the nails and call that innovation. It is not innovation. It’s just running around like chickens without heads and seeing if one of those hammers works. So that’s one. On the side of ethics and regulation, there are two assumptions that usually come with the idea, especially in this type of combination: that ethics is this kind of finger that points, thou shalt behave, thou shalt be good, and that regulation is about prohibiting you from doing things.
Neither is ethics the finger, nor is regulation necessarily only about prohibitions. Moreover, regulation, like AI, like the hammer, like the telephone, is an artifact that we built. We built regulation, and we can apply to regulation, and to the application of ethics, exactly the same principles that we apply to technology: let’s experiment, let’s try, let’s verify, let’s evaluate, let’s see what’s there, and not have this idea of the finger, or of the law written in stone which stays there once and forever. So that’s going back. And now, very quickly, on your question, because I don’t want to take much time: I think that education needs to start exactly at this point. Technology alone is not enough. So we really need to up the education of the engineers, the computer scientists, the data scientists on the humanities side. As engineers, we know very well how to solve a problem; we never ask ourselves why this is a problem, who has this problem, what the alternatives to my solution are, who gains, who loses, what is gained, what is lost. This is humanity. We need to somehow bring that together, in the engineering case and in the humanities and social science case.
We need them, because I’m an engineer, to help us understand that we need to be much more precise in what we are talking about. AI at this moment is actually an empty signifier. It doesn’t mean anything. Everything is AI. Nothing is AI. All kinds of things are AI. The applications are AI. The sectors are AI. The technology is AI. The research, everything is AI. And we cannot just go around with this word, which actually means magic. In most politicians’ talks, it means magic. And we want to regulate magic. Okay, good luck. So we need the humanities, the social sciences, to really help us be precise about what we are doing. So this is the education we need.
Fantastic, you couldn’t have made it much easier for me to then ask Paula: how are we doing that in companies? Because this is very easy to say; we need to translate the principles, the values, into concrete models that actually work: work for a company, work to deliver results, and work for people.
Yes, indeed. Well, first of all, thank you for that. And we were just talking about how this is our last speaking panel of the week, and that was a fiery way of drawing things together. I really appreciate it, kind of an energy boost. So yeah, I think the answer is actually much more practical and much less abstract than one might imagine, so I’ll just tell you a little bit about my experience. I spend my days at Salesforce both testing our products and making sure that our AI has features baked into it so that our customers can observe what’s going on, know how to tweak the controls, and understand, for example, when they should set an AI agent to escalate to a human, or a human to escalate back to AI, and so on.
And when we do this, it’s not like we think we at Salesforce have all the answers, because clearly we don’t, and we serve a variety of industries, all over the world, and so on. But all of our customers are basically asking the same questions, right? They’re asking: how do I know what kind of results I’m getting? How can I tell if something goes wrong? What are my options if something goes wrong? What part of AI ethics is your responsibility and what part is mine? And these questions don’t necessarily have the most mature answers, because we’re in the early innings of AI agents and there is a lot more work to do. But actually, these are the right questions to be asking, and it also allows for some flexibility and some cultural or industry specificity for people to find the right answers to those questions.
So that would be part one of my answer. It’s actually very, very practical. To adopt AI, companies and organizations need to be able to trust that it’s going to work. They don’t want to be embarrassed by it, right? And they’re not going to be able to scale it if it doesn’t work. So that’s number one. The second thing, which we are finding increasingly when we work with companies on this, is that the companies most successful at scaling AI put the people at the center of the transformation. They work not just top-down, like “you shall use this application.” They give people a chance to have a voice around what is actually working.
What is actually most useful to them in their day-to-day work? Where is AI going to actually help them, and where is it kind of useless? And it’s that kind of understanding of how work actually gets done, of what actual processes are going to benefit from that kind of application, that I think is really important and allows people to stay at the center of this large-scale transformation that we’re part of.
that might happen or should happen in the context of making AI ethical by design?
Well, in my current role at NITI Aayog, which is the think tank of the government of India, we’re looking at what the unlocks are for technology, including AI, to ensure that we can use technology to solve some of the biggest problems, right? Now, what Professor Virginia said about AI as a hammer, I think that’s a luxury of the developed countries, and I do agree with you when it comes to developed countries. But when you come to developing countries, where you don’t have a lot of resources, you cannot afford to use technology that takes a lot of deep investment to do things where you’re not sure, not sure of the ROIs. And one of the examples I want to give is that, as part of this summit, there were seven working groups set up, looking at different problems.
I chaired one of the working groups, on economic development and social good, which was all about impact and how you scale impact, right? And we had around 50 countries participating. Now, one of the things that came out of that working group, which is one of the outcomes of this summit, is the creation of AI Impact Commons globally, and it’s online. You can look it up, aiimpactcommons.global. It has impact stories from more than 30 countries, and counting, and it’s growing every day, with learnings on what kinds of problems can be solved and how you scale them. And the reason I said it’s a luxury of developed countries is that when you look at those impact stories, most of them from developing countries, you’ll be amazed by the kinds of problems they’re solving, from malnutrition to farmer suicides, you know, how do you lower farmer suicides by using technology to improve yield.
To ensure that they don’t suffer from climate change and shocks. I mean, the problems are so inspiring. So I think it won’t be fair to say that we don’t know what problems we are solving today, and I will absolutely stand by that. And I’ll go back to what Paula said. I’m not sure if industry today is really putting humans at the center of the loop, but I think they need to. They absolutely need to. Because as we develop technology, it seems like the end goal of AI that all the big companies are talking about is AGI. Now, when you look at what AGI means, it’s about control.
Why do we want to build something to control everyone? Why don’t we want to build something that is going to augment lives? If we could change the narrative, then I would say, yes, humans are at the center. Right now, I think we still have a lot of work to do to bring humans back into the center of the loop. And it’s something we have to realize, and industry has to realize: that is the only way you can build sustainable businesses, and that’s how you build your staying power. So it’s going to be very important to do.
Absolutely. And it’s about having these different entities around the table, but also having different governments, in this multilateral setting, talk to each other about regulation or, more generally, about policy, because at the end of the day we talk a lot about regulation, but regulations are only part of the policy framework that one could put in place. So actually, let’s go to Brando, because I saw he was kind of calling me with his eyes while we were talking, and I’m sure he wants to add on the multilateral setting. Please, over to you, Brando. Perhaps you were not calling me, but you’ve been called in. Nevertheless.
Well, I think that it’s very important that we use occasions like this summit to advance a global cooperation framework. And for sure, it’s also part of the mission of UNESCO to unite different cultures and approaches to what we are talking about; you explained earlier the longstanding work of the organization. But I think that we need to face the reality that there are issues where global cooperation will be crucial and that it’s still not sufficient. Let’s think of military use of AI, or the existential risks of losing control of very powerful AI models. This is something that is part of a controversial debate, we would say. But I wouldn’t dismiss renowned scientists who sustain that we are
in a context where the lack of globally adopted rules is putting us in very significant danger. And this is also part of the idea of balancing innovation and ethics. Because for sure we need domestic rules to foster the best opportunities out of the various use cases of AI. In these days I met many companies that were working on very practical, extremely useful AI use cases to ameliorate our lives, to ameliorate societal good. But this cannot be left in the hands of just the judgment of private sector companies, which have a specific objective: profit for their owners or shareholders. It’s not societal good; they might want to add that on top, but that’s not their objective, and that’s natural. So we need to have frameworks in place on what our daily impact with AI is, and we need to build common standards. The more broadly adopted standards we have globally, the better it will be for reaching results. But we also need a step further, which is global cooperation on those issues where we cannot actually do very much domestically; they are global issues. And I think that, with increased geopolitical tension, the use of AI for peace will soon be quite an important topic on which the international community has to find a way to take quick steps forward. I hope that our leaders will deal with that.
I can’t agree more with the need to coordinate and to have an approach that is global. And actually, allow me the prerogative of the moderator to call on my ADG Tawfik; I will take the consequences of that. What I would like to ask you is what it means to have people at the center. And let’s remember, in your case, given the work you lead in the communication and information sector, what is the role of information? Virginia was hinting at that before, in terms of awareness. Could you please share a bit of those insights?
Thank you, Maria Grazia. Let me pick it up where Brando left it; he said AI for peace. Maybe some in the room know why UNESCO was created, back in 1945, 80 years ago almost to the day. The mission of UNESCO was, and has been, to build peace in the minds of men and women. How? Through education, culture, the sciences, communication and information. Everything happens in the mindset of the people. Today, of course, we want AI to be a force for good, but it could also be a force for hazards, for harm, for risk. I tend to say technology is neutral; it depends what humans make out of it. It could be a force for good; it could be a force for, you mentioned, wars or unwanted things. So yes, humanity in the loop, that’s fundamental. I always ask myself, and that’s what I tell my team at UNESCO: if whatever we do in the field transforms lives, then we are spot on. If you can make the beneficiaries of our educational programs more successful through what you offer them, then that’s impact.
Where is the impact? AI can transform lives, yes. And you mentioned some examples to us. It can help cure cancer, as you said, provide food for the needy, and so on and so forth. We want that type of AI. And AI does not stand only for artificial intelligence. AI stands for all-inclusive; that’s AI as well. So if you take that perspective, if you really put humanity in the loop, at the center, not only in the loop but at the center. And allow me one minute to share with you: I have been at UNESCO for five years. My most memorable day happened last week, in a tiny village in remote southern Africa. A village in which people had no access to radio, no TV, no mobile telephony, no internet, nothing.
They always felt they were second-class citizens in their country. Imagine that you don’t have access to information: you don’t know what’s happening around you, you cannot call your relatives living in other cities. This was the case for 15 small communities. What UNESCO did was first provide community radios, setting up a tower with transmission equipment, so that through the radio people have information and know what’s happening. And when we did that, telecom operators came in to plug in their equipment and provide mobile telephony, and then came internet connectivity, and then UNESCO put in place early warning systems, because these areas were very prone to flooding, and whenever that happened it wiped out the cattle, the livelihood of the people, and so on. That’s transforming the lives of the people, and AI can contribute in a huge way to that. And I think if we put that at the center, then of course it has to be ethical, it has to be human-centered, it has to be accountable, transparent, all the principles that we talked about. And then comes the issue of advocacy and capacity development, because more informed policymakers will go this route. But if we don’t raise awareness, if we don’t do the advocacy and the capacity building and the training, then of course we can see some companies or some people going for the buck, for the profit out of this technology, not the social benefit, not transforming lives.
Thanks very much. Paula, over to you, since the company voice closes this round: how do you see this fact of including the other stakeholders in what you do, and how can that transform and help you deliver on AI?
Well, thank you for saying that, and I actually think it becomes more and more obvious that that’s actually the only way to scale the technology. Just think about it: if you’re developing a technology that’s meant to serve many different markets and many different populations, you need to know, for example, like the voice capability we have in our AI agent. We need to know that that voice capability, even if we’re just talking about English, forget about other languages for a second, needs to work with different vernaculars of English, different accents, etc. I work a lot on product accessibility, right?
It needs to understand a deaf accent, for example. And so the most inclusively designed technology is going to be the one that’s most successful. It’s going to increase accuracy rates and so on. To that end, I also think it’s actually a very, very exciting time to be able to use AI for inclusion. I mentioned product accessibility, for example. One of the things that to me is most hopeful and most exciting about this time is that we’re starting to see AI agents that correct, in real time, code that is not accessible; my team is working on this at Salesforce. Or a browser extension that corrects things in real time, so that if you’re on your phone and a common problem comes up, say you’re trying to zoom out or in and it breaks, it will fix it in real time. And this kind of technology is the difference between someone who’s able to use some software to actually get their job done and someone who’s excluded from getting their job done. So again, the point that I’m trying to make is that the most inclusively designed technology is going to be the most commercially successful, and also that this is an incredibly exciting time to be doing this work.
I’m really happy to hear from the voice of the industry that those who include are not just doing a favor to those who get included; rather, the AI systems themselves get superior. And that dispels a common legend out there that says, no, you know, it’s costly and perhaps the profit is not there. What we are hearing from the voices of the companies is really: well, no, because it’s a superior product, it’s a better product, it performs better. Last but not least, back to our Virginia. Here, especially, I would like to hear from you what you think is the role of a specific component of human capital, that is, skills.
And we have heard throughout this week about the importance of upskilling and reskilling. But is that really the solution?
Thank you very much. Firstly, going back: if I gave the impression that hammers are not useful, that’s not the case; there are many useful hammers. My point is more that we need a toolbox; we don’t need only hammers. And even outside of the Western world, we are too focused on hammers. Now, the skills: yes, we really need to focus on skills. We need to focus on our own capabilities, on our lived experience, and so on. Someone talked about AGI, and indeed, at this moment, the AGI concept is about power; it is about providing power to those companies that claim they will build it. How are they building it? With what I call the Play-Doh approach: they are putting all the data of the world together with all the capacities of the world, creating a huge ball of Play-Doh. Anyone who has played with Play-Doh before knows that after you play, there is no color, there is no shape, there is nothing anymore.
It’s just a thing. And then, of course, that thing might do something, but no one knows what’s inside, what came in, what came out, and so on. We need to go much broader in understanding what this AGI is. Fundamentally, AGI means a system that is more intelligent than us, that can solve problems that we cannot. We already have AGI. We always had AGI. It’s called collective intelligence. The moment we work together, we can do more than each one of us alone. If we use the AI technology that we are developing to support this collaboration, to develop the different skills, to integrate all our different capabilities, our differences, our different experiences, our different abilities, the different tools that we have developed,
then we get a much broader bouquet, no longer a ball of Play-Doh without color, but a huge bouquet of flowers of all those colors. So AGI is about us, and we cannot let the big companies run away with the concept of AGI through the idea that they are going to create a god which is going to solve our problems. AGI is about putting all of us together, because our collective intelligence is really what, at the end of the day, is going to solve, or support us in solving, our problems. Just one more thing, and I think that’s also part of the skills: technology, and there I disagree with you, is not neutral.
All technology embeds and encompasses our choices, our options, our data. All of that is part of it. We have to understand technology as a non-neutral artifact, and take those capabilities and also embrace the different perspectives and the different colors of this. But again, all together, that is the only way forward: not giving up and hoping that AI is going to solve whatever complex problems we have, but really embracing and enforcing collective intelligence. That is AGI.
Excellent. Collective intelligence. Now we are going to have a collective set of questions, just a couple, because time doesn’t allow for more. So please, when you want to intervene, be absolutely short: say your name, say whom you want to ask the question to, and the question, without recounting the history of humankind beforehand. So, I spotted a hand at the back, and there was a lady on this side. Now I think she got shy, and she just put her hand down. So let’s start with that gentleman. No, it’s the gentleman behind you, I’m sorry. The microphone is there; I can do everything from moderating to handing you the mic, we are proactive and problem-solving. Let’s go, your name is?
Hello everyone, I am Rajan, from Business Club TV, and I am the CEO and founder of a startup. I have a very basic question for Professor Virginia Dignam. Professor: what is AI policy?
Wow, okay, how many hours do we have? Okay, very shortly: AI policy is about the tools, the capabilities, the skills, the information, the knowledge, and the understanding needed to address the impact of AI. Not the technology, not the designing of the technology, but really addressing the impact of this technology across the whole loop and all of development: from the beginning, asking ourselves why we are using AI and whether this is the best problem to address, to the way we are developing it, to the way we are evaluating it. And addressing the impact of it.
No, I’m sorry, because we have to, let’s be inclusive, let’s allow the others to speak as well. Please, that lady, yes, exactly, the one with the hand raised. It’s just down here, three rows ahead. I’m going to be gender equal, so one-on-one. I’m not going to have only the men speak, because typically you’re the fastest to raise your hands; the women, we are more shy. Go ahead.
I love that. Thank you for that. Hi, my name is Rita Soni. I don’t know who should answer this question, but at the beginning of this panel, I heard someone say that those who are developing and designing AI have probably never experienced a power cut or potholes in the road. I thought that there would be more discussion about who is actually involved as the humans in the loop. Debjani, you know me. So I have to ask this question about the people who are actually developing it, and whether we’re thinking about responsibly employing them. Right now, we know that there are over half a million people in the world
that we consider impact workers. They’ve typically been excluded, but now they are included. So how do we support this as a movement, getting those who have experienced power cuts to help design and develop it? This is a development-related question.
Who wants to take it? Because we are over time. That’s the last question, and then we will have to say thank you and continue the conversation in parallel.
Yeah, fully. I mean, if you’re asking whether developers have suffered power cuts while developing the technology: anyone who’s working out of Bangalore or any Indian city, yes, they have. They’ve definitely suffered through them during development. Now, I think, Rita, the point you were making is how we make it more inclusive, how we bring people in. And I think that goes back to the perennial question of how you democratize not just access to technology, but also the design and creation of the technology, right? And it’s not just gender. It’s also how you diffuse it down to smaller cities, to people who are actually facing the problems in smaller cities.
And I think at least in India we are doing that through initiatives like Startup India, etc., which today are more focused on building capabilities in Tier 2 and Tier 3 cities, not just for adoption, but actually for design and development. So there’s a lot of focus there, and I’m sure there are founders here who have come from the smallest of cities in India. And the best part is that when we track the numbers, the growth of startups and founders is higher in the Tier 2, Tier 3, and Tier 4 cities than in Tier 1 cities. So that tells us we’re doing something right.
I hope you have enjoyed this at least half as much as I have enjoyed this panel. Please join me in thanking the panelists. And we’re going to do a group photo, so please stand up; we’re going to do a selfie with all of you in the back. Come here, stand like this, so we’re all together. This is our collective intelligence. Thank you very much.
Dr. Tawfik Jelassi
Speech speed
156 words per minute
Speech length
961 words
Speech time
369 seconds
Innovation and ethics reinforce each other
Explanation
Dr. Jelassi stresses that technology itself is neutral and its impact depends on human choices, so ethical considerations must be embedded in every innovative effort. He argues that a human‑in‑the‑loop approach ensures that AI serves the common good.
Evidence
“technology is neutral it depends what humans make out of it it could be a force for good it could be a force for you mentioned wars or unwanted things so yes humanity in the loop that’s fundamental I always ask myself and that’s my team at UNESCO I say if whatever we do in the field if that transforms lives then we are spot on if you make the beneficiaries of our educational program whatever if you can make them more successful through what you offer them then that’s impact” [26]. “it has to be ethical, it has to be human centered, it has to be accountable, transparent, all the principles that we talked about” [37].
Major discussion point
Relationship between innovation, ethics, and regulation
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Ethics‑by‑design and contextual frameworks essential
Explanation
He calls for AI to be built with ethical principles from the outset and adapted to cultural and sectoral contexts, ensuring accountability and transparency throughout the development cycle.
Evidence
“through education culture, sciences, communication and information everything happens in the mindset of the people today of course we want AI to be a force for good but it could be a force for hazards, for harm for risk I tend to say technology is neutral it depends what humans make out of it… humanity in the loop that’s fundamental” [26]. “it has to be ethical, it has to be human centered, it has to be accountable, transparent, all the principles that we talked about” [37].
Major discussion point
Operationalizing AI ethics – moving from principles to practice
Topics
Artificial intelligence | Capacity development
AI should be inclusive with advocacy and capacity‑building for policymakers
Explanation
Jelassi highlights the need for advocacy and capacity development so that policymakers are better informed and can guide AI deployment responsibly.
Evidence
“advocacy, capacity development because more informed policy makers will go this route but if we don’t bring up awareness if we don’t do the advocacy and the capacity building and the training then of course we can see that some companies or some people going for the buck for the profit out of this technology not the social benefit not transforming lives” [37].
Major discussion point
Human‑centered AI and accountability
Topics
Human rights and the ethical dimensions of the information society | Capacity development
Brando Benifei
Speech speed
119 words per minute
Speech length
947 words
Speech time
476 seconds
Regulation must be risk‑based and not stifle progress
Explanation
Benifei argues that domestic AI rules are needed to nurture opportunities while avoiding over‑regulation that could hinder innovation.
Evidence
“Because for sure we need domestic rules to foster the best opportunities out of the various use cases of AI” [31]. “But this cannot be left in the hands of just the… judgment of private sector companies that have a specific objective, profit for their owners or shareholders… we need to have frameworks in place on what is our daily impact with AI and we need to build common standards the more broadly adopted standards we have globally the best will be to reach results” [25].
Major discussion point
Relationship between innovation, ethics, and regulation
Topics
Artificial intelligence | The enabling environment for digital development
EU AI Act adopts risk‑based approach
Explanation
He notes that domestic AI rules should follow a risk‑based model that defines prohibited uses and ensures human control, mirroring the EU AI Act’s philosophy.
Evidence
“Because for sure we need domestic rules to foster the best opportunities out of the various use cases of AI” [31].
Major discussion point
Relationship between innovation, ethics, and regulation
Topics
Artificial intelligence | The enabling environment for digital development
Global cooperation needed for existential AI risks
Explanation
Benifei stresses that AI challenges such as military applications transcend borders and require coordinated international action.
Evidence
“global cooperation on those issues where we cannot actually do very much domestically they are global issues and I think that with an increased geopolitical tension soon the use of AI for peace will be quite an important topic on which the international community has to find a way to take quick steps forward” [25].
Major discussion point
Role of regulation and global governance
Topics
Artificial intelligence | The enabling environment for digital development
Multilateral frameworks essential for harmonising standards
Explanation
He argues that common, globally‑adopted standards are crucial to avoid fragmented AI rules and to ensure consistent protection worldwide.
Evidence
“we need to build common standards the more broadly adopted standards we have globally the best will be to reach results but we also need a step further that is global cooperation on those issues” [25].
Major discussion point
Role of regulation and global governance
Topics
Artificial intelligence | The enabling environment for digital development
Virginia Dignam
Speech speed
146 words per minute
Speech length
1372 words
Speech time
562 seconds
Innovation should go beyond a single “hammer”
Explanation
Dignam emphasizes that innovation must be broader than a single tool or approach and should draw on diverse cultural and societal traditions.
Evidence
“Innovation is much more than that” [8].
Major discussion point
Relationship between innovation, ethics, and regulation
Topics
Artificial intelligence | Social and economic development
Upskilling and reskilling are crucial; engineers must consider societal impact
Explanation
She points out that AI policy concerns the tools, capabilities, and skills needed to understand and manage AI’s impact, implying a need for continuous learning among engineers.
Evidence
“AI policy is about the tools, the capabilities, the skills, the information, the knowledge on the understanding how to address the impact of AI” [16]. “All technology embeds and encompasses our choices, our options, our data” [13].
Major discussion point
Education, skills, and collective intelligence
Topics
Capacity development | Artificial intelligence
Collective intelligence, not singular AGI, drives problem‑solving
Explanation
Dignam argues that human intelligence, encompassing more than mere cognition, is the real engine behind collective problem‑solving, rather than a single AGI system.
Evidence
“Human intelligence is much more than cognition” [27].
Major discussion point
Education, skills, and collective intelligence
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
AI policy focuses on tools, capabilities, and impact management
Explanation
She clarifies that AI policy is not about the technology itself but about the broader ecosystem—tools, capabilities, and how we address AI’s societal impact.
Evidence
“AI policy is about the tools, the capabilities, the skills, the information, the knowledge on the understanding how to address the impact of AI” [16]. “Not the technology, not the designing of the technology, but really addressing the impact of this technology from the whole loop and all development from the beginning, asking ourselves, why are we using AI?” [18].
Major discussion point
Education, skills, and collective intelligence
Topics
Artificial intelligence | Capacity development
Maria Grazia
Speech speed
164 words per minute
Speech length
1794 words
Speech time
655 seconds
Ethics can coexist with innovation; the trade-off implied by the session title is misleading
Explanation
Grazia highlights that inclusive, ethically‑designed AI leads to superior products, challenging the notion of a trade‑off between ethics and innovation.
Evidence
“I’m really happy to hear from the voice of the industry that the more, so those that include are actually not making a favor to those that get included, but actually the AI, the systems get superior” [6]. “No, I’m sorry, because we have to give it, let’s be inclusive, let’s allow the other” [11].
Major discussion point
Relationship between innovation, ethics, and regulation
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
UNESCO recommendation needs concrete actions
Explanation
She calls for UNESCO’s AI ethics recommendation to move from high‑level guidance to actionable implementation, such as ethical‑by‑design practices.
Evidence
“that might happen or should happen in the context of making AI ethical by design?” [32].
Major discussion point
Operationalizing AI ethics – moving from principles to practice
Topics
Artificial intelligence | Capacity development
Debjani Ghosh
Speech speed
164 words per minute
Speech length
1281 words
Speech time
466 seconds
Human oversight must be built into the entire development lifecycle
Explanation
Ghosh stresses that accountability requires oversight mechanisms from the design stage through commercialization.
Evidence
“Oversight has to be built into the entire development process from design to commercialization” [5].
Major discussion point
Operationalizing AI ethics – moving from principles to practice
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Sharing real‑world impact stories facilitates practical implementation
Explanation
She notes that platforms like the AI Impact Commons give practitioners a voice about what actually works, helping translate principles into practice.
Evidence
“They give people a chance to sort of have a voice around what is actually working” [40].
Major discussion point
Operationalizing AI ethics – moving from principles to practice
Topics
Artificial intelligence | Capacity development
Accountability ultimately lies with humans, not the technology
Explanation
Ghosh argues that responsibility for AI outcomes rests with people, emphasizing the need for a human‑in‑the‑loop approach.
Evidence
“technology is neutral it depends what humans make out of it it could be a force for good it could be a force for you mentioned wars or unwanted things so yes humanity in the loop that’s fundamental” [26].
Major discussion point
Human‑centered AI and accountability
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Startup India builds design and development capacity in Tier‑2/3 cities
Explanation
She highlights the Indian initiative that focuses on upskilling engineers in smaller cities, enabling them to participate in AI design and development.
Evidence
“And I think at least in India we are doing that through our initiators like Startup India, etc., which is more focused today on building capabilities in Tier 2, Tier 3 cities, not users, not just for adoption, but actually for design and development” [28].
Major discussion point
Inclusion of diverse developers and equitable design
Topics
Capacity development | The enabling environment for digital development
Democratizing design ensures inclusion of those facing real‑world constraints
Explanation
She stresses that AI design must involve people who live with real‑world constraints (e.g., power cuts) to avoid biased solutions.
Evidence
“that’s something that goes back to the perennial question, is how do you ensure that democratize not just access to technology, but you also democratize design and creation of the technology, right?” [36].
Major discussion point
Inclusion of diverse developers and equitable design
Topics
Closing all digital divides | Capacity development
Tim Curtis
Speech speed
73 words per minute
Speech length
339 words
Speech time
276 seconds
UNESCO’s global AI ethics recommendation provides a common foundation
Explanation
Curtis points to UNESCO’s Business Council as a conduit for bringing private‑sector leaders into the global AI ethics framework, underscoring its worldwide relevance.
Evidence
“Also privileged to have Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, who are a member of UNESCO’s Business Council and she has really been leading by example in the private sector’s responsible AI ethics” [1].
Major discussion point
Role of regulation and global governance
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Paula Goldman
Speech speed
159 words per minute
Speech length
846 words
Speech time
318 seconds
Trust and people‑centred scaling are key for AI adoption
Explanation
Goldman argues that scaling AI successfully requires understanding diverse markets and placing user needs at the core of product development.
Evidence
“well thank you for saying that and I actually think that it becomes more and more obvious that that’s actually the only way to scale the technology … you need to know, for example, like we have in our AI agent, we have a voice capability” [19].
Major discussion point
Human‑centered AI and accountability
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence
Inclusive design yields superior, commercially successful products
Explanation
She argues that technologies designed for accessibility not only broaden inclusion but also drive market success.
Evidence
“I work a lot on product accessibility, right?” [2]. “And so the most inclusively designed technology is going to be the one that’s most successful” [3].
Major discussion point
Human‑centered AI and accountability
Topics
Closing all digital divides | Artificial intelligence
Impact stories give people a voice about what works
Explanation
Goldman notes that platforms sharing real‑world AI impact stories empower practitioners to highlight effective solutions.
Evidence
“They give people a chance to sort of have a voice around what is actually working” [40].
Major discussion point
Operationalizing AI ethics – moving from principles to practice
Topics
Artificial intelligence | Capacity development
Audience
Speech speed
101 words per minute
Speech length
45 words
Speech time
26 seconds
AI policy is about tools, capabilities, and managing impact
Explanation
Rajan asks what AI policy means, prompting a response that frames policy around tools, skills, and impact management rather than pure technology design.
Evidence
“hello everyone myself Rajan I am from business club TV and I am the CEO and the founder of the startup so I have a very basic questions for professor Virginia Dignam … what is AI policy” [17]. “AI policy is about the tools, the capabilities, the skills, the information, the knowledge on the understanding how to address the impact of AI” [16].
Major discussion point
Education, skills, and collective intelligence
Topics
Artificial intelligence | Capacity development
Rita Soni
Speech speed
161 words per minute
Speech length
167 words
Speech time
62 seconds
Democratising AI design requires involving people who experience real‑world constraints
Explanation
Soni raises the need to include developers who face everyday challenges such as power cuts, ensuring AI solutions are grounded in lived realities.
Evidence
“I heard someone say those that are developing AI and designing it probably have never experienced a power cut or potholes in the road” [33]. “So how do we support this as a movement of getting those that have experienced power cuts to help design and develop it?” [29].
Major discussion point
Inclusion of diverse developers and equitable design
Topics
Closing all digital divides | Capacity development
Agreements
Agreement points
Innovation and ethics are not contradictory but mutually reinforcing
Speakers
– Dr. Tawfik Jelassi
– Maria Grazia
– Debjani Ghosh
Arguments
Innovation and ethics are not contradictory but mutually reinforcing – ethics makes AI more trustworthy and widely adopted
Innovation and ethics should not be seen as trade-offs – both can be pursued simultaneously without hindering each other
The choice is not between innovation and ethics, but between using technology for good versus harmful purposes
Summary
All three speakers reject the false dichotomy between innovation and ethics, arguing that ethical considerations actually enhance innovation by making AI more trustworthy and widely adopted
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Human-centered approach is essential for AI development and deployment
Speakers
– Dr. Tawfik Jelassi
– Debjani Ghosh
– Paula Goldman
– Virginia Dignam
Arguments
AI systems must be ethical by design with human oversight built into the entire development process from conception
Accountability for AI ethics lies with humans, not technology, requiring oversight at every stage of development
Putting people at the center of AI transformation is crucial for successful scaling and adoption
Collective intelligence represents true AGI – humans working together with AI support rather than AI replacing human intelligence
Summary
All speakers emphasize that humans must remain central to AI development, deployment, and governance, with proper oversight and accountability mechanisms throughout the process
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Inclusive design leads to better AI products and outcomes
Speakers
– Paula Goldman
– Virginia Dignam
– Debjani Ghosh
Arguments
Inclusive design leads to more commercially successful and superior AI products
AI development has been dominated by Western Cartesian thinking, but other cultural approaches like Ubuntu could create different types of AI
Democratizing both access to and creation of technology is essential, including expanding development capabilities to smaller cities
Summary
Speakers agree that inclusive design approaches, incorporating diverse perspectives and experiences, result in superior AI products that work better for all users
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Regulation should be proactive and built into development processes
Speakers
– Dr. Tawfik Jelassi
– Debjani Ghosh
– Brando Benifei
Arguments
AI systems must be ethical by design with human oversight built into the entire development process from conception
Regulation should be built into development processes rather than being an afterthought
Sometimes proactive regulation is necessary to prevent unchangeable negative consequences, rather than only acting after problems occur
Summary
All speakers advocate for proactive regulation and ethical considerations to be integrated from the beginning of AI development rather than applied as an afterthought
Topics
Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
Similar viewpoints
Both speakers are critical of current AI development approaches and advocate for fundamental restructuring that keeps humans at the center rather than pursuing AI systems that seek control over humans
Speakers
– Virginia Dignam
– Debjani Ghosh
Arguments
Current approaches to both innovation and ethics in AI are fundamentally flawed and need restructuring
AI should augment human lives rather than seek control, with humans remaining central to the process
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Both speakers provide concrete examples of AI’s positive impact in developing countries and underserved communities, demonstrating practical applications that transform lives
Speakers
– Dr. Tawfik Jelassi
– Debjani Ghosh
Arguments
AI can transform lives by providing access to information, connectivity, and early warning systems in underserved communities
AI is already solving critical problems in developing countries, from malnutrition to climate resilience, as demonstrated by impact stories from 30+ countries
Topics
Artificial intelligence | Information and communication technologies for development | Social and economic development
All emphasize the need for more diverse perspectives in AI development, particularly including voices from developing countries and those who have experienced the problems AI aims to solve
Speakers
– Virginia Dignam
– Rita Soni
– Audience
Arguments
AI development has been dominated by Western Cartesian thinking, but other cultural approaches like Ubuntu could create different types of AI
Inclusive development requires involving people who have experienced the problems AI aims to solve, including impact workers and developers from diverse backgrounds
Most AI developers lack experience with infrastructure challenges faced by developing countries, limiting perspective on real-world problems
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Unexpected consensus
Technology is not neutral
Speakers
– Virginia Dignam
– Dr. Tawfik Jelassi
Arguments
Technology is not neutral but embeds human choices and perspectives, requiring diverse viewpoints in development
AI can transform lives by providing access to information, connectivity, and early warning systems in underserved communities
Explanation
While Dr. Jelassi initially states ‘technology is neutral,’ both speakers ultimately agree that technology embeds human choices and values, making the development process and the people involved crucial to outcomes
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Private sector alignment with ethical principles
Speakers
– Paula Goldman
– Debjani Ghosh
Arguments
Inclusive design leads to more commercially successful and superior AI products
AI should augment human lives rather than seek control, with humans remaining central to the process
Explanation
Unexpectedly, both the private sector representative and government official agree that ethical, inclusive approaches are not just morally right but also commercially advantageous, challenging the assumption that business interests conflict with ethical considerations
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Global cooperation necessity
Speakers
– Dr. Tawfik Jelassi
– Brando Benifei
– Tim Curtis
Arguments
UNESCO’s recommendation provides a global framework adopted by 193 member states for ethical AI principles
Global cooperation frameworks are essential for addressing issues like military AI use and existential risks
UNESCO believes in ensuring ethical and human-centered AI deployment while encouraging AI development and innovation that benefits humanity, particularly the global south
Explanation
Representatives from different international organizations and regions unexpectedly show strong consensus on the need for global cooperation and frameworks, despite often competing institutional interests
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The development of the WSIS framework
Overall assessment
Summary
The speakers demonstrated remarkable consensus on key principles: innovation and ethics are complementary rather than competing; human-centered approaches are essential; inclusive design produces superior outcomes; and proactive regulation is necessary. There was also strong agreement on the need for global cooperation and the importance of involving diverse perspectives in AI development.
Consensus level
High level of consensus across different stakeholder groups (academia, government, private sector, international organizations) suggests a mature understanding of AI governance challenges and potential for coordinated action. The alignment between commercial interests and ethical principles particularly strengthens the foundation for sustainable AI development policies.
Differences
Different viewpoints
Whether AI is being used appropriately as a problem-solving tool
Speakers
– Virginia Dignam
– Debjani Ghosh
Arguments
Current approaches to both innovation and ethics in AI are fundamentally flawed and need restructuring
AI is already solving critical problems in developing countries, from malnutrition to climate resilience, as demonstrated by impact stories from 30+ countries
Summary
Virginia argues that AI is being misused as a ‘hammer looking for nails’ and that innovation is wrongly understood as using AI for any problem. Debjani counters that this perspective is a ‘luxury of developed countries’ and that developing countries with limited resources are successfully using AI to solve well-defined, critical problems like malnutrition and climate resilience.
Topics
Artificial intelligence | Information and communication technologies for development
Whether technology is neutral
Speakers
– Virginia Dignam
– Dr. Tawfik Jelassi
Arguments
Technology is not neutral but embeds human choices and perspectives, requiring diverse viewpoints in development
AI can transform lives by providing access to information, connectivity, and early warning systems in underserved communities
Summary
Virginia explicitly states that technology is not neutral and embeds human choices, options, and data, while Dr. Jelassi takes the position that ‘technology is neutral – it depends what humans make out of it.’ This represents a fundamental philosophical disagreement about the nature of technology itself.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
The definition and approach to AGI (Artificial General Intelligence)
Speakers
– Virginia Dignam
– Debjani Ghosh
Arguments
Collective intelligence represents true AGI – humans working together with AI support rather than AI replacing human intelligence
AI should augment human lives rather than seek control, with humans remaining central to the process
Summary
Virginia redefines AGI as collective intelligence and criticizes the current ‘play-doh approach’ of combining all data, while Debjani questions why companies focus on AGI as control rather than augmentation. Though both critique current AGI approaches, Virginia offers a complete redefinition while Debjani focuses on changing the narrative from control to augmentation.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Unexpected differences
The role of regulation in innovation
Speakers
– Dr. Tawfik Jelassi
– Maria Grazia
Arguments
Innovation and ethics are not contradictory but mutually reinforcing – ethics makes AI more trustworthy and widely adopted
Innovation and ethics should not be seen as trade-offs – both can be pursued simultaneously without hindering each other
Explanation
While both speakers ultimately agree that innovation and ethics can coexist, Dr. Jelassi makes a subtle but important distinction by stating he sees contradiction between ‘innovation and regulation’ rather than ‘innovation and ethics.’ This suggests different views on the role of formal regulatory frameworks versus ethical principles in guiding innovation.
Topics
Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
The current state of human-centered AI in industry
Speakers
– Paula Goldman
– Debjani Ghosh
Arguments
Putting people at the center of AI transformation is crucial for successful scaling and adoption
AI should augment human lives rather than seek control, with humans remaining central to the process
Explanation
Despite both advocating for human-centered AI, there’s an unexpected disagreement about whether industry is actually implementing this. Paula describes current practices at Salesforce as human-centered, while Debjani explicitly states ‘I’m not sure if industry today is really putting human at the center of the loop, but I think they need to,’ suggesting skepticism about current industry practices.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Overall assessment
Summary
The discussion revealed moderate levels of disagreement primarily around philosophical approaches to AI development, the role of regulation versus ethics, and assessments of current AI applications. Key areas of disagreement included whether AI is being appropriately applied (particularly in developing vs developed country contexts), the fundamental nature of technology neutrality, and definitions of AGI.
Disagreement level
The disagreement level was moderate but constructive, with speakers generally building on each other’s points rather than directly contradicting them. The disagreements were more about emphasis, approach, and philosophical foundations rather than fundamental opposition to core principles. This suggests a healthy debate that could lead to more nuanced and comprehensive AI governance approaches, though it also indicates that significant work remains to align different stakeholder perspectives on key implementation details.
Partial agreements
Both agree on the need for proactive approaches to AI governance, but disagree on timing and methods. Dr. Jelassi emphasizes building ethics into design from the start, while Brando argues for early regulation to prevent irreversible consequences, using social media as a cautionary example.
Speakers
– Dr. Tawfik Jelassi
– Brando Benifei
Arguments
AI systems must be ethical by design with human oversight built into the entire development process from conception
Sometimes proactive regulation is necessary to prevent unchangeable negative consequences, rather than only acting after problems occur
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Both agree on human-centered AI, but Paula focuses on practical implementation within companies (giving people voice in what works), while Debjani questions whether industry is actually doing this and calls for fundamental changes in how companies approach AI development.
Speakers
– Paula Goldman
– Debjani Ghosh
Arguments
Putting people at the center of AI transformation is crucial for successful scaling and adoption
AI should augment human lives rather than seek control, with humans remaining central to the process
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Both advocate for more inclusive approaches to AI development, but Virginia focuses on educational reform and interdisciplinary collaboration, while Paula emphasizes the commercial benefits of inclusive design and practical implementation in products.
Speakers
– Virginia Dignam
– Paula Goldman
Arguments
Engineers need education in humanities to understand problem context, while humanities scholars need precision in discussing AI technology
Inclusive design leads to more commercially successful and superior AI products
Topics
Artificial intelligence | Capacity development | Closing all digital divides
Takeaways
Key takeaways
Innovation and ethics in AI are not contradictory but mutually reinforcing – ethical AI systems are more trustworthy and widely adopted
The real choice is between using AI for societal good versus harmful purposes, not between innovation and ethics
Human accountability is central to AI ethics since technology embeds human choices and is not neutral
AI must be ethical by design with human oversight built into the entire development process from conception to deployment
Risk-based regulation can effectively balance innovation with ethical considerations by targeting specific high-risk applications
Collective intelligence – humans working together with AI support – represents true artificial general intelligence rather than AI replacing humans
Inclusive design leads to superior AI products that are more commercially successful and technically robust
AI is already demonstrating significant positive impact in developing countries, solving problems from malnutrition to climate resilience
Global cooperation frameworks are essential for addressing transnational AI challenges like military applications and existential risks
Education must bridge the gap between technical development and humanities understanding to create more contextually aware AI solutions
Resolutions and action items
Creation of AI Impact Commons globally (aiimpactcommons.global) featuring impact stories from 30+ countries with scaling learnings
UNESCO’s recommendation on AI ethics provides an operational framework adopted by 193 member states for implementation
Development of practical tools like real-time accessibility corrections and browser extensions to demonstrate inclusive AI applications
Expansion of startup and development capabilities to smaller cities, particularly in India through initiatives like Startup India
Implementation of red-flag checkpoints at every stage of AI development and testing in sandbox environments before commercialization
Unresolved issues
How to align diverse global ethical values when humans themselves don’t share the same ethical frameworks
How to democratize not just access to AI technology but also participation in its design and development
How to ensure adequate representation of people who have experienced the problems AI aims to solve in development teams
How to move beyond Western Cartesian thinking in AI development to incorporate diverse cultural approaches like Ubuntu philosophy
How to establish effective global governance mechanisms for military AI applications and existential risk management
How to balance proactive regulation with innovation speed, particularly regarding when to regulate before negative consequences become unchangeable
How to ensure responsible employment and inclusion of impact workers who are often excluded from AI development discussions
Suggested compromises
Risk-based regulatory approach that identifies specific high-risk AI applications requiring oversight while allowing innovation in lower-risk areas
Prohibition of certain unacceptable AI uses (like predictive policing and emotional recognition in workplaces) while permitting beneficial applications
Building regulatory frameworks into development processes rather than treating them as afterthoughts, with flexibility for cultural and industry specificity
Combining top-down policy frameworks with bottom-up input from end users about what actually works in practice
Treating regulation and ethics as experimental artifacts that can be tested, evaluated, and refined rather than fixed prohibitions
Focusing on transparency and human control requirements rather than blanket restrictions on AI development
Balancing global cooperation on transnational issues with domestic flexibility for context-specific implementations
Thought provoking comments
I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is between do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse.
Speaker
Debjani Ghosh
Reason
This comment fundamentally reframes the entire discussion by rejecting the premise that innovation and ethics are in tension. Instead, it positions the real choice as between beneficial versus harmful applications of technology, shifting focus from regulatory constraints to purposeful development.
Impact
This reframing influenced subsequent speakers to move away from the traditional innovation-vs-regulation debate and focus more on intentional, human-centered development. It set the tone for discussing AI as a tool that can be directed toward specific societal goals rather than an inevitable force that needs to be constrained.
We are doing both the innovation and the ethics and regulation side all wrong… On the innovation side, we are doing it wrong because we are somehow understanding innovation as the capacity of using this hammer that we found out a couple of years ago of Gen AI… Most people developing AI never experienced power cuts, never experienced broken roads… AI has been developed extremely on the Western tradition, the Cartesian tradition. We think, therefore we are… If you would think about AI developed for instance in the African Ubuntu tradition, it says, we are, therefore I am. It would be a completely different type of AI.
Speaker
Virginia Dignum
Reason
This is perhaps the most provocative comment in the discussion, challenging fundamental assumptions about both innovation and regulation while introducing crucial perspectives on cultural bias in AI development. The Ubuntu philosophy comparison offers a concrete alternative to Western-centric AI development.
Impact
This comment created a significant shift in the conversation, introducing questions of cultural representation and lived experience in AI development. It challenged other panelists to think beyond technical solutions and consider whose perspectives are embedded in current AI systems. The ‘hammer’ metaphor became a recurring reference point for discussing appropriate technology application.
We have chosen globally, including in Europe… to actually not regulate [social media] we have let the social media diffuse without regulation and today we are discussing about limits for minors… but we are doing it all now after a lot of things have happened… we have already unmodifiable consequences… Sometimes we might be wrong and risk unchangeable effects.
Speaker
Brando Benifei
Reason
This comment provides crucial historical context by drawing parallels between current AI regulation debates and past regulatory failures with social media. It introduces the concept of ‘unmodifiable consequences’ and argues for proactive rather than reactive regulation.
Impact
This historical perspective added urgency to the discussion and provided concrete justification for the EU’s risk-based approach to AI regulation. It shifted the conversation from abstract principles to practical lessons learned from previous technological deployments, influencing how other panelists discussed the timing and necessity of regulation.
Technology alone is not enough… we really need to up our education of the engineers, the computer scientists, the data scientists on the humanity side… and in the humanities and social science case… we need them to help us understand that we need to be much more precise in what we are talking about. AI at this moment is actually an empty signifier. It doesn’t mean anything. Everything is AI. Nothing is AI… we want to regulate magic.
Speaker
Virginia Dignum
Reason
This comment identifies a fundamental problem in AI discourse – the lack of precision in terminology and the need for interdisciplinary collaboration. The characterization of AI as ‘magic’ in political discourse is particularly insightful as it explains why regulation efforts often miss the mark.
Impact
This observation about AI as an ‘empty signifier’ resonated throughout the remaining discussion, with other speakers becoming more specific about what aspects of AI they were addressing. It highlighted the need for better communication between technical and non-technical stakeholders and influenced the conversation toward more concrete, practical applications.
AGI is about us. It’s about putting all us together because our collective intelligence is really what, at the end of the day, is going to solve or to support us solving our problems… We already have AGI. We always had AGI. It’s called collective intelligence.
Speaker
Virginia Dignum
Reason
This comment completely redefines AGI (Artificial General Intelligence) from a technological achievement to a social one, challenging the narrative that individual companies will create superintelligent systems. It offers a democratic alternative to corporate-controlled AI development.
Impact
This redefinition of AGI as collective intelligence provided a powerful counter-narrative to corporate AI development and influenced the final discussion toward collaborative approaches. It connected back to UNESCO’s mission of bringing diverse perspectives together and reinforced the theme of human-centered AI development.
The most inclusively designed technology is going to be the one that’s most successful. It’s going to increase accuracy rates… the most inclusively designed technology is going to be the most commercially successful.
Speaker
Paula Goldman
Reason
This comment challenges the common assumption that inclusive design is costly or reduces profitability. Instead, it argues that inclusion improves technical performance and commercial success, providing a business case for ethical AI development.
Impact
This perspective helped bridge the gap between ethical imperatives and business realities, showing other panelists and the audience that inclusive design isn’t just morally right but also technically and commercially superior. It influenced the discussion toward practical implementation strategies rather than abstract principles.
Overall assessment
These key comments fundamentally transformed the discussion from a traditional ‘innovation versus ethics’ debate into a more nuanced conversation about purposeful, inclusive, and culturally aware AI development. The most impactful interventions challenged basic assumptions: that innovation and ethics are in tension, that current AI development approaches are optimal, that regulation necessarily hinders progress, and that AGI is a technological rather than social achievement. Virginia Dignum’s provocative challenges to Western-centric AI development and Debjani Ghosh’s reframing of the core choice facing humanity elevated the conversation beyond technical considerations to fundamental questions about whose values and experiences shape AI systems. The discussion evolved from abstract principles to concrete strategies for embedding diverse perspectives and lived experiences into AI development, ultimately reinforcing UNESCO’s vision of human-centered, culturally sensitive technology development.
Follow-up questions
How can we successfully align every single human on this planet to the same ethical values?
Speaker
Debjani Ghosh
Explanation
This is a fundamental challenge in AI ethics implementation, as different ethical frameworks across cultures and societies create complexity in developing universally acceptable AI systems
How do we democratize not just access to technology, but also democratize design and creation of technology?
Speaker
Rita Soni (audience member)
Explanation
This addresses the critical issue of inclusive development where those who experience real-world problems (like power cuts and infrastructure issues) should be involved in designing AI solutions
How do we support the movement of getting those that have experienced power cuts to help design and develop AI?
Speaker
Rita Soni (audience member)
Explanation
This focuses on ensuring that AI developers include people from diverse socioeconomic backgrounds who understand ground-level challenges
How do we responsibly employ the half a million people worldwide considered impact workers who have typically been excluded but are now involved in AI development?
Speaker
Rita Soni (audience member)
Explanation
This addresses labor rights and inclusion of workers who contribute to AI development but may be marginalized in the process
What would AI look like if developed in the African Ubuntu tradition (‘we are, therefore I am’) rather than the Western Cartesian tradition (‘I think, therefore I am’)?
Speaker
Virginia Dignum
Explanation
This suggests exploring alternative philosophical foundations for AI development that emphasize collective rather than individual intelligence
How can we build global cooperation frameworks for issues like military use of AI and existential risks of losing control of powerful AI models?
Speaker
Brando Benifei
Explanation
These are global challenges that require international coordination and cannot be addressed by domestic regulations alone
How do we move from AI being used as a ‘hammer looking for nails’ to genuine innovation that addresses real problems?
Speaker
Virginia Dignum
Explanation
This challenges the current approach to AI implementation and calls for more thoughtful application of technology to solve specific problems
How do we change the narrative from AGI (Artificial General Intelligence) being about control to being about augmenting lives?
Speaker
Debjani Ghosh
Explanation
This addresses the fundamental purpose and goals of advanced AI development, questioning whether current approaches serve humanity’s best interests
How can we make AI terminology more precise rather than using ‘AI’ as an empty signifier that means everything and nothing?
Speaker
Virginia Dignum
Explanation
This addresses the need for clearer definitions and understanding of what AI actually encompasses to enable better regulation and development
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.