Ethical AI: Keeping Humanity in the Loop While Innovating
20 Feb 2026 14:00h - 15:00h
Summary
The UNESCO-sponsored panel “Humanity in the Loop” examined how to balance AI innovation with ethical safeguards, emphasizing a human-centred approach to technology deployment [1][2]. UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states, frames ethical AI around human rights, dignity and fundamental freedoms and calls for these principles to be operationalised in practice [21][23][40-41]. Dr. Tawfik Jelassi argued that ethics and innovation are not contradictory but mutually reinforcing, insisting that ethical reflection must be built into AI design from the outset (“ethical by design”) and that the UNESCO recommendation provides a global framework for this [38-40][41]. Debjani Ghosh stressed that the real choice is whether technology serves humanity’s basic needs or fuels conflict, and that accountability ultimately rests with humans; she advocated embedding oversight throughout the development lifecycle and using “sandbox” testing to make ethics an integral part rather than an afterthought [49-56][65-68].
Brando Benifei described the EU AI Act’s risk-based approach, noting prohibited uses such as predictive policing and emotion recognition, and argued that regulation must protect human rights without stifling innovation, while also highlighting the need for global cooperation on issues like military AI [78-84][80-86][191-198]. Virginia Dignum critiqued the narrow “hammer-and-nail” view of innovation, calling for broader, culturally diverse conceptions of AI (e.g., Ubuntu) and for education that equips engineers with social-science perspectives to avoid treating AI as a magical, neutral tool [100-112][124-138]. Paula Goldman shared Salesforce’s practical experience, explaining that embedding ethical controls, real-time accessibility features, and human-in-the-loop escalation mechanisms not only improves inclusivity but also yields superior, more marketable products [140-158][155-159].
The discussion repeatedly highlighted the importance of awareness, capacity-building and multilateral dialogue, with Dr. Jelassi recalling UNESCO’s grassroots projects that used communication tools to empower remote communities, illustrating how AI can be a force for good when coupled with education and advocacy [204-214]. Participants agreed that translating high-level principles into concrete, context-specific mechanisms (through regulation, industry practice, and education) is essential for trustworthy AI deployment [34][65-68][155-159][124-138]. Maria Grazia emphasized that the “human-centered” approach requires not only technical solutions but also deliberate policy instruments and stakeholder participation to define unacceptable uses of AI [88-91][186-188]. The audience raised concerns about involving developers from under-served regions, prompting Debjani to note India’s initiatives such as Startup India that aim to democratise AI design beyond major urban centres [290-298]. Overall, the panel concluded that a coordinated global framework, inclusive design, and continuous human oversight are necessary to ensure AI advances societal welfare while mitigating risks [191-198][226-230][236-242].
Keypoints
Major discussion points
– UNESCO’s core position that ethics and innovation are complementary, not opposing forces.
The moderator stresses that “the position of UNESCO… is this is not true” that regulation hinders innovation, and outlines the three pillars (human rights, dignity, and freedoms) that must guide AI ([20-24]). Dr. Jelassi reinforces this by stating that “ethics and innovation… reinforce each other” and that AI must be “ethical by design, ex-ante” ([38-41]).
– The need for a risk-based regulatory framework to balance innovation with safeguards.
Brando Benifei explains the EU AI Act’s risk-based approach, naming specific high-risk sectors and prohibited uses (e.g., predictive policing, emotion recognition) and argues that regulation must be proactive rather than purely ex-post ([74-84]). Maria’s follow-up highlights the importance of defining “what we do not want the technology to do” as a regulatory baseline ([88-90]).
– Embedding ethical oversight throughout the AI development lifecycle.
Debjani Ghosh argues that oversight must be built “into the entire development process from design to commercialization” with “flag-offs at every part” and sandbox testing, turning ethics into a design principle rather than an afterthought ([65-69]).
– Broadening the conceptual and cultural foundations of AI through education and collective intelligence.
Virginia Dignum critiques the “hammer” metaphor and Western-centric, individualistic AI traditions, calling for diverse epistemologies (e.g., African Ubuntu) and a “toolbox” of skills and perspectives ([106-112][124-138]). She later expands this to collective intelligence as the true “AGI,” emphasizing the non-neutrality of technology and the need for interdisciplinary skill-building ([235-254]).
– Practical industry steps toward inclusive, trustworthy AI.
Paula Goldman describes concrete practices at Salesforce: real-time monitoring, escalation protocols, and designing for accessibility (e.g., handling different accents and disabilities), arguing that inclusive design yields superior, more marketable products ([140-159][220-227]).
Overall purpose / goal of the discussion
The UNESCO-sponsored panel “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI” was convened to explore how AI can be advanced responsibly. Participants from UNESCO, government, academia, and industry shared perspectives on translating UNESCO’s global AI ethics recommendation into actionable policies, regulatory models, educational curricula, and corporate practices that keep humans at the centre of AI development and deployment.
Overall tone and its evolution
– Opening tone: Formal and optimistic, with the moderator framing the debate as a constructive challenge to the “innovation vs. ethics” narrative ([13-18]).
– Mid-session tone: Becomes more critical and reflective; Virginia delivers a “controversial” critique of current innovation paradigms ([96-112]), and Debjani stresses the difficulty of universal ethical alignment ([51-56]).
– Later tone: Shifts toward collaborative problem-solving, highlighting concrete regulatory proposals (EU AI Act) and practical industry measures (Salesforce’s inclusive design) ([74-84][140-159]).
– Closing tone: Hopeful and inclusive, emphasizing collective intelligence, global cooperation, and the need to democratize both access and design of AI ([191-199][235-254]).
Overall, the discussion moves from high-level framing, through critical analysis of gaps, to concrete solutions and a unifying call for global, multidisciplinary cooperation.
Speakers
– Tim Curtis
– Role/Title: Regional Director for UNESCO South Asia
– Area of Expertise: UNESCO regional leadership, AI policy and innovation
– Debjani Ghosh
– Role/Title: Distinguished Fellow, NITI Aayog; member of the ETIO think-tank for the Government of India
– Area of Expertise: AI ecosystem development, policy formulation, economic and social development initiatives [S4][S5]
– Dr. Tawfik Jelassi
– Role/Title: Assistant Director General for Communication and Information, UNESCO
– Area of Expertise: Communication, information & knowledge societies; AI ethics and governance [S6][S7][S8]
– Brando Benifei
– Role/Title: Member of the European Parliament
– Area of Expertise: EU AI Act, risk-based AI regulation, international AI policy coordination [S9][S10]
– Paula Goldman
– Role/Title: Chief Ethical and Humane Use Officer, Salesforce
– Area of Expertise: Ethical AI implementation in industry, responsible AI product design
– Virginia Dignum
– Role/Title: Professor and Director of the AI Policy Lab, Umeå University; member of UNESCO’s AI Ethics Experts Without Borders
– Area of Expertise: AI policy, AI ethics, interdisciplinary education and research [S15]
– Rita Soni
– Role/Title: Audience participant (no formal title provided)
– Area of Expertise: (not specified)
– Maria Grazia
– Role/Title: Chief of the Executive Office of UNESCO’s Social and Human Sciences sector; Moderator of the panel
– Area of Expertise: Microeconomics, innovation dynamics, AI governance and ethics [S20]
– Audience
– Role/Title: General audience members (including individuals such as “Rajan”)
– Area of Expertise: (not specified)
Additional speakers:
– (None – all speakers appearing in the transcript are covered in the list above.)
The session opened with Tim Curtis, UNESCO’s Regional Director for South Asia, welcoming participants to the UNESCO-sponsored panel “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI” and thanking the Government of India for its collaboration. He outlined UNESCO’s aim to promote ethical, human-centred AI while supporting innovation, especially in the Global South [1-3].
Curtis introduced the panellists: Dr Tawfik Jelassi, Assistant Director-General for Communication and Information and a lead of UNESCO’s AI-ethics work; Professor Virginia Dignum, director of the AI Policy Lab at Umeå University and member of UNESCO’s AI Ethics Experts Without Borders; Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce and member of UNESCO’s Business Council; Debjani Ghosh, distinguished fellow at NITI Aayog and architect of India’s AI ecosystem; and Brando Benifei, Member of the European Parliament, who would discuss the EU AI Act. The moderator was Dr Maria Grazia from UNESCO’s Social and Human Sciences sector [4-10].
Maria Grazia opened by questioning the premise of the title, arguing that innovation and ethics need not be opposed. Drawing on her microeconomics background, she linked innovation to productivity, welfare and well-being and noted that regulation does not necessarily hinder these dynamics. She reminded the audience of the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states and, to date, the only global recommendation of its kind, built on three non-negotiable pillars: human rights, human dignity and fundamental freedoms [12-25][26-33][34-38].
Dr Jelassi responded that there is no contradiction between ethics and innovation; the real tension lies between innovation and over-regulation. He argued that embedding ethical reflection from the design stage makes AI systems more trustworthy and therefore more innovative, emphasizing UNESCO’s “ethical-by-design” principle and the Recommendation’s calls for human oversight, non-discrimination, cultural respect and environmental sustainability [38-41][42-45].
Debjani Ghosh then explored how the high-level principles can be operationalised. She reframed the debate as a choice between using AI to eradicate suffering (e.g., disease, food insecurity, loss of dignity) or to amplify conflict. Acknowledging that universal ethical alignment is impossible, she insisted that accountability must remain with people, not algorithms, and advocated lifecycle “flag-off” checkpoints and sandbox testing to embed ethics as a design principle rather than an afterthought [48-56][57-69].
Brando Benifei described the EU AI Act’s risk-based approach. He identified high-risk sectors such as workforce, health and justice, and outlined strict requirements on data quality, cybersecurity, governance and human control, while prohibiting applications like predictive policing, workplace emotion-recognition and manipulative subliminal techniques. He argued that proactive, risk-based regulation protects human rights without stifling innovation and called for global cooperation on trans-national challenges such as military AI [70-84][85-87][191-198].
Professor Virginia Dignum critiqued the prevailing “hammer-and-nail” metaphor, warning that treating any new AI tool as a universal “hammer” that can nail every problem limits true innovation. She advocated a broader toolbox that incorporates diverse epistemologies, citing the African Ubuntu philosophy (“we are, therefore I am”) as an alternative to the Western Cartesian view (“I think, therefore I am”). She stressed that AI is an “empty signifier” and that engineers need interdisciplinary training to ask why a problem matters, who benefits and who loses, urging a focus on collective intelligence rather than a monolithic AGI [96-112][124-138][235-254].
Paula Goldman explained how Salesforce translates these ideas into practice. Her team continuously monitors AI agents, defines escalation points where control shifts between AI and humans, and builds inclusive, real-time accessibility features, e.g., accent-aware voice agents and on-the-fly UI corrections for users with disabilities. She argued that inclusive design is a commercial advantage, yielding products that perform better and achieve greater market uptake [140-159][220-227].
Audience Q&A
– Rajan asked “What is AI policy?” – Prof. Dignum answered that AI policy concerns the tools, skills and knowledge needed to assess AI’s impact throughout its lifecycle, not the technical design itself [263-266].
– Rita Soni raised concerns about developers in low-resource settings. Debjani Ghosh replied that democratizing AI design is essential, citing India’s “Startup India” programme and the AI Impact Commons platform (aiimpactcommons.global), which aggregates impact stories from more than 30 countries on issues such as malnutrition, pharma-related suicides and climate resilience [74-80][276-286][290-298].
After Benifei’s remarks, Maria Grazia redirected the discussion to Dr Jelassi, who reiterated UNESCO’s mission to build peace through education, culture and information. He recounted a recent visit to a remote southern African village that lacked radio or Internet; UNESCO’s provision of community radios, telecom infrastructure and early-warning systems transformed lives, illustrating how AI can serve humanity when people are truly at the centre [81-88].
Finally, Maria Grazia thanked the panelists, invited a group selfie and formally closed the session, emphasizing the need for continued multilateral dialogue and collective intelligence [89-90].
Consensus & actions – The panel agreed that (i) innovation and ethics are complementary; (ii) UNESCO’s 2021 Recommendation provides a universal set of principles that must be operationalised through lifecycle-wide ethical checkpoints; (iii) ultimate accountability resides with humans; (iv) capacity-building and interdisciplinary education are vital; and (v) global, inclusive cooperation, especially with the Global South, is essential for coherent AI governance. Proposed actions include urging member states to translate the Recommendation into national policies, expanding the AI Impact Commons, adopting an ethics-by-design lifecycle model with mandatory checkpoints and sandbox testing, creating risk-based regulatory sandboxes, investing in interdisciplinary up-skilling programmes, and fostering multilateral forums to align standards on prohibited uses and address cross-border risks such as military AI [2][38-41][55-60][124-138][191-199][40-41][65-69][141-149][191-198][220-227][290-298].
Unresolved issues highlighted were the challenge of achieving global consensus on ethical values amid cultural diversity, mechanisms for turning UNESCO’s high-level principles into enforceable regulations, systematic inclusion of developers from underserved regions, a precise definition of “AI policy” distinct from technical standards, and robust monitoring frameworks for accountability when harms occur. These gaps point to the need for further research, pilot projects and sustained dialogue [48-56][100-112][263-266][276-286].
Welcome this afternoon to this UNESCO-sponsored event. My name is Tim Curtis, I’m the Regional Director for UNESCO for South Asia, and very happy to have you all for the event today, Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI. Of course we’re grateful to the Government of India for its collaboration on this session, which we at UNESCO believe goes to the heart of our engagement with the ethics of artificial intelligence, namely how to ensure an ethical and human-centred AI deployment whilst also encouraging the development of artificial intelligence and innovation in a technology that can offer so many benefits to humanity, including and in particular to the Global South.
So it gives me great pleasure to just present today’s panellists and moderator. We have Dr. Tawfik Jelassi, who is Assistant Director-General for Communication and Information and who has really been a pivotal figure in UNESCO’s work on AI ethics. Professor Virginia Dignum, who is Director of the AI Policy Lab at Umeå University; she’s also a member of UNESCO’s AI Ethics Experts Without Borders and has been supporting UNESCO’s readiness assessment methodology in multiple countries. We’re also privileged to have Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, which is a member of UNESCO’s Business Council; she has really been leading by example in the private sector’s responsible AI ethics. And Debjani Ghosh, a distinguished fellow at NITI Aayog who needs no introduction here in India, a household name in India for her role in building and leading India’s AI ecosystem.
Thank you for coming. And finally, a great pleasure to welcome Brando Benifei, a member of the European Parliament, who will share his insights on the EU AI Act and how they have been able to navigate balancing innovation and ethics. And finally, of course, our moderator, Dr Maria Grazia, Chief of the Executive Office of UNESCO’s Social and Human Sciences sector. Please, Maria Grazia, over to you.
Hello, good afternoon. So we’ll try to make this session very dynamic, because it’s after lunch, it’s Friday, after five days, a very interesting, long week. So let me start by challenging the very title of this meeting, that is, Balancing Innovation and Ethics in the Age of AI. So I’m a microeconomist, which is a very complicated word, which looks like a rude word, but it’s not. It’s mathematics applied to economics, and especially applied to understanding the dynamics of innovation and new technologies. Why am I saying that? Because of course the question of innovation, what drives innovation, how can we get more innovation, is something that we always ask by the time you study what drives productivity growth, what drives welfare and well-being.
And then at times we also hear that having constraints or having frameworks will actually hinder these dynamics. And the position of UNESCO has been very clear. The position is: this is not true. The member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence already in 2021, which means that all countries, including India, were discussing these issues already since 2019 to get to an agreement on what it actually means and how we can put technologies at the service of humanity, and not let anything that is technologically feasible go ahead if that technological feasibility actually hurts people, hurts humanity. And so for us at UNESCO, ethics of AI means something very concrete.
It means AI technologies (and here I would like to invite you to think of technologies in the plural: it’s not one single element, it’s a lot of things) that actually abide by three simple things that too often we take for granted, whereas perhaps we want to think about them more, and these are human rights, human dignity, and fundamental freedoms. And if we are able to develop, deploy, and use technologies in a way that abides by these three components, then for sure we do have technologies that serve humanity. And why am I challenging the very topic? Because too often the narrative that is used out there puts innovation and ethics, or ethical AI, which actually means an AI that is ethical throughout the life cycle, as trade-offs.
So if we innovate, it cannot be ethical, because by the time it’s gone out, we don’t have the time to check on these things. Well, think of a parallel, and then we take it from there on the concrete dynamics of AI. If you were to think about one sector that is very much regulated, perhaps what comes to mind is pharma, pharmaceuticals. Now, to my knowledge, but that can be my ignorance, I have never seen one single study able to prove that the regulation in that sector has actually hindered the innovativeness, or actually the productivity, or even the remuneration of the sector. So by the same token, the pervasiveness of AI to some extent leads us to think of the pervasiveness of the paracetamol, for instance, that we use every day by the time we have an ache, like I think some of you this afternoon might have, and after listening to me, perhaps even more.
But, you know, it’s really the pervasiveness of technology that touches our life, each and every day, in many ways. And this is what I think is important to discuss from different perspectives. And allow me to start with my ADG, ADG Jelassi. As I mentioned, from UNESCO we give this global perspective, because the recommendation was adopted by 193 member states. Now, very often, what is very challenging is to go from principles to practice. That is, sometimes we know what we need to do, but then the question becomes, how do we translate it into practice? So, ADG Jelassi, where do you see the biggest gaps that exist between going from principles and what instead is happening on the ground?
Thank you, Maria Grazia. Maybe before I briefly answer your question, let me say that you used the words innovation and ethics. I don’t see personally an issue, a contradiction, between the two; I see it more between innovation and regulation, because, say, to be creative, innovative, you should free up the mind of the people, you should not constrain them, you should not tie their hands. I used to be chair of a telecom operator board, and there, of course, with telecom and mobile phones and access to private data of consumers, the issue of regulation is paramount, but we don’t want regulation that hinders innovation. So I don’t see ethics and innovation being in contradiction; to the contrary, I think they reinforce each other. How is that?
Because clearly, if you integrate ethical reflection in the design of AI systems, AI systems will be more respected, more trustworthy, more used, and therefore more broadly deployed across society. So I see ethics and innovation really reinforcing each other, and quite often at UNESCO we say AI systems have to be ethical by design. It should be done ex ante, not ex post, not when we see mistakes and hazards and risks and harmful impact of AI and we say, wait a minute, let’s go back to see what went wrong in those models, in the data sets, are there some biases, etc. So I think it has to be done from the very early stage, and therefore innovation has to be human-centric and has to be contextualized. There is no one size fits all, we know that. What you can provide is an overarching framework, so it’s a broad set of guidelines and principles, as you said, Maria Grazia, and this is what the UNESCO Recommendation on the Ethics of AI is about. You know that this recommendation has been so far the only global recommendation of its kind.
It was adopted back in 2021 by the 193 member states of UNESCO, and it calls for human oversight, non-discrimination, respect for cultural diversity, and respect for environmental sustainability. These are the principles that need to be translated into action and that need to be operationalized within a certain context.
Thank you very much, ADG Jelassi. Let’s actually go to Debjani, because I would like to go further into this operationalization question. So, from your work at NITI Aayog, and also your experience with NASSCOM, what are the mechanisms that can really help embed the ethical reflection into the everyday life of both companies and sectors?
Thank you. Thank you for having me here. So, first of all, I’ll just go back to the topic, if I may, for a second, because I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is: do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse. So I think the choice is that. And therefore, the biggest challenge we have, and I hate applying the label of ethics to technology, is: can we, with all the wisdom in this room, say that we will be successful in aligning every single human on this planet to the same ethical values?
The answer is no. We’re not going to be able to do that, and we know we’re not going to be able to do that. So as long as we humans don’t align to the same ethical values, you will always have good actors and you will always have bad actors, and you know that technology is going to be used in ways that are non-ethical. So the accountability, you’ve talked about humanity in the loop, the accountability comes back to us. I think it’s very important to understand that, because in all our dialogues on technology, we somehow delegate the accountability to technology. I don’t think we can as yet. Maybe in another 10 years, when cognitive reasoning becomes a thing, maybe then, but not as yet, because as somebody who actually builds code and builds agents, I know they’re not that intelligent as yet.
So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking about how industry ensures this: one of the things I’m very clear about is that regulation is usually an afterthought. You develop the technology and then you say, okay, how do we now regulate it to ensure that it’s used right? And I think that has to fundamentally change. Oversight has to be built into the entire development process, from design to commercialization. And it has to be built with the right flag-offs at every part of the design and development process. If you do that, and you’re able to, you know, red-tape the product that you are developing at every single stage to certain standards that have been developed, and hopefully after the entire development phase there’s also a sandbox where you test out the impact.
You will get to a stage where ethics becomes by design versus an afterthought. And I think that’s what we have to move towards.
Thank you. I’d like to change the order of the speakers a bit, because you brought in the argument of the regulators, and you have one next to you that I’m going to refer to. How do you see this relationship? Because we know fundamentally the regulation that has been pushed in Europe is risk-based. So what was the logic, and how does this relate to what she was discussing, the human oversight or even the redress mechanisms that we might want to put in place in order to have AI that is ethical?
Well, first of all, excuse me for the voice, but that’s it. Exactly, but thanks to technology, you can hear me anyway. So I can also adhere to the point that innovation and ethics are not one against the other. In fact, this summit, concentrating on impact, on action, on diffusion, is not separate from keeping track of reflection, of safety, of how to protect human rights, how to make AI human-centric; the things are intertwined. The point is how we regulate effectively and how we find a good balance. But I want to bring maybe a controversial point to the table, because I have a strong conviction on this. We have chosen globally, including in Europe, which has often been at the forefront of regulating (in one of those rooms now, I was on another panel with Anu Bradford, professor at Columbia University, who has written the book The Brussels Effect; in fact, the EU has often opened the way for many regulatory pathways), I mean, even Europe has chosen, when looking at social media, to actually not regulate. We have let social media diffuse without regulation, and today we are discussing limits for minors, we heard about that also in the inaugural session; we are discussing misinformation and the labelling of deepfakes, even Prime Minister Modi talked about that in the inaugural session. But we are doing it all now, after a lot of things have happened, and my point, that’s my opinion, is that we already have unmodifiable consequences. So I think that when we talk about when we should regulate, the question is whether we should regulate or let the innovation flow and act only ex post.
Sometimes we might be wrong and risk unchangeable effects. So we need to build a balance that doesn’t hinder innovation, but also identifies human rights challenges. The AI Act tried to build a risk-based approach, identifying areas where we need AI to be overseen: workforce use of AI, healthcare use of AI, administration-of-justice use of AI. We want to be sure, when we deal with these, that the data used for training is quality data, that cybersecurity is sufficient, that the governance of the data is solid, and that there is human control. These are examples of what we have identified. And in fact, we even chose to prohibit a few use cases, for example predictive policing, emotional recognition in workplaces and in study places, and manipulative subliminal techniques.
I don’t think it’s a taboo to choose that some use cases of AI, we don’t want them in our society, and we just keep them out. So with this approach based on risk, you can look at whether you like it this way, or whether you want to modify it, but it’s an interesting perspective, because you can choose what you think is in need of a certain regulation, and you can also promote transparency, which I think is crucial to build trust. Without trust, especially in democratic contexts, it’s impossible to accelerate adoption of AI, which is still a big challenge for both the global north and the global south. The numbers tell us that a lot of companies or public administrations that could benefit from an ethical and correct use of AI are not using it, because they don’t know what could
You put forward a very important point, Brando: perhaps we might not be able, or we might not want, to decide what the technology should do for us. But for sure we might want to discuss and agree on what we do not want the technology to do for us, because these are unacceptable uses of deployment. And in this case, this also highlights the importance of awareness, of the centrality of people, of having this human-centered approach. And here I would like to invite Virginia into the conversation, because of course you, as an educator, as part of this beautiful world of educators, as a professor, have this constant contact and the ability to interact with and nurture humankind.
So what do we have to do to avoid people being just consumers or, you know, possibly exposed to the technology, instead of steering the technology to where we want to go?
Sure. Thank you very much. Thank you for inviting me to be here. Again, like all my previous colleagues, I want to go back to the title. And I’m not going to talk about the balancing part. I’m just going to claim, to be controversial and to wake everybody up, that we are doing both the innovation side and the ethics and regulation side all wrong. We are doing it not in the way that it needs to be done. On the innovation side, we are doing it wrong because we are somehow understanding innovation as the capacity of using this hammer that we found a couple of years ago, of Gen AI or whatever. And now we want to use the hammer to nail any nail that we find.
Innovation is much more than that. Innovation is really challenging ourselves to go further. And I want to go back to a sentence that has stayed with me and is the main thing I’m taking from this summit. A couple of sessions ago, where I spoke, someone said: most people developing AI have never experienced power cuts, never experienced broken roads. I would like to go further. AI, and I have been working in AI for 40 years, through all the different types of AI that existed before, has been developed overwhelmingly within the Western, Cartesian tradition: I think, therefore I am. First, it is individualistic, and second, it equates intelligence with cognition. Human intelligence is much more than cognition.
If you were to think about AI developed, for instance, in the African Ubuntu tradition, which says “we are, therefore I am”, it would be a completely different type of AI. So we do need to challenge ourselves not to go around with this hammer that is already there, trying to find nails and calling that innovation. It is not innovation. It’s just running around like headless chickens seeing if one of those hammers works. So that’s one. On the side of ethics and regulation, there are two assumptions that usually come with this kind of framing: that ethics is this finger that points, thou shalt behave, thou shalt be good, and that regulation is about prohibiting you from doing things.
Neither is ethics that finger, nor is regulation necessarily only about prohibitions. Moreover, regulation, like AI, like the hammer, like the telephone, is an artifact that we built. And we can apply to regulation, and to the application of ethics, exactly the same principles we apply to technology: let’s experiment, let’s try, let’s verify, let’s evaluate, let’s see what’s there, and not have this idea of the pointing finger, or of laws written in stone which stay there once and forever. So that’s going back to the title. And now, very quickly, on your question, because I don’t want to take much time: I think education needs to start exactly at this point. Technology alone is not enough. We really need to improve the education of engineers, computer scientists and data scientists on the humanities side. As engineers, we know very well how to solve a problem; we never ask ourselves why this is a problem, who has this problem, what the alternatives to my solution are, who gains, who loses, what is gained, what is lost. That is the humanities. We need to somehow bring that together, both in the engineering case and in the humanities and social sciences case.
We need them, because I’m an engineer, to help us understand that we need to be much more precise about what we are talking about. AI at this moment is actually an empty signifier. It doesn’t mean anything. Everything is AI and nothing is AI. The applications are AI, the sectors are AI, the technology is AI, the research, everything is AI. And we cannot just go around with this word, which actually means magic. In most politicians’ talks, it means magic. And we want to regulate magic? Okay, good luck. So we need the humanities and the social sciences to really help us be precise about what we are doing. That is the education we need.
Fantastic. You couldn’t have made it much easier for me to ask Paula: how are we doing that in companies? Because it is very easy to say we need to translate the principles and values into concrete models, but they actually have to work: work for a company, work to deliver results, and work for people.
Yes, indeed. Well, first of all, thank you for that. We were just talking about how this is our last speaking panel of the week, and that was a fiery way of drawing things together; I really appreciate it, kind of an energy boost. So yes, I think the answer is actually much more practical and much less abstract than one might imagine, and I’ll just tell you a little bit about my experience. I spend my days at Salesforce both testing our products and making sure that our AI has features baked into it so that our customers can observe what’s going on, know how to tweak the controls, and understand, for example, when they should set an AI agent to escalate to a human, or a human to escalate back to AI, and so on.
And when we do this, it’s not that we at Salesforce think we have all the answers, because clearly we don’t, and we serve a variety of industries all over the world. But all of our customers are basically asking the same questions, right? They’re asking: how do I know what kind of results I’m getting? How can I tell if something goes wrong? What are my options if something goes wrong? What part of AI ethics is your responsibility and what part is mine? And these questions don’t necessarily have the most mature answers, because we’re in the early innings of AI agents and there’s a lot more work to do. But these are the right questions to be asking, and they also allow for some flexibility and some cultural or industry specificity, so people can find the right answers for themselves.
So that would be part one of my answer: it’s actually very, very practical. To adopt AI, companies and organizations need to be able to trust that it’s going to work. They don’t want to be embarrassed by it, and they’re not going to be able to scale it if it doesn’t work. So that’s number one. The second thing we’re increasingly finding when we work with companies is that the companies most successful at scaling AI put people at the center of the transformation. They don’t work just top-down, as in “you shall use this application”; they give people a chance to have a voice about what is actually working.
What is actually most useful to them in their day-to-day work? Where is AI actually going to help them, and where is it useless? It’s that kind of understanding of how work actually gets done, and of which processes are going to benefit from that kind of application, that I think is really important and allows people to stay at the center of this large-scale transformation that we’re part of.
…that might happen, or should happen, in the context of making AI ethical by design?
Well, in my current role at NITI Aayog, the think tank of the Government of India, we’re looking at the unlocks for technology, including AI, to ensure that we can use technology to solve some of the biggest problems. Now, what Professor Virginia said about AI as a hammer, I think that’s a luxury of the developed countries, and I do agree with you when it comes to developed countries. But in developing countries, where you don’t have a lot of resources, you cannot afford technology that takes deep investment to do things you’re not sure about, where you’re not sure of the ROI. One of the examples I want to give is that, as part of this summit, seven working groups were set up, looking at different problems.
I chaired one of the working groups, on economic development and social good, which was all about impact and how you scale impact, and we had around 50 countries participating. One of the things that came out of that working group, and one of the outcomes of this summit, is the creation of the AI Impact Commons globally. It’s online; you can look it up at aiimpactcommons.global. It has impact stories from more than 30 countries, and counting, growing every day, with learnings on what kinds of problems can be solved and how you scale the solutions. And the reason I said it’s a luxury of developed countries is that when you look at those impact stories, most of them are from developing countries, and you’ll be amazed by the kinds of problems they’re solving, from malnutrition to farmer suicides: how do you lower farmer suicides by using technology to improve yield?
…and ensure that they don’t suffer from climate change and shocks. I mean, the problems are so inspiring. So I think it wouldn’t be fair to say that we don’t know what problems we are solving today, and I will absolutely stand by that. And, going back to what Paula said, I’m not sure industry today is really putting the human at the center of the loop, but they need to. They absolutely need to. Because as we develop the technology, the end goal of AI right now, the one all the big companies are talking about, seems to be AGI. And when you look at what AGI means, it’s about control.
Why do we want to build something to control everyone? Why don’t we want to build something that is going to augment lives? If we could change that narrative, then I would say, yes, humans are at the center. Right now, I think we still have a lot of work to do to bring humans back into the center of the loop. And it’s something we, and industry, have to realize: that is the only way you can build sustainable businesses, and that’s how you build your staying power. So it’s going to be very important to do.
Absolutely. And it’s about having these different entities around the table, but also having different governments, in this multilateral setting, talk to each other about regulation, or about policy more generally, because at the end of the day we talk a lot about regulation, but regulations are only part of the policy framework one could put in place. So let’s go to Brando, because I could see him calling me with his eyes while we were talking, and I’m sure he wants to add something on the multilateral setting. Please, over to you, Brando. Perhaps you were not calling me, but you’ve been called on nevertheless.
Well, I think it’s very important that we use occasions like this summit to advance a global cooperation framework. And for sure it’s also part of UNESCO’s mission to unite different cultures and approaches to what we are talking about; you explained the organization’s longstanding work earlier. But I think we need to face the reality that there are issues where global cooperation will be crucial and is still not sufficient. Think of the military use of AI, or the existential risks of losing control of very powerful AI models. This is part of a controversial debate, we might say, but I wouldn’t dismiss the renowned scientists who maintain that we are
in a context where the lack of globally adopted rules is putting us in very significant danger. And this is also part of the idea of balancing innovation and ethics. For sure we need domestic rules to foster the best opportunities out of the various use cases of AI. These days I have met many companies working on very practical, extremely useful AI use cases to ameliorate our lives, to advance societal good. But this cannot be left to the judgment of private-sector companies alone, which have a specific objective: profit for their owners or shareholders. It’s not societal good; they might want to add that on top, but it’s not their objective, and that’s natural. So we need frameworks in place governing our daily interaction with AI, and we need to build common standards: the more broadly adopted the standards we have globally, the better the results will be. But we also need a step further, which is global cooperation on those issues where we cannot actually do very much domestically, because they are global issues. And I think that, with increased geopolitical tension, the use of AI for peace will soon be quite an important topic on which the international community has to find a way to take quick steps forward. I hope our leaders will deal with that.
I couldn’t agree more with the need to coordinate and have an approach that is global. And allow me the prerogative of the moderator to call on my ADG, Tawfik; I will take the consequences of that. What I would like to ask you is: what does it mean to have people at the center? And let’s remember that in your case, given the work you lead in the Communication and Information Sector, there is the role of information, which Virginia was hinting at before in terms of awareness. Could you please share some of those insights?
Thank you, Maria Grazia. Let me pick it up where Brando left off; he said AI for peace. Maybe some in the room know why UNESCO was created back in 1945, 80 years ago almost to the day. The mission of UNESCO was, and has been, to build peace in the minds of men and women. How? Through education, culture, the sciences, communication and information. Everything happens in the mindset of the people. Today, of course, we want AI to be a force for good, but it could also be a force for hazards, for harm, for risk. I tend to say technology is neutral; it depends what humans make of it. It could be a force for good, or a force for, as you mentioned, wars or unwanted things. So yes, humanity in the loop, that’s fundamental. I always ask myself, and I tell my team at UNESCO: if whatever we do in the field transforms lives, then we are spot on. If you can make the beneficiaries of our educational programs more successful through what you offer them, then that’s impact.
Where is the impact? AI can transform lives, yes, and you mentioned some examples. It can help cure cancer, as you said, provide food for people in need, and so on and so forth. We want that type of AI. And AI does not stand only for artificial intelligence; AI stands for all-inclusive as well. So if you take that perspective, if you really put humanity in the loop, at the center, not only in the loop but at the center. And allow me one minute to share something with you. I have been at UNESCO for five years, and my most memorable day happened last week, in a tiny village in remote southern Africa. A village in which people had no access to radio, no TV, no mobile telephony, no internet, nothing.
They always felt they were second-class citizens in their country. Imagine that you don’t have access to information: you don’t know what’s happening around you; you cannot call your relatives living in other cities. This was the case for 15 small communities. What UNESCO did, first, was provide community radios and set up a tower with transmission equipment, so that through the radio people have information and know what’s happening. And when we did that, telecom operators came in to plug in their equipment to provide mobile telephony, then came internet connectivity, and then UNESCO put early warning systems in place, because these areas were very prone to flooding, and whenever that happened it wiped out the cattle, the livelihood of the people, and so on. That’s transforming people’s lives, and AI can contribute to that in a huge way. I think if we put that at the center, then of course it has to be ethical, it has to be human-centred, it has to be accountable and transparent, all the principles we talked about. And then comes the issue of…
advocacy and capacity development, because more informed policymakers will go down this route. But if we don’t raise awareness, if we don’t do the advocacy, the capacity building and the training, then of course we will see some companies or some people going for the buck, for the profit out of this technology, not the social benefit, not transforming lives.
Thanks very much. Paula, over to you, since the company gets the last word here: how do you see this point about including the other stakeholders in what you do, and how can that transform and help you deliver AI that gets adopted?
Well, thank you for saying that, and I actually think it is becoming more and more obvious that this is the only way to scale the technology. Just think about it: if you’re developing a technology that’s meant to serve many different markets and many different populations, you need to know that it works for them. For example, we have a voice capability in our AI agent. We need to know that that voice capability, even if we’re just talking about English, forget about other languages for a second, works across different vernaculars of English, different accents, etc. I work a lot on product accessibility, right?
It needs to understand a deaf accent, for example. So the most inclusively designed technology is going to be the most successful one; it’s going to increase accuracy rates and so on. To that end, I also think this is a very, very exciting time to use AI for inclusion. I mentioned product accessibility: one of the things that is most hopeful and exciting to me about this moment is that we’re starting to see AI agents that correct things in real time. My team is working on this at Salesforce: correcting, in real time, code that is not accessible, or correcting in real time via a browser extension, so that if you’re on your phone and something comes up, say a common problem where you try to zoom in or out and the page breaks, it will fix it in real time. This kind of technology is the difference between someone being able to use software to actually get their job done and someone being excluded from getting their job done. So, again, the point I’m trying to make is that the most inclusively designed technology is going to be the most commercially successful, and this is an incredibly exciting time to be doing this work.
I’m really happy to hear from the voice of industry that those who include are not doing a favor to those who get included; rather, the AI systems themselves become superior. And that counters another common legend out there which says, you know, it’s costly, and perhaps the profit is not there. What we are hearing from the companies themselves is: no, because it’s a superior product, a better product, it performs better. Last but not least, back to our Virginia. Here I would especially like to hear from you about the role of a specific component of human capital, that is, skills.
We have heard throughout this week about the importance of upskilling and reskilling. Is that really the solution?
Thank you very much. Firstly, going back: if I gave the impression that hammers are not useful, that’s not the case; there are many useful hammers. My point is rather that we need a toolbox, not only hammers, and even outside the Western world we are too focused on hammers. On skills: yes, we really need to focus on skills, on our own capabilities, on our lived experience, and so on. Someone talked about AGI, and indeed, at this moment, the AGI concept is about power, about providing power to those companies that claim they will build it. How are they building it? With what I call the Play-Doh approach: they are putting together all the data of the world with all the capacities of the world, creating a huge ball of Play-Doh. Anyone who has played with Play-Doh knows that after you play, there is no color, there is no shape, there is nothing anymore.
It’s just a thing. And then, of course, that thing might do something, but no one knows what’s inside, what came in, what came out, and so on. We need to go much broader in understanding what this AGI is. What does AGI fundamentally mean? A system that is more intelligent than us, that can solve problems we cannot. We already have AGI; we always had it. It’s called collective intelligence. The moment we work together, we can do more than each one of us alone. We should be using the AI technology we are developing to support this collaboration, to develop different skills, to integrate all our differences: our different experiences, capabilities and abilities, and the different tools we have developed.
Then we get a much broader bouquet: no longer a ball of Play-Doh without color, but a huge bouquet of flowers of all those colors. So we cannot let the big companies run away with the concept of AGI through the idea that they are going to create a god which is going to solve our problems. AGI is about us. It’s about putting all of us together, because our collective intelligence is, at the end of the day, really what is going to solve, or support us in solving, our problems. Just one more thing, and I think it’s also part of the skills discussion: technology, and there I disagree with you, is not neutral.
All technology embeds and encompasses our choices, our options, our data; all of that is part of it. We have to understand technology as a non-neutral artifact, take those capabilities, and embrace the different perspectives and the different colors of this. But again, all together: the only way forward is not giving up and hoping that AI is going to solve whatever complex problems we have; it is really embracing and reinforcing collective intelligence. That is AGI.
Excellent. Collective intelligence. Now we are going to have a collective set of questions, just a couple, because time doesn’t allow for more. So please, when you intervene, be absolutely short: say your name, say whom you want to ask the question to, and the question, without recounting the history of humankind before getting to it. I spotted a hand first over there, and there was a lady on this side; now I think she got shy and put it down. So let’s start with that gentleman. No, it’s the gentleman behind you, I’m sorry. The mic is there; I can do everything, from moderating to handing you the mic, we are proactive and problem-solving. Let’s go. Your name is?
Hello everyone, I am Rajan, from Business Club TV, and I am the CEO and founder of a startup. I have a very basic question for Professor Virginia Dignam: what is AI policy?
Wow, okay, how many hours do we have? Very shortly: AI policy is about the tools, the capabilities, the skills, the information and the knowledge to understand and address the impact of AI. Not the technology itself, not the design of the technology, but really addressing the impact of this technology across the whole development loop: from the beginning, asking ourselves why we are using AI and whether this is the best problem to solve, to the way we develop it, to the way we evaluate it, and addressing its impact.
No, I’m sorry, because we have to be inclusive and let others speak as well. Please, that lady, yes, exactly, the one with her hand raised. Just down here, three rows ahead. I’m going to be gender-equal, so one-on-one; I’m not going to have only the men speak, because you are typically the fastest to raise your hands. We women are sharper. Go ahead.
I love that. Thank you. Hi, my name is Rita Soni. I don’t know who should answer this question, but at the beginning of this panel I heard someone say that those who are developing and designing AI have probably never experienced a power cut or potholes in the road. I thought there would be more discussion about who is actually involved as the humans in the loop. Debjani, you know me, so I have to ask this question about the people who are actually developing it, and whether we’re thinking about employing them responsibly. Right now, we know there are over half a million such people in the world, and so I’m going to ask you to think about them,
the people we consider impact workers. They’ve typically been excluded, but now they are included. So how do we support this as a movement, getting those who have experienced power cuts to help design and develop the technology? This is a development-related question.
Who wants to tackle it? Because we are over time. That’s the last question, and then we will have to say thank you and continue the conversation on the side.
Yeah, fully. I mean, if you’re asking whether developers have suffered power cuts while developing the technology, anyone working out of Bangalore or any Indian city, yes, they have. They’ve definitely suffered during development. Now, Rita, I think the point you were making is: how do we make it more inclusive? How do we bring people in? And that goes back to the perennial question of how you democratize not just access to technology, but also the design and creation of technology. And it’s not just about gender; it’s also about how you diffuse it down to smaller cities, to the people who are actually facing the problems there.
And I think in India, at least, we are doing that through initiatives like Startup India, which today is focused on building capabilities in Tier 2 and Tier 3 cities, not just users, not just adoption, but actual design and development. So there’s a lot of focus there, and I’m sure there are founders here who have come from the smallest cities in India. The best part is that when we track the numbers, the growth of startups and founders is higher in Tier 2, Tier 3 and Tier 4 cities than in Tier 1 cities. So that tells us we’re doing something right.
I hope you have enjoyed this panel at least half as much as I have. Please join me in thanking the panelists. And we’re going to do a group photo, so please stand up; we’re going to take a selfie with all of you in the back. Come here, stand like this, so we’re all together. This is our collective intelligence. Thank you very much.
Event“Dr Tawfik Jelassi is the Assistant Director‑General for Communication and Information at UNESCO”
The knowledge base lists Tawfik Jelassi as UNESCO’s Assistant Director General for Communication and Information [S119].
“Dr Maria Grazia is from UNESCO’s Social and Human Sciences sector”
UNESCO’s records identify Dr Mariagrazia Squicciarini (also referred to as Dr Maria Grazia) as the CEO of the Social and Human Sciences sector [S22].
“UNESCO aims to promote ethical, human‑centred AI while supporting innovation, especially in the Global South”
UNESCO’s three-pronged approach – fostering AI opportunities, mitigating risks, and addressing harms – reflects this dual focus on ethical, human-centred AI and innovation, with particular attention to the Global South [S84].
“Regulation does not necessarily hinder innovation; efficient ethical regulation can guide innovation toward benefiting humanity”
UNESCO emphasizes that innovation and regulation are not contradictory and that well-designed ethical regulations should steer innovation positively [S46].
“The UNESCO Recommendation is built on three non‑negotiable pillars: human rights, human dignity and fundamental freedoms”
The recommendation’s principles are rooted in human rights and also highlight inclusivity, sustainability, transparency and explainability, providing a broader set of values beyond the three pillars mentioned [S127].
The panel displayed a high degree of consensus across multiple dimensions: (i) innovation and ethics are compatible; (ii) ethical principles must be operationalised throughout the AI lifecycle; (iii) human accountability is paramount; (iv) capacity building and education are essential; (v) global, inclusive cooperation is required, especially involving the Global South.
Strong consensus – most speakers reiterated overlapping arguments, indicating a shared understanding that ethical, human‑centred AI can coexist with innovation when supported by concrete practices, capacity development and multilateral frameworks. This consensus provides a solid foundation for coordinated policy actions and collaborative initiatives in AI governance.
The panel exhibits broad consensus that ethical, human‑centred AI is essential and that innovation should not be sacrificed. However, substantial disagreement persists on how to operationalise ethics – whether through early design integration, risk‑based regulation, or broader cultural re‑thinking – and on the appropriate scope of regulation, with some advocating strong, pre‑emptive bans and others warning against over‑regulation. These divergences reflect differing institutional lenses (UNESCO policy, EU law, corporate practice, academic critique) and suggest that achieving coordinated global governance will require reconciling ex‑ante design mandates with flexible, context‑sensitive regulatory models.
The level of disagreement is moderate to high. While the overarching goal of trustworthy, human‑centred AI is shared, the lack of alignment on timing, mechanisms, and the philosophical framing of innovation and regulation could impede the formulation of cohesive policies and slow the translation of ethical principles into practice.
The discussion began with a theoretical framing of ‘balancing innovation and ethics.’ Dr. Squicciarini’s challenge to this framing acted as a catalyst, prompting speakers to reconceptualise the relationship as synergistic rather than antagonistic. Dr. Jelassi’s ‘ethics‑by‑design’ stance, Ghosh’s focus on societal outcomes, Benifei’s concrete EU policy example, and Dignam’s cultural‑pluralism critique each introduced new dimensions—operational mechanisms, global governance, and epistemic diversity—that redirected the conversation from abstract principles to actionable pathways. Goldman’s industry‑level illustration that inclusive design drives commercial success reinforced the emerging consensus that ethics fuels innovation. Audience input from Rita highlighted the need for inclusive developer participation, closing the loop on the panel’s theme of ‘humanity in the loop.’ Collectively, these pivotal comments shifted the tone from a high‑level debate to a concrete, multi‑stakeholder roadmap, underscoring that ethical AI is achievable through early‑stage design, inclusive education, targeted regulation, and global cooperation.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.