Ethical AI: Keeping Humanity in the Loop While Innovating

20 Feb 2026 14:00h - 15:00h

Session at a glance: summary, keypoints, and speakers overview

Summary

The UNESCO-sponsored panel “Humanity in the Loop” examined how to balance AI innovation with ethical safeguards, emphasizing a human-centred approach to technology deployment [1][2]. UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states, frames ethical AI around human rights, dignity and fundamental freedoms and calls for these principles to be operationalised in practice [21][23][40-41]. Dr. Tawfik Jelassi argued that ethics and innovation are not contradictory but mutually reinforcing, insisting that ethical reflection must be built into AI design from the outset (“ethical by design”) and that the UNESCO recommendation provides a global framework for this [38-40][41]. Debjani Ghosh stressed that the real choice is whether technology serves humanity’s basic needs or fuels conflict, and that accountability ultimately rests with humans; she advocated embedding oversight throughout the development lifecycle and using “sandbox” testing to make ethics an integral part rather than an afterthought [49-56][65-68].


Brando Benifei described the EU AI Act’s risk-based approach, noting prohibited uses such as predictive policing and emotion recognition, and argued that regulation must protect human rights without stifling innovation, while also highlighting the need for global cooperation on issues like military AI [78-84][80-86][191-198]. Virginia Dignum critiqued the narrow “hammer-and-nail” view of innovation, calling for broader, culturally diverse conceptions of AI (e.g., Ubuntu) and for education that equips engineers with social-science perspectives so that AI is not treated as a magical, neutral tool [100-112][124-138]. Paula Goldman shared Salesforce’s practical experience, explaining that embedding ethical controls, real-time accessibility features, and human-in-the-loop escalation mechanisms not only improves inclusivity but also yields superior, more marketable products [140-158][155-159].


The discussion repeatedly highlighted the importance of awareness, capacity-building and multilateral dialogue, with Dr. Jelassi recalling UNESCO’s grassroots projects that used communication tools to empower remote communities, illustrating how AI can be a force for good when coupled with education and advocacy [204-214]. Participants agreed that translating high-level principles into concrete, context-specific mechanisms (through regulation, industry practice, and education) is essential for trustworthy AI deployment [34][65-68][155-159][124-138]. Maria Grazia emphasized that the “human-centred” approach requires not only technical solutions but also deliberate policy instruments and stakeholder participation to define unacceptable uses of AI [88-91][186-188]. The audience raised concerns about involving developers from under-served regions, prompting Debjani Ghosh to note India’s initiatives such as Startup India that aim to democratise AI design beyond major urban centres [290-298]. Overall, the panel concluded that a coordinated global framework, inclusive design, and continuous human oversight are necessary to ensure AI advances societal welfare while mitigating risks [191-198][226-230][236-242].


Keypoints


Major discussion points


UNESCO’s core position that ethics and innovation are complementary, not opposing forces.


The moderator stresses that “the position of UNESCO… is this is not true” that regulation hinders innovation, and outlines the three pillars (human rights, dignity, and fundamental freedoms) that must guide AI ([20-24]). Dr. Jelassi reinforces this by stating that “ethics and innovation… reinforce each other” and that AI must be “ethical by design, ex-ante” ([38-41]).


The need for a risk-based regulatory framework to balance innovation with safeguards.


Brando Benifei explains the EU AI Act’s risk-based approach, naming specific high-risk sectors and prohibited uses (e.g., predictive policing, emotion recognition) and argues that regulation must be proactive rather than purely ex-post ([74-84]). Maria Grazia’s follow-up highlights the importance of defining “what we do not want the technology to do” as a regulatory baseline ([88-90]).


Embedding ethical oversight throughout the AI development lifecycle.


Debjani Ghosh argues that oversight must be built “into the entire development process from design to commercialization” with “flag-offs at every part” and sandbox testing, turning ethics into a design principle rather than an afterthought ([65-69]).


Broadening the conceptual and cultural foundations of AI through education and collective intelligence.


Virginia Dignum critiques the “hammer” metaphor and Western-centric, individualistic AI traditions, calling for diverse epistemologies (e.g., African Ubuntu) and a “toolbox” of skills and perspectives ([106-112][124-138]). She later expands this to collective intelligence as the true “AGI,” emphasizing the non-neutrality of technology and the need for interdisciplinary skill-building ([235-254]).


Practical industry steps toward inclusive, trustworthy AI.


Paula Goldman describes concrete practices at Salesforce: real-time monitoring, escalation protocols, and designing for accessibility (e.g., handling different accents and disabilities), arguing that inclusive design yields superior, more marketable products ([140-159][220-227]).


Overall purpose / goal of the discussion


The UNESCO-sponsored panel “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI” was convened to explore how AI can be advanced responsibly. Participants from UNESCO, government, academia, and industry shared perspectives on translating UNESCO’s global AI ethics recommendation into actionable policies, regulatory models, educational curricula, and corporate practices that keep humans at the centre of AI development and deployment.


Overall tone and its evolution


Opening tone: Formal and optimistic, with the moderator framing the debate as a constructive challenge to the “innovation vs. ethics” narrative ([13-18]).


Mid-session tone: Becomes more critical and reflective; Virginia Dignum delivers a “controversial” critique of current innovation paradigms ([96-112]), and Debjani Ghosh stresses the difficulty of universal ethical alignment ([51-56]).


Later tone: Shifts toward collaborative problem-solving, highlighting concrete regulatory proposals (EU AI Act) and practical industry measures (Salesforce’s inclusive design) ([74-84][140-159]).


Closing tone: Hopeful and inclusive, emphasizing collective intelligence, global cooperation, and the need to democratize both access and design of AI ([191-199][235-254]).


Overall, the discussion moves from high-level framing, through critical analysis of gaps, to concrete solutions and a unifying call for global, multidisciplinary cooperation.


Speakers

Tim Curtis


– Role/Title: Regional Director for UNESCO South Asia


– Area of Expertise: UNESCO regional leadership, AI policy and innovation


Debjani Ghosh


– Role/Title: Distinguished Fellow, NITI Aayog; member of the ETIO think-tank for the Government of India


– Area of Expertise: AI ecosystem development, policy formulation, economic and social development initiatives [S4][S5]


Dr. Tawfik Jelassi


– Role/Title: Assistant Director General for Communication and Information, UNESCO


– Area of Expertise: Communication, information & knowledge societies; AI ethics and governance [S6][S7][S8]


Brando Benifei


– Role/Title: Member of the European Parliament


– Area of Expertise: EU AI Act, risk-based AI regulation, international AI policy coordination [S9][S10]


Paula Goldman


– Role/Title: Chief Ethical and Humane Use Officer, Salesforce


– Area of Expertise: Ethical AI implementation in industry, responsible AI product design


Virginia Dignum


– Role/Title: Professor and Director of the AI Policy Lab, Umeå University; member of UNESCO’s AI Ethics Experts Without Borders


– Area of Expertise: AI policy, AI ethics, interdisciplinary education and research [S15]


Rita Soni


– Role/Title: Audience participant (no formal title provided)


– Area of Expertise: (not specified)


Maria Grazia


– Role/Title: Chief of the Executive Office of UNESCO’s Social and Human Sciences sector; Moderator of the panel


– Area of Expertise: Microeconomics, innovation dynamics, AI governance and ethics [S20]


Audience


– Role/Title: General audience members (including individuals such as “Rajan”)


– Area of Expertise: (not specified)


Additional speakers:


(None – all speakers appearing in the transcript are covered in the list above.)


Full session report: comprehensive analysis and detailed insights

The session opened with Tim Curtis, UNESCO’s Regional Director for South Asia, welcoming participants to the UNESCO-sponsored panel “Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI” and thanking the Government of India for its collaboration. He outlined UNESCO’s aim to promote ethical, human-centred AI while supporting innovation, especially in the Global South [1-3].


Curtis introduced the panellists: Dr Tawfik Jelassi, Assistant Director-General for Communication and Information and a leading figure in UNESCO’s AI-ethics work; Professor Virginia Dignum, director of the AI Policy Lab at Umeå University and member of UNESCO’s AI Ethics Experts Without Borders; Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce and a member of UNESCO’s Business Council; Debjani Ghosh, distinguished fellow at NITI Aayog and an architect of India’s AI ecosystem; and Brando Benifei, Member of the European Parliament, who would discuss the EU AI Act. The moderator was Dr Maria Grazia from UNESCO’s Social and Human Sciences sector [4-10].


Maria Grazia opened by questioning the premise of the title, arguing that innovation and ethics need not be opposed. Drawing on her microeconomics background, she linked innovation to productivity, welfare and well-being and noted that regulation does not necessarily hinder these dynamics. She reminded the audience of the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states and, to date, the only global recommendation of its kind, built on three non-negotiable pillars: human rights, human dignity and fundamental freedoms [12-25][26-33][34-38].


Dr Jelassi responded that there is no contradiction between ethics and innovation; the real tension lies between innovation and over-regulation. He argued that embedding ethical reflection from the design stage makes AI systems more trustworthy and therefore more innovative, emphasizing UNESCO’s “ethical-by-design” principle and the Recommendation’s calls for human oversight, non-discrimination, cultural respect and environmental sustainability [38-41][42-45].


Debjani Ghosh then explored how the high-level principles can be operationalised. She reframed the debate as a choice between using AI to eradicate suffering (e.g., disease, food insecurity, loss of dignity) or to amplify conflict. Acknowledging that universal ethical alignment is impossible, she insisted that accountability must remain with people, not algorithms, and advocated lifecycle “flag-off” checkpoints and sandbox testing to embed ethics as a design principle rather than an afterthought [48-56][57-69].


Brando Benifei described the EU AI Act’s risk-based approach. He identified high-risk sectors such as workforce, health and justice, and outlined strict requirements on data quality, cybersecurity, governance and human control, while prohibiting applications like predictive policing, workplace emotion-recognition and manipulative subliminal techniques. He argued that proactive, risk-based regulation protects human rights without stifling innovation and called for global cooperation on trans-national challenges such as military AI [70-84][85-87][191-198].


Professor Virginia Dignum critiqued the prevailing “hammer-and-nail” metaphor, warning that treating any new AI tool as a universal “hammer” that can nail every problem limits true innovation. She advocated a broader toolbox that incorporates diverse epistemologies, citing the African Ubuntu philosophy (“we are, therefore I am”) as an alternative to the Western Cartesian view (“I think, therefore I am”). She stressed that AI is an “empty signifier” and that engineers need interdisciplinary training to ask why a problem matters, who benefits and who loses, urging a focus on collective intelligence rather than a monolithic AGI [96-112][124-138][235-254].


Paula Goldman explained how Salesforce translates these ideas into practice. Her team continuously monitors AI agents, defines escalation points where control shifts between AI and humans, and builds inclusive, real-time accessibility features, e.g., accent-aware voice agents and on-the-fly UI corrections for users with disabilities. She argued that inclusive design is a commercial advantage, yielding products that perform better and achieve greater market uptake [140-159][220-227].


Audience Q&A


Rajan asked “What is AI policy?” Prof. Dignum answered that AI policy concerns the tools, skills and knowledge needed to assess AI’s impact throughout its lifecycle, not the technical design itself [263-266].


Rita Soni raised concerns about developers in low-resource settings. Debjani Ghosh replied that democratizing AI design is essential, citing India’s “Startup India” programme and the AI Impact Commons platform (aiimpactcommons.global), which aggregates impact stories from more than 30 countries on issues such as malnutrition, pharma-related suicides and climate resilience [74-80][276-286][290-298].


After Benifei’s remarks, Maria Grazia redirected the discussion to Dr Jelassi, who reiterated UNESCO’s mission to build peace through education, culture and information. He recounted a recent visit to a remote southern African village that lacked radio or Internet; UNESCO’s provision of community radios, telecom infrastructure and early-warning systems transformed lives, illustrating how AI can serve humanity when people are truly at the centre [81-88].


Finally, Maria Grazia thanked the panellists, invited a group selfie and formally closed the session, emphasizing the need for continued multilateral dialogue and collective intelligence [89-90].


Consensus & actions – The panel agreed that (i) innovation and ethics are complementary; (ii) UNESCO’s 2021 Recommendation provides a universal set of principles that must be operationalised through lifecycle-wide ethical checkpoints; (iii) ultimate accountability resides with humans; (iv) capacity-building and interdisciplinary education are vital; and (v) global, inclusive cooperation, especially with the Global South, is essential for coherent AI governance. Proposed actions include urging member states to translate the Recommendation into national policies, expanding the AI Impact Commons, adopting an ethics-by-design lifecycle model with mandatory checkpoints and sandbox testing, creating risk-based regulatory sandboxes, investing in interdisciplinary up-skilling programmes, and fostering multilateral forums to align standards on prohibited uses and address cross-border risks such as military AI [2][38-41][55-60][124-138][191-199][40-41][65-69][141-149][191-198][220-227][290-298].


Unresolved issues highlighted were the challenge of achieving global consensus on ethical values amid cultural diversity, mechanisms for turning UNESCO’s high-level principles into enforceable regulations, systematic inclusion of developers from underserved regions, a precise definition of “AI policy” distinct from technical standards, and robust monitoring frameworks for accountability when harms occur. These gaps point to the need for further research, pilot projects and sustained dialogue [48-56][100-112][263-266][276-286].


Session transcript: complete transcript of the session
Tim Curtis

Welcome this afternoon to this UNESCO-sponsored event. My name is Tim Curtis, I’m the Regional Director for UNESCO for South Asia, and I’m very happy to have you all for the event today, Humanity in the Loop: Balancing Innovation and Ethics in the Age of AI. Of course we’re grateful to the Government of India for its collaboration on this session, which we at UNESCO believe goes to the heart of our engagement with the ethics of artificial intelligence, namely how to ensure an ethical and human-centred AI deployment whilst also encouraging the development of artificial intelligence and innovation in a technology that can offer so many benefits to humanity, including, in particular, the Global South.

So it gives me great pleasure to present today’s panellists and moderator. We have Dr. Tawfik Jelassi, who is Assistant Director General for Communication and Information and who has really been a pivotal figure in UNESCO’s work on AI ethics. Professor Virginia Dignum, who is Director of the AI Policy Lab at Umeå University; she is also a member of UNESCO’s AI Ethics Experts Without Borders and has been supporting UNESCO’s readiness assessment methodology in multiple countries. We are also privileged to have Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, which is a member of UNESCO’s Business Council, and she has really been leading by example in the private sector’s responsible AI ethics. And Debjani Ghosh, a distinguished fellow at NITI Aayog, who needs no introduction here in India, a household name in India for her role in building and leading India’s AI ecosystem.

Thank you for coming. And finally, a great pleasure to welcome Brando Benifei, a member of the European Parliament, who will share his insights on the EU AI Act and how they have been able to navigate balancing innovation and ethics. And of course, our moderator, Dr Maria Grazia, Chief of the Executive Office of UNESCO’s Social and Human Sciences sector. Please, Maria Grazia, over to you.

Maria Grazia

Hello, good afternoon. So we’ll try to have this session very dynamic because it’s after lunch, it’s Friday, over five days, very interesting, a long week. So let me start by challenging the very title of this meeting, that is Balancing Innovation and Ethics in the Age of AI. So I’m a microeconomician, which is a very complicated word, which looks like a rude word, but it’s not. It’s mathematics applied to economics, and especially applied to understanding the dynamics of innovation and new technologies. Why am I saying that? Because of course the question of innovation, what drives innovation, how can we get more innovation, is something that we always ask by the time you study what drives productivity growth, what drives welfare and well-being.

And then at times we also hear this, that having constraints or having frameworks will actually hinder these dynamics. And the position of UNESCO has been very clear. The position is: this is not true. So the member states adopted the UNESCO Recommendation on the Ethics of Artificial Intelligence already in 2021, which means that all countries, including India, were discussing these issues since 2019 to get to an agreement. What this is actually about is how we can put technologies at the service of humanity, and not let anything that is technologically feasible go ahead if that technological feasibility actually hurts people, hurts humanity. And so for us at UNESCO, ethics of AI means something very concrete.

It means AI, technologies, and here I would like to invite you to think that it’s technologies, it’s not one single element, it’s a lot of things, that actually abide by three simple things that too often we take for granted, whereas perhaps we want to think about them more, and these are human rights, human dignity, and fundamental freedoms. And if we are able to develop, deploy, and use technologies in a way that we abide by these three components, then for sure we do have technologies that serve humanity. And why am I challenging the very topic? Because too often the narrative that is used out there puts innovation and ethics, or ethical AI, which actually means an AI that is ethical throughout the life cycle, as trade-offs.

So if we innovate, it cannot be ethical because by the time it’s gone out, we don’t have the time to check on these things. Well, think of a parallel, and then we take it from there on the concrete dynamics of AI. If you were to think about one sector that is very much regulated, perhaps what comes to mind is pharma, pharmaceutical. Now, to my knowledge, but that can be my ignorance, I have never seen one single study being able to prove that the regulation in that sector has actually hindered the innovativeness, or actually the productivity, or even the remuneration of the sector. So by the same token, the pervasiveness of AI to some extent leads us to think of the pervasiveness of the paracetamol, for instance, that we use every day by the time we have an ache, like I think some of you this afternoon might have, and after listening to me, perhaps even more.

But, you know, it’s really the pervasiveness of technology that touches our life, each and every day in many ways. And this is what I think is important to discuss from different perspectives. And allow me to start with my ADG, ADG Jelassi. As I mentioned, from UNESCO, we give this global perspective, because the recommendation was adopted by 193 member states. Now, very often, what is very challenging is to go from principles to practice. That is, sometimes we know what we need to do, but then the question becomes, how do we translate it into practice? So, ADG Jelassi, where do you see the biggest gaps between going from principles and what instead is happening on the ground?

Dr. Tawfik Jelassi

Thank you, Maria Grazia. Maybe before I briefly answer your question, let me say that you used the words innovation and ethics. I don’t see personally an issue, a contradiction, between the two; I see it more between innovation and regulation. Because, say, to be creative, innovative, you should free up the mind of the people, you should not constrain them, you should not tie their hands. I used to be chair of a telecom operator board, and there, of course, with telecom and mobile phones and access to private data of consumers, the issue of regulation is paramount, but we don’t want regulation that hinders innovation. So I don’t see ethics and innovation being in contradiction; to the contrary, I think they reinforce each other. How is that?

Because clearly, if you integrate ethical reflection in the design of AI systems, AI systems will be more respected, more trustworthy, more used, and therefore more broadly deployed across society. So I see ethics and innovation really reinforcing each other, and quite often at UNESCO we say AI systems have to be ethical by design. It should be done ex ante, not ex post, not when we see mistakes and hazards and risks and harmful impact of AI and we say, wait a minute, let’s go back to see what went wrong in those models, in the data sets, are there some biases, etc. So I think it has to be done from the very early stage, and therefore innovation has to be human-centric and has to be contextualized. There is no one-size-fits-all, we know that. What you can provide is an overarching framework, a broad set of guidelines and principles, as you said, Maria Grazia, and this is what the UNESCO Recommendation on the Ethics of AI is about. You know that this recommendation has been so far the only global recommendation of its kind.

It was adopted back in 2021 by 193 member states of UNESCO, and it calls for human oversight, non-discrimination, respect for cultural diversity, respect for environmental sustainability. These are the principles that need to be translated into action and that need to be operationalized within a certain context.

Maria Grazia

Thank you very much, ADG Jelassi. Let’s actually go to Debjani, because I would like to go further into this operationalization question. So, from your work at NITI Aayog, and also your experience with NASSCOM, what are the mechanisms that can really help embed ethical reflection into the everyday life of both companies and sectors?

Debjani Ghosh

Thank you. Thank you for having me here. So, first of all, I’ll just go back to the topic, if I may, for a second, right? Because I don’t think the choice is between innovation and ethics. I really don’t. I think the choice is between: do we use technology to ensure that everyone in the world is cancer-free, everyone in the world lives with dignity, everyone in the world has enough to eat, or do we use the technology to make the world a much bigger conflict zone, develop the next atom bomb, and worse. So I think the choice is that. And therefore, the biggest challenge we have, and I hate applying the word, the label of ethics to technology, is this: can we, all the wisdom in this room, can we say that we will be successful in aligning every single human on this planet to the same ethical values?

The answer is no. No, we’re not going to be able to do that. And we know we’re not going to be able to do that. So as long as we humans don’t align to the same ethical values, you will always have good actors and you will always have bad actors, and you know that technology is going to be used in ways that are non-ethical. So the accountability, you’ve talked about humanity in the loop, the accountability comes back to us. So I think it’s very important to sort of understand that, because in all our dialogues on technology, we somehow delegate the accountability to technology. I don’t think we can as yet. Maybe in another 10 years when cognitive reasoning becomes a thing, maybe then, but not as yet, because for somebody who actually builds code and builds agents, I know they’re not that intelligent as yet.

So I think the accountability on humans is what we have to focus on. And going back to your question, if you’re talking about how industry ensures this: one of the things I’m very clear about is that regulation is usually an afterthought. You develop the technology and then you say, okay, how do we now regulate it to ensure that it’s used right? And I think that has to fundamentally change. Oversight has to be built into the entire development process, from design to commercialization. And it has to be built with the right flag-offs at every part of the design and development process. If you do that, and you’re able to, you know, red tape the product that you are developing at every single stage to certain standards that have been developed, then hopefully after the entire development phase there’s also a sandbox where you test out the impact.

You will get to a stage where ethics becomes by design versus an afterthought. And I think that’s what we have to move towards.

Maria Grazia

Thank you. I’d like to change the order of the speakers a bit, because you brought in the argument of the regulators, and you have one next to you that I’m going to refer to. How do you see this relationship? Because we know the regulation that has been pushed in Europe is fundamentally risk-based. So what was the logic, and how does this relate to what she was discussing, the human oversight or even the redress mechanisms that we might want to put in place in order to have AI that is ethical?

Brando Benifei

Well, first of all, excuse me for the voice, but that’s it. Exactly, but thanks to technology, you can hear me anyway. So I think that I can also adhere to the point that innovation and ethics are not one against the other. In fact, this summit, that is concentrating on impact, on action, on diffusion, is not separate from keeping track of reflection, of safety, of how to protect human rights, how to make AI human-centric; the things are intertwined. The point is how do we regulate effectively and how do we find a good balance. But I want to bring maybe a controversial point to the table, because I have my strong conviction on this. We have chosen globally, including in Europe, that has often been at the forefront of regulating (in one of those rooms now, I was with her in another panel, there was Anu Bradford, professor at Columbia University, who has written the book The Brussels Effect, so in fact the EU has often opened the way for many regulatory pathways); I mean, even Europe has chosen, when looking at social media, to actually not regulate. We have let the social media diffuse without regulation, and today we are discussing limits for minors, we heard about that also in the inaugural session, we are discussing misinformation and labelling of deepfakes, even Prime Minister Modi talked about that in the inaugural session. But we are doing it all now, after a lot of things have happened, and my point, that’s my opinion, is that we already have unmodifiable consequences. So I think that when we talk about when we should regulate, the real question is whether we should let the innovation flow and act only ex post.

Sometimes we might be wrong and risk unchangeable effects. So we need to build a balance that doesn’t hinder innovation, but also identifies human rights challenges. The AI Act tried to build a risk-based approach, identifying areas where we need AI to be overseen: workforce use of AI, healthcare use of AI, administration of justice use of AI. We want to be sure, when we deal with that, that data used for training is quality data, cybersecurity is sufficient, the governance of the data is solid, and there is human control. These are examples of what we have identified. And in fact, we even chose to prohibit a few use cases, for example, predictive policing, emotional recognition in workplaces and in study places, manipulative subliminal techniques.

I don’t think it’s a taboo to choose that some use cases of AI, we don’t want them in our society, and we just keep them out. So I think this approach based on the risk, you can look if you like it this way, if you want to modify, but it’s an interesting perspective, because you can choose what you think is in need of a certain regulation, and you can also promote transparency, which I think is crucial to build trust. Without trust, especially in democratic contexts, it’s impossible to accelerate adoption of AI, which is still a big challenge from both the global north and the global south. The numbers tell us that a lot of companies, or public administrations that could benefit from an ethical and correct use of AI, they are not using it because they don’t know what could

Maria Grazia

You put forward a very important point, Brando: perhaps we might not be able, or we might not want, to decide what the technology should do for us. But for sure we might want to discuss and agree on what we do not want the technology to do for us, because these are unacceptable uses of deployment. And in this case, this also highlights the importance of awareness, of the centrality of people, of having this human-centred approach. And here I would like to invite Virginia into the conversation, because of course you, as an educator, as part of this beautiful world of educators, as a professor, you have this constant contact and the ability to interact with and nurture the humankind.

So what do we have to do to avoid that people are just consumers or, you know, are passively exposed to it, instead of steering the technology toward where we want to go?

Virginia Dignam

Sure. Thank you very much. Thank you for inviting me to be here. Again, like all my previous colleagues, I want to go back to the title. I’m not going to talk about the balancing part. I’m just going to claim, and be controversial, and wake you all up: we are doing both the innovation and the ethics-and-regulation side all wrong. We are not doing it the way it needs to be done. On the innovation side, we are doing it wrong because we are somehow understanding innovation as the capacity to use this hammer that we found a couple of years ago, Gen AI or whatever, and now we want to use the hammer on any nail that we find.

Innovation is much more than that. Innovation is really challenging ourselves to go further. And I want to go back to a sentence that has stayed with me and is the main thing I’m taking from this summit today. A couple of sessions ago, where I spoke, someone said: most people developing AI never experienced power cuts, never experienced broken roads. I would like to go further. AI, and I have been working in AI for 40 years, through all the different types of AI that existed before, has been developed almost entirely in the Western, Cartesian tradition: we think, therefore we are; I think, therefore I am. First, it is individualistic, and then it equates intelligence with cognition. Human intelligence is much more than cognition.

If you were to think about AI developed, for instance, in the African Ubuntu tradition, which says, we are, therefore I am, it would be a completely different type of AI. So we do need to challenge ourselves not to go with this hammer that is there already and try to find the nails and call that innovation. That is not innovation. It’s just running around like headless chickens to see if one of those hammers works. So that’s one. On the side of ethics and regulation, we are also making two assumptions that usually come with the idea, especially in this type of combination: that ethics is this kind of finger that points, thou shalt behave, thou shalt be good, and that regulation is about prohibiting you from doing things.

Neither is ethics the finger, nor is regulation necessarily only about prohibitions. Moreover, regulation, like AI, like the hammer, like the telephone, is an artifact that we built. We built regulation, and we can apply to regulation, and to the application of ethics, exactly the same type of principles that we apply to technology: let’s experiment, let’s try, let’s verify, let’s evaluate, let’s see what’s there, and not have this idea of the pointing finger or of laws written in stone, which stay there once and forever. So that’s going back. And now, very quickly, to your question, because I don’t want to take much time: I think that education needs to start exactly from this point. Technology alone is not enough. So we really need to improve the education of the engineers, the computer scientists, the data scientists on the humanities side. As engineers, we know very well how to solve a problem; we never ask ourselves why this is a problem, who has this problem, what the alternatives to my solution are, who gains, who loses, what is gained, what is lost. This is humanity. We need to somehow bring that together, in the engineering case and in the humanities and social science case.

We need them, because I’m an engineer, to help us understand that we need to be much more precise in what we are talking about. AI at this moment is actually an empty signifier. It means nothing. Everything is AI; nothing is AI. All kinds of things are AI: the applications are AI, the sectors are AI, the technology is AI, the research, everything is AI. And we cannot just go around with this word, which actually means magic. In most politicians’ talks, it means magic. And we want to regulate magic. Okay, good luck. So we need the humanities and the social sciences to really help us be precise about what we are doing. This is the education we need.

Maria Grazia

Fantastic. You couldn’t have made it easier for me to now ask Paula: how are we doing that in companies? Because this is very easy to say; we need to translate the principles and the values into concrete models that actually work: work for a company, work to deliver results, and work for people.

Paula Goldman

Yes, indeed. Well, first of all, thank you for that. We were just talking about how this is our last speaking panel of the week, and that was a fiery way of drawing things together; I really appreciate it, kind of an energy boost. So yeah, I think the answer is actually much more practical and much less abstract than one might imagine, so I’ll just tell you a little bit about my experience. I spend my days at Salesforce both testing our products and making sure that our AI has features baked into it so that our customers can observe what’s going on, know how to tweak the controls, and understand, for example, when they should set an AI agent to escalate to a human, or a human to escalate back to AI, and so on.

And when we do this, it’s not like we think we at Salesforce have all the answers, because clearly we don’t, and we serve a variety of industries all over the world. But all of our customers are basically asking the same questions, right? They’re asking: how do I know what kind of results I’m getting? How can I tell if something goes wrong? What are my options if something goes wrong? What part of AI ethics is your responsibility and what part is mine? And these questions don’t necessarily have the most mature answers, because we’re in the early innings of AI agents and there is a lot more work to do. But actually, these are the right questions to be asking, and that also allows for some flexibility and some cultural or industry specificity, for people to find the right answers to the questions.

So that would be part one of my answer: it’s actually very, very practical. To adopt AI, companies and organizations need to be able to trust that it’s going to work. They don’t want to be embarrassed by it, right? And they’re not going to be able to scale it if it doesn’t work. So that’s number one. The second thing, which we are increasingly finding when we work with companies on this, is that the companies most successful at scaling AI put people at the center of the transformation. They don’t work just top-down, as in, you shall use this application. They give people a chance to have a voice around what is actually working.

What is actually most useful to them in their day-to-day work? Where is AI actually going to help them, and where is it kind of useless? It’s that kind of understanding of how work actually gets done, of what processes are actually going to benefit from that kind of application, that I think is really important, and it allows people to stay at the center of this large-scale transformation that we’re part of.

Maria Grazia

…that might happen or should happen in the context of making AI ethical by design?

Debjani Ghosh

Well, in my current role at NITI Aayog, which is the think tank for the government of India, we’re looking at what the unlocks are for technology, including AI, to ensure that we can use technology to solve some of the biggest problems, right? Now, what Professor Virginia said about AI as a hammer, I think that’s a luxury of the developed countries, and I do agree with you when it comes to developed countries. But when you come to developing countries, where you don’t have a lot of resources, you cannot afford to use technology that takes a lot of deep investment to do things where you’re not sure, where you’re not sure of the ROI. And one of the examples I want to give is that, as part of this summit, there were seven working groups set up, looking at different problems.

I chaired one of the working groups, on economic development and social good, which was all about impact and how you scale impact, right? And we had around 50 countries participating. Now, one of the things that came out of that working group, which is one of the outcomes of this summit, is the creation of the AI Impact Commons globally. It’s online; you can look it up at aiimpactcommons.global. It has impact stories from more than 30 countries, and counting, growing every day, with learnings on what kinds of problems can be solved and how you scale the solutions. And the reason I said it’s a luxury of developed countries is that when you look at those impact stories, most of them are from developing countries, and you’ll be amazed at the kinds of problems they’re solving, from malnutrition to farmer suicides, you know, lowering farmer suicides by using technology to improve yield.

And to ensure that they don’t suffer from climate change and shocks. I mean, the problems are so inspiring. So I think it wouldn’t be fair to say that we don’t know what problems we are solving today, and I will absolutely stand by that. And I’ll go back to what Paula said. I’m not sure industry today is really putting humans at the center of the loop, but I think they need to. They absolutely need to. I do, because as we develop technology, it seems like the end goal of AI that all the big companies are talking about is AGI. Now, when you look at what AGI means, it’s about control.

Why do we want to build something to control everyone? Why don’t we want to build something that is going to augment lives? If we could change the narrative, then I would say, yes, humans are at the center. Right now, I think we still have a lot of work to do to bring humans back into the center of the loop. And it’s something we have to realize, and industry has to realize: that that is the only way you can build sustainable businesses. That’s how you build your staying power. So it’s going to be very important to do.

Maria Grazia

Absolutely. And it’s about having these different entities around the table, but also having different governments, in this multilateral setting, talk to each other about regulation or, more generally, policy, because at the end of the day we talk a lot about regulation, but regulations are only part of the policy framework that one could put in place. So let’s go to Brando, because I could see he was kind of calling me with his eyes while we were talking, and I’m sure he wants to add something on the multilateral setting. Over to you, Brando. Perhaps you were not calling me, but you’ve been called on nevertheless.

Brando Benifei

Well, I think it’s very important that we use occasions like this summit to advance a global cooperation framework. And for sure, it’s also part of the mission of UNESCO to unite different cultures and approaches to what we are talking about; you explained earlier the longstanding work of the organization. But I think we need to face the reality that there are issues where global cooperation will be crucial and is still not sufficient. Think of military use of AI, or the existential risks of losing control of very powerful AI models. This is part of a controversial debate, we would say. But I wouldn’t dismiss the renowned scientists who maintain that we are

in a context where the lack of globally adopted rules is putting us in very significant danger. And this is also part of the idea of balancing innovation and ethics, because for sure we need domestic rules to foster the best opportunities out of the various use cases of AI. In these days I met many companies that were working on very practical, extremely useful AI use cases to improve our lives, to advance societal good. But this cannot be left just to the judgment of private-sector companies, which have a specific objective: profit for their owners or shareholders. It’s not societal good; they might want to add that on top, but that’s not their objective, and that’s natural. So we need to have frameworks in place on what our daily interaction with AI is, and we need to build common standards: the more broadly adopted the standards we have globally, the better the results we will reach. But we also need a step further, which is global cooperation on those issues where we cannot actually do very much domestically, because they are global issues. And I think that, with increased geopolitical tension, the use of AI for peace will soon be quite an important topic on which the international community has to find a way to take quick steps forward. I hope that our leaders will deal with that.

Maria Grazia

I couldn’t agree more with the need to coordinate and have an approach that is global. And allow me the prerogative of the moderator to call on my ADG, Tawfik; I will take the consequences of that. What I would like to ask you is what it means to have people at the center. And let’s remember that, given the work you lead in the Communication and Information sector, there is the role of information; Virginia was hinting at that before in terms of awareness. Could you please share a bit of those insights?

Dr. Tawfik Jelassi

Thank you, Maria Grazia. Let me pick it up where Brando left off: he said AI for peace. Maybe some in the room know why UNESCO was created back in 1945, 80 years ago almost to the day. The mission of UNESCO was, and has been, to build peace in the minds of men and women. How? Through education, culture, the sciences, communication and information. Everything happens in the mindset of the people. Today, of course, we want AI to be a force for good, but it could also be a force for hazards, for harm, for risk. I tend to say technology is neutral; it depends what humans make of it. It could be a force for good; it could be a force for, as you mentioned, wars or unwanted things. So yes, humanity in the loop, that’s fundamental. I always ask myself, and I tell my team at UNESCO: if whatever we do in the field transforms lives, then we are spot on. If you can make the beneficiaries of our educational programs more successful through what you offer them, then that’s impact.

Where is the impact? AI can transform lives, yes, and you mentioned some examples. It can help cure cancer, as you said, provide food for people in need, and so on and so forth. We want that type of AI. And AI does not stand only for artificial intelligence; AI stands for all-inclusive. That’s AI as well. So if you take that perspective, if you really put humanity in the loop, at the center, not only in the loop but at the center… And allow me one minute to share something with you. I have been at UNESCO for five years. My most memorable day happened last week in a tiny village in remote southern Africa, a village in which people had no access to radio, no TV, no mobile telephony, no internet, nothing.

They always felt they were second-class citizens in their country. Imagine that you don’t have access to information: you don’t know what’s happening around you, you cannot call your relatives living in other cities. This was the case for 15 small communities. What UNESCO did was first provide community radios and set up a tower with transmission equipment, so that through the radio people have information and know what’s happening. And when we did that, telecom operators came in to plug in their equipment and provide mobile telephony, and then came internet connectivity, and then UNESCO put in place early warning systems, because these areas were very prone to flooding, and whenever that happened it wiped out the cattle, the livelihood of the people, and so on. That’s transforming the lives of people. AI can contribute in a huge way to that, and I think if we put that at the center, then of course it has to be ethical, it has to be human-centered, it has to be accountable, transparent, all the principles that we talked about. And then comes the issue of…

advocacy and capacity development, because more informed policymakers will go this route. But if we don’t raise awareness, if we don’t do the advocacy and the capacity building and the training, then of course we will see some companies or some people going for the buck, for the profit from this technology, not the social benefit, not transforming lives.

Maria Grazia

Thanks very much. Paula, over to you, because the company gets the end of the speech. How do you see this question of including the other stakeholders in what you do, and how can that transform and help you deliver on AI that gets adopted?

Paula Goldman

Well, thank you for saying that, and I actually think it becomes more and more obvious that that’s the only way to scale the technology. Just think about it: if you’re developing a technology that’s meant to serve many different markets and many different populations, you need to know that it works. For example, we have a voice capability in our AI agent. We need to know that that voice capability, even if we’re just talking about English, forget about other languages for a second, works across different vernaculars of English, different accents, et cetera. I work a lot on product accessibility, right?

It needs to understand a deaf accent, for example. And so the most inclusively designed technology is going to be the one that’s most successful; it’s going to increase accuracy rates and so on. To that end, it’s actually a very, very exciting time to be able to use AI for inclusion. I mentioned product accessibility: one of the things that is most hopeful and most exciting about this time is that we’re starting to see AI agents that correct, in real time, code that is not accessible. My team is working on this at Salesforce. Or a browser extension that corrects things in real time, so that if you’re on your phone and something comes up, say the common problem where you’re trying to zoom out or in and it breaks, it will correct it in real time. This kind of technology is the difference between someone being able to use software to actually get their job done and someone being excluded from getting their job done. So again, the point that I’m trying to make is that the most inclusively designed technology is going to be the most commercially successful, and also that this is an incredibly exciting time to be doing this work.

Maria Grazia

I’m really happy to hear from the voice of industry that those who include are not just doing a favor to those who get included; the AI systems themselves become superior. And that counters a common legend out there that says, no, you know, it’s costly, and perhaps then the profit is not there. What we are hearing from the voices of the companies is: no, because it’s a superior product, it’s a better product, it performs better. Last but not least, back to our Virginia. Here I would especially like to hear from you about the role of a specific component of human capital, that is, skills.

And we have heard throughout this week about the importance of upskilling and reskilling. Is that really the solution?

Virginia Dignam

Thank you very much. Firstly, going back: if I gave the impression that hammers are not useful, that is not the case; there are many useful hammers. My point is more that we need a toolbox; we don’t need only hammers, and even outside of the Western world we are too focused on hammers. Maybe the skills, yes: we really need to focus on skills, we need to focus on our own capabilities, on our lived experience, and so on. Someone talked about AGI, and indeed at this moment the AGI concept is about power, about providing power to those companies that claim they will build it. How are they building it? With what I call the Play-Doh approach: they are putting together all the data of the world with all the capacities of the world, creating a huge ball of Play-Doh. Anyone who has played with Play-Doh before knows that after you play, there is no color, there is no shape, there is nothing anymore.

It’s just a thing. And then, of course, that thing might do something, but no one knows what’s inside, what came in, what came out, and so on. We need to go much broader in understanding what this AGI is. What does AGI fundamentally mean? A system that is more intelligent than us, that can solve problems that we cannot. We already have AGI. We always had AGI. It’s called collective intelligence. The moment we work together, we can do more than each one of us alone. If we use the AI technology that we are developing to support this collaboration, to develop the different skills, to integrate all our different capabilities, our differences, our different experiences, our different abilities, the different tools that we have developed…

then we get a much broader bouquet: no longer a ball of Play-Doh without color, but a huge bouquet of flowers of all colors. So AGI is about us, and we cannot let the big companies run away with the concept of AGI with the idea that they are going to create a god which is going to solve our problems. AGI is about putting all of us together, because our collective intelligence is really what, at the end of the day, is going to solve, or to support us in solving, our problems. Just one more thing, and I think it’s also part of the skills. Technology, and there I disagree with you, is not neutral.

All technology embeds and encompasses our choices, our options, our data; all of that is part of it. We have to understand technology as a non-neutral artifact, take those capabilities, and embrace the different perspectives and the different colors of this. But again, all together: the way forward is not giving up and hoping that AI is going to solve whatever complex problems we have. It’s really embracing and enforcing collective intelligence. That is AGI.

Maria Grazia

Excellent. Now, collective intelligence. Now we are going to have a collective set of questions, just a couple, because the time doesn’t allow for more. So, when you want to intervene, please be absolutely short: say your name, say whom you want to ask the question to, and ask the question, without doing the whole history of humankind first. So, I have to say, I spotted that hand first; there was a lady on this side. Now I think she got shy, and she just put the… So, let’s start with that gentleman. No, it’s the gentleman behind you, I’m sorry. I can do everything, from moderating to giving you the microphone; we are proactive and problem-solving. Let’s go, your name is?

Audience

Hello, everyone. I am Rajan, from Business Club TV, and I am the CEO and founder of a startup. I have a very basic question for Professor Virginia Dignam. So, Professor, my question for you: what is AI policy?

Virginia Dignam

Wow, okay, how many hours do we have? Okay, very shortly: AI policy is about the tools, the capabilities, the skills, the information, the knowledge and the understanding of how to address the impact of AI. Not the technology, not the designing of the technology, but really addressing the impact of this technology across the whole loop of development: from the beginning, asking ourselves why we are using AI and whether this is the best problem we have, to the way we are developing it, to the way we are evaluating it. And addressing its impact.

Maria Grazia

No, I’m sorry, let’s be inclusive, let’s allow others to speak as well. Please, that lady, yes, exactly, the one with the hand raised. It’s just down here, three rows ahead. I’m going to be gender-equal, so one-on-one; I’m not going to have only the men speak, because typically you’re the fastest to raise your hands. We women are sharper. Go ahead.

Rita Soni

I love that. Thank you for that. Hi, my name is Rita Soni. I don’t know who should answer this question, but at the beginning of this panel I heard someone say that those who are developing and designing AI have probably never experienced a power cut or potholes in the road. I thought that there would be more discussion about who is actually involved as the humans in the loop. Debjani, you know me. So I have to ask this question about the people who are actually developing it, and whether we’re thinking about employing them responsibly. Right now, we know that there are over half a million people in the world

that we consider impact workers. They’ve typically been excluded, but now they are included. So how do we support this as a movement, getting those who have experienced power cuts to help design and develop it? This is a development-related question.

Maria Grazia

Who wants to tackle it? Because we are over time. That’s the last question, and then we will have to say thank you and continue the conversation in parallel.

Debjani Ghosh

Yeah, fully. I mean, if you’re asking whether developers have suffered power cuts while developing the technology: anyone who’s working out of Bangalore or any Indian city, yes, they have. They’ve definitely suffered during the development. Now, I think, Rita, the point you were making is: how do we make it more inclusive? How do we bring people in? And I think that goes back to the perennial question of how you ensure that you democratize not just access to technology, but also the design and creation of the technology, right? And it’s not just gender; it’s also how you diffuse it down to smaller cities, to the people who are actually facing the problems in smaller cities.

And I think at least in India we are doing that through our initiatives like Startup India, etc., which today are more focused on building capabilities in Tier 2 and Tier 3 cities, not just as users, not just for adoption, but actually for design and development. So there’s a lot of focus, and I’m sure there are founders here who have come from the smallest of cities in India. And the best part is that when we track the numbers, the growth of startups and founders is higher in the Tier 2, Tier 3 and Tier 4 cities than in Tier 1 cities. So that tells us we’re doing something right.

Maria Grazia

I hope you have enjoyed this at least half as much as I have enjoyed this panel. Please join me in thanking the panelists, and we’re going to do a group photo, so please stand up. We’re going to do a selfie with all of you in the back. Come here, stand like this, so we’re all together. This is our collective intelligence. Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (38)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Dr Tawfik Jelassi is the Assistant Director‑General for Communication and Information at UNESCO”

The knowledge base lists Tawfik Jelassi as UNESCO’s Assistant Director General for Communication and Information [S119].

Confirmed (high confidence)

“Dr Maria Grazia is from UNESCO’s Social and Human Sciences sector”

UNESCO’s records identify Dr Mariagrazia Squicciarini (also referred to as Dr Maria Grazia) as the CEO of the Social and Human Sciences sector [S22].

Confirmed (high confidence)

“The 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence was adopted by 193 member states”

The recommendation was adopted two years ago by 193 UNESCO member states, demonstrating its worldwide acceptance [S26] and is described as a global normative foundation [S126].

Additional Context (medium confidence)

“UNESCO aims to promote ethical, human‑centred AI while supporting innovation, especially in the Global South”

UNESCO’s three-pronged approach – fostering AI opportunities, mitigating risks, and addressing harms – reflects this dual focus on ethical, human-centred AI and innovation, with particular attention to the Global South [S84].

Additional Context (medium confidence)

“Regulation does not necessarily hinder innovation; efficient ethical regulation can guide innovation toward benefiting humanity”

UNESCO emphasizes that innovation and regulation are not contradictory and that well-designed ethical regulations should steer innovation positively [S46].

Additional Context (low confidence)

“The UNESCO Recommendation is built on three non‑negotiable pillars: human rights, human dignity and fundamental freedoms”

The recommendation’s principles are rooted in human rights and also highlight inclusivity, sustainability, transparency and explainability, providing a broader set of values beyond the three pillars mentioned [S127].

External Sources (127)
S1
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — So it gives me great pleasure to just present today’s panellists and moderator. We’ve had Dr. Tawfiq Jilasi, who’s Assis…
S2
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Dr. Tawfiq Jilasi- Assistant Director General for Communication and Information (mentioned by Tim Curtis in introductio…
S3
AI That Empowers Safety Growth and Social Inclusion in Action — – Ankit Bose- Tim Curtis- Rein Tammsaar
S4
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Debjani Ghosh- Distinguished Fellow at NITI Aayog, former role with NASCOM
S5
Panel Discussion: 01 — -Debjani Ghosh- Distinguished Fellow, Niti Aayog (role: moderating the ministerial conversation)
S6
WSIS+20 High-Level Event 2025 Inaugural Session: Celebrating Two Decades and Achieving Future Milestones Together — ### UNESCO Assistant Director-General Tawfik Jelassi – **Tawfik Jelassi** – Role/Title: Assistant Director General for …
S7
Day 0 Event #119 Roam X Driving WSIS Implementation and Digital Cooperation — – **Tawfik Jelassi** – Assistant Director General of UNESCO for Communication and Information, delivered keynote remarks…
S8
DC-OER The Transformative Role of OER in Digital Inclusion | IGF 2023 — Dr. Tawfik Jelassi, Assistant Director-General for Communication and Information Sector, UNESCO
S10
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Brado Benefai- (Appears to be the same person as Brando Benifei, mentioned in introduction) -Brando Benifei- Member of…
S11
Open Forum #72 European Parliament Delegation to the IGF & the Youth IGF — – Brando Benifei: Member of European Parliament (mentioned but not in speakers list)
S12
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Paula Goldman- Chief Ethical and Humane Use Officer at Salesforce
S13
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — -Paula Goldman: Area of expertise, role, and title not mentioned in the transcript
S14
https://dig.watch/event/india-ai-impact-summit-2026/from-technical-safety-to-societal-impact-rethinking-ai-governanc — I think we can just continue the discussion and I hope we’ll do. This is today just a start. I also hope that we will be…
S15
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Professor Virginia Dignam- (Same as Virginia Dignam, referenced with title)
S16
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Tatjana Titareva: Thank you so much. Today’s session’s focus is to discuss the roadmap for AI Policy Lab that we have de…
S17
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — Thank you so much. My name is Rita Soni. I work with a company that’s operating in small -town India, delivering all the…
S19
Ethical AI_ Keeping Humanity in the Loop While Innovating — I love that. Thank you for that. Hi, my name is Rita Soni. I don’t know who should answer this question, but at the begi…
S20
Ethical AI_ Keeping Humanity in the Loop While Innovating — -Maria Grazia- Chief of the Executive Office of UNESCO’s Social and Human Sciences sector, moderator, microeconomist spe…
S21
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — Thank you for coming. And finally, a great pleasure to welcome Brado Benefai, a member of the European Parliament who wi…
S22
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — – **Dr. Maria Grazia Grani** – CEO from the Social and Human Sciences Sector UNESCO (mentioned in introduction but appea…
S23
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S24
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S25
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S26
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The UNESCO recommendation on AI ethics has become a critical guide for global AI governance. It was adopted two years ag…
S27
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Additionally, IFAP focuses on building capacities to address the ethical concerns arising from the use of frontier techn…
S28
Shaping an inclusive global action to anticipate quantum technologies — Such international cooperation is crucial to bridge the digital divide, enabling holistic participation in developing gl…
S29
Diplomacy amid Disorder / DAVOS 2025 — Need for collaboration between global north and south
S30
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Audience:I am dealing. I’m a professor of ethics. And I’m dealing with AI and ethics in some years. And I’m struggling a…
S31
DC-Inclusion & DC-PAL: Transformative digital inclusion: Building a gender-responsive and inclusive framework for the underserved — – Tawfik Jelassi: Assistant Director General of Communication and Information Sector of UNESCO Tawfik Jelassi: Good mo…
S32
Day 0 Event #252 Editorial Media and Big Tech Dependency the Material Conditions for a Free and Resilient NeWS Media — Chris Disspain warns against using the term ‘regulation’ because it can be misinterpreted by authoritarian governments, …
S33
Ministerial Roundtable — – **Tawfik Jelassi** – ADGE of UNESCO (Assistant Director-General for Education) Ms. Doreen Bogdan-Martin, Mr. Tomas La…
S34
Ethics and AI | Part 5 — The principles stipulated by the Convention do not come with anything that would deal with issues which we have identifi…
S35
Ethics and AI | Part 6 — Even if the Act itself does not make direct reference to “ethics”, it is closely tied to the broader context of ethical …
S36
Transcript from the hearing — Now, regulation is often said to stifle innovation. But there is no real trade off between safety and innovation. An AI …
S37
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Ethics as foundational rather than an afterthought**: The panelists emphasized that ethics should be embedded from t…
S38
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — When sandboxing AI solutions, it’s important to consider that individuals will be affected regardless of whether their p…
S39
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Artificial Intelligence (AI) has the potential to revolutionise industries, enhance efficiency, and support innovation a…
S40
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Another aspect discussed is the need to redefine the term ‘developing countries.’ This argument emphasises the existence…
S41
AI for Good Technology That Empowers People — The discussion revealed that edge AI is not merely a fallback solution for areas with poor connectivity, but rather enab…
S42
Scaling Innovation Building a Robust AI Startup Ecosystem — This comment is insightful because it explicitly addresses the geographic democratization of innovation in India, acknow…
S43
GermanAsian AI Partnerships Driving Talent Innovation the Future — The focus on tier-2 and tier-3 cities in India exemplifies this inclusive approach, supported by Dr. Azariah’s evidence …
S44
The Innovation Beneath AI: The US-India Partnership powering the AI Era — A significant announcement was Google’s Climate Technology Center, developed in partnership with the Office of Principal…
S45
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — UNESCO Director Guilherme Canela emphasized that innovation and human rights protection are not opposing forces but comp…
S46
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — The analysis also highlighted the importance of implementing ethical principles and existing consensuses on a global sca…
S47
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misleading or dangerous financial innovat…
S48
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S49
WS #162 Overregulation: Balance Policy and Innovation in Technology — Balancing regulation and innovation Paola Galvez argues that regulation is needed, but the focus should be on how to re…
S50
The Future of AI in the Judiciary: Launch of the UNESCO Guidelines for the use of AI Systems in the Judiciary — Ms. Amanda Leal:And I think to contextualize, I wanted to bring two points. One about the governance throughout the AI s…
S52
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S53
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S54
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI g…
S55
Towards a Safer South Launching the Global South AI Safety Research Network — The tone was collaborative and urgent throughout, with speakers expressing both excitement about the network’s potential…
S56
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — – **Implementation focus**: Early-stage development influence versus enforcement cooperation – **Regulatory mechanisms*…
S57
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — I use often use cars as an example. I know it’s a bit silly, but I also like to use the child seat, you know, because I …
S58
Ethical AI_ Keeping Humanity in the Loop While Innovating — Treating regulation and ethics as experimental artifacts that can be tested, evaluated, and refined rather than fixed pr…
S59
Why science metters in global AI governance — The panel discussion explored practical challenges in the science-policy interface, with experts from India, France, WHO…
S60
WS #100 Integrating the Global South in Global AI Governance — Key issues highlighted included the technology gap between developed and developing nations, regulatory uncertainty in m…
S61
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S62
Laying the foundations for AI governance — International Cooperation and Standards Need for international cooperation despite geopolitical challenges
S63
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S64
Robotics and the Medical Internet of Things /MIoT — In summary, the analysis highlights the importance of inclusive technology design and ensuring that technological advanc…
S65
WS #45 Fostering EthicsByDesign w DataGovernance & Multistakeholder — Rosanna Fanni: Thank you. Thank you very much. and also thanks for all my fellow panelists. I think a lot of things …
S66
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Challenges of implementing ethics in commercial environments**: Multiple speakers highlighted the tension between go…
S67
Ethics and AI | Part 3 — In November 2021, UNESCO adopted theRecommendation on the Ethics of Artificial Intelligence, marking its first global st…
S68
The fading of human agency in automated systems — In many settings, humans retain formal accountability while losing meaningful influence over outcomes. When a decision i…
S69
ICT vulnerabilities: Who is responsible for minimising risks? | Introduction — Human intervention is necessary; the problem can’t be completely solved by technology alone Responsibility also lies wi…
S70
AI in Action: When technology serves humanity — Across these domains (conservation, disaster response, language preservation, small business, and agriculture), technolo…
S71
Driving Indias AI Future Growth Innovation and Impact — Less regulation preferred to avoid curtailing innovation Rajgopal advocates for minimal regulation to avoid stifling in…
S72
Tackling disinformation in electoral context — While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the di…
S73
New Technologies and the Impact on Human Rights — Balanced regulatory approach Regulation should be proportionate and risk-based, focused on actual likely harms rather t…
S74
Responsible AI in India Leadership Ethics & Global Impact part1_2 — The discussion highlighted the importance of collaborative regulation development, where industry expertise informs regu…
S75
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S76
WS #283 AI Agents: Ensuring Responsible Deployment — Capacity development | Online education Government Perspectives and Regulatory Approaches Need for enhanced education …
S77
Open Forum #17 AI Regulation Insights From Parliaments — Capacity building and education are essential for all stakeholders Development | Capacity development
S78
Ethics and AI | Part 5 — The principles stipulated by the Convention do not come with anything that would deal with issues which we have identifi…
S79
Ethics and AI | Part 2 — 4.An ethic is framework, or guiding principle, and it’s often moral. […]  A social ethic might include “treating people …
S80
Main Session | Policy Network on Artificial Intelligence — Yves Iradukunda : Thank you, and good afternoon. It’s great to be here in this critical conversation, and thanks to t…
S81
Ethical AI_ Keeping Humanity in the Loop While Innovating — And then at times we also hear this, that having constraints or having frameworks will actually hinder these dynamics. A…
S82
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — UNESCO Director Guilherme Canela emphasized that innovation and human rights protection are not opposing forces but comp…
S83
La découvrabilité des contenus numérique: un facteur de diversité culturelle et de développement (Délégation Wallonie-Bruxelles, Belgian Mission to the UN in Geneva) — In conclusion, the analysis provided insight into various arguments and concerns surrounding AI, internet languages, and…
S84
WS #110 AI Innovation Responsible Development Ethical Imperatives — UNESCO’s representative, Guilherme Canela de Souza Godoy, stressed that innovation and human rights protection should no…
S85
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — In conclusion, generative AI technology has the potential for positive impacts in multiple industries. It enhances commu…
S86
Secure Finance Risk-Based AI Policy for the Banking Sector — The discussion revealed several unresolved tensions, particularly the fundamental disagreement between risk-based and em…
S87
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic References to financial crises being born from misleading or dangerous financial innovat…
S88
WS #162 Overregulation: Balance Policy and Innovation in Technology — Balancing regulation and innovation Paola Galvez argues that regulation is needed, but the focus should be on how to re…
S89
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — – **Ethics as foundational rather than an afterthought**: The panelists emphasized that ethics should be embedded from t…
S90
WS #438 Digital Dilemmaai Ethical Foresight Vs Regulatory Roulette — Alexandra Krastins Lopes: Great, thanks. It’s an honor to contribute to this important discussion. And while I have a pr…
S91
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Moira de Roche:Yes, that’s why I said, Don, we’ve always looked at everything through an ethical lens and we believe tha…
S92
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This comment expanded the education discussion beyond formal systems to include organic, curiosity-driven learning. It r…
S93
Building Population-Scale Digital Public Infrastructure for AI — “Thank you so much, Mr. Nandan.”[4]. “We’ll start by taking a quick group photograph together and then begin the discuss…
S94
Harnessing Collective AI for India’s Social and Economic Development — Moderator: sci-fi movies that we grew up watching and what it primarily also reminds me of is in speci…
S96
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI g…
S97
AI That Empowers Safety Growth and Social Inclusion in Action — Second, they want to close capacity gaps. Many developing countries need infrastructure, skills, and compute to particip…
S98
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:So the details of side events will be up on the website very soon, hopefully by next week or so. And we w…
S99
Building Inclusive Societies with AI — And in fact, the platform that the committee recommended in some sense was to also help to Uberize, to create demand, to…
S101
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S102
WAIGF Opening Ceremony & Keynote — The overall tone was formal yet optimistic. Speakers expressed enthusiasm about the potential of digital technologies wh…
S103
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers demonstra…
S104
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S105
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S106
WS #31 Cybersecurity in AI: balancing innovation and risks — Melodena Stephens: So this is a tough one, right? Because when I look at ethics, I think ethics are great. The line b…
S107
(Day 1) General Debate – General Assembly, 79th session: afternoon session — The level of disagreement among speakers is moderate. While there is general consensus on the need to address global cha…
S108
https://dig.watch/event/india-ai-impact-summit-2026/ethical-ai_-keeping-humanity-in-the-loop-while-innovating — Thank you. Thank you, Deb. Okay. Thank you. Thank you for having me here. So, first of all, I’ll just go back to the top…
S109
Smart Regulation Rightsizing Governance for the AI Revolution — The discussion began with a notably realistic and somewhat pessimistic assessment of global cooperation challenges, but …
S110
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S111
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S112
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — This comment emphasizes the critical importance of collaboration while also pushing for concrete actions rather than jus…
S113
AI Meets Agriculture Building Food Security and Climate Resilien — The discussion maintained an optimistic and collaborative tone throughout, characterized by visionary leadership and pra…
S114
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S115
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S116
Welfare for All Ensuring Equitable AI in the Worlds Democracies — The conversation maintained an optimistic and collaborative tone throughout, with participants sharing practical solutio…
S117
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S118
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S119
WSIS Action Line C7 E-learning — – **Tawfik Jelassi** – Assistant Director General for Communication and Information at UNESCO Tawfik Jelassi, UNESCO’s …
S120
Leaders TalkX: Gateway to Knowledge: Empowering Global Access Through Digital — Lori Schulman warmly initiated the Leader Talks panel with a welcome, thanking the audience for their patience and pledg…
S121
WSIS Action Line C7: e-Learning: Empowering Educators and learners: Enhancing Teacher Training and e-Learning for Digital Inclusion — At the WSIS Plus 20 event, a session chaired by Zeynep Varoglu focused on Action Line 7, which addresses the empowerment…
S122
WSIS Action Line C10: The Future of the Ethical Dimensions of the Information Society — Dr. Mariagrazia Squicciarini:Absolutely. We are for plural inclusivity. The last word to Ashu. Dr. Mariagrazia Squiccia…
S123
Main Topic 3 – Innovation and ethical implication  — Good morning. Vanya Skoric, serving as the Programme Director at the European Center for Not-for-Profit Law, spotlights …
S124
Technology Rewiring Global Finance: A Panel Discussion Summary — Koffey emphasized that regulation must be a force for economic growth and innovation, breeding adoption and trust throug…
S125
Building fair markets in the algorithmic age (The Dialogue) — The current system struggles with jurisdictional and sovereignty issues, as companies are often not based in the territo…
S126
AI diplomacy — We are, in essence, searching for a common language to discuss AI ethics, safety, and security. We can see the early res…
S127
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — The recommendation includes values of human rights, inclusivity, and sustainability, serving as a guide for developers a…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Tim Curtis
2 arguments · 73 words per minute · 339 words · 276 seconds
Argument 1
UNESCO’s stance: ethical frameworks do not hinder innovation; they guide AI to serve humanity
EXPLANATION
Tim emphasizes that UNESCO believes ethical AI does not impede technological progress but rather ensures AI serves humanity, especially in the Global South. He frames ethics as central to responsible AI deployment while still encouraging innovation.
EVIDENCE
Tim states that UNESCO’s belief is that ethical AI deployment should not hinder innovation and that AI can offer many benefits to humanity, particularly in the Global South [2]. He also references the 2021 UNESCO recommendation on AI ethics, showing that member states have been discussing these issues since 2019 to put technology at the service of humanity [21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO explicitly states that ethical frameworks are not a barrier to innovation but a guide for AI to serve humanity, countering the notion that constraints hinder progress [S2]; this view is reinforced by analyses that argue safety and innovation are not trade-offs [S36].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei, Maria Grazia
Argument 2
UNESCO’s partnership with India exemplifies the need for South‑South collaboration and inclusive global standards
EXPLANATION
Tim highlights the collaboration with the Government of India as an example of UNESCO’s commitment to inclusive, South‑South cooperation in AI ethics. This partnership showcases how global standards can be shaped through joint efforts with developing countries.
EVIDENCE
Tim thanks the Government of India for its collaboration on the session and notes UNESCO’s engagement with the Global South in AI ethics and innovation [2].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s engagement with the Global South, including India, is highlighted as a model for inclusive standards and South-South cooperation [S28]; recent US-India AI collaborations further illustrate the value of such partnerships [S44].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Brando Benifei, Virginia Dignam, Debjani Ghosh, Maria Grazia
Dr. Tawfik Jelassi
4 arguments · 156 words per minute · 961 words · 369 seconds
Argument 1
Innovation and ethics reinforce each other; ethical design leads to more trusted, widely adopted AI
EXPLANATION
Tawfik argues that integrating ethical reflection early in AI design makes systems more trustworthy and thus more widely adopted, showing that ethics and innovation are complementary rather than contradictory.
EVIDENCE
He explains that ethical design makes AI systems more respected, trustworthy and widely used, and that AI should be ethical by design, not after-the-fact, citing the UNESCO recommendation as the global framework for this approach [38-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions stress that ethics should be foundational, not an afterthought, and that trustworthy AI drives adoption [S37]; UNESCO’s stance that ethical AI does not impede innovation supports this view [S2].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Debjani Ghosh, Brando Benifei, Maria Grazia
Argument 2
UNESCO Recommendation provides a global set of principles that need concrete translation into actions
EXPLANATION
Tawfik points out that the UNESCO recommendation outlines high‑level principles, but these must be operationalised into concrete actions on the ground to be effective.
EVIDENCE
He notes that the UNESCO recommendation calls for human oversight, non-discrimination, cultural diversity and environmental sustainability, which need to be translated into practice [40-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UNESCO recommendation, adopted by 193 member states, offers high-level principles that require operationalisation through concrete actions and capacity-building programmes [S26][S27][S37].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Debjani Ghosh, Paula Goldman, Virginia Dignam, Maria Grazia
Argument 3
Human oversight, non‑discrimination, cultural respect, and environmental sustainability are core to ethical AI
EXPLANATION
Tawfik reiterates that ethical AI must be grounded in respect for human rights, dignity and fundamental freedoms, and operationalised through human oversight, non‑discrimination, respect for cultural diversity and environmental sustainability.
EVIDENCE
He lists the core components of ethical AI as human oversight, non-discrimination, respect for cultural diversity and environmental sustainability as defined in the UNESCO recommendation [40-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UNESCO recommendation explicitly lists human oversight, non-discrimination, cultural diversity and environmental sustainability as key pillars of ethical AI [S26]; related analyses discuss how these principles shape responsible AI governance [S34].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
AGREED WITH
Debjani Ghosh, Tim Curtis
Argument 4
Capacity‑building, advocacy, and awareness‑raising are vital for policymakers to implement ethical AI
EXPLANATION
Tawfik shares a field example where UNESCO’s capacity‑building activities (community radios, early‑warning systems) transformed lives, illustrating the importance of advocacy and training for ethical AI adoption.
EVIDENCE
He describes a remote African village where UNESCO introduced community radios, later enabling mobile and internet connectivity, and early-warning systems for floods, showing how capacity-building leads to tangible impact [214-218] and underscores the need for advocacy and training [204-209].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s IFAP programme and other capacity-building initiatives are cited as essential for translating ethical guidelines into practice and raising awareness among policymakers [S27][S31].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Virginia Dignam, Paula Goldman, Debjani Ghosh, Maria Grazia
Debjani Ghosh
5 arguments · 164 words per minute · 1281 words · 466 seconds
Argument 1
No trade‑off between innovation and ethics; accountability must remain with humans, not delegated to technology
EXPLANATION
Debjani stresses that the real choice is how technology is used, not whether it is ethical, and that ultimate accountability lies with humans rather than being outsourced to algorithms.
EVIDENCE
She argues that the choice is between using technology for good (e.g., a cancer-free world) and using it for harmful purposes, and that accountability must stay with humans rather than being delegated to technology, noting that we cannot align everyone on the same ethical values [49-56] and that accountability rests on people [57-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO emphasizes that accountability rests with humans, not algorithms, and that safety and innovation are not mutually exclusive [S2][S36].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi, Brando Benifei, Maria Grazia
Argument 2
Embed ethical oversight throughout the AI lifecycle – design, development, sandbox testing – rather than as an afterthought
EXPLANATION
Debjani calls for ethics to be built into every stage of AI development, from design to commercialization, with checkpoints and sandbox testing to ensure compliance before deployment.
EVIDENCE
She describes the need for oversight at each stage of design and development, with sign-offs and sandbox testing to make ethics by design rather than an afterthought [65-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI sandbox frameworks call for continuous human oversight at every stage of development, aligning with calls for ethics-by-design throughout the lifecycle [S38][S37][S2].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Dr. Tawfik Jelassi, Paula Goldman, Virginia Dignam, Maria Grazia
Argument 3
AI Impact Commons showcases how developing countries solve local problems with AI, highlighting diverse use‑cases
EXPLANATION
Debjani presents the AI Impact Commons as a platform that collects impact stories from over 30 countries, demonstrating how AI is applied to address issues like malnutrition, farmer suicides, and climate resilience in developing contexts.
EVIDENCE
She mentions chairing a working group that produced the AI Impact Commons (aiimpactcommons.global) with stories from more than 30 countries solving problems from malnutrition to farmer suicides and climate shocks [166-171].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on AI’s role in developing economies illustrate diverse, locally-driven use cases, mirroring the AI Impact Commons portfolio [S39][S40].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
AGREED WITH
Tim Curtis, Brando Benifei, Virginia Dignam, Maria Grazia
Argument 4
Building capabilities in Tier‑2/3 cities democratizes AI design and fuels inclusive innovation
EXPLANATION
Debjani explains that initiatives like Startup India are focusing on Tier‑2 and Tier‑3 cities to develop AI talent, leading to higher startup growth outside major metros and promoting inclusive design.
EVIDENCE
She notes that programs such as Startup India are building capabilities in Tier-2/3 cities, resulting in higher startup growth in those areas compared to Tier-1 cities [292-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on India’s AI ecosystem shows that tier-2 and tier-3 cities are emerging hubs of innovation, expanding talent pools beyond traditional metros [S42][S43].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
AGREED WITH
Dr. Tawfik Jelassi, Virginia Dignam, Paula Goldman, Maria Grazia
Argument 5
Accountability ultimately rests with people; ethics cannot be outsourced to algorithms
EXPLANATION
Debjani reiterates that accountability for AI outcomes must remain with human actors, as technology itself cannot be held responsible, emphasizing the need for human governance.
EVIDENCE
She states that accountability comes back to humans and that delegating accountability to technology is not feasible at present [55-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s position stresses human accountability for AI outcomes, reinforcing that ethical responsibility cannot be delegated to technology [S2][S36].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
AGREED WITH
Dr. Tawfik Jelassi, Tim Curtis
Brando Benifei
4 arguments · 119 words per minute · 947 words · 476 seconds
Argument 1
Innovation and ethics can coexist; risk‑based regulation (EU AI Act) balances both and prevents harmful uses
EXPLANATION
Brando argues that innovation and ethics are not opposed; the EU AI Act’s risk‑based approach exemplifies how regulation can protect human rights while still allowing innovation to flourish.
EVIDENCE
He describes the EU AI Act’s risk-based framework, identifying high-risk sectors, ensuring data quality, cybersecurity, human control, and prohibiting certain uses such as predictive policing and emotional recognition [78-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act’s risk-based framework exemplifies how regulation can protect rights while fostering innovation, echoing EU Ethics Guidelines for Trustworthy AI [S35][S36][S1].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi, Debjani Ghosh, Maria Grazia
Argument 2
Risk‑based regulatory approach identifies high‑risk sectors, mandates transparency, and even bans certain applications
EXPLANATION
Brando details how the EU AI Act uses a risk‑based methodology to target specific sectors, enforce transparency, and outright ban applications deemed unacceptable, illustrating practical regulation of AI.
EVIDENCE
He lists examples of regulated sectors (workforce, healthcare, justice), requirements for quality data and governance, and bans on predictive policing, emotional recognition, and manipulative techniques [80-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The EU AI Act specifies high-risk sectors, transparency obligations and bans on applications such as predictive policing and emotion recognition [S35][S36].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
Argument 3
A worldwide cooperation framework is required for issues that transcend borders, such as military AI and existential risks
EXPLANATION
Brando stresses that certain AI challenges, like military applications and existential threats, cannot be addressed by national rules alone and need coordinated global governance.
EVIDENCE
He cites the need for global cooperation on military AI and existential risks, warning that lack of globally adopted rules puts humanity in danger [195-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for global north-south collaboration on AI governance, especially for military and existential risks, are highlighted in recent diplomatic discussions [S29][S28].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Tim Curtis, Virginia Dignam, Debjani Ghosh, Maria Grazia
Argument 4
Regulation must consider human‑rights challenges across contexts, ensuring no community is left behind
EXPLANATION
Brando argues that regulation should protect human rights and exclude harmful AI uses, ensuring equitable outcomes for all societies.
EVIDENCE
He emphasizes that regulation must identify human-rights challenges, balance innovation, and mentions prohibitions on specific high-risk uses as examples of protecting rights [79-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both UNESCO’s human-rights-focused recommendation and the EU’s AI Act stress the need for regulations that safeguard rights and avoid exclusionary outcomes [S26][S35][S36].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
Paula Goldman
4 arguments, 159 words per minute, 846 words, 318 seconds
Argument 1
Companies must turn ethical principles into practical controls, ensuring transparency, trust, and the ability to intervene when needed
EXPLANATION
Paula explains that Salesforce embeds ethical controls into its AI products, providing mechanisms for monitoring, escalation, and transparency so that customers can trust and manage AI outcomes.
EVIDENCE
She describes testing products, setting escalation points to humans, and answering customer questions about results, failures and responsibility, emphasizing practical, flexible solutions for different industries [141-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s guidance advocates embedding ethics into concrete controls and monitoring mechanisms throughout product development [S37]; sandbox experiences reinforce the need for such operational safeguards [S38].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Dr. Tawfik Jelassi, Debjani Ghosh, Virginia Dignam, Maria Grazia
Argument 2
Placing people at the centre of transformation drives adoption and ensures AI serves real work needs
EXPLANATION
Paula notes that successful AI scaling puts people at the centre, gathering user feedback on usefulness and integrating it into daily workflows, which leads to higher adoption and relevance.
EVIDENCE
She highlights that the most successful companies give employees a voice about AI usefulness, focus on real work processes, and keep people central to large-scale transformation [155-159].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human-centred AI approaches argue that placing users at the core of design increases relevance and uptake, a view echoed in UNESCO’s ethical AI framework [S37][S36].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
Argument 3
Product accessibility—supporting different languages, accents, and abilities—creates superior, inclusive AI solutions
EXPLANATION
Paula stresses that designing AI to handle diverse linguistic and accessibility needs (e.g., different English accents, deaf accents) not only promotes inclusion but also improves overall product performance.
EVIDENCE
She cites the need for AI voice capabilities to work across vernaculars, accents and for deaf users, arguing that inclusive design leads to higher accuracy and commercial success [221-227].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI for Good initiatives stress the importance of designing for linguistic and accessibility diversity to improve performance and inclusion [S41].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
Argument 4
Companies invest in upskilling staff to manage AI agents, escalation protocols, and ethical decision‑making
EXPLANATION
Paula describes how Salesforce trains its workforce to understand AI behavior, set escalation protocols, and handle ethical dilemmas, highlighting the importance of continuous skill development.
EVIDENCE
She mentions that staff are taught when AI should escalate to humans, how to monitor outcomes, and how to address ethical issues, reflecting an upskilling effort [140-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building programmes that train staff on AI governance, escalation and ethical decision-making are highlighted as best practices by UNESCO and related capacity-building projects [S27][S31].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
AGREED WITH
Dr. Tawfik Jelassi, Virginia Dignam, Debjani Ghosh, Maria Grazia
Virginia Dignam
4 arguments, 146 words per minute, 1372 words, 562 seconds
Argument 1
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
EXPLANATION
Virginia argues that AI innovation rooted in diverse cultural philosophies, such as the African Ubuntu tradition, would produce fundamentally different and potentially more inclusive AI systems compared to the dominant Cartesian, individualistic approach.
EVIDENCE
She contrasts the Western Cartesian tradition (“I think, therefore I am”) with the Ubuntu perspective (“We are, therefore I am”), suggesting that AI built on Ubuntu would differ markedly [106-112].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
AGREED WITH
Tim Curtis, Brando Benifei, Debjani Ghosh, Maria Grazia
Argument 2
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
EXPLANATION
Virginia calls for engineering curricula that integrate humanities and social sciences, enabling engineers to question the purpose, beneficiaries, and trade‑offs of AI solutions rather than focusing solely on technical aspects.
EVIDENCE
She stresses the need for engineers to ask why a problem exists, who gains or loses, and to combine technical skills with humanities to be precise about AI’s impact [124-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s calls for integrating humanities and social sciences into engineering curricula aim to foster critical questioning of AI’s societal impact [S37][S34].
MAJOR DISCUSSION POINT
Operationalising AI Ethics: From Principles to Practice
AGREED WITH
Dr. Tawfik Jelassi, Paula Goldman, Debjani Ghosh, Maria Grazia
Argument 3
Collective intelligence and inclusive participation are essential for truly human‑centred AI
EXPLANATION
Virginia proposes that AI should amplify collective intelligence, bringing together diverse skills and perspectives, rather than being a monolithic tool, thereby ensuring AI serves humanity as a shared resource.
EVIDENCE
She describes AGI as a collective intelligence that emerges when people collaborate, likening it to a bouquet of diverse flowers rather than a single colour of Play-Doh, and stresses the need for inclusive participation [236-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IFAP’s emphasis on inclusive, equitable societies underscores the role of collective intelligence and broad stakeholder participation in shaping human-centred AI [S27].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
Argument 4
Engineers need training that integrates humanities and social sciences to ask critical “why” questions
EXPLANATION
Virginia reiterates that engineering education must blend technical expertise with humanities to foster critical thinking about AI’s societal impact, ensuring responsible development.
EVIDENCE
She again emphasizes the importance of asking why a problem matters, who benefits, and integrating humanities into engineering practice [124-130].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
Maria Grazia
4 arguments, 164 words per minute, 1794 words, 655 seconds
Argument 1
The session title is challenged to highlight that constraints need not stifle productivity
EXPLANATION
Maria questions the premise that innovation and ethics are at odds, arguing that appropriate frameworks can actually enhance productivity rather than hinder it.
EVIDENCE
She explicitly states she will challenge the title and argues that constraints or frameworks do not necessarily hinder innovation or productivity [13-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses argue that safety and innovation are not mutually exclusive, and that appropriate frameworks can actually boost productivity [S36].
MAJOR DISCUSSION POINT
Innovation, Ethics, and Regulation are Not Mutually Exclusive
AGREED WITH
Tim Curtis, Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei
Argument 2
Multilateral dialogue helps align diverse regulatory approaches and share best practices
EXPLANATION
Maria underscores the importance of multilateral platforms, such as UNESCO’s global perspective, for harmonising AI regulations and fostering cooperation among nations.
EVIDENCE
She references the need for a multilateral setting to discuss regulation and align approaches, noting UNESCO’s global perspective and the role of dialogue [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
North-South diplomatic initiatives and UNESCO’s global cooperation efforts illustrate the value of multilateral dialogue for harmonising AI regulation [S29][S28].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
AGREED WITH
Tim Curtis, Brando Benifei, Virginia Dignam, Debjani Ghosh
Argument 3
UNESCO’s mission links education, culture, and communication to embed humanity in AI deployment
EXPLANATION
Maria points out that UNESCO’s historic mission of building peace through education, culture and communication provides a foundation for placing humanity at the centre of AI initiatives.
EVIDENCE
She cites UNESCO’s original mission to build peace via education, culture, science and communication, and connects this to the need for human-centred AI [203-204].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UNESCO’s 2021 AI ethics recommendation, rooted in its broader mission of education, culture and communication, provides the normative basis for human-centred AI [S26][S28].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
Argument 4
Ongoing capacity development for policymakers and practitioners is essential for responsible AI deployment
EXPLANATION
Maria stresses that translating ethical principles into practice requires continuous capacity‑building for both policymakers and implementers, ensuring effective and responsible AI use.
EVIDENCE
She mentions the challenge of moving from principles to practice and the need for capacity development, referencing UNESCO’s global perspective and the importance of multilateral dialogue [33-36] and [186-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building initiatives for policymakers are identified as crucial for moving from principles to practice in AI governance [S27][S31][S16].
MAJOR DISCUSSION POINT
Human‑Centred AI and Accountability (“Humanity in the Loop”)
AGREED WITH
Dr. Tawfik Jelassi, Virginia Dignam, Paula Goldman, Debjani Ghosh
Rita Soni
1 argument, 161 words per minute, 167 words, 62 seconds
Argument 1
Developers from regions facing power cuts, poor infrastructure, etc., must be included in design processes
EXPLANATION
Rita asks how to ensure that AI developers who experience real‑world challenges like power outages are involved in creating solutions, advocating for inclusive design that reflects diverse lived experiences.
EVIDENCE
She references the earlier comment about developers never experiencing power cuts, and asks how to involve those affected in the design of AI systems [278-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge-AI solutions designed for low-connectivity environments demonstrate the importance of involving developers who understand such constraints, reinforcing inclusive design principles [S41][S28].
MAJOR DISCUSSION POINT
Inclusive and Diverse Perspectives in AI Development
Audience
2 arguments, 101 words per minute, 45 words, 26 seconds
Argument 1
Defining AI policy involves tools, skills, and impact assessment beyond mere technology design
EXPLANATION
The audience member asks for a definition of AI policy, and Virginia responds that it concerns the tools, capabilities, skills and impact assessment needed to address AI’s societal effects, not just the technology itself.
EVIDENCE
The audience member asks the question about AI policy [263]; Virginia answers that AI policy is about tools, capabilities, skills and impact assessment rather than technology design [264-269].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Policy Research Roadmap stresses that effective AI policy must address tools, capabilities, skills and impact assessment, not just technical design [S16]; UNESCO’s broader policy perspective aligns with this view [S37].
MAJOR DISCUSSION POINT
Global Cooperation and Multilateral Governance
Argument 2
Global policy frameworks should support continuous learning and skill development to keep pace with AI advances
EXPLANATION
The audience highlights the need for policy frameworks that facilitate ongoing education and upskilling so societies can adapt to rapid AI developments.
EVIDENCE
Following the earlier discussion on AI policy, the audience emphasizes that policy must enable continuous learning and skill development to stay current with AI advances (derived from the same exchange) [263] and [264-269].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Continuous learning and upskilling are highlighted as essential components of AI policy frameworks to stay abreast of rapid technological change [S16][S27].
MAJOR DISCUSSION POINT
Education, Skills, and Upskilling for Ethical AI
Agreements
Agreement Points
No trade‑off between innovation and ethics; ethical frameworks do not hinder innovation but can support it.
Speakers: Tim Curtis, Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei, Maria Grazia
UNESCO’s stance: ethical frameworks do not hinder innovation; they guide AI to serve humanity
Innovation and ethics reinforce each other; ethical design leads to more trusted, widely adopted AI
No trade‑off between innovation and ethics; accountability must remain with humans, not delegated to technology
Innovation and ethics can coexist; risk‑based regulation (EU AI Act) balances both and prevents harmful uses
The session title is challenged to highlight that constraints need not stifle productivity
All speakers affirm that innovation and ethics are compatible and that ethical guidelines or regulation need not impede technological progress; instead they can enhance trust and productivity [2][38-40][49-56][74-77][13-20].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with discussions that regulation and ethics can act as enablers rather than barriers, as highlighted in WS #438 where speakers challenged the narrative that regulation stifles innovation and promoted flexible principle-based approaches [S56]; it is also echoed in perspectives treating ethics and regulation as experimental artifacts that can be refined over time [S58].
Ethics must be embedded throughout the AI lifecycle and translated into concrete practices.
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Paula Goldman, Virginia Dignam, Maria Grazia
UNESCO Recommendation provides a global set of principles that need concrete translation into actions
Embed ethical oversight throughout the AI lifecycle – design, development, sandbox testing – rather than as an afterthought
Companies must turn ethical principles into practical controls, ensuring transparency, trust, and the ability to intervene when needed
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
Multilateral dialogue helps align diverse regulatory approaches and share best practices
Speakers concur that high-level ethical principles must be operationalised at each stage of AI development, with oversight, testing, practical controls, and interdisciplinary education to make them effective [40-41][65-69][141-149][124-130][186-188].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for lifecycle-wide ethics is central to the “Ethics-by-Design” approach presented in WS #45 and reinforced by UNESCO’s Recommendation on the Ethics of Artificial Intelligence, which calls for human-centred principles throughout development [S65][S67]; implementation challenges in commercial settings further underscore this requirement [S66].
Human accountability remains central; technology itself cannot be held responsible.
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Tim Curtis
Human oversight, non‑discrimination, cultural respect, and environmental sustainability are core to ethical AI
Accountability ultimately rests with people; ethics cannot be outsourced to algorithms
UNESCO believes ethical AI deployment should be human‑centred
All agree that ultimate responsibility for AI outcomes lies with people, not the algorithms, emphasizing human oversight and accountability [40-41][55-60][2].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of automated systems emphasize that formal accountability stays with humans, even when decision-making is delegated to AI, confirming that responsibility remains human-centric and extends to platforms distributing the technology [S68][S69].
Capacity development, education and upskilling are essential for responsible AI.
Speakers: Dr. Tawfik Jelassi, Virginia Dignam, Paula Goldman, Debjani Ghosh, Maria Grazia
Capacity‑building, advocacy, and awareness‑raising are vital for policymakers to implement ethical AI
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
Companies invest in upskilling staff to manage AI agents, escalation protocols, and ethical decision‑making
Building capabilities in Tier‑2/3 cities democratizes AI design and fuels inclusive innovation
Ongoing capacity development for policymakers and practitioners is essential for responsible AI deployment
There is broad consensus that continuous training, education and capacity-building, both for policymakers and industry practitioners, are crucial to embed ethics in AI practice [214-218][124-130][140-148][292-300][186-188].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums stress capacity building as foundational for AI governance, citing the need for local data infrastructure and training in the Global South [S60], online education and stakeholder engagement for responsible deployment [S76], and broader calls for capacity development across all actors [S77].
Global/multilateral cooperation and inclusive standards, especially involving the Global South, are needed for AI governance.
Speakers: Tim Curtis, Brando Benifei, Virginia Dignam, Debjani Ghosh, Maria Grazia
UNESCO’s partnership with India exemplifies the need for South‑South collaboration and inclusive global standards
A worldwide cooperation framework is required for issues that transcend borders, such as military AI and existential risks
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
AI Impact Commons showcases how developing countries solve local problems with AI, highlighting diverse use‑cases
Multilateral dialogue helps align diverse regulatory approaches and share best practices
Speakers highlight the importance of inclusive, multilateral approaches that bring together diverse cultural perspectives and the Global South to shape AI standards and address cross-border challenges [2][191-199][106-112][166-171][186-188].
POLICY CONTEXT (KNOWLEDGE BASE)
Panels on integrating the Global South highlight technology gaps and the necessity of international cooperation and inclusive standards to enable AI innovation worldwide [S60]; this is reinforced by calls for multilateral frameworks despite geopolitical tensions [S61] and by the broader argument for international standards as a foundation for AI governance [S62].
Similar Viewpoints
Both stress that ethical considerations must be integrated from the outset and throughout the AI development process, turning high‑level principles into concrete, lifecycle‑wide actions [38-40][40-41][65-69].
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh
Innovation and ethics reinforce each other; ethical design leads to more trusted, widely adopted AI
UNESCO Recommendation provides a global set of principles that need concrete translation into actions
Embed ethical oversight throughout the AI lifecycle – design, development, sandbox testing – rather than as an afterthought
Both underline that inclusive, culturally aware design and interdisciplinary education are key to building AI systems that are both effective and socially responsible [221-227][106-112][124-130].
Speakers: Paula Goldman, Virginia Dignam
Product accessibility—supporting different languages, accents, and abilities—creates superior, inclusive AI solutions
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
Education must equip engineers with tools to ask “why” and assess impacts, moving beyond vague AI buzzwords
Both argue that regulatory or ethical frameworks can coexist with, and even support, innovation rather than impede it [2][74-77].
Speakers: Tim Curtis, Brando Benifei
UNESCO’s stance: ethical frameworks do not hinder innovation; they guide AI to serve humanity
Innovation and ethics can coexist; risk‑based regulation (EU AI Act) balances both and prevents harmful uses
Both emphasize the necessity of global, multilateral cooperation to address AI challenges that go beyond national jurisdictions [186-188][191-199].
Speakers: Maria Grazia, Brando Benifei
Multilateral dialogue helps align diverse regulatory approaches and share best practices
A worldwide cooperation framework is required for issues that transcend borders, such as military AI and existential risks
Unexpected Consensus
Inclusive design not only serves social goals but also yields superior, more commercially successful AI products.
Speakers: Paula Goldman, Virginia Dignam
Product accessibility—supporting different languages, accents, and abilities—creates superior, inclusive AI solutions
Innovation should draw on varied cultural traditions (e.g., Ubuntu) rather than a single Western paradigm
While industry often focuses on profitability, both speakers converge on the view that designing AI to accommodate diverse linguistic, cultural and accessibility needs improves overall performance and market success, a point not explicitly linked before the discussion [221-227][106-112].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on inclusive technology design demonstrates that such approaches benefit society and improve market performance, supporting the claim that inclusive design drives commercial success [S64]; the Ethics-by-Design discourse further links inclusive practices to better product outcomes [S65].
Overall Assessment

The panel displayed a high degree of consensus across multiple dimensions: (i) innovation and ethics are compatible; (ii) ethical principles must be operationalised throughout the AI lifecycle; (iii) human accountability is paramount; (iv) capacity building and education are essential; (v) global, inclusive cooperation is required, especially involving the Global South.

Strong consensus – most speakers reiterated overlapping arguments, indicating a shared understanding that ethical, human‑centred AI can coexist with innovation when supported by concrete practices, capacity development and multilateral frameworks. This consensus provides a solid foundation for coordinated policy actions and collaborative initiatives in AI governance.

Differences
Different Viewpoints
Timing and method of integrating ethics into AI development – whether ethics should be built in from the design stage (ex‑ante) or addressed later as an afterthought, and whether ethical frameworks act as constraints or enablers.
Speakers: Maria Grazia, Dr. Tawfik Jelassi, Debjani Ghosh, Brando Benifei, Virginia Dignam
Maria challenges the session title, raising the question of whether frameworks might hinder innovation [13-20]. Tawfik argues that ethics and innovation reinforce each other and that AI must be ethical by design, not after the fact [38-40]. Debjani stresses that oversight must be built into every stage of the AI lifecycle, turning ethics into a design principle rather than an afterthought [65-69]. Brando promotes a risk-based regulatory approach that should be applied proactively to avoid irreversible harms, implying early governance rather than purely post-deployment checks [78-84]. Virginia criticises the current “hammer” view of innovation and says ethics is often treated as a finger that merely points, not as an integral part of design, calling for a broader toolbox [100-115][117-123].
Speakers diverge on whether ethical considerations are a necessary early design constraint or a later regulatory add‑on, with Maria fearing possible hindrance, Tawfik, Debjani and Brando advocating ex‑ante integration, and Virginia questioning the prevailing simplistic view of both innovation and ethics.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors WS #438’s contrast between early-stage influence and later enforcement of ethical guidelines [S56]; it is also reflected in discussions that treat ethics and regulation as experimental, adaptable tools rather than fixed constraints [S58], and in the Ethics-by-Design narrative that stresses ex-ante integration [S65].
Scope and nature of regulation – risk‑based bans and proactive rules versus a more flexible, minimal‑intervention stance that avoids stifling innovation.
Speakers: Maria Grazia, Brando Benifei, Virginia Dignam, Debjani Ghosh
Maria notes that regulation might impede productivity and questions whether we should decide what technology should do for us, focusing instead on what not to allow [88-90]. Brando outlines the EU AI Act’s risk-based framework, including bans on predictive policing and emotion-recognition, arguing that such regulation is essential to protect human rights while still fostering innovation [78-84]. Virginia argues that regulation is often reduced to a prohibitive “finger” and should not be seen only as a tool for prohibition, calling for a more nuanced, experimental approach [117-123]. Debjani points out that regulation is usually an afterthought and must be fundamentally changed to be embedded throughout the development process [62-64].
While all participants agree regulation is needed, they disagree on how extensive it should be: Maria worries about over‑regulation, Brando defends a strong risk‑based regime with explicit bans, Virginia warns against viewing regulation merely as prohibition, and Debjani calls for integrating regulation early rather than as a post‑hoc fix.
POLICY CONTEXT (KNOWLEDGE BASE)
This tension is evident in WS #438’s comparison of flexible principle-based regulation versus binding law [S56]; similar viewpoints are expressed in India’s advocacy for minimal regulation to preserve innovation [S71]; and in calls for proportionate, risk-based regulatory frameworks that balance harms with innovation benefits [S73].
Definition of innovation – whether innovation is merely the application of new tools (the “hammer” metaphor) or a deeper, culturally‑informed challenge that goes beyond single‑purpose technologies.
Speakers: Virginia Dignam, Maria Grazia, Brando Benifei, Debjani Ghosh
Virginia critiques the reduction of innovation to using a hammer for any nail, urging a broader toolbox and cultural diversity in AI design [100-115]. Maria, while not directly defining innovation, challenges the notion that constraints necessarily hinder it, implying a more expansive view of innovation’s drivers [13-20]. Brando emphasizes that innovation must coexist with ethical safeguards, suggesting that innovation is not just tool-use but must respect human-rights challenges [78-84]. Debjani stresses that innovation should be directed toward solving real human problems (e.g., cancer-free world) rather than being an abstract pursuit [48-51].
The panelists differ on what counts as genuine innovation: Virginia calls for culturally‑rooted, problem‑oriented creativity, whereas other speakers treat innovation more generally as technological progress that must be balanced with ethics.
POLICY CONTEXT (KNOWLEDGE BASE)
The discussion resonates with analogies like the child-seat example used to argue that safety (ethical) measures do not impede innovation, illustrating a broader conception of innovation beyond mere tool deployment [S57]; similar themes appear in broader innovation-vs-regulation debates [S71].
Unexpected Differences
Regulation as prohibition versus regulation as an enabling, experimental tool
Speakers: Brando Benifei, Virginia Dignam
Brando presents regulation (EU AI Act) as a necessary, risk-based framework that includes explicit bans on high-risk uses to protect rights [78-84]. Virginia repeatedly describes regulation as a “finger” that merely points and a series of prohibitions, arguing that this view is too narrow and should be replaced by a more experimental, toolbox-oriented approach [117-123].
Both speakers support regulation but clash on its character: Brando sees bans as essential safeguards, whereas Virginia warns that viewing regulation solely as prohibition limits innovation and fails to capture its potential as a flexible, experimental instrument.
POLICY CONTEXT (KNOWLEDGE BASE)
Panels such as WS #438 and Ethical AI sessions frame regulation not as a prohibitive barrier but as an experimental, enabling artifact that can be iteratively refined to support responsible AI development [S56][S58].
Perception of AI as an ‘empty signifier’ versus AI as a concrete, problem‑solving tool
Speakers: Virginia Dignam, Debjani Ghosh
Virginia claims AI is currently an empty signifier, a vague term that needs precise definition and grounding in collective intelligence [124-133]. Debjani points to concrete impact stories from the AI Impact Commons, showing AI already solving specific problems in developing countries (malnutrition, pharma suicides, climate resilience) [166-171].
Virginia’s abstract critique of AI’s conceptual fuzziness contrasts with Debjani’s presentation of tangible AI applications, revealing an unexpected tension between viewing AI as a nebulous concept versus a set of real‑world solutions.
Overall Assessment

The panel exhibits broad consensus that ethical, human‑centred AI is essential and that innovation should not be sacrificed. However, substantial disagreement persists on how to operationalise ethics – whether through early design integration, risk‑based regulation, or broader cultural re‑thinking – and on the appropriate scope of regulation, with some advocating strong, pre‑emptive bans and others warning against over‑regulation. These divergences reflect differing institutional lenses (UNESCO policy, EU law, corporate practice, academic critique) and suggest that achieving coordinated global governance will require reconciling ex‑ante design mandates with flexible, context‑sensitive regulatory models.

The level of disagreement is moderate to high. While the overarching goal of trustworthy, human‑centred AI is shared, the lack of alignment on timing, mechanisms, and the philosophical framing of innovation and regulation could impede the formulation of cohesive policies and slow the translation of ethical principles into practice.

Partial Agreements
The speakers share the goal of ethical, trustworthy AI, yet propose different pathways – international normative guidance, internal governance processes, corporate product‑level controls, and statutory risk‑based regulation – to achieve it [38-40][65-69][141-149][78-84].
Speakers: Dr. Tawfik Jelassi, Debjani Ghosh, Paula Goldman, Brando Benifei
All agree that AI must be trustworthy and human-centred, but differ on implementation: Tawfik stresses UNESCO’s global recommendation as the guiding framework [38-40]; Debjani calls for lifecycle oversight with flag-offs and sandbox testing [65-69]; Paula describes concrete product controls, escalation points and user-feedback loops within Salesforce [141-149]; Brando outlines a risk-based legal regime with sector-specific requirements and bans [78-84].
While agreeing on the importance of inclusive capacity development, they differ on the primary mechanism – multilateral policy forums, national impact‑story platforms, or curriculum reform – to embed humanity in AI.
Speakers: Maria Grazia, Debjani Ghosh, Virginia Dignam
All emphasize the need for inclusive capacity building: Maria calls for multilateral dialogue and capacity development for policymakers [33-37][186-188]; Debjani highlights AI Impact Commons and tier-2/3 city initiatives to democratise AI design [166-171][292-300]; Virginia stresses education that blends engineering with humanities to ask ‘why’ and incorporate diverse cultural perspectives [124-130][106-112].
Takeaways
Key takeaways
Innovation, ethics and regulation are not mutually exclusive; ethical design can enhance trust and adoption of AI.
UNESCO’s 2021 Recommendation provides global principles (human rights, dignity, freedoms) that must be translated into concrete actions across the AI lifecycle.
Human‑centred AI requires accountability to remain with people, not delegated to algorithms; oversight should be built in from design through deployment.
Risk‑based regulatory approaches (e.g., EU AI Act) can balance innovation with protection by identifying high‑risk sectors and prohibiting harmful uses.
Global cooperation and multilateral governance are essential for cross‑border challenges such as military AI and existential risks.
Inclusive and culturally diverse perspectives (e.g., Ubuntu tradition, developers from low‑resource settings) enrich AI innovation and avoid a single Western paradigm.
Education and up‑skilling must integrate technical, humanities, and social‑science knowledge so engineers can ask “why” and assess impact.
Practical industry examples (Salesforce) show that embedding transparency, escalation mechanisms, and accessibility leads to superior, market‑ready AI.
Resolutions and action items
Encourage member states to continue operationalising UNESCO’s AI ethics recommendation through national frameworks and capacity‑building programmes.
Develop and promote the AI Impact Commons platform to share impact stories and best practices, especially from developing countries.
Adopt an ‘ethics‑by‑design’ lifecycle model that includes ethical checkpoints, sandbox testing, and transparent documentation before commercial release.
Support the creation of risk‑based regulatory sandboxes that allow innovation while ensuring high‑risk applications are monitored or prohibited.
Invest in up‑skilling programmes for engineers, data scientists and policymakers that blend technical training with humanities and social‑science perspectives.
Facilitate multilateral dialogue (UNESCO‑EU‑India) to align standards on prohibited AI uses (e.g., predictive policing, emotion‑recognition in workplaces).
Unresolved issues
How to achieve global consensus on a common set of ethical values when cultural, political and economic contexts differ markedly.
Specific mechanisms for translating high‑level UNESCO principles into enforceable national or sectoral regulations remain unclear.
Ways to systematically include developers from under‑served regions (e.g., those experiencing power cuts) in AI design processes were raised but not detailed.
The definition and scope of “AI policy”, and how it should be differentiated from technical standards, need further clarification.
Methods for monitoring compliance with ethical checkpoints and for assigning accountability when harms occur were not fully resolved.
Suggested compromises
Adopt a risk‑based regulatory framework that bans clearly harmful applications while allowing lower‑risk innovation to proceed under oversight.
Implement flexible, context‑specific ethical guidelines rather than a rigid one‑size‑fits‑all approach, enabling adaptation to local realities.
Combine regulatory requirements with industry self‑governance (e.g., internal ethics boards, sandbox testing) to reduce compliance burden while maintaining safeguards.
Promote inclusive design toolkits that provide a variety of “tools” (not just a single “hammer”) to accommodate diverse cultural and technical needs.
Thought Provoking Comments
I’m challenging the very title of this meeting, that is *Balancing Innovation and Ethics in the Age of AI*. Innovation and ethics are not a trade‑off; they can reinforce each other, just as regulation in pharma has not hindered innovation.
She reframes the central premise of the panel, turning a presumed dichotomy into a question about how the two can be synergistic. By invoking the pharma analogy she introduces a concrete counter‑example that many participants had not considered.
Her challenge prompted the first round of responses that explicitly addressed the relationship between innovation, ethics and regulation. It set the tone for a constructive rather than adversarial debate and opened space for speakers to discuss operationalising ethics rather than viewing it as a barrier.
Speaker: Maria Grazia
I don’t see a contradiction between ethics and innovation; I see it between innovation and regulation. Ethical design makes AI more trustworthy and therefore more widely adopted – ethics and innovation reinforce each other.
He shifts the focus from a perceived conflict to a complementary relationship, emphasizing ‘ethics‑by‑design’ and the need for early‑stage human‑centred safeguards.
His point directly answered Maria’s challenge and provided a framework (ethical design ex‑ante) that other panelists referenced. It steered the conversation toward practical integration of ethics in the development lifecycle.
Speaker: Dr. Tawfik Jelassi
The real choice is not between innovation and ethics, but between using technology to make everyone cancer‑free, fed and dignified, or using it to create conflict and weapons. We cannot align every human on the same ethical values, so accountability must stay with people, not the technology.
She reframes the debate from abstract principle to concrete societal outcomes and highlights the limits of universal ethical consensus, stressing human accountability.
Her framing broadened the discussion from technical guidelines to societal purpose, prompting Brando Benifei to discuss risk‑based regulation and Virginia Dignam to question the very notion of ‘innovation’ as a hammer‑and‑nail metaphor.
Speaker: Debjani Ghosh
Innovation is more than just using the latest hammer (e.g., generative AI) to nail any problem. We need diverse epistemologies – imagine AI built on the African Ubuntu tradition ‘we are, therefore I am’ rather than the Western Cartesian ‘I think, therefore I am’. Ethics and regulation should be experimental tools, not immutable commandments.
She introduces cultural pluralism into AI design, challenges the dominant Western epistemology, and reconceptualises ethics and regulation as iterative, experimental processes rather than static rules.
Her cultural critique sparked a shift toward discussing inclusivity and education. It led Paula Goldman to stress practical, inclusive product design, and Debjani to highlight the ‘luxury’ of developed‑country perspectives versus the needs of developing nations.
Speaker: Virginia Dignam
Regulation should not be an afterthought. Oversight must be built into every stage of development, with ‘red‑tape’ checkpoints and sandbox testing, so ethics becomes by‑design rather than a post‑hoc fix.
She provides a concrete procedural roadmap for embedding ethics, moving the conversation from abstract principles to actionable governance mechanisms.
This concrete suggestion influenced Brando Benifei’s description of the EU AI Act’s risk‑based approach and reinforced Paula Goldman’s emphasis on iterative testing and human escalation points in AI systems.
Speaker: Debjani Ghosh
The EU’s risk‑based AI Act shows that we can prohibit certain high‑risk uses (e.g., predictive policing, emotion recognition at work) while still fostering innovation elsewhere. Transparency is crucial for trust, especially in democratic societies.
He brings a concrete policy example that balances prohibition with innovation, illustrating how regulation can be selective rather than blanket, and underscores the role of trust.
His example gave the panel a real‑world reference point, prompting Maria Grazia to ask about the role of multilateral cooperation and leading Dr. Jelassi to discuss global peace‑building dimensions of AI.
Speaker: Brando Benifei
Education must bridge the gap between engineers and humanities. Engineers need to ask ‘why is this a problem, who benefits, who loses?’ and the humanities must help make AI a precise, non‑magical term.
She identifies the root cause of ethical lapses as disciplinary silos and proposes interdisciplinary education as the remedy, moving the debate from policy to capacity‑building.
This comment deepened the conversation about skill development, influencing later remarks by Paula Goldman on practical training and by Debjani on up‑skilling in Tier‑2/3 Indian cities.
Speaker: Virginia Dignam
At UNESCO we built community radios in a remote African village, then telecoms followed, then internet and early‑warning systems. That shows how technology, when placed at the centre of people’s lives, can transform societies.
He provides a vivid, ground‑level case study that illustrates the ‘human‑in‑the‑loop’ principle in action, moving the discussion from theory to tangible impact.
The anecdote reinforced the panel’s emphasis on human‑centred deployment and inspired other speakers (e.g., Paula Goldman) to talk about inclusive product design and real‑world testing.
Speaker: Dr. Tawfik Jelassi
AI agents that correct accessibility issues in real time (e.g., fixing broken UI for a deaf user) demonstrate that inclusive design is not a cost but a commercial advantage – the more inclusive the product, the more successful it is.
She links ethical design directly to business value, countering the myth that inclusion is a financial burden and providing a concrete example of ethical AI in practice.
Her point shifted the conversation toward the business case for ethics, prompting Maria Grazia to highlight that inclusion improves performance, and reinforcing the earlier claim that ethics and innovation are mutually supportive.
Speaker: Paula Goldman
We must democratise not just access to AI but also its design. Developers in Tier‑2/3 cities face power cuts and infrastructure challenges; we need to bring those lived experiences into AI creation to ensure relevance and fairness.
She raises a practical equity issue – the inclusion of under‑represented developers – that had not been explicitly addressed, linking back to earlier cultural critiques.
Her question prompted Debjani Ghosh to discuss initiatives like Startup India and the AI Impact Commons, highlighting concrete steps to broaden participation in AI development.
Speaker: Rita Soni (audience)
Overall Assessment

The discussion began with a theoretical framing of ‘balancing innovation and ethics.’ Maria Grazia’s challenge to this framing acted as a catalyst, prompting speakers to reconceptualise the relationship as synergistic rather than antagonistic. Dr. Jelassi’s ‘ethics‑by‑design’ stance, Debjani’s focus on societal outcomes, Brando’s concrete EU policy example, and Virginia’s cultural‑pluralism critique each introduced new dimensions—operational mechanisms, global governance, and epistemic diversity—that redirected the conversation from abstract principles to actionable pathways. Paula’s industry‑level illustration that inclusive design drives commercial success reinforced the emerging consensus that ethics fuels innovation. Audience input from Rita highlighted the need for inclusive developer participation, closing the loop on the panel’s theme of ‘humanity in the loop.’ Collectively, these pivotal comments shifted the tone from a high‑level debate to a concrete, multi‑stakeholder roadmap, underscoring that ethical AI is achievable through early‑stage design, inclusive education, targeted regulation, and global cooperation.

Follow-up Questions
What are the biggest gaps between UNESCO’s AI ethics principles and their implementation on the ground?
Identifying practical barriers is essential to move from high‑level recommendations to actionable policies and practices.
Speaker: Maria Grazia (moderator) / Dr. Tawfik Jelassi
What mechanisms can effectively embed ethical reflection into the everyday operations of companies and sectors?
Concrete frameworks are needed so that ethical considerations become routine rather than an after‑thought in product development.
Speaker: Maria Grazia (moderator) / Debjani Ghosh
How should “human oversight” and redress mechanisms be defined and operationalised within AI regulation?
Clear guidance on oversight is required to ensure AI systems respect human rights while remaining innovative.
Speaker: Maria Grazia (moderator) / Brando Benifei
How can we prevent people from being merely consumers of AI and instead empower them to shape and direct the technology?
Education and participatory approaches are needed so that citizens actively influence AI development rather than passively receive it.
Speaker: Maria Grazia (moderator) / Virginia Dignam
What concrete models or tools can translate AI ethics principles into actionable practices for companies?
Businesses need practical, scalable solutions to embed ethics into product design, testing, and deployment.
Speaker: Maria Grazia (moderator) / Paula Goldman
What exactly constitutes an “AI policy” and how does it differ from technical design or regulation?
A clear definition helps policymakers, companies, and educators align on the scope and objectives of AI governance.
Speaker: Rajan (audience) / Virginia Dignam
How can developers from low‑resource contexts (e.g., areas with frequent power cuts) be included in AI design and development?
Inclusive design requires democratizing access to AI development tools and training for people who experience the challenges AI aims to solve.
Speaker: Rita Soni (audience) / Debjani Ghosh
What global cooperation frameworks are needed to address high‑risk AI applications such as military use and existential threats?
International research and policy coordination are crucial to prevent unregulated deployment of potentially dangerous AI systems.
Speaker: Brando Benifei
How can the terminology around AI be clarified, given that it is often used as an “empty signifier”?
Research into precise definitions will improve public discourse, policy drafting, and interdisciplinary collaboration.
Speaker: Virginia Dignam
What are the measurable impacts of AI projects in developing countries, and how can successful models be scaled?
Studying impact stories (e.g., via the AI Impact Commons) will provide evidence for effective AI interventions and guide replication.
Speaker: Debjani Ghosh
What evidence exists on the effectiveness of risk‑based AI regulation (e.g., the EU AI Act) in balancing innovation and ethics?
Empirical research is needed to assess whether risk‑based approaches achieve intended safety outcomes without stifling innovation.
Speaker: Brando Benifei (implied)
How does collective intelligence compare to the concept of AGI, and what research is needed to understand their relationship?
Exploring collective intelligence as a practical alternative to speculative AGI can inform more realistic AI governance strategies.
Speaker: Virginia Dignam

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.