Leaders TalkX: Moral pixels: painting an ethical landscape in the information society

9 Jul 2025 16:45h - 17:15h

Session at a glance

Summary

The discussion centered on ethical considerations and human rights in the digital age, particularly focusing on artificial intelligence and emerging technologies as part of the WSIS Action Line framework. The session featured ministers and experts from Belgium, Libya, Cuba, Kenya, Poland, and other countries sharing their national approaches to digital ethics and AI governance. Belgium’s Minister Vanessa Matz emphasized the importance of digital inclusion and accessibility, highlighting their AI ecosystem that brings together public and private actors to ensure ethical AI deployment with transparent governance. Libya’s Minister Abdul Baset Albaour cautioned against delegating ethical decision-making to machines, noting that while humans make decisions based on emotion, experience, and wisdom, AI relies solely on algorithms and data, making it unreliable for ethical choices.


Cuba’s representative outlined their comprehensive approach to digital citizenship education through 642 technology centers that have trained over 5 million people, emphasizing ethical, safe, and innovative use of digital technologies. Kenya’s Stephen Isaboke discussed the balance between protecting freedom of expression and access to information while preventing hate speech and misinformation, particularly among youth using platforms like TikTok and X. Poland’s Jacek Oko proposed using AI as an educational tool to teach people about AI risks, advocating for AI-powered personalized learning assistants to help vulnerable populations understand digital literacy and identify manipulated content.


Professor Salma Abbasi identified six key ethical concerns including misinformation proliferation, algorithmic bias, privacy surveillance, behavioral manipulation, declining critical thinking, and loss of cultural nuances in AI systems. She recommended transparency in AI design, robust human oversight, and accountability frameworks with consequences for failures. The panelists collectively emphasized that addressing AI ethics requires international cooperation, investment in education, transparent governance frameworks, and ensuring that technology serves humanity while respecting cultural values and human dignity.


Key points

**Major Discussion Points:**


– **Balancing AI ethics with innovation and rights**: Multiple speakers addressed the challenge of ensuring ethical AI development while maintaining freedom of expression, access to information, and innovation capacity. Kenya’s representative specifically discussed the “creative tension” between media freedom and ethical regulation.


– **Human-centered AI development and decision-making**: Libya’s minister emphasized that humans and machines make decisions differently – humans use emotion, experience, and wisdom while AI relies on algorithms and data. The consensus was that ethical decision-making should not be fully delegated to machines.


– **Education and digital literacy as fundamental safeguards**: Several speakers highlighted education as crucial for ethical AI use. Cuba outlined their extensive technology training programs, while Poland advocated for using AI itself as a tool to educate people about AI risks and benefits.


– **Misinformation, disinformation, and algorithmic bias**: Professor Abbasi provided a comprehensive analysis of six key risks including the proliferation of deepfakes, persistent discrimination through algorithmic bias, privacy concerns, behavioral manipulation, and the decline of critical thinking skills.


– **Need for transparency, accountability, and regulatory frameworks**: All speakers emphasized the importance of transparent AI systems, human oversight, and robust regulatory frameworks. There was particular concern for protecting vulnerable populations including children, elderly, and those with disabilities.


**Overall Purpose:**


The discussion aimed to explore how to apply ethics and human rights principles to emerging technologies, particularly artificial intelligence, within the context of building an inclusive information society. This was part of a WSIS Action Line session focused on painting an “ethical landscape” for the digital age.


**Overall Tone:**


The tone was professional and collaborative throughout, with speakers sharing practical experiences and solutions rather than engaging in debate. While there was underlying concern about AI risks and challenges, the discussion maintained a constructive and forward-looking approach. The tone remained consistently diplomatic and solution-oriented, with speakers building upon each other’s points and emphasizing the need for international cooperation and shared responsibility in addressing these challenges.


Speakers

– **Participant**: Role/Title: Not specified, Area of expertise: Not specified


– **Anriette Esterhuysen**: Role/Title: High-level track facilitator/Moderator, Area of expertise: Digital rights and governance (described as “veteran of this space”), From: South Africa


– **Vanessa Matz**: Role/Title: Minister of Digital Transformation, Area of expertise: Digital transformation and ethics in information society, From: Belgium


– **Abdulbaset Albaour**: Role/Title: Minister for General Authority for Communication and Information Technology, Area of expertise: AI and machine decision-making, From: Libya


– **Ernesto Rodriguez Hernandez**: Role/Title: First Vice Minister, Ministry of Communications, Area of expertise: Digital transformation and AI ethics education, From: Republic of Cuba


– **Stephen Isaboke**: Role/Title: Principal Secretary from the Ministry of Information, Communication and the Digital Economy, State Department for Broadcasting and Telecommunications, Area of expertise: Information access rights and media freedom balance, From: Kenya


– **Jacek Oko**: Role/Title: President of the Office of Electronic Communications, Area of expertise: AI regulation and digital education, From: Poland


– **Salma Abbasi**: Role/Title: Founder, Chairperson and CEO of the EU Worldwide Group, Area of expertise: AI ethics, digital rights, and child protection (described as “veteran of this space”), From: Not specified


**Additional speakers:**


None identified beyond the provided speaker names list.


Full session report

# Discussion Report: Ethics and Human Rights in the Digital Age


## Executive Summary


This discussion, part of the “Leaders’ Talks, Moral Pixels – Painting an Ethical Landscape in the Information Society” session on day three of the WSIS framework meetings, was facilitated by Anriette Esterhuysen. The session brought together ministers and digital governance experts from Belgium, Libya, Cuba, Kenya, and Poland to examine ethical considerations and human rights implications in the digital age, with particular focus on artificial intelligence and emerging technologies.


The conversation revealed shared priorities around education, human oversight, and transparency in AI governance, while highlighting different national approaches to implementation and regulation.


## Key Participants and Their Contributions


### Belgium – Minister Vanessa Matz


Minister Matz outlined Belgium’s systematic approach to digital transformation, emphasizing that digital services must be accessible to all without exception. Belgium requires that a non-digital alternative be provided alongside each digital service for vulnerable groups. The country has created an AI ecosystem bringing together public and private actors to provide ethical and legal advice on AI deployment, and has launched an observatory for AI and digital technologies to reinforce transparency and facilitate citizen dialogue. Matz stressed that digital technology must be at the service of humans by being safe, ethical, and inclusive.


### Libya – Minister Abdulbaset Albaour


Minister Albaour provided a cautionary perspective on AI decision-making, drawing a distinction between human and machine processes. He argued that humans make decisions based on emotion, experience, and wisdom, while AI relies on algorithms and data. He emphasized that, unlike human decisions, AI decisions are irreversible, stating: “That’s in my opinion, we cannot trust the machine to take decision.”


### Cuba – First Vice Minister Ernesto Rodriguez Hernandez


Speaking in Spanish with translation, Minister Rodriguez Hernandez presented Cuba’s comprehensive digital education infrastructure: 642 technology centers that have trained over 5 million people, mostly young people. In 2022 Cuba created a university specialized in computer sciences, which has graduated over 17,000 engineers. The country has approved a digital transformation policy, a digital agenda, and an AI development strategy under an ethical framework, with digital technology topics taught in universities with an ethical, safe, and innovative approach.


### Kenya – Principal Secretary Stephen Isaboke


Principal Secretary Isaboke discussed the balance between rights and regulation, introducing the concept of “creative tension” between competing rights. He emphasized that governments must balance freedom of expression and access to information with ethical regulation, noting concerns about platforms like TikTok and X, as well as cybercrime issues. The moderator summarized his point as showing that “we don’t have to abandon rights in order to respect rights.”


### Poland – President Jacek Oko, Office of Electronic Communications


President Oko advocated for using AI as an educational tool to teach people about AI risks and benefits. He proposed that AI could serve as a personalized learning assistant, particularly for people with special needs and intellectual disabilities. He referenced the EU Digital Services Act and its oversight capabilities, stating: “Let us not be afraid of AI. On the contrary, let us use it as a powerful tool in this educational mission.” He emphasized cooperation with non-governmental organizations and educators.


### Professor Salma Abbasi


Professor Abbasi provided a detailed analysis of AI-related risks, identifying six key ethical concerns in her framework: misinformation proliferation through deepfakes, algorithmic bias reinforcing discrimination, privacy and surveillance concerns, behavioral manipulation particularly affecting children, declining critical thinking abilities, and loss of cultural nuances in AI systems. She called for robust regulatory frameworks and highlighted the need for inclusive approaches, particularly for Global South countries rapidly adopting AI.


## Areas of Consensus


### Education and Capacity Building


All speakers emphasized education as fundamental for ethical AI governance. Cuba’s extensive technology center network, Poland’s advocacy for AI-powered educational tools, and other participants’ focus on human capacity building demonstrated broad agreement on educational approaches.


### Human Oversight


Multiple speakers, particularly Libya’s minister and Professor Abbasi, stressed the importance of maintaining human control and oversight in AI systems, emphasizing that machines should not make decisions independently.


### Transparency Requirements


Belgium’s observatory approach, Poland’s reference to EU oversight capabilities, and Professor Abbasi’s call for auditable algorithms reflected shared views on the need for transparent AI systems and public dialogue.


## Different Approaches


### Trust in AI Systems


A notable difference emerged between Libya’s skepticism about trusting machines for decision-making and Poland’s more optimistic stance about embracing AI as a tool, particularly for education.


### Regulatory Frameworks


Speakers presented different approaches to oversight, with some emphasizing governmental frameworks while others, like Poland’s representative, advocated for greater cooperation with non-governmental organizations and educational institutions.


## Key Challenges Identified


The discussion highlighted several ongoing challenges:


– Ensuring AI systems respect cultural contexts and local values


– Developing appropriate regulatory frameworks for rapidly evolving technology


– Balancing innovation with protection, particularly for vulnerable populations


– Addressing the digital divide and ensuring Global South participation in AI governance


– Protecting children from potential negative effects while leveraging educational benefits


## Technical Context


The session experienced some technical difficulties with computer and microphone issues, as noted by the moderator. Presentations from Belgium and Cuba included translation from French and Spanish respectively. A speaker from the Philippines was expected but did not appear.


## Conclusion


The discussion demonstrated broad international agreement on fundamental principles of AI ethics, particularly around education, human oversight, and transparency. While implementation approaches varied based on national contexts and priorities, participants showed commitment to ensuring that digital technologies serve human needs while respecting rights and cultural values. The conversation reflected growing international dialogue on AI governance, with emphasis on inclusive development and the need for continued cooperation between nations at different stages of digital transformation.


Session transcript

Participant: Ladies and gentlemen, we are going to start our session very soon. Dear participants, we would like to welcome you to our next Leaders’ Talks, Moral Pixels – Painting an Ethical Landscape in the Information Society. We would like to invite to the stage Ms. Anriette Esterhuysen, who is going to be our high-level track facilitator.


Anriette Esterhuysen: Good afternoon, everyone who is with us, virtually and in the room. I know things are a little bit, it’s day three and things are becoming a little bit chaotic. We have ministerial meetings, but we want to start on time as close as possible because there’s another session after us. So, I’ll introduce myself. I think, have I been introduced? My name is Anriette Esterhuysen, I’m from South Africa and I’ll be moderating this session. So, we have a very distinguished panel. This session is going to look at the WSIS Action Line that deals with ethics and human rights and particularly in how we apply ethics and human rights to emerging technologies such as artificial intelligence. So, I’m going to invite the panellists to come. I think next to me I have, well, let me introduce them in order of speaking. Our first speaker, and they can all come if they are here, is Her Excellency Minister Matz from Belgium. Is she with us yet? Not yet. She’ll be joining us, so let’s move on to who’s next. We also have, sorry, this is difficult to manipulate the mic and the keyboard at the same time. From Libya, next to me, we have His Excellency Mr. Abdulbaset Albaour, Minister for General Authority for Communication and Information Technology, and he’ll be our second speaker. And after him, we’ll have from Cuba, His Excellency Mr. Ernesto Rodríguez Hernández, First Vice Minister, Ministry of Communications from the Republic of Cuba. He’ll be our third speaker. Thanks very much to all of you ministers for rushing downstairs. After Cuba, we’re going to have, already here, and thanks for being the first one to walk to the stage, Stephen Isaboke, Principal Secretary from the Ministry of Information, Communication and the Digital Economy, State Department for Broadcasting and Telecommunications. From the Philippines, is Miss Ella Blanca López with us? Not yet. Thanks for that, Levi. From Poland, I know he’s here, I’ve just spoken to him, Mr. Jacek Oko, President of the Office of Electronic Communications from Poland. And after that, we have, last but definitely not least, Professor Salma Abbasi. She’s the founder and chairperson and CEO of the EU Worldwide Group, and like me, she’s a veteran of this space. So, we will probably be joined, so don’t feel that there’s disruption if you see other dignitaries going to the top of the stage. I’m trying to get to the top of my screen. Escaped, thank you. Just, do we have, oh, has she arrived? Perfect. Is that Miss López or Miss Lanz? Thanks very much. I’ll introduce you when I give you the floor. So, to start us, I’m going to go to our first speaker, which is Her Excellency, the Minister from Belgium. I need to get rid of this. I’m so sorry about this. Could you hold this for me, please? Thanks, Minister. So, from Belgium, Her Excellency is Vanessa Matz. She’s the Minister from the Ministry of Digital Transformation. And the question that we have for her is, how is Belgium dealing with this challenge of applying ethics, dealing with digital transformation and building an inclusive information society, particularly with the challenges related to artificial intelligence? And you’ll be responding in French, is that right? So, please, everyone, keep your headphones on or look at the transcript. And Minister, you have three minutes and the time is in front of you. Please, go ahead.


Vanessa Matz: Merci. Thank you very much. So, ladies and gentlemen, the question of ethics in the information society is a fundamental priority that I have carried. It’s one of the mandates I have within the Belgium federal government. It’s a topic we’re all dealing with at national and international level. Ethics is not just principle. It incarnates also the accessibility and the inclusion. It is absolutely imperative that the digital services be accessible to all men and women without exception. This includes vulnerable groups for whom in Belgium we will always want to ensure alternatives, non-digital alternatives at each digital online service. This is our way to guarantee a true equality of access. I also give a strong importance to the improvement of digital public services. Initiatives like training of public agents in first line and the accompanying of the citizens and promotion of digital inclusion are one of the concrete examples. Ethics to guide the development of our technology. Let’s take the artificial intelligence. In Belgium, we have created an ecosystem AI for Belgium that brings together public and private actors of the sector. These ecosystems offer advice on ethical aspects and legal aspects of AI, ensuring that the deployment respects the norms and regulations, all the while ensuring a transparent governance. Transparency is fundamental, particularly regarding algorithms used in the public services, which is why we have launched an observatory of artificial intelligence and of the new digital technologies in order to reinforce this transparency and facilitate the dialogue between citizens and the users. We also need to take particular attention for youth who are particularly vulnerable to the ethical issues linked to digitization. Digital technology needs to be at the service of humans by being safe, ethical and inclusive for all. 
Digital technology cannot simply fall from the sky; it needs to be the fruit of a constant dialogue and active cooperation between all competent authorities and at all levels. The summit is a unique opportunity to reinforce this international cooperation and to ensure that digitization benefits everyone in the respect of the ethical principles that guide our actions. Thank you very much.


Anriette Esterhuysen: You came to time absolutely perfectly. I was nervous for no reason. And Mr. Albaour, Your Excellency, the question we are asking you is could we or should we be delegating our ethical decision-making to machines? Are we doing it? And if that is happening, who should determine the framework, the rights and moral framework that guides


Abdulbaset Albaour: these systems? Good afternoon. Thank you for this question. As you know, now in these days, the most topics have been taken in AI. Before answering your question, I want to explain how the machine or how the AI take decision and what’s different between the human how to take decision and machine take the decision. Human make decision dependent on the emotion, experience, also the wisdom. But AI and machine take decision dependent on the algorithms and data. When we talk algorithms and data, we talk about the accuracy of data, also the design of algorithms, how to design these algorithms. Sometime when take the decision by human, we can maybe go back before the take decision and take another decision. But AI and machine, when take the decision, we cannot go back before the decision. That’s in my opinion, we cannot trust the machine to take decision.


Anriette Esterhuysen: Thanks very much and a very legitimate caution. Next, we have from Cuba and he’ll be responding to us in Spanish. So again, have your headphones on. Mr. Hernandez, Your Excellency, your question, I am sorry, I’m having terrible problems with my computer here. I apologize. I’m usually very well prepared. And you come from Cuba, a country that’s facing so many challenges and climate change not being least of them. And how are you facing this challenge of preparing new generations to make ethical and safe use of digital technologies?


Ernesto Rodriguez Hernandez: Before I answer your question, I would like to thank the organizers of the session for giving me the honor of participating in this session. The government of Cuba and the state of Cuba have always attached great importance to the development of information and telecommunication technologies. An example of this was ratified in the 2019 Cuban constitution, which establishes the social development plan for 2030. Additionally, we have declared that digital transformation is one of the pillars of the government, along with science and innovation and social well-being. In order to make this clear and make this a reality in 2024, the policy for digital transformation was approved. The digital agenda that implements it was also approved. And the strategy for the development and use of artificial intelligence were approved. And we believe that that should be done cautiously and under an ethical framework. Precisely, we do have what we need and we call the digital citizenship, which is related to respect to privacy, verification of sources before you disseminate information to avoid discriminatory and offensive and hate speech, and to foster the ability to denounce said practices, to have robust digital accreditation, avoiding and making sure that you carry out updates to digital platforms and their security patches. To this end, we have a network of 642 technology centers in Cuba called the Youth Computer and Electronics Club, and we have been able to train over 5 million Cuban, most of them young people. Additionally, we have specialties, specialism courses in all the universities in the country. In 2022, we created a university specialized in computer sciences, which has seen the graduation of over 17,000 engineers. As part of the general curricular strategy, digital technology topics are taught under an ethical, safe and innovative approach. 
These actions, together with the implementation of pedagogical modalities and the mediation of technology, ensure quality learning that contributes to coherent integration of educational centers, families and the community in general, under an ethical, safe and responsible use of


Anriette Esterhuysen: digital technologies. Thank you so much, ma’am. Thank you very much for that. If we do want human-centric AI, we need to invest in human capacity, and I think you’ve outlined that so clearly. Next, we’re moving to Kenya. So, Mr. Isaboke, how do governments and how do you feel they can and should they balance, on the one hand, ensuring rights to access to information, freedom of expression and the ability to innovate, while also ensuring that there is consideration of


Stephen Isaboke: ethics and values? Thank you, thank you. I think in Kenya, including the current scenario, that there’s an ongoing kind of, I’ll call it, creative tension between the right to access information and media freedom, and obviously innovation, on the other hand, and I think the whole area of ethical regulation, to actually then ensure that there’s a balance between the access to information and also respect for the law. So, the Kenyan constitution actually provides for freedom of the media, access to information, and indeed freedom to expression, but that freedom is actually not unlimited. There are safeguards around, for example, incitement to violence, you know, anything that actually is hate speech or anything that actually causes civil disorder, and all that, and I think that’s really the balance that the authorities must balance between that and allowing for, especially the youth, who are actually very, very much sort of into the AI space, into the information space, where they apply a lot of the latest sort of technology and platforms, TikTok, X, and the rest of the platforms to communicate, and in some instances they might end up communicating or miscommunicating and misinforming, and in the process also sometimes infringe on the rule of law, and sometimes that can catch up with cybercrime and all that. But as a government, we are obviously committed to ensuring that we enable and encourage innovation, encourage free expression, but again, ensuring that there’s a balanced approach to protect rights and also build trust and resilience, you know, in that democratic and digital space. Thank you.


Anriette Esterhuysen: Thank you very much for that, and also, you know, for keeping to time, and I think, and that makes the point that we don’t have to abandon rights in order to respect rights, and in fact, as you said, there are ways of balancing rights when some rights impede on other rights. We have rights frameworks that can help us deal with that, so thanks very much for mentioning that. We’ve heard about the importance of education for AI and capability in AI in order to be able to use it ethically and well in a rights-respecting way, but Mr. Jacek Oko, you’ve got a really interesting topic, which is to talk about how can we use AI? Can we use AI to educate people about the risks of AI? Thank you for the invitation to this important forum.


Jacek Oko: The AI revolution was experienced as two sides of a coin. On the one hand, there is a tremendous potential, and on the other hand, real risks. Therefore, as regulators and policymakers, we must first protect universal ethical values from the flood of false content. Today, generating a deepfake or disinformation that looks confusingly real is not only possible, but it’s also alarmingly easy. This is a fundamental challenge for the cohesion of our societies. Of course, we are not totally inactive. In the European Union, we already have specific regulations, such as the Digital Services Act. This is an important tool which gives us, the regulators, the ability to oversee the moderation of illegal content, ensure transparency of online advertisement and allows us to fight against disinformation. But regulations alone are not enough. Therefore, I want to emphasize that education is the most important. Education is crucial in building social resilience. Education that allows each and every citizen, from children to seniors, to distinguish manipulated content from the true one and to understand the intentions behind them, whether they were generated in a good or bad way. However, and this is the key part of the answer to the posed question, let us not be afraid of AI. On the contrary, let us use it as a powerful tool in this educational mission. Let us treat it as a personalized learning assistant aimed at people with special needs, with intellectual disabilities, on the autism spectrum, or seniors for whom traditional methods can be a barrier. AI can adapt content, explain complex issues in a simple way and create interactive safe environments for learning about the digital world. Who would do that? This is the question. I think we should trust non-governmental organizations, let’s trust educators and let’s cooperate with them as an administration. So far, we have measured our strength against our intentions. 
No, our intentions remain strong, but we can fully respond to them with the power of AI. Our primary goal is to create a safe Internet. But a safe Internet in the age of AI means much more than just fast access. It means the Internet free from manipulation, which once again becomes what it was meant to be from the beginning, a reliable and verified source of knowledge. I have at the end a call. So let’s use AI to teach about AI. Thank you.


Anriette Esterhuysen: Thank you very much and for that challenge. And I think that reminder that if we approach emerging technologies just from a place of fear, we will fail to effectively utilize the positive potential. So thanks for outlining that. And do we have, we don’t have a virtual speaker and I think our speaker from the Philippines is not here. So we have a little bit more time. But our last speaker is Professor Salma Abbasi. Salma, in an era where AI and digital technologies shape our perceptions and decisions, you know, we’ve heard from Kenya as well how that happens in terms of the media and content, online content. How do we ensure ethical accountability? And especially when it’s so much of this, when algorithms actually operate beyond human oversight or even if there’s some human oversight, it’s often not visible or transparent.


Salma Abbasi: Thank you very much. First of all, I really appreciate the opportunity to be on this stage with these distinguished panelists. And I think this is a very important question for us to discuss. As we adopt AI rapidly, we have many, many ethical considerations to have. And I believe that my colleagues have said that the biggest challenge we have is the risk of trusting misinformation, disinformation, and the deep fake. I believe that there are six components to this, and I’ll go through them very quickly. The proliferation of misinformation, disinformation, every minister has mentioned. The dramatic acceleration of people believing the false narrative, especially young children, is a problem. The manipulation and the distortion of facts have been seen on the streets of the United Kingdom last year when our societies were polarized and now remains in that situation. The geopolitical dynamics and those who have the power of AI are distorting the facts and there’s no recourse at the moment. The second is the persistent discrimination of the algorithmic bias that reinforces the systemic biases that we have and the programmers that remain in that bias world. The stereotype, the inequity, particularly impacting children and women and the elderly, as my colleague has said from Poland. We need to identify and understand the inequities because they are shaping the digital environment of our kids. The third concern is the privacy and continuous surveillance, which is articulated beautifully by Meredith yesterday, the president of Signal. We have vast amounts of data that people are grasping and analyzing our behavior, our patterns, our vulnerabilities, our fears, and then manipulating that. I’m more concerned about the young and the people with intellectual disabilities. The advancement of commercial exploitation is vast. 700 billion dollars commercial industry for cosmetics frightens me. The individuals do not give consent and are being manipulated. 
The fourth risk of manipulation is the influence of behavior. The radical increase of gender-based violence, technology-facilitated violence, the narrative of misogyny in society, which is measured, is because our young boys are being exposed to bad social media influences. The ethics, the morals are missing. Young girls are being exploited by technology-facilitated tools hidden in games, which we are not aware of. What we have to do is understand this shift in what is being commercially exploiting as fun because it’s not. It’s penetrating private spaces. Our fears, our perceptions are being shaped. The behaviors of aggression and hate, all the ministers mentioned hate. This is an unrealistic portrayal of the decline of the well-being of children. When I look at the fifth, it’s the critical thinking. Children’s attention span is very short. I’m looking at the time I’m going to erase. It’s very important for us to understand that we’re misleading the children in showing them that this is the way and the only way, the AI way. We need to balance the online and offline critical thinking ability. The sixth most important, which I think our minister from Libya mentioned, is the nuances of the social cultural norm. All the things that we learn from our grandparents, our culture that is not digital, AI is missing all of that in its analysis. It’s priceless because it’s our cultural knowledge and heritage that is not easily documented. There are three things that I would like to recommend very quickly. The transparency in the design and development, auditable algorithms. We need to know what data they used, what were the parameters they set, and most importantly, how do we check that it’s gender-neutral in its definitions. The second is the oversight and governance, which we will discuss tomorrow. But the human oversight is a must. Human intervention blindly following algorithms is a big mistake. It does make mistakes. The data has errors. 
The programmer could make a mistake. The regulatory framework needs to be robust and reinforced. My colleague from Cuba, I met your regulators and we discussed this very issue. And the third and final one is robust accountability with consequences. There needs to be a consequence if a duty of care is derelict and a child commits suicide. And finally, with many countries from the global south rapidly embracing AI without adequate regulatory frameworks and safeguards in place, we need to collaborate closely to build an inclusive framework that is localized and contextualized, so that we can incorporate the voices of the global south and ensure that it is shaped by them, for them. The future of AI must be grounded in our shared values with empathy, humanity, and accountability for human dignity for everyone. This is the only way we can ensure that artificial intelligence is not just artificial, but is there to ensure a just, secure, and sustainable future for the next generation that we are responsible for. Thank you so much.


Anriette Esterhuysen: Thanks very much, Salma. Thanks to this wonderful panel. We’ve heard about the support for ecosystems, the integration of digital public infrastructure from Belgium, the importance of human centeredness, human rights, balancing rights, but also respecting those rights, incredible value of education and investing in future generations from Cuba. The innovative approach, let’s not be overwhelmed by fear from Poland. And then, Salma, your reminder that we do need frameworks and standards. And I think everyone mentioned the importance of transparency. Thank you very much. Thanks for joining. And thanks to our leaders for inspiring us. Thank you.



Vanessa Matz

Speech speed

119 words per minute

Speech length

358 words

Speech time

179 seconds

Digital services must be accessible to all without exception, including vulnerable groups who need non-digital alternatives

Explanation

Matz argues that ethics in digital transformation must include accessibility and inclusion for all people. She emphasizes that vulnerable groups should always have non-digital alternatives available when digital services are provided to ensure true equality of access.


Evidence

Belgium ensures alternatives, non-digital alternatives at each digital online service


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Development | Human rights


Digital technology needs to be safe, ethical and inclusive for all, serving humans rather than replacing human judgment

Explanation

Matz contends that digital technology should be human-centered and cannot simply be imposed without consideration. She argues for constant dialogue and cooperation between authorities to ensure technology serves humanity while respecting ethical principles.


Evidence

Digital technology cannot just be blown back from the sky, it needs to be the fruit of a constant dialogue and active cooperation between all competent authorities and at all levels


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Human rights | Sociocultural


Created AI ecosystem bringing together public and private actors to provide ethical and legal advice

Explanation

Matz describes Belgium’s approach to AI governance through creating a collaborative ecosystem. This system brings together various stakeholders to ensure AI deployment respects norms and regulations while maintaining transparent governance.


Evidence

In Belgium, we have created an ecosystem AI for Belgium that brings together public and private actors of the sector


Major discussion point

Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Launched observatory for AI and digital technologies to reinforce transparency and facilitate citizen dialogue

Explanation

Matz explains Belgium’s initiative to create an observatory focused on AI and digital technologies. This institution aims to increase transparency, particularly regarding algorithms used in public services, and to improve communication between citizens and users.


Evidence

We have launched an observatory of artificial intelligence and of the new digital technologies in order to reinforce this transparency and facilitate the dialogue between citizens and the users


Major discussion point

Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Jacek Oko
– Salma Abbasi
– Anriette Esterhuysen

Agreed on

Transparency in AI systems and governance is crucial


Digital transformation requires constant dialogue and cooperation between competent authorities at all levels

Explanation

Matz emphasizes that successful digital transformation cannot be achieved in isolation but requires ongoing collaboration. She views international cooperation as essential to ensure digitization benefits everyone while respecting ethical principles.


Evidence

The summit is a unique opportunity to reinforce this international cooperation and to ensure that digitization benefits everyone in the respect of ethical principles


Major discussion point

International Cooperation and Capacity Building


Topics

Legal and regulatory | Development



Stephen Isaboke

Speech speed

129 words per minute

Speech length

264 words

Speech time

122 seconds

Governments must balance freedom of expression and access to information with ethical regulation and respect for law

Explanation

Isaboke describes the challenge governments face in maintaining democratic freedoms while ensuring responsible use of technology. He emphasizes the need for a balanced approach that protects rights while building trust and resilience in the digital space.


Evidence

There’s an ongoing kind of creative tension between the right to access information and media freedom, and obviously innovation, on the other hand, and the whole area of ethical regulation


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Human rights | Legal and regulatory


Agreed with

– Anriette Esterhuysen

Agreed on

Balancing rights and freedoms with ethical considerations


Constitutional freedoms have safeguards against incitement to violence, hate speech, and civil disorder

Explanation

Isaboke explains that while Kenya’s constitution provides for media freedom and access to information, these rights are not unlimited. He outlines specific legal boundaries that exist to prevent harmful content while still allowing for innovation and free expression.


Evidence

The Kenyan constitution actually provides for freedom of the media, access to information, and indeed freedom to expression, but that freedom is actually not unlimited. There are safeguards around, for example, incitement to violence, hate speech or anything that actually causes civil disorder


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Human rights | Legal and regulatory


Agreed with

– Anriette Esterhuysen

Agreed on

Balancing rights and freedoms with ethical considerations



Abdulbaset Albaour

Speech speed

106 words per minute

Speech length

149 words

Speech time

84 seconds

Humans make decisions based on emotion, experience, and wisdom, while AI relies on algorithms and data

Explanation

Albaour contrasts human decision-making processes with AI systems to highlight fundamental differences. He argues that human decisions incorporate emotional intelligence, lived experience, and wisdom, while AI decisions are purely based on algorithmic processing and data analysis.


Evidence

Human make decision dependent on the emotion, experience, also the wisdom. But AI and machine take decision dependent on the algorithms and data


Major discussion point

AI Decision-Making and Human Oversight


Topics

Human rights | Sociocultural


Agreed with

– Salma Abbasi

Agreed on

Human oversight is essential in AI decision-making


AI decisions are irreversible unlike human decisions, making machines untrustworthy for decision-making

Explanation

Albaour points out a critical limitation of AI systems – their inability to reconsider or reverse decisions once made. He contrasts this with human decision-making, where people can reconsider and change their minds, leading him to conclude that machines cannot be trusted with decision-making.


Evidence

Sometime when take the decision by human, we can maybe go back before the take decision and take another decision. But AI and machine, when take the decision, we cannot go back before the decision


Major discussion point

AI Decision-Making and Human Oversight


Topics

Human rights | Legal and regulatory


Agreed with

– Salma Abbasi

Agreed on

Human oversight is essential in AI decision-making



Ernesto Rodriguez Hernandez

Speech speed

109 words per minute

Speech length

336 words

Speech time

184 seconds

Created 642 technology centers training over 5 million Cubans, mostly young people, in digital citizenship

Explanation

Hernandez describes Cuba’s comprehensive approach to digital education through a network of technology centers. These centers focus on teaching digital citizenship, which includes respect for privacy, source verification, and avoiding discriminatory speech.


Evidence

We have a network of 642 technology centers in Cuba called the Youth Computer and Electronics Club, and we have been able to train over 5 million Cuban, most of them young people


Major discussion point

Education and Digital Literacy


Topics

Development | Sociocultural


Agreed with

– Jacek Oko
– Anriette Esterhuysen

Agreed on

Education is fundamental for ethical AI and digital literacy


Digital technology topics are taught under an ethical, safe and innovative approach in universities

Explanation

Hernandez outlines Cuba’s educational strategy that integrates ethical considerations into technology education at the university level. This approach ensures that future professionals understand both the technical and ethical dimensions of digital technologies.


Evidence

In 2022, we created a university specialized in computer sciences, which has seen the graduation of over 17,000 engineers. Digital technology topics are taught under an ethical, safe and innovative approach


Major discussion point

Education and Digital Literacy


Topics

Development | Sociocultural


Agreed with

– Jacek Oko
– Anriette Esterhuysen

Agreed on

Education is fundamental for ethical AI and digital literacy


Approved digital transformation policy, digital agenda, and AI development strategy under ethical framework

Explanation

Hernandez describes Cuba’s comprehensive policy approach to digital transformation, emphasizing that AI development should be conducted cautiously within an ethical framework. This represents a systematic governmental approach to managing technological advancement.


Evidence

In 2024, the policy for digital transformation was approved. The digital agenda that implements it was also approved. And the strategy for the development and use of artificial intelligence were approved


Major discussion point

Governance and Regulatory Frameworks


Topics

Legal and regulatory | Development



Jacek Oko

Speech speed

130 words per minute

Speech length

400 words

Speech time

184 seconds

Generating deepfakes and disinformation that looks real is alarmingly easy, threatening social cohesion

Explanation

Oko warns about the accessibility of technology that can create convincing false content, presenting this as a fundamental challenge to social stability. He emphasizes that the ease of creating such content poses serious risks to societal trust and cohesion.


Evidence

Today, generating a deepfake or disinformation that looks confusingly real is not only possible, but it’s also alarmingly easy. This is a fundamental challenge for the cohesion of our society


Major discussion point

Risks and Challenges of AI


Topics

Cybersecurity | Sociocultural


Education is crucial for building social resilience and helping citizens distinguish manipulated content from true content

Explanation

Oko argues that education is the most important tool for combating AI-related risks, emphasizing its role in building societal resilience. He believes education should enable all citizens, from children to seniors, to identify manipulated content and understand the intentions behind it.


Evidence

Education is crucial in building social resilience. Education that allows each and every citizen, from children to seniors, to distinguish manipulated content from the true one and to understand the intentions behind them


Major discussion point

Education and Digital Literacy


Topics

Sociocultural | Development


Agreed with

– Ernesto Rodriguez Hernandez
– Anriette Esterhuysen

Agreed on

Education is fundamental for ethical AI and digital literacy


AI can serve as a personalized learning assistant for people with special needs and intellectual disabilities

Explanation

Oko presents a positive application of AI in education, suggesting it can be used as a tool to help vulnerable populations. He argues that AI can adapt content, simplify complex issues, and create safe learning environments for those who face barriers with traditional educational methods.


Evidence

Let us treat it as a personalized learning assistant aimed at people with special needs, with intellectual disabilities, on the different autism spectrum of the seniors for whom traditional methods can be a barrier


Major discussion point

Education and Digital Literacy


Topics

Human rights | Development


Digital Services Act provides regulators ability to oversee content moderation and fight disinformation

Explanation

Oko describes the European Union’s regulatory approach to managing AI risks through the Digital Services Act. This legislation gives regulators tools to oversee content moderation, ensure transparency in online advertising, and combat disinformation.


Evidence

In the European Union, we already have specific regulations. Such as the Digital Services Act. This is an important tool which gives us, the regulators, the ability to oversee the moderation of illegal content, ensure transparency of online advertisement and allows us to fight against disinformation


Major discussion point

Governance and Regulatory Frameworks


Topics

Legal and regulatory | Cybersecurity


Agreed with

– Vanessa Matz
– Salma Abbasi
– Anriette Esterhuysen

Agreed on

Transparency in AI systems and governance is crucial



Salma Abbasi

Speech speed

149 words per minute

Speech length

811 words

Speech time

325 seconds

Misinformation and disinformation proliferation leads to dramatic acceleration of false narrative belief, especially among children

Explanation

Abbasi identifies the rapid spread of false information as a critical concern, particularly highlighting how children are vulnerable to believing false narratives. She connects this to real-world consequences, referencing social polarization events in the UK.


Evidence

The manipulation and the distortion of facts have been seen on the streets of the United Kingdom last year when our societies were polarized and now remains in that situation


Major discussion point

Risks and Challenges of AI


Topics

Cybersecurity | Human rights


Algorithmic bias reinforces systemic discrimination, particularly impacting children, women, and elderly

Explanation

Abbasi warns about how AI systems can perpetuate and amplify existing societal biases through algorithmic discrimination. She emphasizes that programmers’ biases become embedded in systems, creating persistent discrimination that particularly affects vulnerable populations.


Evidence

The persistent discrimination of the algorithmic bias that reinforces the systemic biases that we have and the programmers that remain in that bias world. The stereotype, the inequity, particularly impacting children and women and the elderly


Major discussion point

Risks and Challenges of AI


Topics

Human rights | Sociocultural


Technology-facilitated gender-based violence and exploitation of young people through games and social media

Explanation

Abbasi highlights the concerning rise in technology-facilitated violence, particularly gender-based violence and the exploitation of young people. She points to the influence of social media on young boys and the hidden exploitation of girls through gaming platforms.


Evidence

The radical increase of gender-based violence, technology-facilitated violence, the narrative of misogyny in society, which is measured, is because our young boys are being exposed to bad social media influences. Young girls are being exploited by technology-facilitated tools hidden in games


Major discussion point

Risks and Challenges of AI


Topics

Human rights | Cybersecurity


AI lacks understanding of social cultural norms and heritage knowledge from previous generations

Explanation

Abbasi argues that AI systems miss crucial cultural and social knowledge that is passed down through generations but not easily documented. She emphasizes that this cultural heritage and wisdom from grandparents represents priceless knowledge that AI cannot capture or analyze.


Evidence

All the things that we learn from our grandparents, our culture that is not digital, AI is missing all of that in its analysis. It’s priceless because it’s our cultural knowledge and heritage that is not easily documented


Major discussion point

Risks and Challenges of AI


Topics

Sociocultural | Human rights


Children’s attention spans are shortening and critical thinking abilities need to be balanced between online and offline

Explanation

Abbasi expresses concern about the impact of AI and digital technologies on children’s cognitive development. She argues that there’s a dangerous trend of presenting AI as the only way forward, which undermines children’s ability to think critically and balance digital with offline experiences.


Evidence

Children’s attention span is very short. It’s very important for us to understand that we’re misleading the children in showing them that this is the way and the only way, the AI way


Major discussion point

Education and Digital Literacy


Topics

Human rights | Sociocultural


Human oversight is essential as algorithms can make mistakes due to data errors or programmer errors

Explanation

Abbasi emphasizes the critical need for human intervention in AI systems, arguing that blindly following algorithms is dangerous. She points out that AI systems are fallible due to potential data errors and programmer mistakes, making human oversight mandatory.


Evidence

Human intervention blindly following algorithms is a big mistake. It does make mistakes. The data has errors. The programmer could make a mistake


Major discussion point

AI Decision-Making and Human Oversight


Topics

Human rights | Legal and regulatory


Agreed with

– Abdulbaset Albaour

Agreed on

Human oversight is essential in AI decision-making


Blindly following algorithms without human intervention is a significant mistake

Explanation

Abbasi warns against over-reliance on algorithmic decision-making without proper human oversight. She argues that this approach is fundamentally flawed and dangerous, emphasizing the need for human judgment in AI-assisted processes.


Evidence

Human intervention blindly following algorithms is a big mistake. It does make mistakes. The data has errors. The programmer could make a mistake


Major discussion point

AI Decision-Making and Human Oversight


Topics

Human rights | Legal and regulatory


Agreed with

– Abdulbaset Albaour

Agreed on

Human oversight is essential in AI decision-making


Need for transparency in AI design, auditable algorithms, and robust accountability with consequences

Explanation

Abbasi calls for comprehensive transparency measures in AI development, including the ability to audit algorithms and understand their parameters. She emphasizes the need for accountability mechanisms with real consequences, particularly when AI failures lead to serious harm.


Evidence

We need to know what data they used, what were the parameters they set, and most importantly, how do we check that it’s gender-neutral in its definitions. There needs to be a consequence if a duty of care is derelict and a child commits suicide


Major discussion point

Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Vanessa Matz
– Jacek Oko
– Anriette Esterhuysen

Agreed on

Transparency in AI systems and governance is crucial


Global South countries need collaborative frameworks that are localized and contextualized

Explanation

Abbasi highlights the particular vulnerability of Global South countries that are rapidly adopting AI without adequate regulatory frameworks. She calls for collaborative efforts to build inclusive frameworks that incorporate local voices and contexts rather than imposing external standards.


Evidence

Many countries from the global south that are rapidly embracing AI without the adequate regulatory frameworks in place and safeguards, we need to collaborate closely to work to build an inclusive framework that is localized and contextualized


Major discussion point

International Cooperation and Capacity Building


Topics

Development | Legal and regulatory


Future of AI must be grounded in shared values with empathy, humanity, and accountability

Explanation

Abbasi concludes with a call for AI development to be fundamentally grounded in human values and dignity. She emphasizes that AI should not just be artificial but should serve to create a just, secure, and sustainable future for the next generation.


Evidence

The future of AI must be grounded in our shared values with empathy, humanity, and accountability for human dignity for everyone. This is the only way we can ensure that artificial intelligence is not just artificial, but it’s there to ensure a just, secure, and sustainable future for the next generation


Major discussion point

International Cooperation and Capacity Building


Topics

Human rights | Development



Anriette Esterhuysen

Speech speed

120 words per minute

Speech length

1206 words

Speech time

601 seconds

Rights frameworks can help balance competing rights without abandoning fundamental rights

Explanation

Esterhuysen emphasizes that when some rights impinge on other rights, established rights frameworks can help governments and societies resolve the conflict. She argues that it is not necessary to abandon rights in order to respect other rights, but rather to find ways of balancing them appropriately.


Evidence

We don’t have to abandon rights in order to respect rights, and in fact, as you said, there are ways of balancing rights when some rights impede on other rights. We have rights frameworks that can help us deal with that


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Human rights | Legal and regulatory


Agreed with

– Stephen Isaboke

Agreed on

Balancing rights and freedoms with ethical considerations


Approaching emerging technologies from fear will prevent effective utilization of positive potential

Explanation

Esterhuysen warns against letting fear dominate our approach to new technologies like AI. She argues that if we are overwhelmed by fear and focus only on risks, we will fail to harness the beneficial capabilities that these technologies can offer society.


Evidence

If we approach emerging technologies just from a place of fear, we will fail to effectively utilize the positive potential


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Human rights | Development


Human-centric AI requires investment in human capacity and capability building

Explanation

Esterhuysen emphasizes that for AI to truly serve humanity, there must be substantial investment in developing human capabilities and capacity. She highlights this as a fundamental requirement for ensuring that AI development remains centered on human needs and values.


Evidence

If we do want human centric AI, we need to invest in human capacity


Major discussion point

Education and Digital Literacy


Topics

Development | Human rights


Agreed with

– Ernesto Rodriguez Hernandez
– Jacek Oko

Agreed on

Education is fundamental for ethical AI and digital literacy


Transparency is a common theme across all approaches to ethical AI governance

Explanation

Esterhuysen identifies transparency as a recurring and fundamental element mentioned by all panelists in their approaches to AI governance. She presents this as a unifying principle that spans different countries and regulatory approaches to ensuring ethical AI development.


Evidence

I think everyone mentioned the importance of transparency


Major discussion point

Governance and Regulatory Frameworks


Topics

Legal and regulatory | Human rights


Agreed with

– Vanessa Matz
– Jacek Oko
– Salma Abbasi

Agreed on

Transparency in AI systems and governance is crucial



Participant

Speech speed

64 words per minute

Speech length

56 words

Speech time

52 seconds

The session focuses on WSIS Action Line dealing with ethics and human rights in emerging technologies like AI

Explanation

The participant introduces the session’s scope, explaining that it will examine how ethics and human rights principles apply to emerging technologies, particularly artificial intelligence. This sets the framework for discussing the intersection of technology development and ethical considerations.


Evidence

This session is going to look at the WSIS Action Line that deals with ethics and human rights and particularly in how we apply ethics and human rights to emerging technologies such as artificial intelligence


Major discussion point

Ethics and Human Rights in Digital Transformation


Topics

Human rights | Legal and regulatory


Agreements

Agreement points

Education is fundamental for ethical AI and digital literacy

Speakers

– Ernesto Rodriguez Hernandez
– Jacek Oko
– Anriette Esterhuysen

Arguments

Created 642 technology centers training over 5 million Cubans, mostly young people, in digital citizenship


Digital technology topics are taught under an ethical, safe and innovative approach in universities


Education is crucial for building social resilience and helping citizens distinguish manipulated content from true content


Human-centric AI requires investment in human capacity and capability building


Summary

All speakers emphasized that education and capacity building are essential for ensuring ethical use of AI and digital technologies, with particular focus on training citizens to navigate digital challenges responsibly


Topics

Development | Sociocultural | Human rights


Human oversight is essential in AI decision-making

Speakers

– Abdulbaset Albaour
– Salma Abbasi

Arguments

AI decisions are irreversible unlike human decisions, making machines untrustworthy for decision-making


Humans make decisions based on emotion, experience, and wisdom, while AI relies on algorithms and data


Human oversight is essential as algorithms can make mistakes due to data errors or programmer errors


Blindly following algorithms without human intervention is a significant mistake


Summary

Both speakers strongly advocate for maintaining human control and oversight in AI systems, emphasizing that machines cannot be trusted to make decisions independently due to their limitations and potential for errors


Topics

Human rights | Legal and regulatory


Transparency in AI systems and governance is crucial

Speakers

– Vanessa Matz
– Jacek Oko
– Salma Abbasi
– Anriette Esterhuysen

Arguments

Launched observatory for AI and digital technologies to reinforce transparency and facilitate citizen dialogue


Digital Services Act provides regulators ability to oversee content moderation and fight disinformation


Need for transparency in AI design, auditable algorithms, and robust accountability with consequences


Transparency is a common theme across all approaches to ethical AI governance


Summary

Multiple speakers emphasized transparency as a fundamental requirement for ethical AI governance, including transparent algorithms, oversight mechanisms, and public dialogue about AI systems


Topics

Legal and regulatory | Human rights


Balancing rights and freedoms with ethical considerations

Speakers

– Stephen Isaboke
– Anriette Esterhuysen

Arguments

Governments must balance freedom of expression and access to information with ethical regulation and respect for law


Constitutional freedoms have safeguards against incitement to violence, hate speech, and civil disorder


Rights frameworks can help balance competing rights without abandoning fundamental rights


Summary

Both speakers agreed that it’s possible and necessary to balance fundamental rights like freedom of expression with ethical considerations and legal safeguards, without abandoning core rights principles


Topics

Human rights | Legal and regulatory


Similar viewpoints

Both ministers emphasized inclusive approaches to digital transformation that specifically consider vulnerable populations and ensure no one is left behind in the digital transition

Speakers

– Vanessa Matz
– Ernesto Rodriguez Hernandez

Arguments

Digital services must be accessible to all without exception, including vulnerable groups who need non-digital alternatives


Created 642 technology centers training over 5 million Cubans, mostly young people, in digital citizenship


Topics

Human rights | Development


Both speakers identified the ease of creating convincing false content as a major threat to society, with particular concern about its impact on social cohesion and vulnerable populations

Speakers

– Jacek Oko
– Salma Abbasi

Arguments

Generating deepfakes and disinformation that looks real is alarmingly easy, threatening social cohesion


Misinformation and disinformation proliferation leads to dramatic acceleration of false narrative belief, especially among children


Topics

Cybersecurity | Sociocultural


Both countries have developed comprehensive policy frameworks and multi-stakeholder approaches to ensure AI development occurs within ethical boundaries

Speakers

– Vanessa Matz
– Ernesto Rodriguez Hernandez

Arguments

Approved digital transformation policy, digital agenda, and AI development strategy under ethical framework


Created AI ecosystem bringing together public and private actors to provide ethical and legal advice


Topics

Legal and regulatory | Development


Unexpected consensus

Using AI to combat AI-related risks

Speakers

– Jacek Oko

Arguments

AI can serve as a personalized learning assistant for people with special needs and intellectual disabilities


Explanation

While most speakers focused on AI risks and the need for human oversight, Oko presented an unexpected consensus-building approach of using AI itself as a solution to AI-related problems, particularly in education and accessibility


Topics

Human rights | Development


Cultural knowledge gaps in AI systems

Speakers

– Salma Abbasi

Arguments

AI lacks understanding of social cultural norms and heritage knowledge from previous generations


Explanation

This represents an unexpected area where there was implicit consensus – the recognition that AI systems fundamentally lack cultural wisdom and intergenerational knowledge, which wasn’t directly challenged by other speakers


Topics

Sociocultural | Human rights


Overall assessment

Summary

The speakers demonstrated strong consensus on key principles including the importance of education and capacity building, the need for human oversight in AI systems, transparency requirements, and the possibility of balancing rights with ethical considerations. There was also agreement on the risks posed by misinformation and the need for inclusive approaches to digital transformation.


Consensus level

High level of consensus on fundamental principles, with speakers from different regions and backgrounds converging on similar approaches to ethical AI governance. This suggests a mature understanding of the challenges and potential solutions, with implications for developing international frameworks and standards for AI ethics that could have broad acceptance across different political and cultural contexts.


Differences

Different viewpoints

Trust in AI for decision-making

Speakers

– Abdulbaset Albaour
– Jacek Oko

Arguments

That is why, in my opinion, we cannot trust the machine to take decisions


Let us not be afraid of AI. On the contrary, let us use it as a powerful tool in this educational mission


Summary

Albaour fundamentally argues against trusting machines for decision-making due to their reliance on algorithms and data versus human emotion, experience, and wisdom. Oko takes a more optimistic stance, advocating for embracing AI as a powerful tool rather than fearing it, particularly in education.


Topics

Human rights | Legal and regulatory


Approach to AI regulation and oversight

Speakers

– Jacek Oko
– Salma Abbasi

Arguments

Let’s trust non-governmental organizations, let’s trust educators, and let’s cooperate with them as an administration


The regulatory framework needs to be robust and reinforced


Summary

Oko advocates for trusting non-governmental organizations and educators for AI oversight, emphasizing cooperation with administration. Abbasi calls for robust and reinforced regulatory frameworks, suggesting a more structured governmental approach to AI governance.


Topics

Legal and regulatory | Development


Unexpected differences

Role of fear in approaching AI technology

Speakers

– Anriette Esterhuysen
– Salma Abbasi

Arguments

If we approach emerging technologies just from a place of fear, we will fail to effectively utilize the positive potential


The manipulation and the distortion of facts have been seen on the streets of the United Kingdom last year when our societies were polarized


Explanation

While Esterhuysen warns against fear-based approaches to AI that might prevent utilizing positive potential, Abbasi provides extensive evidence of real-world harms from AI systems, including social polarization, gender-based violence, and exploitation of children. This creates an unexpected tension between optimistic utilization and cautionary risk assessment.


Topics

Human rights | Sociocultural


Overall assessment

Summary

The discussion revealed relatively low levels of direct disagreement, with most conflicts centered around the degree of trust in AI systems and the appropriate balance between regulation and innovation. The main areas of disagreement were: fundamental trust in AI decision-making capabilities, regulatory approaches (governmental vs. non-governmental oversight), and the balance between embracing AI potential versus addressing its risks.


Disagreement level

Low to moderate disagreement level. The speakers largely shared common goals of ethical AI development, human-centered technology, and the importance of education and transparency. However, they differed significantly in their approaches to achieving these goals, particularly regarding the role of regulation, the trustworthiness of AI systems, and the balance between innovation and caution. These disagreements have important implications as they reflect fundamental philosophical differences about AI governance that could impact policy development and international cooperation efforts.


Partial agreements

Similar viewpoints

Both ministers emphasized inclusive approaches to digital transformation that specifically consider vulnerable populations and ensure no one is left behind in the digital transition

Speakers

– Vanessa Matz
– Ernesto Rodriguez Hernandez

Arguments

Digital services must be accessible to all without exception, including vulnerable groups who need non-digital alternatives


Created 642 technology centers training over 5 million Cubans, mostly young people, in digital citizenship


Topics

Human rights | Development



Takeaways

Key takeaways

Digital services must be accessible to all populations, including vulnerable groups who require non-digital alternatives to ensure true equality of access


Human oversight is essential in AI systems as machines cannot be trusted to make irreversible decisions based solely on algorithms and data, unlike humans who use emotion, experience, and wisdom


Education and digital literacy are fundamental for building social resilience, with emphasis on teaching citizens to distinguish between authentic and manipulated content


AI poses significant risks including easy generation of deepfakes and disinformation, algorithmic bias that reinforces discrimination, and technology-facilitated violence particularly affecting children and women


Transparency in AI design and development is crucial, requiring auditable algorithms and robust accountability frameworks with consequences for failures


International cooperation is needed to develop localized and contextualized AI frameworks, especially for Global South countries rapidly adopting AI without adequate regulatory safeguards


AI should be used as a tool to educate about AI risks rather than being approached solely from a place of fear, particularly for people with special needs and disabilities


Constitutional rights frameworks can help balance freedom of expression and access to information with ethical regulation and protection against hate speech and violence


Resolutions and action items

Belgium created an AI ecosystem bringing together public and private actors to provide ethical and legal advice on AI deployment


Belgium launched an observatory for AI and digital technologies to reinforce transparency and facilitate citizen dialogue


Cuba established 642 technology centers that have trained over 5 million people in digital citizenship


Cuba approved digital transformation policy, digital agenda, and AI development strategy under an ethical framework


Need to implement auditable algorithms with transparency in design and development processes


Establish robust regulatory frameworks with human oversight requirements and accountability mechanisms with consequences


Unresolved issues

How to effectively regulate AI systems that operate beyond human oversight or with limited transparency


How to address the cultural and heritage knowledge gaps in AI systems, which lack an understanding of social and cultural norms


How to balance innovation and free expression while preventing technology-facilitated violence and exploitation


How to ensure Global South countries can develop adequate regulatory frameworks while rapidly adopting AI technologies


How to address commercial exploitation through AI systems, such as the manipulation by the $700 billion cosmetics industry mentioned in the discussion


How to effectively combat the shortened attention spans and declining critical thinking abilities in children due to AI exposure


Suggested compromises

Provide non-digital alternatives alongside digital services to ensure inclusion of vulnerable populations while advancing digitalization


Use AI as a personalized learning assistant for people with special needs while maintaining human oversight and intervention capabilities


Apply constitutional safeguards against hate speech and violence while preserving freedom of expression and access to information


Collaborate between governmental and non-governmental organizations, educators, and administrators to leverage AI for educational purposes


Balance online and offline critical thinking development to maintain human cognitive abilities while embracing AI benefits


Develop localized AI frameworks that incorporate Global South voices while building on existing international cooperation structures


Thought provoking comments

Humans make decisions based on emotion, experience, and also wisdom. But AI and machines take decisions based on algorithms and data… Sometimes when a decision is taken by a human, we can go back and take another decision. But when AI and machines take a decision, we cannot go back. That is why, in my opinion, we cannot trust the machine to take decisions.

Speaker

Abdulbaset Albaour (Libya)


Reason

This comment provides a fundamental philosophical distinction between human and machine decision-making processes. It introduces the critical concept of irreversibility in AI decisions and highlights the absence of emotional intelligence and experiential wisdom in algorithmic processes. This cuts to the core of the ethical debate about AI delegation.


Impact

This comment established a cautionary tone that influenced subsequent speakers to address the limitations of AI. It shifted the discussion from purely technical considerations to fundamental questions about the nature of decision-making and trust in automated systems.


But as a government, we are obviously committed to ensuring that we enable and encourage innovation, encourage free expression, but again, ensuring that there’s a balanced approach to protect rights and also build trust and resilience… we don’t have to abandon rights in order to respect rights, and in fact… there are ways of balancing rights when some rights impede on other rights.

Speaker

Stephen Isaboke (Kenya)


Reason

This comment introduces the sophisticated concept of ‘creative tension’ between competing rights and reframes the discussion from a zero-sum perspective to one of dynamic balance. It challenges the false dichotomy that you must choose between innovation and rights protection.


Impact

This shifted the conversation from viewing rights and innovation as opposing forces to understanding them as complementary elements that require careful balancing. It provided a practical framework for policy-making that influenced the moderator’s summary and likely shaped how other participants viewed the regulatory challenge.


Let us not be afraid of AI. On the contrary, let us use it as a powerful tool in this educational mission… So let’s use AI to teach about AI.

Speaker

Jacek Oko (Poland)


Reason

This comment represents a paradigm shift from defensive to proactive thinking about AI. It’s counterintuitive and innovative – using the very technology that poses risks as a solution to educate about those risks. It challenges the fear-based approach that often dominates AI discussions.


Impact

This comment introduced a new dimension to the discussion by proposing AI as part of the solution rather than just the problem. It moved the conversation from purely regulatory and cautionary approaches to exploring innovative educational applications, demonstrating how emerging technologies can be leveraged for positive outcomes.


The nuances of the social cultural norm. All the things that we learn from our grandparents, our culture that is not digital, AI is missing all of that in its analysis. It’s priceless because it’s our cultural knowledge and heritage that is not easily documented.

Speaker

Salma Abbasi


Reason

This comment introduces a profound and often overlooked dimension – the loss of intergenerational wisdom and cultural knowledge in AI systems. It highlights how AI’s reliance on documented, digitized data excludes vast repositories of human knowledge passed down through oral traditions and cultural practices.


Impact

This comment deepened the discussion by introducing cultural and heritage considerations that hadn’t been explicitly addressed. It expanded the scope from technical and regulatory concerns to include preservation of human cultural wisdom, adding a more holistic perspective to the ethical framework discussion.


Many countries from the global south that are rapidly embracing AI without the adequate regulatory frameworks in place and safeguards, we need to collaborate closely to work to build an inclusive framework that is localized and contextualized so that we can incorporate the voices of the global south to ensure that it is shaped by them, for them.

Speaker

Salma Abbasi


Reason

This comment addresses a critical gap in global AI governance – the exclusion of Global South perspectives in framework development. It challenges the assumption that AI ethical frameworks can be universally applied without considering local contexts and power dynamics.


Impact

This comment brought attention to global equity issues in AI governance, shifting the discussion from primarily technical and national perspectives to international cooperation and inclusive development. It highlighted the need for collaborative, culturally sensitive approaches to AI ethics.


Overall assessment

These key comments collectively transformed the discussion from a series of national policy presentations into a nuanced exploration of fundamental questions about AI ethics. The Libyan minister’s philosophical distinction between human and machine decision-making established a foundational framework that influenced subsequent speakers to address AI limitations more critically. The Kenyan representative’s concept of ‘creative tension’ and rights balancing provided a sophisticated policy framework that moved beyond simplistic trade-offs. The Polish speaker’s innovative proposal to use AI for AI education introduced solution-oriented thinking, while Professor Abbasi’s comments on cultural knowledge and Global South inclusion expanded the scope to encompass heritage preservation and global equity. Together, these interventions elevated the conversation from technical implementation details to fundamental questions about human agency, cultural preservation, rights balancing, and global justice in the age of AI. The discussion evolved from individual country reports to a collaborative exploration of shared challenges and innovative solutions.


Follow-up questions

How can we effectively balance offline and online critical thinking abilities in children’s education?

Speaker

Salma Abbasi


Explanation

This addresses the concern about children’s shortened attention spans and the risk of misleading them into thinking AI is the only way, highlighting the need to develop comprehensive educational approaches


How can we incorporate cultural knowledge and heritage that is not easily documented into AI systems?

Speaker

Salma Abbasi


Explanation

This addresses the gap in AI systems missing social cultural norms and traditional knowledge passed down through generations, which is crucial for culturally appropriate AI development


What specific regulatory frameworks and safeguards should Global South countries implement when rapidly adopting AI?

Speaker

Salma Abbasi


Explanation

This is critical as many developing countries are embracing AI without adequate protections in place, requiring collaborative frameworks that are localized and contextualized


How can we ensure auditable algorithms with transparent data sources and gender-neutral parameters?

Speaker

Salma Abbasi


Explanation

This addresses the need for transparency in AI design and development, particularly regarding what data is used, parameter settings, and bias prevention


What constitutes effective human oversight in AI systems and how can we prevent blind following of algorithms?

Speaker

Salma Abbasi


Explanation

This addresses the critical need for human intervention in AI decision-making processes, especially given that algorithms can make mistakes and data can contain errors


How can we establish robust accountability mechanisms with real consequences for AI-related harm?

Speaker

Salma Abbasi


Explanation

This addresses the need for accountability when duty of care is neglected and serious harm occurs, such as technology-facilitated violence or exploitation leading to severe consequences


How can AI be effectively used as a personalized learning assistant for people with special needs and intellectual disabilities?

Speaker

Jacek Oko


Explanation

This explores the positive potential of AI in education, particularly for adapting content and creating safe learning environments for vulnerable populations


What are the most effective methods for citizens to distinguish manipulated content from authentic content?

Speaker

Jacek Oko


Explanation

This addresses the fundamental challenge of deepfakes and disinformation, requiring practical solutions for media literacy across all age groups


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.