Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content

23 Jun 2025 14:45h - 15:45h


Session at a glance

Summary

This discussion focused on the ethical implementation of artificial intelligence in digital media and journalism, hosted by RNW Media as part of their presentation of the Haarlem Declaration initiative. Lei Ma, director of media innovation at RNW Media, introduced the organization’s mission to support independent digital media in upholding human rights and advancing public good across over 40 countries. Ernst Noorman, Dutch Ambassador for Cyber Affairs, outlined the Netherlands’ approach to information integrity online, emphasizing human rights as the foundation for AI policy, the importance of multi-stakeholder collaboration, and the need for algorithmic transparency and accountability.


The session featured case studies from global south organizations demonstrating practical AI implementation challenges. Taysir Mathlouthi from Hamleh, a Palestinian digital rights organization, discussed their work on platform accountability during conflicts and their development of localized AI models for content moderation in Hebrew and Arabic languages. Sanskriti Panday from Yuva, a youth-led organization in Nepal, shared how her team uses AI tools like ChatGPT and Canva while maintaining ethical guidelines, particularly when addressing sensitive topics around sexual and reproductive health rights.


Laura Becana Ball from the Global Forum for Media Development highlighted the importance of including journalist voices in digital governance discussions and introduced the Journalism Cloud Alliance to make AI tools more accessible to newsrooms worldwide. The discussion culminated with the presentation of an ethical AI checklist based on six guiding principles from the Haarlem Declaration, designed to help media organizations implement responsible AI practices while addressing challenges like checklist fatigue and resource constraints in global majority countries.


Key points

## Major Discussion Points:


– **Introduction of RNW Media and the Haarlem Declaration**: Lei Ma presented RNW Media as an international media development organization focused on supporting independent digital media to uphold human rights. The session centered around introducing the Haarlem Declaration – an international commitment to promote ethical AI in digital media with six ethical principles and practical actions, including an ethical AI checklist.


– **Government Policy Framework for Information Integrity**: Ernst Noorman from the Dutch Ministry of Foreign Affairs outlined key policy elements for maintaining information integrity online, including human rights protection, legal/regulatory measures, AI governance, media pluralism, digital literacy, and algorithmic transparency. He emphasized multi-stakeholder approaches and referenced the EU’s Digital Services Act as a model.


– **Real-world AI Implementation Challenges from Global South Organizations**: Partners from Hamleh (Palestine) and Yuva (Nepal) shared practical experiences using AI tools in conflict-affected and resource-constrained settings. They discussed challenges including algorithmic bias, environmental concerns, content authenticity verification, and the need for localized AI solutions that understand cultural contexts and minority languages.


– **Practical Framework for Ethical AI Implementation**: The session presented an “AI-supported” approach that centers people over technology, emphasizing human oversight rather than automation. Discussion focused on translating ethical principles into practical checklists that organizations can use throughout their workflow, from planning to content production.


– **Organizational Implementation Barriers and Solutions**: Participants addressed challenges in implementing ethical AI frameworks, including checklist fatigue, resource constraints, rapidly evolving technology, and the need for organizational buy-in. The discussion emphasized making ethical considerations part of regular workflow rather than additional bureaucratic burden.


## Overall Purpose:


The session aimed to introduce the Haarlem Declaration’s ethical AI framework and gather input on developing practical tools (particularly an ethical AI checklist) that media organizations and civil society groups can use to implement responsible AI practices in their daily work.


## Overall Tone:


The discussion maintained a collaborative and constructive tone throughout. It began formally with organizational presentations but became increasingly interactive and practical as speakers shared real-world challenges and solutions. The tone was earnest and solution-oriented, with participants openly discussing both the potential and limitations of AI tools. There was a sense of urgency around addressing ethical AI implementation, balanced with realistic acknowledgment of resource constraints faced by organizations, particularly in the Global South.


Speakers

– **Lei Ma**: Director of Media Innovation at RNW Media, coordinator for the IGF Dynamic Coalition on the Sustainability of Journalism and News Media


– **Ernst Noorman**: Ambassador for cyber affairs from the Dutch Ministry of Foreign Affairs


– **Sanskriti Panday**: President of Yuva, a youth-led organization from Nepal that empowers young people to lead on gender equality, SRHR and civic engagement


– **Laura Becana Ball**: Works with GFMD (Global Forum for Media Development), a network of more than 200 organizations working to support journalism and media development


– **Taysir Mathlouthi**: EU advocacy officer at Hamleh, the Arab Center for the Advancement of Social Media (Palestinian digital rights organization)


– **Participant 1**: Role/title not specified


– **Participant 2**: Role/title not specified


**Additional speakers:**


– **Insaf Ben-Nassar**: IGF coordinator, introduced by Lei Ma at the beginning; appears to be the session moderator transcribed as Participant 1


Full session report

# Discussion Report: Ethical Implementation of Artificial Intelligence in Digital Media and Journalism


## Introduction and Context


This discussion was hosted by RNW Media as part of their presentation of the Haarlem Declaration initiative, examining the ethical implementation of artificial intelligence in digital media and journalism. The session was coordinated by Lei Ma, Director of Media Innovation at RNW Media, who also serves as coordinator for the IGF Dynamic Coalition on Sustainability of Journalism and News Media.


The conversation brought together representatives from government, civil society organisations, media development networks, and advocacy groups, creating a multi-stakeholder dialogue spanning perspectives from different regions. Lei Ma noted that one partner from the Middle East was unable to attend due to being “stuck at an airport” because of the current crisis, highlighting the real-world challenges facing international collaboration.


The discussion focused on practical challenges of implementing ethical AI frameworks while addressing resource constraints and operational realities faced by media organisations and civil society groups. Due to overlapping sessions, some speakers needed to leave early, which affected the planned interactive elements.


## Organisational Presentations and Frameworks


### RNW Media and the Haarlem Declaration Initiative


Lei Ma introduced RNW Media as an international media development organisation supporting independent digital media to uphold human rights and advance public good across more than 40 countries. The organisation focuses on media viability and information integrity.


The central presentation focused on the Haarlem Declaration, described as an international commitment to promote ethical AI in digital media. This initiative encompasses six ethical principles designed to guide media organisations in responsible AI implementation, accompanied by practical tools including an ethical AI checklist. Lei Ma mentioned that printed reports and QR codes were available for participants.


A key concept introduced was the “AI-supported approach,” which centers people over technology and emphasises human oversight rather than automation. This approach distinguishes between AI as replacement versus AI as augmentation, advocating for assistance rather than automation in media workflows.
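To make the distinction concrete, here is a minimal sketch of what an AI-supported (rather than AI-automated) publishing step could look like. It is illustrative only; the names (`Draft`, `ai_supported_publish`) are assumptions rather than anything presented in the session. The AI may produce the draft, but a named human review is the gate before anything is published.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Draft:
    """A content draft plus provenance, so AI involvement stays visible."""
    text: str
    ai_assisted: bool
    reviewer: Optional[str] = None
    approved: bool = False

def ai_supported_publish(draft: Draft,
                         human_review: Callable[[Draft], bool],
                         reviewer: str) -> bool:
    """Publish only after a named human has approved the draft.

    However the draft was produced, the human decision is the gate;
    AI output never auto-publishes. Returns True if published.
    """
    draft.reviewer = reviewer
    draft.approved = human_review(draft)
    if not draft.approved:
        return False
    label = " [AI-assisted]" if draft.ai_assisted else ""
    print(f"Published{label}, approved by {draft.reviewer}: {draft.text[:60]}")
    return True

# Usage: the lambda stands in for a real editorial review step.
d = Draft(text="Explainer on algorithmic transparency rules...", ai_assisted=True)
ai_supported_publish(d, human_review=lambda dr: bool(dr.text.strip()),
                     reviewer="editor@example.org")
```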


### Government Policy Framework


Ernst Noorman, Ambassador for Cyber Affairs from the Dutch Ministry of Foreign Affairs, outlined governmental approaches to maintaining information integrity online. He emphasised that human rights must serve as the cornerstone of information integrity policy, specifically highlighting freedom of expression and access to information.


Noorman advocated for legal and regulatory measures that comply with international human rights laws while avoiding restrictive legislation. He prioritised multi-stakeholder collaboration between governments, technology companies, civil society organisations, and academic institutions. He referenced the EU’s Digital Services Act as a framework example, emphasising algorithmic transparency and accountability measures.


Additional priorities included AI governance frameworks, media pluralism protection, digital literacy initiatives, and transparency requirements for algorithmic decision-making processes. Noorman also mentioned visiting RNW Media’s offices in Haarlem, providing context for the collaboration.


## Perspectives from Civil Society Organisations


### Digital Rights in Conflict Settings


Taysir Mathlouthi from Hamleh, the Arab Centre for the Advancement of Social Media, provided perspective on AI implementation in conflict-affected settings. Hamleh works on platform accountability and develops localised AI solutions for Arabic and Hebrew-speaking communities.


Mathlouthi described how “war is not only offline but online as well,” introducing concepts of digital dehumanisation and positioning AI as potentially weaponised. He discussed how generative AI can contribute to dehumanisation and how identification tools may be employed in targeting within conflict zones.


His organisation’s practical work includes developing AI models for hate speech classification in Hebrew and Arabic languages, using localised data that understands cultural and linguistic nuances. This addresses gaps in mainstream AI tools that often fail to comprehend Global South contexts and languages.
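As a rough illustration of what such a localised classification pipeline can involve, the sketch below fine-tunes an open multilingual encoder on locally annotated data using the Hugging Face libraries. The base model, label count, and data file are assumptions made for the example; Hamleh’s actual models, taxonomy, and data are not public.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

BASE = "xlm-roberta-base"   # open multilingual encoder covering Arabic and Hebrew
NUM_LABELS = 3              # hypothetical taxonomy, e.g. neutral / hate speech / incitement

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=NUM_LABELS)

# The localisation lives in the data: a CSV with `text` and integer `label`
# columns, annotated by people who know the regional dialects and context.
data = load_dataset("csv", data_files={"train": "local_annotations.csv"})["train"]

def tokenize(batch):
    # Fixed-length padding keeps the default collator simple.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-hate-speech-model",
                           num_train_epochs=3),
    train_dataset=data,
)
trainer.train()
```

The design point is that the contextual knowledge sits in the annotations and label definitions, not in the architecture: the same fine-tuning recipe yields very different classifiers depending on who labels the data.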


Mathlouthi advocated for an “ethics by design” strategy, arguing that ethical considerations must be built into AI systems from inception rather than addressing risks after they occur. He emphasised the need for affected communities to take leadership in AI development rather than solely adapting existing solutions.


### Youth-Led Organisation Challenges


Sanskriti Panday, President of Yuva, a youth-led organisation from Nepal focused on gender equality and civic engagement, discussed practical challenges faced by small organisations implementing ethical AI practices.


Panday described AI as becoming essential for resource-constrained organisations, creating tension between practical necessities and ethical considerations. Her organisation uses tools including ChatGPT, Canva, and Grammarly while attempting to maintain ethical guidelines, particularly when addressing sensitive topics around sexual and reproductive health rights.


She highlighted the complexity of maintaining ethical oversight in distributed organisations, noting difficulty controlling AI use among local partners across different regions of Nepal. Panday identified critical needs including AI literacy training, toolkits for ethical use, better alternatives to current tools, and transparency mechanisms for content verification.
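One of those transparency mechanisms, labelling published content as AI-generated and recording where facts were checked, can be made routine with very little tooling. The sketch below is a hypothetical illustration of such a disclosure record, not Yuva’s actual practice; production-grade provenance would more likely build on a standard such as C2PA content credentials.

```python
import hashlib
import json
from datetime import datetime, timezone

def disclosure_record(content: bytes, ai_tools: list[str], sources: list[str]) -> dict:
    """Build a disclosure record to publish alongside a content asset."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # ties the label to exact bytes
        "ai_generated": bool(ai_tools),
        "ai_tools": ai_tools,        # e.g. ["ChatGPT", "Canva AI"]
        "sources": sources,          # where claims were double-checked
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = disclosure_record(
    b"Myth-busting post on reproductive health ...",
    ai_tools=["ChatGPT"],
    sources=["WHO fact sheet"],
)
print(json.dumps(record, indent=2))
```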


She also raised environmental sustainability concerns: because AI video generation carries a heavy environmental footprint, her organisation treats avoiding it as non-negotiable.


### Media Development Network Perspective


Laura Becana Ball from the Global Forum for Media Development (GFMD) provided insights from a network representing more than 200 organisations protecting media freedom and supporting journalism worldwide. She emphasised ensuring journalist voices are represented in digital governance discussions.


GFMD engages in major policy initiatives including the EU Digital Services Act, AI Act, and Global Digital Compact. Becana Ball highlighted rising expenses of AI tools and cloud computing infrastructure as threats to public interest journalism sustainability.


In response, GFMD launched the Journalism Cloud Alliance, aimed at making AI tools and infrastructure more accessible and affordable for newsrooms worldwide, addressing resource constraints that prevent many media organisations from implementing ethical AI practices.


## Implementation Framework Discussion


### The AI-Supported Approach Details


The discussion explored operationalising the people-centred approach through practical tools, particularly the ethical AI checklist being developed as part of the Harlem Declaration initiative. The checklist guides organisations through ethical considerations at various workflow stages, from planning to content production and distribution.
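One way to keep such a checklist inside the workflow rather than beside it is to represent it as structured data keyed by workflow stage, so each stage surfaces only its own reflection prompts. The sketch below paraphrases questions raised in the session; the structure itself is an assumption, not the published tool.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    notes: str = ""        # space for reflection, not a yes/no tick box
    discussed: bool = False

CHECKLIST = {
    "planning": [
        ChecklistItem("Who built the AI tools we plan to use, and who funds them?"),
        ChecklistItem("Do the tools handle our local languages and contexts?"),
    ],
    "production": [
        ChecklistItem("Is human oversight in place before anything is published?"),
        ChecklistItem("What is the environmental footprint of this use of AI?"),
    ],
    "distribution": [
        ChecklistItem("Are we transparent with audiences about AI-generated parts?"),
    ],
}

def open_items(stage: str) -> list[str]:
    """Prompts a team still needs to sit with at a given workflow stage."""
    return [item.question for item in CHECKLIST[stage] if not item.discussed]

print(open_items("planning"))
```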


The conversation addressed practical barriers including “checklist fatigue” and resource constraints, particularly for Global South organisations. However, the discussion reframed checklists from compliance tools to reflection frameworks, emphasising meaningful consideration of responsible decision-making when using AI tools.


The approach emphasises organisational buy-in and integration of ethical considerations into regular workflows rather than treating them as additional bureaucratic processes.


## Key Areas of Agreement and Tension


### Shared Principles


Speakers demonstrated consensus around human rights principles as foundational to AI governance, with agreement spanning freedom of expression, access to information, and protection of vulnerable communities. There was also broad agreement on the importance of multi-stakeholder collaboration and transparency requirements for AI systems.


Environmental considerations emerged as a shared concern, with multiple speakers independently prioritising environmental impacts of AI deployment.


### Different Approaches


Tensions emerged around regulatory timing and approaches. While Noorman advocated for frameworks using existing legal structures, Mathlouthi emphasised “ethics by design” strategies building ethical considerations into systems from inception.


Different perspectives also emerged on implementation strategies, with some focusing on practical checklist implementation within existing structures while others emphasised fundamental redesign of AI development processes.


## Ongoing Challenges


Several challenges remained unresolved, including preventing checklist fatigue while ensuring organisational buy-in, addressing resource constraints of small organisations, and managing content verification in rapidly evolving technological contexts.


Broader systemic challenges include addressing platform biases in conflict-affected settings and ensuring meaningful participation of marginalised communities in AI governance discussions.


## Action Items and Next Steps


The discussion concluded with concrete commitments including encouraging endorsement of the Haarlem Declaration and continued co-creation of the ethical AI checklist tool with partner organisations.


Participants were invited to join the Dynamic Coalition on Sustainability of News Media and Journalism, while GFMD’s Journalism Cloud Alliance was highlighted as addressing practical accessibility challenges.


The conversation demonstrated that while consensus exists around ethical AI principles, substantial work remains in translating these into practical, sustainable implementation strategies that account for diverse contexts, resources, and challenges faced by media organisations and civil society groups globally.


Session transcript

Lei Ma: Thank you very much. And now I would like to welcome the coordinator of the IGF, Insaf Ben-Nassar, to say a few words. Good afternoon, and welcome to this session. My name is Lei Ma, I’m the director of media innovation at RNW Media, and I’m also the coordinator for the IGF Dynamic Coalition on the Sustainability of Journalism and News Media. Here is today’s agenda. I will do a brief introduction of RNW Media, then we will have two guest speakers, followed by AI user stories from our partners from the global south, and we will also showcase the ethical AI checklist. So, RNW Media is an international media development organization based in Haarlem, the Netherlands, dedicated to harnessing the power of independent digital media to uphold human rights and advance the public good. Our vision is an open society powered by digital media, and our mission is to support public interest digital media to champion human rights and advance the public good. RNW Media is a global organization: we work in more than 40 countries across Europe, North Africa, the Middle East, West and East Africa, East Asia and Central and South America. We also have a training centre, RNTC, which was set up in 1968. In the past 57 years, we have provided journalism training in more than 60 countries and trained professionals from over 110 countries. So, what do we do? We do three things. We create innovative and locally relevant digital media learning solutions that drive engagement and impact. We facilitate strategic media co-creation, partnerships and movement building. And we engage in technology and social media regulation processes and tech platform accountability. Basically, we are focusing on two things: media viability and information integrity. For media viability, we focus on digital inclusion and global connectivity, so that public interest media in the global south can achieve financial independence and editorial freedom. For information integrity, we address the growing challenge of information disorder by rebuilding public confidence in the information ecosystem and safeguarding digital safety and well-being. We also work on three cross-cutting themes: ethical technology and AI deployment, democratic discourse on gender, and promoting inclusive migration narratives. As I introduced, RNW Media is a service provider: we provide nine different types of services, and we are also a global partner, with over 1,000 partners across different countries and universities. We have over 50 solutions, supported by our ISO 9001 certification, across ethical AI and emerging technology, digital safety and accessibility, journalism, digital media, and so on. There is a table at the entrance where you can find flyers of our solution portfolio if you want to have a look.
So, we reach over 300 million people every year, mainly young people from the global south, with a social media reach of almost 350 million in 2024. 91 per cent of our audience report knowledge and attitude change, and 79 per cent report significant behavior change. As I said, we have provided over 1,000 solutions, we have a 91 per cent partner satisfaction rate, and 94 per cent of our alumni reported a positive career change. One thing I want to mention is that in the past three or four years, together with our partners, we successfully worked with Meta to change their content advertising and moderation policies. Last but not least, I would like to introduce a new initiative from us called the Haarlem Declaration. You can find a copy at the entrance on that table. It is an international collective commitment to promote ethical AI in digital media and champion AI for advancing an inclusive, safe, and reliable digital space. It was born in the city of Haarlem, a city known for innovation in digital media, and it was created together with 88 public interest media outlets from the global south and experts from 34 countries. It outlines six ethical values and principles and six tangible actions, including an ethical AI checklist, and this checklist will be the centre of today’s session. So today, I would like to welcome you to this event, and I would like to invite media outlets, CSOs, and academic organisations to join us as partners in this initiative to promote ethical AI. I would like to encourage you to talk to us, and if you want to join this international commitment, this international movement, please sign your name and reach out to us. Okay. So, as an organisation based in the Netherlands, it is my honour to present our first guest speaker, Mr Ernst Noorman, the Ambassador for Cyber Affairs from the Dutch Ministry of Foreign Affairs. Mr Noorman, the floor is yours.


Ernst Noorman: Thank you very much, Lei, and it’s always a pleasure to be together with RNW Media at an event. I really can recommend, if you ever have a chance to be in Haarlem or the Netherlands, to go and visit their offices. It’s in a beautiful building, which was a prison before, but is now a centre with lots of activity and start-ups and, of course, RNW Media, so please take the opportunity, if you have a chance, to visit them. Now, on information integrity: for us, information integrity online is essential for promoting enjoyment of freedom of expression, which includes the right to seek, receive, and impart information and ideas. This is why, in 2023, we started the global declaration on information integrity together with Canada, which was signed by 36 countries. With this approach, we tried to formulate a positive agenda on information online, rather than just talk about banning or debunking disinformation. Now, what do we see as the main elements of information integrity? First of all, human rights should be at the core of your policy on this. You must uphold freedom of expression, opinion, and access to information as fundamental rights. You must ensure that measures to protect information integrity comply with international human rights law, and especially with the International Covenant on Civil and Political Rights. And for that, you also have to have legal and regulatory measures in place. You have to implement appropriate laws and platform governance aligned with privacy and human rights obligations, and at the same time you have to avoid restrictive laws that infringe on digital freedoms. I think the Digital Services Act from the European Union is an excellent example of how the EU tries to do this. On AI, it’s important to manage emerging technologies responsibly and to monitor and regulate generative AI and emerging tech through multi-stakeholder dialogue. You have to ensure any responses are appropriate, proportionate to risk, and uphold international human rights law. Again, I mention the EU AI Act as an example of how we have tried to do this, indeed also through a multi-stakeholder approach, trying to hear all the insights and voices from the different partners in the digital community. A further important part is to promote diversity and media pluralism. We have to support independent, pluralistic media and diverse content, including local languages and cultures. I think promoting local languages is also an extremely important part; it will be in the WSIS discussion. We have to safeguard journalism and access to credible information to counter disinformation. Another point is to strengthen digital and media literacy, and I think this is also where RNW Media comes in with their experience in this field. We have to invest in civic education to empower individuals to critically assess online content. We have to build societal resilience against misinformation and online harms, and further we have to protect vulnerable and targeted groups. Unfortunately, this subject is very relevant today. We have to address defamation that targets women, LGBTI people, persons with disabilities, indigenous peoples and other marginalized groups, and further we have to embrace a multi-stakeholder approach. We have to collaborate with governments, tech companies, civil society, academics and experts. I always say the multi-stakeholder approach is not a religion.
It is there because we believe in it: it created the internet as it is today, and it will further the resilience of the internet and its pluralistic approach. We have to share knowledge and craft better informed responses to information threats, and we have to foster global cooperation. We have to promote digital inclusion and freedom through partnerships and global fora like the Forum on Information and Democracy, and we have to encourage cross-border knowledge sharing and joint actions. Another important point is to ensure algorithmic transparency and accountability. We have to disclose how algorithms rank, recommend and suppress content, in user-friendly language. We have to implement oversight mechanisms to ensure responsible algorithm use. We learned a hard lesson in the Netherlands with the social security program which harmed large groups in society. That’s why we created an algorithm registry, which also includes a human rights assessment for new algorithms introduced by the government; close to a thousand algorithms have now been registered, and we also encourage the private sector to register their algorithms. So I think it is a strong example of how we work together within the government, but also with the private sector, to be transparent about the use of algorithms. Further, we have to safeguard political and electoral integrity. Last year was the year with the most elections in history, and a lot of the discussion was of course about the use of AI to influence elections. So we have to develop clear policies for political and issue-based ads to protect democratic processes, and we have to support transparency in content moderation and appeal mechanisms. A final point I would like to make is to promote trust and integrity. We have to fight the monetization of disinformation. We have to ensure content governance is ethical and transparent, and we have to partner with academia and civil society to identify trustworthy sources and create tools for users. As I said before, the Netherlands sees the European Digital Services Act as an important step in taking action to achieve information integrity and address harmful content online while protecting freedom of expression. And we are now working with the Freedom Online Coalition, with its 41 members and advisory network, and the OECD to develop this work on information integrity further.


Lei Ma: Thank you very much. Thank you so much, Mr. Ernst. Okay, we should have another speaker from GFMD, but Laura is not here yet, so we will skip this part. But I encourage you to check the GFMD website; it’s the Global Forum for Media Development, and they are doing amazing work. So let’s move on. We already heard the story from the Netherlands; let’s switch our focus to more countries from the global south. So Insaf, over to you.


Participant 1: Yeah, thank you very much. We were supposed to have three partners, but unfortunately one of our partners was not able to make it due to the current crisis in the Middle East; he is literally stuck at an airport. So we’re very pleased to have two of our partners with us, one in person and the other one online. I’m pleased to introduce Taysir Mathlouthi, the EU advocacy officer at Hamleh, the Arab Center for the Advancement of Social Media. Hamleh is a leading non-profit organization advocating for Palestinian digital rights and working to create a safe, fair and free digital space. So welcome, Taysir. And joining us online we have Sanskriti, the president of Yuva, a youth-led organization from Nepal that empowers young people to lead on gender equality, SRHR and civic engagement. I would like to double check with the technicians that our online participants also have access to the mic to participate. Thank you so much. I will start with Taysir. Could you please share a little bit more about the very important and amazing work that Hamleh is doing, and more about the organization?


Taysir Mathlouthi: Yes, thank you very much. So as you already said, Hamleh is a Palestinian digital rights organization. We are based in Palestine, in Israel, but also in different regions in Europe and in the United States. One of the most important areas of our work is platform accountability, especially in conflict-affected settings: when a conflict happens now, war is not only offline but online as well. So we’ve been working mostly on content moderation issues, especially after October 7th, on how specific platforms such as Meta have been modifying their internal policies and have been taking down a lot of content from Palestinian content creators, but also journalists. And we’ve been advocating that taking down those accounts and censoring that content is obviously against freedom of opinion and expression, but also international humanitarian law and international human rights law. We’ve also been working on the use of AI in digital warfare: how generative AI can lead to digital dehumanization, especially the dehumanization of the Palestinian people, and how AI can also be used as a weapon of war, especially when it comes to identification tools that are used by different armies to target people within the Gaza Strip.


Participant 1: Thank you very much. And online, Sanskriti Panday, can you… Yeah, thank you very much. Sanskriti, would you like to please introduce the work that you’re doing?


Sanskriti Panday: Hello, everyone. My name is Sanskriti Panday. I’m from Yuva. We are an organization that is completely youth-run and youth-led, based in Nepal. Our three thematic core areas are, first, active citizenship, which basically means coming together to make a difference, connecting young people to their civil rights and making them know their rights. The second is sexual and reproductive health and rights. And the third is a research unit that supports these two thematic core areas, so our advocacy is very evidence-based. Yeah, so that is the work we do, and we focus on youth empowerment.


Participant 1: Thank you very much, Sanskriti. And do you use AI tools in the work related to digital media?


Sanskriti Panday: Yeah. So we have a small team in our group; we don’t have a big communication team with dedicated Photoshop designers or illustrators. So we do use AI, but just as a tool; we don’t completely rely on it, because we work on sensitive issues and sensitive topics. We’re trying to bust myths about abortion and sexual and reproductive health and rights, which is quite taboo in Nepal. So we are using AI just as a tool. We use ChatGPT, we use Canva, including the Canva AI tools, and then we use Grammarly.


Participant 1: Yeah. And Hamleh, how are you using AI tools in your work?


Taysir Mathlouthi: Yes, so at Hamleh, when we talk about ethical AI, and especially in our context of conflict-affected settings, the main challenge we face with generative AI is algorithmic bias, because most AI tools that we’re currently using are developed in the northern hemisphere and do not understand or grasp all the issues we’re facing within our context, or all the cultural aspects and different languages. So at Hamleh, we’ve been working on our own AI models designed to classify hate speech and violence on social media platforms and different chat platforms in two languages, Hebrew and Arabic. Those models are really unique because they are built on words, terms, definitions and data labeling that reflect the specific region we live in and the countries we work on. So narratives are contextualized, and identities are defined within their intersectionality, rather than relying on generic or externally imposed standards. We are really using ethical AI as a localized approach, which is crucial for creating more inclusive AI systems that truly understand the nuances of global majority communities, especially in areas where big tech content moderation tools unfortunately often fail.


Participant 1: So you do have discussions in your organization about the use and how to use AI ethically and mindfully in the work that you’re doing?


Taysir Mathlouthi: Yes, we do. And we don’t do it only under the umbrella of freedom of opinion and expression; we also do it around the environmental impact of AI. When we build our own tools, especially these AI models for content moderation, we’ve been focusing on what we call weak AI systems: language models that do not require very big processors. Although the training part can lead to a lot of water consumption, when we use the tool now it consumes far less than a tool such as ChatGPT or the bigger LLMs we currently use in our daily lives. We also take into account the aspect of freedom of expression and opinion, and how important it is, as a Palestinian organization, to take into account all the disparities and differences within the Hebrew language, to grasp all the content that could be considered harmful in Hebrew, but also in Arabic. So we’re trying to keep this balance, focusing on both sides of the war and how harmful content can come from both parts. Obviously, we also take into account the anonymity of the content we’re using: we’re not trying to do profiling, we’re not trying to identify sources, we mostly use the data anonymized, and we don’t share information about sources at all. And the last part is that our tool is not open source yet, but we would like to make it open source later on, because we want other NGOs in other contexts to be able to use the same systems with their own languages and issues.


Participant 1: Thank you very much. Sanskriti, do you have deliberations in your organization about how to use AI more ethically in the work that you’re doing?


Sanskriti Panday: Yeah. So we have quite long discussions about it, because it’s a new tool for us and everyone has been using it quite frequently at the individual and organizational level. So we have had this conversation multiple times on how we should do it ethically, especially when we are working with so many stakeholders. Working around sexual and reproductive health and rights, we have to be very mindful of how and where we are using the data; we are not exposing names. So we have had that conversation, and I agree with the previous speaker about the water use: even globally speaking, it is estimated that by 2027 we will be using 4.2 to 6 trillion liters of water, which is huge. So the environmental impact is really huge, and we are trying to make it as ethical as possible. There’s always a human touch, and there are always biases. Especially when we as an organization are trying to deal with biases and stereotypes, if we use AI completely and depend on it without our own human touch, it makes it quite difficult to do our work ethically, because of the language it uses, how things are presented, and what kind of views are narrated in the general sources; a lot of things are generalized. So we have this conversation about when we should use it, because I feel like AI is the new way of life. As a small NGO with a lot of resource constraints, we are somehow forced to use AI; if we don’t, it’s hard to keep up. But we do maintain authenticity: we label it and say, this is AI generated, this is where we took our source from. Even if we get information from AI, we double check it, because we don’t want to spread any misinformation. And then the content guidelines: we don’t have a very proper guideline, but we have asked our communication team to be very mindful. However, it’s quite tricky in Yuva, because it’s not just the communication team; we also have our local youth partners, youth champions, in different parts of Nepal. They all use social media to promote their content and the work they have been doing in their own places. We give them complete control over what they want to post and promote. We look at it, but they have the autonomy to post their work in whatever way they like, so it’s very hard to internally control everything. That has been quite challenging. The discussions are obviously there, but we don’t have a framework yet for what should and shouldn’t be done. Video generation with AI, though, is non-negotiable for us, because we know how much damage it causes to the environment. But we do use AI when we are having blocks, and tools like Grammarly and ChatGPT are quite often used, yeah. (Participant 1: Yeah, a lot of challenges indeed. What kind of support would help you and your organization implement it in an ethical way?) Sorry. Yeah. So I think it would really help if we had literacy trainings and, like we said, better alternatives; it would be very helpful to be aware of them. If there is a better alternative to ChatGPT where we can also prompt and do idea generation, we would love to know. And then toolkits: what should be done, what is ethical and what is not.
We may not know whether what we’re doing is causing harm. If we had that toolkit, we could also transfer it to the youth champions, because we often run orientations for them on how to make content, so we could also integrate how they should be using AI ethically. We cannot control how every individual acts, but what we can provide is these trainings and these guidelines, and then have our communication team do their best to moderate. So I think those are the things that would really help. Also, when we run art competitions, it is very hard, because we need to accept digital art as well, and if we just don’t review digital art, it’s unfair to participants who have put in so much time and effort. But right now, we don’t know whether a piece is AI generated or not, right? So it also becomes tricky at those times. If we had a way to verify what is AI generated and what is authentic, that would also help a lot.


Participant 1: Yeah. Thank you very much. And for Hamleh?


Taysir Mathlouthi: So when it comes to Hamleh, we are taking the ethics-by-design strategy. We really want AI to be designed ethically from the beginning, and we don’t want regulations, whether at the local, regional or international level, to only try to regulate the risks that are already taking place with AI. So according to us, what is important is obviously investment in AI literacy, but also AI within education. We need more people being trained and educated in the use of AI within global majority countries, and we need people who take a holistic approach to AI, to decrease algorithmic biases as much as possible. We need people who understand the different contexts. We need AI tools that take small languages into account without prioritizing northern hemisphere languages. We also consider it important to push for more AI models and LLMs developed within global majority countries, by people who understand their own contexts and communities. Obviously, we consider that AI could have huge potential and could maximize the benefits for human rights principles, but at the same time, if we just reproduce what we already have, we might face the same issues in the coming years. So global majority countries should also take the lead when it comes to AI projects and implementation.


Participant 1: Thank you very much. I don’t know if that was an AI-generated baby sound, because I don’t see a child, but I can definitely hear it. Oh yes, there. Thank you very much for sharing your experience, your work and the goals that you have when it comes to the use of AI in an ethical way. I was looking in the audience to see if Laura was already amongst us. I would like to invite you to the stage. Thank you very much. And if we can have access to the PowerPoint again, showing it in the background. Yes. Thank you, Laura, the floor is yours.


Laura Becana Ball: Well, I wanted to first introduce, for those who don’t know GFMD: we are a network of more than 200 organizations working to support journalism and media development; RNW Media is one of our members. We are working together to protect and promote media freedom and support journalism worldwide. We do this through collaboration, knowledge exchange and coalition building, both with our members and with other partners. I’m happy to see Hamleh here as well; we’ve worked with them a lot on EU media advocacy, where we have an informal coalition. What we want to make sure is that our community is represented and actively engaging in these key policy discussions on digital governance, for example. We want to bring their knowledge, expertise and recommendations, especially because we want policies on social media or AI to be developed and deployed in ways that uphold media freedom, editorial independence and human rights; otherwise our voices are not in those discussions. As I was saying, one of these examples is the EU Media Advocacy Group. We work on the EU Digital Services Act, also a bit on the AI Act, and also on the European Media Freedom Act, which includes protections online for media and journalism, also against surveillance. At a more global level, we’re engaging with the UN on the Pact for the Future and the Global Digital Compact, which also has a section on AI, and now, for example, on the WSIS plus 20 review. In the context of the IGF, I think it’s very important to mention that we are the secretariat, and Lei is one of the newly elected co-coordinators, of the Dynamic Coalition on the Sustainability of News Media and Journalism. We’ve just launched a report, I think we have the QR code, or if not we can share it afterwards, on how AI also affects the sustainability of journalists, and there are a lot of case studies there, so I really would like to encourage you to read it; it’s here, we have the printed copy. I was just in a session presenting this, together with Surabhi, and all these activities. What we really want is to strengthen the presence of journalist voices in these digital governance discussions, because this is not a niche issue: it is essential for the future of democracy, for access to information, freedom of expression, and inclusivity and accessibility. In the meantime, we also know, and it’s mentioned in the report, that the rising expenses of cloud computing, AI tools and big data infrastructures are threatening the future of public interest journalism, and the impact that can have at a moment when our democratic future is at risk. So among these collaboration efforts on shaping policies, we also have practical alternatives. We just launched the Journalism Cloud Alliance, a joint initiative with OCCRP and 33 other members that, through collaboration, collective action and strategic partnerships, aims to make these services, these AI tools, this cloud-based infrastructure, more accessible, secure, affordable and sustainable for newsrooms worldwide. So that would be a bit of an overview of our work. Thank you.


Participant 1: Thank you very much, Laura.


Lei Ma: Yeah, thank you, Laura. Unfortunately this session is overlapping with another session, which is why the Dutch Cyber Affairs Ambassador already left, but he is going to stay here for several days, so you can definitely meet him and reach out to him if you have any questions. So let’s move on. I would like to leave the last 17 minutes to my colleague Surabhi, who will share our ethical AI checklist, and we really, really want to work with you on this. So I’m going to turn it over to Surabhi, who is going to talk about the Haarlem Declaration and the ethical AI checklist.


Participant 2: Thank you, Lei, and apologies for not being here earlier; I was, as Laura mentioned, in a session with her which was also overlapping. I’m sure my colleagues have already shared a bit about the Haarlem Declaration, and you can find a link to it in the chat. But I wanted to talk a bit about our approach to AI at RNW Media, where we center people over technology, and people over AI, and hence we call it the AI-supported approach. I’ll also mention, as highlighted by our partner organizations, the need for an ethical AI checklist and a holistic approach to AI, and that’s what we’re trying to do in the next couple of years. So we can move to the next slide. Yes. As I mentioned, we are more interested in assisting people with AI technology rather than replacing people with AI. Of course, job loss and the replacement of people with AI is a pressing concern, but as much as that’s important, I think it’s very relevant to also understand the importance of human oversight and human agency, especially when it comes to newsrooms, media organizations and journalists, and how important it is to have a human element in everything we do along the workflow of media work. Hence, it’s very important for us to pay attention to the assistance and support part, rather than automation. Maybe I can, yes, sorry. Yes. I also wanted to take you through, I mean, ChatGPT came around in 2023, so it’s been a big learning exercise for us internally as an organization on what it has meant to use AI tools, and this provides a glimpse of what we have been doing in the last few years as a media development organization in terms of internal learning and processes, creating our own repository of understanding on what these AI tools actually mean in practice. This has come in many different forms: internally, through the lens of actually using and experimenting with some of these AI tools to understand how they work in practice, what their limitations are, and how well they integrate in our current workflows, but also developing our own collective understanding of different facets of AI. So part of it is learning what artificial intelligence actually is, what algorithms are, and what this whole new jargon of terminology that has become quite normalized means, but also what the ethical and problematic implications of using these tools are if we leave human oversight or agency out of AI.
We have also been talking to our partner organizations around the world, especially in global majority, global south countries, where we work with a lot of independent, public-interest media outlets, with civil society organizations like you heard from Nepal, but also digital rights organizations like Hamleh. We talk to them about how they’re using AI, and not just how they’re using it, but also how they’re thinking about the ethical implications of using AI tools, what their concerns and apprehensions about these tools are, and how they envision we can offer more support. Part of the user stories is documented in the report that Laura referred to, where several partner organizations have highlighted the importance of talking about the ethical attributes of using AI tools. And the learning continues, of course. This is not an exhaustive list, but it indicates that we are very interested in continuing our learning as an organization, with our partners, and with various different stakeholders, to develop solutions, for instance looking at AI and journalism, but also taking the next step: we have a blueprint in terms of our guiding principles, and how do we implement them in practice? I think both Taysir and Sanskriti highlighted that in their presentations. So the Haarlem Declaration, as I mentioned, is a blueprint for us in terms of the ethical guidelines that we would like to commit to in practice, and these are the six guiding principles for us. They range from ethical data practices, securing and restoring information integrity, and understanding the explainability of AI tools, to looking at the broader environmental implications, which I think are often left outside of ethical considerations, but are fast becoming an important element as we read more about data centers, including some incredible investigative journalism on how these data centers are being built in communities around the world that are already impoverished and marginalized. So, as I mentioned, we have this Haarlem Declaration, and the question was: what next? How do we think about translating these principles into practice? An important element for us was to understand what we would like people to do with an ethical AI checklist. Now, we have all used checklists in different ways. They can be boring, they can be tiring, they can feel unnecessary and burdensome, so an important attribute is how we prevent checklist fatigue, and I’ll come back to that in a bit. But really, the key elements you want to look at are: understanding the ethical implications; having some real practical things to look at as we work through our everyday tasks with media organizations; having reflection points, how do we think about these issues and how do we collaborate over these aspects; and then also documenting some of these challenges and continuing the conversations, because these are often ethical dilemmas that we will not resolve in one go. So how do we keep coming back to them, and how do we evolve with these discussions over time, with our teams as well as at the individual level? And I have some examples here, which I will skip, because I do want to get to the guiding questions, and I would love to hear from the audience here.
But this is just some examples of how you could implement a toolkit like this in, let’s say, a local newsroom in India that’s trying to do news or storytelling in local languages. Part of this is to highlight that it needs to begin from the very beginning, right? It’s not when you are already doing content production that you start thinking of AI; you start thinking about AI tools at the very beginning, at the planning and setup stage. This includes, for example, deciding what AI tools to use. That itself can be several days of deliberation, and it brings forward questions of: who is building these tools? Where is the funding for these tools coming from? What are the ideologies shaping the development of these tools? What are the financial aspects of pushing these tools into the market? How do we understand the labor rights issues behind the development of these tools, what Karen Hao in her latest book, Empire of AI, calls disaster capitalism? How do we really look at all of these attributes to decide whether the tool we are planning to use is ethical or responsible in its development? These are quite big questions, and we often may not have the time and resources to answer them or to discuss them in detail. But the idea of checklists is not just to check off certain things and move on; it is to really sit with some of these discussions and reflections and think about whether we are making the most responsible decisions as media organizations or civil society organizations when we use these tools. We also work with media makers who are content creators online, on Instagram and TikTok. So another example could be: how do we understand the production of content for social media channels? How do we look at both the publishing and the production aspects? Are we being transparent with our audiences about AI-generated content? I think Sanskriti mentioned in her presentation that it’s often hard for young people to tell whether a content piece is AI-generated or not. Can we bring in that ethical AI checklist framework to understand how transparent we can be with our audience? Can we state that a certain part of this was done by AI, or generated through AI? How much transparency is needed, and what happens after that transparency is accorded to the audience? Because they know that it’s AI generated, does it help them to actually consume that information more meaningfully? Does it help them to make the right decisions about whether the information is accurate, reliable and fact-based? So a lot of those layers and nuances need to be delved into when we are thinking about this checklist as well. Sorry, yeah. So, some of the challenges, and this slide is cut off for some reason, but, ah, okay, this works. As I mentioned, some challenges come about if you are implementing a checklist like this. Checklist fatigue: I think we’ve all been through this in our own organizations; it might be considered a bureaucratic hurdle, and you might just want to move past it in a rushed manner. So how do we address that? Also, thinking about the resource constraints.
We work with a lot of media organizations who are working under really narrow timelines, very restrictive funding and restrictive settings. Do they have the time to really think through and implement this checklist in the first place? The nature of evolving AI technology. I mean, we all know that AI is evolving exponentially, and so a checklist really needs to be considered as a living document. It cannot be a static thing that we plan to use even two months from now, for instance. So these are some of the challenges I think that we will need to keep coming back to, and with our organizations internally, think about how do we address them to really make sense of and ensure that we are using this checklist in the best way possible. So we wanted to do a fishbowl activity, but since we are running out of time, we had some guiding questions, and this is something that we would like to open the floor to the audience. And think about in your own work, if you are implementing a checklist, whether it may be related to AI or another, what have been some of your own learnings around it? How do you ensure that there’s a buy-in from the organization to use a checklist like this? Do you use certain kind of incentives? Are you using certain processes to document whether this checklist is being used efficiently? Is it really effective for people and your colleagues and the organizations you work in? And yeah, these are some guiding questions, but if something else comes to mind, I think this is really the point we would like to hear from you as well.


Participant 1: So I don’t know if there are any quick reflections or ideas or thoughts.


Participant 2: How do you like the idea of another checklist?


Lei Ma: Yeah, sometimes I feel that with a checklist, people may be too busy with their daily work and will not actually use it. But some checklists can start from something really, really small, right? I remember I learned from somebody that you need to stop saying thank you to ChatGPT. A lot of people are still saying thank you to ChatGPT. Okay, I’m looking at the audience. So if there is no one, yeah.


Participant 2: Maybe if I could, not to put you on the spot here, but do you have any reflections on how you create buy-in within the organization to use a checklist like this?


Taysir Mathlouthi: Yeah. I mean, I think that you raised an important point when you said that it could be seen as a very bureaucratic process. So it’s mostly about how we can make it, first of all, compatible with the capacity that we have within the organization, but also include it within our reporting and our workloads. So it’s not something that we just do in addition to our work; it’s really part of the process. Obviously, everyone needs to be included within the process, and that is something difficult to put in place. But what I learned when it comes to ethical AI and the use of AI is that sometimes we have a very narrow idea of how we can use it and how we use it personally. You need conversations; you need to be able to sit with your colleagues, with other organizations, with different stakeholders, to really take into account and understand how AI can be implemented and used, and also, obviously, the risks that could evolve from the beginning to the moment it’s implemented. And as you said, this is a process that is never finished; it is something we have to redo every couple of months, maybe every year, because the risks of AI are changing. But that’s a huge issue: how can we take that into account within our work without adding too much burden? Especially for global majority local organizations, we don’t have that much capacity. How do you reorganize the workforce that you have, as well? Those are really important questions. Otherwise, this will be dismissed as just another bureaucratic process to get done.


Participant 2: Right, yeah. Thank you so much, Taysir, for that. And do we have a question from our online audience? Any reflections from our online audience at this point? No? Okay. So yeah, I think we’re running out of time. We have a minute, or not even a minute, maybe 10 seconds, so I will move on. But this is just to say that if you would like to keep in touch with us and think along with us on how we co-create and implement this checklist tool, please get in touch. Our business cards are at the back table. Also, do check out the Harlem Declaration: read through it and consider endorsing it, especially if you’re a media organization or you work with media outlets that are using AI in their own work. But I think that’s a wrap from us then. Thank you so much for joining us, and if there are any questions, we’ll be happy to take them.


L

Lei Ma

Speech speed

169 words per minute

Speech length

1196 words

Speech time

423 seconds

R&W Media supports public interest digital media across 40+ countries with focus on media viability and information integrity

Explanation

R&W Media is an international media development organization that operates globally to support independent digital media in upholding human rights and advancing public good. The organization focuses on two main areas: media viability (ensuring financial independence and editorial freedom) and information integrity (rebuilding public confidence in information ecosystems).


Evidence

Organization operates in more than 40 countries across Europe, North Africa, Middle East, West and East Africa, East Asia and Central and South America; has training center established in 1968; provided training to professionals from over 110 countries; reaches 300 million people annually with 91% reporting knowledge change and 79% reporting behavior change


Major discussion point

Media Development and Digital Rights Organizations’ Work


Topics

Development | Human rights | Sociocultural


Harlem Declaration represents international commitment to promote ethical AI in digital media with six ethical principles

Explanation

The Harlem Declaration is a new initiative created with 88 public interest media organizations from the global south and experts from 34 countries. It outlines six ethical values and principles along with six tangible actions, including an ethical AI checklist, to promote ethical AI use in digital media.


Evidence

Created with 88 public interest media from global south and experts from 34 countries; outlines six ethical values and principles and six tangible actions including ethical AI checklist


Major discussion point

Harlem Declaration and Ethical AI Framework


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Ernst Noorman
– Taysir Mathlouthi
– Participant 2

Agreed on

Human rights and ethical principles must be central to AI and digital governance


E

Ernst Noorman

Speech speed

131 words per minute

Speech length

950 words

Speech time

433 seconds

Human rights must be at the core of information integrity policy, upholding freedom of expression and access to information

Explanation

Information integrity online is essential for promoting freedom of expression, including the right to seek, receive, and impart information and ideas. Any measures to protect information integrity must comply with international human rights law, particularly the International Covenant on Civil and Political Rights.


Evidence

Netherlands started global declaration on information integrity in 2023 with Canada, signed by 36 countries; focuses on positive agenda rather than just banning disinformation


Major discussion point

Information Integrity and Human Rights Framework


Topics

Human rights | Legal and regulatory


Agreed with

– Taysir Mathlouthi
– Lei Ma
– Participant 2

Agreed on

Human rights and ethical principles must be central to AI and digital governance


Legal and regulatory measures should comply with international human rights laws while avoiding restrictive laws that infringe on digital freedoms

Explanation

Countries must implement appropriate laws and platform governance that align with privacy and human rights obligations. At the same time, they must avoid restrictive laws that infringe on digital freedoms, finding the right balance between protection and freedom.


Evidence

EU Digital Services Act cited as excellent example of how EU tries to cope with this balance; EU AI Act mentioned as example of managing emerging technologies through multi-stakeholder dialogue


Major discussion point

Information Integrity and Human Rights Framework


Topics

Human rights | Legal and regulatory


Disagreed with

– Taysir Mathlouthi

Disagreed on

Approach to AI regulation and governance


Multi-stakeholder approach is essential for collaboration between governments, tech companies, civil society, and academics

Explanation

Effective information integrity requires collaboration between governments, tech companies, civil society, academics and experts. This multi-stakeholder approach is not just ideological but practical: it created the internet as it exists today and will further its resilience and pluralism.


Evidence

Multi-stakeholder approach created the internet as it is today; Netherlands works with Freedom Online Coalition (41 members) and OECD to develop information integrity work


Major discussion point

Information Integrity and Human Rights Framework


Topics

Legal and regulatory | Sociocultural


Agreed with

– Laura Becana Ball
– Taysir Mathlouthi

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Disagreed with

– Taysir Mathlouthi

Disagreed on

AI development and ownership priorities


Algorithmic transparency and accountability require disclosure of how algorithms rank and recommend content

Explanation

There must be disclosure of how algorithms rank, recommend and suppress content in user-friendly language, with oversight mechanisms to ensure responsible algorithm use. This transparency is crucial for maintaining public trust and preventing algorithmic harm.


Evidence

Netherlands created algorithm registry after hard lesson with social security program that harmed large groups; registry includes human rights assessment for new government algorithms and encourages private sector registration; close to 1000 algorithms registered


Major discussion point

Information Integrity and Human Rights Framework


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Sanskriti Panday
– Participant 1

Agreed on

Transparency and accountability in AI systems are crucial


T

Taysir Mathlouthi

Speech speed

132 words per minute

Speech length

1207 words

Speech time

545 seconds

Hamleh advocates for Palestinian digital rights and works on platform accountability in conflict-affected settings

Explanation

Hamleh is a Palestinian digital rights organization that focuses on platform accountability, especially during conflicts when war extends to online spaces. The organization works across Palestine, Israel, Europe and the United States to address digital rights violations.


Evidence

Organization based in Palestine/Israel with presence in Europe and US; works on content moderation issues and platform accountability in conflict settings


Major discussion point

Media Development and Digital Rights Organizations’ Work


Topics

Human rights | Sociocultural


Algorithmic biases in AI tools developed in Northern hemisphere don’t understand cultural contexts and languages of Global South

Explanation

Most AI tools are developed in the Northern hemisphere and fail to understand and grasp issues, cultural aspects, and languages specific to Global South contexts. This creates significant challenges for organizations working in conflict-affected settings and marginalized communities.


Evidence

Hamleh develops own AI models for hate speech classification in Hebrew and Arabic using localized data; models built on words, terms, definitions reflecting specific regional context rather than generic standards


Major discussion point

AI Implementation Challenges in Global South Organizations


Topics

Development | Human rights | Sociocultural


Global majority countries should take the lead in AI projects and implementation to address contextual needs

Explanation

Rather than relying on AI tools developed elsewhere, global majority countries need more people trained in AI use within their contexts and should develop their own AI models and LLMs. This approach would better understand different contexts and decrease algorithmic biases.


Evidence

Need for AI tools that account for small languages without prioritizing northern hemisphere languages; need for people who understand different contexts to take holistic approach


Major discussion point

AI Implementation Challenges in Global South Organizations


Topics

Development | Human rights | Sociocultural


Agreed with

– Ernst Noorman
– Laura Becana Ball

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Disagreed with

– Ernst Noorman

Disagreed on

AI development and ownership priorities


Hamleh develops AI models for hate speech classification in Hebrew and Arabic using localized data and contextualized narratives

Explanation

Hamleh creates unique AI models designed specifically for their regional context, built on localized words, terms, definitions and data labeling. These models reflect the specific region and define identities within their intersectionality rather than using generic or externally imposed standards.


Evidence

Models classify hate speech in Hebrew and Arabic; built on contextualized narratives and intersectional identity definitions; uses anonymized data without profiling or identifying sources; plans to make open source for other NGOs


Major discussion point

Ethical AI Development and Localized Solutions


Topics

Human rights | Legal and regulatory | Sociocultural
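As a rough illustration of what a localized classifier like this involves, consider the sketch below. The pipeline, texts, and labels are assumptions for illustration only (the session does not specify Hamleh’s actual architecture or tooling); the point is that locally labeled examples and contextual definitions, rather than generic English-centric standards, drive the model.

```python
# A minimal, hypothetical sketch of a localized hate-speech classifier.
# The texts and labels are placeholders; a real system would need carefully
# labeled regional data in Hebrew and Arabic, bias evaluation, and the
# source anonymization the session describes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "placeholder hateful example (Arabic)",
    "placeholder hateful example (Hebrew)",
    "placeholder neutral example",
    "another placeholder neutral example",
]
labels = [1, 1, 0, 0]  # 1 = hate speech under the local, contextual definition

# Character n-grams cope better with morphology-rich languages and mixed
# scripts than word tokenizers tuned for English.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["new post to screen"]))
```

A small, purpose-built model of this kind is also consistent with the “weak AI systems” approach discussed later, since it is far less resource-intensive than a large language model.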


Ethics by design strategy is crucial rather than only regulating risks after they occur

Explanation

Organizations should focus on designing AI ethically from the beginning rather than waiting to regulate risks that have already materialized. This proactive approach prevents problems rather than trying to fix them after harm has occurred.


Evidence

Hamleh takes ethics by design strategy; emphasizes need for investments in AI literacy and education in global majority countries


Major discussion point

Ethical AI Development and Localized Solutions


Topics

Human rights | Legal and regulatory


Disagreed with

– Ernst Noorman

Disagreed on

Approach to AI regulation and governance


Environmental impact of AI, including water consumption, must be considered in ethical AI discussions

Explanation

When building AI tools, organizations must consider the environmental impact, particularly water consumption during training phases. Hamleh focuses on smaller AI systems that consume fewer resources than larger language models like ChatGPT.


Evidence

Hamleh focuses on ‘weak AI systems’, smaller language models rather than large-scale processors; while training still requires water, consumption is lower than for tools like ChatGPT


Major discussion point

Ethical AI Development and Localized Solutions


Topics

Development | Legal and regulatory


Agreed with

– Sanskriti Panday

Agreed on

Environmental impact of AI must be considered in ethical frameworks


Platforms like Meta modify internal policies during conflicts, leading to censorship of Palestinian content creators and journalists

Explanation

During conflicts, platforms such as Meta change their internal policies and take down content from Palestinian content creators and journalists. This content moderation disproportionately affects Palestinian voices and narratives during times of conflict.


Evidence

Content moderation issues especially after October 7th; Meta taking down Palestinian content creator and journalist accounts


Major discussion point

Content Moderation and Platform Accountability


Topics

Human rights | Sociocultural


Content takedowns violate freedom of expression and international humanitarian law

Explanation

The censorship and takedown of Palestinian accounts and content by platforms violates fundamental rights to freedom of opinion and expression. These actions also contravene international humanitarian law and international human rights law.


Evidence

Accounts being taken down and content being censored during conflict


Major discussion point

Content Moderation and Platform Accountability


Topics

Human rights | Legal and regulatory


Agreed with

– Ernst Noorman
– Lei Ma
– Participant 2

Agreed on

Human rights and ethical principles must be central to AI and digital governance


AI is used as a weapon of war through identification tools for targeting people in conflict zones

Explanation

Generative AI can lead to digital dehumanization, particularly of Palestinian people, and AI serves as a weapon of war through identification tools used by armies to select potential targets in conflict zones like the Gaza Strip.


Evidence

AI identification tools used by armies to target people within Gaza Strip; generative AI leads to digital dehumanization of Palestinians


Major discussion point

Content Moderation and Platform Accountability


Topics

Cybersecurity | Human rights


S

Sanskriti Panday

Speech speed

153 words per minute

Speech length

1055 words

Speech time

411 seconds

Yuva is a youth-led organization in Nepal focusing on active citizenship, sexual and reproductive health rights, and research

Explanation

Yuva is a completely youth-run and youth-led organization based in Nepal with three core thematic areas. These include active citizenship (connecting young people to civil rights), sexual and reproductive health rights, and a research unit that supports the other two areas with evidence-based advocacy.


Evidence

Organization completely youth-run and youth-led; three thematic areas include active citizenship, SRHR, and research unit; focuses on youth empowerment


Major discussion point

Media Development and Digital Rights Organizations’ Work


Topics

Development | Human rights | Sociocultural


Small organizations use AI tools like ChatGPT and Canva but face resource constraints and need human oversight for sensitive topics

Explanation

Small organizations with limited communication teams use AI tools as assistants rather than replacements, particularly for tasks like content creation and writing assistance. However, they maintain human oversight especially when working on sensitive issues like sexual and reproductive health rights, which are taboo topics in Nepal.


Evidence

Small team without dedicated Photoshop/illustrator staff; use ChatGPT, Canva AI tools, and Grammarly; work on sensitive topics like abortion and SRHR which are taboo in Nepal; use AI as tool but don’t completely rely on it


Major discussion point

AI Implementation Challenges in Global South Organizations


Topics

Development | Human rights | Sociocultural


Organizations need AI literacy training, toolkits for ethical use, and better alternatives to current AI tools

Explanation

Organizations require literacy trainings to understand ethical AI use, toolkits that outline what should and shouldn’t be done, and awareness of better alternatives to current AI tools. This support would help them transfer knowledge to their youth champions and partners while maintaining ethical standards.


Evidence

Need for literacy trainings and toolkits; want to know better alternatives to ChatGPT; need to provide training to youth champions across Nepal; difficulty verifying AI-generated content in art competitions


Major discussion point

AI Implementation Challenges in Global South Organizations


Topics

Development | Sociocultural


Organizations maintain discussions about ethical AI use while balancing resource constraints and authenticity needs

Explanation

Organizations have lengthy discussions about ethical AI use, considering environmental impacts like water consumption and the need to maintain authenticity in their work. They face the challenge of being somewhat forced to use AI due to resource constraints while trying to maintain ethical standards and human oversight.


Evidence

Long discussions about ethical use; awareness of environmental impact (4.2-6 trillion liters of water by 2027); maintain human touch to deal with biases; label AI-generated content; double-check information; video generation from AI is non-negotiable due to environmental damage


Major discussion point

Ethical AI Development and Localized Solutions


Topics

Development | Human rights | Legal and regulatory


Agreed with

– Taysir Mathlouthi

Agreed on

Environmental impact of AI must be considered in ethical frameworks


Organizations need transparency in content moderation and verification tools to distinguish AI-generated content

Explanation

Organizations struggle with verifying whether content is AI-generated, particularly in contexts like art competitions where they want to be fair to participants who put time and effort into digital art. They need better tools and methods to verify authenticity of submitted content.


Evidence

Difficulty determining if digital art in competitions is AI-generated; unfair to reject digital art entirely; need verification tools to distinguish authentic from AI-generated content


Major discussion point

Content Moderation and Platform Accountability


Topics

Legal and regulatory | Sociocultural


Agreed with

– Ernst Noorman
– Participant 1

Agreed on

Transparency and accountability in AI systems are crucial


L

Laura Becana Ball

Speech speed

142 words per minute

Speech length

571 words

Speech time

239 seconds

GFMD is a network of 200+ organizations working to protect media freedom and support journalism worldwide

Explanation

GFMD operates as a network of more than 200 organizations, including R&W Media as a member, working to support journalism and media development. They protect and promote media freedom through collaboration, knowledge exchange, and coalition building with members and partners.


Evidence

Network of 200+ organizations; R&W Media is member; works through collaboration, knowledge exchange and coalition building


Major discussion point

Media Development and Digital Rights Organizations’ Work


Topics

Human rights | Sociocultural


GFMD engages in EU Digital Services Act, AI Act, and Global Digital Compact to ensure media voices are represented

Explanation

GFMD actively engages in key policy discussions on digital governance to ensure their community is represented and that policies on social media and AI uphold media freedom, editorial independence and human rights. They work on multiple levels from EU to global UN initiatives.


Evidence

EU Media Advocacy Group works on Digital Services Act, AI Act, and European Media Freedom Act; engages with UN on Pact for the Future and Global Digital Compact; secretariat of Dynamic Coalition on Sustainability of News Media and Journalism


Major discussion point

Policy Advocacy and Coalition Building


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Ernst Noorman
– Taysir Mathlouthi

Agreed on

Multi-stakeholder collaboration is essential for effective AI governance


Rising expenses of AI tools and cloud computing threaten the sustainability of public interest journalism

Explanation

The increasing costs of cloud computing, AI tools, and big data infrastructures pose a significant threat to the future of public interest journalism. This economic pressure comes at a critical time when democratic institutions are already at risk.


Evidence

Rising expenses of cloud computing, AI tools, big data infrastructures; threat occurs when democratic future is at risk


Major discussion point

Policy Advocacy and Coalition Building


Topics

Economic | Human rights


Journalism Cloud Alliance aims to make AI tools and infrastructure more accessible and affordable for newsrooms worldwide

Explanation

The Journalism Cloud Alliance is a joint initiative with OCCRP and 33 other members that uses collaboration, collective action, and strategic partnerships to address the accessibility and affordability crisis. The alliance works to make cloud-based infrastructure and AI services more accessible, secure, affordable, and sustainable for newsrooms globally.


Evidence

Joint initiative with OCCRP and 33 members; uses collaboration, collective action, and strategic partnerships; focuses on making services accessible, secure, affordable, and sustainable


Major discussion point

Policy Advocacy and Coalition Building


Topics

Development | Economic


Dynamic Coalition on Sustainability of News Media and Journalism strengthens journalist voices in digital governance discussions

Explanation

The Dynamic Coalition, with Lei Ma as co-coordinator, works to ensure journalist voices are present in digital governance discussions because this is essential for democracy, access to information, freedom of expression, and inclusivity. They recently launched a report on how AI affects journalist sustainability.


Evidence

Lei Ma is co-coordinator; launched report on how AI affects sustainability of journalists with case studies; strengthens presence of journalist voices in digital governance


Major discussion point

Policy Advocacy and Coalition Building


Topics

Human rights | Sociocultural


P

Participant 2

Speech speed

192 words per minute

Speech length

2201 words

Speech time

685 seconds

AI-supported approach centers people over technology, emphasizing human oversight and agency rather than automation

Explanation

R&W Media’s approach focuses on assisting people with AI technology rather than replacing people with AI. This emphasizes the importance of human oversight and human agency, especially in newsrooms and media organizations, prioritizing the assistance and support aspects rather than automation.


Evidence

R&W Media centers people over technology; emphasizes human oversight and agency in newsrooms; focuses on assistance rather than automation


Major discussion point

Harlem Declaration and Ethical AI Framework


Topics

Human rights | Sociocultural


Agreed with

– Ernst Noorman
– Taysir Mathlouthi
– Lei Ma

Agreed on

Human rights and ethical principles must be central to AI and digital governance


Ethical AI checklist should address practical implementation challenges including checklist fatigue and resource constraints

Explanation

The ethical AI checklist must be designed to prevent checklist fatigue and avoid being seen as a bureaucratic burden. It should provide practical guidance for everyday tasks while considering the resource constraints that organizations face, particularly in rushed timelines and restrictive funding environments.


Evidence

Checklist can be boring, tiring, unnecessary and burdensome; organizations work under narrow timelines and restrictive funding; need to address checklist fatigue and bureaucratic hurdles


Major discussion point

Harlem Declaration and Ethical AI Framework


Topics

Legal and regulatory | Development


Checklist implementation requires organizational buy-in and integration into regular workflows rather than additional bureaucratic burden

Explanation

For effective implementation, the checklist must be compatible with organizational capacity and integrated into regular reporting and workloads rather than being an additional task. It should be part of the regular process and involve everyone in the organization, with regular updates as AI technology evolves.


Evidence

Must be compatible with organizational capacity; integrated into reporting and workloads; should be part of regular process; needs to be updated regularly as AI risks change; requires conversations and collaboration with colleagues and stakeholders


Major discussion point

Harlem Declaration and Ethical AI Framework


Topics

Legal and regulatory | Development


P

Participant 1

Speech speed

134 words per minute

Speech length

457 words

Speech time

204 seconds

Organizations need to facilitate discussions and conversations about ethical AI use across different stakeholders

Explanation

Participant 1 emphasizes the importance of having discussions within organizations about ethical AI use, particularly when working with sensitive data and stakeholders. These conversations are essential for developing ethical frameworks and ensuring responsible AI implementation.


Evidence

Organizations have had conversations multiple times about ethical AI use, especially when working with stakeholders and sensitive data around sexual and reproductive health rights


Major discussion point

Ethical AI Development and Localized Solutions


Topics

Human rights | Legal and regulatory | Sociocultural


Technical support is needed to enable online participation and accessibility in digital governance discussions

Explanation

Participant 1 highlights the practical challenges of ensuring online participants can effectively contribute to discussions. This includes technical infrastructure and support to enable meaningful participation from remote locations.


Evidence

Asked technicians to ensure online participants have access to microphones for participation


Major discussion point

AI Implementation Challenges in Global South Organizations


Topics

Development | Infrastructure


Organizations should provide transparency and labeling when using AI-generated content

Explanation

Participant 1 supports the need for organizations to be transparent about their use of AI tools and to properly label AI-generated content. This transparency helps audiences make informed decisions about the information they consume.


Evidence

Discussion about organizations labeling AI-generated content and being transparent about sources


Major discussion point

Content Moderation and Platform Accountability


Topics

Human rights | Legal and regulatory | Sociocultural


Agreed with

– Ernst Noorman
– Sanskriti Panday

Agreed on

Transparency and accountability in AI systems are crucial


Agreements

Agreement points

Human rights and ethical principles must be central to AI and digital governance

Speakers

– Ernst Noorman
– Taysir Mathlouthi
– Lei Ma
– Participant 2

Arguments

Human rights must be at the core of information integrity policy, upholding freedom of expression and access to information


Content takedowns violate freedom of expression and international humanitarian law


Harlem Declaration represents international commitment to promote ethical AI in digital media with six ethical principles


AI-supported approach centers people over technology, emphasizing human oversight and agency rather than automation


Summary

All speakers agree that human rights principles, particularly freedom of expression and access to information, must be foundational to any AI governance framework. They emphasize the need for ethical approaches that prioritize human agency over technological automation.


Topics

Human rights | Legal and regulatory


Multi-stakeholder collaboration is essential for effective AI governance

Speakers

– Ernst Noorman
– Laura Becana Ball
– Taysir Mathlouthi

Arguments

Multi-stakeholder approach is essential for collaboration between governments, tech companies, civil society, and academics


GFMD engages in EU Digital Services Act, AI Act, and Global Digital Compact to ensure media voices are represented


Global majority countries should take the lead in AI projects and implementation to address contextual needs


Summary

Speakers consistently advocate for inclusive multi-stakeholder approaches that bring together governments, tech companies, civil society, and academics. They emphasize the importance of ensuring diverse voices, particularly from global majority countries, are represented in AI governance discussions.


Topics

Legal and regulatory | Sociocultural


Transparency and accountability in AI systems are crucial

Speakers

– Ernst Noorman
– Sanskriti Panday
– Participant 1

Arguments

Algorithmic transparency and accountability require disclosure of how algorithms rank and recommend content


Organizations need transparency in content moderation and verification tools to distinguish AI-generated content


Organizations should provide transparency and labeling when using AI-generated content


Summary

All speakers agree on the fundamental importance of transparency in AI systems, including algorithmic decision-making processes and clear labeling of AI-generated content. They emphasize the need for accountability mechanisms and user-friendly disclosure of how AI systems operate.


Topics

Legal and regulatory | Human rights


Environmental impact of AI must be considered in ethical frameworks

Speakers

– Taysir Mathlouthi
– Sanskriti Panday

Arguments

Environmental impact of AI, including water consumption, must be considered in ethical AI discussions


Organizations maintain discussions about ethical AI use while balancing resource constraints and authenticity needs


Summary

Both speakers recognize the significant environmental costs of AI systems, particularly water consumption for data centers and training processes. They advocate for including environmental considerations as a core component of ethical AI frameworks.


Topics

Development | Legal and regulatory


Similar viewpoints

Both speakers from Global South organizations highlight the challenges of using AI tools that are primarily developed in the Global North, emphasizing the need for localized solutions that understand cultural contexts and languages while maintaining human oversight for sensitive work.

Speakers

– Taysir Mathlouthi
– Sanskriti Panday

Arguments

Algorithmic biases in AI tools developed in Northern hemisphere don’t understand cultural contexts and languages of Global South


Small organizations use AI tools like ChatGPT and Canva but face resource constraints and need human oversight for sensitive topics


Topics

Development | Human rights | Sociocultural


All three speakers emphasize the need for proactive, practical approaches to ethical AI implementation, including training, toolkits, and integrated workflows that prevent problems rather than addressing them after harm occurs.

Speakers

– Sanskriti Panday
– Taysir Mathlouthi
– Participant 2

Arguments

Organizations need AI literacy training, toolkits for ethical use, and better alternatives to current AI tools


Ethics by design strategy is crucial rather than only regulating risks after they occur


Checklist implementation requires organizational buy-in and integration into regular workflows rather than additional bureaucratic burden


Topics

Development | Legal and regulatory


Both speakers represent large international networks focused on supporting media development and journalism globally, emphasizing the importance of media freedom and information integrity in the digital age.

Speakers

– Laura Becana Ball
– Lei Ma

Arguments

GFMD is a network of 200+ organizations working to protect media freedom and support journalism worldwide


R&W Media supports public interest digital media across 40+ countries with focus on media viability and information integrity


Topics

Human rights | Sociocultural


Unexpected consensus

Environmental impact as core ethical consideration in AI

Speakers

– Taysir Mathlouthi
– Sanskriti Panday

Arguments

Environmental impact of AI, including water consumption, must be considered in ethical AI discussions


Organizations maintain discussions about ethical AI use while balancing resource constraints and authenticity needs


Explanation

It’s notable that organizations from conflict-affected Palestine and Nepal both independently prioritize environmental considerations in their AI ethics frameworks. This suggests that environmental consciousness in AI use transcends regional and organizational contexts, representing a global concern even among organizations primarily focused on other issues like digital rights and youth empowerment.


Topics

Development | Legal and regulatory


Localized AI development as solution to bias

Speakers

– Taysir Mathlouthi
– Ernst Noorman

Arguments

Hamleh develops AI models for hate speech classification in Hebrew and Arabic using localized data and contextualized narratives


Algorithmic transparency and accountability require disclosure of how algorithms rank and recommend content


Explanation

The alignment between a Palestinian digital rights organization’s practical approach to developing localized AI models and a Dutch government official’s emphasis on algorithmic transparency represents unexpected consensus across very different institutional contexts. Both recognize that effective AI governance requires understanding local contexts and ensuring transparency in algorithmic processes.


Topics

Human rights | Legal and regulatory | Sociocultural


Overall assessment

Summary

The discussion reveals strong consensus around core principles of ethical AI governance: human rights centrality, multi-stakeholder collaboration, transparency requirements, and environmental considerations. Speakers from diverse backgrounds – government officials, NGO representatives, and media development organizations – align on fundamental approaches despite different operational contexts.


Consensus level

High level of consensus on principles with practical alignment on implementation approaches. The agreement spans across different sectors (government, civil society, media development) and regions (Global North and South), suggesting robust foundational agreement on ethical AI frameworks. This consensus provides a strong foundation for collaborative action on AI governance, though implementation challenges remain around resource constraints and capacity building in Global South organizations.


Differences

Different viewpoints

Approach to AI regulation and governance

Speakers

– Ernst Noorman
– Taysir Mathlouthi

Arguments

Legal and regulatory measures should comply with international human rights laws while avoiding restrictive laws that infringe on digital freedoms


Ethics by design strategy is crucial rather than only regulating risks after they occur


Summary

Ernst Noorman advocates for a regulatory approach using existing legal frameworks like the EU Digital Services Act and AI Act, while Taysir Mathlouthi emphasizes the need for ethics by design from the beginning rather than reactive regulation


Topics

Human rights | Legal and regulatory


AI development and ownership priorities

Speakers

– Ernst Noorman
– Taysir Mathlouthi

Arguments

Multi-stakeholder approach is essential for collaboration between governments, tech companies, civil society, and academics


Global majority countries should take the lead in AI projects and implementation to address contextual needs


Summary

Ernst Noorman supports multi-stakeholder collaboration including existing tech companies, while Taysir Mathlouthi argues for Global South leadership in developing their own AI solutions rather than relying on Northern hemisphere tools


Topics

Development | Human rights | Sociocultural


Unexpected differences

Environmental considerations in AI ethics

Speakers

– Taysir Mathlouthi
– Sanskriti Panday
– Other speakers

Arguments

Environmental impact of AI, including water consumption, must be considered in ethical AI discussions


Organizations maintain discussions about ethical AI use while balancing resource constraints and authenticity needs


Explanation

While most speakers focused on traditional ethical AI concerns like bias and human rights, Taysir and Sanskriti unexpectedly prioritized environmental impacts, with Sanskriti stating ‘video generation from AI is non-negotiable due to environmental damage.’ Other speakers did not address environmental concerns at all, creating an unexpected divide in priorities


Topics

Development | Legal and regulatory


Overall assessment

Summary

The main disagreements center on regulatory approaches (reactive vs. proactive), AI development ownership (multi-stakeholder vs. Global South-led), and implementation strategies (organizational integration vs. fundamental redesign). Environmental considerations emerged as an unexpected dividing line.


Disagreement level

Moderate disagreement with significant implications. While speakers share common goals of ethical AI use, their different approaches could lead to conflicting policy recommendations and implementation strategies. The divide between Northern regulatory frameworks and Global South self-determination could impact international AI governance discussions.



Takeaways

Key takeaways

Organizations need human-centered AI approaches that prioritize assistance over automation, maintaining human oversight and agency in media workflows


Global South organizations face significant challenges with AI tools developed in the Northern hemisphere due to algorithmic biases, lack of cultural context, and language limitations


Ethical AI implementation requires localized solutions – organizations like Hamleh are developing their own AI models using contextualized data in local languages (Hebrew and Arabic)


Resource constraints force small organizations to use AI tools despite ethical concerns, creating tension between practical needs and ethical considerations


Environmental impact of AI, including water consumption and data center impacts on marginalized communities, must be integrated into ethical AI discussions


Multi-stakeholder collaboration between governments, tech companies, civil society, and academics is essential for responsible AI governance


Platform accountability issues are critical in conflict settings, where content moderation policies can violate freedom of expression and international humanitarian law


AI literacy training, toolkits for ethical use, and better alternatives to current AI tools are urgently needed for Global South organizations


Resolutions and action items

Organizations encouraged to endorse the Harlem Declaration as an international commitment to ethical AI in digital media


R&W Media to continue co-creating and implementing the ethical AI checklist tool with partner organizations


Participants invited to join the Dynamic Coalition on Sustainability of News Media and Journalism


GFMD’s Journalism Cloud Alliance launched to make AI tools and infrastructure more accessible and affordable for newsrooms worldwide


Organizations encouraged to contact R&W Media for collaboration on ethical AI implementation


Continued engagement needed in policy discussions including EU Digital Services Act, AI Act, and Global Digital Compact


Unresolved issues

How to prevent checklist fatigue and ensure organizational buy-in for ethical AI frameworks without creating bureaucratic burden


How to balance resource constraints of small organizations with the need for ethical AI implementation


How to verify AI-generated content, particularly in contexts like art competitions where digital and AI-generated content overlap


How to control AI use by distributed teams and local partners while maintaining ethical standards


How to address the rapid evolution of AI technology while maintaining relevant and current ethical guidelines


How to ensure Global South voices lead AI development rather than just adapting Northern hemisphere solutions


How to address platform censorship and content moderation biases in conflict-affected settings


Suggested compromises

Using AI tools as assistance rather than replacement, maintaining human oversight and final decision-making authority


Implementing ‘weak AI systems’ that are less resource-intensive than large language models to reduce environmental impact


Labeling AI-generated content for transparency while still utilizing AI tools for efficiency


Integrating ethical AI checklists into regular workflows rather than treating them as additional bureaucratic processes


Developing ethics by design strategies that build ethical considerations into AI tools from the beginning rather than regulating after problems occur


Creating living documents for ethical AI guidelines that can evolve with technology rather than static frameworks


Thought provoking comments

War is not only offline but online as well… we’ve been working on the use of AI in digital warfare, how generative AI can lead to digital dehumanizations, especially the dehumanization of the Palestinian people, and how AI also can be used as a weapon of war, especially when it comes to identification tools that are being used by different armies to target potential people within the Gaza Strip.

Speaker

Taysir Mathlouthi


Reason

This comment powerfully reframes AI ethics from an abstract concept to a life-and-death reality in conflict zones. It introduces the concept of ‘digital dehumanization’ and positions AI as a weapon of war, moving beyond typical discussions of bias to examine AI’s role in actual violence and targeting.


Impact

This shifted the discussion from theoretical ethical considerations to urgent, real-world applications where AI ethics have immediate humanitarian consequences. It grounded the entire conversation in concrete stakes and demonstrated why ethical AI frameworks are not just academic exercises but essential for human rights protection.


We need more people being trained and educated in the use of AI within global majority countries. And we need people who kind of take a sort of a very holistic approach when it comes to AI to decrease as much as possible algorithmic biases… We also consider that it’s important to push for more AI models and LLMs being developed within the global majority countries and by people who understand their own contexts and communities.

Speaker

Taysir Mathlouthi


Reason

This comment challenges the dominant narrative of AI development being concentrated in the Global North and introduces the critical concept of localized AI development. It connects technical capacity building with decolonizing AI development, suggesting that ethical AI requires shifting who builds these systems.


Impact

This comment introduced a structural critique that influenced how other speakers framed their challenges. It moved the conversation beyond using existing AI tools ethically to questioning who should be building AI systems in the first place, adding a dimension of technological sovereignty to the ethical AI discussion.


AI is like the new way of life. So we, it’s like, if we don’t use it, there’s a lot of resource constraints as a small NGO. So we are somehow forced to use AI, but if we don’t, like it’s hard to maintain… However, it’s quite tricky in Yuva because not just the communication team, we have our local youth partners as well, youth champions, which are in different parts of Nepal… So it’s very hard to internally control everything.

Speaker

Sanskriti Panday


Reason

This comment reveals the paradox facing small organizations: being ‘forced’ to use AI due to resource constraints while struggling to implement it ethically. It also highlights the complexity of distributed organizations where control over AI use becomes nearly impossible, introducing the reality of organizational limitations in implementing ethical frameworks.


Impact

This honest admission of being ‘forced’ to use AI despite ethical concerns added a layer of pragmatic realism to the discussion. It influenced the later conversation about checklist implementation by highlighting that ethical frameworks must account for resource constraints and distributed decision-making, not just organizational will.


We center people over technology, and people over AI, and hence we call it the AI-supported approach… we are interested in more assisting people with AI technology rather than replacing people with AI… it’s very important for us that we pay attention to the assistance part, and also support part, rather than automation.

Speaker

Participant 2 (Surabhi)


Reason

This comment introduces a fundamental philosophical framework that distinguishes between AI as replacement versus AI as augmentation. The ‘AI-supported approach’ provides a clear alternative paradigm to automation-focused AI implementation, particularly relevant for media and journalism contexts.


Impact

This framing influenced how the subsequent discussion about the ethical AI checklist was received. It provided a philosophical foundation that made the checklist seem less like bureaucratic oversight and more like a tool for maintaining human agency, helping address some of the implementation concerns raised by other speakers.


These are quite, you know, big questions. And we often may not have the time and resources to answer all of these questions… But this is just to guide the idea of checklists is not just to check off certain things and then move on, but really sit with some of these discussions and reflections and think about are we making the most responsible decisions as media organizations or as civil society organizations when we are trying to use these tools?

Speaker

Participant 2 (Surabhi)


Reason

This comment acknowledges the tension between comprehensive ethical consideration and practical limitations while reframing checklists from compliance tools to reflection frameworks. It recognizes the complexity of ethical decision-making while proposing a more nuanced approach to implementation.


Impact

This comment directly addressed the ‘checklist fatigue’ concern and provided a more sophisticated understanding of how ethical frameworks should function. It influenced Taysir’s response about making checklists part of the workflow rather than additional burden, showing how reframing the purpose of ethical tools can affect their adoption.


Overall assessment

These key comments fundamentally shaped the discussion by moving it from abstract ethical principles to concrete, lived realities of AI implementation in resource-constrained, conflict-affected, and distributed organizational contexts. Taysir’s comments about digital warfare and localized AI development introduced urgency and structural critique that elevated the stakes of the conversation. Sanskriti’s honest admission about being ‘forced’ to use AI added crucial pragmatic realism that influenced how the ethical checklist was presented and discussed. The R&W Media team’s ‘AI-supported approach’ and nuanced understanding of checklist implementation provided philosophical grounding and practical solutions that addressed the tensions raised by other speakers. Together, these comments created a rich dialogue that balanced idealistic ethical frameworks with the messy realities of implementation, ultimately producing a more sophisticated and actionable understanding of ethical AI in media and civil society contexts.


Follow-up questions

How can organizations ensure buy-in from staff to use ethical AI checklists without creating bureaucratic burden?

Speaker

Surabhi and Taysir Mathlouthi


Explanation

This addresses the practical challenge of implementing ethical AI frameworks in resource-constrained organizations while maintaining efficiency and staff engagement


How can we verify whether content is AI-generated, particularly for art competitions and digital content?

Speaker

Sanskriti Panday


Explanation

This is crucial for maintaining fairness in competitions and transparency in content creation, especially when working with youth participants


What are better alternatives to ChatGPT and other mainstream AI tools that are more ethical and environmentally sustainable?

Speaker

Sanskriti Panday


Explanation

Organizations need practical alternatives that align with their ethical values while still meeting their operational needs


How can we develop more AI models and LLMs within global majority countries that understand local contexts and languages?

Speaker

Taysir Mathlouthi


Explanation

This addresses the need for culturally appropriate AI tools that serve communities beyond the Northern hemisphere and reduce algorithmic bias


How can we make AI literacy training and toolkits more accessible to small organizations and their local partners?

Speaker

Sanskriti Panday


Explanation

This addresses capacity building needs for organizations working with limited resources and distributed teams


How can ethical AI checklists be designed as living documents that evolve with rapidly changing AI technology?

Speaker

Surabhi


Explanation

Given the exponential evolution of AI technology, static checklists become obsolete quickly, requiring adaptive frameworks


How can we better integrate ethical AI considerations into existing workflows rather than treating them as additional bureaucratic processes?

Speaker

Taysir Mathlouthi


Explanation

This addresses the practical implementation challenge of embedding ethics into daily operations without overwhelming staff


What specific transparency measures should be implemented when using AI-generated content for audiences?

Speaker

Surabhi


Explanation

This explores the depth and methods of disclosure needed to help audiences make informed decisions about AI-generated information


How can we address the environmental impact of AI tools, particularly water consumption and data center placement in marginalized communities?

Speaker

Taysir Mathlouthi and Sanskriti Panday


Explanation

This addresses the often-overlooked environmental justice aspects of AI deployment and usage


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.