Tackling disinformation in electoral context
Session at a Glance
Summary
This session focused on tackling disinformation in electoral contexts, exploring the roles of various stakeholders and potential solutions. Participants discussed the challenges posed by disinformation during elections, emphasizing its threat to human rights and democracy. The European Union’s approach was highlighted, including the Code of Practice on Disinformation, which involves multiple stakeholders in a co-regulatory framework.
The importance of fact-checking, media literacy, and public awareness campaigns was stressed by several speakers. There was debate about the responsibility of digital platforms in moderating content, with some arguing for greater accountability and others cautioning against overregulation that could stifle free speech. The need for tailored approaches considering cultural contexts was emphasized, particularly for smaller countries.
Multi-stakeholder partnerships and collaborations were seen as crucial in combating disinformation. Speakers highlighted the role of traditional and social media in spreading information during elections, and the need for empowering citizens to identify misinformation. The discussion touched on the challenges of regulating content without infringing on freedom of expression, with some advocating for a focus on systemic risks rather than specific content.
Participants also debated the effectiveness of algorithmic content moderation and the importance of transparency in platform policies. The session concluded with calls for greater collaboration, awareness-building, and a focus on information integrity, while recognizing the regional and national specificities of disinformation challenges.
Key points
Major discussion points:
– The role of regulations, platforms, and multi-stakeholder partnerships in combating election disinformation
– Balancing efforts to counter disinformation with protecting freedom of expression
– The importance of fact-checking, media literacy, and public awareness campaigns
– Regional and cultural differences in how disinformation manifests and should be addressed
– Debate over platform accountability and content moderation vs. user empowerment
The overall purpose of the discussion was to explore strategies for tackling disinformation in electoral contexts, with a focus on the roles and responsibilities of different stakeholders including tech platforms, governments, civil society, and citizens.
The tone of the discussion was largely collaborative and solution-oriented, with panelists sharing insights from different regional perspectives. However, there were moments of debate and disagreement, particularly around issues of platform regulation and accountability. The tone became more urgent towards the end as some participants expressed frustration with the lack of concrete progress on these issues.
Speakers
– Peace Oliver Amuge: Moderator
– Giovanni Zagni: Expert on EU regulations and disinformation
– Poncelet Ileleji: Expert on sub-Saharan Africa and community radio
– Aiesha Adnan: Representative from Women Tech Maldives
– Juliano Cappi: Representative from CGI (Brazilian Internet Steering Committee)
– Nazar Nicholas Kirama: Expert from Tanzania
Additional speakers:
– Tim: Audience member
– Kosi: Student from Benin
– Nana: Audience member with experience in elections and disinformation
– Peterking Quaye: Representative from Liberia IGF
Full session report
Expanded Summary of Discussion on Tackling Disinformation in Electoral Contexts
Introduction:
This session focused on addressing the critical issue of disinformation during elections, exploring the roles of various stakeholders and potential solutions. The discussion brought together experts from different regions and backgrounds to examine the challenges posed by disinformation and its threat to human rights and democracy.
Key Themes and Discussion Points:
1. Regulatory Approaches and Frameworks:
The discussion highlighted various approaches to regulating disinformation, with a particular focus on the European Union’s strategy. Giovanni Zagni introduced the EU Code of Practice on Disinformation as a voluntary co-regulatory instrument, emphasising the “European way” of bringing all relevant stakeholders together. He noted that the Code has 34 signatories, including major tech platforms, fact-checking organizations, and civil society groups. This approach contrasts with more stringent regulatory measures, sparking debate about the appropriate level of government involvement.
Juliano Cappi shared insights from Brazil, mentioning the Brazilian Internet Steering Committee’s guidelines and the Internet and Democracy Working Group’s publications on combating disinformation. He also introduced the concepts of “systemic risk” and “duty of care” in platform regulation, emphasizing the need for digital public infrastructure and digital sovereignty.
2. Multi-stakeholder Collaboration and Partnerships:
A recurring theme throughout the discussion was the crucial role of multi-stakeholder partnerships in combating disinformation. Speakers agreed that collaboration between fact-checkers, platforms, civil society organisations, and government bodies is essential for developing effective strategies. Juliano Cappi emphasised the need for improved processes to better integrate work from different forums addressing disinformation, suggesting that current efforts may not be sufficiently effective.
3. Role of Tech Platforms and Content Moderation:
The responsibility of digital platforms in moderating content emerged as a contentious issue. Nazar Nicholas Kirama advocated for proactive content moderation and transparency in algorithmic policies by tech platforms. He suggested implementing advanced algorithms for flagging misleading information and collaborating with fact-checkers. Kirama provocatively framed tech platforms as de facto electoral commissions, highlighting the need for accountability.
Juliano Cappi raised concerns about potential bias in platform business models and the advancement of certain political views. He also suggested a “Follow the money” approach to investigate the financing of disinformation campaigns.
An audience member cautioned against over-regulation that could stifle innovation, highlighting the tension between combating false information and protecting free speech. Another participant warned about the potential misuse of regulation by governments to suppress social media, citing examples from African countries.
4. Fact-checking and Media Literacy:
The importance of fact-checking and media literacy was stressed by several speakers. Poncelet Ileleji called for fact-checking websites supported by organisations like UNESCO, while Aiesha Adnan emphasised the need for civic education and information literacy programmes. These initiatives were seen as crucial tools for empowering citizens to identify misinformation and disinformation.
Poncelet Ileleji highlighted the shift in information dissemination and consumption patterns, noting that young people, political parties, and lobbyists increasingly use social media platforms such as TikTok and X (formerly Twitter) to spread information, rather than traditional mainstream media.
5. Cultural Context and Localised Approaches:
Speakers emphasised the need for tailored programmes that fit cultural norms and consider the specific needs of smaller countries and populations. Aiesha Adnan, drawing from her experience with the recent presidential election in the Maldives, stressed the importance of designing tools and interventions that account for the unique challenges faced by smaller nations, as their needs may be overlooked in global approaches.
An audience member raised concerns about the cultural context in algorithmic content moderation, highlighting that algorithms may misinterpret content due to cultural differences. This point emphasised the need for explainable AI in content moderation and the importance of considering diverse cultural perspectives in developing anti-disinformation strategies.
6. Empowering Communities and Grassroots Initiatives:
Poncelet Ileleji stressed the importance of empowering people at the grassroots level, particularly through community radios, to combat disinformation. This approach aligns with Aiesha Adnan’s call for promoting “information integrity” across society, shifting the focus from just regulating media to empowering citizens.
7. Balancing Regulation and Free Speech:
A significant point of debate was the tension between regulating disinformation and protecting freedom of expression. Giovanni Zagni highlighted this challenge, while Nazar Nicholas Kirama suggested the need for “facilitative regulations” that balance platform accountability with the protection of free speech. The discussion revealed the complexity of addressing disinformation without infringing on fundamental rights.
An audience member raised concerns about the potential misuse of conspiracy theory labels, referencing experiences during the COVID-19 pandemic. This highlighted the need for careful consideration when categorizing and addressing potentially misleading information.
Conclusion:
The session concluded with calls for greater collaboration, awareness-building, and a focus on information integrity. Participants recognised the regional and national specificities of disinformation challenges, emphasising the need for both localised and global approaches. Giovanni Zagni particularly stressed the importance of considering regional and national contexts when addressing disinformation problems.
Unresolved issues and areas for further exploration include:
1. Effectively regulating tech platforms without stifling innovation or free speech
2. Addressing political biases and power dynamics in the spread of disinformation
3. Creating global standards while respecting regional and national differences
4. Determining the extent of platform accountability for user-generated content
5. Developing sustainable national civic education programs
6. Implementing transparent and culturally sensitive algorithmic policies
The discussion provided valuable insights into the multifaceted nature of election-related disinformation and underscored the need for continued dialogue, research, and collaborative efforts to safeguard democratic processes in the digital age. Participants emphasized the importance of more listening and discussion at the global level to find common ground on addressing disinformation while respecting diverse perspectives and stakeholder interests.
Session Transcript
Peace Oliver Amuge: briefly introduce the session to you. This session is an NRIs collaborative session on tackling disinformation in electoral contexts. Channel 4, channel 4 please. We are on channel 4. Is everyone there? Channel 4, yes. Channel 4, if you have just gotten in, please get yourself the gadgets. And so, yes, I'll just say that this session is on tackling disinformation in the electoral context, and as you are all aware, this year has been called the year of elections. We've had several countries going through elections, and we know that during elections human rights are at stake, and we've had growing issues of disinformation during elections. These are issues that put human rights at stake, democracy at stake, so this is a very important and crucial discussion to have, and we are very happy to have you join us here. We have the distinguished panelists that you're seeing here, and we should have one panelist joining online. This session will discuss a couple of issues. We'll discuss the role of different stakeholders, the role of social media; we will discuss the norms, the principles, the frameworks, the standards, you know, some of the ways that we can counter disinformation, and we will be sharing different contexts when it comes to disinformation. I see the room is quite diverse, so I expect that we will have a wealth of discussion. I want to just mention that we have an online moderator who is here in the room, we have rapporteurs who will be supporting us, we have Michelle from Zambia, we have Umut who will also support us with rapporteuring, and also big thanks to the organizers, some in the room and some who might not be here: the Asia Pacific Youth IGF, the Bangladesh IGF, the Benin IGF, the Caribbean IGF, the Colombia IGF, EuroDIG, the Gambia, you know, and several others, South Sudan. I will not mention them all because we lost a little bit of time at the beginning, and I will come straight to the panelists who are here, and one who might join: we have Aiesha, who will introduce herself later very well; we have Juliano, you're most welcome; we have Nazar, who is here; Poncelet, whom I just mentioned, will join; and Giovanni. Thank you very much for making time. And so I think that we will already start our conversation. I am keeping my fingers crossed that we don't have any technical glitches, and please just give me a note if you can't hear me or you are having trouble. And if anyone walks in and sits near you, let them know we are on channel 4; under tech, please let people know that we are on channel 4. So, since Poncelet is not on yet, I will come to you, Giovanni, to open our discussions, and the question to you is: how have existing regulations addressed disinformation during elections? What are the practices, you know, that balance combating false information with protecting freedom of expression?
Giovanni Zagni: Now it's on, now it works. Okay, thank you for this question, good afternoon, and I will answer by making reference to what I know best, which is the European Union case, which is peculiar in many ways. First of all, the European Union is not the place where the majority of very large online platforms are based, which is clearly the US, but the EU has at the same time always taken a very proactive approach when it comes to regulation. A common saying in this area is that the US innovates while the EU regulates. Secondly, 2024 was the year when about a dozen European states went to the polls for a variety of national elections, from Spain to Finland and from Greece to France, but also when a European-wide common election, so to say, took place for renewing the European Parliament, the only directly elected body of the European Union. Thirdly, in 2024 important new EU regulations like the Digital Services Act, DSA, and Digital Markets Act, DMA, were not yet fully enforced, because even though they have been approved by the relevant institutions, the process of implementing them is still ongoing, so they were not able to impact the electoral processes that took place this year. So how did the EU address the issue of disinformation in the crucial electoral year 2024? The main tool was the strengthened Code of Practice on Disinformation, which was promoted by the European Union. The Code was presented in June 2022 and it is a voluntary and co-regulatory instrument developed and signed by 34 signatories at the time of adoption. Who are the signatories? Players from the advertising ecosystem, advertisers, ad tech companies, fact-checkers, many but not all very large online platforms, civil society, and third-party organizations. A few names: Meta, Microsoft, Adobe, Avast, the European Fact-Checking Standards Network, the European Association of Communications Agencies, Reporters Without Borders, the World Federation of Advertisers, TikTok, Twitch, and Vimeo. All these signatories agreed to establish a permanent task force with the idea of ensuring that the Code adapts and evolves in view of technological, legislative, societal, and market developments. The task force is a rare place where representatives of the different stakeholders can exchange information, request specific action, and discuss the best way ahead. The Code therefore is the key instrument of the European Union's policy against disinformation, and its two key characteristics are that it is voluntary and co-regulatory. So coming back to your question, the second half of it is how you balance that with protecting freedom of expression. And the European way, so to say, is to have all the relevant stakeholders around the same table and not to impose any kind of direct intervention from the authorities on specific content, but rather to have a forum where, I don't know, potentially damaging cases or potential threats or things that need to be looked after are discussed, and then the platforms decide to take action or not. I'll give you a very practical example to conclude. The recent elections in Romania that took place a few days ago made the headlines in Europe and beyond because, under strong suspicion of foreign interference, they were annulled by the Romanian Constitutional Court and the first round of the elections has to be redone.
So during this process, basically all the stakeholders that were involved in the Code decided to set up a rapid response system. What that meant was that there was a mechanism through which, I don't know, a fact-checker or a civil society organization could say: look, in our day-to-day work we noticed that these particular suspicious activities happened on this particular social network platform. So now it's up to you, my dear social network platform, to check whether this particular phenomenon violates the terms of use. So as you can see, there is no direct intervention, no kind of, I don't know, regulation or law by which you have to do something, yet there is this co-regulatory and collective effort to work together as the stakeholders involved. Thank you.
Peace Oliver Amuge: Thank you very much for those very informative regulations that you mentioned and the collective actions that are being taken. I think it's very key to have these frameworks in place when we talk about disinformation. We will park it there. I've been told that we have Poncelet in the room, and I would like us to hear from Poncelet. As we all know, sometimes tech can be difficult, so it would be nice to hear from Poncelet. But also, I wonder why we don't have Poncelet on the screen; if you could let us have Poncelet up. And Poncelet, if you can hear us, would you please just say something? Poncelet, are you able to hear us? No. Okay. I think our online moderator is trying to sort that out. And then I will come to you, Aiesha. We've heard from Giovanni. Poncelet, can you just open your mic and say something?
Poncelet Ileleji: Yes, thank you very much. Peace, I’m here, sorry. I was waiting to be granted access. Can you hear me?
Peace Oliver Amuge: Yes, we can hear you. And are you able to speak? We'd either hear from you or listen to Aiesha.
Poncelet Ileleji: Yes, yes, you can definitely hear from me. And thank you all.
Peace Oliver Amuge: And one of the most… Poncelet, so this is the question that I would like you to take: share with us the role that traditional media and social media play during elections, how effective this is, and how regulations have been used to address these issues of disinformation.
Poncelet Ileleji: I think, speaking from a sub-Saharan point of view, you will notice that the role of social media in terms of disinformation and even misinformation is very important. We have to observe, and we also have to ask why this has become so important. Most young people, most political parties, most lobbyists all over the world today disseminate information through social media. Whether it's Twitter, whether it's TikTok, whether it's X, they have used all of these to disseminate information. And most people don't naturally use mainstream media; they use social media as the way in which they get information. One good example of how we can combat this is making sure that, within countries, we have what are called fact-checking websites, like what we did in the Gambia. Coming into our last presidential elections, we worked with the Gambia Press Union to set up a fact-checking website that was supported by UNESCO. UNESCO has always been a good agency in supporting a number of countries in setting up fact-checking websites. And it's important to have a bottom-up approach in training journalists at the grassroots level, especially journalists working at the community level using community radio, and in how they can work with various fact-checking websites to be able to do this. Unfortunately, I don't know why my camera is not coming on. It's showing here, but this is the little intervention I'll make for now, and I'll take any other questions. Thank you.
Peace Oliver Amuge: Thank you. Thank you very much, Poncelet, for your intervention and for sharing, as Giovanni also did, how important fact-checking is in election times amid widespread disinformation. You also mentioned how social media is an important tool that people use to access information, and how the same tool is used to spread disinformation. We will park that there, and I would like to hear… yes, we can now see you in the room, Poncelet. So Aiesha, I will come to you. How can tailored programs, initiatives and overall good values help people identify misinformation, engage diverse communities and ensure there is electoral integrity?
Aiesha Adnan: Yes. Hello, everyone. Great to be here, coming from very far, from the Maldives. And this is an interesting topic, because I come from an organization called Women Tech Maldives, where we initially started out supporting women and girls. Then we realized we needed to talk about safety, and when you talk about safety, you have to talk about everyone. That is where our work in this space, disinformation, actually began. We then had the opportunity to communicate with a lot of organizations, and one of the areas we work on is identifying disinformation and doing media analysis. Okay, coming to the election. One of the interesting things highlighted was the last presidential election in the Maldives. Traditionally, we have had several election observer groups, and there was nothing about disinformation in most of their reports, but this time it was quite different, because we saw that a major percentage of the disinformation came from online media and very little from traditional media. So we know that this shift is happening; in a few years' time, we won't see much in the traditional media, it will go through social media. Okay, then you mentioned the initiatives. When you talk about initiatives, I know that there are a lot of tools available. And can they really fit all countries? No, they have to be designed in a way that fits the cultural norms.
Peace Oliver Amuge: Are we having a cut in the?
Aiesha Adnan: Okay, sorry about that. I hope you've heard some of my words, okay? All right. So when we talk about misinformation, what comes to my mind is: what are the ways that we can really tackle it? Right now, we are coming up with a lot of tools to debunk the disinformation and everything that you see. But I would like to see us actually build a culture in which we promote information integrity across everyone. Especially when we talk about elections, everyone says it's the media, it's the media spreading the information. Yes, of course, some part of it is the media, but it is the citizens who believe in it. If they are not equipped with the knowledge and tools to really identify it, then that is where we fail. Because it's not only elections, it's everywhere. So information integrity is an important factor to consider. As an organization, we have conducted several sessions with the political parties and with the parliamentarians as well: how can they actually support these kinds of processes within the communities? Because in the Maldives we have remote islands as well, and the councillors and the parliamentarians do really travel across, and that's the way we can actually connect with the communities and run more programs. In light of this, I also want to highlight two interesting guides. NDI has a very interesting guide to promote information integrity, and it comes with a lot of tools, like the tools they have supported to develop fact-checking and other frameworks. Another one I would like to highlight is the UNDP Strategic Guidance on Information Integrity; they have one as well. And going back to the kinds of initiatives that both UNDP and NDI have supported, I would like to highlight one of the initiatives that NDI took, in Georgia. They partnered with a local organization to address the spread of misinformation during elections, and the effort empowered citizens to make informed decisions and reduced the effectiveness of misinformation aimed at influencing public opinion and voter behavior. So we know that these kinds of interventions actually help. One approach is definitely going for fact-checking tools and empowering citizens to make the right choices, and also involving the media and the political parties themselves. On another note, I would also like to highlight that in the Maldives we are currently working with the community to develop a fact-checking system, so that hopefully this will be a way forward for smaller countries like us, with a very small population speaking one language. Most of the time, what happens is that when this kind of information is posted online in our native language, you know, the algorithms cannot pick it up, and that is challenging for us. I hope that all these platforms do consider us, because we also need to exist, and we need that support from everyone. At the national level, we are doing our work, but it is labor-intensive, so that support is required. Thank you.
Peace Oliver Amuge: Thank you very much, Aiesha, for pointing out those gaps and the need for capacity-building, and also for empowering, you know, the citizens when we talk about issues of countering disinformation. And we are in a multi-stakeholder space, and I think it would be nice to talk about that a little bit. I will come to you, Juliano, and my question to you is this: how can multi-stakeholder partnerships and public-private collaborations improve efforts to combat election disinformation and expand media literacy programs to reach all parts of society?
Juliano Cappi: Thank you so much. Well, I decided to bring a reflection here based on fiction. In Orwell's novel 1984, he states: who controls the past controls the future; who controls the present controls the past. Orwell builds a dystopian reality in which a state institution, the Records Department, retrieves every single piece of stored information and rewrites it, if necessary, to conform to the party's vision. He reminds us of the sets of institutions, disciplines, and propositions built throughout human history to organize discourse. More importantly, he sheds light on the social disputes to control and impose discourse as a strategy for maintaining or gaining power. Well, at this point, we must recognize that the internet has brought the challenge of organizing speech to an entirely new level. This is a challenge for society as a whole. In this sense, I understand that multi-stakeholder spaces are especially important to foster social arrangements capable of dismantling a highly developed industry, the disinformation industry. At CGI, we have been working on the production of principles and guidelines to combat disinformation. I guess what happened in 2018 in England was like a trigger to promote debate at the international level on disinformation. Even before that, in 2017, the United Nations signed a declaration on freedom of expression, fake news and disinformation. In the same year, the European Union sponsored a first major study on disinformation, Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. One year after that, in 2018, the European Union created the High-Level Expert Group on Fake News and Online Disinformation, which produced an important first report called A Multi-dimensional Approach to Disinformation. This is kind of important. I ask you please to go to the next slide, because it is in 2018 that the Brazilian Internet Steering Committee created the Internet and Democracy Working Group, which produced a first publication, the Fake News and Elections Guidebook for Internet Users. The guide outlined the general problem, presented concepts, discussed risks and offered guidelines to prevent the distribution of disinformation. In 2018, we had the election of Jair Bolsonaro in Brazil. The working group carried on working on the challenges imposed by disinformation and produced the report Disinformation and Democracy, discussing international initiatives to combat the disinformation industry and proposing 15 directives to address the phenomenon while promoting what we now call information integrity. In 2021, the working group presented another work, Contributions to Combating Disinformation on the Internet during Electoral Periods, looking specifically at electoral periods. Additionally, CGI.br participates in the Electoral Supreme Court's task force to combat disinformation. All the work that you can see here in the presentation is available on the internet. It is in Portuguese, but nowadays we can easily translate a PDF into any language through internet applications; and also, if any country or working group is interested, we could translate this work. But you should consider that it tries to address a specific reality, which is what happens in Brazil. Still, my feeling is that we can do more. We should ask ourselves about the impact of the work carried out in multi-stakeholder spaces, and this obviously includes the IGF, to combat disinformation.
I believe we may find opportunities to improve processes and foster intersectional collaboration to better integrate these different forums. We carry out a lot of work every year; lots of people come to the IGF. It is time for us to recognize that we have to think about how to organize all of this, considering that in some measure we have failed as a society to combat disinformation. Thank you.
Peace Oliver Amuge: Thank you very much, Juliano, for that. I think you shared the need for research, for harmonizing strategies and efforts, and for embracing collaborations. We are about to open up to hear from you all, but let's just hear from Nazar first. Nazar, we would like to hear from you: what role should digital platforms and tech companies play in reducing disinformation during elections, and how can regulations ensure that they are held accountable?
Nazar Nicholas Kirama: Thank you for posing this question. Can you hear me? Okay, thank you so much for organizing this. My name is Dr. Nazar Nicholas Kirama from Tanzania, and before I answer that question, I would like to give a little background on why we are discussing misinformation in the electoral context. It is because elections have an enormous power either to put people in power or to disempower candidates. It is very fertile ground to which misinformation, disinformation and fake news are attracted. It is a sort of space where a lot of activity happens in a very short period of time during the campaign. Look, for example, at the elections we had in the United States this year: there was a lot of information, misinformation, disinformation, fake news. I wouldn't want to comment much on that, but I think this led to one of the candidates' campaigns being, in a sense, sunk because of misinformation. If I were to look at what the tech companies and platforms need to do to mitigate the situation, to ensure that electoral processes are free of all these bugs in terms of disinformation, misinformation and fake news, there are several areas where the platforms and tech companies need to either invest or do more. Number one is being proactive in content moderation. They need to implement advanced algorithms that will actually detect and flag any misleading information, so that people who are going to elect a candidate can understand whether the information being put out there about a campaign is the right kind of information. And before I proceed, I would like to say that, in terms of electoral processes, the tech companies and platforms have become a sort of electoral commission, without regulation and without anybody to answer to. Because they are out there, the campaigns or the countries that are affected have little or no means of making the platforms and tech companies answerable for content posted by either the proxies of the campaigns or bad actors working for a certain campaign. So I think regulation of what they do is very important. Number two is transparency in algorithmic policies. These tech companies and platforms need to be transparent about the algorithms they use, so that information is clear and out there, and misleading content about candidates, for example, is put away. Number three is collaboration with fact-checkers. The platforms should collaborate with independent fact-checking organizations to verify the accuracy of content and label or demote false information; this partnership ensures objectivity and credibility. Because these tech platforms have the ability to make things go viral and to reach thousands and millions of people around the world and within a country, it is important that, through this transparency and partnership with fact-checkers, we are able to undo all these deepfakes, fake news and disinformation, and make sure the right kind of information about a candidate is put out there. So that is very critical.
Number four is having public awareness campaigns, within the platforms themselves, within the country, and among the actors: the politicians and the regular consumers on tech and media platforms. Awareness is very key, and ordinary citizens need to know when to believe that a piece of information about a certain candidate is true. So that is very important. There also have to be real-time safeguards during electoral periods. The tech platforms have to collaborate even with the electoral commissions to ensure that these kinds of safeguards are put up and that the information out there is accurate. Then, regulatory measures to ensure accountability. The platforms and tech companies have to be accountable for the content that is posted on their platforms online. This is because the tech companies and platforms sometimes tend to hide behind the veil of freedom of expression and all of that. But freedom of expression does not exclude them from accountability. They have to be accountable in some way for content that is posted and that is false. I think that is very key to making sure deepfakes, misinformation and disinformation are rooted out of electoral processes, so that in the end citizens enjoy the right of electing the right kind of candidate, not on the basis of misinformation or information from bad actors against a candidate, but on the basis of the actual policies that the candidate puts out there for the common or ordinary citizen to consume. Because misinformation, disinformation, fake news and deepfakes have the ability to disenfranchise the citizens of a particular country. If the tech platforms are not accountable for the content, that means they are effectively able to elect a president of a country, a member of parliament, a judge, for example. In the US, I know judges get voted into office. That's why I said at the beginning that these tech companies and media platforms have become electoral commissions without regulation, without guardrails to ensure that the content delivered on their platforms resonates with what is actually happening on the ground. With that, I thank you.
Peace Oliver Amuge: Thank you very much, Nazar, for your input and for highlighting some of the issues that are happening, and also the steps that need to be taken by platform owners and other stakeholders: for instance, valuing and focusing on accountability, transparency, fact-checking, and awareness creation. I must say that, as someone who works in the African region, I followed the elections happening across Africa. We had over 18 countries going through elections, and disinformation was such a big threat to human rights: freedom of expression and civic engagement were undermined, and, as Nazar just mentioned, people were deciding based on rumours and fake news. So I think that's a very important point, and also access to information. We use digital platforms a lot to access information, and these were things that were very much undermined during elections. So we will open up a bit. Do we have anyone online? If you have a question here, you can raise your hand. One, two, three. But let's just check if anybody is online. Is there anyone? Okay. So you go first.
Audience: Thank you so much. I have a question for Giovanni. You spoke about a platform, an inclusive platform, that more or less says whether misinformation or disinformation impacted the result of an election. Are its conclusions binding for the decision-makers? Because the decision-makers may have an interest in the election. They may be candidates; the sitting government may also be a candidate for a new term. So are the conclusions of this platform binding for the decision-makers?
Peace Oliver Amuge: Thank you. Let’s take the three questions and then the fourth one here. OK.
Audience: Thank you. My name is Nana. So I listened, and there's a lot of conversation around the right candidate. And that sounds like bias towards a particular set of values, because I have observed elections for at least the past 15 years, and I've also worked on disinformation and misinformation for a long time. And the fact remains that in every election, all parties contribute to misinformation. Our personal biases might tend to show us more from one side. I say our personal biases because even online, the cliques, the people we follow, help tell our algorithms what to bring to our feeds. So I'm wondering if we're looking at it objectively. Then secondly, around platform accountability: I agree with you, platforms should be held accountable. But I'm concerned: accountable for what? We have to be very clear what we are asking platforms to be held accountable for. If we decide to start holding platforms accountable for all forms of fake news posted on the platform, it's a roundabout way to stifle free speech. I say this because a platform might have the ability to pull down news that has been verified by fact-checkers as fake news; but if there's no means of verifying it, it won't pull it down just because of my opinion. You see my point? Recently, X, formerly Twitter, introduced Community Notes, and anyone who has Community Notes access will know that people even misuse it. And this is supposed to be the court of public opinion. We have to be very careful. I say this because, as a Nigerian, I have seen different African governments find ways of trying to regulate and hyper-regulate social media. When we open a door, we have to be very clear where that door points, so that we don't open a Pandora's box that will be very difficult to shut. And I agree with you on advanced algorithms to detect and flag misinformation, but it's also very, very important that, for all algorithms, there is explainability. Because of cultural context, there are words that, when I say them, mean something else than when someone sitting in Italy says them. I can tell someone, oh, you're so silly, you're so foolish, and it's banter, as we call it; but those words count as abuse and insult in a different language. So algorithms, while advanced, may not be the best people, or the best tools, rather, not people, to flag misinformation. I do 100% agree with working and collaborating with fact-checkers, because it's very important that we have the humans, the persons who work on this issue. So yeah, this is my contribution, just saying we should be a little bit more circumspect about some of the things that we're proposing. Thank you very much.
Peace Oliver Amuge: Thank you. There was one more hand, the last one, and then we come to you.
Peterking Quaye: Yeah, OK. So thank you so much. My name is Peterking, for the record, from the Liberia IGF, and there are two interventions I would like to push across. First, on what the other colleague just said about platform regulation: I can attest to that. With platforms like Meta, I have worked with them on something similar, concerning election content that amounts to misinformation or disinformation. So in terms of electoral content, they are doing something about regulating content: when content that has not been fact-checked is reported, they pull it down, on the basis that someone local is flagging that particular content as not true. And the other issue, in the electoral context with respect to misinformation: I think there is a need for consistent and sustainable national civic education, because basically you tell your local story better than anybody. Through constant and sustainable national education, we can push misinformation and disinformation out of the electoral context. These are my two interventions, please.
Peace Oliver Amuge: Thank you. So there is a hand at the back. Since the mic is close by, let him take it there, and then you have it last. Thank you for understanding. Thank you for this. Can I move? Can I go on? Yes, please. Yes.
Audience: OK. I'm Kosi, I'm a student from Benin. From my understanding, it's not right to say the platform should be responsible for the information I put online. If I put something online, I am supposed to answer for it. The platform can, at any time, if a government requests information, share my name and the information I shared on the platform. It's very important for everybody to know that, because information is freedom. I know the information I'm sharing is meant to do something; it's for information. If my aim is destruction, I am supposed to answer for that as well. That is very important. But all the platforms we have now are running their own regulation processes; let us let them do it better. Thank you.
Hello, I'm Tim. Thank you for giving me the possibility to talk. You know, I'll tell you my favorite joke: what's the difference between a conspiracy theory and the truth? Six months. Remember COVID times: how much we heard about COVID-19, how much of that turned out not to be factually correct or even to be plain wrong, and how some of what we called conspiracy theories turned out to be the truth. And here I want to highlight the fact that we should be especially precise about what we mark as disinformation or misinformation or even fakes, especially when we are talking about elections. Because, as was mentioned before very correctly, elections are not about fact-checking. They're about a political battle and political bias, where all the parties, directly or indirectly, are basically fuelling misinformation narratives in the media landscape just to win. And sometimes they are supported by the establishment and the authorities in power, because they just want to sit in that chair for another four years. It's obvious. So I think we should be very precise here. And to tell you more about this: we have established a fact-checking association here in Russia, not here, there in Russia, with the intent to share all the possible experience we have in fighting fakes and disinformation, and moreover to share our tools and platform on an absolutely free and absolutely open basis. There is even a tool to detect deepfakes: you basically upload a video and it highlights a particular segment in the video, saying that the probability of a face being deepfaked is 70% or 80% or 97%, whatever. So I advise everybody to be especially precise about what we label fake or not, understanding that political and electoral fakes are, most of the time, a battle in which somebody is trying to get more power, not to get to some point of truth. And remember the lessons of COVID-19, where a lot of conspiracy theories, even claims for which people were persecuted and sometimes fined or even jailed, turned out to be absolutely true. Thank you.
Peace Oliver Amuge: Thank you. So let's give the panel some time now, and Giovanni, we will come to you first.
Giovanni Zagni: Thank you. So I'll answer first of all the question from the gentleman in the second row. Talking again about how the thing is framed in Europe, I have to be very clear that the occasion when a candidate says something false is not something that is addressed by the task force. Absolutely not. Neither the task force, nor the Code of Practice on Disinformation, nor the European Digital Media Observatory, of which I'm a member, none of these entities, and in no way the general framework in Europe, has any interest in framing the political debate or in labeling political expression in any sense. So all the candidates in Europe can say basically whatever they want, and there will not be any direct intervention established by this framework. The things that the Code and all the other stakeholders are involved in are things like transparency in funding for political advertising, for example, or flagging cases of possible coordinated inauthentic behavior. So, for example, a civil society organization or a fact-checking organization can bring to the table something that they've noticed: I don't know, what they think is a bot campaign, or a specific account that is particularly active in spreading demonstrably false news. And then there is no coercive way to oblige the platforms to do anything, but it's up to the platforms to decide if that specific campaign, that specific instance, that specific behavior violates their terms of use. So this is how things currently stand at the European level. The second thing I wanted to mention is a thought about how countries should regulate social media. My personal opinion, and it is probably not that of all the members of the panel or all the people in the room, is that countries should stay as far away as possible from directly regulating, through law, anything like the spreading of false content on the internet per se, because in no way should saying something false be punishable by law, with some exceptions like libel or slander, whatever. Generally speaking, freedom of expression has to be the most important value there is. At the same time, though, I wanted to point out that basically no human platform or means of communication is completely unfiltered, or, if it is, it very soon turns into something that nobody, I mean nobody sane of mind, wants to be in. In all the countries that I know of, even those with strong protections, completely unabated free speech doesn't exist. So in terms of what we should do when it comes to disinformation, my personal idea is that something like labeling is probably the best thing to do. Fact-checking, in my opinion (thank you, you're very kind, helping me out with this), fact-checking, in my opinion, is not about handing out cards saying who's telling the truth and who's saying something false; it is more about providing contextual information to the user and saying: okay, this is what's out there. You can say that we never went to the moon, that's fine, but keep in mind that, according to this whole list of reputable sources, that doesn't appear to be what really happened. Then it's up to you to make up your mind, to evaluate whether those sources are fine. But still, I think that providing more information is always better than providing no information.
But this is just my personal opinion. And with that, I shut up.
Peace Oliver Amuge: Thank you very much, Giovanni. Juliano, do you want to take some of the questions?
Juliano Cappi: Yes, thank you. Well, I was trying to make a point about what we are dealing with here, and what our colleague has said has everything to do with that. We have a huge dispute over power. Disinformation has everything to do with power: those who have power want to maintain it, and there are those who want to gain it. Then I would like to address a few points that were mentioned here and in the panel. First, I could not say that there is no bias in platform business models, considering who has been gaining power around the world in the last 10 years. In Brazil, in Europe, in the United States, and in many other countries in the world, we can see extreme-right groups gaining power in Congress and in the media. I mean, it's not just political power, it's communication power. So I believe there is a relation between the kind of business model established in digital platforms and the advancement of certain political views. There is bias; it is biased, and we cannot just imagine that there is no bias. This is important because we can try to investigate where the money that finances the industry of disinformation is coming from, and this is very important. Follow the money is another thing that we have to do. We cannot turn a blind eye to who is financing disinformation. The second point is that I wouldn't be concerned about an excess of regulation at this time, because any regulation is so difficult to get. Even though Europe has done a great job on the DMA and DSA recently, even in European countries the challenge of producing any regulation is still great. And in Brazil, we have no platform regulation at all. This is a fight we are trying to wage, and it's very hard. Of course, we should consider that regulation for the digital era should be based on principles, and I would like to bring in another principle. I like very much the idea of trusted flaggers that my colleague here brought up, but there is a principle which lies behind the European regulation, which is systemic risk. This is very, very powerful, because it is difficult to establish specific kinds of content, to believe that we can, through algorithms, find what is wrong and what is right, or what is true and what is conspiracy theory. We will not do this, but we can hold companies accountable for the systemic risks they put in place in society. I believe we can find a fair equilibrium for regulating content moderation through this principle of systemic risk. There is a name for this in Portuguese; I'm trying to remember the name under which this principle has been used in some regulation, but I forget now. It's something like duty of care, and it is quite important, I would say. Finally, to finish, I would like to mention a consultation that we have done in Brazil, a consultation on digital platform regulation, in which we established three pillars for regulating digital platforms. The first is disputability, as a concept from economic theory. We cannot sort out the disinformation problem while we still face the impact of companies that concentrate so much market share in society, like Instagram and WhatsApp, and we have to face the challenge of building disputability. The second is digital sovereignty. We have to look at infrastructure. There is a concept of digital public infrastructure which is gaining hype right now.
But it's important to understand whether infrastructures, be they private or public, serve the public interest or business interests. To the extent that some infrastructures serve business interests over the public interest, we should regulate those infrastructures, even though they are private. And to finish, we have to regulate content moderation, and I believe this idea of systemic risk is a good starting point for discussing what kind of regulation we want in different countries. Thank you.
Peace Oliver Amuge: Thank you. Let me just check if Poncelet wants to intervene. Is Poncelet still online? Poncelet, if you can hear me, do you have any comments on the questions?
Poncelet Ileleji: Yes, thank you very much. I think, overall, we have to realize that any disinformation in any electoral context impacts the common man, those at the grassroots level. So whether it's platforms, whether we use fact-checking sites, the most important thing is advocacy, so that communities know how misinformation and disinformation can affect their lives. And the only way to do it is by empowering people, especially those whom communities relate to at the grassroots level. Those people usually have community radios, and the power of community radio is still very important: people will always listen to what they hear from their own community. We have to have avenues to empower those people and get all stakeholders involved. Social media has been a game changer. I remember way back in 2006 when Time magazine named "You" as Person of the Year. You look at it today, it's still very relevant. The Person of the Year is still you, because the amount of information and disinformation online has really contributed to a lot of very unfortunate things in the world, especially in electoral processes. So let us see how we do. I don't have any one-size-fits-all solution, but within our own context, I know the main focus should be addressing people at the grassroots level. Let there be no default. Thank you.
Peace Oliver Amuge: Thank you, Poncelet. Yes, you can go on.
Nazar Nicholas Kirama: Thank you so much. I wanted to respond a little bit about the right kind of candidate; she was talking about the right kind of candidate. I was contributing that from the perspective of the level of information available, for example, about the candidate, not the right candidate in the sense of the ideal candidate for that post. I meant that when there is, for example, mis- or disinformation against another candidate, the chances are that one of the two candidates will be disenfranchised in terms of the right kind of information at that time. So I didn't mean having the right candidate, the ideal candidate for that post; I wanted to clarify that. And one of the things we have to look at is that regulations have been there since the world came into being, and I cannot imagine a space where there's no regulation at all. There has to be some form of regulation. What we should be against is overregulation in whatever we are doing. For example, if you overregulate people who are becoming innovative, or certain innovations, that means, first, you will stifle competition, and second, you will stifle, you know, the growth of that particular space. So I think regulation, transparency, and having people be accountable will make the space a level playing field where everybody can interact and have the right, for example, to have your content read, and also the right not to have anybody stepping on your toes. I think that is what we are looking at. We're not looking at, you know, making sure that, for example, all these tech companies or platforms are banned because of content posted by the end users. So I think there has to be some kind of regulation, because just imagine if I walked into this room naked. Yeah, there's no regulation written on the door saying that you can't walk in here naked, but if I walked in here naked, people would go like, you know. So I think we have to have some form of regulation, and these regulations have got to be facilitative regulations. They have to facilitate the tech companies as well as the platforms: they do their stuff, people read the stuff. But they cross the line, the red line, when they allow their platform to be used especially for disinformation, because disinformation is intentional, unlike, for example, misinformation. Disinformation is intentional: I create content and disseminate it to disparage, for example, your personality. If you are a candidate, I say this guy is a rapist, for example, but the guy is not a rapist. If that content, you know, continues to be on the platform, the impact on the end users will decimate that particular candidate. So I think there has to be, in my opinion as a speaker, some kind of regulation; accountability and collaboration are very key, you know, engaging the fact-checkers to ensure that the information being put out there, maybe by third parties, is the correct kind of information about a particular candidate. So I think awareness and collaboration are very key in terms of where we are going in the future. That would be my 50 cents contribution.
Aiesha Adnan: This has become a very, very interesting discussion now. Yeah, for me, I don't really believe that we should try to make the platforms accountable for content actually uploaded by someone else. When we talk about regulation, we might be saying that this is a simple thing, but we know how humans, and a lot of people with power, try to influence things. That's the reason I don't believe we should try to force the platforms to remove content. All these platforms, you see, are run by community guidelines; they are available and visible to everyone. And I believe, with that, it is more about society's role in helping debunk disinformation through the kinds of awareness campaigns we are talking about, because we cannot just let the platforms decide whether something is true or not, especially when they don't have enough information. So that is my take on that. And another point: we have talked a lot about regulation and about maybe holding the platforms accountable, but we have a bigger, bigger work to do. That is, as some of the audience members mentioned, civic education, especially on information integrity, through information literacy programs. Because this doesn't only impact elections; it's a general thing that we need to learn: what misinformation and disinformation are, and how you really understand deepfakes and all. So that is what I believe we should be focusing on, with less reliance on the platforms, yeah. Thank you.
Peace Oliver Amuge: Thank you very much. I think we have had a very good conversation. Aiesha is starting something, but we can't open it up now. I should note that she does not believe we should emphasize content moderation, because why should platforms moderate something they did not put there? I don't know what you think, but we have only ten minutes, so we can't go into this conversation; it is a debate for another day. Otherwise, I have had fun moderating this session. Before I sum up some of the things that have come up, I want to give you one minute each for your parting shots, if you want. We can start with Aiesha. Very quickly, thank you.
Aiesha Adnan: Thank you. This has been an interesting discussion, and coming from a very small population, it means a lot to be able to be here and talk about some of the challenges we face. I hope that some of you here will consider us when you design your tools and other interventions.
Nazar Nicholas Kirama: Thank you, Madam Chairman. My parting shot would be to direct more effort toward collaboration, making sure we reach everybody out there, down to the ordinary citizen impacted by disinformation, whether we deploy facilitative regulation or engage in dialogue with the platforms and tech companies on how they can root out the scourge of disinformation. I think that is important for all of us. Awareness, and mitigation from the end-user perspective, is very key as we move forward. I wish you luck in where you are going, and I hope you embrace awareness and fact-checking, platform accountability and facilitative regulation. Thank you.
Juliano Cappi: Thank you. Information integrity and disinformation are at the base of the issues that we have to sort out. Just to finish: what we see is a kind of cynical agreement over general values like privacy and freedom of expression that prevents us from debating the problems some actors are causing to society. We have to stop this, because I have been coming to the IGF for ten years and I can't stand it anymore: 'Oh, we can't advance, that would attack privacy.' Seriously, my friends, this has become sort of ridiculous. We need to start facing the problems and telling each other what we have to sort out. Seriously, we cannot give up on society. And I would like to mention that the Brazilian Internet Steering Committee has a booth here, where we have hard copies of the work we have done on the consultation on digital platform regulation. It presents a scenario of the main disputes in Brazil, and I will give you a copy if you are interested. This is what we need to do: bring up what the disputes in place are and try to sort out those problems. I'm sorry for this final speech, but I am really concerned. Thank you.
Giovanni Zagni: So my final thought is that there is a strong regional and national specificity to these problems. The issue of disinformation is absolutely not the same even inside Europe. The problems that I can observe in my country as an Italian are probably completely different from what a Norwegian sees (well, Norway is outside the European Union, but let's say a Scandinavian country), or from Eastern Europe. What happens in each of these regions is very specific, and I'm not even thinking about what is happening in the Maldives or in Tanzania or in Brazil. So one thing that I take away from this session is how the issue of disinformation can be a rather academic and theoretical matter from one perspective, and one of the most pressing and urgent issues from another perspective, in another area of the world. There have been cases in the past few years when disinformation has had such a concrete impact as to harm people and to be a real problem for the whole of society. I do think that one of the most difficult things is to agree on some common ground at the global level. It will probably, and I'm sorry about that, require much more listening and much more discussing. And I think it's great that a forum such as the IGF exists to have this kind of discussion.
Peace Oliver Amuge: Thank you. Would you like to give us your parting shot in just one minute? Okay, we'll get back to you. I am running short on time, so let me wrap up. Thank you very much to the panelists, and to all of you for your time, your attention, and your comments and input into this discussion. To sum up some of what has come up: we talked about public awareness; we need collaboration and synergies, and to make use of existing reports; and we heard how, in different contexts, regulation and a multi-stakeholder approach can work together, alongside the promotion of fact-checking. Thank you all. Have a good evening, ladies and gentlemen. Bye.
Giovanni Zagni
Speech speed: 118 words per minute
Speech length: 1573 words
Speech time: 797 seconds
EU Code of Practice on Disinformation as voluntary co-regulatory instrument
Explanation
The EU has implemented a Code of Practice on Disinformation as a voluntary and co-regulatory instrument. This code involves various stakeholders including platforms, advertisers, and fact-checkers to collectively address disinformation issues.
Evidence
34 signatories including Meta, Microsoft, TikTok, and fact-checking organizations
Major Discussion Point
Regulations and frameworks to address disinformation
Agreed with
Juliano Cappi
Nazar Nicholas Kirama
Agreed on
Need for multi-stakeholder collaboration
Tension between combating false information and protecting free speech
Explanation
There is a tension between efforts to combat false information and the need to protect freedom of expression. Regulation of online content should prioritize freedom of expression while providing contextual information to users.
Evidence
Suggestion of labeling and fact-checking as preferable to content removal
Major Discussion Point
Challenges in addressing disinformation
Regional and national specificity of disinformation problems
Explanation
The issue of disinformation varies significantly across different regions and countries. What is a pressing issue in one area may be a more theoretical concern in another, making it challenging to agree on common ground at a global level.
Evidence
Differences in disinformation issues within Europe and between different parts of the world
Major Discussion Point
Challenges in addressing disinformation
Poncelet Ileleji
Speech speed: 122 words per minute
Speech length: 523 words
Speech time: 255 seconds
Need for fact-checking websites supported by organizations like UNESCO
Explanation
Fact-checking websites are crucial in combating disinformation during elections. Organizations like UNESCO have been supporting the establishment of such websites in various countries.
Evidence
Example of UNESCO supporting the setup of a fact-checking website in Gambia for their last presidential elections
Major Discussion Point
Regulations and frameworks to address disinformation
Agreed with
Giovanni Zagni
Nazar Nicholas Kirama
Agreed on
Importance of fact-checking in combating disinformation
Empowering citizens and communities at grassroots level
Explanation
Empowering people at the grassroots level is crucial in combating disinformation. Community radios play a vital role in disseminating accurate information to local communities.
Evidence
Importance of community radios in reaching people at the grassroots level
Major Discussion Point
Role of different stakeholders in combating disinformation
Aiesha Adnan
Speech speed: 136 words per minute
Speech length: 1250 words
Speech time: 548 seconds
Importance of tailored programs fitting cultural norms
Explanation
Programs and initiatives to combat disinformation should be designed to fit the cultural norms of specific countries. One-size-fits-all solutions may not be effective in addressing disinformation across different cultural contexts.
Evidence
Example of the Maldives presidential election where disinformation mainly came from online media rather than traditional media
Major Discussion Point
Regulations and frameworks to address disinformation
Importance of civic education and information literacy programs
Explanation
Civic education and information literacy programs are crucial in combating disinformation. These programs help citizens identify misinformation and make informed decisions during elections.
Major Discussion Point
Role of different stakeholders in combating disinformation
Differed with
Nazar Nicholas Kirama
Differed on
Role of tech platforms in content moderation
Juliano Cappi
Speech speed: 99 words per minute
Speech length: 1531 words
Speech time: 923 seconds
Brazilian Internet Steering Committee’s guidelines and reports on combating disinformation
Explanation
The Brazilian Internet Steering Committee has produced several guidelines and reports on combating disinformation, especially during electoral periods. These documents provide directives and contributions to address the phenomenon of disinformation.
Evidence
Fake News and Elections Guidebook for Internet Users, report on disinformation and democracy, contributions to combating disinformation on the Internet during electoral periods
Major Discussion Point
Regulations and frameworks to address disinformation
Multi-stakeholder partnerships and collaboration between fact-checkers and platforms
Explanation
Multi-stakeholder partnerships and collaboration between fact-checkers and platforms are essential in combating election disinformation. These collaborations can improve efforts to verify information accuracy and label or demote false information.
Major Discussion Point
Role of different stakeholders in combating disinformation
Agreed with
Giovanni Zagni
Nazar Nicholas Kirama
Agreed on
Need for multi-stakeholder collaboration
Differed with
Nazar Nicholas Kirama
Unknown speaker
Differed on
Regulation of tech platforms
Power dynamics and political biases in spread of disinformation
Explanation
The spread of disinformation is closely tied to power dynamics and political biases. Those in power often use disinformation to maintain their position, while those seeking power use it to gain influence.
Evidence
Observation of extreme right groups gaining power in various countries over the last 10 years
Major Discussion Point
Challenges in addressing disinformation
Nazar Nicholas Kirama
Speech speed: 101 words per minute
Speech length: 1091 words
Speech time: 646 seconds
Proactive content moderation and transparency in algorithm policies by tech platforms
Explanation
Tech platforms should implement advanced algorithms for proactive content moderation to flag and detect misleading information. They should also be transparent about their algorithm policies to ensure clarity in how information is presented.
Major Discussion Point
Regulations and frameworks to address disinformation
Agreed with
Giovanni Zagni
Poncelet Ileleji
Agreed on
Importance of fact-checking in combating disinformation
Differed with
Aiesha Adnan
Differed on
Role of tech platforms in content moderation
Accountability of tech platforms and social media companies
Explanation
Tech platforms and social media companies should be held accountable for the content posted on their platforms, especially during electoral periods. This accountability is crucial to ensure the integrity of electoral processes.
Major Discussion Point
Role of different stakeholders in combating disinformation
Agreed with
Giovanni Zagni
Juliano Cappi
Agreed on
Need for multi-stakeholder collaboration
Differed with
Juliano Cappi
Unknown speaker
Differed on
Regulation of tech platforms
Unknown speaker
Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second
Caution against over-regulation that could stifle innovation
Explanation
While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the digital space. A balance needs to be struck between regulation and facilitating innovation.
Major Discussion Point
Role of different stakeholders in combating disinformation
Differed with
Nazar Nicholas Kirama
Juliano Cappi
Differed on
Regulation of tech platforms
Difficulty in objectively identifying misinformation in political contexts
Explanation
It is challenging to objectively identify misinformation in political contexts as all parties contribute to misinformation during elections. Personal biases can influence what is perceived as misinformation.
Major Discussion Point
Challenges in addressing disinformation
Need to consider cultural context in algorithmic content moderation
Explanation
Algorithmic content moderation needs to consider cultural context as words and phrases can have different meanings in different cultures. This is crucial to avoid misidentifying content as misinformation or abuse.
Evidence
Example of words that may be considered banter in one culture but insults in another
Major Discussion Point
Challenges in addressing disinformation
Agreements
Agreement Points
Importance of fact-checking in combating disinformation
Giovanni Zagni
Poncelet Ileleji
Nazar Nicholas Kirama
EU Code of Practice on Disinformation as voluntary co-regulatory instrument
Need for fact-checking websites supported by organizations like UNESCO
Proactive content moderation and transparency in algorithm policies by tech platforms
Multiple speakers emphasized the crucial role of fact-checking in addressing disinformation, whether through voluntary codes, dedicated websites, or platform policies.
Need for multi-stakeholder collaboration
Giovanni Zagni
Juliano Cappi
Nazar Nicholas Kirama
EU Code of Practice on Disinformation as voluntary co-regulatory instrument
Multi-stakeholder partnerships and collaboration between fact-checkers and platforms
Accountability of tech platforms and social media companies
Speakers agreed on the importance of collaboration between various stakeholders, including platforms, fact-checkers, and regulatory bodies, to effectively combat disinformation.
Similar Viewpoints
Both speakers emphasized the importance of localized, culturally-sensitive approaches to combating disinformation, focusing on empowering communities at the grassroots level.
Aiesha Adnan
Poncelet Ileleji
Importance of tailored programs fitting cultural norms
Empowering citizens and communities at grassroots level
Unexpected Consensus
Caution against over-regulation
Giovanni Zagni
Unknown speaker
Tension between combating false information and protecting free speech
Caution against over-regulation that could stifle innovation
Despite coming from different perspectives, both speakers cautioned against excessive regulation that could potentially infringe on free speech or stifle innovation, highlighting a shared concern for balancing regulation with other important values.
Overall Assessment
Summary
The main areas of agreement included the importance of fact-checking, multi-stakeholder collaboration, and culturally-sensitive approaches to combating disinformation. There was also a shared concern about balancing regulation with free speech and innovation.
Consensus level
Moderate consensus was observed on the need for collaborative efforts and localized strategies. However, there were differing views on the extent of platform accountability and the appropriate level of regulation. This suggests that while there is agreement on the importance of addressing disinformation, the specific methods and extent of intervention remain contentious issues requiring further dialogue and research.
Differences
Different Viewpoints
Role of tech platforms in content moderation
Nazar Nicholas Kirama
Aiesha Adnan
Proactive content moderation and transparency in algorithm policies by tech platforms
Importance of civic education and information literacy programs
Nazar argues for proactive content moderation by tech platforms, while Aiesha emphasizes the importance of civic education and information literacy programs rather than platform-led moderation.
Regulation of tech platforms
Nazar Nicholas Kirama
Juliano Cappi
Unknown speaker
Accountability of tech platforms and social media companies
Multi-stakeholder partnerships and collaboration between fact-checkers and platforms
Caution against over-regulation that could stifle innovation
Nazar and Juliano advocate for stronger accountability and collaboration for tech platforms, while the unknown speaker cautions against over-regulation that could stifle innovation.
Unexpected Differences
Objectivity in identifying misinformation
Unknown speaker
Nazar Nicholas Kirama
Difficulty in objectively identifying misinformation in political contexts
Proactive content moderation and transparency in algorithm policies by tech platforms
The unknown speaker unexpectedly challenges the idea that misinformation can be objectively identified, especially in political contexts, which contrasts with Nazar’s advocacy for proactive content moderation by tech platforms.
Overall Assessment
Summary
The main areas of disagreement revolve around the role of tech platforms in content moderation, the extent of regulation needed, and the most effective approaches to combat disinformation (platform-led vs. education-focused).
Difference level
The level of disagreement is moderate. While speakers generally agree on the need to address disinformation, they differ significantly on the methods and responsibilities of various stakeholders. These differences highlight the complexity of addressing disinformation globally and the need for nuanced, context-specific approaches.
Partial Agreements
Both speakers agree on the need to address disinformation, but Giovanni emphasizes a co-regulatory approach at the EU level, while Aiesha stresses the importance of tailoring programs to specific cultural contexts.
Giovanni Zagni
Aiesha Adnan
EU Code of Practice on Disinformation as voluntary co-regulatory instrument
Importance of tailored programs fitting cultural norms
Both speakers agree on the importance of educating the public, but Poncelet focuses on fact-checking websites, while Aiesha emphasizes broader civic education and information literacy programs.
Poncelet Ileleji
Aiesha Adnan
Need for fact-checking websites supported by organizations like UNESCO
Importance of civic education and information literacy programs
Takeaways
Key Takeaways
Disinformation during elections is a significant threat to human rights and democracy
Multi-stakeholder collaboration and public-private partnerships are crucial for combating disinformation
There is a need for tailored, culturally-appropriate approaches to address disinformation in different contexts
Fact-checking, media literacy, and civic education programs are important tools for countering disinformation
Regulation of tech platforms and social media companies is a complex issue that requires balancing free speech concerns
The role of traditional and social media in spreading disinformation during elections is significant
Resolutions and Action Items
Promote and support the development of fact-checking websites and tools
Implement civic education and information literacy programs to empower citizens
Encourage collaboration between platforms, fact-checkers, and other stakeholders
Consider adopting co-regulatory approaches like the EU Code of Practice on Disinformation
Unresolved Issues
How to effectively regulate tech platforms without stifling innovation or free speech
How to address the political biases and power dynamics inherent in the spread of disinformation
How to create global standards for addressing disinformation while respecting regional and national differences
The extent to which platforms should be held accountable for user-generated content
Suggested Compromises
Adopting a principle-based approach to regulation focused on systemic risks rather than specific content
Using labeling and providing additional context rather than removing content outright
Balancing platform accountability with user responsibility through community guidelines and transparent policies
Thought Provoking Comments
The European way, so to say, is to have all the relevant stakeholders around the same table and not to impose any kind of direct intervention from the authorities on specific content, but rather to have a forum where potentially damaging cases or potential threats or things that need to be looked after are discussed, and then the platforms decide whether or not to take action.
speaker
Giovanni Zagni
reason
This comment provides insight into the European approach to addressing disinformation, emphasizing collaboration and voluntary action rather than top-down regulation.
impact
It set the tone for discussing different regulatory approaches and sparked further conversation about the role of platforms in content moderation.
Most young people, most political parties, most lobbyists all over the world today disseminate information through social media. Whether it's Twitter, whether it's TikTok, whether it's X, they have used all of these to disseminate information. And most people don't naturally use mainstream media; they use social media as the way they get information.
speaker
Poncelet Ileleji
reason
This comment highlights the shift in information dissemination and consumption patterns, emphasizing the growing importance of social media in shaping public opinion.
impact
It led to a deeper discussion about the role of social media platforms in elections and the need for digital literacy.
I would like to see a place where we actually build a culture where we promote information integrity across everyone. And when we especially talk about elections, everyone says, it's the media, it's the media spreading the information. But yes, of course, some part is the media, but it is the citizen who believes in it.
speaker
Aiesha Adnan
reason
This comment shifts the focus from just regulating media to empowering citizens, introducing the concept of ‘information integrity’.
impact
It broadened the discussion to include the role of citizens and the importance of digital literacy in combating disinformation.
These tech platforms, tech companies and media platforms have become like electoral commissions, without regulation, without guardrails for them to be able to ensure that the content that is delivered on their platforms resonates with what is actually happening on the ground.
speaker
Nazar Nicholas Kirama
reason
This comment provocatively frames tech platforms as de facto electoral commissions, highlighting the need for accountability.
impact
It sparked a debate about the extent of platform responsibility and the need for regulation.
We have to be very careful. I say this because, as a Nigerian, I have seen different African governments find ways of trying to regulate and hyper-regulate social media. When we open a door, we have to be very clear about where that door points, so that we don't open a Pandora's box that will be very difficult to shut.
speaker
Audience member (Nana)
reason
This comment introduces a cautionary perspective on regulation, highlighting potential unintended consequences.
impact
It added complexity to the discussion about regulation, prompting participants to consider the potential downsides of overzealous content moderation.
Overall Assessment
These key comments shaped the discussion by introducing diverse perspectives on the roles of different stakeholders in combating disinformation. They highlighted the complexity of the issue, touching on themes of platform responsibility, citizen empowerment, regulatory approaches, and potential pitfalls of overregulation. The discussion evolved from focusing solely on platform regulation to considering a more holistic approach involving digital literacy, multi-stakeholder collaboration, and careful consideration of cultural and regional contexts.
Follow-up Questions
How can we improve processes and foster intersectional collaboration to better integrate different forums addressing disinformation?
speaker
Juliano Cappi
explanation
Juliano suggested we need to improve how we organize and integrate work from different forums, as current efforts may not be sufficiently effective in combating disinformation.
How can we ensure algorithms used by tech platforms for content moderation are explainable and account for cultural context?
speaker
Audience member (Nana)
explanation
The speaker highlighted that algorithms may misinterpret content due to cultural differences, emphasizing the need for explainable AI in content moderation.
How can we design tools and interventions that consider the needs of smaller countries and populations?
speaker
Aiesha Adnan
explanation
Aiesha emphasized the importance of considering smaller populations when designing tools to combat disinformation, as their specific needs may be overlooked.
How can we effectively empower grassroots communities to combat disinformation, particularly through community radios?
speaker
Poncelet Ileleji
explanation
Poncelet stressed the importance of empowering people at the grassroots level, particularly through community radios, to combat disinformation.
How can we implement ‘facilitative regulations’ that balance the need for platform accountability with the protection of free speech?
speaker
Nazar Nicholas Kirama
explanation
Nazar suggested the need for regulations that facilitate platform accountability without stifling innovation or free speech.
How can we improve civic education and information literacy programs to better equip citizens to identify misinformation and disinformation?
speaker
Aiesha Adnan
explanation
Aiesha emphasized the need for broader civic education and information literacy programs to help people identify various forms of false information.
How can we address the regional and national specificities of disinformation while still finding common ground at a global level?
speaker
Giovanni Zagni
explanation
Giovanni highlighted the significant differences in how disinformation manifests across different regions and countries, suggesting the need for both localized and global approaches.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online