WS #255 AI and disinformation: Safeguarding Elections

18 Dec 2024 13:45h - 14:45h


Session at a Glance

Summary

This discussion focused on the impact of artificial intelligence (AI) on elections, disinformation, and democratic processes. Panelists from various countries shared insights on how AI has been used in recent elections worldwide. While fears of widespread AI-generated deepfakes disrupting elections did not fully materialize, AI was utilized in campaign strategies, such as automating responses to voter inquiries and creating personalized content. The discussion highlighted both positive and negative aspects of AI in elections. On one hand, AI tools have empowered smaller candidates to compete more effectively with limited resources. On the other hand, concerns were raised about the potential for AI to be used for voter manipulation and the spread of misinformation.


The panelists emphasized the need for transparency in how social media platforms use algorithms to promote political content. They discussed the challenges of content moderation, particularly in languages with limited online representation. The case of Romania’s recent election cancellation due to foreign interference and algorithmic manipulation was cited as a wake-up call for the potential risks of AI in electoral processes. The discussion also touched on the broader implications for democracy, including the need to update electoral institutions and processes to address technological challenges.


Participants debated the role and accountability of social media platforms in elections, with some arguing for increased regulation and others cautioning against over-reliance on these private entities. The conversation concluded by acknowledging that while technological governance is crucial, addressing underlying social issues like poverty and isolation is equally important in combating the spread of misinformation and preserving democratic integrity in the age of AI.


Key points

Major discussion points:


– The impact of AI on elections and disinformation, including both fears and realities


– The use of AI by political campaigns, both for promotion and attacking opponents


– Platform governance and transparency issues related to AI and elections


– The potential for AI to both help and hinder election integrity and democratic processes


– The need for regulatory frameworks and digital literacy to manage AI risks in elections


The overall purpose of the discussion was to examine how AI is affecting elections globally, looking at both positive and negative impacts, and to consider what policy and governance approaches may be needed to address emerging challenges.


The tone of the discussion was largely analytical and cautiously optimistic. While speakers acknowledged serious risks and challenges posed by AI in elections, they also highlighted potential benefits and ways to mitigate negative impacts. The tone became somewhat more concerned when discussing specific cases of election interference, but remained focused on finding constructive solutions.


Speakers

– Tapani Tarvainen: Moderator


– Ayobangira Safari Nshuti: Member of Parliament of Democratic Republic of Congo


– Roxana Radu: Chair of the Global Internet Governance Academic Network, Assistant Professor at Oxford University


– Babu Ram Aryal: Chair of Digital Freedom Coalition from Nepal


– Dennis Redeker: Online moderator


Additional speakers:


– Nana: Works on AI and ethics


Full session report

The Impact of AI on Elections and Democratic Processes


This discussion, moderated by Tapani Tarvainen, brought together experts from various countries to examine the impact of artificial intelligence (AI) on elections, disinformation, and democratic processes. The panel included Ayobangira Safari Nshuti, a Member of Parliament from the Democratic Republic of Congo; Roxana Radu (participating online), Chair of the Global Internet Governance Academic Network and Assistant Professor at Oxford University; and Babu Ram Aryal, Chair of the Digital Freedom Coalition from Nepal.


AI’s Evolving Role in Recent Elections


The panellists observed that AI’s impact on elections in 2023-2024 differed from initial expectations. Ayobangira Safari Nshuti noted that AI was primarily used for self-promotion rather than attacks on opponents, citing examples such as AI-generated speech in Pakistan. Interestingly, AI tools helped smaller candidates compete more effectively with larger ones by providing similar campaign capabilities, potentially levelling the playing field in political contests.


Roxana Radu emphasized that AI has been used for both positive and negative purposes in elections. She highlighted positive examples, such as AI’s use in India to enhance voter outreach and improve campaign efficiency. However, serious concerns were raised about AI’s potential for spreading disinformation and manipulating public opinion. A stark example of AI’s disruptive potential was the cancellation of Romania’s recent election due to foreign interference and algorithmic manipulation, which is currently under investigation by the European Commission.


Platform Governance and Transparency


A major point of discussion was the need for greater transparency from social media platforms regarding their algorithms and content promotion practices. Ayobangira Safari Nshuti stressed the importance of understanding how algorithms treat information from different sources to ensure fairness in election-related content distribution. He also mentioned a collaboration between Meta and the election commission in Congo as an example of platform engagement.


Roxana Radu pointed out that platforms have reduced staff monitoring election content, increasingly relying on AI for content moderation despite its limitations. Babu Ram Aryal highlighted that AI tools are not effective for monitoring content in local languages, creating a significant gap in content moderation capabilities. This issue raised concerns about the potential for unchecked spread of misinformation in languages with limited online representation.


The speakers agreed on the need for platforms to be more accountable, especially during sensitive periods like elections. However, an audience member questioned whether platforms should be trusted at all, given their profit-driven nature, highlighting a more fundamental disagreement about the role of platforms in democratic processes.


Election Integrity and Trust


The discussion emphasized the critical importance of election integrity, particularly in light of the Romanian election cancellation. Roxana Radu argued for the need for safeguards across the entire election process, not just voting itself, and suggested rethinking democratic processes in light of new technologies.


E-voting and AI security concerns were prominent topics. Ayobangira Safari Nshuti and Babu Ram Aryal raised concerns about the vulnerability of e-voting machines to hacking, including through AI-powered attacks. The moderator also noted the potential for AI to be used to undermine trust in elections by simulating attacks.


Addressing Disinformation and Underlying Issues


The speakers agreed that multiple stakeholders have responsibility in combating election disinformation, including platforms, election officials, and voters themselves. Babu Ram Aryal emphasized the need for fact-checkers and digital literacy initiatives to combat disinformation effectively.


While technological solutions were discussed, the conversation also touched on the importance of addressing underlying social issues. An audience member pointed out that factors such as poverty and isolation contribute to the spread of disinformation and need to be addressed alongside technological interventions. This highlighted the need for a comprehensive approach to preserving democratic integrity in the age of AI.


Unresolved Issues and Future Challenges


Several unresolved issues emerged from the discussion, including:


1. How to effectively regulate AI use in elections without infringing on free speech


2. The appropriate level of trust to place in social media platforms during elections


3. Safeguarding the entire election process against AI-enabled interference


4. Balancing the benefits of e-voting with cybersecurity concerns


5. Addressing AI-generated disinformation in languages not well-represented online


The panellists suggested some potential solutions, such as focusing on transparency and labelling of AI-generated content, combining technological solutions with efforts to address underlying social issues, and enhancing digital literacy.


Conclusion


The discussion highlighted the complex and evolving nature of AI’s role in elections. While some feared disruptions did not fully materialize, new challenges emerged, demonstrating AI’s potential to both enhance and undermine democratic processes. The panellists emphasized the need for a multifaceted approach involving technological governance, digital literacy, and addressing broader societal issues to ensure the integrity of elections in the AI era. As the moderator noted in closing, this discussion marks the beginning of an ongoing conversation about AI and elections, recognizing that adaptive strategies will be crucial as AI continues to advance.


Session Transcript

Tapani Tarvainen: Okay, sorry about this little confusion here. So we have three distinguished panellists: Ayobangira Safari Nshuti, Member of Parliament of the Democratic Republic of Congo; Roxana Radu, who is online, Chair of the Global Internet Governance Academic Network and Assistant Professor at Oxford University; and Babu Ram Aryal, Chair of the Digital Freedom Coalition from Nepal. And we are about to talk, as the title says, about elections, AI and disinformation. As I presume most of you have heard, this has been a major election year around the world. I haven’t been able to determine the exact number, but some 60 countries have had elections this year, and at least two more are to come: Chad and Croatia will have elections later this month. It was feared in advance that disinformation generated by AI would be a major factor in elections. One question we have to talk about is: did that actually happen, and if so, what should be done about it? Now, you may have noticed that there were some less than perfectly fair elections even before AI and the Internet; all kinds of election campaign meddling has happened in the past. Governments, the people in power, have, let’s say, creatively used their power to influence the outcome of elections. So how big a difference do AI and the Internet make to that? We’ll start with that: did the fears about AI messing up elections come true? Let’s start with Mr. Safari, is that okay? You go first.


Ayobangira Safari Nshuti: Okay, that’s fine. What I can say is that, as you said, in 2024 there were a lot of elections, and there was a lot of concern about AI being used by some actors to gain an advantage in them. But I think we had a lot of fear, and it did not happen the way we expected. Maybe people were also prepared to face AI, to take some measures against it. On our side, as legislators, we did not really do much work on that; it was the political actors in the field themselves who either took measures, with their teams, against the use of AI by opponents, or, for those who were planning to use it, perhaps did not use it as much as they would have liked, because the community was already prepared to see AI being used. So I think the fear was large compared to what really happened on the ground.


Tapani Tarvainen: OK, so do you think it was more of a scare that did not come to reality, at least not yet? Maybe I’ll turn to Roxana next. Perhaps you might want to comment on what happened in the Romanian elections, and whether AI had anything to do with that.


Roxana Radu: Yes, absolutely. First of all, apologies for not being able to join you physically this year at the IGF, but thank you for the invitation to join online. I wanted to bring in the example of Romania. For the first part of the year, we heard quite a bit of commentary about AI, and as we approached the end of the year, people started to feel that AI is just another tool in the toolbox of technologies available around elections. But the case of Romania changes the narrative completely. As you might have seen, about two weeks ago the Constitutional Court of Romania decided to cancel the results of the first round of the presidential elections. It is the first time this has happened in Romanian history, and the first time it has happened since AI entered the picture. There are several reasons behind this decision, but it is very clearly linked to electoral interference from foreign states, in particular one, Russia, as revealed by the intelligence reports. It was also very clearly linked to algorithmic treatment, in particular preferential treatment of one of the 13 candidates in the elections, and the decision of the court cited the illegal use of digital technologies, including artificial intelligence. So this case is, in a way, a wake-up call: all of this can be abused massively. It hasn’t happened in other presidential elections, it hasn’t happened in other parliamentary elections, but that doesn’t mean it’s not something we should have on our radar. I wrote a report earlier this year with a colleague of mine looking into some positive uses of AI during elections.
So we took the Indian case, and we concluded that in India we could in fact see some very creative uses of AI: to motivate people to go out and vote, to promote campaigns in ways that were fair, and to promote inclusivity by translating some of the politicians’ speeches in real time, some really useful ways to reach out to a larger voter base. This was May, June, the Indian elections, and at that point it didn’t look like there was a lot to worry about. By the time we entered the American elections, quite a bit of attention was being paid to the use of AI, and yet the abuse happened in a country that was not in the media spotlight. I think that’s something we should bring into the discussion: all elections have their own stakes, and it’s useful to think about this experimental use of AI, with both some good uses and some really bad outcomes. I’ll stop here, but happy to jump in later in the conversation.


Tapani Tarvainen: Thank you, Roxana. It is very nice to observe that AI is a double-edged sword that can be used for good and bad in election contexts too. But maybe I’ll hand over to Babu now for your notion of what happened, what could have happened, and what should have happened.


Babu Ram Aryal: Thank you very much. It’s my pleasure to be here and talk about this very interesting topic, and I have the privilege of speaking alongside an honourable parliamentarian who has fought elections and been through this whole process. As to whether it’s scary or normal: it has rightly been said that it has two sides, and it has a bad side as well. The benefit is that it has become very easy to make political advertisements for candidates, to run their campaigns and develop content. But at the same time, there is a bigger risk that the opposition or other stakeholders may influence their election campaign using similar content, which could be detrimental to their character. So one of the major issues in political campaigning is advertisement, and another is the transparency of the campaign. We can see that various platform providers have their own internal regulations, their own provisions about what kinds of advertisement limitations apply, and they filter some content using AI as well. If I recall correctly, various platform providers, including Facebook, Twitter and TikTok, themselves removed many pieces of political campaign content, which were later contested or challenged by the politicians themselves. So there is another risk in the use of AI in the election process by platform providers themselves, in the filtering of content. And another issue, as I mentioned, is the development of content that could be useful or could be detrimental. Some content damages an election campaign immediately, but intervention can come very late: a piece of content can damage a politician in a few seconds or a few minutes.
Even if it is removed within a few hours, that may not be enough to repair the damage to the political campaign. That is another issue that arises during elections. We have been talking from the campaign and content perspective, but another major issue, as Roxana just mentioned, is foreign influence in the election process or on election day: interfering with data, or with the ballot system or ballot process. This is a very significant part. Another question concerns the remedy process: whether we have a sufficient regulatory approach, and whether our courts, the election courts, are clear on recognizing and identifying this kind of content and its effects. These issues are evolving around elections, disinformation and misinformation. If we have a proper regulatory framework, shared understanding and digital literacy, we can manage the risks of AI and use it from a positive perspective.


Tapani Tarvainen: Thank you. You made some very keen observations there, notably that in elections timing is everything. If an AI system were to, say, try to remove misinformation and accidentally removed somebody’s political advertisement, and it took days before it came back online, they could lose the election because of that. That’s also a problem. So it can cut both ways. So the question is…


Roxana Radu: If I can jump in very quickly here. I definitely want to talk a little bit more about the question of transparency, because that has been part of the regulatory agenda for a while. Not necessarily in the context of elections, but platform transparency with regard to algorithmic practices has been on the minds of policymakers for some time now, and in the EU we do have a framework for that: the Digital Services Act. Right now the European Commission has decided to open a formal investigation into TikTok with regard to the Romanian elections, as this was the platform that was scrutinized for the illegal use of AI. It turns out the transparency prerogative was not really working in this case. One of the candidates received preferential treatment without ever having their electoral content labelled as such, so it would appear in all sorts of feeds without any mention that it was in fact part of the campaign. This is obviously in breach of the laws in place in Romania, which is why the court had to issue its decision. But we also now see the European Commission looking at this case: Romania is a member of the European Union, there is a framework in place at the EU level, and the Commission has asked for a couple of things. First of all, already on the 5th of December it asked TikTok to retain all the information that had to do with the elections for a particular period of time, I think from the end of November going all the way to March 2025. TikTok is now under obligation, as per this EU order, to retain all information related to any national election. That will include the upcoming elections in Croatia as well. For the Romanian one, they said this will be a matter of priority, so they will complete the investigation in a speedy manner.
They want to look at what content was recommended during the election period, and also at any potential intentional manipulation of the platform. So there are quite a few aspects of TikTok’s practices that will now come into question. The previous speaker also mentioned different platforms taking action throughout this year, and it’s true, we have seen lots of statements from Meta across their different platforms, from Instagram all the way to Facebook, and also from Twitter. I think we’ve had mixed messages in this period. But the truth is also that many of these platforms have actually reduced the number of staff working on these issues, on monitoring electoral content. At the end of the day, I think we have to put that in the balance. On the one hand, they’ve cut the funding they had for proper ways of dealing with this and outsourced a lot to AI, using AI tools to detect some of this content, and it turns out that doesn’t work all that well. On the other hand, they’ve made all these statements about proactive attitudes towards preventing electoral interference. I think the truth sits somewhere in the middle; the picture is a lot more mixed than the statements suggest. And the reality is that the AI tools we have today are getting better and better in particular languages, especially widely used languages, but they are not very good in languages that are not as well represented on the internet. Ultimately, if AI is supposed to be in charge of monitoring how AI is used on platforms, we can’t really trust that to be very accurate. Thank you, I’ll stop here.


Tapani Tarvainen: Thank you, Roxana. An interesting point here: historically, freedom of speech has been the freedom of newspaper owners to publish whatever they want, and of course a platform on the internet can also have its own political position; it’s just that they should be open about it. Think of something like Truth Social, which is explicitly one politician’s platform. But pretending to be neutral while not being neutral is definitely bad. Maybe I’ll hand over to Safari at this point. How do you feel about this, especially from the Congolese point of view, if you have some observations there? Is AI a different issue there?


Ayobangira Safari Nshuti: On our side, the concern we had, as she was saying, is that one of the problems with having AI monitor content is language. Much of the communication is done in our local languages first, and even some words that exist in English or French do not have the same meaning locally. We call some of the political parties by names that are common words in English but mean something really different here. In our country, one part of the political scene is referred to with something like “Taliban”. When you hear “Taliban” in English you may think of the actual Taliban, but on our side it has another meaning: it refers to members of the majority, you see. AI will not see that context. That’s why we really need some real people in the background who know the local context. Coming back to the use of AI, what I was saying is not that AI was not used in the election, but that it was not used in the way people were expecting. Everyone was watching the US election for deepfakes, and as you said, it happened in Romania while people were looking at the US. Even in the US, AI was used mainly not to make deepfakes but for self-promotion: people used AI to make chatbots to respond to emails and phone calls automatically. In Pakistan, I heard that one of the candidates, the former prime minister, used AI to make speeches: he was in prison, but was able to give live speeches by cloning his voice with AI. So AI was used, but because many people were waiting to see it on the deepfake side, I think people shifted: instead of attacking their opponents they started to promote themselves, to use AI to reinforce their own campaign teams.
So mainly in the US it was used to respond to emails, make calls, make speeches, make advertising, make nice videos and nice pictures of the candidates themselves. In Congo we had our election just before the end of 2023, so we were not among the 2024 elections, but because it was on the very last days of 2023, we were also part of that big election wave of 2024. On our side, before the election we had a meeting with a team from Meta, who came to see our election commission, and they agreed to work with us and help us put in place a team to monitor all that content. We can say that it worked partially: in the 2023 election we did not have as much deepfake content as before, because ahead of the election the Meta team came into the country, put a strategy in place and worked together with our election commission on how to fight deepfakes and misinformation.


Tapani Tarvainen: Okay, thank you, Safari. It’s an interesting observation that AI has been used as a tool for election campaigns. The question then becomes: does it help more those who have had trouble getting their message out, the underdogs, because they now have the same tools and can multiply their voice, or will it actually just help those who are already powerful?


Ayobangira Safari Nshuti: From the reports I saw, it helped mostly the small candidates, those who were under the radar, because they were able to put much effort into AI. In the US, I know there was one small candidate who was able to gain more voters than Joe Biden in his state just by using AI. He didn’t have a budget like Joe Biden’s, but he put much effort into AI. The same happened for a small candidate in Japan, also by putting much effort into AI. So it really helped those who were seen as small candidates; it gave them the same tools as the powerful candidates.


Tapani Tarvainen: That’s interesting. So it turns out that it can be a force for good. But maybe Babu has a point of view here; perhaps things are different in Nepal, or you have other observations.


Babu Ram Aryal: Not really; it’s a similar kind of context in Nepal. We had an election in 2022, just two years ago, and at that time AI tools were not used that much. But nowadays this is a big discussion: in three years we will have a new election, and we are already discussing the potential risks of AI, especially its influence on election results. In this context, Roxana also raised some issues of platform governance. Previously, we considered platforms trusted third parties. Media did not take sides through content; a media outlet might have endorsed a candidate, but not through the content itself. But this time we observed, especially during the US election, that X’s owner was posting his content repeatedly, those posts were reaching our accounts repeatedly as well, and it is said that this significantly influenced the result of the election. My point is that if platform owners are using their platforms for their personally preferred candidates, that is a big risk, and if they use AI-based content and processes, that is even more dangerous for the democratic process; it is not the standard we expect in a democracy. So one question is how we make these platforms more accountable. Another significant thing: in Nepal’s election context as well, these platform operators want to do business, and if the election commission works with them, or is influenced by them, things become riskier. In 2022 in Nepal, some candidates had problems with the election commission and complained that the commission had asked the platform providers to remove their content. So that is another very big risk for platform governance and for the institutional mechanisms of elections, the election commission as well.
So these are very significant issues; if they are influenced using AI, the risk is even greater.


Tapani Tarvainen: Thank you. At this point, I understand we have some online questions. Maybe Dennis would like to read out some questions for us.


Dennis Redeker: I’m happy to do so. This is a fantastic discussion and we have some questions in the chat; some were public and one reached me privately, so I’ll share those with you, and I thank all the speakers so far. The first question, from Ahmed, who is identified here only by one name, asks: what is the role of e-voting in this? You may think this is a different conversation, but maybe it isn’t, because it is also about trust: e-voting and AI are both matters of trust when it comes to elections. It would certainly be interesting for some of the speakers to pick up on how those combine: having AI-powered disinformation online and then also potentially casting your vote online, how does that play together? The second question is from Tanka Ayal, who asked about positive uses of AI in elections. I think that in part meets what Roxana has already presented about positive uses of AI, in the context of India; maybe that’s something you can go into in more detail, and I saw you already posted the link to the report in the chat. The third question is about the risk of elections being cancelled. We just had this in Romania, and it also relates to trust in election integrity: under which conditions could elections be cancelled, and what does it do to us as voters when we go into an election not knowing whether it will be fought fairly or whether it will be cancelled by a court later on? So maybe this is a question for Roxana, but also for the others: what does it do to a community when you cannot trust that the election will go forward, and when manipulation might mean having to take back the results of an election?


Tapani Tarvainen: So much from the online moderation team. Okay, thank you for those. Let’s ask whether anybody wants to pick up on the e-voting issue and how much, if at all, it relates to AI. E-voting has been going on in Estonia for a long time, for example, but I don’t think we have any Estonians around to talk about that. Has it had anything specifically to do with AI? Anybody want to pick up on that?


Babu Ram Aryal: Can I take this question? Yeah, being a neighbour of India, Nepal and India share a border, and recently India had an election. In India there were many challenges around the possible compromise of voting machines; Elon Musk said in one statement that voting machines could be compromised, and that sparked a bit of debate in the Indian context. Obviously this is a very challenging thing. In Nepal’s context we may not, so far, have had foreign influence in the election process, but talking from the Indian perspective again, India covers a wide range of situations, including rural areas where gaps in education are very much present, and still voting machines are in use, so in that context it is very risky. At the beginning I also mentioned that data systems and voting machines are critical infrastructure from an election perspective, and very vulnerable. If our data systems and voting machine systems are not securely protected, there is a big chance of compromise. And as Tanka asked about the positive side: at the beginning I also mentioned that this has given power to the common person to participate in the political process. We have seen many examples, even in Nepal, of a single person without any campaign organization, using only platforms, getting elected as mayor or parliamentarian. So yes, it gives significant power, as the honourable MP Safari also mentioned: an unknown person can be elected by using this content and participating in the process.


Tapani Tarvainen: Okay, it seems like Safari has something to add to that.


Ayobangira Safari Nshuti: Yeah, I would like to say that the link between electronic voting and AI is not direct. With electronic voting, the problem is that the vote may be corrupted, the vote changed: you vote for A, and the machine counts for B. That can be done by attacking the machine. But AI also plays a role in cybercrime, because attacking a machine normally requires a certain level of skill, and AI gives those skills to ordinary people. You can attack, you can hack something, just by using AI; it gives you the skill. So voting machines are now vulnerable not only to high-profile hackers but also to ordinary people who, using AI, are able to hack the system. For now, though, most uses of AI that can interfere in an election involve deepfakes and misinformation, changing the perception of the voters themselves so they can be convinced to vote for someone they would not normally vote for. And in that case I don’t know how you would cancel an election, because the voters have voted for someone, even if you have influenced them; it’s like ordinary advertising on TV. It’s not easy to detect, to determine what impact a deepfake had on an election compared to if it had not been there. With tampering with voting machines, by contrast, it is easy to see how many votes were changed by the attack. And that’s where AI gives anyone the access to act and to change the result on those machines.


Tapani Tarvainen: I’m not sure if I’m reading between the lines correctly, but you seem to be implying that AI could actually be useful in e-voting, that it could be used to detect certain kinds of tampering as well. But otherwise the link is definitely not direct. Thinking of the third question there, cancelling an election can itself be a problem, so if AI can cause so much distrust in elections that they tend to be cancelled too easily, that could be a problem. Maybe Roxana would like to address that possibility.


Roxana Radu: Yes, thank you very much for this question. I think it’s a very important one, and it’s definitely on everybody’s mind back home in Romania, I can tell you that. With the court decision announced at the beginning of December, we still don’t know the dates of the next election, but everybody is asking: can we actually trust the next round of presidential elections if we have proven post facto, after the fact, that there was so much interference? What are we putting in place to prevent this from happening next time? And it’s a big question, because we’ve just had parliamentary elections, and those elections were not challenged from the perspective of the process, but they showed that the vote was very split, so a coalition needs to be agreed before we have the date of the new presidential elections. So it’s going to take a while, and we’ll see what happens in between and whether we have institutional measures to address this. But just on the question of trust: right now there is also an indirect undermining of the democratic process through the cancellation of elections, right? On the one hand, yes, this came in reaction to what has happened, but for many people this decision is perceived as itself a violation, in some respect, of the democratic process: a court decision comes in and annuls the vote of 52% of eligible voters. So this is something that needs to be addressed in a broader conversation about how democracy itself transforms with the rise of AI and digital technologies more broadly. In a way, the processes we’ve had in place for so long, including some of the institutions overseeing the democratic process, were created in an era that had very little technology around.
Right now we’re talking about transforming these processes altogether, and we probably have to rethink a little the relationship between the forms of democracy we have and the technology that is available. And if I may jump in very briefly on the question of e-voting: if we look at the data on this, very few countries around the world have actually opted for e-voting. We have some very good examples in that category, Estonia being one of them, and a couple of examples from outside the Western world as well. But altogether, many countries have stayed away from it, because the feeling is that we are not able to prevent the kind of manipulation that might happen with e-voting. Most democracies, at least in Europe, have had that conversation, and most have decided not to move their voting processes online. Ultimately that may or may not make a big difference, because in the case of Romania we had paper ballots, and yet the whole integrity of the process was compromised. So before we get to that final stage of whether the vote is cast online or on paper, we need to think about the intermediary stages: electoral registration, campaigning, the vote counting itself, and verification and reporting. It seems that in the Romanian case there were cyber attacks at the time of the vote counting, when those paper ballots were being entered into the system, as well as during the post-election audit. This is another very important part of the democratic process, and we have to have safeguards in place across the whole electoral cycle, not just at the time of casting or counting the vote.


Tapani Tarvainen: Thank you. It does occur to me that somebody might deliberately pretend to attack an election in order to get the vote cancelled and undermine trust in the system. So instead of actually trying to affect the election, they would just create the impression of interference so that people no longer trust the system. AI may make that easier, or perhaps even be necessary to do it effectively. Another interesting observation here is that in some countries the incumbents have so much power that they tend to win, so foreign interference might actually be good for the democratic process there. But that’s also something that is very difficult to assess in any useful way. Maybe you want to carry on from that, or if not, I might suggest you consider what kind of power AI actually adds to the specific issue of spreading disinformation. We have questions in the room. Okay, hands up. Who’s first? Sorry for not noticing.


Audience: Okay. My name is Nana. Can you hear me? Okay. I have a question, especially as someone who works specifically on AI and ethics. Considering the very big distinction between algorithms and AI, because they are very different things, there is a lot of conversation around algorithmic discrimination against specific candidates. And from what I hear, there seems to be a lot of responsibility placed on the platforms. Beyond the responsibility, I’m also hearing a lot of trust, because words like “trusted partner” have been used. And I’m wondering, is that not too much? Because in a real-world sense, platforms are like vendors, right? They are businesses set up for profit. They are not NGOs; they are not civil society organizations. It’s like expecting a newspaper to publish your views, and not the views of those who pay, or of the people who set it up to push their own agenda. I’m wondering if it would not be more beneficial to push for algorithmic transparency, in the sense of publications that would allow people to understand how decisions were made by those algorithms: what did the algorithm consider in pushing this content into someone’s feed, and so on? Because we have received a lot of feedback from very right-wing people about platforms like X, TikTok, and Instagram. That feedback says that previously these platforms pushed a very left-wing, very liberal agenda: this is what we want to see, this is how the world should be run. They were run like an alternate universe to actual real life. But there has been a push, a shift in agenda, and now they feel some sort of balance has been achieved. I disagree with this, but that’s a different matter; this is the conversation, though.
And I’m wondering, in demanding certain things from the platforms, are we not, one, trying to curb free speech, just because that free speech doesn’t look like the speech we are used to, or the speech we like? And two, why do we trust these platforms? Why do we expect these platforms to comply with anything other than regulatory requirements? Why do we trust these platforms so much? That’s my big question. Thank you.


Tapani Tarvainen: Thank you. And I’ll hand it over to you quickly, but I’ll have to ask you to please be brief. We have only 10 minutes left of the session.


Audience: Okay, I’ll be really brief, although there is a story to be told about this. My thinking is very similar to your question, I suppose. I’ll start with an example: during the COVID pandemic there were a lot of conspiracy theories. A lot of people felt isolated online and started to believe that there are certain larger agendas in the world to which we are all subject. And what a lot of research found was that these people were generally ostracized in society and left isolated; there is a poverty problem, a socio-economic problem, that left people out. I feel we see this playing out in the election space as well, where isolated people come to believe such disinformation campaigns, deepfakes, and similar things. So when talking about governance, and this is to all the speakers: to what extent, if any, do you think the intervention should be on the social side rather than tech governance or platform governance?


Tapani Tarvainen: So two very good interventions and questions there. Who would like to go first? I think Babu looks like he wants to speak, go ahead.


Babu Ram Aryal: I also wanted to come back to the very topic of our session: disinformation and elections. So who is providing disinformation? Who are the agents of disinformation, especially in the election process? That has to be very clear. By now we are very clear that there is a possibility of misinformation and disinformation in the election process and on common communication platforms. So whom do we trust, and do we need to trust the platform providers at all? If we engage on them at all, then we have no choice but to trust, but it is our choice how far we confine our engagement on the platform. If you lock down your privacy settings and limit your engagement, the process will be more secure, right? So it’s very important that we ourselves decide what level of engagement we have on a platform. And when there is disinformation or misinformation, who is responsible for removing it? Platform providers have their own systems. There are two models. One is automated, AI-based: millions of pieces of content are moderated by the platform providers based on their own standards. And there is another level of moderation, manual moderation: when you complain, the platform responds, the content is evaluated, and if they think it has to be removed, they remove it. There are also other significant agents now: lots of fact-checkers. The role of responsible fact-checkers is very significant during elections, and during regular times as well. But during an election, the actors of that election have to be very precise about how we fight disinformation. By actors I mean the election commission, law enforcement, the politicians standing as candidates, voters, and civil society. All of them have to be more careful than in regular times, because targeted disinformation could be supplied during that process.
And it’s not only about business. Businesses should also be accountable. Accountability comes when you start a business; any kind of business comes with accountability. It is not a separate thing where you do business but are not accountable. So it’s very important that platform providers be more accountable when there is sensitivity; they have to take more care, because they have more responsibility. In this way we can address the major issues. And from a law and ethics perspective, of course, we need a certain model of governance, a regulatory framework, and in that way we can address these things. Thank you very much.


Tapani Tarvainen: I was just reminded that we have only five minutes left of the session, so we’ll have to start wrapping up, but let’s do one more round of comments from our panelists.


Ayobangira Safari Nshuti: Yeah, I will be very short. I would say that previously some of those platforms were seen as left-wing, but the perception has changed. The perception changed because we think something has changed in the algorithms they use. And as parliamentarians, what we want from those platforms is the one thing you said: transparency. We just want to know what is being run in the background, so we can see whether there is fairness and equity in how they treat information coming from different sources. If we have that transparency, there will be more trust. Thank you.


Roxana Radu: Yes, very briefly on the first question: I agree with Babu, there is a need for more transparency over funding, over the labeling of content, and also over promotion, right? These algorithms are not a different species; they should not be completely unaccountable. We need to look into how they promote content and why, whether there is preferential treatment, and whether that results in manipulation. That’s the second part of the question, but there is funding involved, obviously, and that also has to be transparent and placed under scrutiny. Since platforms have become the new public sphere, they are not just businesses; they are more than businesses. They are the new public sphere, where communication actually happens. People might not turn on the TV anymore, but they will receive their news from encrypted groups, from different platforms, and so on. So they provide a public channel for communication during elections, and most countries have rules in place for how you promote yourself during elections. The platforms cannot be living in a different universe; they need to abide by those rules. They are bound to apply national legislation in these electoral cycles, so this is simply a question of respecting existing legislation. And on the second question, very briefly: should the intervention be broader than just tech governance? Should we look at social aspects as well? Absolutely, I agree with you. I think we need to work on multiple levels. So far we’ve given quite a bit of attention to technology, albeit imperfectly; we have not found the right solution to all of these problems. But we haven’t really looked at what could be done on the social level, beyond just calling for more digital literacy and a better level of awareness.
I think we need to work on issues of poverty, on issues of connectivity, and on many other aspects, including welfare, to give people equal chances in society, and that is going to make democracy a better place for everybody.


Tapani Tarvainen: My watch says we have 45 seconds to go. I would like to hand over to Dennis for a final comment.


Dennis Redeker: Let me just say that this conversation has been thrilling. I really appreciate both the positive and the scary scenarios for the use, and the misuse, of AI in the context of elections. I think this is only the start of a conversation that we’ll keep having. We started planning this session at a time when we thought AI and elections in 2024 would be scary, and then, mirroring what Roxana said earlier, we had a phase where we thought we would have nothing to talk about in December because nothing had happened. And then came the Romanian elections, and there will be more. There will be more things we have to deal with. So I think this is the start of a conversation, and also a start toward more regulation and more transparency in that field. Thank you, everyone, from the side of the Internet Rights and Principles Coalition. Thank you to the speakers and the moderators for jumping into this frame.


Tapani Tarvainen: Well, thank you to the panelists, to Dennis, and to the audience as well for the great questions we had. But now we are 30 seconds over time, so let’s close it here. Thank you.



Ayobangira Safari Nshuti

Speech speed

129 words per minute

Speech length

1249 words

Speech time

578 seconds

AI use in elections less widespread than feared

Explanation

The speaker suggests that the use of AI in elections was not as extensive as initially anticipated. There was concern about AI being used to influence election results, but it did not materialize to the extent expected.


Evidence

The speaker mentions that people were prepared to face AI and took measures against it.


Major Discussion Point

Impact of AI on Elections


Differed with

Roxana Radu


Differed on

Impact of AI on election outcomes


AI used to promote candidates rather than attack opponents

Explanation

The speaker notes that AI was primarily used by candidates to promote themselves rather than attack opponents. This shift in usage was different from what people initially expected.


Evidence

Examples given include using AI for chatbots to respond to emails and phone calls, and to make speeches.


Major Discussion Point

Impact of AI on Elections


AI helped smaller candidates compete with larger ones

Explanation

The speaker argues that AI tools helped level the playing field for smaller candidates. It allowed them to compete more effectively with larger, better-funded candidates.


Evidence

Examples of small candidates in the US and Japan gaining more votes by using AI effectively.


Major Discussion Point

Impact of AI on Elections


Agreed with

Roxana Radu


Babu Ram Aryal


Agreed on

AI has both positive and negative impacts on elections


E-voting machines vulnerable to hacking, including through AI

Explanation

The speaker points out that e-voting machines are vulnerable to hacking, and AI can potentially make these attacks easier. This vulnerability extends to both sophisticated hackers and ordinary people using AI tools.


Evidence

Mention of AI giving hacking skills to normal people, making voting machines more vulnerable.


Major Discussion Point

Election Integrity and Trust



Roxana Radu

Speech speed

141 words per minute

Speech length

2098 words

Speech time

892 seconds

AI used for both positive and negative purposes in elections

Explanation

The speaker points out that AI has been used for both beneficial and harmful purposes in elections. While there are creative uses to promote inclusivity, there are also cases of electoral interference.


Evidence

Positive example from Indian elections using AI for voter motivation and campaign translation. Negative example from Romanian elections where AI was used for electoral interference.


Major Discussion Point

Impact of AI on Elections


Agreed with

Ayobangira Safari Nshuti


Babu Ram Aryal


Agreed on

AI has both positive and negative impacts on elections


Differed with

Ayobangira Safari Nshuti


Differed on

Impact of AI on election outcomes


Platforms have reduced staff monitoring election content

Explanation

The speaker notes that many social media platforms have reduced the number of staff working on monitoring electoral content. This reduction in human oversight has led to increased reliance on AI tools for content moderation.


Evidence

Mentions of platforms like Meta and Twitter reducing staff working on these issues.


Major Discussion Point

Platform Governance and Transparency


Agreed with

Babu Ram Aryal


Ayobangira Safari Nshuti


Agreed on

Need for increased transparency and accountability from platforms


Differed with

Babu Ram Aryal


Differed on

Effectiveness of AI in content moderation


Romanian election cancelled due to foreign interference and AI use

Explanation

The speaker discusses the cancellation of the Romanian presidential election due to foreign interference and illegal use of AI. This case is presented as a wake-up call for the potential misuse of AI in elections.


Evidence

Specific mention of the Constitutional Court of Romania’s decision to cancel the election results due to electoral interference and AI use.


Major Discussion Point

Election Integrity and Trust


Need for safeguards across entire election process, not just voting

Explanation

The speaker emphasizes the need for safeguards throughout the entire electoral process, not just during voting. This includes stages such as electoral registration, campaigning, vote counting, and post-election audits.


Evidence

Mention of cyber attacks during vote counting and post-election audit in the Romanian case.


Major Discussion Point

Election Integrity and Trust



Babu Ram Aryal

Speech speed

112 words per minute

Speech length

1580 words

Speech time

844 seconds

AI tools not effective for monitoring content in local languages

Explanation

The speaker highlights that AI tools are not very effective in monitoring content in local languages. This is particularly problematic in countries where multiple languages are used.


Evidence

Example of words having different meanings in local contexts, which AI may not understand correctly.


Major Discussion Point

Impact of AI on Elections


Agreed with

Ayobangira Safari Nshuti


Roxana Radu


Agreed on

AI has both positive and negative impacts on elections


Differed with

Roxana Radu


Differed on

Effectiveness of AI in content moderation


Risk of platform owners using AI to influence elections

Explanation

The speaker expresses concern about platform owners potentially using AI to influence election outcomes. This is seen as a significant risk to the democratic process.


Evidence

Mention of platform owners potentially using their platforms to promote desired candidates.


Major Discussion Point

Platform Governance and Transparency


Multiple actors responsible for fighting disinformation

Explanation

The speaker argues that combating disinformation is a shared responsibility among various actors. This includes election commissions, law enforcement, politicians, voters, and civil society.


Major Discussion Point

Addressing Disinformation


Need for fact-checkers and digital literacy

Explanation

The speaker emphasizes the importance of fact-checkers and digital literacy in combating disinformation. These are seen as crucial tools in maintaining the integrity of the electoral process.


Major Discussion Point

Addressing Disinformation


Agreed with

Roxana Radu


Ayobangira Safari Nshuti


Agreed on

Need for increased transparency and accountability from platforms


Platforms should be more accountable during sensitive periods

Explanation

The speaker argues that platform providers should be held to a higher standard of accountability during sensitive periods like elections. This increased responsibility is seen as necessary due to the potential impact on democratic processes.


Major Discussion Point

Addressing Disinformation


Agreed with

Roxana Radu


Ayobangira Safari Nshuti


Agreed on

Need for increased transparency and accountability from platforms



Tapani Tarvainen

Speech speed

150 words per minute

Speech length

1136 words

Speech time

451 seconds

Question of how much trust to place in platforms

Explanation

The speaker raises the question of how much trust should be placed in social media platforms during elections. This reflects the ongoing debate about the role and responsibilities of these platforms in democratic processes.


Major Discussion Point

Platform Governance and Transparency



Audience

Speech speed

150 words per minute

Speech length

567 words

Speech time

225 seconds

Question of whether to trust platforms as neutral actors

Explanation

An audience member questions the level of trust placed in platforms, pointing out that they are profit-driven businesses rather than neutral actors. This raises concerns about their role in shaping public discourse during elections.


Evidence

Comparison of platforms to newspapers, which have their own agendas and business interests.


Major Discussion Point

Platform Governance and Transparency


Need to address underlying social issues, not just technology

Explanation

An audience member suggests that addressing disinformation requires looking beyond just technological solutions. They argue for a broader approach that includes addressing social issues such as poverty and isolation.


Evidence

Reference to research findings about conspiracy theories during the COVID pandemic being linked to social isolation and economic issues.


Major Discussion Point

Addressing Disinformation


Agreements

Agreement Points

AI has both positive and negative impacts on elections

speakers

Ayobangira Safari Nshuti


Roxana Radu


Babu Ram Aryal


arguments

AI used for both positive and negative purposes in elections


AI helped smaller candidates compete with larger ones


AI tools not effective for monitoring content in local languages


summary

The speakers agree that AI has dual impacts on elections, offering benefits like leveling the playing field for smaller candidates, but also posing risks such as ineffective content monitoring and potential misuse.


Need for increased transparency and accountability from platforms

speakers

Roxana Radu


Babu Ram Aryal


Ayobangira Safari Nshuti


arguments

Platforms have reduced staff monitoring election content


Platforms should be more accountable during sensitive periods


Need for fact-checkers and digital literacy


summary

The speakers agree on the need for greater transparency and accountability from social media platforms, especially during elections, and emphasize the importance of fact-checking and digital literacy.


Similar Viewpoints

Both speakers emphasize the need for a comprehensive approach to safeguarding elections, involving multiple stakeholders and addressing various stages of the electoral process.

speakers

Roxana Radu


Babu Ram Aryal


arguments

Need for safeguards across entire election process, not just voting


Multiple actors responsible for fighting disinformation


Unexpected Consensus

AI potentially benefiting smaller political candidates

speakers

Ayobangira Safari Nshuti


Babu Ram Aryal


arguments

AI helped smaller candidates compete with larger ones


AI tools not effective for monitoring content in local languages


explanation

While discussing the challenges posed by AI, there was an unexpected consensus on its potential to benefit smaller political candidates, leveling the playing field in elections. This positive aspect of AI in elections was not initially anticipated in the discussion.


Overall Assessment

Summary

The main areas of agreement include the dual nature of AI’s impact on elections, the need for increased platform accountability and transparency, and the importance of a comprehensive approach to election integrity involving multiple stakeholders.


Consensus level

Moderate consensus was observed among the speakers on key issues. While there were differences in specific examples and experiences, there was general agreement on the broader challenges and necessary actions. This level of consensus suggests a shared understanding of the complex relationship between AI and elections, which could facilitate more targeted and collaborative approaches to addressing these challenges in the future.


Differences

Different Viewpoints

Impact of AI on election outcomes

speakers

Ayobangira Safari Nshuti


Roxana Radu


arguments

AI use in elections less widespread than feared


AI used for both positive and negative purposes in elections


summary

While Safari Nshuti suggests AI use was less widespread and impactful than feared, Radu points out significant cases of both positive and negative AI use in elections, including electoral interference.


Effectiveness of AI in content moderation

speakers

Babu Ram Aryal


Roxana Radu


arguments

AI tools not effective for monitoring content in local languages


Platforms have reduced staff monitoring election content


summary

Aryal highlights the ineffectiveness of AI in monitoring local language content, while Radu notes that platforms are increasingly relying on AI for content moderation despite its limitations.


Unexpected Differences

Trust in platforms

speakers

Babu Ram Aryal


Audience member


arguments

Platforms should be more accountable during sensitive periods


Question of whether to trust platforms as neutral actors


explanation

While Aryal suggests increased accountability for platforms during elections, an audience member unexpectedly questions whether platforms should be trusted at all, given their profit-driven nature. This highlights a more fundamental disagreement about the role of platforms in democratic processes.


Overall Assessment

summary

The main areas of disagreement revolve around the extent and impact of AI use in elections, the effectiveness of AI in content moderation, and the level of trust and responsibility that should be placed on platforms.


difference_level

The level of disagreement among speakers is moderate. While there is general agreement on the need for measures to ensure election integrity, speakers differ in their assessment of AI’s impact and the most effective approaches to address challenges. These differences reflect the complex and evolving nature of AI’s role in elections, suggesting that a multifaceted approach may be necessary to address the various concerns raised.


Partial Agreements

Partial Agreements

All speakers agree on the need for measures to ensure election integrity, but they focus on different aspects: Safari Nshuti emphasizes digital literacy, Aryal stresses platform accountability, and Radu advocates for comprehensive safeguards throughout the election process.

speakers

Ayobangira Safari Nshuti


Babu Ram Aryal


Roxana Radu


arguments

Need for fact-checkers and digital literacy


Platforms should be more accountable during sensitive periods


Need for safeguards across entire election process, not just voting


Similar Viewpoints

Both speakers emphasize the need for a comprehensive approach to safeguarding elections, involving multiple stakeholders and addressing various stages of the electoral process.

speakers

Roxana Radu


Babu Ram Aryal


arguments

Need for safeguards across entire election process, not just voting


Multiple actors responsible for fighting disinformation


Takeaways

Key Takeaways

AI’s impact on elections in 2023-2024 was less dramatic than initially feared, with more use for self-promotion than attacks


AI helped smaller candidates compete with larger ones by providing similar campaign tools


There are both positive and negative uses of AI in elections, including for voter outreach and disinformation


Platform governance and algorithmic transparency are major concerns, especially given reduced human content moderation


Election integrity remains a critical issue, as demonstrated by the cancellation of Romania’s election due to foreign interference and AI use


Multiple stakeholders have responsibility in combating election disinformation, including platforms, election officials, and voters


Underlying social issues like poverty and isolation contribute to the spread of disinformation and need to be addressed alongside technological solutions


Resolutions and Action Items

Need for greater transparency from social media platforms about their algorithms and content promotion practices


Platforms should be more accountable and take extra precautions during sensitive periods like elections


More fact-checkers and digital literacy initiatives are needed to combat disinformation


Unresolved Issues

How to effectively regulate AI use in elections without infringing on free speech


The appropriate level of trust to place in social media platforms during elections


How to safeguard the entire election process against AI-enabled interference, not just voting itself


Balancing the benefits of e-voting with cybersecurity concerns


How to address AI-generated disinformation in languages not well-represented online


Suggested Compromises

Focusing on transparency and labeling of AI-generated content rather than outright bans


Combining technological solutions with efforts to address underlying social issues contributing to disinformation spread


Thought Provoking Comments

But the case of Romania changes the narrative completely. As you might have seen, about two weeks ago the Constitutional Court of Romania decided to cancel the results of the first round of presidential elections.

speaker

Roxana Radu


reason

This comment introduced a concrete, recent example of AI interference in elections having major consequences, shifting the discussion from theoretical concerns to real-world impacts.


impact

It changed the tone of the conversation from speculative to more urgent and serious. It led to further discussion about the specific ways AI was used to interfere in the Romanian election and the implications for future elections.


So at the end of the day, I think we have to put that in balance. On the one hand, they’ve cut all the funding they had towards proper ways of dealing with this and outsourced a lot to AI, in fact, using AI tools to detect some of this content. Turns out it doesn’t work all that well.

speaker

Roxana Radu


reason

This insight highlighted the paradox of using AI to police AI-generated content, and the inadequacy of current approaches by platforms.


impact

It deepened the conversation around platform responsibility and the challenges of content moderation, leading to further discussion about the need for human oversight and the limitations of AI in addressing disinformation.


Everyone was looking at the US election for deepfakes, but as you say, it happened in Romania while people were watching the US. And even in the US, AI was used not mainly to make deepfakes but for self-promotion, like people using AI to build chatbots to respond to emails and phone calls automatically.

speaker

Ayobangira Safari Nshuti


reason

This comment provided a nuanced perspective on how AI was actually being used in elections, contrasting expectations with reality.


impact

It shifted the discussion from focusing solely on negative uses of AI to considering how it was being used as a campaign tool, broadening the scope of the conversation.


Considering the very big distinction between algorithms and AI, because they’re very different, there’s a lot of conversation around algorithmic discrimination against specific candidates. And from what I hear, there seems to be a lot of responsibility placed on the platforms. Beyond the responsibility, I’m also hearing a lot of trust, because words like “trusted partner” have been used. And I’m wondering, is it not too much?

speaker

Audience member (Nana)


reason

This question challenged the assumption that platforms should be trusted partners in addressing election interference, raising important points about the nature of these companies as profit-driven entities.


impact

It led to a deeper discussion about the role of platforms, the need for transparency, and the balance between regulation and free speech. It also prompted panelists to clarify their positions on platform responsibility.


Overall Assessment

These key comments shaped the discussion by grounding it in concrete examples, challenging assumptions, and broadening the scope of the conversation. They moved the dialogue from theoretical concerns about AI in elections to a more nuanced exploration of real-world impacts, the complexities of platform governance, and the balance between leveraging AI’s benefits and mitigating its risks. The discussion evolved from focusing solely on disinformation to considering both positive and negative uses of AI in elections, as well as the broader societal context in which these technologies operate.


Follow-up Questions

How can we ensure algorithmic transparency in social media platforms during elections?

speaker

Ayobangira Safari Nshuti


explanation

Understanding how algorithms treat information from different sources is crucial for ensuring fairness and equity in election-related content distribution.


What are the most effective ways to combat disinformation in local languages and contexts?

speaker

Ayobangira Safari Nshuti


explanation

AI tools struggle with local languages and context-specific meanings, making it challenging to detect and counter disinformation in diverse linguistic environments.


How can we balance the positive uses of AI in elections (e.g., increasing voter participation) with the risks of manipulation?

speaker

Roxana Radu


explanation

Understanding this balance is crucial for leveraging AI’s benefits while mitigating its potential negative impacts on democratic processes.


What measures can be put in place to prevent the cancellation of elections due to AI-related interference?

speaker

Dennis Redeker


explanation

Addressing this issue is vital for maintaining trust in the democratic process and ensuring the integrity of future elections.


How can we improve the security of e-voting systems against AI-powered cyber attacks?

speaker

Babu Ram Aryal


explanation

As AI enhances the capabilities of potential attackers, ensuring the security of electronic voting systems becomes increasingly important.


What reforms are needed in electoral institutions and processes to adapt to the challenges posed by AI and digital technologies?

speaker

Roxana Radu


explanation

Existing democratic institutions and processes may need to be updated to effectively address the new challenges presented by AI in elections.


How can we address the underlying social and economic factors that make people susceptible to election-related disinformation?

speaker

Audience member


explanation

Tackling root causes like poverty and social isolation may be crucial in combating the spread of disinformation during elections.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.