Day 0 Event #170 2024 Year of All Elections: Did Democracy Survive?

15 Dec 2024 12:45h - 13:45h


Session at a Glance

Summary

This discussion focused on efforts to combat disinformation and protect election integrity across different regions, particularly in light of recent and upcoming elections. Speakers from European institutions highlighted initiatives like the European Digital Media Observatory (EDMO) and legislative measures such as the Digital Services Act to monitor and counter disinformation. The European Parliament’s efforts included pre-bunking videos and media literacy campaigns to empower voters. In contrast, the United States was described as regressing in its approach, with platforms reducing content moderation and a rise in polarization and hate speech. The speaker from Africa Check emphasized unique challenges in Africa, including limited internet access, language barriers, and concerns about potential misuse of anti-disinformation laws. Despite fears about the impact of generative AI on elections in 2024, the final speaker noted that while AI-generated content played a role in elections, it did not have the catastrophic effects some had predicted. However, continued vigilance and research on AI’s impact on elections was stressed as crucial. The discussion highlighted regional differences in approaches to combating disinformation, with Europe taking a more regulatory stance, while other regions face distinct challenges in implementing similar measures. Overall, the speakers emphasized the importance of collaboration, media literacy, and ongoing monitoring to address the evolving landscape of online disinformation and its potential impact on democratic processes.


Key points

Major discussion points:


– European efforts to combat disinformation, including legislation, rapid response systems, and initiatives like EDMO


– US challenges with disinformation, including platform inaction and polarization


– African experiences with disinformation in elections, including targeting of journalists and electoral bodies


– The impact of generative AI on elections in 2024 and looking ahead to 2025


– Differing views on the value of legislative approaches to combating disinformation in different regions


The overall purpose of the discussion was to examine approaches to combating disinformation and protecting election integrity across different regions, with a focus on recent and upcoming elections. Speakers shared experiences and initiatives from Europe, the US, and Africa.


The tone was largely informative and analytical, with speakers providing overviews of the situation in their respective regions. There was a sense of concern about the challenges posed by disinformation, but also some cautious optimism about efforts to address it, particularly in Europe. The tone became slightly more urgent when discussing the US situation and the potential impacts of AI going forward.


Speakers

– GIACOMO MAZZONE: Moderator


– ALBERTO RABBACHIN: Representative from the European Commission


– GIOVANNI ZAGNI: Representative from the European Digital Media Observatory (EDMO)


– PAULA GORI: Representative from EDMO


– DELPHINE COLARD: Spokesperson at the European Parliament


– BENJAMIN SHULTZ: Representative from American Sunlight Project


– PHILILE NTOMBELA: Researcher at Africa Check (South African office)


– CLAES H. DE VREESE: University of Amsterdam, Member of Executive Board of EDMO


Additional speakers:


– VIDEO: Narrator in a pre-recorded video about disinformation


Full session report

The discussion focused on efforts to combat disinformation and protect election integrity across different regions, particularly in light of recent and upcoming elections. Speakers from various institutions and organizations shared insights on the challenges and strategies employed in Europe, the United States, and Africa.


European Efforts and the Role of EDMO


Giovanni Zagni, from the European Digital Media Observatory (EDMO), explained EDMO’s crucial role in monitoring disinformation across the European Union. This initiative brings together fact-checkers, researchers, and other stakeholders to provide a comprehensive overview of the disinformation landscape in Europe. EDMO’s work includes coordinating national hubs, conducting research, and providing policy recommendations.


Delphine Colard, spokesperson for the European Parliament, outlined additional measures taken by the institution. These include the creation of a website explaining election integrity measures and the production of pre-bunking videos designed to educate voters about disinformation techniques. Colard emphasized the importance of media literacy campaigns in empowering voters to critically evaluate information. She also mentioned the potential establishment of a special committee on the European Democracy Shield by the European Parliament.


Alberto Rabbachin, representing the European Commission, highlighted the activation of a rapid response system for European elections. This system aims to quickly identify and address disinformation threats as they emerge, demonstrating a coordinated approach to tackling disinformation in Europe.


Challenges in the United States


Benjamin Shultz, representing the American Sunlight Project, painted a concerning picture of the situation in the United States. He described a regression in efforts to combat disinformation, characterized by platforms “giving up” on content moderation. This has led to a rise in far-right narratives claiming censorship and a proliferation of deepfakes targeting politicians.


Shultz highlighted specific examples, including a recent report by the American Sunlight Project on sexually explicit deepfakes targeting members of Congress. He also noted the lack of regulation on election integrity measures in the US, contrasting sharply with the European approach. Shultz expressed concern about platforms like Meta’s third-party fact-checking program and issues with content moderation on YouTube.


African Perspective on Disinformation


Philile Ntombela, a researcher at Africa Check’s South African office, provided insights into the unique challenges faced in combating disinformation in Africa. These include:


1. Targeting of journalists and judiciary bodies by disinformation campaigns


2. A significant digital divide limiting access to fact-checking resources


3. Language barriers that complicate fact-checking efforts


4. Concerns about the potential misuse of anti-disinformation laws for censorship


Ntombela shared an example of how fact-checkers and journalists in South Africa faced accusations of bias when attempting to fact-check politicians’ statements. This led to the formation of an Elections Coalition, which included journalists and media houses working together on fact-checking efforts.


She also highlighted the Africa Facts Network declaration, an initiative to increase collaboration among African fact-checking organizations. Ntombela emphasized the need for context-specific solutions that take into account local challenges and potential risks associated with strict regulations.


Impact of AI on Elections


Claes H. de Vreese, from the University of Amsterdam and a member of EDMO’s Executive Board, addressed the role of generative AI in recent elections. He described the current situation as being “between relief and high alert.” While AI-generated content played a role in the 2024 elections, it did not have the catastrophic effects that some had feared.


However, de Vreese emphasized the importance of continued vigilance and research on AI’s impact on elections. He suggested that observatories like EDMO should continue monitoring how these technologies are deployed across various aspects of the electoral process in future elections, such as those in 2025.


Unresolved Issues and Future Directions


Several key issues remain unresolved and warrant further attention:


1. Balancing free speech concerns with the need to combat disinformation, particularly in the US


2. Addressing the digital divide and language barriers in combating disinformation in Africa


3. Understanding the long-term impact of AI-generated content on election integrity


4. Assessing the effectiveness of current platform policies in addressing disinformation globally


The speakers discussed whether a legislative framework similar to Europe’s would be helpful in their regions. While some saw potential benefits, others expressed concerns about possible misuse or unintended consequences.


The discussion concluded with a question from the moderator about the most pressing issues for the coming year. Responses varied by region but included the need for continued monitoring, improved collaboration between stakeholders, and addressing the challenges posed by emerging technologies.


The discussion highlighted the evolving nature of the disinformation landscape and the importance of ongoing research, collaboration, and adaptive strategies to protect election integrity in diverse global contexts.


Session Transcript

ALBERTO RABBACHIN: were committed to analyze the flags and decide, on the basis of their terms of service, if an action needs to be taken. This system was activated for the European election, for the French election, and for the Romanian election. And there was also a similar mechanism that was put in place for the Moldovan election. I’m basically going towards the end of my presentation. I just wanted to have a focus on the Romanian election. We know that it has been very difficult. There was a lot of foreign interference in this election, and this was shown also by the number of flags that the rapid response system had seen, with more than a thousand flags exchanged, coming from civil society organizations and fact-checkers, and going to the major platforms. Alberto? Yes, I’m concluding. Very quickly, I mentioned the European Digital Media Observatory. This is a bottom-up initiative financed by the Commission. We have put more than 30 million euros into this initiative. It has 14 hubs, and it has a system for monitoring disinformation across the EU and doing investigations. I’m sure that Giovanni will give you all the details on this. I will stop here, and I’m happy to take any questions if you need.


GIACOMO MAZZONE: Thank you very much. The question will be eventually at the end, because now we are very tight. Giovanni, you have been asked to continue and to complement the picture.


GIOVANNI ZAGNI: Yes, thank you. Thank you, Alberto. I’ll try and share my screen, because I have my presentation there. We’ll have a couple of seconds of embarrassed silence. Something is happening, yes, that’s magic, OK, that’s great. So, good afternoon, it is great for me to present our work with the European Digital Media Observatory in such an important venue as the 2024 Internet Governance Forum. My time is short, so I will dive right in. First of all, I would like to present a couple of key pieces of information about the observatory, which is usually known through its acronym EDMO. Alberto mentioned it briefly, so since I have a couple of minutes more, I’ll try to present it. The observatory was established in 2020 as a project co-funded by the European Union. Its core consortium is composed of universities, research centres, fact-checkers, technology companies and media literacy experts. Besides the coordinating core, there are currently 14 hubs connected to EDMO, covering sometimes just one country and sometimes a larger area in Europe. The concept behind the hubs is to cover the variety of languages and the specificity of media landscapes across the Union, since the challenges posed by disinformation are clearly very different from Slovakia to Portugal and from Finland to Greece. The general scope of EDMO is to obtain a comprehensive coverage of the European media ecosystem, mainly with regard to disinformation and all the connected issues, and to propose, as well as enact, new and effective strategies to tackle them. For the broadness of its scope and its multi-stakeholder approach, it is a unique experience in the European landscape for its ability to carry out many different efforts. One, to monitor disinformation across the continent through a network of 50-plus fact-checking organisations, working together on a regular basis in monthly briefs and investigations.
Two, to provide analysis and suggestions in the policy area, with a special focus on the Code of Practice on Disinformation that was mentioned by Alberto right before. Three, to coordinate and to promote media literacy activities such as the one that I’ll present in a minute. And four, to contribute to the research landscape in the field, with a special effort in promoting data access to researchers, but on this Claes here will tell us more in a minute. Let’s try and be very practical and specific about what we have done and what we are currently doing. I will give only a couple of examples of the many activities we are carrying out, and I invite you to visit our website edmo.eu to know more. First example: ahead of the 2024 European elections, we decided to set up a task force specifically to monitor the media ecosystem and to provide insight, analysis and action around the issue of disinformation. As part of this effort we set up a daily newsletter that, thanks to the hubs and the day-to-day work of fact-checkers in the field, updated policy makers, journalists and experts about the main disinformation narratives and the most important issues of the day, providing a connecting work which is usually made difficult at the European level, for example because of the great linguistic variety of Europe. Let me show you an issue I selected almost randomly. This is issue number 46, of June 10, 2024. It had three main items: one about how disinformation tried to exploit problems with polling stations in Spain on election day, as detected by Spanish fact-checkers; another one about how a top Swedish candidate’s campaign was likely boosted by coordinated behaviour by X accounts, done by linking a Swedish report in the newsletter; and the third one about the rise of disinformation targeting the 2024 Paris Olympics, which was spotted by the big French newswire agency AFP and its Factuel department.
At the same time we published reports and weekly insights about the main trends in disinformation in Europe, and we promoted a Europe-wide media literacy effort with the hashtag #BeElectionSmart. The BeElectionSmart campaign was an EDMO initiative to support citizens in finding reliable information about the elections and recognizing false or manipulative content ahead of the elections themselves. Each Monday from the 29th of April to the 3rd of June 2024, a new message along with practical tips was published on the websites and social media accounts of all 14 EDMO hubs, covering all EU member states. Most of the activities were new ideas, never tried out at the European scale, and some of these activities turned out to be successful pilot projects that we have since adapted to new scenarios. For example, the Romanian elections of late November 2024 that were mentioned before made headlines in Europe and beyond when a previously little-known candidate was able to finish first in the popular vote. Citing foreign interference and tampering with the regularity of the electoral campaign, on December 6th 2024 the country’s constitutional court annulled the results and ordered the first round of the elections to be held again. Drawing from a pilot during the last few weeks before the EU elections in June that Alberto mentioned, in the context of the Romanian elections we activated a rapid response system, a mechanism through which members of the EDMO community can proactively flag to very large online platforms suspicious and troublesome cases. It is then up to the platforms to take action or not, according to their terms of service. Moreover, on the EDMO website, we translated and made available to the community at least three analyses of the role of social media in the Romanian elections.
Just yesterday, we made progress in an effort to provide a technological tool based on AI, which is currently in its testing phase, to Romanian fact-checkers, a tool which is apparently quite good at spotting networks of social media accounts carrying on coordinated campaigns of dissemination of suspicious content. This is the result of cooperation between many actors: the researchers involved in developing the tool, who are part of the EDMO community; the Bulgarian-Romanian Observatory of Digital Media, or BROD, which is one of the most active regional hubs in the EDMO ecosystem; and EDMO-EU acting as coordinator and facilitator of these exchanges. It is tiresome, intensive, but also very rewarding work. And let me conclude with some actionable advice, if this short presentation gave you some food for thought. First, disinformation is an issue that naturally crosses disciplines and fields; it is crucial to build a multidisciplinary network of practitioners. Second, it is also necessary to find means to connect those practitioners: if not a big and expensive project like EDMO, then just a newsletter, or a comms group, or an app. Third, to help you communicate the results and attract new forces for your effort, you will need to produce an easily readable, easily shareable output. There are many for EDMO, and you can find all of them on the edmo.eu website. Finally, I invite you to get in contact with us at EDMO to know more about our experience, our difficulties, our few successes, and many challenges. We are very happy to share what we know and to learn about what we do not. And thank you for your attention.


GIACOMO MAZZONE: Thank you very much, Giovanni. I hand over to Paula, please.


PAULA GORI: Thank you, Giovanni, for sharing the work that we are doing at EDMO. I think you gave a very impressive presentation of the work we did, and I like how you concluded: we are trying to learn, but we are open to learning even more. And that’s why events like today’s are actually very important, also to learn from other experiences. But now I would like to give the floor to Delphine Colard. She is the spokesperson at the European Parliament, and she will tell us what the European Parliament actually did. Because, as you know, in June we had the EU elections, and that was quite an important moment for the Parliament. So the floor is yours, Delphine.


DELPHINE COLARD: Well, thank you, and thank you for the opportunity to join you remotely and to talk here today. Indeed, the Parliament has been active in this area since 2015. As co-legislator, it has been pushing forward legislation, the legislation that my colleague from the Commission outlined, to protect citizens from the detrimental and harmful effects of the Internet, while also promoting freedom of speech and ensuring consumers have access to trustworthy information. This was at the core of the priorities during the past legislature, and it will remain at the core in the next legislature that just started. And if we take stock now of the European elections that took place last June, well, the European elections are conducted hand in hand with the 27 EU member states. The European Parliament was adding a layer, deploying a go-to-vote campaign to mobilize as many people as possible by showing the added value of European democracy. And an important part of this democracy campaign, of this communication strategy, was to counter and prevent disinformation from harming the electoral process. The idea was to anticipate potential disruptions that could be expected in connection with the European elections. And we cooperated to this end with the other EU institutions, from the colleagues of the Commission that you just heard, the European member states in a rapid alert system, and the fact-checking community, to get a complete picture. And I have to say that the in-depth analysis provided throughout the period by the European Digital Media Observatory, which we just heard about from Giovanni, was really, really worthwhile. This was instrumental to having this whole-of-society approach.
From our internal analysis in the Parliament of national elections in the member states, we knew that we could expect attempts to sow distrust in electoral processes, alleging that elections are fraudulent or rigged, or spreading false voting instructions, or sowing polarization, especially around controversial topics. So what we wanted was to make sure that European citizens were exposed to factual and trustworthy information, to empower them to recognize the signs of disinformation and also to give them some tools to tackle it. We were inspired by many good practices in the different member states, and following similar examples from Estonia and the Nordic countries, we set up a website about the European elections, explaining the technicalities of the elections. I hope you see the slide, because I don’t. The measures, perfect. The measures put in place by the European member states were explained, and the website explained also at length how the EU ensures free and fair elections. So the idea of the website was to inform about the different aspects of election integrity, from information manipulation to data security, and it also equipped voters with tools to tackle disinformation. This is one example. But of course, what we did also is develop a series of pre-bunking videos explaining how to avoid common disinformation techniques: for example, taking advantage of strong emotions to manipulate, or polarizing attempts, or flooding the information space with contradictory versions of the same event. And thanks to the External Action Service, our partners in foreign policy, the videos were also available in non-EU languages, including Ukrainian, Chinese, Arabic and Russian. And I think I have a short version that we can show for 40 seconds now. Thank you. And it’s coming.


VIDEO: Disinformation can be a threat to our democracy. People who want to manipulate us with disinformation often use content with strong emotions, such as anger, fear or excitement. When we feel strong emotions, we are more likely to hit the like or share button without checking if the content is true. By doing this, we help spread disinformation to our friends and families. What can you do? Watch out for very emotional content, such as sensational headlines, strong language and dramatic pictures. Question what you see and read before you share. Things you would question if somebody told you face to face. You should also question online. Take a step back. Pause and resist the reflex to react without thinking.


DELPHINE COLARD: So you see, this was an example. This is what spread throughout the period before the elections, and it was shared via social media and TikTok. In addition, we organized briefings and workshops for civil society organizations, for youth, for educators, for journalists, for content creators. The idea there was really to engage different audiences, providing tips and tricks on how to detect and avoid disinformation techniques. We also reached Members of the European Parliament with a specific guide. One element we are particularly proud of is establishing contact with several youngsters across Europe, especially first-time voters, via Euroscola and what we call the European Parliament’s Ambassador School Programme. Those are flagship programmes of the Parliament for students. As you know, raising awareness of the threat is a key and long-lasting solution that requires the involvement of society as a whole, and it starts with education. These were two examples, or three examples. One element that I want to highlight is also the importance we placed on having strong relationships with private entities and civil society organisations to convey the importance of voting, and this was spread as widely as possible. So it was tech companies and other companies; as the Code of Practice also showed, it was instrumental to have them on board. I want to mention the importance of strong, independent, pluralistic media that were instrumental in this fight. During the legislature, several pieces of legislation were passed at European level, such as the European Media Freedom Act or a directive to protect journalists against abusive lawsuits, and we tried also to support media in their work, through briefings, invitations or grants.
We saw a lot of things during the last elections, maybe not the tsunami that we potentially feared, but there was an increase of information manipulation attempts targeting the European elections. Until now, we have not detected any that seemed capable of seriously distorting the conduct of these elections, and this is an assessment that we have shared with the other EU institutions and the European Digital Media Observatory; Giovanni can of course give you more information if you need. But we have to remain vigilant beyond the elections and continue monitoring, because the effect of disinformation is not a one-off. It’s not only during the European elections or those big moments. It’s a slow dripping that hollows out the stone. Look at what recently happened with the Romanian elections. It’s something the Parliament is really scrutinizing at the moment, asking the Commission for information about it. In this legislature, we see that the Parliament is very eager to have more information; it has passed several resolutions calling for efforts to be stepped up in this area. And there is a new dimension this week: there will be a special committee that will focus on the European Democracy Shield, really to assess existing and planned legislation related to the protection of democracy. They are also asking to deal with the question of addictive designs in social media platforms. So there is a lot of activity. And next week in the plenary, there will be two debates specifically on disinformation, especially disinformation during electoral periods. Maybe to conclude, as Giovanni did, with some learnings, three main ones from our end. First, information manipulators really see elections as an opportunity to advance their own goals by smearing leaders and exploiting existing political issues.
So distrust erodes the credibility of the democratic system and its institutions. Second, good intentions and voluntary actions are not enough; legislation and regulation play an instrumental part. So the Parliament has been and will remain a key actor there as co-legislator, shaping laws that are fit for the digital age. And third, the same as Giovanni already underlined, it is really important to continue to implement a whole-of-society approach, learning from each other’s practices and programs to double our efforts to make society more resistant to destabilization attempts. So this was a bit of what we wanted to transmit for the European Parliament. Thank you.


PAULA GORI: Thank you so much, Delphine. And indeed, as you said, maybe there wasn’t a tsunami, but as you rightly underlined, and as we also underlined at EDMO, disinformation is rather a drop after drop, so we cannot just focus on it ahead of the elections; it’s rather a longer process. And I think that right after these two presentations there are so many keywords: media, journalism, fact-checking, media literacy, emotions, addictive design, digital platforms and so on. So you understand why the whole-of-society approach, but also why the multidisciplinary approach. For example, if we know that emotions play a certain role, that’s thanks to specific research in the field; if we know about fact-checking, that’s another field of research. That’s why institutions like EDMO, in collaboration with other organizations like the Parliament, the Commission, but platforms as well, and civil society organizations and so on, are so important. And now, after these many words about the EU, we thought that maybe it would be interesting also to focus on other parts of the world, because as we were saying, this was a year of elections across the whole globe. So I’m very happy to hand over to Benjamin Shultz from the American Sunlight Project, who will focus on the US. The floor is yours.


BENJAMIN SHULTZ: Awesome, thanks so much, Paula. I’m just gonna share my screen real quick, if it wants to go. OK, I think it looks like that was a success. Can everyone see? Yes. OK, lovely. Well, thank you all very much. It’s wonderful to be here speaking with such a geographically, and in so many other ways, diverse crowd from all over the world, different backgrounds, all here to talk about disinformation and how we make the internet a better, safer place. So thank you for having me. Obviously, in the US, we just had an election. Putting it bluntly, I think it’s a result that surprised a lot of people. And really, this year of elections in the States was kind of a weird one. In many ways, we kind of regressed, and I’m going to explain this a bit further over my seven or eight minutes. But this is just a taste of what’s to come. We’re in a very different place in the States than we were four years ago, and even eight years ago, in terms of taking action on disinformation, in terms of platforms playing an active role in content moderation, trust, and safety across the board. We’ve seen things regress, which I think is the direct opposite of how things have gone in Europe. And so it’s a very interesting phenomenon taking place. So with that, I will jump on into the slides here. This GIF is one of my favorite GIFs, and I think, despite it being funny, it really accurately describes the current US approach to disinformation and online harms of all kinds. Whack-a-troll is what we at the American Sunlight Project call it; that’s a play on whack-a-mole. As you can see, the cat is just trying to tap the fingers as they’re popping up. And as the cat tries to tap them, they go away, and the cat continues to do this for as long as I have this slide up here. And I think, again, this sums up where we’re at. In the last year, really two years, we have seen platforms pretty much just give up entirely.
And this is not specific to any particular platform, although I will say that some certainly are doing more giving up than others. Not wanting to impugn anyone’s integrity, I won’t name that platform, but I think we can all take a guess. And this is really problematic, because we have seen a massive proliferation of hate speech and of false information of all kinds, from false polling place locations to attacks against elected officials, doxing, things like this. This has become commonplace in the States in the last year or two. We’ve seen political polarization reach really unprecedented levels, and the political system is about as toxic as it’s been, certainly in my lifetime. And even though that’s anecdotal and qualitative, it has certainly become worse in the last couple of years. This has been buoyed by the rise of the far right in the States. There’s been a significant populist turn, as there has been in many other countries in the world. And we’ve seen platforms sort of go along with this, which I think is very different from how things have played out in Europe. In the States we do not have regulation such as the Digital Services Act, or really anything of the sort. And platforms, as you can see just from these headlines, have surrendered and given up. Termination of trust and safety teams has taken place. And we’ve also seen the rise of a narrative of censorship, from the far right primarily, also limited sections of the far left, but really the political fringe on both sides have started to claim that any content moderation, any action against false and malicious claims, is tantamount to censorship. In the States, of course, we have the First Amendment, which protects the right to freedom of speech, expression and assembly.
And this is of course a very American right, one that I think every single American supports: the right to freely express yourself. But where we’re seeing conflict, politically, between the platforms, the government, and the incoming administration, is over where the boundary of that right lies. For instance, my organization, the American Sunlight Project, just released a report last week examining how malicious, sexually explicit deepfakes have affected members of Congress. What we found was that one in six women in Congress in the States is currently subject, right now on the internet, to sexually explicit deepfakes. The Senate has already passed bills that would regulate these deepfakes and make it a criminal offense to spread them non-consensually, because you are using someone’s likeness to denigrate them and portray them as appearing in pornographic material. But we’ve seen pushback in the House from the far right, who claim that this is a violation of free speech. So that is the situation in the States right now, and I want to stress that nothing I’ve said here takes a position one way or another; I’m just painting a picture of where we’re at. And, sorry, I should have skipped ahead a little earlier on the slide, but we’ve seen this kind of democratization, in the most extreme way, of artificial intelligence and deepfake technology, but also of other types of malicious text-based content. We’ve found plenty of evidence of foreign bot networks that have played a significant role on various social media platforms in this election cycle. To the extent they changed voting behavior, I think that’s really impossible to measure.
But certainly we have plenty of evidence, and not just us but pretty much every organization working in this field in the States, that this type of content has been pervasive and has been getting into people’s feeds, whether on TikTok or X or Facebook or Instagram. We’ve seen that algorithms, just as my European colleagues mentioned, favor content that is emotional, that gets people riled up and makes them want to click and scroll more. And we’ve certainly seen plenty of malicious, fake, and in some cases outright illegal content making its way into people’s feeds. So again, whether that changed voting behavior, I’m not sure, but certainly people have been increasingly exposed to false and malicious content in this election cycle. Going back to the deepfake issue, beyond sexually explicit or image-based deepfakes, we’ve seen numerous instances in the States of election officials, and even Joe Biden himself, being spoofed and imitated by deepfakes. And again, just to paint the picture without taking a political position: a lot of this material is completely legal to make, create, and disperse. In the third headline on the right here, New Hampshire officials investigate robocalls, there was a criminal investigation, because it is illegal to interfere this blatantly in an election. But there are plenty of other instances of this type of behavior that have not been prosecuted, and this is incredibly damaging. In the States we have pretty much zero regulation on deepfakes or really any kind of election-integrity measure, which again is pretty much the direct opposite of how Europe has approached this issue. Certainly our institutions in the States are structured differently, and I think it is much harder for us to implement these kinds of measures, the DSA, GDPR, et cetera. But nonetheless, this really just highlights the issue.


GIACOMO MAZZONE: Thank you, Benjamin. Can you go to a close, please?


BENJAMIN SHULTZ: Yes, we’ll close it up. One thing, very quickly: Kamala Harris ran for president in 2020, and again, not getting political, but she has pretty much been the most attacked person in terms of gender and identity in the last four to five years in the States. Numerous studies, including one that my boss authored, found that of all gender- and identity-based attacks against any politician in the United States, Kamala Harris received roughly 80% of them. Going back to the polarization and toxicity of the American political system, this highlights it, and it makes it incredibly difficult to get people to agree on a set of regulations or rules for technologies and platforms. With that, I will wrap up and say that for the immediate term the outlook in the States is not so good. Hopefully we get through this tough period and are able to be as united as Europe has been on this issue, improve our feeds, improve our political system, and go from there.


PAULA GORI: Thank you very much, Benjamin. As we are running a little late, I will hand over immediately to Philile Ntombela from Africa Check. We are moving geographically again. Philile, the floor is yours.


PHILILE NTOMBELA: Good day, everybody. I’m going to quickly share my screen as well, and then I’ll get started. Can you see my screen at all? Not yet. Okay. Now, yes, you can see it. Fantastic. First and foremost, thank you for having me. My name is Philile Ntombela. I’m a researcher at the South African office of Africa Check. Africa Check is the continent’s first independent fact-checking organization, and we have offices in South Africa, Kenya, Nigeria, and Senegal. I’m sure everybody in the room is aware, but we often like to stress the distinction: misinformation is shared by well-meaning people who are trying to inform you and have no idea that the information is false, while disinformation is shared with an intent to mislead. It has a goal in mind, often a political one, and the people who share it know that the information is false. As for patterns across the continent through this election year, we found that targeting journalists, the judiciary, and other such bodies was a very powerful tactic. Journalists were accused of bias whenever they tried to fact-check. We had something called the Elections Coalition in South Africa, which included journalists and media houses who would either do a quick fact-check themselves, having been trained beforehand through our organization’s training programs, or whom we helped to fact-check. They were often accused of bias whenever they fact-checked a specific politician, and told that they supported the opposition. We also had rumors of electoral bodies favoring parties.
A certain party in South Africa actually took our Independent Electoral Commission to the Constitutional Court, the highest court in our country, stating that they felt they were being marginalized and receiving unfair treatment. Of course, the Constitutional Court ruled against them, because it wasn’t actually true. It was more a publicity move, meant to make people wonder whether the Constitutional Court and the electoral commission are truly independent. This leads to the last part, which again ties back to that court case: we found that people are more connected, but voters are still vulnerable to misinformation. Media literacy is not widespread; a lot of people don’t have it. News media is increasingly putting important information behind paywalls, and on a continent like ours, with huge economic inequality and poverty, that is very difficult for people to overcome. Language remains a barrier: Africa has, I think, more than 2,000 languages across 53 states, and those are just the official ones. And on top of connectivity, a report by the International Telecommunication Union showed that even in 2022 Africa still suffered from what we call the digital divide. We had the lowest level of internet connectivity, which means that whatever information people receive, they don’t have the opportunity to look it up, double-check it, or send it to a fact-checking organization like Africa Check, so these issues remain a problem. Finally, there is platform accountability. We found that, especially on YouTube, I don’t know if I’m supposed to mention it, people were able to share disinformation unchecked.
One platform has put a note with a sort of miniature fact-check, but that doesn’t actually curb the spread of the same content or the same posts carrying disinformation. We are part of the Meta third-party fact-checking program, and on that side we found we were actually able to help curb the spread, because when we add a fact-check it downgrades the post or even removes it. However, there is still far more to be done on social media, particularly in places where, as I said, it is very difficult for people to access that information in other ways. If I come across information that is incorrect, but I believe it, share it, and send it to somebody who has no way of finding out whether it is true, then that information spreads even faster. Platforms’ algorithms definitely need to be able to pick up common phrases used for disinformation, particularly in election years but in general as well. The biggest disinformation this year was fraud allegations, particularly in South Africa; claims of vote rigging were by far the worst. A specific politician started these, knowing that he is a very charismatic character, though one who has had a lot of legal problems and problems with the IEC. He started claiming that the vote would be rigged even before the election season began, even the year before. The media then just shared this in parrot fashion: they did not fact-check it or even frame it as something this person had said; it was shared verbatim. It then trended on social media. Here are some examples. On my left, you have a report by a nationwide newspaper, which took that statement by the politician and made it the headline, which of course reinforces the idea rather than conveying that this is merely what somebody said.
On the right, the conspiracy was then named the big lie. And we found that between 25 May and June, this claim basically took over social media, with the biggest drivers being the ones shown in purple and, later, the ones in turquoise. Sorry, we are running long. Could you come to a close? We have the last speaker waiting and the next session starting soon. Thank you very much. Okay, sure. Then I’ll just speak about our stance on anti-disinformation regulation. We found that in Africa the backfire can be quite damaging. Such laws can be used to stifle people and enable censorship, and people can turn to covert means, for example platforms like WhatsApp, which have end-to-end encryption, and then we won’t have access to that content at all. It also runs the risk of penalizing misinformation instead of disinformation. So instead we decided on a combined accord, created this year at the Africa Facts summit: 55 fact-checking organizations in more than 30 countries committed to reaching offline and vulnerable communities, expanding access to reliable information, protecting fact-checkers from harassment, and collaborating with tech partners to innovate. For us this was a collaborative space rather than a legislative or legal one. And that is all from me. Thank you so much.


GIACOMO MAZZONE: Thank you, Philile. Sorry for that. The last speaker now is Claes. Please, Claes.


CLAES H. DE VREESE: Yes. I will suggest something radical, seeing that we have taken a lot of time and the next session is beginning soon. I’m Claes de Vreese. I work at the University of Amsterdam and I am a member of the Executive Board of EDMO. I was going to talk about a specific risk, from a broad perspective, around generative AI and elections in 2024, but at this point in the panel let me just give you my take-home message rather than going through the whole presentation. I think we are somewhere between relief and high alert. If you look across the elections of 2024, it is hard to identify a generative AI risk that really flipped an election in its final days. On the one hand, you could say that is a big relief, because it is very different from the expectations going into 2024, when there were true and genuine fears about elections being overturned through generative AI. That didn’t happen. At the same time, all the evidence, the evidence that is in the slides I will now skip, shows that there has also not been a single election in 2024 worldwide in which generative AI and AI-generated material did not play a role. And I think that is really the take-home message of this discussion on AI: there is a certain sense of relief that 2024 did not become the absolutely catastrophic year, despite the continued absence of regulation in this area and a technology that was available and was deployed, but perhaps not with the detrimental effects that were expected. Does that mean the AI discussion is over as we move into a big election year in 2025? Absolutely not. It is important to keep looking at the impact of generative AI, and EDMO will continue doing so in 2025 as we see the elections that take place in that year.
It is important to look not only at the persuasion of voters but at what role AI is playing in the entire ecosystem of elections, whether in the donation phase, in the mobilization phase, in spreading disinformation about political opponents, or in igniting and fueling already existing conflict lines and emotions in particular elections and societies. So let that be the take-home message for 2025: while 2024 did not become the AI catastrophe that many observers, including in this space, predicted, I believe that as we move into 2025 there is every reason for an observatory like EDMO to continue this work and see how these technologies are being deployed across elections. And this is something we should do collaboratively, with centers, researchers, and civil society from outside the European Union as well, to really get a better grasp of the impact of AI on elections.


GIACOMO MAZZONE: Thank you very much, Claes. Thank you for sacrificing your time. Just one question to the two non-European speakers. Philile and Ben, do you think that a legislative framework like the one in Europe would make your life easier or not?


PHILILE NTOMBELA: Okay, I’ll go first. For us, no, it wouldn’t, for the reasons I mentioned in the presentation. There is a long history of censorship and suppression, and once you create a law like that, because everything works on precedent, once one person is able to use that law against people who are actually trying to spread accurate information, it can go wrong in so many ways. That is why we came up with that declaration at the Africa Facts Network: fifty fact-checking organizations around the continent signed up to say that we would rather collaborate, including with responsive governments, to fight disinformation and misinformation from a media literacy perspective, an outreach perspective, and an international but still intra-continental perspective, rather than through laws, because of how laws can be manipulated, as we have already seen with other laws in our countries.


BENJAMIN SHULTZ: Yes, actually, I think it would help us in the States. Implementing something totally identical to the DSA, or any general law regulating platforms, would be difficult to sustain legally; I think it would face a lot of chopping down. But provisions around the protection of researchers and data access for researchers would be extremely helpful and would enable civil society to do what we were doing in 2020, which was analyzing content from platforms and reporting on online harms, something we cannot do today.


GIACOMO MAZZONE: Thank you very much. Thank you to all the speakers, and thank you to the people in the room for their patience. Apologies to the speakers of the next session; we hand over to them now. Sorry for not taking questions, but we will be outside the room if you need us or want to raise any point. Thank you.



ALBERTO RABBACHIN

Speech speed

134 words per minute

Speech length

210 words

Speech time

94 seconds

Rapid response system activated for European elections

Explanation

A rapid response system was implemented to combat disinformation during European elections. This system involved analyzing flags and deciding on actions based on platforms’ terms of service.


Evidence

The system was activated for the European election, French election, Romanian election, and Moldovan election.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections



GIOVANNI ZAGNI

Speech speed

145 words per minute

Speech length

1217 words

Speech time

503 seconds

European Digital Media Observatory monitors disinformation across EU

Explanation

The European Digital Media Observatory (EDMO) is a project that monitors disinformation across the European Union. It involves a network of fact-checkers, researchers, and media literacy experts working together to tackle disinformation.


Evidence

EDMO has 14 hubs covering different countries and regions in Europe, and it monitors disinformation through a network of 50-plus fact-checking organizations.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

DELPHINE COLARD


PHILILE NTOMBELA


Agreed on

Need for multi-stakeholder approach to combat disinformation



DELPHINE COLARD

Speech speed

147 words per minute

Speech length

1271 words

Speech time

518 seconds

European Parliament website explaining election integrity measures

Explanation

The European Parliament created a website to explain the technicalities of elections and measures to ensure free and fair elections. The website aimed to inform voters about various aspects of election integrity and equip them with tools to tackle disinformation.


Evidence

The website explained measures put in place by European member states and how the EU ensured free and fair elections.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections


Differed with

BENJAMIN SHULTZ


PHILILE NTOMBELA


Differed on

Approach to regulating disinformation


Pre-bunking videos to explain disinformation techniques

Explanation

The European Parliament developed a series of pre-bunking videos to explain common disinformation techniques. These videos aimed to educate voters on how to avoid manipulation and recognize disinformation tactics.


Evidence

The videos covered topics such as emotional manipulation, polarization attempts, and flooding of information with contradictory versions of events.


Major Discussion Point

Efforts to combat disinformation in Europe


Agreed with

GIOVANNI ZAGNI


PHILILE NTOMBELA


Agreed on

Need for multi-stakeholder approach to combat disinformation



BENJAMIN SHULTZ

Speech speed

174 words per minute

Speech length

1361 words

Speech time

467 seconds

Platforms giving up on content moderation

Explanation

In the United States, social media platforms have largely abandoned content moderation efforts. This has led to a proliferation of hate speech, false information, and attacks against elected officials on these platforms.


Evidence

Termination of trust and safety teams at various platforms, and headlines indicating platforms’ surrender on content moderation.


Major Discussion Point

Disinformation challenges in the United States


Differed with

DELPHINE COLARD


PHILILE NTOMBELA


Differed on

Approach to regulating disinformation


Rise of far-right narratives claiming censorship

Explanation

There has been an increase in narratives from the far-right in the US claiming that content moderation is a form of censorship. This has made it difficult to implement measures against false and malicious claims on social media platforms.


Evidence

Pushback in the House of Representatives against bills regulating deepfakes, with claims that such regulation violates free speech.


Major Discussion Point

Disinformation challenges in the United States


Proliferation of deepfakes targeting politicians

Explanation

There has been a significant increase in the use of deepfake technology to create false or misleading content targeting politicians in the US. This includes sexually explicit deepfakes of female politicians and voice imitations of election officials.


Evidence

A study found that one in six women in Congress are subject to sexually explicit deepfakes. There were also instances of deepfake voice imitations of Joe Biden and other election officials.


Major Discussion Point

Disinformation challenges in the United States


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


PHILILE NTOMBELA


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections



PHILILE NTOMBELA

Speech speed

147 words per minute

Speech length

1347 words

Speech time

549 seconds

Targeting of journalists and judiciary bodies

Explanation

In African elections, there has been a trend of targeting journalists and judiciary bodies with disinformation. Journalists attempting to fact-check were often accused of bias, while rumors spread about electoral bodies favoring certain parties.


Evidence

A political party in South Africa took the independent electoral commission to the Constitutional Court, claiming unfair treatment.


Major Discussion Point

Disinformation issues in Africa


Agreed with

GIOVANNI ZAGNI


DELPHINE COLARD


Agreed on

Need for multi-stakeholder approach to combat disinformation


Differed with

DELPHINE COLARD


BENJAMIN SHULTZ


Differed on

Approach to regulating disinformation


Digital divide limiting access to fact-checking

Explanation

Africa suffers from a significant digital divide, with the lowest level of internet connectivity. This limits people’s ability to fact-check information or access reliable sources, making them more vulnerable to misinformation.


Evidence

A report by the International Telecommunication Union showed that Africa had the lowest level of internet connectivity in 2022.


Major Discussion Point

Disinformation issues in Africa


Fraud allegations spreading rapidly on social media

Explanation

In African elections, particularly in South Africa, fraud allegations were a major form of disinformation. These claims spread rapidly on social media, often initiated by charismatic politicians and amplified by uncritical media coverage.


Evidence

A specific politician in South Africa started fraud allegations even before the election season began, which then trended on social media.


Major Discussion Point

Disinformation issues in Africa


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


CLAES H. DE VREESE


Agreed on

Importance of monitoring disinformation in elections



CLAES H. DE VREESE

Speech speed

175 words per minute

Speech length

491 words

Speech time

167 seconds

Generative AI played a role but did not overturn elections in 2024

Explanation

While generative AI was used in elections worldwide in 2024, it did not have the catastrophic impact that was initially feared. There was no evidence of AI-generated content flipping election results in the final days of campaigns.


Evidence

No single election in 2024 was overturned due to AI-generated material, despite its presence in every election.


Major Discussion Point

Impact of AI on elections


Agreed with

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


Agreed on

Importance of monitoring disinformation in elections


Need to monitor AI’s impact across entire election ecosystem

Explanation

It’s important to continue monitoring the impact of AI on elections beyond just voter persuasion. AI’s role in various aspects of the election process, including donation, mobilization, and spreading disinformation about opponents, needs to be studied.


Major Discussion Point

Impact of AI on elections


Agreements

Agreement Points

Importance of monitoring disinformation in elections

speakers

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


CLAES H. DE VREESE


arguments

Rapid response system activated for European elections


European Digital Media Observatory monitors disinformation across EU


European Parliament website explaining election integrity measures


Proliferation of deepfakes targeting politicians


Fraud allegations spreading rapidly on social media


Generative AI played a role but did not overturn elections in 2024


summary

All speakers emphasized the importance of monitoring and addressing disinformation in elections, whether through rapid response systems, observatories, or analysis of AI-generated content.


Need for multi-stakeholder approach to combat disinformation

speakers

GIOVANNI ZAGNI


DELPHINE COLARD


PHILILE NTOMBELA


arguments

European Digital Media Observatory monitors disinformation across EU


Pre-bunking videos to explain disinformation techniques


Targeting of journalists and judiciary bodies


summary

These speakers highlighted the importance of involving various stakeholders, including fact-checkers, researchers, media literacy experts, and government bodies in combating disinformation.


Similar Viewpoints

These speakers shared a positive view of the European Union’s efforts to combat disinformation through various initiatives and tools.

speakers

ALBERTO RABBACHIN


GIOVANNI ZAGNI


DELPHINE COLARD


arguments

Rapid response system activated for European elections


European Digital Media Observatory monitors disinformation across EU


European Parliament website explaining election integrity measures


Both speakers highlighted challenges in their respective regions (US and Africa) related to the spread of disinformation, particularly due to platform issues or lack of access to fact-checking resources.

speakers

BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

Platforms giving up on content moderation


Digital divide limiting access to fact-checking


Unexpected Consensus

Impact of AI on 2024 elections

speakers

CLAES H. DE VREESE


BENJAMIN SHULTZ


arguments

Generative AI played a role but did not overturn elections in 2024


Proliferation of deepfakes targeting politicians


explanation

Despite concerns about AI’s potential to significantly disrupt elections, both speakers noted that while AI and deepfakes were present in elections, they did not have the catastrophic impact that was initially feared.


Overall Assessment

Summary

The main areas of agreement included the importance of monitoring disinformation in elections, the need for a multi-stakeholder approach to combat disinformation, and the recognition that while AI and deepfakes were present in elections, they did not have the catastrophic impact initially feared.


Consensus level

There was a moderate level of consensus among the speakers, particularly on the importance of addressing disinformation. However, there were notable differences in approaches and challenges faced in different regions (EU vs. US vs. Africa). This implies that while there is a shared recognition of the problem, solutions may need to be tailored to specific regional contexts and legal frameworks.


Differences

Different Viewpoints

Approach to regulating disinformation

speakers

DELPHINE COLARD


BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

European Parliament website explaining election integrity measures


Platforms giving up on content moderation


Targeting of journalists and judiciary bodies


summary

The speakers disagreed on the effectiveness and appropriateness of regulatory approaches to combat disinformation. While the European approach favors strong regulation and platform accountability, the US has seen a retreat from content moderation, and the African perspective warns against potential misuse of regulations.


Unexpected Differences

Impact of AI on elections

speakers

CLAES H. DE VREESE


BENJAMIN SHULTZ


arguments

Generative AI played a role but did not overturn elections in 2024


Proliferation of deepfakes targeting politicians


explanation

While both speakers addressed AI’s role in elections, there was an unexpected difference in their assessment of its impact. De Vreese suggested a sense of relief that AI didn’t cause catastrophic effects in 2024, while Shultz highlighted significant concerns about deepfakes targeting politicians in the US.


Overall Assessment

summary

The main areas of disagreement centered around regulatory approaches to disinformation, the role of platforms in content moderation, and the impact of technological advancements like AI on elections.


difference_level

The level of disagreement was moderate to high, with significant variations in approaches and experiences across different regions. These differences highlight the complexity of addressing disinformation globally and the need for context-specific solutions.


Partial Agreements

All speakers agreed on the need to combat disinformation, but disagreed on the methods. While the European approach involves a coordinated observatory, the US faces challenges with platform cooperation, and Africa struggles with digital access issues.

speakers

GIOVANNI ZAGNI


BENJAMIN SHULTZ


PHILILE NTOMBELA


arguments

European Digital Media Observatory monitors disinformation across EU


Platforms giving up on content moderation


Digital divide limiting access to fact-checking



Takeaways

Key Takeaways

Europe has implemented coordinated efforts to combat disinformation, including rapid response systems, monitoring by EDMO, and pre-bunking campaigns


The US has seen a regression in platform content moderation and a rise in disinformation, particularly deepfakes


Africa faces unique challenges with disinformation due to the digital divide, language barriers, and risks of censorship


Generative AI played a role in 2024 elections but did not have the catastrophic impact some feared


There are significant differences in regulatory approaches to disinformation between Europe, the US, and Africa


Resolutions and Action Items

EDMO to continue monitoring disinformation across EU elections


European Parliament to establish a special committee on European democracy shields


African fact-checking organizations to collaborate through the Africa Facts Network declaration


EDMO to continue monitoring AI’s impact on elections in 2025


Unresolved Issues

How to balance free speech concerns with the need to combat disinformation, particularly in the US


How to address the digital divide and language barriers in combating disinformation in Africa


Long-term impact of AI-generated content on election integrity


Effectiveness of current platform policies in addressing disinformation globally


Suggested Compromises

In Africa, focus on collaborative efforts and media literacy rather than strict regulation to avoid potential misuse of laws


In the US, consider adopting some aspects of European regulations, particularly around researcher access to data, while respecting First Amendment concerns


Thought Provoking Comments

We found that journalists were accused of bias whenever they tried to fact-check, so we had something called the Elections Coalition in South Africa, which included journalists and media houses who would either do a quick fact-check themselves (we trained them beforehand as part of our organization’s training systems) or ask us to help them fact-check. Often, when they fact-checked a specific politician, they were accused of bias and told that they supported the opposition.

speaker

Philile Ntombela


reason

This comment highlights the challenges faced by fact-checkers and journalists in Africa, revealing how attempts to combat misinformation can be weaponized against them.


impact

It shifted the discussion to consider the unique challenges faced in different regions and the potential backlash against fact-checking efforts.


In the States we do not have regulation such as the Digital Services Act or really anything of the sort. And platforms, as you can see just from these headlines, have surrendered and given up.

speaker

Benjamin Shultz


reason

This comment provides a stark contrast between the regulatory approaches in the US and Europe, highlighting the lack of oversight on platforms in the US.


impact

It prompted a comparison of different regulatory approaches and their effects on platform behavior across regions.


So let that be the take-home message for 2025: while 2024 did not become the AI catastrophe that many observers, also in this space, predicted, I believe that as we move into 2025 there is every reason for an observatory like EDMO to continue the work, to see how these technologies are being deployed across elections.

speaker

Claes H. de Vreese


reason

This comment provides a balanced perspective on the impact of AI in elections, acknowledging both the relief that catastrophic scenarios didn’t materialize and the ongoing need for vigilance.


impact

It shifted the discussion towards a more nuanced view of AI’s role in elections and emphasized the importance of continued monitoring and research.


Overall Assessment

These key comments shaped the discussion by highlighting the diverse challenges faced in different regions when combating disinformation, from accusations of bias in Africa to lack of regulation in the US. They also emphasized the evolving nature of threats, particularly regarding AI in elections. The discussion moved from specific regional experiences to broader comparisons of approaches and the need for ongoing vigilance and research. This led to a more nuanced understanding of the global landscape of disinformation and the varying strategies needed to address it effectively.


Follow-up Questions

How can we improve media literacy efforts to combat disinformation?

speaker

Delphine Colard


explanation

Delphine emphasized the importance of education and media literacy in combating disinformation, suggesting this is a key area for ongoing work and research.


What are the impacts of addictive design in social media platforms on the spread of disinformation?

speaker

Delphine Colard


explanation

Delphine mentioned that the European Parliament is interested in addressing this issue, indicating it’s an important area for further investigation.


How can we better measure the impact of disinformation on voting behavior?

speaker

Benjamin Shultz


explanation

Benjamin noted that it’s difficult to measure how disinformation changes voting behavior, suggesting this is an area that needs more research and better methodologies.


What are effective strategies for combating disinformation in contexts where media literacy is low and internet access is limited?

speaker

Philile Ntombela


explanation

Philile highlighted these challenges in the African context, indicating a need for research on strategies that work in these conditions.


How can we improve platform accountability and content moderation without risking censorship or suppression of free speech?

speaker

Philile Ntombela


explanation

Philile expressed concerns about potential negative consequences of strict regulations, suggesting a need for research on balanced approaches.


What are the long-term impacts of AI-generated content on elections and democratic processes?

speaker

Claes H. de Vreese


explanation

Claes emphasized the need for continued monitoring and research on AI’s role in elections beyond immediate persuasion effects.


How can we improve international collaboration in researching and combating disinformation?

speaker

Claes H. de Vreese


explanation

Claes suggested the need for collaborative efforts across countries to better understand the impact of AI on elections globally.


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.