Day 0 Event #236 – EU Rules on Disinformation: Who Are Friends or Foes?

23 Jun 2025 09:00h - 09:45h


Session at a glance

Summary

This Internet Governance Forum session focused on identifying allies and challenges in combating disinformation while protecting freedom of expression. The discussion brought together representatives from European institutions, fact-checking organizations, and civil society to examine the complex landscape of information integrity.


Paula Gori from EDMO (European Digital Media Observatory) opened by highlighting the dual challenge facing democracies: the spread of disinformation through various channels, including AI-generated content, and the growing rhetoric against policy frameworks designed to address disinformation. She emphasized that effective responses must be grounded in fundamental rights while focusing on algorithmic transparency and multi-stakeholder approaches rather than content deletion.


Benjamin Shultz from the American Sunlight Project described the deteriorating situation in the United States, where democracy is backsliding and platforms are moving closer to the administration. However, he offered hope through recent bipartisan success in banning non-consensual deepfake pornography, suggesting that collaboration on specific issues with broad support could maintain transatlantic cooperation.


Nordic representatives Mikko Salo from Finland and Morten Langfeldt Dahlback from Norway provided regional perspectives on the challenges. Salo emphasized the urgent need for AI literacy and teacher training, particularly in trust-based Nordic societies. Dahlback raised three critical concerns: deteriorating access to platform data for research, the delicate balance between independence and government cooperation for fact-checkers, and the shift from observable public disinformation to private AI chatbot interactions that fact-checkers cannot monitor.


Alberto Rabbachin from the European Commission outlined the EU’s comprehensive framework, including the Digital Services Act and the Code of Practice on Disinformation, which now covers 42 signatories with 128 specific measures. He stressed that the EU supports independent fact-checking organizations rather than determining what constitutes disinformation itself.


The discussion concluded with recognition that the battle against disinformation is evolving from reactive fact-checking toward proactive media literacy and user empowerment, as AI makes the information landscape increasingly complex and personalized.


Key points

## Major Discussion Points:


– **The complexity of disinformation as a global phenomenon**: Speakers emphasized that disinformation is not a simple problem with easy solutions, involving state and non-state actors, AI-generated content, and targeting various issues like elections, health, climate change, and migration. The phenomenon creates doubt and division in society while eroding information integrity essential for democratic processes.


– **Regulatory approaches and the tension between content moderation and freedom of expression**: The discussion covered various policy frameworks including the EU’s Digital Services Act, UNESCO guidelines, and the Global Digital Compact. There’s ongoing debate about balancing disinformation countermeasures with protecting fundamental rights and free speech, with speakers noting that emotional rhetoric often overshadows factual assessment of these policies.


– **Transatlantic divergence and changing political landscape**: Speakers highlighted growing differences between US and European approaches to platform regulation and content moderation, particularly following recent political changes in the US. This includes concerns about democratic backsliding, reduced cooperation between platforms and fact-checkers, and threats to research access.


– **The shift from reactive fact-checking to proactive media literacy**: Multiple speakers discussed the evolution from traditional fact-checking and content debunking toward empowering users with digital and AI literacy skills. This shift is driven partly by the rise of AI chatbots that generate personalized responses invisible to external fact-checkers.


– **Challenges in understanding the scope and impact of disinformation**: Speakers noted difficulties in measuring the actual extent of disinformation due to limited platform transparency, reduced research access, and the complexity of distinguishing disinformation from the broader information ecosystem. This knowledge gap hampers effective policy responses.


## Overall Purpose:


The discussion aimed to examine the current landscape of internet governance and disinformation, identifying key stakeholders (“friends and foes”) in the fight against false information while exploring policy approaches, challenges, and future directions for maintaining information integrity in democratic societies.


## Overall Tone:


The discussion maintained a professional but increasingly concerned tone throughout. It began with a comprehensive, somewhat optimistic overview of existing frameworks and cooperation mechanisms, but gradually became more sobering as speakers addressed current challenges including political polarization, regulatory divergence, and technological complications from AI. While speakers acknowledged significant obstacles, they maintained a constructive approach focused on finding solutions and maintaining international cooperation despite growing difficulties.


Speakers

**Speakers from the provided list:**


– **Moderator (Giacomo)** – Session moderator organizing the discussion on Internet Governance and disinformation


– **Paula Gori** – Secretary General of EDMO (European Digital Media Observatory), the body tasked by the European Union with fighting disinformation


– **Benjamin Shultz** – Works for American Sunlight Project, a non-profit based in Washington D.C. that analyzes and fights back against information campaigns that undermine democracy; currently based in Berlin


– **Mikko Salo** – Representative of Faktabari, a Finnish NGO focused on fact-checking and digital information literacy services; part of NORDIS, the Nordic hub within the EDMO network


– **Morten Langfeldt Dahlback** – From Faktisk, the Norwegian fact-checking organization jointly owned by major Norwegian media companies, including public and commercial broadcasters; coordinator of NORDIS (the Nordic hub of EDMO)


– **Alberto Rabbachin** – Representative from the European Commission


– **Audience** – Multiple audience members who asked questions during the Q&A session


**Additional speakers:**


– **Eric Lambert** – Mentioned as being present to make the report of the session, described as “an essential figure” working “behind the scene”


– **Lou Kotny** – Retired American librarian who asked a question about EU bias regarding the Ukraine war


– **Thora** – PhD researcher from Iceland examining how large platforms and search engines undermine democracy; research fellow at the Humboldt Institute


– **Mohamed Aded Ali** – From Somalia, part of the RECIPE programme, asked about recognizing AI propaganda and digital integrity violations


Full session report

# Internet Governance Forum Session: Combating Disinformation – Identifying Allies and Challenges


## Executive Summary


This Internet Governance Forum session brought together European policymakers, fact-checking organisations, and civil society representatives to examine the evolving landscape of disinformation and information integrity. Moderated by **Giacomo**, the discussion featured perspectives from EDMO, Nordic fact-checking organisations, the American Sunlight Project, and the European Commission.


The session highlighted the complexity of addressing disinformation while protecting fundamental rights, with speakers discussing challenges ranging from AI-generated content to platform transparency and the need for enhanced media literacy. Key themes included the evolution from reactive fact-checking to proactive education approaches, concerns about research access to platform data, and the importance of maintaining independence while fostering multi-stakeholder cooperation.


## Opening Framework and Context


**Paula Gori**, Secretary General of EDMO (European Digital Media Observatory), opened by characterising disinformation as a phenomenon that “creates doubt and division in society” while eroding the information integrity essential for democratic decision-making. She noted that disinformation manifests across multiple domains – elections, health, climate change, and migration – involving both state and non-state actors, including increasingly sophisticated AI-generated content.


Gori outlined EDMO’s structure as a network of 14 national and multinational hubs covering all EU member states, soon expanding to 15 with Ukraine and Moldova, and comprising more than 120 organisations. She referenced Eurobarometer survey results showing that 38% of Europeans consider disinformation one of the biggest threats to democracy, and that 82% consider it a problem for democracy.


She positioned EDMO’s approach within broader global frameworks, including UNESCO guidelines and the Global Digital Compact, emphasising fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion. Gori also highlighted concerning rhetoric against policy frameworks designed to address disinformation, noting that “emotional rhetoric often overshadows factual assessment of these policies.”


## Nordic Perspectives: Trust, Education, and Evolving Challenges


The moderator **Giacomo** opened the Nordic discussion by asking whether participants were “more afraid of neighbors or supposed friends,” prompting responses about regional security dynamics.


**Mikko Salo** from Faktabari in Finland responded by referencing Finland’s 50-year history of preparedness with neighbors, then emphasised the urgent need for AI literacy, particularly in trust-based Nordic societies. He introduced the concept of “AI native persons,” questioning how people who grow up with AI will develop critical thinking skills. His central argument was that “people need to develop AI literacy and learn to think critically before using AI tools.”


Salo also raised questions about societal investment in information integrity, referencing security spending and suggesting that cognitive security deserves significant attention as part of whole-of-society security approaches.


**Morten Langfeldt Dahlback** from Faktisk in Norway identified three critical concerns challenging current approaches to combating disinformation:


First, he highlighted deteriorating access to platform data for research, noting that “major platforms are limiting researcher access to data, with research APIs being more restricted than expected.” He expressed concern that “we don’t know enough about the scope of the problem, and we don’t know enough about its impact,” while “the conditions for gaining more knowledge about this problem have become worse.”


Second, Dahlback addressed the balance between independence and government cooperation for fact-checkers, observing that “once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, it’s easy for others to throw our independence into doubt, because the alignment is too close.”


Third, he identified the shift from observable public disinformation to private AI chatbot interactions that fact-checkers cannot monitor. He explained that “when you use chatbots like ChatGPT or Claude, the information that you receive from the chatbot is not in the public sphere at all,” making traditional fact-checking approaches obsolete. This led him to suggest “a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work.”


## Transatlantic Perspectives and Political Challenges


**Benjamin Shultz** from the American Sunlight Project described the deteriorating situation in the United States, characterising it as “democratic backsliding” with platforms moving closer to the administration. He described the current environment as one where “bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric.”


However, Shultz offered a pragmatic path forward through recent bipartisan success in banning non-consensual deepfake pornography. He argued that “small steps like these that have been taken in the states that do have broad support” could maintain transatlantic cooperation despite broader political tensions.


## European Union Policy Framework and Implementation


**Alberto Rabbachin** from the European Commission provided an overview of the EU’s regulatory approach, emphasising that European frameworks focus on algorithmic transparency and platform accountability rather than content censorship.


Rabbachin outlined the Digital Services Act as “pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content.” He stressed that the EU supports independent fact-checking organisations rather than determining what constitutes disinformation itself, noting that “the EU supports an independent, multidisciplinary community of more than 120 organisations whose fact-checking work is completely independent from the European Commission and governments.”


He detailed the evolution of the Code of Practice on Disinformation, which has grown from 16 signatories with 21 commitments to 42 signatories with 43 commitments and 128 measures. He announced that this code would be fully integrated into the DSA framework as of July 1st, making it auditable and creating binding obligations for platform signatories.


Regarding research access to platform data, Rabbachin acknowledged the challenges while pointing to upcoming delegated acts designed to improve researcher access.


## Audience Engagement


The audience questions revealed additional concerns within the broader community working on information integrity issues.


**Thora**, a PhD researcher from Iceland, highlighted ongoing problems with academic access to platform data, noting that “large platforms are dragging their feet on providing academic access, claiming the EU needs to make definitions first.”


**Mohamed Aded Ali** from Somalia raised questions about recognising AI propaganda and digital integrity violations, highlighting the global nature of these challenges.


**Lou Kotny**, a retired American librarian, raised concerns about potential EU bias regarding the Ukraine war, introducing questions about how fact-checking organisations maintain objectivity in politically charged environments.


## Key Themes and Challenges


Several important themes emerged from the discussion:


**Shift Toward Media Literacy**: Multiple speakers emphasised the growing importance of media literacy and critical thinking education, with some suggesting this represents a necessary evolution from traditional fact-checking approaches.


**Platform Transparency Concerns**: Both researchers and fact-checkers expressed frustration with decreasing access to platform data needed for understanding and addressing disinformation.


**Independence vs. Cooperation**: The tension between maintaining organisational independence while cooperating with government initiatives emerged as a significant concern for civil society organisations.


**AI Challenges**: All speakers acknowledged that AI is fundamentally changing the disinformation landscape, making detection more difficult and requiring new approaches, particularly regarding private AI interactions that are not publicly observable.


**Local Context**: Speakers emphasised that disinformation responses must account for local cultural, political, and linguistic contexts.


## Conclusion


The session demonstrated the complexity of addressing disinformation while protecting fundamental rights and democratic values. While speakers agreed on the importance of multi-stakeholder cooperation and media literacy, significant challenges remain around platform transparency, maintaining organisational independence, and adapting to new technologies.


The moderator concluded by noting that information integrity is becoming increasingly important and announced an upcoming workshop by BBC and Deutsche Welle. **Eric Lambert** was mentioned as the session’s rapporteur.


The discussion revealed a field grappling with fundamental changes in how information is created and consumed, particularly the shift from public, observable disinformation to private AI interactions that traditional oversight mechanisms cannot monitor.


Session transcript

Moderator: Good morning. Good morning, everybody. Thank you for being so kind to be here so early in the morning, after a long trip to here, and to be for this session that will be about, as you have seen from the title, trying to understand who are the friends and who are the foes in this very complicated and unclear situation for the Internet Governance, and especially for the fight to disinformation. It’s a session that will have some participants with me here, from Nordic countries, one from Finland, another one from Norway, but also we will have other participants from remote. will be from Brussels, we will have somebody from the European Commission and we will have somebody from the, based in Berlin at the moment, but is one of the most active person in the fact-checking and countering disinformation in the U.S., and we will have Paola Gori that will open, that is from the EDMO, this is the Secretary General of EDMO, that is the body that the European Union has tasked for fighting disinformation. So, I think that we don’t have too much time, I would prefer that we start immediately. If Paola is ready, I will give the floor to her. Hello, good morning. Are you ready? Yes, she’s with us. Welcome, Paola. You look frozen.


Paula Gori: I guess you’re hearing me and are sharing also a quick presentation. Can you hear me? Yes, we can hear you. Can you hear me well? Yes, well, but we don’t see the presentation yet. She is coming. I see it on my screen, so just let me know when you see it. Yes, now we can see the first slide, it’s okay. Okay, great. Thank you very much, Giacomo. And good morning, everyone. I’m very happy to be in this, to start the day, actually, this day zero with this session. As Giacomo was saying, the overall IGF focus is on internet governance and within this topic, of course, disinformation is creating quite a few, quite a lot of, if you want, emotional reactions around. Also, in past editions of the IGF, and I think everybody here, what I just wanted to bring up here today, again, is this situation. which we are aware on one side, we have the spread of disinformation now. Be it from internal or from external actors, they are called different policies, big migration, climate change, elections, of course, health. They’re often very linked. For example, the same goes, for example, with disinformation and so on. And it can be spread internally and externally and by internal and external actors. It can be state-backed, it can be also not state-backed. There can be the use of proxies. There can be the use of artificial intelligence, both to generate content, but also to spread it. These are all things that I think we all know, as well as the fact that disinformation is there in a broader mission of creating doubt, creating division in our society, put us in a situation in which at a certain moment, we don’t actually are in a position to really be sure about stuff because we got so many information with so different facts or non-facts, actually. And this puts us in a very difficult situation overall. And this erodes, of course, the information integrity. 
Information integrity is key in a democratic process because, let’s put it in a very simple and easy way, if we want to take any decision, we have to have a basis on which we can make this decision. So if the basis is actually not based on facts, then we are in a situation in which we may make a decision which is not in our interest in the end. On the other side, what we are seeing more and more is a huge rhetoric against any policy framework that tries to tackle disinformation. One of the main arguments at the basis of this is the fact that it may violate freedom of expression, which if you look at it from a very neutral point of view, it’s a very fair concern because it is very important that whichever policy that deals with disinformation respects fundamental rights. fundamental rights and also freedom of expression, but the rhetoric that we’re seeing there is actually more if you want an emotional one rather than a rhetoric which actually looks at the real framework and then actually does a real assessment of whether freedom of expression is violated or not, because very often actually it is not. And the two reinforce each other. And this is something we are seeing globally. So I’m just setting the scene in a very global way. What we are seeing as approaches, and of course EDMO, and I will say a few words about EDMO. Those who are familiar with the IGF are also familiar with EDMO so far, because it’s not the first session we’re having here, is that whichever response to get back to information integrity starts of course with digital literacy, media literacy, with strengthening quality journalism and so on. 
And if you look at the global frameworks that we have around, like the Global Digital Compact, the guidelines by UNESCO on the governance of digital platforms, the recent communication by the High Representative and the European Commission on the communication on international digital strategy for the EU, and also the Digital Service Act, which is a regulation, there are a few elements which are common there, which are the fact that any response, as we were saying before, has to be grounded on fundamental values and the respect of human rights. We cannot transcend from it. It has to happen. The focus is rather on algorithmic and transparency. There should be a multi-stakeholder approach. This is, I think the IGF is actually one of the responses to that, right? So it’s really a multi-stakeholder level. It is based on risk mitigation, which means that it looks at the risk, at the way that the platforms, for example, or some online actors work, could have on certain elements, for example, public health, minors, civic discourse, and so on. So just to remind us that the focus is not on we delete content, we look at content, but rather on… we look at if the way the platforms work can actually be abused for malign purposes. So you just wanted to set the scene in highlighting these differences. And the instruments that I was mentioning earlier, I think show that we are all going into that direction so that the global principles overall are those. And then of course, the regional specificities, they rightly so also have differences. And this is normal. I don’t think we will ever, ever get to something which is global in this sense, but this is fine. As long as the principles are shared and the principles are all agreed, then I think that it is important to keep regional specificities also because, especially when it comes to this information, it is a global phenomenon, but the local characteristics are playing quite a strong role. 
Now, I will not go into this slide, but I just wanted to show these two slides. This is one is climate change is information. The next one will be on the economics of this information. Just to show how complicated it is to navigate the disinformation sphere. It’s not just one problem that is easy to understand and with an easy solution. This makes it very complicated, but probably also very interesting for everybody involved to try to address it. And I will not go through it as I was saying, but just, I wanted to just. So with this, in the interest of time, I will just, sorry? Can you repeat the last phrase, you break up? Yeah, sorry. So I just wanted to say that I was showing these slides, not to go through them because we don’t have time, but just to show how complex the disinformation phenomenon is. And by consequence, how complex it is also to find a solution. So I think it’s not by chance that it is years and years that we’re all together sitting, also sometimes disagreeing in trying to find a solution because the problem itself is complex and we cannot always simplify. complex situations, like in the case of disinformation, and you cannot simplify it precisely because human rights are at stake. So before giving the floor to our next panelist, I just wanted to recap for those who are not familiar with Edmo, what is Edmo doing and why was I showing all this complexity? Because the complexity brings us to a situation in which we have to understand properly the phenomenon in order to come with solutions, and the solutions cannot be just one solution, it’s a mix of different solutions. And what Edmo is doing, Edmo is funded by the European Commission and is one of the pillars of the response to disinformation, it’s precisely that. We are a sort of a platform that brings together the different stakeholders, it’s sort of like what the IGF is doing more generally on internet governance, we are bringing them all together. 
When possible, we are trying to provide tools like trainings or like repositories of fact-checking articles and so on. And by putting the community together, we are also in a position to find common trends, to do investigations, to do joint media literacy initiatives, to do policy analysis. So how are we doing it? Just to say that we have an Edmo platform, if you want, which goes EU-wide, and then we work with 14 hubs, which are national or multinational, they cover all EU member states. And these are key, because you remember what I was saying at the beginning, we cannot avoid looking at the local specificities when it comes to disinformation. Very easily said, the culture, the policy, the politics, the history, the language, the media diet of a country are actually having an impact on whether disinformation is impactful or not, if it is entering a country or not, and so on. So we really need the local element to be there, otherwise we would miss part of the picture. These hubs working all together under our coordination also allow us, as you can imagine, to do pan-European analysis, pan-European comparison, and so on. So I hope I was… clear enough to somehow set this scene. I started with the global element and then I focused a little more on the EU, and our next speakers will continue in this sense, and I think I can give it


Moderator: over to Benjamin Schultz. Thank you very much, Paula. Yes, from Europe you make a very comprehensive panorama. Now we go to the US. Benjamin, American Sunlight, can you introduce yourself? Yes. Am I coming through clear? No, no, you can go now. Oh, okay. Yeah. Is the audio okay? Yes,


Benjamin Shultz: please. Wonderful. Well, thank you so much, Giacomo, Paula. I saw Miko and Martin there on the screen. It’s great to be back here with you all at the IGF. This is a really wonderful gathering and I think a great place for dialogue, for understanding, for discussing the issues of the day and really remembering just how global and borderless and connected the internet makes us all. And of course, that leaves ample opportunity for bad actors to misuse the internet and all of its wonderful technologies to spread disinformation. My name is Ben. I work for the American Sunlight Project, a non-profit based in Washington, D.C., although I’m based in Berlin at the moment. And we analyze and fight back against information campaigns that undermine democracy and pollute our information environment. It’s no secret that in the US a lot has changed in the last six months. Things have shifted. We’ve noticed. Things have shifted greatly. And we’ve seen, just putting it frankly, democracy begin to backslide in the United States. We’ve seen bad actors become more active than ever in spreading information campaigns and using information operations to tear at the social fabric of the US. And we’ve also seen the platforms move closer and closer to the administration. really in a total sea change from the last four years and even the four years before that. We’ve seen people be denied entry to the US based on having critical text messages of the administration, something that really as an American, I thought I would never see happen to my country. And so in this day and age in which content moderation, the removal of harmful or illegal content online is being equated falsely to censorship, to a violation of the right to free speech, free expression, in order to really make progress on making our internet safer and continuing the work that we all do, we have to really start to reframe how we approach this. 
We have to start to think about new ways, new creative ways to maintain the alliance, the Transatlantic Alliance in these rough times. And so in the preparatory call that we all hopped on for this panel, I was told not to be so negative. So I’m gonna cut myself off there on the bad and we’re gonna shift to the good. And I’m gonna tell you all kind of how I’m approaching this reframing. As someone working in this space, someone whose organization has been called evil by a certain person that runs X and so forth, there’s some work we can do, I think, to maintain the progress that we’ve made in making the internet a safer, better place. Recently in the US, non-consensual explicit deepfakes, colloquially known as deepfake porn, have actually been made illegal. And this is a really groundbreaking achievement, advancement in our country. And it’s something that we’ve done a lot of advocacy work for a really long time. And finally, just in the last months, we had enough votes in Congress to make this happen. And this achieved wide bipartisan. support. And the way that we framed this is we actually showed Congress just how affected by this problem they were too. A lot of times our elected officials, you know, putting it frankly, maybe aren’t keeping up as in the weeds as we are with all of the things happening online. You know, they’re busy people, fair enough. And one action that we took was we wrote a report in which we laid out just in very plain terms how Congress was being affected by this problem, how people online of all ages, particularly young women, were being affected by this problem of being depicted in deepfakes. And we were able to push a bill over the finish line and it was signed recently. And now platforms have to take down deepfake videos after receiving a request from a victim within 48 hours. You know, there’s been plenty of criticism of this bill. It’s not perfect, but it was a really, it’s been a big step forward. 
And I think we’re going to get into a little bit more of this later on, on this panel, on, you know, the varying degrees of regulation in different European countries. Of course, Europe is a big continent. The EU is big 20, 26, 7, you know, plus a few more in the EEA member states. And there’s a lot of conflicting values and arguments around regulating content online. But my hope amidst all of the not-so-nice things happening in the U.S. right now and the, you know, unfortunate degradation of the transatlantic relationship, my hope is that with small steps like these that have been taken in the states that do have broad support, such as banning explicit deepfakes that are made non-consensually, my hope is that collaborating on these issues that Europe and the U.S. and countries all around the world can continue the dialogue and continue to make some progress on keeping the internet safe and making it safer. And so with that, I will stop myself and pass. it back to you, Giacomo, and the panel can continue and I’m sure we’ll have some good discussion coming up. Thank you very much, Benjamin. Just one question, you moved to Berlin before November or after November? I moved in January. The timing just sort of worked out, but you know.


Moderator: Very timely. Yeah. I can understand. Okay, thank you. I hope that we will have time for questions; I remember that there is a mic over there. As soon as we finish with the presentations, we will discuss with the audience, because I think questions are coming. So, who's next? Okay. Mikko, please introduce yourself. You are one of the members of the network that EDMO just presented to us.


Mikko Salo: Thank you. So, my name is Mikko Salo. I'm representing Faktabaari, a Finnish NGO providing fact-checking and digital information literacy services. We are part of NORDIS, a kind of Nordic hub within EDMO, and we're working with Morten on that one. I'll open up my angle a little bit, the civil society point of view, whereas I understood Morten covers more the journalistic side that we are working on. But yes, indeed, very challenging times. When we started 11 years ago, it was still about accuracy; I think now it's more about the integrity of information. And when you come from a country like Finland, which is now praised for its preparedness culture, I try to frame it as: where do we need to prepare now? I think it is very much information integrity and the kind of AI literacy that is very urgently needed. There, our small NGO has been working with government officials, pushing them to retrain teachers and to provide guidance to teachers who are, of course, very lost with AI at the moment. And why am I, coming from an organization that started with fact-checking, so worried? Because of what is happening to our information and to our sources. Do people really know anymore where information stems from, and what consequences that has, especially in trust-based societies like the Nordic ones? These are big challenges. But what gives me some hope is that we are happy to be part of the EU context, where there is at least some sort of rulebook for an internet that is badly broken, and a rising awareness that we need to know something. As we speak, they are in The Hague framing what security means at the moment, and I would talk about cognitive security. And then we are talking about the famous five percent investment in security, but what I'm referring to is the 1.5 percent: whole-of-society security and information integrity. I think that's the frame we should be talking about. In general, media education investments are pretty much non-existent all over the world at the moment, so there is a lot to improve, even if Finland is apparently performing the best. If I would invest in something now, and this is what we are trying to do in Finland, it is going back to basics: the children, the next generation. That's where we have to find some sort of protection and ensure that, and this sounds kind of crazy, they are able to think before they use AI. I was actually asking the chat what an AI-native person looks like, because if we are not able to think ourselves, we are not able to use AI as it is meant to be used. So I would leave you with these thoughts about the importance of education, and the possibilities we have in empowering teachers in different societies to prepare youngsters for information integrity. Thank you.


Moderator: Thank you very much, Mikko. Are you more afraid of your neighbours or your supposed friends?


Mikko Salo: We are not afraid of our neighbours. We are prepared, and there are 50 years of history behind that. But everybody has a lot to do on the information side, and it's very mental, so to say. Nobody is too prepared for it. This is a new battlefield, and we just need to stay calm and try to progress. That's why the IGF is doing very important work to keep the internet somehow in place.


Moderator: Thank you very much. Before giving the floor to Morten, who is the next speaker, I want to mention that we also have with us Eric Lambert, who will write the report of this session and is an essential figure. He's not with us on stage, but he's behind the scenes. Morten, your organisation is partially owned by the national public service broadcaster.


Morten Langfeldt Dahlback: Among others, yes. So my name is Morten. I'm from Faktisk, the Norwegian fact-checking organisation. We're jointly owned by all of the major media companies in Norway, including the public broadcaster and also the commercial public broadcaster, yes. I'm going to talk about three issues that I think are important in this context. I'm both part of Faktisk, the fact-checker, and the coordinator of NORDIS, the hub of EDMO that Mikko and Faktabaari are also part of. The first point I want to raise is that we talk about disinformation and misinformation here, and I think one of the core challenges we face in responding to this problem is that we don't know enough about its scope, and we don't know enough about its impact, at least in a lot of domains. And I think the conditions for gaining more knowledge about this problem have become worse over the past few months. The reason why it's becoming worse is regulatory divergence between Europe and the US. Up until about a year ago, several pieces of legislation came into being that were supposed to increase transparency from major tech platforms, forcing them to provide more information to independent fact-checkers, but also to researchers. Except one, of course: the legislation was supposed to apply to all of them, but X refused to be part of it. And we already saw some signs this year that things were deteriorating, when Meta closed down the fact-checking program in the US. We were expecting them to do so in Europe as well; that hasn't happened, fortunately. But we think these programs, which allow us to gain more knowledge about the disinformation phenomenon, are probably under threat, and that is going to make our life more difficult. But there is a different problem here as well.
Because of the wealth of information that is online in the first place, it's very difficult to estimate the scope of disinformation. When Paula, for example, shows you a model of the disinformation phenomenon, it's very complex; it has a lot of variables, and it's very difficult to disentangle the overall composition of platforms and their algorithms from disinformation and misinformation specifically. So I think it has become more difficult to obtain knowledge about this phenomenon, and that limits the scope of our response. We have a fundamental problem there; it's probably solvable, but it's something that worries me. The second thing I want to address is the relationship between policymakers and political bodies on the one hand and independent actors, like Faktisk and Faktabaari, on the other, now that disinformation and misinformation are, to a greater extent, on the political agenda. Overall, I think it's a good thing that governments, the European Union, and others are attempting to limit the impact and the spread of disinformation. But it also places independent actors in a difficult position, because we need to maintain our independence from governments and from regulatory bodies in order to do our job and to keep the trust of our audience. Once our objectives are aligned with those of governments and other regulatory and official bodies, it's easy for others to throw our independence into doubt, because the alignment is too close. This is, I think, a very important problem, something that both we, as fact-checkers and as hubs of EDMO, and the political bodies need to work out over the next couple of years: figuring out the right kind of cooperative coexistence between journalistic organizations, which have been at the forefront of the battle against disinformation for years, and governmental bodies.
I think it's a difficult challenge, but it's one that we are in the process of addressing. The final point I want to address has to do with something Mikko just mentioned: he asked ChatGPT to give him some information pertinent to this session. To me this raises the challenge that, when we talk about mis- and disinformation, we may be fighting yesterday's battles. Up until now, the way we have related to mis- and disinformation, both as consumers, accidental consumers maybe, and as organizations that try to address it as a problem, is that the disinformation and misinformation out there is usually observable from the outside. We can see posts on Facebook; we can see videos on TikTok. They might be algorithmically delivered to individual people on their private feeds, but the content is out there in the open. However, when you use chatbots like ChatGPT or Claude, whichever you want, the information you receive from the chatbot is not in the public sphere at all. It's a response generated on the basis of a prompt that you give to the language model, which means that we, as fact-checkers, can't see what responses you're getting. And the more information consumption is driven into chatbots, the less we will be able to observe the misinformation out there, and the less able we will be to respond to it. I don't have a solution to this. I think what's going to happen, if this development accelerates, is that information literacy will become much more important than it is today, because it will be up to the individual consumer and user of chatbots and LLMs to actually assess the information they are being provided.
So I think we might see a transition from the debunking and fact-checking work we've been engaged in so far to more literacy work, really empowering people to think critically about the outputs of chatbots, for example. I'm going to close there. I think we will see some big changes in the battle against misinformation in the coming years, but it really depends both on the regulatory divergence between the US and Europe and on AI development and the usage of AI among the general public. Thank you.


Moderator: Thank you very much. I think this last point you made is food for thought, so we need to reflect on it. But the one who has to reflect most is probably the European Commission, which is with us in the person of Alberto Rabbachin. This shift from fact-checking to media literacy and the empowerment of users: do you agree with that?


Alberto Rabbachin: Thank you, Giacomo, for this question. Indeed, I hope you can hear me well. Yes, we can hear you well. This is certainly a shift that is happening, and we are acknowledging it. I would like to show you a few slides that I have prepared to accompany my presentation; just give me a second to make that happen. You should be able to see it. Yes, it's coming. Okay. Still black, but we hope to see it in a second. Yes, now we can. Okay. So yes, indeed. From the European Commission's point of view, what we have in place is quite a rich framework that tries to preserve the integrity of the information sphere. It is not only a problem of content but also a problem of the functioning of the information ecosystem, of the digital information ecosystem. First of all, I think we have to make sure that European citizens themselves consider disinformation, misinformation, and information integrity an issue, a problem, a challenge. In fact, the latest Eurobarometer surveys from 2023 and 2024, conducted ahead of the European elections, showed that 38% of Europeans consider disinformation and misinformation one of the biggest threats to democracy. More recently, 82% of Europeans said that disinformation is a problem for democracy, and most of them are aware of this problem. So we are doing something that is perceived as useful by citizens, and this is also where we have to look when we try to address the disinformation phenomenon. Certainly, also from the citizens' point of view, social media and online social networks are the biggest source of the problem.
This also reflects the technological development we have witnessed in the last 10 years, in which the digital online information ecosystem became the main source of information. Some of you also mentioned the role of AI. Of course, AI opens a lot of opportunities in all sectors, but it can also be used for malicious activity. Thanks also to EDMO, we are currently monitoring the amount of disinformation linked to content generated by AI, and we see that this type of content is picking up; we have witnessed this in particular in the latest national elections in Europe. But what is the EU doing? First of all, we are working with partners among EU countries, with countries outside the European borders, and with international organizations, and we are very happy to be here talking about this important subject. There is also a very important mission, which is raising awareness and communicating about this phenomenon; I think EDMO is doing a great job with its network to inform citizens about the different forms this phenomenon can take. We are also promoting access to independent media and to fact-checked content, and we support media literacy activities. We also foster, in particular around the Code of Conduct on Disinformation, cooperation between social media platforms and civil society organizations. Last but not least, there is a pioneering regulation, the Digital Services Act. The Digital Services Act is the first global legal standard for tackling disinformation while protecting freedom of expression and information.
This regulation does not look at content, but at how content is distributed: it looks at the functioning of the algorithms, and at preventing malicious actors from abusing these algorithms to spread disinformation, manipulate public discourse, and create other systemic risks. It gives the Commission strong investigatory powers, which also helps increase transparency about the functioning of social media platforms. Then we have, as I mentioned, the Code of Conduct on Disinformation. The most recent development is that the Code of Practice on Disinformation has now been brought within the co-regulatory framework of the DSA, so it becomes a meaningful benchmark for very large online platforms to fulfil the DSA requirements from the disinformation point of view. It contains a large set of commitments and measures. And then there is the third pillar, societal resilience, under which I will put EDMO. As I said, EDMO is a great tool that we support to increase awareness of the phenomenon of disinformation through its detection and analysis. We have also supported the creation of the highest ethical and professional standards for fact-checking in Europe, and we finance a lot of media literacy activities. Here is a little bit of the history of the code: we started back in 2018 with 16 signatories and 21 commitments; now, in 2025, we have 42 signatories and a very granular code that includes 43 commitments and 128 measures. As of the 1st of July, as I mentioned before, the code fully enters into the DSA framework and will be auditable. This is the big transformation we are making by moving the code under the DSA: the signatories of the code will need to be audited on its implementation, and this will be an obligation under the DSA.
I'm not spending a lot of words on the code because people may be familiar with it, but the code covers several areas that are relevant to the disinformation phenomenon: the monetization of disinformation; transparency of political advertising, where we also have new regulation coming into place; reducing manipulative behaviour; empowering users; empowering fact-checkers; and providing access to data for research purposes. And then, to conclude: it is really a pleasure to see that in this panel there are a lot of EDMO representatives. It was a huge effort on our side to create this network of 14 hubs, soon to be 15. In line with the European Union's new strategy for international cooperation, we will have a new hub that will also cover Ukraine and Moldova, which are a critical regional spot if we want to fight disinformation. And let me also remind you, because it may not be clear to everyone how big this network is, that EDMO includes more than 120 organizations across the EU, including Norway and soon also Ukraine and Moldova. Last but not least, you mentioned it at the beginning, Giacomo: media literacy. Media literacy appears in different parts of our strategy. It is part of our policy and regulatory framework, both in the DSA and in the European Media Freedom Act. We have a media literacy expert group, and the new European Board for Media Services has a subgroup on media literacy. EDMO is doing great work, in particular at the local level, with initiatives tailored to the needs of the different member states; and, in particular through Creative Europe and pilot projects, we support a lot of cross-border media literacy activities. I will stop here and give you back the floor.


Moderator: Thank you very much, Alberto. We are quite late, but I don't want to deprive the audience of the possibility to raise questions. I see that there is already somebody there. Could you introduce yourself, please?


Audience: Yes, my name is Lou Kotny. I'm a retired American librarian, over here for my younger Norwegian-American children. On LinkedIn, I have a white paper about the Ukraine war titled Biden-Blinken's War Beginning Holocaust: Objective Facts Footnoted. Two big lies are being pushed by the European Union, by Europeans. First, Kyiv 2014 was an outside-agitated coup, for four objective reasons which I put in my paper. Second, the attack in 2022 was provoked by Zelensky himself, pumped up by the Europeans, threatening in Munich that Ukraine would get nuclear weapons. And finally, what really concerns me: Europe is voting against the annual United Nations anti-Nazi resolution, which is sort of self-defining, self-incriminating, that we are quisling collaborators. Now, my question is: if the EU is so pro-war biased, shouldn't the United Nations keep it at far arm's length as far as judging what is misinformation and disinformation? Thank you for letting me ask my question.


Moderator: Thank you. Other questions from the room? Okay, in the meantime, Alberto, do you want to answer this first question while... oh, yes, please, go ahead. There's a second question.


Audience: Hi. My name is Thora. I'm a PhD researcher from Iceland examining how very large platforms and search engines are undermining democracy. I am asking about academic access, because this is a big problem. I've been a research fellow at the Humboldt Institute, where they have Friends of the DSA, a group of academics who are trying to gain this access, but the large platforms are dragging their feet and claiming that the EU has to make a few definitions in order for this to start. I'm wondering what the status of academic access is, and what we should start with. Thank you.


Moderator: Thank you very much. Okay. Do you want to answer this, and then Alberto will take the other question?


Morten Langfeldt Dahlback: Yes, I can just echo what was just said from the audience. We recently tried to run a project where we were supposed to work with researchers to extract information from one of the major platforms, and we noticed very quickly that the research APIs through which you can actually extract information were much more limited than we had expected. So I think this is a major problem that a lot of people experience, and it definitely has not been fixed yet.


Moderator: Okay. So, Alberto, do you have some elements of an answer to the first question? And could you also complement what has been said about access to platform data, which is essential for understanding what is happening?


Alberto Rabbachin: Yes, Giacomo. On the first question, and this is an important element I want to stress: when we talk about the detection and analysis of disinformation, we don't want to be the ones calling the shots. We are supporting an independent, multidisciplinary community, represented by EDMO here: 120 organisations, selected by independent experts. The work they do in fact-checking and analysing disinformation is completely independent, not only of the European Commission but also of EU governments. This is really something we take great care of and want to preserve. On the second question, the Digital Services Act obliges platforms to provide data for research activities, and there is an upcoming delegated act that should raise the bar, let's say, in terms of giving researchers in Europe more access for doing their work.


Moderator: I think this is fundamental for gaining a better understanding of the phenomenon, and therefore for designing proper policy responses. Thank you very much. I think we've run out of time, but there is one more question. Yes, please.


Audience: My name is Mohamed Aded Ali. I'm from Somalia, and I'm part of the RECIPE programme. Recognising AI propaganda in terms of digital integrity violations involves identifying when AI technologies are misused to deceive, manipulate, or misinform individuals or groups. These violations can threaten trust, prosperity, ethical standards, and digital communication. My question is: how can EU rules recognise this in terms of internet integrity? Thank you.


Moderator: In terms of the internet?


Audience: Internet and digital integrity, based on EU rules.


Moderator: I think we can give a generic answer, which is that information integrity, as Mikko said before, more and more becomes the relevant point, especially because, thanks to artificial intelligence, we will have to face a flood of automatically generated disinformation. So it becomes more and more important to identify which sources are reliable and whether information has been manipulated, and this, according to what Morten was saying before, will become more and more difficult. So a mix of rules, as the European Union is trying to put in place, and work on media integrity by the media and journalists is absolutely essential to try to face this unpredictable future. Thank you very much. Sorry that we didn't give you many answers, but we shared with you a lot of questions; these are the times we are living in, and we hope that in the coming days we can find some other answers with other partners. I just remind you that in a few minutes we'll start, in workshop room number two, a seminar by the BBC and Deutsche Welle about how public service media could remedy part of the problems we have discussed this morning. Thank you very much, everybody, for participating; I wish you a nice IGF, and thank you for coming. Thank you all.



Paula Gori

Speech speed

172 words per minute

Speech length

1567 words

Speech time

545 seconds

Disinformation creates doubt and division in society, eroding information integrity essential for democratic decision-making

Explanation

Gori argues that disinformation puts society in a situation where people cannot be sure about information due to conflicting facts and non-facts. This erosion of information integrity is problematic for democracy because decision-making requires a factual basis, and without it, people may make decisions not in their interest.


Evidence

Examples given include disinformation on migration, climate change, elections, and health topics that are often interconnected


Major discussion point

Information integrity as foundation for democracy


Topics

Human rights | Sociocultural


The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights

Explanation

Gori emphasizes that disinformation cannot be simplified because it involves many complex factors and human rights are at stake. She argues that the complexity requires a mix of different solutions rather than a single approach.


Evidence

References to slides showing climate change disinformation and economics of disinformation to demonstrate complexity


Major discussion point

Complexity of disinformation requires nuanced solutions


Topics

Human rights | Sociocultural


Agreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion

Explanation

Gori outlines that international frameworks like the Global Digital Compact and UNESCO guidelines focus on protecting human rights, ensuring algorithmic transparency, involving multiple stakeholders, and mitigating risks. The approach targets how platforms work rather than directly removing content.


Evidence

References to Global Digital Compact, UNESCO guidelines on digital platform governance, EU’s international digital strategy, and Digital Service Act


Major discussion point

Framework approaches to disinformation


Topics

Legal and regulatory | Human rights


EDMO serves as a platform bringing together different stakeholders, similar to IGF’s approach to internet governance

Explanation

Gori describes EDMO as a multi-stakeholder platform that brings together various actors to address disinformation, providing tools like training and fact-checking repositories. It operates through 14 hubs covering all EU member states to address local specificities.


Evidence

EDMO works with 14 national or multinational hubs covering all EU member states, funded by the European Commission


Major discussion point

Multi-stakeholder cooperation in combating disinformation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Alberto Rabbachin
– Moderator

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


Local specificities in culture, politics, and language are crucial for understanding how disinformation impacts different countries

Explanation

Gori argues that culture, policy, politics, history, language, and media consumption patterns of a country significantly impact whether disinformation is effective or enters a country. This necessitates local elements in any response strategy.


Evidence

EDMO’s structure with local hubs to address regional specificities while enabling pan-European analysis


Major discussion point

Importance of local context in disinformation response


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo

Agreed on

Local context and specificities are crucial for effective disinformation response



Morten Langfeldt Dahlback

Speech speed

182 words per minute

Speech length

1146 words

Speech time

376 seconds

There is insufficient knowledge about the scope and impact of disinformation, and conditions for gaining this knowledge are deteriorating

Explanation

Dahlback argues that understanding the scope and impact of disinformation is limited, and the situation is worsening due to regulatory divergence between Europe and the US. He notes that legislation meant to increase platform transparency is being undermined.


Evidence

META closed down fact-checking programs in the US, X refused to comply with transparency legislation, and research APIs are more limited than expected


Major discussion point

Knowledge gaps about disinformation scope and impact


Topics

Legal and regulatory | Sociocultural


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies

Explanation

Dahlback highlights the difficulty fact-checkers face in maintaining independence and audience trust when their objectives align closely with government goals. This alignment can lead others to question their independence.


Evidence

The challenge of cooperative coexistence between journalistic organizations and governmental bodies in addressing disinformation


Major discussion point

Independence of fact-checking organizations


Topics

Human rights | Sociocultural


Disagreed with

– Alberto Rabbachin

Disagreed on

Approach to combating disinformation: regulatory vs. independence concerns


The shift toward AI chatbots creates invisible information consumption that fact-checkers cannot observe or respond to effectively

Explanation

Dahlback warns that as information consumption moves to private chatbot interactions, fact-checkers lose the ability to observe and respond to misinformation. Unlike social media posts that are publicly observable, chatbot responses are private and generated individually.


Evidence

Comparison between observable content on Facebook and TikTok versus private responses from ChatGPT and Claude


Major discussion point

AI chatbots creating invisible misinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paula Gori
– Alberto Rabbachin
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically

Explanation

Dahlback suggests that as AI-generated content becomes more prevalent and less observable, the focus should shift from reactive fact-checking to proactive media literacy. This would empower individuals to critically assess information they receive from chatbots and other AI tools.


Evidence

The increasing use of chatbots and LLMs making traditional fact-checking approaches less effective


Major discussion point

Evolution from fact-checking to media literacy


Topics

Sociocultural | Human rights


Agreed with

– Mikko Salo
– Alberto Rabbachin
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


Major platforms are limiting researcher access to data, with research APIs being more restricted than expected

Explanation

Dahlback reports that recent attempts to work with researchers on extracting platform information revealed that research APIs provide much more limited access than anticipated. This restricts the ability to study and understand disinformation phenomena.


Evidence

Direct experience from a recent project attempting to extract information from a major platform


Major discussion point

Platform data access for research


Topics

Legal and regulatory | Development


Disagreed with

– Alberto Rabbachin
– Audience

Disagreed on

Platform data access and transparency



Benjamin Shultz

Speech speed

161 words per minute

Speech length

908 words

Speech time

336 seconds

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric

Explanation

Shultz describes how information operations are being used more aggressively to damage democratic institutions and social cohesion in the US. He notes that platforms are moving closer to the administration and that there are concerning restrictions on free expression.


Evidence

People being denied entry to the US based on critical text messages about the administration, and platforms aligning more closely with government


Major discussion point

Increasing threats to democracy from information campaigns


Topics

Human rights | Sociocultural


Small legislative victories like banning non-consensual deepfakes can maintain transatlantic cooperation despite broader challenges

Explanation

Shultz argues that despite deteriorating US-Europe relations, focusing on specific issues with broad bipartisan support can preserve cooperation. He cites the success in making non-consensual explicit deepfakes illegal as an example of achievable progress.


Evidence

Recent US legislation requiring platforms to remove deepfake videos within 48 hours of victim requests, achieved through bipartisan support


Major discussion point

Maintaining international cooperation through targeted legislation


Topics

Legal and regulatory | Human rights



Mikko Salo

Speech speed

130 words per minute

Speech length

685 words

Speech time

314 seconds

Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools

Explanation

Salo emphasizes that media education investments are insufficient globally and that Finland is focusing on preparing the next generation. He argues that children need to develop thinking skills before they can effectively use AI tools.


Evidence

Finland’s work with government officials to retrain teachers and provide guidance for AI literacy, described as ‘whole of society security’


Major discussion point

Education as foundation for information integrity


Topics

Sociocultural | Development


Agreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation


People need to develop AI literacy and learn to think critically before using AI tools

Explanation

Salo argues that individuals must be able to think independently before they can properly utilize AI. He questions what an ‘AI native person’ looks like and emphasizes the importance of maintaining human critical thinking capabilities.


Evidence

Reference to asking ChatGPT for information and the need for people to assess AI outputs critically


Major discussion point

AI literacy and critical thinking skills


Topics

Sociocultural | Development


Agreed with

– Paola Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response



Alberto Rabbachin

Speech speed

127 words per minute

Speech length

1426 words

Speech time

671 seconds

The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content

Explanation

Rabbachin describes the DSA as the first global legal standard for tackling disinformation while preserving free speech rights. It examines how content is distributed through algorithms rather than the content itself, aiming to prevent malicious actors from abusing algorithms.


Evidence

The DSA provides the Commission with strong investigatory powers and increases transparency on social media platform functioning


Major discussion point

Regulatory approach focusing on algorithmic transparency


Topics

Legal and regulatory | Human rights


AI-generated disinformation is increasing and was particularly witnessed in recent European elections

Explanation

Rabbachin notes that EDMO monitoring shows AI-generated disinformation content is rising, with particular evidence during recent national elections in Europe. This represents a growing challenge that requires attention.


Evidence

EDMO monitoring data showing increased AI-generated disinformation during recent European national elections


Major discussion point

AI’s role in generating disinformation


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paola Gori
– Morten Langfeldt Dahlback
– Mikko Salo
– Moderator

Agreed on

AI is creating new challenges for disinformation detection and response


The Code of Practice on Disinformation has grown from 16 signatories with 21 commitments to 42 signatories with 128 measures

Explanation

Rabbachin highlights the expansion of the voluntary code since 2018, showing increased industry engagement. The code has been integrated into the DSA framework, making it auditable and creating obligations for signatories.


Evidence

Specific numbers showing growth from 16 to 42 signatories and from 21 to 128 measures, with integration into DSA making it auditable


Major discussion point

Evolution of industry self-regulation


Topics

Legal and regulatory | Sociocultural


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments

Explanation

Rabbachin emphasizes that the EU doesn’t directly determine what constitutes disinformation but supports an independent network of organizations. These organizations are selected by independent experts and maintain complete independence in their fact-checking and analysis work.


Evidence

EDMO network includes more than 120 organizations across the EU, Norway, and soon Ukraine and Moldova, selected by independent experts


Major discussion point

Independence of EU-supported fact-checking network


Topics

Human rights | Sociocultural


Agreed with

– Paola Gori
– Moderator

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


Disagreed with

– Morten Langfeldt Dahlback

Disagreed on

Approach to combating disinformation: regulatory vs. independence concerns


The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access

Explanation

Rabbachin explains that the DSA obligates platforms to provide data for research purposes, and there is an upcoming delegated act that should further improve researcher access to platform data for their work.


Evidence

Reference to DSA obligations and upcoming delegated act to enhance researcher access


Major discussion point

Platform data access for research under DSA


Topics

Legal and regulatory | Development


Disagreed with

– Morten Langfeldt Dahlback
– Audience

Disagreed on

Platform data access and transparency


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups

Explanation

Rabbachin outlines how media literacy is integrated into various EU policies including the DSA and European Media Freedom Act. The EU supports media literacy through expert groups, pilot projects, and local initiatives tailored to member state needs.


Evidence

Media literacy provisions in DSA and European Media Freedom Act, media literacy expert group, European Board for Media Services subgroup, and Creative Europe pilot projects


Major discussion point

Comprehensive EU approach to media literacy


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo
– Morten Langfeldt Dahlback
– Moderator

Agreed on

Media literacy and education are fundamental to combating disinformation



Audience

Speech speed

119 words per minute

Speech length

375 words

Speech time

188 seconds

Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance

Explanation

An audience member from Iceland who studies platform impacts on democracy reports that large platforms are avoiding granting academic access by claiming that the EU must first provide clearer definitions. This hampers research into how platforms undermine democratic processes.


Evidence

Experience from Humboldt Institute’s Friends of the DSA group of academics trying to gain access


Major discussion point

Platform compliance with data access requirements


Topics

Legal and regulatory | Development


Disagreed with

– Morten Langfeldt Dahlback
– Alberto Rabbachin

Disagreed on

Platform data access and transparency



Moderator

Speech speed

133 words per minute

Speech length

816 words

Speech time

367 seconds

The session aims to understand who are friends and foes in the complicated situation of Internet Governance and the fight against disinformation

Explanation

The moderator frames the discussion as needing to identify allies and adversaries in the complex landscape of internet governance, particularly regarding disinformation challenges. This sets up the session as exploring the different stakeholders and their roles in addressing these issues.


Evidence

Session title and opening remarks about the complicated and unclear situation for Internet Governance


Major discussion point

Identifying stakeholders in internet governance and disinformation


Topics

Legal and regulatory | Sociocultural


Agreed with

– Paola Gori
– Alberto Rabbachin

Agreed on

Multi-stakeholder approach is essential for addressing disinformation


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers

Explanation

The moderator highlights and questions this transition from reactive fact-checking approaches to proactive media literacy and user empowerment strategies. He specifically asks the European Commission representative whether they agree with this shift, indicating it’s a significant policy consideration.


Evidence

Direct question to Alberto Rabbachin about agreeing with the shift from fact-checking to media literacy


Major discussion point

Evolution from fact-checking to media literacy approaches


Topics

Sociocultural | Legal and regulatory


Agreed with

– Mikko Salo
– Morten Langfeldt Dahlback
– Alberto Rabbachin

Agreed on

Media literacy and education are fundamental to combating disinformation


Information integrity and reliable source identification become increasingly important due to AI-generated disinformation floods

Explanation

The moderator synthesizes the discussion by emphasizing that information integrity is becoming more crucial as artificial intelligence enables automatic generation of disinformation at scale. He argues that identifying reliable sources and detecting manipulation will become increasingly difficult, requiring a combination of regulatory approaches and media integrity work.


Evidence

Reference to the flood of automatically generated disinformation through AI and the increasing difficulty of identification


Major discussion point

Information integrity in the age of AI-generated content


Topics

Sociocultural | Legal and regulatory


Agreed with

– Paola Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo

Agreed on

AI is creating new challenges for disinformation detection and response


A mix of rules and media integrity work by journalists is essential to face an unpredictable future

Explanation

The moderator concludes that addressing disinformation challenges requires combining regulatory frameworks (like those the EU is developing) with professional media integrity work conducted by journalists and media organizations. He presents this as necessary preparation for an uncertain technological and information landscape.


Evidence

Reference to European Union’s regulatory efforts and the work of media and journalists


Major discussion point

Combined regulatory and professional approach to disinformation


Topics

Legal and regulatory | Sociocultural


Agreements

Agreement points

Multi-stakeholder approach is essential for addressing disinformation

Speakers

– Paola Gori
– Alberto Rabbachin
– Moderator

Arguments

EDMO serves as a platform bringing together different stakeholders, similar to IGF’s approach to internet governance


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments


The session aims to understand who are friends and foes in the complicated situation of Internet Governance and the fight against disinformation


Summary

All speakers agree that combating disinformation requires collaboration among multiple stakeholders, including civil society, government, platforms, and international organizations, while maintaining the independence of fact-checking organizations


Topics

Legal and regulatory | Sociocultural


Local context and specificities are crucial for effective disinformation response

Speakers

– Paola Gori
– Mikko Salo

Arguments

Local specificities in culture, politics, and language are crucial for understanding how disinformation impacts different countries


Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


Summary

Both speakers emphasize that disinformation responses must account for local cultural, political, and linguistic contexts, with tailored approaches for different countries and communities


Topics

Sociocultural | Development


Media literacy and education are fundamental to combating disinformation

Speakers

– Mikko Salo
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Moderator

Arguments

Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers


Summary

All speakers agree that media literacy and critical thinking education are becoming increasingly important, potentially more so than reactive fact-checking approaches


Topics

Sociocultural | Development


AI is creating new challenges for disinformation detection and response

Speakers

– Paola Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Mikko Salo
– Moderator

Arguments

The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights


The shift toward AI chatbots creates invisible information consumption that fact-checkers cannot observe or respond to effectively


AI-generated disinformation is increasing and was particularly witnessed in recent European elections


People need to develop AI literacy and learn to think critically before using AI tools


Information integrity and reliable source identification become increasingly important due to AI-generated disinformation floods


Summary

All speakers acknowledge that AI is fundamentally changing the disinformation landscape, making detection more difficult and requiring new approaches to combat AI-generated false content


Topics

Sociocultural | Legal and regulatory


Similar viewpoints

Both speakers advocate for regulatory approaches that focus on algorithmic transparency and platform functioning rather than direct content moderation, emphasizing protection of fundamental rights and freedom of expression

Speakers

– Paola Gori
– Alberto Rabbachin

Arguments

Global frameworks emphasize fundamental rights, algorithmic transparency, multi-stakeholder approaches, and risk mitigation rather than content deletion


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Topics

Legal and regulatory | Human rights


Both express frustration with limited platform data access for research purposes, highlighting that platforms are not providing adequate transparency despite regulatory requirements

Speakers

– Morten Langfeldt Dahlback
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Topics

Legal and regulatory | Development


Both speakers express concerns about threats to democratic institutions and the challenges of maintaining independence while working to combat disinformation

Speakers

– Benjamin Shultz
– Morten Langfeldt Dahlback

Arguments

Bad actors are becoming more active in spreading information campaigns that undermine democracy and tear at social fabric


Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


Topics

Human rights | Sociocultural


Unexpected consensus

Shift from reactive fact-checking to proactive media literacy

Speakers

– Morten Langfeldt Dahlback
– Mikko Salo
– Alberto Rabbachin
– Moderator

Arguments

There may be a necessary shift from fact-checking and debunking work toward more literacy work and empowering people to think critically


Investment in media education is crucial, particularly for children who need to learn critical thinking before using AI tools


Media literacy appears across multiple policy frameworks and is supported through various EU initiatives and expert groups


There is a shift from fact-checking to media literacy and user empowerment that needs reflection, particularly by policymakers


Explanation

It’s unexpected that fact-checkers themselves (Dahlback) are advocating for a shift away from their traditional reactive approach toward proactive education, with broad agreement from policymakers and civil society representatives


Topics

Sociocultural | Development


Complexity requires nuanced rather than simple solutions

Speakers

– Paola Gori
– Morten Langfeldt Dahlback
– Alberto Rabbachin

Arguments

The disinformation phenomenon is extremely complex with multiple variables, making it difficult to find simple solutions while protecting human rights


There is insufficient knowledge about the scope and impact of disinformation, and conditions for gaining this knowledge are deteriorating


The Digital Services Act is pioneering regulation that addresses disinformation while protecting freedom of expression by focusing on algorithm functioning rather than content


Explanation

Unexpected consensus among different stakeholder types (NGO, fact-checker, policymaker) that simple solutions are inadequate and that the complexity of disinformation requires sophisticated, multi-faceted approaches


Topics

Legal and regulatory | Human rights


Overall assessment

Summary

Strong consensus exists on the need for multi-stakeholder cooperation, importance of media literacy education, challenges posed by AI-generated disinformation, and the necessity of protecting fundamental rights while addressing disinformation. There is also agreement on the importance of local context and the complexity of the phenomenon requiring nuanced solutions.


Consensus level

High level of consensus among speakers despite representing different sectors (EU policy, fact-checking, civil society, US perspective). The consensus suggests a mature understanding of disinformation challenges and broad agreement on fundamental principles, though implementation details may vary. This strong alignment across different stakeholder groups indicates potential for effective collaborative approaches to combating disinformation while preserving democratic values.


Differences

Different viewpoints

Approach to combating disinformation: regulatory vs. independence concerns

Speakers

– Morten Langfeldt Dahlback
– Alberto Rabbachin

Arguments

Independent actors like fact-checkers face challenges maintaining independence from governments while their objectives align with official bodies


The EU supports an independent, multidisciplinary community of 120+ organizations whose fact-checking work is completely independent from the European Commission and governments


Summary

Dahlback expresses concern about fact-checkers maintaining independence when their objectives align with government goals, while Rabbachin emphasizes that EU-supported organizations maintain complete independence from government influence


Topics

Human rights | Sociocultural


Platform data access and transparency

Speakers

– Morten Langfeldt Dahlback
– Alberto Rabbachin
– Audience

Arguments

Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Summary

There is disagreement about the effectiveness of current data access provisions: Rabbachin presents the DSA as providing an adequate framework, while Dahlback and audience members report practical difficulties in accessing platform data for research


Topics

Legal and regulatory | Development


Unexpected differences

Effectiveness of current transparency and research access mechanisms

Speakers

– Alberto Rabbachin
– Morten Langfeldt Dahlback
– Audience

Arguments

The Digital Services Act requires platforms to provide data for research activities, with upcoming regulations to improve researcher access


Major platforms are limiting researcher access to data, with research APIs being more restricted than expected


Academic access to platform data remains problematic, with platforms claiming definitional issues prevent compliance


Explanation

This disagreement is unexpected because it reveals a gap between regulatory intentions and practical implementation. While the EU representative presents the DSA as providing an adequate framework for research access, practitioners report significant difficulties in actually obtaining data, suggesting implementation challenges not acknowledged in policy discussions


Topics

Legal and regulatory | Development


Overall assessment

Summary

The main areas of disagreement center on the balance between regulatory approaches and independence concerns, the effectiveness of current data access mechanisms, and the optimal balance between different anti-disinformation strategies (fact-checking vs. media literacy vs. regulatory frameworks)


Disagreement level

Moderate disagreement with significant implications – while speakers share common goals of protecting information integrity and fundamental rights, they differ substantially on implementation approaches and the effectiveness of current measures. This suggests potential coordination challenges between policy makers, practitioners, and researchers in addressing disinformation effectively




Takeaways

Key takeaways

Disinformation is a complex, multi-faceted phenomenon that erodes information integrity essential for democratic decision-making, requiring sophisticated responses rather than simple solutions


There is a fundamental shift occurring from traditional fact-checking approaches toward media literacy and user empowerment, particularly as AI chatbots make disinformation less observable to fact-checkers


Regulatory divergence between the US and Europe is hampering knowledge gathering about disinformation, with the US experiencing democratic backsliding while Europe maintains stronger regulatory frameworks


The EU’s approach focuses on algorithmic transparency and platform accountability rather than content censorship, exemplified by the Digital Services Act which addresses how content is distributed rather than the content itself


Education and media literacy, particularly for children, are becoming increasingly critical as AI-generated disinformation proliferates and people need to develop critical thinking skills before using AI tools


Independent fact-checking organizations face the challenge of maintaining credibility and independence while their objectives increasingly align with government anti-disinformation efforts


Multi-stakeholder cooperation through networks like EDMO (120+ organizations across EU) is essential, but must respect local specificities in culture, politics, and language


Platform data access for researchers remains severely limited despite regulatory requirements, hindering understanding of disinformation scope and impact


Resolutions and action items

The Code of Practice on Disinformation will be fully integrated into the DSA framework as of July 1st, making it auditable and creating binding obligations for platform signatories


A new EDMO hub covering Ukraine and Moldova will be established to address critical regional disinformation challenges


Upcoming EU delegated acts will improve researcher access to platform data for disinformation studies


Continued investment in media literacy programs across EU member states, with initiatives tailored to local needs


Maintenance of transatlantic cooperation through focus on areas of broad bipartisan support, such as banning non-consensual deepfakes


Unresolved issues

How to effectively monitor and respond to disinformation distributed through private AI chatbot interactions that are not publicly observable


How independent fact-checking organizations can maintain credibility while working closely with government anti-disinformation initiatives


How to obtain sufficient knowledge about the scope and impact of disinformation when platform transparency is decreasing


How to balance the need for platform regulation with protecting freedom of expression, particularly given varying cultural and political contexts


How to address the growing regulatory divergence between the US and Europe while maintaining effective global cooperation against disinformation


How to scale media literacy education effectively when current global investments in media education are minimal


How to ensure meaningful academic and researcher access to platform data despite platform resistance and technical limitations


Suggested compromises

Focus on areas of broad political consensus (like banning non-consensual deepfakes) to maintain transatlantic cooperation despite broader disagreements


Emphasize algorithmic transparency and platform accountability rather than content moderation to address free speech concerns while tackling disinformation


Combine regulatory approaches with voluntary industry cooperation through codes of practice that can evolve into binding obligations


Balance global principles with regional specificities, allowing for local adaptation while maintaining shared fundamental values


Shift emphasis from reactive fact-checking to proactive media literacy education to address the changing nature of information consumption


Maintain independence of fact-checking organizations through multi-stakeholder governance structures rather than direct government control


Thought provoking comments

We have to start to think about new ways, new creative ways to maintain the alliance, the Transatlantic Alliance in these rough times… Recently in the US, non-consensual explicit deepfakes, colloquially known as deepfake porn, have actually been made illegal… my hope is that with small steps like these that have been taken in the states that do have broad support, such as banning explicit deepfakes that are made non-consensually, my hope is that collaborating on these issues that Europe and the U.S. and countries all around the world can continue the dialogue

Speaker

Benjamin Shultz


Reason

This comment was insightful because it reframed the discussion from focusing on problems to identifying practical solutions for maintaining international cooperation despite political tensions. Shultz acknowledged the deteriorating transatlantic relationship while proposing a pragmatic approach of finding common ground on specific, less politically charged issues.


Impact

This shifted the conversation from a purely analytical discussion of disinformation challenges to a more solution-oriented dialogue about maintaining cooperation. It introduced the concept of incremental progress through bipartisan issues, which influenced subsequent speakers to consider practical approaches rather than just theoretical frameworks.


However, when you use chatbots like ChatGPT or Claude… the information that you receive from the chatbot is not in the public sphere at all. It’s a response generated on the basis of a prompt that you give to the language model, which means that we, as fact-checkers, for example, are unable, we can’t see what responses you’re getting… I think we might see a transition from more debunking and fact-checking work like what we’ve been engaged in so far to more literacy work

Speaker

Morten Langfeldt Dahlback


Reason

This was perhaps the most thought-provoking comment of the session because it fundamentally challenged the existing paradigm of fighting disinformation. Dahlback identified a critical blind spot in current approaches – that AI-generated responses in private conversations are invisible to fact-checkers, making traditional debunking methods obsolete.


Impact

This comment created a pivotal moment in the discussion, shifting focus from current regulatory frameworks to future challenges. It prompted the moderator to specifically ask the European Commission representative about this shift from fact-checking to media literacy, making it a central theme for the remainder of the session. It essentially redefined the problem space from observable public content to private, personalized AI interactions.


I think one of the core challenges that we face in responding to this problem is that we don’t know enough about the scope of the problem, and we don’t know enough about its impact… the conditions for gaining more knowledge about this problem have become worse over the past few months… because of regulatory divergence between Europe and the US

Speaker

Morten Langfeldt Dahlback


Reason

This comment was insightful because it identified a fundamental epistemological problem – that effective policy responses require understanding the scope and impact of disinformation, but the tools for gaining this knowledge are being eroded. It connected regulatory divergence to practical research limitations.


Impact

This comment established a critical foundation for understanding why the disinformation fight is becoming more difficult. It influenced subsequent discussion about data access for researchers and highlighted the interconnected nature of regulatory frameworks and research capabilities.


Once our objectives are aligned with the objectives of governments and of other regulatory and official bodies, I think it’s easy for others to throw our independence into doubt, because the alignment is too close

Speaker

Morten Langfeldt Dahlback


Reason

This comment revealed a sophisticated understanding of the paradox facing independent fact-checkers: the more successful they are in aligning with government anti-disinformation efforts, the more their independence and credibility can be questioned. It highlighted the delicate balance between cooperation and independence.


Impact

This comment introduced a nuanced discussion about the relationship between civil society organizations and government bodies in the fight against disinformation. It added complexity to what might otherwise be seen as straightforward cooperation, showing how political dynamics can undermine the very organizations trying to combat disinformation.


I think that’s where we have to find some sort of protection and ensure that before they first need to… they need to be able to think before they use AI. And I was actually asking the chat, what does an AI-native person look like? Because if we are not able to think ourselves, we are not able to use AI as it’s meant at the moment

Speaker

Mikko Salo


Reason

This comment was thought-provoking because it identified a fundamental cognitive challenge of the AI era: that people need critical thinking skills before they can effectively use AI tools. The concept of an ‘AI-native person’ and the need to ‘think before using AI’ highlighted a crucial educational gap.


Impact

This comment reinforced the emerging theme about the importance of education and media literacy over traditional fact-checking approaches. It provided concrete support for the shift in strategy that other speakers were advocating, emphasizing the foundational role of critical thinking skills.


Overall assessment

These key comments fundamentally reshaped the discussion from a traditional focus on current disinformation challenges and regulatory responses to a forward-looking examination of how the landscape is changing. Morten Langfeldt Dahlback’s insights about AI-generated content being invisible to fact-checkers and the erosion of research capabilities created pivotal moments that shifted the conversation toward future challenges and the need for new approaches. Benjamin Schultz’s reframing toward practical cooperation despite political tensions moved the discussion from problem identification to solution-seeking. Together, these comments transformed what could have been a routine policy discussion into a more sophisticated analysis of the evolving nature of information integrity challenges, the limitations of current approaches, and the need for adaptive strategies that emphasize education and literacy over traditional content moderation.


Follow-up questions

How can we better understand the scope and impact of disinformation across different domains?

Speaker

Morten Langfeldt Dahlback


Explanation

He identified this as a core challenge, noting that we don’t know enough about the scope of the problem or its impact, and that the conditions for gaining such knowledge have worsened due to regulatory divergence and platform restrictions on data access


How can independent fact-checking organizations maintain their independence while working with governments and regulatory bodies on disinformation?

Speaker

Morten Langfeldt Dahlback


Explanation

He highlighted the difficult position independent actors face when their objectives align with governments, as it can throw their independence into doubt and affect audience trust


How can fact-checkers and researchers address misinformation generated by private chatbot interactions that are not publicly observable?

Speaker

Morten Langfeldt Dahlback


Explanation

He noted that chatbot responses are not in the public sphere, making it impossible for fact-checkers to observe and respond to misinformation delivered through these channels


What does an AI-native person look like and how should we prepare them for information integrity?

Speaker

Mikko Salo


Explanation

He emphasized the urgent need for AI literacy and questioned how people who grow up with AI will think critically about information, stressing that people need to be able to think for themselves before they use AI


What is the current status of academic access to platform data under the Digital Services Act?

Speaker

Thora (audience member)


Explanation

She highlighted that large platforms are dragging their feet on providing academic access, claiming the EU needs to make definitions first, which is hindering research on how platforms undermine democracy


How can EU rules help recognize AI propaganda and digital integrity violations?

Speaker

Mohamed Aded Ali (audience member)


Explanation

He asked about identifying when AI technologies are misused to deceive or manipulate, and how EU frameworks can address these threats to digital communication integrity


How much investment should be allocated to cognitive security and information integrity as part of societal security?

Speaker

Mikko Salo


Explanation

He referenced the 5% investment in security and suggested 1.5% should go to whole-of-society security including information integrity, but questioned what the appropriate investment level should be


How can we transition from debunking and fact-checking work to more effective literacy work?

Speaker

Morten Langfeldt Dahlback


Explanation

He suggested this transition may be necessary as more information consumption moves to private chatbot interactions, requiring individuals to assess information themselves rather than relying on public fact-checking


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.