Open Forum #16 AI and Disinformation Countering the Threats to Democratic Dialogue
25 Jun 2025 12:45h - 14:00h
Session at a glance
Summary
This discussion focused on the dual role of artificial intelligence in both creating and combating disinformation, examining threats to democratic dialogue and potential solutions. The panel was organized by the Council of Europe as part of an Internet Governance Forum open forum, bringing together experts from policy, technology, and civil society sectors.
David Caswell opened by explaining how AI has fundamentally changed information creation from an artisanal to an automated process, enabling the generation of entire narratives across multiple platforms and extended timeframes rather than just individual fake artifacts. He highlighted emerging risks including automated personalized persuasion at scale and the embedding of deep biases into AI training models, while also noting opportunities for more systematic and accessible civic information. Chine Labbé from NewsGuard presented concrete evidence of AI’s impact on disinformation, revealing that deepfakes in the Russia-Ukraine conflict increased dramatically from one case in the first year to sixteen sophisticated examples in the third year. She described how malicious actors now create entire networks of AI-generated fake news sites, with over 1,200 such sites identified globally, and demonstrated how AI chatbots frequently repeat false claims as authoritative facts approximately 26% of the time.
Maria Nordström discussed Sweden’s policy approach, emphasizing the importance of the Council of Europe’s AI Framework Convention as the first legally binding global treaty on AI and human rights. She highlighted the challenge of balancing public education about AI risks without further eroding trust in information systems. Olha Petriv shared Ukraine’s practical experience, describing their bottom-up approach including industry self-regulation through codes of conduct and the critical importance of teaching children AI literacy and critical thinking skills, particularly given their vulnerability to disinformation campaigns. The discussion concluded with actionable recommendations including preserving primary source journalism, developing AI literacy programs, creating certification systems for AI chatbots, and potentially establishing public service AI systems trained on reliable data sources.
Key points
## Major Discussion Points:
– **AI’s dual role in disinformation**: The discussion explored how AI both amplifies disinformation threats (through deepfakes, automated content creation, and AI-generated fake news sites) while also offering potential solutions for detection and fact-checking
– **Scale and automation challenges**: Speakers emphasized how AI has fundamentally changed the disinformation landscape by enabling malicious actors to create sophisticated false content at unprecedented scale and low cost, with examples including over 1,200 AI-generated fake news sites and deepfakes becoming increasingly believable
– **Systemic failures in current approaches**: The panel critiqued existing disinformation countermeasures as largely ineffective due to scale mismatches, political polarization, and the focus on individual artifacts rather than systematic solutions
– **Education and literacy as key solutions**: Multiple speakers advocated for AI literacy programs, particularly targeting children and teachers, with innovative approaches like using AI chatbots to teach critical thinking about AI-generated content
– **Regulatory and governance frameworks**: Discussion of legal instruments like the Council of Europe’s AI Framework Convention and the need for industry self-regulation, certification systems, and consumer empowerment to incentivize truthful AI systems
## Overall Purpose:
The discussion aimed to examine the complex relationship between AI and disinformation, analyzing both the threats AI poses to democratic dialogue and the opportunities it presents for combating false information. The session sought to identify practical solutions and policy approaches for maintaining information integrity in an AI-driven world.
## Overall Tone:
The discussion maintained a serious, analytical tone throughout, reflecting the gravity of the subject matter. While speakers acknowledged significant challenges and risks, the tone remained constructive and solution-oriented rather than alarmist. There was a notable shift toward cautious optimism in the latter portions, with speakers emphasizing actionable solutions like education, regulation, and technological safeguards, concluding with a call to transform AI “from a weapon to a force for good.”
Speakers
– **Irena Gríkova** – Head of the Democratic Institutions and Freedoms Department at the Council of Europe, moderator of the panel
– **David Caswell** – Product developer, consultant and researcher of computational and automated forms of journalism; expert for the Council of Europe and member of expert committee for guidance note on implications of generative AI on freedom of expression
– **Chine Labbé** – Senior Vice President and Managing Editor for Europe and Canada at NewsGuard (company that tackles disinformation online)
– **Maria Nordström** – PhD, Head of Section, Digital Government Division at the Ministry of Finance in Sweden; works on national AI policy at the Government Offices of Sweden and on international AI policy; participated in negotiations of the EU AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence
– **Olha Petriv** – Artificial intelligence lawyer at the Centre for Democracy and the Rule of Law in Ukraine; expert on artificial intelligence who played active role in discussion and amending negotiations of the Framework Convention on Artificial Intelligence of the Council of Europe
– **Mikko Salo** – Representative of Faktabarida, a digital information literacy service in Finland
– **Audience** – Various unidentified audience members who asked questions
**Additional speakers:**
– **Frances** – From YouthDIG, the European Youth IGF
– **Jun Baek** – From Youth of Privacy, a youth-led privacy and cybersecurity education organization
Full session report
# AI and Disinformation: Navigating Threats and Opportunities for Democratic Dialogue
## Executive Summary
This comprehensive discussion, organised by the Council of Europe as part of an Internet Governance Forum open forum, brought together leading experts from policy, technology, and civil society sectors to examine the dual role of artificial intelligence in both creating and combating disinformation. The panel explored how AI has fundamentally transformed the information ecosystem, creating unprecedented challenges for democratic dialogue whilst simultaneously offering potential solutions for maintaining information integrity.
The discussion was structured around the Council of Europe’s three-pillar approach to addressing AI and disinformation: integrating fact-checking into AI systems, implementing human rights-by-design principles in platform development, and empowering users with knowledge and tools to navigate AI-mediated information environments.
## Opening Framework: Council of Europe’s Approach
Irena Gríkova, the moderator from the Council of Europe, established the context by highlighting the organisation’s role as “Europe’s democracy and human rights watchdog” representing 46 member states. She introduced the Council’s three-pillar guidance note on countering online mis- and disinformation: fact-checking integration, platform design principles, and user empowerment strategies.
Gríkova emphasised the global significance of the Council of Europe’s AI Framework Convention, the first legally binding international treaty addressing AI’s impact on human rights, democracy, and the rule of law. The convention has attracted international attention, with signatures from non-European countries including Japan, Switzerland, Ukraine, Montenegro, and Canada.
## The Transformation of Information Systems
David Caswell, a product developer and computational journalism expert, provided the foundational framework for understanding the current information crisis. He explained that society has undergone a fundamental transformation in its information ecosystem over the past 15 years, moving “from a one-to-many, or more accurately, a few-to-many shape, to a many-to-many shape.” This structural change represents the root cause of current disinformation challenges, with AI serving as the latest evolution in this transformation.
Caswell emphasised that AI has fundamentally altered information creation processes, transforming them from artisanal, human-driven activities to automated, scalable operations. This shift enables the generation of entire narratives across multiple platforms and extended timeframes, rather than just individual fake artefacts. He noted significant improvements in AI accuracy, citing leaderboard data showing hallucination rates dropping from “15% range to now I think the top models in the leaderboard are 0.7%.”
A concerning development highlighted during the presentation involves the potential for powerful actors to reshape foundational AI training data. The moderator read from Caswell’s slides about Elon Musk’s announcement that Grok would be used to “basically rebuild the archive on which they train the next version of Grok,” effectively enabling the rewriting of humanity’s historical record at the training data level.
## Empirical Evidence of AI’s Impact on Disinformation
Chine Labbé, Senior Vice President at NewsGuard, provided compelling empirical evidence of AI’s growing impact on disinformation campaigns. Her research revealed a dramatic escalation in the sophistication and frequency of AI-generated false content, particularly regarding the Russia-Ukraine conflict. Deepfakes increased from one case in the first year to sixteen sophisticated examples in the third year, demonstrating both improved quality and increased deployment.
Labbé described how malicious actors now create entire networks of AI-generated fake news sites designed to appear as credible local news sources. NewsGuard has identified over 1,200 such sites globally, representing a new category of disinformation infrastructure operating at unprecedented scale. She provided a specific example of the Storm 1516 campaign, which created a fabricated video involving Brigitte Macron, and mentioned John Mark Dugan, a “former deputy Florida sheriff, who is now exiled in Moscow”, who alone has created more than 273 websites.
Critical research findings revealed that AI chatbots repeat false claims approximately 26% of the time overall, with specific testing of Russian disinformation showing rates of 33% initially, dropping to 20% two months later. A BBC experiment found that in “10% of the cases, there were significant problems with the responses. In 19% of the cases, the chatbot introduced factual errors, and in 13% of the cases, there were quotes that were never in the original articles.”
Labbé identified “vicious cycles of disinformation,” where AI-generated false content becomes validated by other AI systems, creating self-reinforcing loops of synthetic credibility. Malicious actors exploit this through “LLM grooming” – saturating web results with propaganda so that chatbots will cite and repeat it as factual information.
## Policy and Regulatory Responses
Maria Nordström, representing Sweden’s national AI policy efforts, discussed the importance of developing comprehensive regulatory frameworks whilst acknowledging the limitations of purely legislative approaches. She highlighted a critical policy challenge: finding the right balance between educating the public about AI risks without further eroding trust in information systems.
Nordström posed a fundamental question: “To what extent is it beneficial for society when all information is questioned? What does it do with democracy and our agency when we can no longer trust the information that we see, that we read, that we hear?”
The Swedish approach recognises that hard law regulation has limitations in requiring “truth” from AI systems, making consumer empowerment and choice crucial components of any comprehensive strategy. This perspective emphasises market-driven solutions alongside regulatory frameworks, empowering users to make informed decisions about AI systems.
## Practical Experience from Ukraine
Olha Petriv, an artificial intelligence lawyer from Ukraine, provided insights from a country experiencing active disinformation warfare. She described Ukraine’s bottom-up approach to AI governance, including industry self-regulation through codes of conduct developed whilst awaiting formal legislation.
Petriv emphasised the particular vulnerability of children to AI-generated disinformation, sharing examples of Ukrainian refugee children’s faces being weaponised in deepfake campaigns. She argued for early AI literacy education, stating: “It’s not just a parental issue, it’s a generation’s lost potential… if we will not teach children how to understand news and understand AI, somebody else will teach them how to think.”
Her approach to children’s AI education focuses on critical thinking and algorithm understanding rather than prohibiting AI use entirely, recognising that children will inevitably encounter AI systems and must be equipped with the skills to use them responsibly.
## Educational Solutions and Audience Engagement
The discussion included significant audience participation, with contributions from Mikko Salo representing Faktabarida, Finland’s digital information literacy service, Frances from YouthDIG, and Jun Baek from Youth of Privacy. These contributions highlighted practical implementation challenges for AI literacy programmes and the need for specific guidelines and support materials for teachers.
The discussion revealed innovative approaches to AI education, including using AI chatbots themselves to teach children about AI literacy. However, questions remained about the optimal age for introducing AI concepts, with debate about whether children as young as 10 could effectively understand these concepts or whether secondary school age was more appropriate.
## Opportunities and Positive Applications
Despite significant challenges, speakers identified substantial opportunities for AI to enhance information systems when properly implemented. Caswell argued that AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information impossible for human journalists to handle comprehensively.
AI offers potential to make civic information more accessible across different literacy levels, languages, and format preferences. This democratising aspect could help bridge information gaps that currently exclude certain populations from full participation in democratic dialogue.
Labbé suggested that AI tools could assist in monitoring disinformation and deploying fact-checks at scale, provided humans remain in the loop to ensure accuracy and context. She noted that platforms have been “disengaging from that commitment” to fact-checking, making AI-assisted solutions potentially more important.
## Market-Driven Solutions and Consumer Empowerment
A significant theme involved the potential for consumer awareness and market pressure to drive improvements in AI system reliability. Labbé noted that AI companies currently prioritise new features over safety and accuracy, but argued that user pressure could shift this balance toward reliability.
The speakers discussed the need for certification and labelling systems for credible information sources and AI systems, helping users identify trustworthy content in AI-mediated environments. However, this approach requires raising public awareness about the scale of misinformation problems in current AI systems, as many users remain unaware of the frequency with which AI chatbots repeat false information.
## Key Challenges and Debates
The discussion revealed both areas of agreement and ongoing debates. While speakers agreed that AI has fundamentally transformed information creation at unprecedented scale and that education is crucial for building resilience, disagreements emerged regarding the effectiveness of current fact-checking approaches.
Caswell argued that previous anti-disinformation efforts have been largely ineffective due to scale issues and perceived bias, whilst others defended the work of fact-checkers. There was also debate about the extent to which AI can replace original source journalism, particularly in areas requiring human presence such as war journalism and personal storytelling.
## Recommendations and Future Directions
The panel concluded with concrete recommendations including developing AI literacy educational materials and programmes, creating certification and labelling systems for credible information sources, and preserving primary source journalism as the foundation for AI-based secondary journalism.
The discussion emphasised implementing the Council of Europe’s three-pillar approach comprehensively, addressing both technical and social aspects of the challenge through coordinated efforts across regulatory frameworks, educational programmes, market mechanisms, and technological solutions.
## Conclusion: Transforming AI from Weapon to Force for Good
The discussion concluded with Gríkova’s call to transform AI “from a weapon to a force for good” in the information ecosystem. This transformation requires coordinated efforts across multiple domains: regulatory frameworks that protect human rights whilst enabling innovation, educational programmes that build critical thinking skills, market mechanisms that reward truthfulness and accuracy, and technological solutions that preserve human agency in information consumption.
The speakers demonstrated that whilst AI poses unprecedented challenges to democratic dialogue, it also offers significant opportunities for improving information systems. The key lies in developing comprehensive approaches that address both technical and social dimensions of the challenge, ensuring that democratic institutions can adapt to and thrive in an AI-mediated information environment.
Session transcript
Irena Gríkova: Good afternoon everyone. Welcome to the IGF open forum on AI and disinformation, countering threats to democratic dialogue, organized by the Council of Europe. My name is Irena Gríkova, I’m head of the Democratic Institutions and Freedoms Department at the Council of Europe and I will be moderating this panel. I’d like immediately to thank my colleagues Giulia Lucchese, sitting just here, and Evangelia Vasalou, who is online, for helping to put together this panel; they will also be producing the report and helping with the moderation. In this session we will be delving into one of the most pressing challenges facing democratic societies, in fact all societies today, probably not just democratic societies, but I am personally concerned about democratic societies in the first place: the use of artificial intelligence in generating and spreading disinformation. But we will also hopefully discuss the role AI can play in actually curbing and limiting the spread of disinformation. Combating disinformation is a top priority for the Council of Europe as a human rights organization. For those of you who may not be familiar with the Council of Europe, especially those coming from other continents, the Council of Europe is what I call the older and larger brother of the European Union, an organization of 46 member states with a particular focus on human rights, democracy and the rule of law. We are called Europe’s democracy and human rights watchdog, and as such of course we are extremely concerned with the phenomenon of disinformation and all the other threats to democracy today. The Council of Europe is also always at the forefront of examining how technological development impacts our societies and the rights-based ecosystem that we have created, and this is why the Council of Europe prepared and opened for signature and ratification last year the first international treaty on AI and its impact on democracy, human rights and the rule of law. And we are now in the process of developing sector-specific policy guidelines and also supporting member states in implementing specific standards in different areas, including in the field of freedom of expression. The Council of Europe has also been at the forefront of analysing the impact of AI-generated disinformation and its role for a resilient, rights-based, pluralistic and open information ecosystem. In particular, last year we issued a guidance note on countering online mis- and disinformation, which is uploaded as a background document to this session and also shared in the chat, but there are a few copies here in the room in case you still want an analogue copy of the guidance note. This note offers practical guidance, really very specific and detailed pointers on how states and other partners in this democratic system that we are trying to protect, digital platforms, editorial media and other stakeholders, can fight disinformation in a rights-compliant manner. Now it’s a soft instrument, this is not a binding treaty, but it does include the collective wisdom of all of our member states and a large number of experts, some of them sitting around here. It is therefore really interesting and useful, and it’s organised around three pillars, the main strategies that were suggested. Mind you, at the time when it was actually developed and written, AI was not yet that prevalent and prominent, so it’s not so much about AI.
The pillars are, first, fact-checking, calling for independence, transparency and financial sustainability, supported by both states and digital platforms. In particular, platforms are urged to integrate fact-checking into their content systems. Unfortunately, we’ve seen in the past few months that the platforms have been disengaging from that commitment, and that’s an issue in itself. Platform design is the other pillar of that strategic approach to disinformation. The Guidance Note advocates for human rights by design and safety by design principles. These are key words that we’ve been hearing a lot during this IGF and also in the previous editions. These are the basic principles of an information society, the way we’d like to see it. With emphasis on human rights impact assessments and process-focused content moderation, in order to favor nuanced approaches to content moderation and content ranking, preferable to blunt takedown approaches. Perhaps today we will explore how AI can help us achieve such a nuanced content moderation approach. The third pillar of that Guidance Note is user empowerment. I particularly see that user empowerment is becoming more and more prominent as a tool, as a strategy, as a dimension of fighting harmful content, including disinformation. And that includes all kinds of initiatives at local level, community-based, and also collective. We are working on the role of AI in creating a resilient and fact-based information ecosystem. We are working particularly on applied documents that will be even more specific and more practice-oriented. And we support implementation by states through field projects in which we work directly with our member states. Just to introduce the panel, and I’ll give the floor to our first panelist in a minute, let me outline our thinking at the moment and the areas that we are really looking into in more detail as policy strategies. One is how to reinforce public service media. Public service media has always been the cornerstone of a truth-based, authentic and quality information system, and they are threatened. We need to find ways of strengthening public service media, but also to enhance the capabilities of the regulatory authorities, their mandates, their independence, to navigate the rapidly evolving digital environment. Another line of thought is how to demonetize the disinformation economy, to cut off the financial incentives that help to amplify disinformation content. And then, indeed, another topic that’s been very much on the surface here in Norway: how to enforce regulation, co-regulation, and how to strengthen regulation. There are dissenting voices against regulation. Obviously, we hear them as well, but for the Council of Europe, moving towards stronger regulation of platform design, to ensure transparency and public oversight in content moderation and curation, is a must. And finally, investing in the resilience of users. There’s a lot of debate, publications and research about the supply side of disinformation. How do we ensure that the content produced and visible there is less disinforming or less harmful? But then what about the demand side? And I’m putting this in inverted commas, because the demand is not necessarily explicit or willing, but you know, the use of information. What do we do about users, and how do we make sure that they actually make use of, demand and go for quality content even when it’s there?
So our speakers, and in the first place David Caswell, will tell us what the state of the art is now. How is AI now impacting disinformation and what can we do about it? Because AI is clearly amplifying the problem, but it is maybe also part of the solution. And we have an amazing panel here for you. Without further ado, I will introduce David, who is a product developer, consultant and researcher of computational and automated forms of journalism. And full disclosure, David is also an expert for the Council of Europe. He’s a member of our expert committee for the guidance note on the implications of generative AI on freedom of expression, which is forthcoming. So keep an eye on the Council of Europe website. You will be informed about it by the end of the year. So David, please share your perspectives about AI and disinformation challenges and hopefully some solutions. Certainly.
David Caswell: Thank you for the introduction. Yeah, before I start, I just want to just… I’ll just make it clear that I’m not an expert on misinformation. My expertise is around AI in news and journalism and kind of civic information. So I focus kind of less on the social media side of things than I do on news. But I think a lot of this applies either way. Before kind of getting into what AI is doing in the space of disinformation, I want to just talk a little bit about what was happening before AI came along. So if you can imagine pre-ChatGPT, or even most of our ecosystem now. And the big thing that changed that kind of made this last 10 or 15 years of disinformation and misinformation activity, the thing that changed that made that necessary was this. We basically changed our information ecosystem from a one-to-many, or more accurately, a few-to-many shape, to a many-to-many shape. So we went from a situation where only a few entities, mostly organizations, could speak to a large audience, to this situation where anybody could speak to a large audience. And this was the internet, and then social media, of course. But this was a change in the distribution of content, the distribution of information. And this is the technical change that caused this cascade of activity over the last 15 years, including around disinformation and misinformation. I think, again, before AI, or generally before AI, it’s worth sort of looking at how the response to that era of kind of disinformation, how that went. And I would suggest that it hasn’t gone well. I would suggest that there aren’t many people on any side of any of the arguments that would suggest that it’s been very successful and kind of worked well. Just to kind of go through some of the things that I think… people kind of think about here or perceive. One is that it was generally ineffective. I think there’s an issue around scale here, that the scale of communication on the internet, on social media is such that things like fact checking and all that provide just a tiny, tiny drop in this vast ocean of information. I think there’s a perception that there’s a certain alarmism around it. There’s a lot of research that’s coming out on this with different ways of looking at it, but essentially the net of it is that the concern around these things seems to be restricted to a relatively small portion of the population. Most people think it’s kind of less of an issue. I think there’s a sense or a perception that there’s a certain self-interest around misinformation and disinformation activities, and that has to do with this overlap with journalism. And so there’s a sense that as journalism kind of was diminished and its power reduced in the internet era, that a lot of that activity kind of went over into the misinformation, disinformation space. On the political side, I think we’re pretty aware that there’s, on the left-right continuum, there’s a sense that the whole kind of disinformation, misinformation space has a left-coding bias to it. This is certainly what Mark Zuckerberg used as his justification for turning off the fact-checking activity at META. I think there’s another kind of politicization that’s going on here, which is more this elites versus populism politicization. That’s easy for me to say. 
The thing that’s happening here, and there’s a really good book by Hugo Mercier about this, is that in the elites versus populism dimension, misinformation and disinformation is used as a reason or an excuse or a narrative as to why populism is happening. It’s like misinformation and disinformation is causing this by kind of fooling half the population. So I think that’s been an issue. I think there’s an issue around most of this being anecdotal, and not just anecdotal case by case, but anecdotal in terms of the artifacts, focusing on individual artifacts, individual images, individual facts, individual documents, these kinds of things, not on systems, not on sort of the processes and the systems that do this. These are just kind of my views, but I think it’s a general perception in a lot of parts of society that this attempt to put some order on the information environment has not been successful. So let’s turn to AI here, and I think in this AI era, the thing that’s changed now, with ChatGPT, it’s been building for a long time before that, but roughly since ChatGPT, the thing that’s changed now is this transition from an artisanal form of creation of news and journalism to an automated form of news and journalism. And this is quite profound, because news and journalism and the creation of knowledge generally was one of the last kind of handmade or artisanal activities in society. And with AI we now have tools that can at least partially automate that, which is a new thing for the information ecosystem. And that includes the gathering of news-like information, the processing of it, and especially the creation of experiences of it for consumption. And that’s, I think, the fundamental new thing here. In terms of what are these risks that AI poses that are new? There’s a lot of them. I think an obvious one is the risk of accelerating this fragmentation of shared narratives. This has obviously been a building issue since the internet came along, basically, but it’s important to keep in mind that this is not just a news or journalism kind of concern. This is happening in all knowledge-producing activities. It’s happening in scientific publishing and activity publishing. It’s happening in government intelligence services. It’s happening in enterprise knowledge management. The fundamental mechanisms behind this are really kind of broad. I think there’s a second risk that’s not perceived very widely, about the ability with AI to extend disinformation beyond individual artifacts, like deepfakes or individual facts, to entire narratives that extend over many, many different documents and images and videos and media artifacts, and that extend over long periods of time, days or weeks or months or even years. An example of this in the manual space is the Hasbara program that the Israeli government has been running for many years, about 2,000 people that basically operate on influencing narratives across the world, and with AI we may be entering into a world where that can be automated and also be made accessible to many, many more people and agents and governments and other actors. I think there’s a major risk that’s developing around automated and personalized persuasion at scale, so you could think of this as radicalization at scale or grooming at scale, to use the word that the Brits like to use.
There’s been a couple of very, very interesting papers that have come out on this recently. One of them is actually not a paper because they had an ethical issue in the data collection and so it wasn’t an official paper, but generally it seems that these AI chatbots are already substantially more persuasive than humans. And so that’s their effectiveness. And then you kind of combine that with the fact that you can operate these individually in a personalized way across the whole population at some level. I think that’s a new and significant risk. And then I think there’s another deep risk that really is underappreciated here, which is that as we start to use these models as sort of core intelligence for our societies, that there are biases in these models. And we talk a lot about biases in AI models now, and that’s very true. There’s biases in the training data. There’s attempts to resolve this in things like system prompts or in the reinforcement learning from human feedback that kind of helps to train these models. But even more fundamental than that, we’re starting to see intentions to place deep biases
Irena Gríkova: into the model. So one example, which is this tweet here that showed up recently, is Elon Musk. He basically built this large language model, Grok. Its intention was to be what he called a maximally truth-seeking large language model. And it was a little too truth-seeking for his taste. So he’s been getting into this Twitter argument or X argument with Grok over the last little while. You can sort of see this interchange happening where you’ll ask a question and he doesn’t like the answer and so on. And so he has just recently announced that they’re going to use Grok to basically rebuild the archive on which they train the next version of Grok. So they’re going to write a new history, basically, of humanity, and then use that to train Grok. That’s an example of building a large language model that’s broadly used with the biases deep in the training weights. That’s a very significant risk. Oops. So I think, if you kind of go all the way down to maybe the foundations or the first principles here, there’s a new deep need or a new core requirement in the information ecosystem, and this is articulated very, very well in this recent Yuval Noah Harari book. And what he says in this book, which I think is very well evidenced and argued there, is that we’ve depended for 400 years on these mechanisms, like the scientific method or like journalism or like government intelligence or like the courts. And these mechanisms have two characteristics: they’re truth-seeking and they’re also self-correcting. So they have internal biases that move them towards the truth. I think there’s actually a third requirement in there that maybe wasn’t as apparent until these large language models got going, and this is just a kind of personal interpretation. I think that our methodology or our mechanisms for things like journalism and civic information also need to be deterministic rather than probabilistic. In other words, they need to be specifically referenceable and explainable and verifiable and persistent in an archive, all of the things that large language models are not. Just to turn to the opportunities here, because the news is not all bad. The scale of opportunities, I think, is of the same order as the scale of the risks. There are some real opportunities. I think one opportunity is this possibility that we might have news or journalism or civic information or societally beneficial information that is systematic instead of selective. In other words, the scarcity issues around collecting and processing and presenting or communicating this information, those scarcity issues go away and we have a level of systematic transparency that is vastly greater than it is today. I think that’s a very real possibility, scaling the amount of civic information.
And I think those two things together, the scale and the accessibility, mean that we do really have, I think, this possibility, if we were to build towards that, of basically having relevant, accessible, societally beneficial information available to everybody at a much, much deeper degree of relevance and personal relevance than we’ve had before. And then also, finally, I think one of the challenges of information now is that it feels very overwhelming. We have a news avoidance problem at scale. We all have a personal sense of being overwhelmed by information. I think AI helps us address that. I think the thing that we’re primarily being overwhelmed by is units of content, not information. And we have this new possibility here with AI to not just have dramatically more information, but also to feel more in control and to have more mastery of that information. So just to wrap up here, I’d like to suggest an ideal for what we might aim for as an opportunity in this AI-mediated information ecosystem that’s forming.
David Caswell: And I think it’s worth looking at this in terms of a continuum. And the continuum goes from, say, medieval ignorance, way down at one end, to godlike omnipotence or maybe a Star Trek universe level of awareness and omnipotence about your environment. And if we look at that kind of continuum as an ideal, we’ve made a lot of progress along that line. We’ve gone from a situation before the printing press and before literacy where really people didn’t know much about their world at all, apart from their immediate experience. And through these inventions and these cultural adaptations to those inventions, we’re at a point now where the amount of information we know about our world almost instantly, at our fingertips, is just staggering compared to what would have been available to an average citizen in, say, 1425. But I think there’s also no reason to think that we’ve stopped, that we’re at the optimum place on that continuum, that we’re at the place where the democratic dialogue is as good as it could ever be, or was recently as good as it could ever be. I think we’ve got a ways to go. I think the AI that exists now, diffused into our information ecosystem with the right governance and the right orientation and so on, that could move us a considerable way up that continuum. If we get something like AGI, maybe in five or 10 years, I think that could move us even further. And then at some future hypothetical point, maybe some kind of superintelligence moves us even further again. So I think there’s a lot of technical challenges here, governance challenges, safety and security challenges, of course. But I think as an ideal, trying to move to the right of that continuum is a good place to be. I’ll just leave that there.
Irena Gríkova: Great, thanks a lot, David. That was a lot of food for thought. Personally, I was quite struck by how AI can now create a new plausible reality, just by the sheer scope and sophistication and scale of it. So how do you fact-check your way out of a completely new alternative reality? Obviously, you can’t. And I really liked your idea that we are spiraling and going up and up and up much faster than we can actually conceptualize it, from information to content, and then hopefully to information again. But let’s see if we have some more practical tools that we can use to do that. Our next speaker is Chine Labbé. She’s Senior Vice President and Managing Editor for Europe and Canada at NewsGuard. NewsGuard is a company that tackles disinformation online, and Chine will explain what they do, how they do it, and what the results are.
Chine Labbé: Hi, thank you very much for having me. So I’ll start right away by explaining how AI is and has supercharged disinformation campaigns to this day. The first thing that we’re seeing is that malign actors are, and you all know that, increasingly using text, image, audio, and video generators to create content, deepfakes, images, et cetera. Just to give you one piece of data to illustrate that, take the Russia-Ukraine conflict. During the first year of the war, out of about 100 false claims that we debunked at NewsGuard, one was a deepfake of Zelensky, very pixelated, very bad quality. Now fast forward to the third year of the war, we had 16 deepfakes, super sophisticated, super believable, that we debunked. Now, that’s still only 11% of the false claims that we debunked that year, but they have increased quite astonishingly. And of course, the more recent conflict, Israel-Iran, has also seen lots of deepfakes being shared and images circulating online. This is just one example, a video that was shared as part of a Russian influence campaign called Storm 1516. It shows a person whose identity was weaponized, a real person, but modified with AI, to say that he had been sexually assaulted by Brigitte Macron, the wife of Emmanuel Macron, France’s president, when he was a student. He actually was a student of Brigitte Macron, but never said that and never was sexually assaulted. The video is very believable, all the more so because the person exists. If you Google him, you can see that he was a student of Brigitte. So all that, of course, has increased a lot. Now the second thing that we’ve seen in terms of how AI is supercharging disinformation is the use of AI tools to imitate credible local news sites. So basically creating entire networks of websites that look just like a local news site, that share maybe some reliable information and then some false claims. And that’s entirely generated with AI, maintained with AI, with no human supervision. The photo that I put here is of a quite infamous Kremlin propagandist. He’s an American fellow, a former deputy Florida sheriff, who is now exiled in Moscow. His name is John Mark Dugan, and he alone is behind more than 273 websites that he’s created using AI, with an AI server, to imitate first local news sites in the U.S., with names that really sounded like local news sites, and then in Germany ahead of important elections. So these AI content farms have grown exponentially over the past few years. We started monitoring them in 2023. In May 2023, we had found 49 of them. Fast forward to today, we have counted 1,271, but that’s probably the tip of the iceberg: only what we’re seeing and what we can actually confirm as being AI-generated. Why? Because it’s really cheap to create an AI-generated news site. We did the test: a colleague of mine, and that’s an essay that he wrote about the experience in the Wall Street Journal, paid $105 to a web developer based in Pakistan, and in just two days he had his self-running propaganda site. So this is a very simple propaganda machine, password protected, don’t be alarmed, we didn’t want to put anything further on the web, but it just took two days and $105. So now all of these sites, these over 1,000 sites that we found, AI content farms, they’re not all publishing false claims or only running false information, but they’re all, I would argue, a risk to democratic dialogue. Why?
Because if you have no human supervision, which is the case for all these websites, you have the risk of hallucinations, factual errors and misrepresentations of information. Now, that’s just one example. A recent one: the BBC conducted an experiment in December 2024. They asked 100 questions to four chatbots based on their information, so they gave the chatbots access to their website, and then reviewed the answers, some of which they found to be false. In 10% of the cases, there were significant problems with the responses. In 19% of the cases, the chatbot introduced factual errors, and in 13% of the cases, there were quotes that were never in the original articles, or that were modified by the chatbots. And there was just a recent example of such a problem in the United States, where real, existing authors were listed with nonexistent books next to their names, along with some existing ones, which is all the more troubling. Now, just imagine small errors like that, slowly but surely polluting the news that we consume, and I would argue that this is a very concrete threat to democratic dialogue. So what we’re seeing, in particular in the case of disinformation, is that AI chatbots are often repeating disinformation narratives as fact. It’s this vicious circle where basically chatbots fail to recognize the fake sites that AI tools have contributed to creating, and they will cite them and present authoritatively information that’s actually false. So you have information created by AI, generated, then repeated through those websites and validated by the AI chatbots: a really vicious circle of disinformation. Now in early 2023, the idea that AI chatbots could be misinformation super-spreaders was hypothetical. We looked at it because it seemed possible, right? But today it’s a reality. Chatbots repeatedly fail to recognize false claims, and yet users are turning more and more to them for fact-checking, to ask them questions about the news. We saw it recently during the LA protests against deportations. This is just a very striking example. There was a photo of a pile of bricks circulating online, presented as proof, as evidence, that the protests were staged, organized, orchestrated by someone who was putting bricks there to encourage the movement, and that they were not organic. The photo was actually from New Jersey, so it was not from California. Then users online turned to Grok and asked Grok to verify the claim. And here you have an example where, even when a user was pushing back, saying, no, Grok, you’re wrong, Grok would repeat the falsehood and insist that it is true. And in recent days, we saw the same thing with a false claim stating that China had sent military cargo planes to Iran, which was based on misinterpreted flight-tracking data. And we’re doing that every month now. We’re auditing the main chatbots to see how well they resist repeating false claims, and what we’re seeing month to month is that about 26% of the time they repeat false claims authoritatively. This is just one example, asking Mistral about a false claim pushed by Russian disinformation sites, and Mistral is not only saying that it is true, but is also citing known disinformation websites as its sources. Now the problem is not just an English-language problem. It’s a problem in every language.
Before the AI Action Summit in Paris earlier this year, we did a test in seven languages and we proved that it is a problem in all languages, and especially prevalent in languages where there’s less diversity of the press, so where the language is dominated by state-sponsored narratives. Now, if I just told you that we had put a drug on the open market where 26 of the pills out of a hundred are poisoned, would you find that okay? That’s the question that we have to ask ourselves when talking about information and AI. And the last thing I want to raise here is that this vulnerability, which we are still failing to put guardrails against, is well identified by malign actors. They know that by pushing Russian narratives, for example in the case of Russian actors, they can change AI and influence the results given by chatbots. This is a process that’s been well identified and used by a network called the Pravda Network, a network of about 140 sites in more than 40 languages. It’s basically a laundering machine for Kremlin propaganda, publishing more than 3 million pieces of content a year, and with no audience: the websites have very few followers, very little traffic. But their goal is just to saturate web results so that chatbots will use their content. We did an audit and we found that 33% of the time the chatbots would repeat the disinformation from that network. We did the test again in May, two months later, and it had gone down to 20%. We don’t know what mitigation measures were put in place, but it’s still pretty much the same problem. And I’ll just end here, because I’m running out of time, to conclude that, basically, with generative AI, disinformers can now spend less for more impact. So as David said, it’s just the scale that is changing dramatically, the automation. And now they can also influence the information that is given through AI chatbots, through this process of LLM grooming. I’ll just end on a positive note: yes, AI can help us fight disinformation. We’re using AI for monitoring, for deploying fact checks. We can even use generative AI, as David said, to create, for example, new formats of presenting content, as long as the human is in the loop. But it’s hard to foresee a world in which we’ll be able to label all false, synthetic disinformation. So that’s why I think a very important factor today is also to label and certify credible information, and allow users in that way to identify credible information. That’s what we do at NewsGuard. There’s also the Journalism Trust Initiative. Trust My Content is another example, and I think that’s a very positive way forward.
Irena Gríkova: Thank you, Chine. I think we’ve entered the era of mistrust. We seemingly cannot trust anyone anymore, not even news sites, which may be fake. And I think this is dangerous, not just for democratic dialogue, but for democracy itself. Because when there is such widespread distrust within society, between individuals, democratic institutions collapse, because democracy is ultimately built on trust. Maybe we need public service AI chatbots that are trained only on reliable data. Because unfortunately, even efforts to try and defend legitimate media, editorial media… Our next speaker is Maria Nordström. Maria holds a PhD and is Head of Section in the Digital Government Division at the Ministry of Finance in Sweden. Maria works on national AI policy at the Government Offices of Sweden, as well as on international AI policy. In particular, she participated in the negotiations of the EU AI Act and the Council of Europe’s Framework Convention on Artificial Intelligence. Please, Maria, the floor is yours.
Maria Nordström: Thank you, and thank you for having me. So I’ve had the pleasure and the privilege to currently be on the Bureau of the Committee on AI at the Council of Europe, and the main task of the committee was to negotiate the Framework Convention on AI, already mentioned, which was adopted and opened for signature last year. And it’s the first, as mentioned, legally binding global treaty on AI. It is global because it’s open for signature not just to the members of the Council of Europe, but it can be signed by other states as well, and since its opening for signature, it has also been signed by Japan, Switzerland, Ukraine, Montenegro and Canada. So it’s, in its essence, a global treaty on AI, human rights, democracy and the rule of law, and it formulates fundamental principles and rules which safeguard human rights, democracy and the rule of law throughout the AI life cycle, while at the same time being conducive to progress and technological innovation. So, as we’ve heard and as I think we know, AI has the potential to enhance democratic values and improve the integrity of information, but at the same time, valid concerns have been raised. The integrity of democracy and its processes rests on two fundamental assumptions: that individuals possess both agency, so the capacity to form an opinion and act on it, as well as influence, so the capacity to affect decisions made on their behalf. AI has the potential both to strengthen and to undermine these capacities. David mentioned AI-driven persuasion at scale, which is an excellent example of how these capacities can be very efficiently undermined. So it’s not very surprising that one of the core obligations under the Convention, the Council of Europe’s AI Convention, is for parties to adopt or uphold measures to protect individuals’ ability to freely form opinions. These measures can include measures to protect against malicious foreign interference, as well as efforts to counter the spread of disinformation. And as we’ve heard, AI can serve both as a tool for efficiently spreading disinformation, thereby fragmenting the public sphere, and as a tool to combat disinformation. But this has to be done with some thought put into it. So it’s essential to implement appropriate safeguards and to ensure that AI does not negatively impact democratic processes. Currently the Committee on AI at the Council of Europe is developing a risk and impact assessment methodology, which can be used by developers and other stakeholders to guide responsible AI development. It’s a multi-stakeholder process, with civil society and the technical community being involved, and there’s still time to contribute to this process if you wish. It’s a great example of how we can go from a convention to developing a tool which hopefully will have practical value and can be used by practitioners and policymakers to assess the risks AI systems pose to democratic processes and to democratic dialogue. Another safeguard that we in Sweden believe is very important is AI literacy. AI literacy, both understanding what AI is and understanding how it can affect disinformation, is crucial in addressing the challenges posed by the rapid advancement of AI technologies. So the Swedish government has tasked the Swedish Agency for Media with creating educational materials to enhance the public’s understanding of AI, particularly in relation to disinformation and misinformation.
So they will develop a web-based educational tool which will be released later this year. However, one of the things that we are thinking about from a policymaker’s perspective, and the key challenge, is to find the right balance between providing sufficient information and not further eroding public trust in the information ecosystem. So the important question here is: to what extent is it beneficial for society when all information is questioned? What does it do with democracy and what does it do with our agency when we can no longer trust the information that we see, that we read, that we hear? So finding that right balance, informing about the risks while not eroding the public’s trust, is, I think, a key challenge, and something I’d love to talk more about. Thank you.
Irena Gríkova: Thank you very much, Maria. Indeed, the AI treaty of the Council of Europe is really important. It does need to be ratified though, so I encourage everyone here who has any say or any way of doing advocacy for the signature and ratification of this treaty: do not hesitate to come back to us, to Maria, to our colleagues, to myself, to find out more about it. Our final speaker is Olha Petriv. Olha is an artificial intelligence lawyer at the Centre for Democracy and the Rule of Law in Ukraine. She’s an expert on artificial intelligence and played an active role in the discussion and amendment negotiations of the Framework Convention on Artificial Intelligence of the Council of Europe. Olha, coming from Ukraine, you are clearly observing at first hand the challenges that artificial intelligence poses for society amidst the war of aggression. And you also have some ideas about tools that can actually help curb this phenomenon. Can you tell us about them?
Olha Petriv: Yes, thank you. And I want to start by saying that in Ukraine, disinformation is not just a problem, it’s something that we face and solve every day. And we have already taken some steps to fight this. Our Ministry of Digital Transformation in Ukraine has already created an AI strategy and also a roadmap on AI with a bottom-up approach, which is helping us right now to take our first steps with companies and with our civil society to fight disinformation during the war. It means that we don’t wait for the law to be passed, and we have two parts: the first part consists of recommendations for society, for business, for developers, and the other part is the HUDERIA methodology, self-regulation and other main steps that help us not to wait for the law. I want to share more about self-regulation as a first step before this law. It’s a process where companies come together to create their own code of conduct and to address the problem of companies not using AI in an ethical way. That is why six months ago our companies, 14 Ukrainian companies, Grammarly, SoftServe and other big companies that were created in Ukraine and work worldwide, created a code of conduct that consists of eight main principles that they implement in their business. Also, after this they created a Memorandum of Commitment, according to which they created a self-regulation body, and the members of this self-regulation body, the businesses that signed the code of conduct, will report once a year about implementing these guidelines. As a result, Ukrainian society and other countries can see how Ukrainian business is working with ethical AI; we can just check this and understand how it’s implemented. So it was, and it is right now, our first step, because we don’t want to wait for when this law will come to Ukraine, now or later. We know that if we implement ethical AI in our AI systems right now, for example principles connected with transparency, with a risk-oriented approach, and other principles that we have right now in the code of conduct, together with other companies that already want to be part of this process, we will show the world and show inside our country that we are innovative in using AI ethically. And during this process we also keep fighting against disinformation, which we are doing all the time because of the many campaigns that we face because of the war. Another important side of all of this that I want to discuss today is the part about children, AI and disinformation, because children are using AI a lot too, and disinformation is something that they can spread and that they can also be victims of. We already had this situation in Ukraine, in a campaign named Matryoshka. During this campaign, the face of a Ukrainian refugee girl was used to spread the claim that she doesn’t like her schools in the U.S., and as a result to create distrust towards Ukrainian refugees in other countries. So this is one way disinformation is used against Ukrainian children: they become part of this disinformation process without being ready for it, simply through the use of their faces and the creation of deepfakes by Russia.
Also, disinformation bullying is now spreading more and more in schools, not only in politics. What can we do about this? We are working a lot on using AI in the educational process with children, and UNESCO, for example, has created many programs connected with disinformation in Ukraine, through which we teach children how to strengthen their critical thinking, because especially when you live in a country where every day you face a huge amount of fake news, you have to make your critical thinking better. We are also working on the principle that we should not ban AI for children; we have to give them the knowledge to use AI better and to understand how AI works. AI literacy is now our main strategic response to AI disinformation, and we have to make sure children understand how to resist the different fakes they encounter. For example, at my start-up we are helping children develop critical thinking not through lectures or moral lessons, but through an AI companion that is easy for children to understand, and as a result it works better for education, because we know that if we do not teach children how to understand news and understand AI, somebody else will teach them how to think. And it's not just a parental issue, it's a generation's lost potential. Thank you.
Irena Gríkova: Thank you very much, Olha. Well, that was plenty of information and insights. Now we will have about 15 minutes for a discussion with you, the audience, both on-site and online. If you would like to ask a question or contribute your thoughts, please use the two microphones on the sides of the room, introduce yourself, and go ahead; my colleague Julia will be checking the Zoom side for any speakers online. Yes, please.
Mikko Salo: Thank you. My name is Mikko Salo. I'm representing Faktabaari, a digital information literacy service in Finland, and we've been working for some 12 years in the crazy world of information disorders. As a small actor, we try to focus where it really matters, and I very much subscribe to the emphasis on education. We may be spoiled in Finland, having been able to trust and invest in teachers for a long time, but that is in line with what you said. And in Ukraine you know what you are talking about, because you are facing it directly. But something I'd like you to specify: at what stage, and how, do you teach about AI? Because in the big picture, and this is what we learned when lecturing in the US, people tend to take AI for granted. What is the right age? You have to be able to think for yourself first in order to use AI as a supportive tool, as you describe, and I think that is a very culturally bound thing. Then a small note to Mr. Caswell: you gave a very good presentation, but we also work a lot with fact-checkers, and it rang a little oddly to me to cite Meta's Zuckerberg on left-leaning fact-checking, because what happened was that Zuckerberg completely revised his opinion on fact-checking because of the political tides in the US. I think it's for everybody to judge what the fact-checkers have produced; it's definitely not enough, but the fact-checkers have written a very open letter explaining their case. So that needs to be corrected for the record. To the Swedish colleagues as well: here is what we did in Finland. First, as a small NGO, we pushed the government to produce guidelines for teachers, because without such guidelines the teachers are lost; it's such a big thing. Now that they have the guidelines, we have made a guide on AI for teachers, which has actually been translated into English within the OneEU project. It is culture specific, but teachers still need guidance on this. I think that's the shortest route and the lowest-hanging fruit for multiplying towards the next generations, because becoming AI native, so to say, without being AI literate is very scary. Thank you.
Irena Gríkova: Thank you for your contribution. Let's take the other two questions and then return to the panel.
Audience: Hi, my name is Frances. I'm from YouthDIG, the European Youth IGF. I had a question mainly for David Caswell about people's preferences in journalism, because my intuition, at least, is that there needs to be an original source when it comes to journalism. So when we say that the gathering of news and information can be done just about as well by AI, is that necessarily true when there needs to be an original source? Think about war journalism, or people who go into conflicts or humanitarian crises, or even the fact that many of us like to consume news built around personal anecdotes and personal stories. Can AI really replace that? The second thing is that I think people also like to read the same content as other people, because it unifies us in a way, which is why you have echo chambers: I read The Economist and you read The Daily Mail, and so we're different in that respect. Why would that necessarily disappear? The whole idea of a screen for one, or media for one, is not necessarily that attractive, because it means we are all consumers of different news and we can't relate to each other. The last characteristic, which I think is also possibly true, but please correct me if I'm wrong, is that if online systems and AI are generating all of our news, that's super fast, maybe too fast for us to keep up with. So surely you have a comparative advantage if you are a news site that is the original source and you publish only a few articles. I would just like to know whether you agree or disagree with those characterizations. Thank you.
Irena Gríkova: Thank you. Yes, please.
Audience: Hello, my name is Jun Baek from Youth of Privacy. We are a youth-led privacy and cybersecurity education organization. One of the lessons we have learned over the last two years is that education is a very hard battle to win because of the scale of the problem. I was wondering whether there might be other ways of incentivizing AI service providers to be more grounded in truth and reality, and what some of the ways we could try to incentivize them to do so might be. Thank you.
Irena Gríkova: Thank you so much. Actually, I had a very similar question for Olha: how do you actually motivate AI companies to comply with self-regulation? I'll also abuse my role as moderator to ask David a question, because you were saying that we need news that is systematic rather than selective, and I don't actually understand what you mean by that. So let's address all of these questions now. Who wants to start?
Olha Petriv: Okay. I want to say that the answer to this question is more complex, because when children are using AI, we of course want to give them a safe environment. What can we do as people who are connected to the policy process and can work with the ministry? We should ask more and more of the companies whose services children use to provide those services in an ethical way. And on the other side, of course, we have to work with parents and with teachers. That is why I said it is not just a parental issue that we have this gap now that AI is already here. It means we should build on the work UNESCO did when it wrote in 2023 that AI skills are especially important for children, beginning with algorithm understanding and the other important things children have to understand at school. As for age, it is also important that people are already using AI with their children at a very young age, and it is important to help children understand that AI can be a tool that helps them find answers to their questions and to ask more and more questions. Even if we look at AI Leap in Estonia, which will be part of their educational program, or at Mriya in Ukraine, where we also integrate AI into the educational system and teachers use it, we see that, for example, in AI-lib there are lessons with parts devoted to critical thinking, and that is the main part of it. And yes, that is my answer.
Irena Gríkova: David?
David Caswell: Before I answer the questions, I'd just like to make a comment: I really think this idea of an AI chatbot to teach children AI literacy is an absolutely brilliant idea, and I'm going to have to noodle on that one. I think it's really, really smart. So I'll just go through these questions, and please correct me if I've got anything wrong here. On the left-coding of disinformation and misinformation concerns: yes, the Mark Zuckerberg situation, him specifically, is one thing, but I think the reason he's up there making these decisions is that essentially half the electorate in the US feels broadly the same way. The point I was trying to make, without much time to do it, is that I think it's a tragedy that we have a partisan slant on this idea of an untruthful information environment. If there's ever a thing we should all agree on, regardless of which side of the political spectrum we're on, it should be a basis in fact and accuracy. So that was the point I was trying to make there. On fact-checking specifically, there is also a massive scale issue: no matter how much you spend on fact-checking, you just can't keep up with the volume of facts that need to be checked. On the original-sources question, AI can be applied only to some kinds of journalism, essentially journalism where the raw source material is digitally accessible; it has to be accessible to the AI. But I'd make a few comments there. One is that this is already a significant, and maybe even a majority, chunk of journalism. If you watch what journalists actually do in newsrooms these days, a lot of it is sitting at a computer, and less and less is out in the field. People always bring up war journalism, and that is very important; AI is not going to do that in my lifetime. But it is a very, very small part of journalism. Also, the ability of these systems to do other kinds of journalism, for example interviews, with AI interviewing people, or email exchanges, is a very real thing as well, and it's already starting. On the use of publications as part of identity, this idea that I read The Economist and I'm with people who read The Economist, and somebody else reads The Telegraph, and so on: I think that points to one of the likely strategies these publications will take in the face of AI, which is to move up the value chain and focus more on identity. That will probably be quite successful. The problem, though, is that it will be successful for the Economists and Telegraphs of the world, because these are subscription-based, very narrowly focused, identity-based publications. If you take news publications and move them into a sort of high-value luxury-good category, you leave out a lot of the population, so I think that's a challenge there. On the speed question, I don't know. There are different ways to adapt to the news cycle, or to whatever a news cycle even is when it's always on. I think one of the things AI can do is basically create the experience that you want.
So if you want a daily update, in whatever style or form of interaction or experience you want, that's one of the advantages there. On the other question, and I don't know if there was another one for me there, but on your question about systematic versus selective coverage: the opportunity is that if you look at a domain of knowledge, say the auto industry in Bavaria, some specific area of news, right now there are a lot of journalists covering it, but they don't cover everything that's happening because they can't. There just aren't enough of them and they don't have enough time, so they select; these are called newsworthiness decisions. They find stories, specific things in that industry and that geography, to report about. Whereas with AI, for some significant portions of that news domain, every single PDF can be read and analyzed, and every single social media post by every single executive can be analyzed. Automated systems can cover all of it systematically, again only the material that is digitally accessible, whereas journalists have to pick and choose because they are in a world of scarce resources.
Irena Gríkova: Yes.
Chine Labbé: Maybe, yes, just to address the question about AI versus on-the-ground journalism: I think that's an opportunity for on-the-ground journalism. Having worked in newsrooms most of my life before joining NewsGuard, and now having a hybrid role, I know we often just didn't have the time or money to go on the ground and report. On the ground doesn't necessarily mean going to a war zone; it can mean just going across the street and interviewing people. I think AI will allow journalism to go back to its roots and do more on-the-ground journalism. It's an opportunity. Then the one question I wanted to address is how to incentivize AI providers to base their systems more on the truth. I think the first step here is to raise awareness, because a lot of people don't realize the scale of the issue. The more you have tests like the BBC's, and the more you have audits like ours that show repeatedly that chatbots authoritatively share false claims and can't help you with the facts, the more people will ask the platforms to do better. And at the end of the day, it's a business: if users ask for more truth, then the platforms will have to put in the guardrails. The problem today is that AI chatbots are not meant to provide you with accurate information; that's not what they're designed to do, but that's what people are more and more using them for. So as people increase their use towards that end, we have to raise awareness among consumers so that they ask for more reliability. The problem that we're seeing in our audits is that chatbots tend to do worse in the months when they release new features. What does that mean? It means that the industry is focusing on efficiency, on new sexy features, but not on safety. And so when you have new features, safety usually takes the back seat when it comes to news. So I think it has to come from the users asking for it.
David Caswell: Sorry, if I could build on that for one second. On hallucinations specifically: there are other kinds of errors in AI output besides hallucinations, but on hallucinations there is a website, an AI leaderboard, that measures the hallucination rate of different models. Although you do see setbacks in hallucination rates, like the o3 model that was just released by OpenAI, you can see the march over time of these models going from hallucination rates around the 15% range down to, I think, about 0.7% for the top models on the leaderboard now. That's an indication of progress. I think it's a lot less spectacular than it looks, and there are other sources of error in AI output beyond hallucinations; omission is a big one. But we are in a transition phase with these tools, and they will get better.
Irena Gríkova: And I just wanted to add that I know we have only two minutes, so I will give the speakers the last word for their conclusions, if you want to say something.
Maria Nordstrom: Yes, these can be my concluding words, I guess, because I fully agree that these are consumer products and we can empower the consumers. But at the same time, we are limited to hard law regulation and soft law measures. When it comes to hard law, we have the AI Act in the EU, for example, but it's hard to require the truth by regulation; I think that's quite difficult to achieve through that particular measure. So when it comes to incentives, I think it's very true that we can empower the consumers, and probably help lower the bar for consumers to understand and compare these products, because ultimately there are various AI systems out there that you can use, and we can help consumers make a conscious choice about which systems they are using.
Irena Gríkova: Exactly.
Chine Labbé: The one thing I'd like to conclude with is that malign actors are betting on two things. They're betting, one, that we will use AI chatbots more and more for information. According to the latest digital news report, 7% of people in the world say they use AI every week for the news, but it's 15% if you take just the under-25s, and it's going to grow spectacularly. And they're betting, two, that we're not going to put in the guardrails. So I think we have to focus on that: realise that, yes, we are going to use AI more and more for news, and put in the guardrails. Thank you.
Irena Gríkova: Less than one minute left, so if you have three seconds, a conclusion or…
David Caswell: Yes, I'd just like to emphasise that it's probably worth paying attention to the difficulties of the last 10 years of misinformation and disinformation response, and not necessarily carrying those approaches into the AI era. I think in particular that means a more systems-level or strategic focus. And a necessity for that is an ideal: a view of what we want our information ecosystem to look like. We have to have that conversation first, because then we know what we are steering towards.
Irena Gríkova: Okay.
Olha Petriv: And I want to conclude that it is important to remember that the target audience of disinformation and propaganda is not only the people who are voters right now, but also the people who will vote in the years to come. That is important to keep in mind when we think about the disinformation campaigns we face right now.
Irena Gríkova: Thank you very much. A couple of actionable highlights, or food for thought, because we need to conclude on a note of action. First of all, a very important highlight for me: we need to preserve primary source journalism. This is something we at the Council of Europe have actually started talking about, to create a solid basis for AI-based secondary journalism, because without it everything will turn into an entirely virtual world. We need AI and information literacy, including using chatbots to teach children AI literacy; it's a good idea, but there are many other initiatives out there. Perhaps we also need certification for AI bots, because it's true that organizations like NewsGuard monitor and alert, but who knows how many people are actually aware of that. So maybe we need some kind of points or star system, a user ranking, so that we know how trustworthy a particular bot is, or even public service bots for trustworthy information. But there can be more ideas. AI is in its infancy, and our understanding of it is even more so. Let's hope that together we will be able to turn AI from a weapon into a force for good. Thank you very much to the panelists, the technicians, the participants and everyone else. Thank you very much.
David Caswell
Speech speed
166 words per minute
Speech length
2739 words
Speech time
988 seconds
AI has transformed information creation from artisanal to automated processes, fundamentally changing the information ecosystem
Explanation
Caswell argues that AI represents a transition from handmade, artisanal news creation to automated processes. This is profound because news and journalism were among the last handmade activities in society, and AI can now partially automate gathering, processing, and creating news experiences.
Evidence
He notes that news and journalism creation was ‘one of the last kind of handmade or artisanal activities in society’ and that AI can now automate ‘the gathering of news-like information, the processing of it, and especially the creation of experiences of it for consumption’
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Human rights | Sociocultural
Agreed with
– Chine Labbé
Agreed on
AI has fundamentally transformed information creation and distribution at unprecedented scale
Previous anti-disinformation efforts have been largely ineffective due to scale issues, perceived bias, and focus on individual artifacts rather than systems
Explanation
Caswell contends that the response to disinformation over the past 10-15 years has not been successful. He identifies multiple problems including ineffectiveness due to scale, alarmism, self-interest, political bias, and focusing on individual cases rather than systematic approaches.
Evidence
He notes that fact-checking provides ‘just a tiny, tiny drop in this vast ocean of information’ and mentions Mark Zuckerberg’s justification for turning off fact-checking at Meta due to perceived left-coding bias
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Human rights | Sociocultural | Legal and regulatory
Disagreed with
– Mikko Salo
Disagreed on
Effectiveness and bias of fact-checking approaches
AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information
Explanation
Caswell argues that AI offers the opportunity to move from selective news coverage to systematic coverage. Unlike human journalists who must choose specific stories due to resource constraints, AI can analyze all digitally accessible information in a given domain comprehensively.
Evidence
He provides the example of covering ‘the auto industry in Bavaria’ where AI could read ‘every single PDF’ and analyze ‘every single social media post by every single executive’ systematically, whereas journalists must make ‘newsworthiness decisions’ due to scarce resources
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights | Sociocultural
Agreed with
– Chine Labbé
Agreed on
AI can be part of the solution when properly implemented with human oversight
Disagreed with
– Audience
Disagreed on
Role of AI in replacing original source journalism
AI can make civic information more accessible across different literacy levels, languages, and format preferences
Explanation
Caswell sees AI as enabling unprecedented accessibility of civic information by adapting content to individual needs. This could make relevant, societally beneficial information available to everyone at a much deeper level of personal relevance than previously possible.
Evidence
He mentions AI’s ability to adapt information ‘regardless of literacy or language or style preference or format preference or situation’ and the possibility of having ‘relevant, accessible, societally beneficial information available to everybody’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Development | Human rights | Sociocultural
Agreed with
– Chine Labbé
Agreed on
AI can be part of the solution when properly implemented with human oversight
Chine Labbé
Speech speed
160 words per minute
Speech length
2147 words
Speech time
802 seconds
Malign actors increasingly use AI to generate sophisticated deepfakes, with the Russia-Ukraine conflict showing a 16-fold increase in deepfakes alongside rising quality
Explanation
Labbé demonstrates how AI has dramatically enhanced the creation of deepfakes and synthetic media. The quality and quantity of deepfakes has increased exponentially, making them more believable and harder to detect.
Evidence
During the first year of Russia-Ukraine war, only 1 out of 100 false claims was a deepfake (very pixelated, bad quality), but by the third year, there were 16 sophisticated, believable deepfakes. She also mentions a specific example of a deepfake showing someone falsely claiming sexual assault by Brigitte Macron
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Cybersecurity | Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI has fundamentally transformed information creation and distribution at unprecedented scale
AI enables creation of entire networks of fake local news websites that appear credible but spread disinformation at unprecedented scale
Explanation
Labbé explains how AI tools are being used to create vast networks of fake news websites that mimic legitimate local news sources. These sites are entirely AI-generated and maintained, requiring minimal human oversight while appearing authentic.
Evidence
She cites John Mark Dougan, who created over 273 websites using AI, and mentions that NewsGuard found 1,271 AI content farms as of the time of speaking, up from just 49 in May 2023. A colleague created a propaganda site for just $105 in two days
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Cybersecurity | Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI has fundamentally transformed information creation and distribution at unprecedented scale
AI chatbots authoritatively repeat false claims 26% of the time and cite known disinformation websites as sources
Explanation
Labbé presents evidence that AI chatbots frequently fail to distinguish between true and false information, presenting disinformation as fact. This represents a significant reliability problem as users increasingly turn to chatbots for information verification.
Evidence
NewsGuard’s monthly audits show chatbots repeat false claims authoritatively about 26% of the time. She provides examples including Grok repeating false claims about LA protests and Mistral citing known disinformation websites as sources. BBC’s experiment showed chatbots had significant problems in 10% of cases
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Cybersecurity | Human rights | Sociocultural
AI creates vicious cycles where AI-generated false content gets validated by AI chatbots, creating self-reinforcing disinformation loops
Explanation
Labbé describes a problematic feedback loop where AI-generated disinformation gets published on fake websites, which are then cited by AI chatbots as authoritative sources. This creates a self-reinforcing system where false information appears increasingly credible.
Evidence
She explains the process: ‘information created by AI, generated, then repeated through those websites and validated by the AI chatbots, the really vicious circle of disinformation’ where chatbots ‘fail to recognize the fake sites that AI tools have contributed to creating’
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Cybersecurity | Human rights | Sociocultural
Malign actors exploit AI vulnerabilities through ‘LLM grooming’ – saturating web results with propaganda so chatbots will cite and repeat it
Explanation
Labbé reveals how sophisticated actors deliberately flood the internet with propaganda content specifically to influence AI training and responses. This represents a strategic approach to manipulating AI systems by corrupting their information sources.
Evidence
She describes the ‘Pravda Network’ with about 140 sites in over 40 languages publishing 3 million pieces of content yearly with ‘no audience’ but designed to ‘saturate the web results so that chatbots will use their content.’ Initial audits showed 33% success rate in getting chatbots to repeat their disinformation
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Cybersecurity | Human rights | Sociocultural
AI tools can assist in monitoring disinformation and deploying fact-checks at scale when humans remain in the loop
Explanation
Despite the challenges, Labbé acknowledges that AI can be part of the solution when properly implemented. AI can help scale up monitoring and fact-checking efforts, but requires human oversight to be effective.
Evidence
She mentions that ‘we’re using AI for monitoring, for deploying fact checks’ and that ‘we can even use generative AI…to create, for example, new formats of presenting content, as long as the human is in the loop’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI can be part of the solution when properly implemented with human oversight
Certification and labeling of credible information sources can help users identify trustworthy content in AI-mediated environments
Explanation
Labbé advocates for systems that certify and label credible information rather than just trying to identify false content. This positive approach helps users identify trustworthy sources in an increasingly complex information landscape.
Evidence
She mentions NewsGuard’s work and references ‘the Journalism Trust Initiative’ and ‘Trust My Content’ as examples of certification systems, stating ‘I think that’s a very positive way forward’
Major discussion point
Market and Consumer-Driven Solutions
Topics
Human rights | Sociocultural | Legal and regulatory
Consumer awareness and demand for truthful AI systems can drive industry improvements in accuracy and safety features
Explanation
Labbé argues that educating users about AI reliability problems will create market pressure for companies to improve their systems. As users become aware of the scale of misinformation issues, they will demand better accuracy from AI providers.
Evidence
She states ‘once users realize the scale of the issue…people will ask the platforms to do better. And at the end of the day, it’s a business. So if the users ask for more truth, then they’ll have to put in the guardrails’
Major discussion point
Market and Consumer-Driven Solutions
Topics
Economic | Human rights | Sociocultural
Agreed with
– Maria Nordstrom
Agreed on
Consumer awareness and market pressure can drive improvements in AI system reliability
AI companies currently prioritize new features over safety, but user pressure could shift this balance toward reliability
Explanation
Labbé observes that AI companies focus on developing attractive new features rather than ensuring accuracy and safety. However, she believes consumer demand could change these priorities if users prioritize reliability over novelty.
Evidence
She notes that ‘chatbots tend to do worse in the months when they release new features’ because ‘the industry is focusing on efficiency, on new sexy features, but not on safety’ and that ‘safety takes the back seat when it comes to news’
Major discussion point
Market and Consumer-Driven Solutions
Topics
Economic | Human rights | Sociocultural
AI may allow traditional journalism to return to on-the-ground reporting by automating routine information processing tasks
Explanation
Labbé sees AI as potentially liberating journalists from routine desk work to focus on original reporting and human-centered stories. This could strengthen rather than replace traditional journalism by handling automated tasks.
Evidence
She explains that ‘having worked in newsroom most of my life…we often just didn’t have time or money to go on the ground and report’ but ‘with AI, it’ll allow journalism to go back to its roots and do more on-the-ground journalism’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights | Sociocultural
Agreed with
– David Caswell
Agreed on
AI can be part of the solution when properly implemented with human oversight
Maria Nordstrom
Speech speed
139 words per minute
Speech length
849 words
Speech time
366 seconds
The Council of Europe’s AI Framework Convention provides first legally binding global treaty addressing AI’s impact on human rights, democracy and rule of law
Explanation
Nordstrom explains that this treaty represents a landmark achievement in AI governance, being the first legally binding international agreement specifically addressing AI’s impact on fundamental democratic values. It’s global in scope, open to non-European countries as well.
Evidence
She notes it’s been signed by Japan, Switzerland, Ukraine, Montenegro and Canada beyond Council of Europe members, and ‘formulates fundamental principles and rules which safeguard human rights, democracy and the rule of law throughout the AI life cycle’
Major discussion point
Regulatory and Policy Responses
Topics
Human rights | Legal and regulatory
Finding balance between AI literacy education and maintaining public trust in information systems is a key policy challenge
Explanation
Nordstrom identifies a critical tension in policy-making: the need to educate people about AI risks without undermining their trust in information systems altogether. Too much skepticism could be as harmful to democracy as too little.
Evidence
She poses the key question: ‘to what extent is it beneficial for the society when all information is questioned? What does it do with democracy and what does it do with our agency when we can no longer trust the information that we see, that we read, that we hear?’
Major discussion point
Challenges in Combating AI-Driven Disinformation
Topics
Human rights | Sociocultural
Agreed with
– Olha Petriv
– Mikko Salo
Agreed on
Education and literacy are crucial for building resilience against AI-driven disinformation
Hard law regulation has limitations in requiring ‘truth’ from AI systems, making consumer empowerment and choice crucial
Explanation
Nordstrom acknowledges that while legal frameworks like the EU AI Act exist, it’s difficult to mandate truthfulness through regulation alone. This makes empowering consumers to make informed choices about AI systems particularly important.
Evidence
She states ‘when it comes to hard law, yeah, we have the AI Act in the EU, for example, but it’s hard to, by hard law, by regulation, require the truth’ and emphasizes helping consumers ‘make a conscious choice about which systems they are using’
Major discussion point
Regulatory and Policy Responses
Topics
Legal and regulatory | Economic | Human rights
Agreed with
– Chine Labbé
Agreed on
Consumer awareness and market pressure can drive improvements in AI system reliability
Olha Petriv
Speech speed
102 words per minute
Speech length
1208 words
Speech time
704 seconds
Children are particularly vulnerable to AI-generated disinformation, with Ukrainian refugee children’s faces being weaponized in deepfake campaigns
Explanation
Petriv highlights how children become both victims and unwitting spreaders of disinformation, particularly in conflict situations. She describes how children’s identities are exploited to create false narratives that undermine trust in refugee populations.
Evidence
She describes the ‘Matryoshka’ campaign, in which ‘the face of a Ukrainian refugee girl was used to spread the claim that she did not like schools in the U.S.’ in order to create distrust of Ukrainian refugees, and notes that ‘disinformation bullying is now spreading more and more in schools’
Major discussion point
AI’s Impact on Disinformation Creation and Spread
Topics
Human rights | Sociocultural | Cybersecurity
Self-regulation can serve as interim solution, with Ukrainian companies creating ethical AI codes of conduct while awaiting formal legislation
Explanation
Petriv describes Ukraine’s proactive approach of implementing self-regulation rather than waiting for formal laws. This bottom-up approach involves companies voluntarily adopting ethical AI principles and creating accountability mechanisms.
Evidence
She explains that 14 Ukrainian companies including Grammarly and SoftServe ‘created code of conduct that consists of eight main principles’ and established a ‘self-regulation body’ with annual reporting requirements on implementing ethical guidelines
Major discussion point
Regulatory and Policy Responses
Topics
Legal and regulatory | Economic
AI literacy education must start early, focusing on critical thinking and algorithm understanding rather than banning AI use by children
Explanation
Petriv advocates for teaching children how to use AI responsibly rather than prohibiting its use. She emphasizes that AI literacy should focus on developing critical thinking skills and understanding how AI systems work.
Evidence
She references UNESCO’s 2023 guidance that ‘AI skills are especially important for children, beginning with algorithm understanding’ and emphasizes teaching children that ‘AI can be a tool that helps them find answers to their questions and to ask more and more questions’
Major discussion point
Educational and Literacy Solutions
Topics
Human rights | Sociocultural | Development
Agreed with
– Maria Nordstrom
– Mikko Salo
Agreed on
Education and literacy are crucial for building resilience against AI-driven disinformation
Educational initiatives should help children understand AI as a tool while developing skills to resist disinformation
Explanation
Petriv argues that education should frame AI as a helpful tool while simultaneously building children’s capacity to identify and resist false information. This dual approach prepares children for an AI-integrated future while protecting them from manipulation.
Evidence
She mentions working on helping ‘children develop critical thinking through…an AI companion’ and emphasizes that ‘if we do not teach children how to understand news and understand AI, somebody else will teach them how to think’
Major discussion point
Educational and Literacy Solutions
Topics
Human rights | Sociocultural | Development
Irena Gríkova
Speech speed
138 words per minute
Speech length
2907 words
Speech time
1262 seconds
Three-pillar approach needed: fact-checking integration, human rights-by-design platform principles, and user empowerment strategies
Explanation
Gríkova outlines the Council of Europe’s comprehensive strategy for combating disinformation through three interconnected approaches. This framework emphasizes both technical solutions and human-centered approaches to building resilience against false information.
Evidence
She details the three pillars: ‘fact-checking, calling for independent, transparency and financial sustainability by both states and digital platforms,’ ‘platform design’ with ‘human rights by design and safety by design principles,’ and ‘user empowerment’ including ‘initiatives at local level, community-based, and also collective’
Major discussion point
Regulatory and Policy Responses
Topics
Human rights | Legal and regulatory | Sociocultural
Mikko Salo
Speech speed
156 words per minute
Speech length
424 words
Speech time
162 seconds
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Explanation
Salo emphasizes that educators require concrete guidance and resources to teach AI literacy effectively. Without proper support materials, teachers struggle with the complexity of AI-related topics and cannot adequately prepare students.
Evidence
He explains that ‘as a small NGO, we pushed the government to produce guidelines for teachers, because without such guidelines the teachers are lost; it’s such a big thing’ and mentions creating ‘a guide on AI for teachers’ that has been translated into English
Major discussion point
Educational and Literacy Solutions
Topics
Sociocultural | Development
Agreed with
– Maria Nordstrom
– Olha Petriv
Agreed on
Education and literacy are crucial for building resilience against AI-driven disinformation
Disagreed with
– David Caswell
Disagreed on
Effectiveness and bias of fact-checking approaches
Audience
Speech speed
211 words per minute
Speech length
458 words
Speech time
129 seconds
Using AI chatbots to teach children about AI literacy represents an innovative educational approach
Explanation
An audience member suggests that AI chatbots could be used as educational tools to teach children about AI itself. This meta-approach would use AI technology to help students understand AI capabilities and limitations.
Major discussion point
Educational and Literacy Solutions
Topics
Sociocultural | Development
Disagreed with
– David Caswell
Disagreed on
Role of AI in replacing original source journalism
Incentivizing AI service providers requires raising public awareness about the scale of misinformation problems in current systems
Explanation
An audience member argues that creating market incentives for more truthful AI systems depends on educating the public about existing problems. Only when users understand the scope of misinformation issues will they demand better accuracy from AI providers.
Major discussion point
Market and Consumer-Driven Solutions
Topics
Economic | Human rights | Sociocultural
Agreements
Agreement points
AI has fundamentally transformed information creation and distribution at unprecedented scale
Speakers
– David Caswell
– Chine Labbé
Arguments
AI has transformed information creation from artisanal to automated processes, fundamentally changing the information ecosystem
Malign actors increasingly use AI to generate sophisticated deepfakes, with the Russia-Ukraine conflict showing a 16-fold increase in deepfakes alongside rising quality
AI enables creation of entire networks of fake local news websites that appear credible but spread disinformation at unprecedented scale
Summary
Both speakers agree that AI represents a fundamental shift in how information is created and distributed, moving from manual/artisanal processes to automated systems that can operate at massive scale, though they focus on different aspects – Caswell on the general transformation and Labbé on malicious applications
Topics
Human rights | Sociocultural | Cybersecurity
Consumer awareness and market pressure can drive improvements in AI system reliability
Speakers
– Chine Labbé
– Maria Nordstrom
Arguments
Consumer awareness and demand for truthful AI systems can drive industry improvements in accuracy and safety features
Hard law regulation has limitations in requiring ‘truth’ from AI systems, making consumer empowerment and choice crucial
Summary
Both speakers recognize that while regulation has limitations, empowering consumers with knowledge and choice can create market incentives for AI companies to improve accuracy and reliability of their systems
Topics
Economic | Human rights | Legal and regulatory
Education and literacy are crucial for building resilience against AI-driven disinformation
Speakers
– Maria Nordstrom
– Olha Petriv
– Mikko Salo
Arguments
Finding balance between AI literacy education and maintaining public trust in information systems is a key policy challenge
AI literacy education must start early, focusing on critical thinking and algorithm understanding rather than banning AI use by children
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Summary
All three speakers emphasize that education is fundamental to addressing AI disinformation challenges, though they highlight different aspects – the policy balance (Nordstrom), early childhood focus (Petriv), and teacher support needs (Salo)
Topics
Human rights | Sociocultural | Development
AI can be part of the solution when properly implemented with human oversight
Speakers
– David Caswell
– Chine Labbé
Arguments
AI can make civic information more accessible across different literacy levels, languages, and format preferences
AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information
AI tools can assist in monitoring disinformation and deploying fact-checks at scale when humans remain in the loop
AI may allow traditional journalism to return to on-the-ground reporting by automating routine information processing tasks
Summary
Both speakers acknowledge that despite the risks, AI offers significant opportunities to improve information systems – Caswell focuses on accessibility and systematic coverage, while Labbé emphasizes monitoring capabilities and freeing journalists for original reporting
Topics
Human rights | Sociocultural
Similar viewpoints
Both speakers recognize that current approaches to combating disinformation are inadequate and that AI exacerbates these problems by creating systemic issues rather than just individual false content pieces
Speakers
– David Caswell
– Chine Labbé
Arguments
Previous anti-disinformation efforts have been largely ineffective due to scale issues, perceived bias, and focus on individual artifacts rather than systems
AI creates vicious cycles where AI-generated false content gets validated by AI chatbots, creating self-reinforcing disinformation loops
Topics
Human rights | Sociocultural | Cybersecurity
Both speakers emphasize the critical importance of educational infrastructure and support systems for effectively teaching AI literacy, particularly focusing on practical implementation challenges
Speakers
– Olha Petriv
– Mikko Salo
Arguments
Educational initiatives should help children understand AI as a tool while developing skills to resist disinformation
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Topics
Sociocultural | Development
Both speakers recognize the limitations of regulatory approaches alone and emphasize the importance of market-driven solutions through informed consumer choice and pressure
Speakers
– Chine Labbé
– Maria Nordstrom
Arguments
AI companies currently prioritize new features over safety, but user pressure could shift this balance toward reliability
Hard law regulation has limitations in requiring ‘truth’ from AI systems, making consumer empowerment and choice crucial
Topics
Economic | Legal and regulatory
Unexpected consensus
Self-regulation as viable interim solution
Speakers
– Olha Petriv
– Maria Nordstrom
Arguments
Self-regulation can serve as interim solution, with Ukrainian companies creating ethical AI codes of conduct while awaiting formal legislation
The Council of Europe’s AI Framework Convention provides first legally binding global treaty addressing AI’s impact on human rights, democracy and rule of law
Explanation
Despite representing different approaches (bottom-up self-regulation vs. top-down international treaty), both speakers see value in interim measures and voluntary compliance while formal legal frameworks develop. This suggests pragmatic consensus on multi-layered governance approaches
Topics
Legal and regulatory | Economic
AI chatbots as educational tools for AI literacy
Speakers
– David Caswell
– Olha Petriv
– Audience
Arguments
AI can make civic information more accessible across different literacy levels, languages, and format preferences
Educational initiatives should help children understand AI as a tool while developing skills to resist disinformation
Using AI chatbots to teach children about AI literacy represents an innovative educational approach
Explanation
There was unexpected enthusiasm across speakers for using AI itself as an educational tool to teach AI literacy. This meta-approach of using the technology to understand the technology represents innovative thinking that emerged during the discussion
Topics
Sociocultural | Development | Human rights
Overall assessment
Summary
The speakers demonstrated strong consensus on several key areas: the transformative scale of AI’s impact on information systems, the limitations of purely regulatory approaches, the critical importance of education and literacy, and the potential for AI to be part of the solution when properly implemented. There was also agreement on the need for multi-stakeholder approaches combining regulation, market incentives, and educational initiatives.
Consensus level
High level of consensus with complementary rather than conflicting perspectives. The speakers approached the topic from different angles (technical, policy, industry, civil society) but arrived at remarkably similar conclusions about both challenges and solutions. This suggests a mature understanding of the complexity of AI disinformation issues and the need for comprehensive, multi-faceted responses. The consensus has positive implications for developing coordinated international responses to AI disinformation challenges.
Differences
Different viewpoints
Effectiveness and bias of fact-checking approaches
Speakers
– David Caswell
– Mikko Salo
Arguments
Previous anti-disinformation efforts have been largely ineffective due to scale issues, perceived bias, and focus on individual artifacts rather than systems
Teachers need specific guidelines and support materials to effectively integrate AI literacy into education systems
Summary
Caswell argues that fact-checking has been ineffective and suffers from left-coding bias, citing Zuckerberg’s justification for ending fact-checking at Meta. Salo pushes back, suggesting that Zuckerberg’s position was politically motivated rather than evidence-based, and defends the work of fact-checkers who have written open letters explaining their case.
Topics
Human rights | Sociocultural | Legal and regulatory
Role of AI in replacing original source journalism
Speakers
– David Caswell
– Audience
Arguments
AI can enable systematic rather than selective journalism coverage, processing vast amounts of digitally accessible information
Using AI chatbots to teach children about AI literacy represents an innovative educational approach
Summary
An audience member questioned whether AI can truly replace journalism that requires original sources, particularly war journalism and personal stories that require human presence. Caswell acknowledged AI limitations but argued that much current journalism involves computer-based work that AI can handle, while the audience member emphasized the irreplaceable value of human-sourced reporting.
Topics
Human rights | Sociocultural
Unexpected differences
Trust versus skepticism balance in information literacy
Speakers
– Maria Nordstrom
– Irena Gríkova
Arguments
Finding balance between AI literacy education and maintaining public trust in information systems is a key policy challenge
Three-pillar approach needed: fact-checking integration, human rights-by-design platform principles, and user empowerment strategies
Explanation
While both speakers work for institutions focused on protecting democratic values, they reveal a subtle but significant tension. Nordstrom worries that too much skepticism about information could undermine democracy itself, while Gríkova suggests we may have entered an ‘era of mistrust’ that requires new approaches like public service AI chatbots. This disagreement is unexpected because it reveals philosophical differences about whether trust or skepticism should be the default stance in information literacy.
Topics
Human rights | Sociocultural
Overall assessment
Summary
The discussion revealed relatively low levels of fundamental disagreement among speakers, with most conflicts centered on implementation approaches rather than core goals. The main areas of disagreement involved the effectiveness of current fact-checking approaches, the extent to which AI can replace human journalism, and the balance between promoting healthy skepticism versus maintaining institutional trust.
Disagreement level
The disagreement level was moderate and constructive, with speakers generally building on each other’s points rather than opposing them. The most significant implication is that while there’s broad consensus on the problems AI poses for information integrity, there’s less agreement on solutions – particularly regarding the balance between technological fixes, regulatory approaches, and educational interventions. This suggests that policy development in this area will require careful coordination among different approaches rather than choosing a single strategy.
Partial agreements
Takeaways
Key takeaways
AI has fundamentally transformed information creation from artisanal to automated processes, creating both unprecedented risks and opportunities for democratic dialogue
Current anti-disinformation efforts have been largely ineffective due to scale limitations, with AI now enabling malign actors to create sophisticated disinformation campaigns at unprecedented scale and low cost
AI chatbots authoritatively repeat false claims 26% of the time, creating vicious cycles where AI-generated disinformation gets validated by AI systems themselves
Children are particularly vulnerable to AI-generated disinformation and require early AI literacy education focused on critical thinking rather than AI prohibition
The Council of Europe’s AI Framework Convention provides the first legally binding global treaty addressing AI’s impact on human rights, democracy and rule of law
Consumer awareness and demand for truthful AI systems can drive industry improvements, as AI companies currently prioritize new features over safety and accuracy
AI offers opportunities for systematic rather than selective journalism coverage and can make civic information more accessible across different populations
Self-regulation by AI companies can serve as an interim solution while formal legislation is being developed
The fundamental challenge is preserving democratic institutions built on trust while navigating an era of widespread information mistrust
Resolutions and action items
Develop AI literacy educational materials and programs, including innovative approaches like using AI chatbots to teach children about AI
Create certification and labeling systems for credible information sources and AI systems to help users identify trustworthy content
Preserve and strengthen primary source journalism as the foundation for AI-based secondary journalism
Implement the Council of Europe’s three-pillar approach: fact-checking integration, human rights-by-design platform principles, and user empowerment strategies
Develop risk and impact assessment methodology for AI systems affecting democratic processes through the Council of Europe’s multi-stakeholder process
Raise consumer awareness about AI misinformation issues to drive market demand for more reliable AI systems
Support ratification and implementation of the Council of Europe’s AI Framework Convention
Invest in strengthening public service media and regulatory authorities’ capabilities to navigate the digital environment
Unresolved issues
How to find the right balance between AI literacy education and maintaining public trust in information systems without further eroding confidence
What is the optimal age and methodology for teaching children about AI and disinformation resistance
How to regulate AI systems for truthfulness, given that hard law has limited ability to mandate ‘truth’
How to address the demand side of disinformation by making users seek out and consume quality information even when it is available
How to scale fact-checking and content moderation to match the volume of AI-generated content
Whether AI can truly replace primary source journalism, particularly for on-ground reporting and original source gathering
How to prevent the fragmentation of shared narratives while enabling personalized AI-mediated information experiences
How to demonetize the disinformation economy and cut off financial incentives for spreading false information
Suggested compromises
Use AI as a tool to enhance rather than replace human journalism, allowing traditional media to focus on on-ground reporting while AI handles routine information processing
Implement hybrid approaches where humans remain in the loop for AI-assisted fact-checking and content moderation
Develop both hard law regulation (like the EU AI Act) and soft law measures (like industry self-regulation) to address different aspects of the AI disinformation challenge
Focus on empowering consumers to make informed choices about AI systems rather than attempting to regulate truth directly
Combine systematic AI-enabled information coverage with preservation of identity-based publications that serve community-building functions
Pursue both supply-side solutions (reducing harmful content production) and demand-side solutions (improving user resilience and critical thinking)
Thought provoking comments
We basically changed our information ecosystem from a one-to-many, or more accurately, a few-to-many shape, to a many-to-many shape… And this was the technical change that caused this cascade of activity over the last 15 years, including around disinformation and misinformation.
Speaker
David Caswell
Reason
This comment provides a fundamental framework for understanding the root cause of our current information crisis. Rather than focusing on symptoms, Caswell identifies the structural transformation that enabled mass disinformation – the democratization of mass communication itself.
Impact
This framing shifted the discussion from treating AI as the primary problem to understanding it as the latest evolution in a longer transformation. It established a historical context that influenced how other panelists discussed solutions, moving beyond reactive measures to systemic thinking.
I think there’s another deep risk that really is underappreciated here, which is that as we start to use these models as sort of core intelligence for our societies, that there are biases in these models… Elon Musk… has just recently announced that they’re going to use Grok to basically rebuild the archive on which they train the next version of Grok. So they’re going to write a new history, basically, of humanity, and then use that to train Grok.
Speaker
David Caswell
Reason
This insight reveals a terrifying feedback loop where AI systems don’t just reflect existing biases but actively reshape the information foundation of society. The Grok example illustrates how powerful actors can literally ‘rewrite history’ at the training data level.
Impact
This comment introduced a new dimension of concern that went beyond traditional content moderation discussions. It elevated the conversation to existential questions about truth and reality, influencing later discussions about the need for systematic approaches and public oversight.
So you have information created by AI, generated, then repeated through those websites and validated by the AI chatbots, the really vicious circle of disinformation.
Speaker
Chine Labbé
Reason
This identifies a critical self-reinforcing mechanism where AI-generated false information becomes ‘validated’ by other AI systems, creating an ecosystem of synthetic credibility that’s increasingly difficult to detect or counter.
Impact
This observation shifted the discussion from viewing AI as a tool that could be controlled to understanding it as creating autonomous disinformation ecosystems. It reinforced the urgency around developing systematic solutions rather than piecemeal approaches.
The important question here is, to what extent is it beneficial for the society when all information is questioned? What does it do with democracy and what does it do with our agency when we can no longer trust the information that we see, that we read, that we hear?
Speaker
Maria Nordström
Reason
This comment captures a fundamental paradox: efforts to combat disinformation through skepticism and education may inadvertently erode the shared trust that democracy requires. It highlights the delicate balance between critical thinking and social cohesion.
Impact
This shifted the conversation from purely technical solutions to philosophical questions about the foundations of democratic society. It influenced the moderator’s later observation about entering an ‘era of mistrust’ and shaped discussions about preserving trusted institutions.
And it’s not just a parental issue, it’s a generation’s lost potential… if we will not teach children how to understand news and understand AI, somebody else will teach them how to think.
Speaker
Olha Petriv
Reason
This reframes AI literacy education as an urgent societal imperative rather than an individual responsibility. The phrase ‘somebody else will teach them how to think’ powerfully captures the stakes of inaction in a world where malicious actors are actively exploiting AI.
Impact
This comment elevated the discussion of education from a nice-to-have to an existential necessity. It influenced other speakers to focus on practical implementation of AI literacy programs and sparked innovative ideas like using AI chatbots to teach AI literacy.
I think there’s a new deep need… we’ve depended for 400 years on these mechanisms, like the scientific method or like journalism… they’re truth-seeking and they’re also self-correcting… I think they also need to be deterministic rather than probabilistic.
Speaker
David Caswell
Reason
This insight identifies a fundamental incompatibility between how democratic institutions have historically operated (deterministic, verifiable, persistent) and how AI systems work (probabilistic, opaque, ephemeral). It suggests our entire epistemological framework may need updating.
Impact
This comment introduced a deeper philosophical dimension to the technical discussion, influencing conversations about the need for new institutional frameworks and the importance of preserving traditional journalistic methods alongside AI innovation.
Overall assessment
These key comments transformed what could have been a typical ‘AI is dangerous/helpful’ discussion into a sophisticated analysis of systemic challenges to democratic epistemology. Caswell’s historical framing established that we’re dealing with the latest phase of a longer transformation, while Labbé’s practical examples grounded abstract concerns in measurable realities. Nordstrom’s philosophical questioning and Petriv’s urgency about education elevated the stakes from technical problems to civilizational challenges. Together, these insights shifted the conversation from reactive problem-solving to proactive system design, emphasizing the need for new institutional frameworks, educational approaches, and governance mechanisms that can preserve democratic dialogue in an AI-mediated information ecosystem. The discussion evolved from cataloging problems to envisioning solutions that address root causes rather than symptoms.
Follow-up questions
How do you fact-check your way out of a completely new alternative reality created by AI?
Speaker
Irena Gríkova
Explanation
This addresses the fundamental challenge of verifying information when AI can create entire plausible but false narratives at scale, making traditional fact-checking approaches insufficient
What is the right age to teach children about AI, and how should this education be structured?
Speaker
Mikko Salo
Explanation
This is crucial for developing AI literacy programs, as children need to understand critical thinking before using AI as a supportive tool, and the approach may be culturally dependent
To what extent is it beneficial for society when all information is questioned? What does it do with democracy and our agency when we can no longer trust the information we see?
Speaker
Maria Nordström
Explanation
This addresses the balance between healthy skepticism and the erosion of trust that could undermine democratic institutions and individual agency
How can we incentivize AI service providers to be more grounded on truth and reality?
Speaker
Jun Baek
Explanation
This explores market-based and regulatory approaches to encourage AI companies to prioritize accuracy over other features like efficiency or novelty
How do you motivate AI companies to comply with self-regulation?
Speaker
Irena Gríkova
Explanation
This examines the mechanisms needed to ensure voluntary compliance with ethical AI standards in the absence of binding regulations
What does ‘systematic versus selective’ news coverage mean in the context of AI journalism?
Speaker
Irena Gríkova
Explanation
This seeks clarification on how AI could transform journalism from resource-constrained selective reporting to comprehensive systematic coverage of information domains
Can AI really replace original source journalism, especially in areas requiring human presence like war journalism or personal stories?
Speaker
Frances (YouthDIG)
Explanation
This questions the limits of AI in journalism and the continued need for human reporters in certain contexts that require physical presence and human connection
How can we develop certification or ranking systems for AI chatbots to help users identify trustworthy sources?
Speaker
Irena Gríkova
Explanation
This explores the need for user-friendly systems to evaluate and compare the reliability of different AI information sources
Should we develop public service AI chatbots trained only on reliable data?
Speaker
Irena Gríkova
Explanation
This considers whether governments should provide trustworthy AI information services as a public good, similar to public service media
How can we preserve and strengthen primary source journalism as the foundation for AI-based secondary journalism?
Speaker
Irena Gríkova
Explanation
This addresses the need to maintain human-generated original reporting to prevent the information ecosystem from becoming entirely virtual and self-referential
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.