What Proliferation of Artificial Intelligence Means for Information Integrity?
8 Jul 2025 15:00h - 15:45h
Session at a glance
Summary
This discussion, hosted by Latvia’s UN mission in Geneva, focused on the implications of artificial intelligence for information integrity in our global information environment. The panel brought together experts from the UN Human Rights Office, academia, fact-checking organizations, and civil society to examine how AI is transforming how we create, consume, and verify information.
The speakers identified several key risks that AI poses to information integrity. These include the proliferation of deepfakes and synthetic content, AI-generated hallucinations that spread false information, and the potential for discriminatory content moderation systems. Particularly concerning is the use of AI by malicious state actors to manipulate information, conduct surveillance, and suppress dissent, with Russia’s disinformation campaigns regarding Ukraine cited as a stark example. The panelists noted that while individual deepfakes haven’t yet caused measurable behavior changes, the mere existence of AI-generated content is creating widespread skepticism and eroding trust in institutions and information sources generally.
However, the discussion also highlighted AI’s potential benefits for information integrity. AI tools can help fact-checkers work more efficiently, enable better detection of disinformation campaigns, and provide new ways to educate different audience segments about media literacy. The speakers emphasized that different demographic groups require tailored approaches to information literacy, from Ukrainian refugees needing trustworthy local information to elderly populations requiring specialized digital education programs.
The panel concluded that addressing AI’s impact on information integrity requires coordinated action across multiple stakeholders, including governments, tech companies, academia, and civil society, with particular emphasis on transparency, investment in trust and safety measures, and maintaining human intelligence alongside artificial intelligence capabilities.
Key points
## Overall Purpose/Goal
This was a panel discussion organized by Latvia at the WSIS Plus 20 review event, focusing on the implications of artificial intelligence for information integrity. The goal was to examine how AI is changing the global information environment, assess the risks it presents to democratic societies, and explore potential solutions that governments, UN institutions, civil society, and academia can implement.
## Major Discussion Points
– **AI’s Transformative Impact on Information Environment**: Speakers emphasized that AI is rapidly changing how information is created, distributed, and consumed, with technologies like ChatGPT reaching 800 million weekly users. However, the full implications are not yet understood, and the pace of change is outstripping our ability to comprehend and respond effectively.
– **Emerging Risks and Threats**: Key concerns include AI-generated disinformation, deepfakes, content hallucination, malicious use by state actors (particularly Russia’s information warfare against Ukraine), micro-targeting of vulnerable populations, and the potential for AI content moderation to perpetuate discrimination and bias.
– **Information Nihilism and Declining Trust**: A critical finding was that audiences are becoming increasingly skeptical of all information due to AI’s existence, leading to a “nothing is real” mentality. This erosion of trust in institutions and information sources poses significant challenges for democratic discourse and decision-making.
– **Need for Targeted, Evidence-Based Solutions**: Speakers stressed the importance of understanding different audiences (elderly, rural populations, refugees, youth) and developing tailored approaches rather than one-size-fits-all solutions. They emphasized the need for AI literacy combined with critical thinking skills, and warned against binary approaches to content moderation.
– **Multi-Stakeholder Response Strategy**: Discussion of roles for various actors including: academia maintaining strong human intelligence and knowledge dissemination; fact-checkers adapting tools and methods; governments developing appropriate regulation without censorship; and civil society organizations continuing transparency and accountability efforts despite reduced funding and political pressure.
## Overall Tone
The discussion maintained a consistently serious and somewhat pessimistic tone throughout. Speakers repeatedly acknowledged the gravity of the challenges, with phrases like “not a great landscape,” “very challenging,” and warnings about “blindly driving into a fog.” While some positive applications of AI were mentioned (such as human rights monitoring and fact-checking tools), the risks and challenges clearly dominated the conversation. The tone remained professional and academic, with speakers demonstrating deep expertise while expressing genuine concern about the rapid pace of change and society’s ability to adapt appropriately.
Speakers
– **Zaneta Ozolina** – Professor at the University of Latvia, leads a project using AI to address disinformation, expert on information manipulation and young audiences
– **Ivars Pundurs** – Latvian ambassador to Geneva for United Nations institutions
– **Graham Brookie** – Vice President and Senior Director at the Atlantic Council, leads technology programs, expert on information manipulation, built the Digital Forensic Research Lab
– **Viktors Makarovs** – Special Envoy on Digital Affairs at the Latvian Ministry of Foreign Affairs, moderator of the discussion
– **Peggy Hicks** – Director of the Thematic Engagement, Special Procedures and Right to Development Division at the UN Human Rights Office
– **Septiaji Nugroho** – Co-founder and chairman of Mafindo (a fact-checking organization in Indonesia), fact-checking expert
– **Audience** – Various audience members including Ila from CDAC Network, Claudio (high school student from Romania), and Boris Engelsson (freelance journalist)
**Additional speakers:**
– **Martin Stateris** – Colleague mentioned by Ambassador Pundurs as event organizer, ending his posting in Geneva
Full session report
# Artificial Intelligence and Information Integrity: A Multi-Stakeholder Discussion on Global Challenges and Solutions
## Executive Summary
This panel discussion, hosted by Latvia’s UN mission in Geneva as part of the WSIS Plus 20 review event, brought together leading experts to examine artificial intelligence’s impact on information integrity. The discussion featured Peggy Hicks, Director of the Thematic Engagement, Special Procedures and Right to Development Division at the UN Human Rights Office; Professor Zaneta Ozolina from the University of Latvia; Graham Brookie, Vice President and Senior Director of the Atlantic Council’s Digital Forensic Research Lab; and Septiaji Nugroho from Mafindo, an Indonesian fact-checking organization.
The panel identified critical challenges including AI-generated deepfakes, synthetic content, and discriminatory content moderation systems. Particularly concerning was the documented use of AI by malicious state actors, with Russia’s disinformation campaigns regarding Ukraine serving as a contemporary example. However, the most significant finding was that while individual deepfakes have not yet caused measurable behavioral changes, the mere existence of AI-generated content is creating widespread skepticism and “information nihilism” where “everything is possible and thus nothing is real.”
Despite these challenges, speakers highlighted AI’s potential benefits for information integrity, including enhanced fact-checking capabilities, better detection of disinformation campaigns, and innovative approaches to media literacy education. The panel emphasized that different demographic groups require tailored approaches, from Ukrainian refugees needing trustworthy local information to elderly populations requiring specialized digital education programs.
The discussion concluded with consensus that addressing AI’s impact requires coordinated action across governments, technology companies, academia, and civil society, with key recommendations including increased transparency in AI development, substantial investment in trust and safety measures, and maintaining human intelligence capabilities alongside artificial intelligence systems.
## Opening Context and Framework
Ambassador Ivars Pundurs opened by noting this represents the third consecutive year Latvia has convened this discussion, demonstrating sustained commitment to addressing AI’s impact on information environments. He emphasized how state actors are using AI to manipulate information and conduct surveillance, specifically citing Russia’s AI-driven narratives about Ukraine.
Moderator Viktors Makarovs, Special Envoy on Digital Affairs at the Latvian Ministry of Foreign Affairs, framed the discussion with Yoshua Bengio’s metaphor: “We’re blindly driving into a fog, and one of the areas where this seems to be true is the impact of AI on our information world, on the epistemology of the world.” This captured the fundamental uncertainty about AI’s impact on how we understand and process knowledge itself.
## AI’s Transformative Impact on Information Environment
### Rapid Technological Change and Scale
Speakers agreed that AI is rapidly transforming information environments in ways not yet fully understood. Graham Brookie noted that ChatGPT has reached massive scale adoption, demonstrating AI’s rapid integration into information systems. Septiaji Nugroho observed that people are applying more “gas” than “brakes” to AI adoption, with AI accelerating content creation and dissemination across all sectors.
Peggy Hicks emphasized that AI is changing information environments in ways we don’t fully comprehend, with platforms potentially fragmenting and evolving beyond their current forms. This transformation extends beyond technical capabilities to affect the very epistemology of information—how we determine what is true.
## Emerging Risks and Threats
### AI-Generated Disinformation and Synthetic Content
Peggy Hicks outlined key concerns including AI-generated hallucinations spreading false information, deepfakes convincingly impersonating real people, and biased content moderation systems perpetuating discrimination. These risks are compounded by questionable data provenance, making it increasingly difficult to trace information origins and reliability.
Professor Zaneta Ozolina emphasized that AI enables more sophisticated disinformation campaigns with well-planned tactics, moving beyond random narratives to strategic information operations. This represents a qualitative shift from opportunistic false information to systematic, AI-enhanced manipulation campaigns.
### Malicious Use by State Actors
Graham Brookie noted that AI is being rapidly adopted by bad actors, particularly state actors, for coding narratives and understanding cultural context in information operations. This represents a significant escalation in capabilities available to those seeking to manipulate information environments, enabling more sophisticated targeting and more convincing content creation.
### Micro-Targeting Vulnerable Populations
Septiaji Nugroho highlighted AI’s ability to enable micro-targeting of specific audiences such as elderly people and migrant workers for scams and manipulation. This capability allows malicious actors to craft highly personalized disinformation exploiting specific vulnerabilities, fears, and cultural contexts of particular demographic groups.
## The Information Nihilism Challenge
Perhaps the most significant finding was what Graham Brookie termed “information nihilism”—where “everything is possible and thus nothing is real.” Public polling data showed audiences becoming generally more skeptical and less trusting of institutions, creating an environment where AI’s mere existence undermines confidence in all information sources.
Brookie noted: “We’re seeing trust go down just because AI exists and thus people are a little bit more skeptical of navigating online information environments.” This represents a profound shift where the potential for AI manipulation creates doubt about all information, regardless of actual source or veracity.
Interestingly, Brookie reported that “there hasn’t been one case that we have seen in any place around the world where something like a deepfake or a single piece of synthetic content led to immediate behavior change.” The closest example was Slovakia’s parliamentary elections, but even there, direct behavioral impact was unclear. This suggests the immediate threat may be less about specific synthetic content causing direct harm and more about cumulative effects on trust and information processing.
## Opportunities and Positive Applications
### Enhanced Fact-Checking and Verification
Despite significant risks, speakers identified ways AI can positively contribute to information integrity. Septiaji Nugroho highlighted that AI chatbots can make fact-checking databases more accessible to the public, citing his organization’s recently launched chatbot as an example. He also mentioned Google’s SynthID watermarking technology as a promising development for content authentication.
Peggy Hicks noted that AI tools can help understand global situations in deeper, more nuanced ways at scale and in real-time, potentially improving human rights monitoring and documentation.
### Educational and Outreach Applications
Professor Ozolina emphasized that AI can assist in developing educational curricula and information packages for critical thinking, particularly for reaching underserved social groups such as rural populations. Through her AI for Debunk project, she noted that Ukrainian refugees need trustworthy European information rather than generic anti-disinformation messaging, highlighting the importance of context-specific approaches.
## Addressing Different Audiences and Digital Divides
### Tailored Approaches for Diverse Demographics
A key theme was recognizing that different audiences require specially designed approaches. Professor Ozolina noted that special programs are needed for digitally less educated elderly populations, often implemented through libraries at local levels. She referenced Latvia’s “Seniors Digital Year” program as an example of targeted intervention.
Septiaji Nugroho emphasized that different approaches are required for elderly audiences compared to young people, such as specialized digital academies. Both speakers highlighted particular challenges faced by elderly and rural populations in navigating AI-enhanced information environments.
### The Role of Traditional Institutions
An important theme was the critical role of traditional institutions like libraries in addressing modern AI challenges. Speakers noted that libraries and librarians possess traditional skills in information integrity that remain highly relevant in the digital age, serving as trusted local resources for digital literacy education.
## Educational Approaches and Critical Literacy
Septiaji Nugroho made a crucial distinction, arguing that “AI literacy should be accompanied with AI critical literacy, just like when we do critical thinking on digital literacy.” He emphasized the importance of Socratic prompting techniques and noted that information consumption patterns need to change from vertical to lateral reading when comparing AI-generated and traditional sources.
Professor Ozolina provided crucial framing: “It’s not about debating pro or con. Artificial intelligence is here to stay. So therefore, the question is how to balance human intelligence and artificial intelligence.” This moved the discussion beyond resistance versus acceptance to focus on practical coexistence strategies.
## Institutional Responses and Challenges
### Inadequate Current Responses
Speakers expressed concern about current institutional responses. Peggy Hicks noted that “government responses tend toward binary solutions that don’t work from a free expression standpoint and can enable censorship of dissent.” Graham Brookie complemented this by noting “there’s large-scale retrenchment from industry transparency efforts and reduced investment in trust and safety fields.”
### The Need for Evidence-Based Approaches
Peggy Hicks emphasized the importance of evidence-based approaches that identify good practices and incorporate academic research. She noted the need to “address these issues on a firm information basis… But also at pace,” highlighting the tension between evidence-based policy-making and urgency created by rapid technological change.
## Stakeholder Roles and Responsibilities
### Technology Platforms and Corporate Responsibility
Septiaji Nugroho argued that platforms should bear primary responsibility for identifying synthetic content through proper watermarking, noting that “fact-checkers now face impossible demands to verify whether content is synthetic, which should be platforms’ responsibility through proper watermarking.”
### Academic and Civil Society Roles
Professor Ozolina outlined academia’s role in “spreading knowledge, communicating with different societal groups, and filling information vacuums.” However, this was challenged by audience member Boris Engelsson, a freelance journalist, who questioned academic credibility, citing concerns that “most medical research in the past 50 years may be fake.”
Graham Brookie emphasized civil society’s continued importance in driving transparency and accountability, despite facing “challenges and reduced funding and political pressure.”
## Audience Engagement and Additional Perspectives
The discussion included several audience interventions that enriched the conversation. Claudio, a high school student from Romania, raised his country’s recent presidential elections, which saw a major disinformation campaign, and asked how to reach digitally illiterate populations and whether AI companies should be required to watermark generated content. Audience member Ila from the CDAC Network asked about using generative AI to meet audiences’ epistemic and psychological needs, for instance by helping local CSOs create counter-narratives, while an online participant raised blockchain technology as a tool for enhancing information integrity and transparency.
Questions were also raised about the role of libraries and librarians in information integrity, reflecting broader interest in how traditional information institutions can adapt to AI challenges.
## UN Human Rights Office Initiatives
Peggy Hicks outlined specific initiatives the UN Human Rights Office plans to implement, including developing a Human Rights Digital Advisory Service, referenced in the Global Digital Compact, to help states and businesses navigate AI challenges. The office also plans to continue the B-Tech project, which encourages companies to describe their human rights practices and promotes best practices.
## Unresolved Questions and Future Challenges
The discussion revealed significant gaps in understanding AI’s actual impact on information environments. Graham Brookie emphasized the need to “collect more case studies and data to have higher confidence assessments about AI’s impact on information environments.”
Peggy Hicks raised fundamental questions about platform sustainability, noting that “platforms may not exist in their current form in the near future,” and asked “what type of narrative response is useful in countering disinformation without amplifying it?”
## Conclusion
The discussion demonstrated remarkable consensus on the fundamental nature of challenges while acknowledging significant disagreements about specific solutions. The conversation highlighted that while AI presents serious risks to information integrity, it also offers significant opportunities for enhancing our ability to create, verify, and disseminate reliable information.
The key challenge lies not in choosing between human and artificial intelligence, but in developing approaches that effectively combine both while maintaining critical thinking capabilities and institutional safeguards that democratic societies require. Different communities, cultures, and contexts require tailored approaches that build upon existing strengths and address specific vulnerabilities.
The path forward requires continued dialogue, research, and experimentation, combined with commitment to evidence-based approaches that can evolve as understanding of AI’s impact deepens. Most crucially, it requires maintaining focus on preserving and strengthening democratic discourse and decision-making processes as technological foundations of information systems continue to evolve rapidly.
As the moderator noted in closing, while we may be “blindly driving into a fog,” the collective expertise, commitment, and collaborative spirit evident in this conversation provide grounds for cautious optimism that these challenges can be navigated while preserving the values and capabilities that democratic societies require to thrive in an AI-enhanced world.
Session transcript
Ivars Pundurs: Hello, good afternoon. I am the Latvian ambassador here in Geneva to all possible United Nations institutions, and it falls on me to make an opening speech for what I hope will be a very interesting and stimulating discussion. So I’ll just get on with it. Excellencies, ladies, gentlemen, dear friends and colleagues, I’m delighted to welcome you to this panel discussion on the implications of artificial intelligence for information integrity. Like most countries, Latvia has entered the AI race. We have established a national AI center that brings together the public and private sectors, as well as academia, to foster rapid AI adoption. But speed alone is not enough. This is the third consecutive year that Latvia has convened a discussion on how artificial intelligence affects our information environment. While technology has advanced a lot in this time, the issue at hand has only grown more urgent. This discussion is particularly relevant in the context of the ongoing WSIS Plus 20 review. When the WSIS action lines were first formulated over two decades ago, few could have anticipated the AI revolution. And yet many of those action lines, on access to information and knowledge, the ethical dimensions of the information society, trust and security in the use of ICTs, and the role of media, remain highly relevant. At the same time, the transformative change we have witnessed in recent years makes clear that information integrity has emerged as a distinct and critical challenge requiring focused attention and collective action. Threats to information integrity and to our society stem not only from the technology itself, but also from its malicious use by state actors. These actors harness artificial intelligence to manipulate information, shape minds and behavior, conduct surveillance, censor, and suppress dissent. Such practices undermine democracy, erode societal cohesion, and jeopardize international peace, as well as our shared efforts to achieve the sustainable development goals. In Europe, the stark example is Russia’s use of AI-driven tools to spread narratives aimed at justifying its unprovoked war of aggression against Ukraine, a war that flagrantly violates international law, inflicts immense human suffering, and devastates infrastructure. Given the scale of these risks, it is essential that we engage in open dialogue, share experiences, and explore practical solutions. This panel provides an important opportunity to do just that. I extend my sincere appreciation to our distinguished speakers from international organizations, civil society and academia who will share their insights today. I also thank each of you for taking time from your busy schedules to join this discussion. I would like to express my gratitude also to the organizers of the WSIS Plus 20 high-level event for making this session possible. I wish us all a thought-provoking and productive exchange. Last but not least, I wish to thank my dear colleague Martin Stateris, who has put this event together. It is the last event for him in his posting in Geneva, so I wish him all the best in his future career.
And now it is my pleasure to give the floor to Mr. Viktors Makarovs, Special Envoy on Digital Affairs at the Latvian Ministry of Foreign Affairs who will moderate today’s discussion. Thank you.
Viktors Makarovs: Thank you very much, Ambassador, for the introduction. To introduce the topic again very quickly, I think the important thing to say is that it’s a juncture of two themes that obviously are on many people’s minds. One is AI. I don’t have to go into that, because we are at an AI event, basically. But the other one is information integrity, which is a very important though recent and not quite well-known idea of an information environment that is global, open, free, but at the same time safe and secure. And we will be looking at how these two important phenomena are playing together and what they mean. Another thing to know about this discussion is that it is not the first of its kind that we have organized. We organized a similar event last year, and indeed Peggy Hicks, one of our speakers, was kind enough to participate in that conversation as well, and we have done it on other platforms and in other forums over the last two or three years. The reason we want to repeat this exercise is that the world we live in has changed dramatically. Technologically speaking, in terms of AI, we’re in a completely different place. The technology has evolved, but adoption has also increased exponentially over the last year. ChatGPT alone has 800 million weekly users today, about a fourfold increase since last year, when we had the first installment of this discussion. One of the people looking closely at the risks presented by AI, Yoshua Bengio, said that we’re blindly driving into a fog, and one of the areas where this seems to be true is the impact of AI on our information world, on the epistemology of the world. We want to address this and find out what the state of play is, but also what the outlook is on AI and the risks it presents to information environments today, and then the next stage will be to talk about things to do. What can governments do? What can the United Nations do? Civil society and academia? To do that, we have four speakers, three here with us, and one who will be joining us online. I will introduce them as they are invited to speak. We will have very quick interactions because we’re really short on time, and hopefully, after these interactions, we’ll have time to engage the audience as well. So as the first speaker, I would like to give the floor to Peggy Hicks, who is Director of the Thematic Engagement, Special Procedures and Right to Development Division at the UN Human Rights Office. And the question I have for you, Peggy, as well as for the other speakers in this first round, is: how is AI changing our information environment? What risks does it present to information integrity today and in the near future? And can it also be a technological force for good and help support information integrity, meaning open, free and trustworthy information? So over to you.
Peggy Hicks: Viktors, you always ask the easy questions. There is a lot tucked in there. So I would say three pieces. The first is really looking at how the environment is changing. And I think it’s really important to say that this is a rapid change in the information environment, and we do not yet fully understand where we’re going. I listened to a podcast recently where they were talking about the fact that social media may ultimately be turning into the much more fragmented, platform-based, protocol-based approach that some of us had wondered might deliver something different and better than the existing platforms. But it’s coming at a time when there’s actually a question whether platforms themselves will continue to exist in the way that they do. So in this information integrity realm, one of the greatest things I want to emphasize is that we need to address these issues on a firm information basis, and that’s why I’m glad we have academia with us and work being done in those areas, but also at pace. We can’t address the problems of yesterday rather than today, and I’m afraid we are too often in danger of doing that. Secondly, looking at the risks that we see now: it’s not a great landscape, and there is so much that’s happening. We hear a lot about dis- and misinformation; obviously, those problems exist regardless of generative AI and large language models. But let’s throw in the whole idea of hallucination, which doesn’t seem to be getting any better. The possibility that we all rely on information that’s not what we hope it to be is very, very real, and then, of course, we have the issues around deepfakes and the impact there. Those are two of the areas that are most often talked about, but I also think we have to talk about the fact that the AI content moderation that is likely to happen may be infused with some of the same flaws we see within AI machine learning generally, where it could fuel discrimination or exacerbate some of the problems that we already see. And, of course, this is all a landscape built on data where we’re not really sure of the provenance or use of that data, and the privacy of it as well. So there are all sorts of risks that we’re facing in terms of the AI itself. But the second set of risks I think we have to emphasize is how governments will respond to these issues, because, unfortunately, there, too, the landscape is not very promising. What we’ve seen is a tendency to look for binary solutions: information is good or information is bad; it’s true or it’s false; we can flip a switch and solve the issue of disinformation if only the companies were willing to do it. That is not going to work, from a free expression standpoint, from the fact that there are always going to be facts that are contested, and because there will always be actors who want to use that lever to flip the switch to censor speech or to censor dissent. So we have to be very careful about the tools that we deploy to address these problems as well. But finally, you asked me: is there anything good here? Do you have anything positive on the landscape for us to look at? And I think from a human rights perspective, we see lots of value here.
We do understand that these tools are tools that will help us to understand what’s happening in the world in a much deeper and more nuanced way at scale and in real time in a way that could allow us to better address human rights problems globally. One of the things that we have always focused on in the human rights movement is that when there’s a spotlight on a situation, it tends to be much harder for things to happen in a negative way with regards to human rights. The fact that we now have access to data sets that we never did before and the fact that we’ll be able to allow people in general to engage in the human rights cause in new ways I think is very promising, but it’s making sure that those positive aspects keep up with some of the risks that we see that’s the real challenge. Thanks.
Viktors Makarovs: Thank you very much, Peggy. It looks like the risks side of the equation is much weightier at the moment than the other. Professor Zaneta Ozolina, at the University of Latvia you and the teams you work with do lots of things, and I think the University of Latvia actually leads a project aimed at using AI to address disinformation. But I know you also look closely at the audiences, especially young people, who are the potential targets and victims of information manipulation. How does it look from your perspective?
Zaneta Ozolina: Thank you. Thank you for inviting me to share some thoughts. Actually, Viktors, I was prepared to respond to the questions which you already asked Peggy, but I will leave them for a later stage and respond directly to your question. I think that whenever we are addressing information integrity or disinformation issues, it is very important to know exactly what audience we are addressing and which audience we are targeting. Information integrity as such is a very relevant issue, but different audiences have completely different attitudes about what is relevant or not. For instance, when we started our project, AI for Debunk, we identified different target audiences which are targeted by disinformation. We also decided to interview Ukrainian refugees, expecting a priori, let’s say our hypothesis was, that this is one of those target groups which is very much exposed to Russia’s disinformation. And what we found out is that the situation is completely different, because mostly these are women who arrived with their children as refugees running away from war. They know what war is about and what Russia is up to; they don’t have to be convinced about disinformation’s role in their lives. What they were lacking was information about what is happening in Europe, in a language they could understand, from information sources they could trust. So it is a different issue from the one we are traditionally used to discussing in our countries. And it also highlighted how information integrity is relevant for different target groups, but in a very specially designed way, with very specially prepared narratives and messages. Another point which we addressed in our project was a comparative analysis of two big cases, the war in Ukraine and climate change. It was a targeted selection of those two cases, because it’s interesting to see what the differences and the similarities are in the disinformation campaigns: whether these are just random disinformation narratives thrown into different media sources or well-planned strategic campaigns. And what we found out is that even if the narratives differ, the tactics, how they are applied, and the models which are used are part of very well-planned campaigns. Another point, and you should control my time: another part of the project is that we work together very closely with different IT companies, because one task of the project is to elaborate a special tool which could assist in identifying disinformation. Honestly, I was quite skeptical at the very beginning, and honestly, I’m sharing my own attitude here, because I was always questioning how artificial intelligence can actually compete with human intelligence in identifying disinformation campaigns. But in the end I found that there will be excellent extra products produced, because it’s not only about one universal tool which could work in all situations and for all target groups. This tool will assist in, for instance, developing new curricula for schools. These tools will assist in producing special information packages to be used for developing critical thinking, again in different societal groups. They will assist in addressing, for instance, those social groups in societies that are not very keen on using classical information sources, like people in rural areas, who do not read the Financial Times and the Washington Post.
But they also need information on what is happening in the political landscape.
Viktors Makarovs: Thank you very much. Actually, you partially answered the second question I had in mind, but that’s excellent; we’re being very practical here, looking at things to do. Now, the next speaker is joining us remotely: Septiaji Nugroho, co-founder and chairman of Mafindo, a fact-checking organization in Indonesia, and I hope we have him online. Yes, I’m here. Fantastic, Septiaji, nice to hear you again. (Great to see you as well.) The first question I would like you to address is the same one I asked the other speakers: the state of play on AI and information integrity. Where are we today compared to maybe a year ago, when you also joined our discussion here at WSIS, and where is this heading? Please, three, max four minutes. Thank you.
Septiaji Nugroho: Thank you, Mr. Ambassador, and also the Latvian Mission to the UN for inviting me. Basically, even before the AI era we were already working to combat video disinformation. And now we have arrived in the era of AI disinformation, and AI actually brings both immense opportunities and significant challenges. Because AI can accelerate content creation and dissemination, we feel that people are now putting more gas on AI use compared to the brakes that we want to have. So there are several risks that we are already facing. For example, people now ask us whether a given piece of content is synthetic or not, which actually shouldn’t be our problem. It should be the job of the digital platforms in particular to make sure that people understand whether content is synthetic or not, because if this is delegated to the fact-checkers, no fact-checker in the world can cope; there is already so much AI content. And these deepfakes and synthetic media are now becoming really, really realistic. The problem is that not every platform provides a watermark that can be detected. A platform like Google puts in SynthID, and they have also introduced a detector, so we can get help to detect very accurately whether content is synthetic or not. But fact-checkers are also now in a very difficult position. The tech platforms, because of what happened in the United States, are also affecting how fact-checkers operate. So AI problems are definitely going to be a very, very big problem for fact-checkers around the world. Septiaji, just one additional question: do you also register in your work the use of AI to micro-target audiences in a way that perhaps is not really visible to others? Does it show up as an issue? Well, yes, definitely. AI can also do micro-targeting, especially in what we now see with the use of AI for scams, for example. They can target specific people, because they already have databases; they can target elderly people, they can target migrant workers, using very convincing videos and audio. I think this is also a big problem that we are now facing. On the other side, though, we also try to make use of AI. For example, just two weeks ago we launched an AI chatbot to make sure that people can connect to our database well, because before using AI it was quite difficult for people to use our database. Now it is going to be much, much easier. But I feel that the challenge is still much, much bigger than the opportunities that we can have.
Viktors Makarovs: Septiaji, thank you very much. We will come back to you in short order. Now our fourth speaker, last but not least. Graham Brookie, you are Vice President and Senior Director at the Atlantic Council. You lead the technology programs, and it is a sprawling business. But we know that you are an expert on information manipulation and have been in this area for many years, fighting it, obviously. So, what is your very concise take on the AI landscape today and tomorrow?
Graham Brookie: First and foremost, thank you. My job at the Atlantic Council over the last eight years has been to build a thing called the Digital Forensic Research Lab, which is a team of open-source researchers spread out across 17 different countries on four different continents. And so, while my accent would indicate that I am from a US-headquartered organization, our work is very much global. Now, on the question of how AI is changing information integrity, there is, number one, a scoping question; number two, some key findings on narratives themselves; number three, how the tech is changing; and number four, how stakeholders are changing. On the scoping question of how AI is changing the information environment: we’re not just talking about mis- and disinformation. We’re not just talking about what state actors are doing in the information environment. We’re talking about a really, really broad set of potential online harms, including basic scam activity, which was just mentioned, including things like CSAM, including any number of things that happen in online spaces and have negative externalities for society. And so, while we talk most of all about mis- and disinformation in this space, it is part of a much larger ecosystem of problems, or opportunities in some cases, that we’re trying to meet. Now, in terms of what we’re seeing in how AI is changing the information landscape, our research points to four key findings. I will use a case study from the United States, and I would be very, very clear that the United States is a very large media market where a lot of these platforms happen to be headquartered, but it isn’t immune. The AI information ecosystem isn’t immune to vulnerabilities; in fact, it’s rife with them. So I say this with a little bit of humility. The first finding is that, on AI’s impact in the information environment, it’s early days. On how bad actors are navigating or harnessing AI to do bad or manipulative activity in an information environment, we don’t have that much data yet. And I say that while the pace of technological change is very, very rapid. So, to Peggy’s point, we’re talking about platforms that might not be platforms in the very near future, and yet our data sets are still early days. We need to collect more information and more case studies in order to have higher confidence assessments. That’s number one. Number two: in everything that we are monitoring, we are seeing higher amounts of content created by generative artificial intelligence, GAI. In any given case, whether that’s scam activity or election integrity and the information environments around election processes, we are seeing more content that is GAI. Now, the third finding is that this doesn’t necessarily equate to behavior change. There hasn’t been one case that we have seen in any place around the world where something like a deepfake or a single piece of synthetic content led to immediate behavior change. We’ve seen some cases where it’s gotten very, very close, but for the most part, institutions around information integrity have been able to quickly identify and create conversation around that.
The highest and best example of a deepfake coming very near to changing the result of an election, and the case study that has been mentioned most of all, is the Slovakia parliamentary elections. But there hasn’t been a single instance where a deepfake has had an immediate, measurable change on behavior. In addition to that, we would generally agree with the assessment from some of the large platforms like OpenAI that the GAI content we are seeing doesn’t necessarily lead to more engagement on other platforms. To make that more granular: if a piece of content is synthetic, whether a deepfake, a GAI image, or synthetic audio, and it then spreads on other social media platforms, the fact that it was created by artificial intelligence doesn’t necessarily mean that audiences engage with it at a higher pace. And then the fourth finding is a very pessimistic one. What we are seeing from public polling is that audiences are generally more skeptical and less trusting of institutions in general and the information environment specifically. And so we’re in a situation, accelerated or informed by this moment of AI, in which everything is possible and thus nothing is real. Take that from a number of other foreign policy examples. That’s not great for democratic outcomes. It’s not great for multi-stakeholder outcomes. It’s not great for the integrity of the information environment at large. So we’re seeing trust go down just because AI exists, and thus people are a little bit more skeptical of navigating online information environments. The ecosystem changes too. We’re seeing large-scale retrenchment from industry on transparency efforts. We’re seeing less investment in the fields of trust and safety, from governmental institutions and from civil society institutions. And so the landscape is very challenging. And then, from a technical standpoint, I think it remains to be seen whether the actual tech change of AI is increasing the defender’s dividend, that is, whether AI tooling specifically for trust and safety is actually having a net positive impact. What we are seeing is pretty rapid adaptation by bad actors using AI. With state actors, for instance, we are seeing more use of AI in information operations, in particular for things like coding narratives, understanding cultural context, or breaking down language barriers. So we’re seeing rapid adaptation there, and it remains to be seen whether the defender’s dividend ticks up over time.
Viktors Makarovs: Okay, thank you, Graham. That was not a very rosy outlook from you either. Now, we don’t have as much time to address the second, and probably most important, question: what do we do about it? So I would ask each of you to offer a really brief take; if you can do it within two minutes, that would be fantastic. We’ll go in the same order. Peggy, you work with a focus on human rights, in a very important part of the UN. What can your system, so to say, do to help countries and civil societies address this issue? And what are the most important international processes that we as member states and stakeholders should be paying attention to? Again, easy questions.
Peggy Hicks: Yes, in short order. I think there is a lot that’s happening to address these issues, so I can’t give a full overview of it now. But one of the key things that we need to do, and actually AI can help us do this, is to look for good practices where things are being handled in a way that is human rights respecting, does a better job, and incorporates the evidence and data. That’s the real problem here: we tend to think we know the answers to questions without actually having looked at what the real problems are and what works to address them. So bringing in the academic community more successfully, and allowing the good practices to be paralleled in a variety of different geographic and resource environments, is crucial. We are trying to set up something called the Human Rights Digital Advisory Service, as referenced in the Global Digital Compact, which we hope will help us do that and be a real resource, with an academic network behind it, for helping states and businesses to navigate this space. The second piece we’re engaged in, and I think it will be a big part of the conversations in this setting in general around WSIS and Tech for Good, is the role of the companies. The reality is we need to find a way that encourages, incentivizes, and holds accountable companies for the way they’re engaging in this space. We have a project called B-Tech, which is a way to get companies to describe their human rights related practices, for us to pull out the good examples from that, to try to push a race to the top, and at the same time to distinguish between companies that are making the right type of effort.
Viktors Makarovs: Fantastic. And you mentioned academic input. So, Professor Ozolina, what is Latvian academia doing together with civil society? What’s going to be your input to address the issue?
Zaneta Ozolina: Yeah, indeed. Again, this is one of the questions which definitely needs more time. I would divide my answer into two groups, the first of a more general character. When it comes to what we could do, I think it’s very important to balance our attitude towards artificial intelligence. Right now it seems that society is divided into groups: those who are praising the opportunities of artificial intelligence and those who are denying them. It’s not about debating pro or con. Artificial intelligence is here to stay. Therefore, the question is how to balance human intelligence and artificial intelligence. The other point, and here I would like to join Peggy, is that it’s very important to think about new ways to regulate and govern artificial intelligence and the impact it leaves on information integrity. As far as academia is concerned, one very important way to keep human intelligence strong and powerful is simply to spread knowledge and to contribute to knowledge. There is no other way to stay sober in this very digitalized world than to be very well equipped with knowledge. That’s what academia can do. The second point is that academia knows how to communicate with society and with different societal groups, and the faster artificial intelligence and technologies grow, the more communicators and mediators will be needed. We are here to communicate with the younger generation, and to communicate with those who are in need. And the third point I wanted to mention is that today’s discussion is about information integrity, and it’s very important to avoid an information vacuum, because what we very often observe in the public space, in the education system, and in the way the younger generation receives and consumes information has a lot in common with a vacuum. Therefore it’s important for politicians and academia representatives to invest more and more in information integrity that replaces the information vacuum.
Viktors Makarovs: Thank you. Keeping human intelligence strong, that’s quite a challenge. We’ll try to do it. We go now to Septiaji Nugroho, if you’re still with us, for some quick advice. As a leading fact-checker, how are you adapting to the AI age, and what are the lessons and your advice for other fact-checkers and perhaps those who support you? Again, please be very quick, just two minutes if you may. Thank you.
Septiaji Nugroho: Yeah. Hello, good evening. As I said, Mafindo is working in two areas. One is fact-checking. Of course, AI is one way to help fact-checkers pinpoint very quickly the information we are fact-checking. We are also using AI to personalize our content and make our digital literacy education easier. Government is now also speeding up on AI literacy, but we feel that sometimes they work in the wrong direction, because they forget that AI literacy should be accompanied by AI critical literacy, just like when we apply critical thinking to digital literacy. That’s why our role at the moment is helping and assisting as the government initiates the curriculum for coding and programming, and also AI, as early as elementary school. We are now involved in making sure that AI critical literacy is a big part of that, to make sure that students are not only learning about prompting, especially in senior high schools, but also learning how to use Socratic prompting, so that people don’t lose their ability for critical thinking in AI literacy. Thank you.
Viktors Makarovs: Thank you very much, Septiaji. And Graham, by and large the same question goes to you. What can we expect civil society organizations like yours to do to address AI, and how can other stakeholders support you?
Graham Brookie: You can expect us to remain engaged, which is maybe not a particularly novel thing to say, but it is increasingly difficult in this landscape. Right now there is a narrative, including from some partner governments, that any effort to regulate this space, to moderate this space, or to create transparency in this space equates to censorship. That is fundamentally not true, and so we have to address that narrative. And number two, investment in this landscape has retrenched, and we have to drive more investment into these sets of issues in order to have the basic transparency that allows the space for accountability and for rights-respecting and rights-protecting approaches. Here, the prioritization has got to be, first, on transparency for frontier models in this AI moment. Number two, investment in trust and safety, especially by industry and especially as it relates to generative artificial intelligence’s impact on the information environment. Number three, most broadly, investment in the critical institutions that create, protect, and sustain the multi-stakeholder system that has kept things like the Internet open, secure, and interoperable, the things that WSIS absolutely stands for. That’s absolutely critical. And I mean investment in institutions like academic institutions that drive long-form research, fact-checking organizations and independent media that create open dialogue, and civil society organizations that drive technical research.
Viktors Makarovs: Oh, fantastic. Thank you very much. Now, we have about five minutes to address questions. And I have to mention, because we are online as well, that there have been some interesting comments, like one member of the audience writing that we should also recognize the role of libraries and information services and the traditional skills of librarians. There is also a question about the use of blockchain technology to enhance the integrity of information, trust, and transparency; not exactly our topic here, perhaps. And there is a question whether information integrity is essential to mitigate the adverse effects of misinformation and disinformation; I think the answer is obvious, so we’ll perhaps leave it out there for the time being. Let’s take questions from the audience. Finally, I can see one hand there, and you two. Let’s start with you, please. Please introduce yourself, and please, really, a short question.
Audience: Yes. So my name is Ila. (I think you need to speak into the mic.) My name is Ila, I’m with the CDAC Network, Communicating with Disaster-Affected Communities. On your point about information nihilism, as it may or may not relate to Gen-AI: studies show that people’s willingness to believe false or true information is not really connected to the level of realism of the content, but rather to factors like repetition, narrative appeal, perceived authority, and so on, and the viewer’s state of mind. A key element of a long-term, portfolio approach to responding to that would be to fund and support local independent journalism, but realistically funding prospects are bleak for local journalism. So how could Gen-AI be used to provide information that meets those epistemic, social, and psychological needs, say, to help local human rights-based CSOs create counter-narratives to incendiary information? Or should we completely avoid using Gen-AI to create counter-narratives?
Viktors Makarovs: Okay, that’s one question out there, I think this goes to… Let’s take one or two more. I can see the hand over there, please.
Audience: Can you hear me? So hi, my name is Claudio, and I’m a high school student from Romania. I wanted to ask this question because in our very recent presidential elections we actually had a huge disinformation campaign; I think you may have heard about it. In your discourse earlier, I heard a lot about educating the new generation. But what do you do with the people who remain digitally illiterate, the people from the countryside, the older people? Don’t you think there is a need for regulation requiring the AI companies to generate watermarks or something like that? Thank you. Very good question; I would say two of them. But let’s take another one, please. Yes, Boris Engelsson, a freelance journalist. I heard praise of academia’s ability to safeguard some information integrity. A year back, at the University of Geneva’s medical faculty, there was a big symposium where a big shot in this domain claimed that most medical research in the past 50 years is fake, and they quoted the former editor-in-chief of The Lancet, I think, who confirmed that. I would have more questions, but I will stick to that one. So I’m not quite sure I understood your specific question, sir, to this panel. The question is: if even medical faculties, and I am not even considering economics and psychology, which have long been disqualified as sciences, if even medical researchers now confess that they consider most medical research fake, are these people the best source of information integrity just because they are called academics? Oh, that seems to be an interesting one to address.
Viktors Makarovs: Oh, that seems to be an interesting one to address. So we have a question on the regulation of AI again, a question on the use of AI to actually inform and create counter-narratives, and also the question of what to do with audiences that lack AI literacy and AI critical literacy, as we just heard from one of the speakers. Who would like to address these, perhaps very quickly, each of the panelists?
Peggy Hicks: Sure. I think each of the questions has its answer built into it to some extent. I think we have to explore, and we're actually doing some of that work internally, and happy to talk more about it, what type of narrative response is useful. But that's part of what I meant. And I have to say, I think you're absolutely right: there's probably research out there that's not good. But as with anything, we can't just say there isn't any good research; we have to actually look at the research, see how it's done, and vet it to ensure it's solid. One of the things we've learned from some of the research on disinformation is that some of the things we think would be useful to counter it don't work, and sometimes the opposite is true. We need to make sure that we're not actually bolstering disinformation by giving it greater breadth by responding to it in the wrong way. So I really like the idea of how we bring those pieces together and engage more. On your question, I think it's a really valid one: we need to make sure that our education systems, for both younger and older people, are helping people.
Zaneta Ozolina: There is no universal remedy for all the questions which were raised, because very often disinformation, and also information integrity, is country-specific. Regarding your question, we have a very similar problem in Latvia: the elderly generation is not as digitally educated as the younger generation. This is government policy at the moment; this year has been designated the Seniors Digital Year, with special programmes offered at the local level, particularly to senior groups, in order to prepare them for the next rounds of digitalisation. And this, by the way, is to a very large extent executed by librarians and in the libraries. So, as for the question which was raised before about the role of libraries: a very great role, particularly in addressing those target groups which are remotely placed.
Viktors Makarovs: Thank you. Septiaji, if you’re still there, we’ll get back to you for your quick comments. Let’s do it right now. Yes, 60 seconds max, please.
Septiaji Nugroho: Yes, for the first one, I think it is very urgent that we explain that the way we usually read information, vertically, needs to change to reading laterally. That's why we want to make sure that people exercise their freedom of expression in the way they read information, especially by analytically comparing information coming from Gen AI with information from the library and elsewhere. And the second one, about how we target the elderly: Mafindo is now running its elderly digital academy for the third year. We have a specific approach which is different from when we target the young audience, so we would really like to share the experience. If you want to connect with us on how the Indonesian version of the elderly digital academy works, especially also on AI, you can contact me. Thank you.
Viktors Makarovs: Thank you very much, Septiaji. Something very specific there. And Graham, 30 seconds. And then we are finished and we have to leave.
Graham Brookie: I'm happy to go more in depth on these very complex conversations after the session. The advice is very simple: yes, we have to avoid giving disinformation the oxygen of amplification, but you have got to engage better across all of that. Don't be afraid of your own shadow. Engage in the information environment that you have, not the information environment that you want. Thank you very much.
Viktors Makarovs: I have just two things to say. First, thank you very much to the speakers, to the audience, and of course to those online. And second, there is coffee, courtesy of this particular side event, so you're welcome to enjoy it in the break. Thank you very much, and have a nice day.
Peggy Hicks
Speech speed
193 words per minute
Speech length
1296 words
Speech time
401 seconds
AI is rapidly changing the information environment in ways we don’t fully understand yet, with platforms potentially fragmenting and evolving beyond current forms
Explanation
Hicks argues that the information environment is undergoing rapid transformation that we haven't fully grasped, with social media potentially moving toward more fragmented, protocol-based approaches. She emphasizes that platforms themselves may not continue to exist in their current form, making it crucial to address these issues on a firm information basis and at pace.
Evidence
Referenced a podcast discussing social media's potential shift to more fragmented, protocol-based approaches
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Content policy | Human rights principles | Digital business models
Agreed with
– Viktors Makarovs
– Graham Brookie
Agreed on
AI is rapidly transforming the information environment in ways that are not fully understood
AI presents risks through hallucination, deepfakes, biased content moderation, and questionable data provenance that threaten information reliability
Explanation
Hicks identifies multiple AI-related risks including the persistent problem of AI hallucination, the impact of deepfakes, and AI content moderation that may be infused with discriminatory flaws. She also highlights concerns about data provenance and privacy in the AI landscape.
Evidence
Mentioned that hallucination ‘doesn’t seem to be getting any better’ and that AI machine learning generally shows flaws that could fuel discrimination
Major discussion point
Risks and Challenges in the AI Information Landscape
Topics
Human rights principles | Privacy and data protection | Content policy
Government responses tend toward binary solutions that don’t work from a free expression standpoint and can enable censorship of dissent
Explanation
Hicks warns that governments are seeking overly simplistic solutions that treat information as simply good or bad, true or false. She argues this binary approach is problematic because facts are often contested and such tools can be misused by actors to censor speech or suppress dissent.
Evidence
Noted the tendency to look for solutions where ‘information is good or information is bad’ and the belief that companies could ‘flip a switch and solve the issue of disinformation’
Major discussion point
Risks and Challenges in the AI Information Landscape
Topics
Freedom of expression | Human rights principles | Legal and regulatory
AI tools can help understand global situations in deeper, more nuanced ways at scale and in real-time, potentially improving human rights monitoring
Explanation
Hicks sees positive potential in AI for human rights work, noting that these tools can provide access to previously unavailable datasets and enable real-time understanding of global situations. She emphasizes that spotlights on situations tend to make negative human rights outcomes less likely.
Evidence
Mentioned access to ‘data sets that we never did before’ and the principle that ‘when there’s a spotlight on a situation, it tends to be much harder for things to happen in a negative way with regards to human rights’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Human rights principles | Digital access | Interdisciplinary approaches
Agreed with
– Zaneta Ozolina
– Septiaji Nugroho
Agreed on
AI presents both significant risks and potential benefits for information integrity
Need for evidence-based approaches that identify good practices and incorporate academic research rather than assuming solutions
Explanation
Hicks advocates for bringing in the academic community more successfully and allowing good practices to be replicated across different geographic and resource environments. She emphasizes the importance of looking at real problems and what actually works to address them rather than assuming we know the answers.
Evidence
Referenced the Human Rights Digital Advisory Service mentioned in the Global Digital Compact and the B-Tech project for encouraging company accountability
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Human rights principles | Interdisciplinary approaches | Legal and regulatory
Agreed with
– Zaneta Ozolina
– Septiaji Nugroho
Agreed on
Different audiences require tailored approaches to information integrity and AI literacy
Disagreed with
– Septiaji Nugroho
Disagreed on
Role of platforms vs. fact-checkers in synthetic content identification
Zaneta Ozolina
Speech speed
133 words per minute
Speech length
1047 words
Speech time
469 seconds
AI enables more sophisticated disinformation campaigns with well-planned tactics and models, not just random narratives
Explanation
Ozolina’s research found that disinformation campaigns are strategically planned rather than random, with similar tactics and models used across different cases. Her comparative analysis of the War in Ukraine and Climate Change revealed that despite different narratives, the underlying campaign structures and methods are part of well-coordinated efforts.
Evidence
Comparative analysis of the War in Ukraine and Climate Change cases showed that ‘even if narratives could be different, the tactics, how they are applied, and the models which are used are part of very well-planned campaigns’
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Content policy | Cyberconflict and warfare | Human rights principles
Agreed with
– Ivars Pundurs
– Septiaji Nugroho
– Graham Brookie
Agreed on
AI enables sophisticated targeting and manipulation by malicious actors
Different audiences require specially designed approaches to information integrity, as demonstrated by Ukrainian refugees needing trustworthy European information rather than anti-disinformation messaging
Explanation
Ozolina discovered that Ukrainian refugees, primarily women with children fleeing war, already understood Russian disinformation and didn’t need convincing about its role. Instead, they needed accessible, trustworthy information about European developments in languages they could understand, highlighting how information integrity needs vary by audience.
Evidence
Interview findings with Ukrainian refugees showed they ‘know what war is about and what Russia is up to’ but were ‘lacking information about what is happening in Europe, in language which they could understand, information sources they could trust’
Major discussion point
Risks and Challenges in the AI Information Landscape
Topics
Human rights principles | Multilingualism | Cultural diversity
Agreed with
– Septiaji Nugroho
– Peggy Hicks
Agreed on
Different audiences require tailored approaches to information integrity and AI literacy
AI can assist in developing educational curricula, information packages for critical thinking, and reaching underserved social groups like rural populations
Explanation
Ozolina found that AI tools can help create educational materials and information packages tailored for different societal groups, particularly those who don’t consume traditional information sources. The tools can assist in developing critical thinking resources for schools and reaching rural populations who may not read mainstream publications.
Evidence
Mentioned that tools will help develop ‘new curricula for schools’ and ‘special information packages’ for ‘social groups and societies that are not very keen on using classical information sources’ like ‘people in rural areas [who] do not read the Financial Times and the Washington Post’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Online education | Digital access | Inclusive finance
Agreed with
– Peggy Hicks
– Septiaji Nugroho
Agreed on
AI presents both significant risks and potential benefits for information integrity
Importance of balancing human intelligence with artificial intelligence rather than taking pro or con positions
Explanation
Ozolina argues that society is unnecessarily divided between those praising AI and those denying it, when the real issue is how to balance human and artificial intelligence. She emphasizes that AI is here to stay, so the focus should be on integration rather than opposition.
Evidence
Observed that ‘society is divided into groups: those who are praising the opportunities of artificial intelligence and those who are denying them’ and stated ‘Artificial intelligence is here to stay’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Interdisciplinary approaches | Human rights principles | Online education
Academia must focus on spreading knowledge, communicating with different societal groups, and filling information vacuums
Explanation
Ozolina identifies three key roles for academia: maintaining strong human intelligence through knowledge dissemination, serving as communicators and mediators with various societal groups, and addressing information vacuums in public spaces and education systems. She emphasizes that faster technological growth requires more communicators and mediators.
Evidence
Noted that ‘the faster artificial intelligence and technologies are growing, the more communicators and mediators will be needed’ and observed an ‘information vacuum’ in ‘public space, in the education system, in the way the younger generation receives and consumes information’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Online education | Interdisciplinary approaches | Cultural diversity
Disagreed with
– Audience
Disagreed on
Trust in academic institutions as guardians of information integrity
Special programs are needed for digitally less educated elderly populations, often implemented through libraries at the local level
Explanation
Ozolina explains that Latvia has designated this year as ‘Seniors Digital Year’ with special programs offered locally to prepare elderly populations for digitalization. She highlights the important role of librarians and libraries in executing these programs for remote and elderly target groups.
Evidence
Mentioned Latvia’s ‘Seniors Digital Year’ with ‘special programmes offered on the local level, particularly to senior groups’ and noted ‘this, by the way, to a very large extent, is executed by librarians and in the libraries’
Major discussion point
Addressing Different Audiences and Digital Divides
Topics
Digital access | Online education | Capacity development
Septiaji Nugroho
Speech speed
137 words per minute
Speech length
816 words
Speech time
356 seconds
AI accelerates content creation and dissemination, with people applying more ‘gas’ than ‘brakes’ to AI adoption
Explanation
Nugroho observes that AI significantly speeds up content creation and distribution, but people are rushing to adopt AI technology without adequate caution or restraint. This imbalance between acceleration and careful consideration creates risks in the information environment.
Evidence
Stated that ‘people now put more gas on AI use compared to the brakes that we want to have’
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Content policy | Digital business models | Human rights principles
AI enables micro-targeting of specific audiences like elderly people and migrant workers for scams and manipulation
Explanation
Nugroho identifies AI’s capability to target specific demographic groups with tailored deceptive content as a significant problem. The technology allows bad actors to create convincing videos and audio specifically designed to exploit vulnerable populations like the elderly and migrant workers.
Evidence
Mentioned that AI can ‘target specific people because they already have a database; they can target elderly people, they can target migrant workers, using very convincing videos and audio’
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Consumer protection | Human rights principles | Cybercrime
Agreed with
– Ivars Pundurs
– Zaneta Ozolina
– Graham Brookie
Agreed on
AI enables sophisticated targeting and manipulation by malicious actors
Fact-checkers now face impossible demands to verify whether content is synthetic, which should be platforms’ responsibility through proper watermarking
Explanation
Nugroho argues that the public is increasingly asking fact-checkers to determine if content is AI-generated, but this task should fall to digital platforms through adequate watermarking systems. He emphasizes that no fact-checking organization can handle the volume of AI content being produced.
Evidence
Noted that ‘people are asking us whether this content is synthetic or not, which actually shouldn’t be our problem’ and that ‘no fact-checkers in the world can face this; there is so much AI content already’
Major discussion point
Risks and Challenges in the AI Information Landscape
Topics
Content policy | Liability of intermediaries | Digital standards
Disagreed with
– Peggy Hicks
Disagreed on
Role of platforms vs. fact-checkers in synthetic content identification
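To make the watermarking argument concrete, a minimal sketch of generation-time provenance labelling follows, loosely in the spirit of emerging standards such as C2PA. Everything in it is illustrative: the manifest fields, the HMAC-based signing (real provenance schemes use certificate-based signatures), and the shared key are assumptions, not any platform's actual mechanism.

```python
import hashlib
import hmac
import json

# Hypothetical shared key standing in for real public-key signing
# (standards such as C2PA use certificate-based signatures instead).
PROVIDER_KEY = b"demo-provider-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Generator side: wrap AI output with a signed provenance manifest."""
    manifest = {
        "generator": generator,  # e.g. the model or tool name
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> str:
    """Platform side: label content instead of asking fact-checkers to guess."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", None)
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, "sha256").hexdigest()
    if signature != expected:
        return "unverified: manifest missing or tampered with"
    if claimed["content_sha256"] != hashlib.sha256(content).hexdigest():
        return "unverified: content altered after generation"
    return f"AI-generated (declared by {claimed['generator']})"

if __name__ == "__main__":
    video = b"...synthetic video bytes..."
    manifest = attach_provenance(video, generator="example-model")
    print(verify_provenance(video, manifest))
    print(verify_provenance(b"edited bytes", manifest))
```

The sketch also exposes the limitation implicit in Nugroho's position: verification only works for content whose manifest survives intact, which is why he places the burden on platforms and AI providers rather than on fact-checkers.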
AI chatbots can make fact-checking databases more accessible to the public, as demonstrated by recent implementations
Explanation
Nugroho describes how his organization launched an AI chatbot system to help people better access their fact-checking database. This represents a positive application of AI technology that makes information verification more user-friendly and accessible than previous systems.
Evidence
Mentioned they ‘just launched, about two weeks ago, an AI chatbot to make sure that people can connect to our database well, because before using AI it was quite difficult for people to use our database’
Major discussion point
Opportunities and Positive Applications of AI
Topics
Digital access | Content policy | Online education
Agreed with
– Peggy Hicks
– Zaneta Ozolina
Agreed on
AI presents both significant risks and potential benefits for information integrity
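As a rough illustration of what such a chatbot does under the hood, the sketch below implements only the retrieval core: matching a user's question against stored fact-checks and returning the verdict conversationally. The three database entries, the similarity heuristic, and the reply format are invented for illustration; the session does not describe Mafindo's actual system, which would typically use embeddings and a language model rather than string matching.

```python
from difflib import SequenceMatcher

# Toy stand-in for a real fact-checking database (entries invented for illustration).
FACT_CHECKS = [
    {"claim": "Drinking hot water cures the flu",
     "verdict": "False", "source": "factcheck.example/101"},
    {"claim": "The election results were announced a day early",
     "verdict": "Misleading", "source": "factcheck.example/102"},
    {"claim": "A new banknote will be issued next month",
     "verdict": "True", "source": "factcheck.example/103"},
]

def similarity(a: str, b: str) -> float:
    """Crude textual similarity; production systems would use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def answer(user_question: str, threshold: float = 0.5) -> str:
    """Return the closest fact-check as a chat-style reply, if one is close enough."""
    best = max(FACT_CHECKS, key=lambda fc: similarity(user_question, fc["claim"]))
    if similarity(user_question, best["claim"]) < threshold:
        return "I couldn't find a matching fact-check; a human checker may need to look."
    return (f"A similar claim (\"{best['claim']}\") was rated "
            f"{best['verdict']}. Details: {best['source']}")

if __name__ == "__main__":
    print(answer("Is it true hot water can cure flu?"))
```

The design point matches Nugroho's account: the AI layer does not produce new verdicts, it only makes the existing human-verified database easier to query.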
AI literacy education must be accompanied by AI critical literacy, including Socratic prompting techniques to maintain critical thinking abilities
Explanation
Nugroho advocates for comprehensive AI education that goes beyond basic prompting skills to include critical thinking about AI use. He emphasizes the importance of Socratic prompting methods to ensure people don’t lose their ability to think critically when using AI tools.
Evidence
Mentioned involvement in curriculum development to ensure ‘AI critical literacy is a big part of that’ and teaching ‘Socratic prompting to make sure that people don’t lose their ability for critical thinking’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Online education | Critical thinking | Capacity development
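Socratic prompting, as referenced here, means shaping the prompt so the model asks guiding questions instead of supplying conclusions. A minimal sketch of the idea follows; the prompt wording and the role/content message format are illustrative assumptions, since the curriculum Nugroho mentions is not quoted in the session.

```python
# A hedged illustration of "Socratic prompting": the system prompt steers the
# model to question the user rather than hand over a finished answer.
SOCRATIC_SYSTEM_PROMPT = (
    "You are a Socratic tutor. Never state the final answer outright. "
    "Respond with at most three guiding questions that push the user to "
    "examine their assumptions, check sources, and reason step by step."
)

def build_messages(user_question: str) -> list:
    """Assemble a chat request in the widely used role/content message format."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

if __name__ == "__main__":
    # The resulting messages can be passed to any chat-completion API;
    # no specific provider is assumed here.
    for message in build_messages("Is this viral video of the minister real?"):
        print(f"{message['role']}: {message['content']}")
```

The point, in Nugroho's framing, is that the prompt shape itself preserves the user's critical-thinking role rather than outsourcing it to the model.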
Different approaches are required for elderly audiences compared to young people, such as specialized digital academies
Explanation
Nugroho explains that his organization runs an ‘elderly digital academy’ with approaches specifically tailored for older audiences, recognizing that effective digital literacy programs must be adapted to different age groups and their unique needs and learning styles.
Evidence
Mentioned running ‘the third year as we have an elderly digital academy’ with ‘a specific approach which is different when we target the young audience’
Major discussion point
Addressing Different Audiences and Digital Divides
Topics
Digital access | Online education | Capacity development
Agreed with
– Zaneta Ozolina
– Peggy Hicks
Agreed on
Different audiences require tailored approaches to information integrity and AI literacy
Information consumption patterns need to change from vertical to lateral reading, especially when comparing AI-generated and traditional information sources
Explanation
Nugroho advocates for a fundamental shift in how people consume information, moving from traditional vertical reading patterns to lateral reading that involves comparing and analyzing information from multiple sources, particularly when dealing with AI-generated content versus traditional sources.
Evidence
Stated that ‘the way we usually read information, vertically, needs to be changed to reading laterally’ and emphasized ‘analytically comparing information coming from Gen AI with information from the library and everything’
Major discussion point
Addressing Different Audiences and Digital Divides
Topics
Online education | Content policy | Critical thinking
Graham Brookie
Speech speed
150 words per minute
Speech length
1299 words
Speech time
519 seconds
While AI-generated content is increasing across all monitored cases, there hasn’t been documented behavior change from single deepfakes or synthetic content
Explanation
Brookie reports that while his organization sees more AI-generated content in all areas they monitor (scams, elections, etc.), there hasn’t been a single documented case where a deepfake or synthetic content piece led to immediate measurable behavior change. He notes they’ve seen cases that came close, particularly in Slovakia’s parliamentary elections.
Evidence
Mentioned ‘There hasn’t been one case that we have seen in any place around the world where something like a deepfake or a single piece of synthetic content led to immediate behavior change’ and referenced ‘Slovakia parliamentary elections’ as the closest example
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Content policy | Human rights principles | Cyberconflict and warfare
Agreed with
– Peggy Hicks
– Viktors Makarovs
Agreed on
AI is rapidly transforming the information environment in ways that are not fully understood
AI is being rapidly adopted by bad actors, particularly state actors, for coding narratives and understanding cultural context in information operations
Explanation
Brookie observes that malicious actors, especially state-sponsored ones, are quickly adapting AI tools for information warfare purposes. They’re using AI for developing narrative frameworks and breaking down cultural and language barriers in their operations.
Evidence
Noted ‘rapid adaptation by bad actors using A.I.’ and specifically mentioned state actors using AI ‘for things like coding narratives, for things like understanding cultural context or breaking down language barriers’
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Cyberconflict and warfare | Content policy | Human rights principles
Agreed with
– Ivars Pundurs
– Zaneta Ozolina
– Septiaji Nugroho
Agreed on
AI enables sophisticated targeting and manipulation by malicious actors
Public polling shows audiences are generally more skeptical and less trusting of institutions, creating an ‘everything is possible, nothing is real’ environment
Explanation
Brookie presents a pessimistic finding that the mere existence of AI is making people more skeptical of all information and less trusting of institutions generally. This creates a problematic environment where people doubt everything, which is harmful to democratic processes and multi-stakeholder cooperation.
Evidence
Referenced ‘public polling’ showing ‘audiences are generally more skeptical and less trusting of institutions’ and described the situation as ‘everything is possible and thus nothing is real’
Major discussion point
Risks and Challenges in the AI Information Landscape
Topics
Human rights principles | Content policy | Freedom of expression
There’s large-scale retrenchment from industry transparency efforts and reduced investment in trust and safety fields
Explanation
Brookie identifies a concerning trend where there’s decreased investment in trust and safety measures across governmental institutions, civil society, and industry. He also notes a narrative that equates any regulation or moderation efforts with censorship, which complicates efforts to address AI-related information problems.
Evidence
Mentioned ‘large-scale retrenchment from industry and transparency efforts’ and ‘less investment in fields of trust and safety from governmental institutions, from civil society institutions’
Major discussion point
Risks and Challenges in the AI Information Landscape
Topics
Legal and regulatory | Content policy | Human rights principles
Need for transparency in frontier AI models, investment in trust and safety, and support for critical institutions that sustain the multi-stakeholder system
Explanation
Brookie outlines three key priorities for addressing AI information challenges: ensuring transparency in advanced AI models, increasing investment in trust and safety measures (especially by industry), and supporting the institutions that maintain the open, secure, and interoperable internet that WSIS represents.
Evidence
Specifically mentioned ‘transparency for frontier models’, ‘trust and safety, an investment in trust and safety, especially by industry’ and ‘investment in the critical institutions that create and protect and sustain the multi-stakeholder system’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Legal and regulatory | Digital standards | Critical internet resources
Ivars Pundurs
Speech speed
111 words per minute
Speech length
595 words
Speech time
321 seconds
Latvia has established a national AI center and this represents the third consecutive year of discussions on AI’s impact on information environment
Explanation
Pundurs explains that Latvia has created a national AI center that brings together public and private sectors along with academia to foster rapid AI adoption. He notes this is the third year Latvia has organized discussions specifically focused on how artificial intelligence affects the information environment.
Evidence
Mentioned ‘Latvia has entered the AI race. We have established a national AI center that brings together the public and private sectors, as well as academia’ and ‘This is the third consecutive year that Latvia has convened a discussion on how artificial intelligence affects our information environment’
Major discussion point
Opening Remarks and Context Setting
Topics
Capacity development | Digital access | Interdisciplinary approaches
The discussion addresses the intersection of AI advancement and information integrity as a critical challenge requiring collective action
Explanation
Pundurs frames the discussion within the context of the WSIS Plus 20 review, noting that while the original WSIS action lines from two decades ago remain relevant, the transformative changes in recent years have made information integrity a distinct and critical challenge that requires focused attention and collective international action.
Evidence
Referenced the ‘WSIS Plus 20 review’ and noted that ‘information integrity has emerged as a distinct and critical challenge requiring focused attention and collective action’
Major discussion point
Opening Remarks and Context Setting
Topics
Human rights principles | Content policy | Legal and regulatory
State actors are using AI to manipulate information and conduct surveillance, with Russia’s AI-driven narratives about Ukraine as a stark example
Explanation
Pundurs identifies the malicious use of AI by state actors as a major threat, specifically highlighting how these actors use AI to manipulate information, shape minds and behavior, conduct surveillance, and suppress dissent. He cites Russia’s use of AI-driven tools to spread narratives justifying its war against Ukraine as a concrete example.
Evidence
Specifically mentioned ‘Russia’s use of AI-driven tools to spread narratives aimed at justifying its unprovoked war of aggression against Ukraine, a war that flagrantly violates international law’
Major discussion point
Opening Remarks and Context Setting
Topics
Cyberconflict and warfare | Human rights principles | Content policy
Agreed with
– Zaneta Ozolina
– Septiaji Nugroho
– Graham Brookie
Agreed on
AI enables sophisticated targeting and manipulation by malicious actors
Viktors Makarovs
Speech speed
134 words per minute
Speech length
1445 words
Speech time
643 seconds
AI adoption has increased exponentially with ChatGPT alone reaching 800 million weekly users, representing a four-fold increase since last year
Explanation
Makarovs highlights the dramatic growth in AI adoption by citing specific user statistics for ChatGPT. He uses this data to demonstrate how rapidly the technological landscape has changed since their previous discussion on this topic.
Evidence
ChatGPT has 800 million weekly users today, about a four-fold increase since last year, when the first installment of this discussion took place
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Digital business models | Digital access | Content policy
We are blindly driving into a fog regarding AI’s impact on our information world and epistemology
Explanation
Makarovs quotes AI researcher Yoshua Bengio to characterize the current state of uncertainty about AI’s effects on how we understand and process information. This metaphor emphasizes the lack of clear understanding about where AI developments are leading us in terms of information integrity.
Evidence
Referenced Yoshua Bengio’s statement that ‘we’re blindly driving into a fog’ in relation to AI risks
Major discussion point
AI’s Impact on Information Environment and Integrity
Topics
Human rights principles | Content policy | Interdisciplinary approaches
Agreed with
– Peggy Hicks
– Graham Brookie
Agreed on
AI is rapidly transforming the information environment in ways that are not fully understood
Information integrity represents a juncture of AI and the concept of a global information environment that is open, free, but also safe and secure
Explanation
Makarovs defines information integrity as the intersection of two critical contemporary issues: AI development and the need for an information environment that balances openness and freedom with safety and security. He frames this as a key challenge requiring examination of how these phenomena interact.
Evidence
Described information integrity as ‘a very important though recent and not quite well-known idea of an information environment that is global, open, free, but at the same time safe and secure’
Major discussion point
Opening Remarks and Context Setting
Topics
Human rights principles | Content policy | Freedom of expression
Audience
Speech speed
143 words per minute
Speech length
445 words
Speech time
186 seconds
People’s willingness to believe false or true information is not connected to content realism but to factors like repetition, narrative appeal, and perceived authority
Explanation
An audience member from the CDAC Network argues that information believability is not determined by how realistic content appears, but rather by psychological and social factors. This challenges assumptions about how people process information in the AI era.
Evidence
Referenced studies showing that belief in information is influenced by ‘repetition, narrative appeal, perceived authority, etc.’ and ‘the viewer’s state of mind’
Major discussion point
Addressing Different Audiences and Digital Divides
Topics
Content policy | Human rights principles | Interdisciplinary approaches
Funding prospects for local independent journalism are bleak, creating challenges for countering disinformation
Explanation
The same audience member identifies the financial crisis facing local journalism as a critical problem for maintaining information integrity. They suggest this funding shortage undermines the ability to provide authoritative, local information that could counter false narratives.
Evidence
Stated that ‘realistically funding prospects are bleak for local journalism’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Freedom of the press | Digital business models | Content policy
There is a need for regulation requiring AI companies to implement watermarking, especially for digitally illiterate populations
Explanation
A Romanian high school student argues that while education is important for younger generations, regulatory measures are needed to protect older and rural populations who lack digital literacy. They suggest watermarking requirements for AI companies as a specific solution.
Evidence
Referenced Romania’s recent presidential elections with ‘a huge disinformation campaign’ and asked about ‘regulation requiring the AI companies to generate watermarks’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Legal and regulatory | Digital access | Consumer protection
Academic credibility in information integrity is questionable given admissions that most medical research in the past 50 years may be fake
Explanation
A journalist challenges the panel’s reliance on academia as a source of information integrity by citing claims from medical faculty that much medical research has been falsified. This argument questions whether academics are the best guardians of information integrity.
Evidence
Referenced ‘a big symposium with a big shot in this domain claiming that most medical research in the past 50 years is fake’ and quoted ‘the former editor-in-chief of The Lancet’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Interdisciplinary approaches | Human rights principles | Content policy
Disagreed with
– Zaneta Ozolina
Disagreed on
Trust in academic institutions as guardians of information integrity
Libraries and librarians should be recognized for their traditional skills in information integrity and their role in supporting digital literacy programs
Explanation
An online audience member emphasizes the importance of libraries and librarians in maintaining information integrity, highlighting their traditional expertise in information verification and organization. This argument advocates for recognizing existing institutional knowledge in addressing new AI challenges.
Evidence
Comment mentioned ‘the role of libraries and information services and the traditional skills of librarians’
Major discussion point
Solutions and Responses to AI Information Challenges
Topics
Online education | Digital access | Capacity development
Agreements
Agreement points
AI is rapidly transforming the information environment in ways that are not fully understood
Speakers
– Peggy Hicks
– Viktors Makarovs
– Graham Brookie
Arguments
AI is rapidly changing the information environment in ways we don’t fully understand yet, with platforms potentially fragmenting and evolving beyond current forms
We are blindly driving into a fog regarding AI’s impact on our information world and epistemology
While AI-generated content is increasing across all monitored cases, there hasn’t been documented behavior change from single deepfakes or synthetic content
Summary
All speakers acknowledge that AI is fundamentally changing how information is created, distributed, and consumed, but emphasize that we lack complete understanding of these changes and their implications
Topics
Content policy | Human rights principles | Interdisciplinary approaches
AI enables sophisticated targeting and manipulation by malicious actors
Speakers
– Ivars Pundurs
– Zaneta Ozolina
– Septiaji Nugroho
– Graham Brookie
Arguments
State actors are using AI to manipulate information and conduct surveillance, with Russia’s AI-driven narratives about Ukraine as a stark example
AI enables more sophisticated disinformation campaigns with well-planned tactics and models, not just random narratives
AI enables micro-targeting of specific audiences like elderly people and migrant workers for scams and manipulation
AI is being rapidly adopted by bad actors, particularly state actors, for coding narratives and understanding cultural context in information operations
Summary
Speakers agree that AI significantly enhances the capabilities of malicious actors to create targeted, sophisticated information manipulation campaigns
Topics
Cyberconflict and warfare | Content policy | Human rights principles
Different audiences require tailored approaches to information integrity and AI literacy
Speakers
– Zaneta Ozolina
– Septiaji Nugroho
– Peggy Hicks
Arguments
Different audiences require specially designed approaches to information integrity, as demonstrated by Ukrainian refugees needing trustworthy European information rather than anti-disinformation messaging
Different approaches are required for elderly audiences compared to young people, such as specialized digital academies
Need for evidence-based approaches that identify good practices and incorporate academic research rather than assuming solutions
Summary
Speakers emphasize that effective responses to AI and information integrity challenges must be customized for specific demographic groups and their unique needs
Topics
Online education | Digital access | Human rights principles
AI presents both significant risks and potential benefits for information integrity
Speakers
– Peggy Hicks
– Zaneta Ozolina
– Septiaji Nugroho
Arguments
AI tools can help understand global situations in deeper, more nuanced ways at scale and in real-time, potentially improving human rights monitoring
AI can assist in developing educational curricula, information packages for critical thinking, and reaching underserved social groups like rural populations
AI chatbots can make fact-checking databases more accessible to the public, as demonstrated by recent implementations
Summary
While acknowledging serious risks, speakers agree that AI can be leveraged positively for education, accessibility, and human rights monitoring when properly implemented
Topics
Online education | Digital access | Human rights principles
Similar viewpoints
Both speakers express concern about inadequate institutional responses to AI challenges, with governments seeking overly simplistic solutions and industry reducing investment in safety measures
Speakers
– Peggy Hicks
– Graham Brookie
Arguments
Government responses tend toward binary solutions that don’t work from a free expression standpoint and can enable censorship of dissent
There’s large-scale retrenchment from industry transparency efforts and reduced investment in trust and safety fields
Topics
Legal and regulatory | Human rights principles | Freedom of expression
Both speakers advocate for balanced, critical approaches to AI education that maintain human cognitive abilities while leveraging AI benefits
Speakers
– Zaneta Ozolina
– Septiaji Nugroho
Arguments
Importance of balancing human intelligence with artificial intelligence rather than taking pro or con positions
AI literacy education must be accompanied by AI critical literacy, including Socratic prompting techniques to maintain critical thinking abilities
Topics
Online education | Critical thinking | Interdisciplinary approaches
Multiple speakers recognize that elderly and digitally illiterate populations require special attention and targeted interventions to address AI-related information challenges
Speakers
– Zaneta Ozolina
– Septiaji Nugroho
– Audience
Arguments
Special programs are needed for digitally less educated elderly populations, often implemented through libraries at the local level
Different approaches are required for elderly audiences compared to young people, such as specialized digital academies
There is a need for regulation requiring AI companies to implement watermarking, especially for digitally illiterate populations
Topics
Digital access | Online education | Legal and regulatory
Unexpected consensus
The limited immediate behavioral impact of AI-generated content despite increased production
Speakers
– Graham Brookie
– Peggy Hicks
Arguments
While AI-generated content is increasing across all monitored cases, there hasn’t been documented behavior change from single deepfakes or synthetic content
AI presents risks through hallucination, deepfakes, biased content moderation, and questionable data provenance that threaten information reliability
Explanation
Despite widespread concern about AI-generated content, there’s surprising agreement that single pieces of synthetic content haven’t yet caused measurable behavior change, suggesting the threat may be more about cumulative effects and trust erosion rather than immediate manipulation
Topics
Content policy | Human rights principles | Cyberconflict and warfare
The critical role of traditional institutions like libraries in addressing AI challenges
Speakers
– Zaneta Ozolina
– Audience
Arguments
Special programs are needed for digitally less educated elderly populations, often implemented through libraries at the local level
Libraries and librarians should be recognized for their traditional skills in information integrity and their role in supporting digital literacy programs
Explanation
There’s unexpected consensus that traditional information institutions like libraries are crucial for addressing modern AI challenges, highlighting how established information literacy skills remain relevant in the digital age
Topics
Online education | Digital access | Capacity development
Overall assessment
Summary
Speakers demonstrate strong consensus on the fundamental challenges posed by AI to information integrity, the need for tailored educational approaches, and the dual nature of AI as both threat and opportunity. There’s also agreement on the inadequacy of current institutional responses.
Consensus level
High level of consensus on problem identification and general solution directions, with speakers complementing rather than contradicting each other. This suggests a mature understanding of the issues and potential for coordinated policy responses, though implementation details may require further discussion.
Differences
Different viewpoints
Role of platforms vs. fact-checkers in synthetic content identification
Speakers
– Septiaji Nugroho
– Peggy Hicks
Arguments
Fact-checkers now face impossible demands to verify whether content is synthetic, which should be platforms’ responsibility through proper watermarking
Need for evidence-based approaches that identify good practices and incorporate academic research rather than assuming solutions
Summary
Nugroho argues that platforms should bear responsibility for identifying synthetic content through watermarking, while Hicks emphasizes the need for evidence-based approaches and good practices rather than assuming technological solutions will work
Topics
Content policy | Liability of intermediaries | Digital standards
Trust in academic institutions as guardians of information integrity
Speakers
– Zaneta Ozolina
– Audience
Arguments
Academia must focus on spreading knowledge, communicating with different societal groups, and filling information vacuums
Academic credibility in information integrity is questionable given admissions that most medical research in the past 50 years may be fake
Summary
Ozolina advocates for academia’s central role in maintaining information integrity through knowledge dissemination, while an audience member challenges academic credibility by citing concerns about falsified research
Topics
Interdisciplinary approaches | Human rights principles | Content policy
Unexpected differences
Effectiveness of AI tools in combating disinformation
Speakers
– Zaneta Ozolina
– Septiaji Nugroho
Arguments
AI can assist in developing educational curricula, information packages for critical thinking, and reaching underserved social groups like rural populations
AI chatbots can make fact-checking databases more accessible to the public, as demonstrated by recent implementations
Explanation
While both speakers acknowledge AI’s potential benefits, Ozolina was initially skeptical about AI competing with human intelligence in identifying disinformation but became convinced of its utility for educational tools. Nugroho, while implementing AI solutions, emphasizes that challenges are ‘much, much bigger than the opportunities.’ This represents an unexpected nuanced disagreement about AI’s net benefit despite both using it practically
Topics
Content policy | Online education | Interdisciplinary approaches
Overall assessment
Summary
The discussion revealed relatively low levels of fundamental disagreement among speakers, with most conflicts centered on implementation approaches rather than core principles. Key areas of disagreement included the division of responsibility between platforms and fact-checkers, the role of academic institutions in information integrity, and the relative balance of AI’s benefits versus risks.
Disagreement level
Moderate disagreement with significant implications – while speakers generally agreed on the problems and broad solution categories (education, regulation, transparency), their different approaches to implementation could lead to conflicting policy recommendations. The disagreement about academic credibility is particularly significant as it challenges a foundational assumption about expertise and authority in information verification.
Takeaways
Key takeaways
AI is rapidly transforming the information environment in ways not yet fully understood, creating both opportunities and significant risks to information integrity
While AI-generated content is increasing across all monitored cases, there hasn’t been documented behavior change from single deepfakes or synthetic content pieces
Different audiences require specially designed approaches to information integrity – there is no universal solution
AI literacy education must be accompanied by AI critical literacy to maintain human critical thinking abilities
The challenge requires balancing human intelligence with artificial intelligence rather than taking binary pro/con positions
Evidence-based approaches incorporating academic research are essential rather than assuming solutions
Public trust in institutions and information sources is declining, creating an ‘everything is possible, nothing is real’ environment
Investment in trust and safety, transparency for frontier AI models, and support for critical institutions is crucial
Resolutions and action items
UN Human Rights Office to develop a Human Rights Digital Advisory Service as referenced in the Global Digital Compact to help states and businesses navigate AI challenges
Continue the B-Tech project to encourage companies to describe their human rights practices and promote best practices
Academia to focus on spreading knowledge, communicating with different societal groups, and filling information vacuums
Fact-checking organizations to adapt by using AI tools to connect databases with users and develop educational content
Civil society organizations to remain engaged despite challenges and drive investment in transparency and trust and safety
Governments to implement specialized programs for digitally less educated populations, particularly elderly groups, often through libraries
Unresolved issues
How to effectively regulate AI companies to ensure proper watermarking and content identification
What to do with digitally illiterate populations, particularly elderly and rural communities
How to use AI to create effective counter-narratives without amplifying disinformation
Whether AI tooling for trust and safety is having a net positive impact (defender’s dividend unclear)
How to address the fundamental challenge that platforms may not exist in their current form in the near future
How to balance free expression concerns with the need to address AI-generated disinformation
How to maintain institutional trust while addressing legitimate concerns about AI-generated content
Suggested compromises
Avoid binary solutions to information problems and instead focus on evidence-based approaches that respect free expression
Combine AI literacy with AI critical literacy education rather than focusing solely on technical skills
Use AI tools to assist rather than replace human judgment in fact-checking and content verification
Engage in the information environment as it exists rather than as we want it to be
Focus on transparency and accountability measures for companies rather than outright restrictions
Develop specialized approaches for different target audiences rather than one-size-fits-all solutions
Thought provoking comments
We’re blindly driving into a fog, and one of the areas where this seems to be true is the impact of AI on our information world, on the epistemology of the world
Speaker
Viktors Makarovs
Reason
This metaphor effectively captures the fundamental uncertainty and philosophical implications of AI’s impact on how we understand and process knowledge itself. It frames the discussion not just as a technical challenge but as an epistemological crisis affecting the very foundations of how we know what we know.
Impact
This framing elevated the discussion from technical concerns to deeper philosophical questions about truth and knowledge, setting the stage for speakers to address both immediate risks and fundamental challenges to information integrity.
We can’t address the problems of yesterday rather than today… We need to address these issues on a firm information basis… But also at pace
Speaker
Peggy Hicks
Reason
This highlights a critical paradox in policy-making around rapidly evolving technology – the need for evidence-based responses while moving quickly enough to remain relevant. It challenges the traditional approach of thorough study before action.
Impact
This comment established a tension that ran throughout the discussion between the need for careful research and the urgency of the AI transformation, influencing how other speakers balanced immediate concerns with longer-term solutions.
Everything is possible and thus nothing is real… We’re seeing trust go down just because AI exists and thus people are a little bit more skeptical of navigating online information environments
Speaker
Graham Brookie
Reason
This captures a profound psychological and social consequence of AI – that its mere existence creates a crisis of epistemic confidence even before specific harms occur. It identifies ‘information nihilism’ as perhaps more dangerous than specific disinformation.
Impact
This insight shifted the discussion from focusing solely on technical solutions to addressing the broader erosion of trust in information systems, leading other speakers to consider psychological and social dimensions of the problem.
There hasn’t been one case that we have seen in any place around the world where something like a deepfake or a single piece of synthetic content led to immediate behavior change
Speaker
Graham Brookie
Reason
This empirical observation challenges common assumptions about AI’s immediate impact on behavior, suggesting that fears about deepfakes may be overblown while the real damage is more subtle and systemic.
Impact
This finding provided important nuance to the discussion, helping ground fears in actual evidence and redirecting attention from spectacular individual cases to systemic effects on trust and information processing.
It’s not about debating pro or con. Artificial intelligence is here to stay. So therefore, the question is how to balance human intelligence and artificial intelligence
Speaker
Zaneta Ozolina
Reason
This reframes the entire debate from resistance versus acceptance to integration and balance, moving beyond binary thinking to focus on practical coexistence strategies.
Impact
This perspective helped shift the discussion from defensive measures against AI to proactive strategies for human-AI collaboration, influencing how other speakers approached solutions and adaptation strategies.
AI literacy should be accompanied with AI critical literacy, just like when we do critical thinking on digital literacy
Speaker
Septiaji Nugroho
Reason
This distinguishes between technical AI skills and critical thinking about AI, highlighting that teaching people to use AI tools is insufficient without teaching them to question and evaluate AI outputs.
Impact
This insight influenced the discussion of educational approaches, emphasizing that solutions must go beyond technical training to include critical thinking skills, which other speakers then incorporated into their recommendations.
There is no universal remedy to all those questions which were raised, because very often disinformation and also information integrity is country-specific
Speaker
Zaneta Ozolina
Reason
This challenges the assumption that global problems require universal solutions, emphasizing the importance of local context, culture, and specific vulnerabilities in addressing information integrity.
Impact
This observation helped ground the discussion in practical realities, leading speakers to consider how solutions must be adapted to different contexts rather than seeking one-size-fits-all approaches.
Overall assessment
These key comments collectively transformed what could have been a technical discussion about AI tools into a nuanced exploration of epistemological, psychological, and social challenges. The most impactful insights reframed fundamental assumptions – moving from viewing AI as a problem to be solved to understanding it as a reality requiring adaptation, from focusing on spectacular individual harms to recognizing systemic erosion of trust, and from seeking universal solutions to acknowledging contextual complexity. The discussion evolved from initial concerns about specific AI capabilities to deeper questions about how societies can maintain information integrity while adapting to technological transformation. The interplay between empirical findings (like the lack of documented deepfake behavior change) and philosophical observations (like ‘everything is possible, nothing is real’) created a sophisticated dialogue that balanced immediate practical concerns with longer-term societal implications.
Follow-up questions
How can AI be used to provide information that meets epistemic and social psychological needs to help local human rights-based CSOs create counter-narratives to incendiary information?
Speaker
Ila (CDAC Network)
Explanation
This addresses the practical application of AI for positive counter-messaging while considering the psychological factors that influence belief in information
Should we avoid using Gen AI completely to create counter-narratives?
Speaker
Ila (CDAC Network)
Explanation
This explores the ethical and practical considerations of using AI-generated content to combat disinformation
What specific regulatory measures should be implemented for AI companies regarding watermarking and content identification?
Speaker
Claudio (high school student from Romania)
Explanation
This addresses the need for technical solutions and regulatory frameworks to help users identify AI-generated content
How do we address digital illiteracy among older populations and rural communities in the context of AI-driven information environments?
Speaker
Claudio (high school student from Romania)
Explanation
This highlights the challenge of protecting vulnerable populations who may lack the skills to navigate AI-enhanced information landscapes
How can blockchain technology be used to enhance integrity of information, trust, and transparency?
Speaker
Online audience member
Explanation
This explores alternative technological solutions for ensuring information authenticity and traceability
What is the role of libraries and information services and traditional skills of librarians in maintaining information integrity?
Speaker
Online audience member
Explanation
This examines how traditional information institutions can contribute to combating AI-driven misinformation
Whether AI tooling specifically for trust and safety is actually having a net positive impact – does it increase the defender’s dividend?
Speaker
Graham Brookie
Explanation
This addresses the effectiveness of AI-based solutions in defending against AI-generated threats and whether defensive capabilities are keeping pace with offensive ones
How do we collect more case studies and data to have higher confidence assessments about AI’s impact on information environments?
Speaker
Graham Brookie
Explanation
This highlights the need for more comprehensive research and data collection to better understand the evolving landscape
What type of narrative response is useful in countering disinformation without amplifying it?
Speaker
Peggy Hicks
Explanation
This explores the strategic communication challenges of responding to misinformation without inadvertently spreading it further
How do we ensure that our responses to AI-driven information threats don’t inadvertently bolster disinformation by giving it greater breadth?
Speaker
Peggy Hicks
Explanation
This addresses the unintended consequences of counter-disinformation efforts and the need for evidence-based approaches
How do we balance human intelligence and artificial intelligence in information environments?
Speaker
Zaneta Ozolina
Explanation
This explores the fundamental question of maintaining human agency and critical thinking in an AI-dominated information landscape
How do we develop new ways to regulate and govern artificial intelligence’s impact on information integrity?
Speaker
Zaneta Ozolina
Explanation
This addresses the need for updated governance frameworks that can effectively manage AI’s impact on information systems
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.