[Parliamentary session 1] Digital deceit: The societal impact of online mis- and disinformation

23 Jun 2025 09:45h - 11:00h


Session at a glance

Summary

This IGF Parliamentary Track session focused on the societal impact of risks to information integrity, including misinformation, disinformation, and the challenges posed by emerging AI technologies. The discussion was framed around the UN Global Principles for Information Integrity, launched a year prior by Secretary-General António Guterres, which emphasize societal trust, healthy incentives, public empowerment, independent media, and transparency.


Lindsay Gorman from the German Marshall Fund highlighted how AI is dramatically transforming the information landscape, noting that over a third of global elections in 2024 experienced major deepfake campaigns. She advocated for “democracy-affirming technologies” that embed democratic values like transparency and accountability into their core design, rather than relying solely on regulation. Camille Grenier presented findings from a comprehensive meta-analysis of misinformation research, emphasizing that big tech business models facilitate the weaponization of information and that media literacy alone is insufficient to combat disinformation.


UNESCO’s Marjorie Buchser outlined three key challenges with generative AI: increased information manipulation, transformation of how people access information, and threats to pluralism of voices. She recommended enhanced transparency, improved literacy programs, and public investment in open solutions. Dominique Hazael-Massieux from W3C discussed technical standards being developed to combat misinformation, including content authenticity technologies and trust-based systems.


The panelists emphasized the need for multi-stakeholder approaches, outcome-based governance rather than content regulation, and the importance of supporting independent journalism and media sustainability. The discussion concluded with calls for better cooperation between technical communities, policymakers, and parliamentarians to develop comprehensive solutions that protect democratic values while preserving freedom of expression.


Key points

## Major Discussion Points:


– **AI’s Impact on Information Integrity**: The discussion extensively covered how emerging AI technologies, particularly generative AI and deepfakes, are dramatically transforming the information landscape. Panelists highlighted the rapid proliferation of AI-generated content in elections globally, with over a third of 2024 elections experiencing major deepfake campaigns, and the challenges this poses for distinguishing authentic from manipulated content.


– **Platform Accountability and Commercial Interests**: A significant focus was placed on the role of big tech platforms in facilitating the spread of misinformation and their lack of cooperation with governments, particularly in the Global South. Panelists discussed how commercial business models prioritize profit over information integrity, creating dependencies and enabling the weaponization of information.


– **Multi-stakeholder Governance Approaches**: The conversation emphasized the need for collaborative, multi-layered approaches to addressing information integrity challenges, involving governments, civil society, private sector, and technical communities. Panelists stressed that regulation alone is insufficient and must be combined with innovation, transparency measures, and voluntary commitments.


– **Global Disparities and Cultural Representation**: Several speakers highlighted the Western bias in both AI training data and research on misinformation, emphasizing the need for more diverse perspectives from the Global South and support for local languages and cultures in digital ecosystems.


– **Solutions and Innovation**: The discussion covered various technical and policy solutions, including democracy-affirming technologies, content authentication systems, media literacy programs, and the importance of supporting independent journalism and diverse media voices.


## Overall Purpose:


The discussion aimed to examine the societal impacts of risks to information integrity, particularly focusing on misinformation and disinformation in the digital age. The session was designed to bring together parliamentarians, technical experts, and civil society representatives to share insights on current challenges, emerging trends, and policy responses for strengthening democratic resilience against information manipulation.


## Overall Tone:


The tone began as serious and somewhat alarming, with speakers outlining significant threats posed by AI-generated misinformation and platform failures. However, it evolved to become more constructive and solution-oriented as panelists shared concrete initiatives, technical innovations, and collaborative approaches. The discussion maintained a professional, academic quality throughout, with speakers demonstrating expertise while acknowledging the complexity and uncertainty of the challenges. Despite technical difficulties during the session, participants remained engaged and focused on practical recommendations for policymakers and legislators.


Speakers

**Speakers from the provided list:**


– **Charlotte Scaddan** – Senior Advisor on Information Integrity at the United Nations Department of Global Communications, based in New York at UN headquarters (Session moderator)


– **Lindsay Gorman** – Managing Director and Senior Fellow of the Technology Program at the Transatlantic German Marshall Fund of the U.S. (Participated online)


– **Abdelouahab Yagoubi** – Member of the Parliamentary Assembly of the Mediterranean and Rapporteur on Artificial Intelligence of Algeria (Participated online)


– **Tateishi Toshiaki** – Representative from the Japan Internet Providers Association (Participated online)


– **Dominique Hazael-Massieux** – Vice President of Global Impact at W3C (Participated online)


– **Camille Grenier** – Executive Director of the Forum on Information and Democracy


– **Marjorie Buchser** – Senior Consultant on Freedom of Expression and Safety of Journalism at UNESCO


– **Audience** – Various audience members who asked questions during the Q&A session


**Additional speakers:**


– **Manosha Rehman Khan** – Senator from Pakistan, former Minister for Information Technology and Telecommunication, member of the Senate Standing Committee on Information Technology


– **Ines Holzegger** – Member of the Austrian Parliament


– **Unnamed audience member** – Parliamentarian from Africa (country not specified) who asked about AI regulation and language models


Full session report

# IGF Parliamentary Track: Societal Impact of Risks to Information Integrity – Discussion Summary


## Introduction and Context


This IGF Parliamentary Track session examined the societal impacts of risks to information integrity, with particular focus on misinformation, disinformation, and the emerging challenges posed by artificial intelligence technologies. Charlotte Scaddan, Senior Advisor on Information Integrity at the UN Department of Global Communications, moderated the discussion, which brought together parliamentarians, technical experts, UN officials, and civil society representatives.


The session was anchored around the UN Global Principles for Information Integrity, launched a year prior by Secretary-General António Guterres. Charlotte outlined the five key pillars of these principles: societal trust and resilience, healthy incentives, public empowerment, independent media, and transparency and research. She also mentioned the Global Initiative for Information Integrity on Climate Change, noting that “we’re seeing a lot of climate disinformation around COP29.”


The discussion aimed to foster dialogue between policymakers and technical communities on addressing information integrity challenges whilst preserving democratic values and freedom of expression.


## AI’s Rapid Impact on Democratic Processes


### Documented Evidence of AI-Generated Disinformation


Lindsay Gorman from the German Marshall Fund presented evidence of AI’s rapid impact on democratic processes. Her research documented 133 instances of deepfakes that made it into significant English-language reporting during the 2024 elections; over one-third of elections worldwide experienced major deepfake campaigns.


Gorman provided specific examples of these campaigns: “We saw politicians in compromising positions. We saw fake audio about election tampering. We saw in Argentina, a fake audio about the price of beer going up.” She emphasized the rapid acceleration from theoretical concerns to documented reality: “The speed with which we went from, we should be worried about deep fakes, oh, but maybe deep fakes are too big of an overhype… to where we are today, where they are, I think, our research shows a fact of life in modern day elections, has really been zero to 60 in a nanosecond there.”


### Fundamental Changes to Information Access


Marjorie Buchser from UNESCO outlined three critical ways generative AI is transforming the information environment. First, AI dramatically increases the risk of information manipulation by making sophisticated content creation accessible to broader audiences. Second, it fundamentally changes how people access information, with younger generations increasingly bypassing established news sources in favor of AI-generated summaries. Third, AI poses threats to pluralism and diversity of voices, as most AI models are trained primarily on English-speaking data from the Global North.


Buchser highlighted a concerning trend regarding AI-generated content: “What Generative AI does is that it aggregates different version of this topic and bring it back to you. But it’s notoriously bad in citing, in quoting or references. So basically what it does, it removes traffic from established journalistic sources… there’s a tendency of user to use it not critically at all.”


She also noted being “five months pregnant and speaking for two,” adding a personal dimension to her advocacy for protecting information integrity for future generations.


## Platform Accountability and Business Models


### Research on Commercial Interests


Camille Grenier from the Forum on Information and Democracy presented findings from a comprehensive meta-analysis of over 3,000 academic sources on misinformation research. Her analysis revealed five key conclusions: big tech business models prioritize monetization over public interest; platforms create dependencies and facilitate weaponization of information; there is Western bias in misinformation research; media literacy alone is insufficient; and implementation of existing regulations like the Digital Services Act remains crucial.


### Platform Cooperation Questions


Senator Manosha Rehman Khan from Pakistan raised direct questions about platform motivations: “When the deepfake is easily possible to be spread on the platforms, my question frankly is that why is it so easy for the platforms to make the deepfakes to become accessible… there is a deep commercial interest that leads that deepfake to become a content of choice on the social media platforms.”


This highlighted questions about whether platforms are disinterested in addressing AI-generated content due to lack of economic incentive, or whether they benefit from such content.


## Governance Approaches and Regional Perspectives


### Multi-Stakeholder Consensus


Speakers agreed that effective governance requires multi-stakeholder approaches involving governments, civil society, private sector, and technical communities. Marjorie Buchser advocated for outcome-based approaches focusing on systems and processes rather than regulating individual pieces of content, emphasizing multi-layer governance combining statutory regulation with voluntary commitments.


### Local Solutions and Technological Sovereignty


An African parliamentarian challenged Western-centric approaches to solutions: “Should we be starting by thinking about literacy of the masses, or should we be thinking about the controls on the development and the deployment in our jurisdictions?… are you seeing any examples of countries in the southern hemisphere that are making good steps in legislation and regulation that is homegrown and that is speaking our language?”


Abdelouahab Yagoubi from Algeria suggested that African countries should work together on AI regulation, potentially following models like Saudi Arabia’s ethics platform. Camille Grenier noted that Latin American countries are developing interesting regulatory approaches that focus on processes rather than content, emphasizing that countries should develop homegrown legislation rather than copying models from other regions.


## Technical Solutions and Standards Development


### Democracy-Affirming Technologies


Lindsay Gorman introduced the concept of “democracy-affirming technologies” – systems with democratic values like transparency, privacy, and accountability built into their core design. She argued: “We need to be building these technologies in from the get go, these values in from the get go to the next generation of technologies… every generation of technology is a new opportunity to create something different and to try something else out.”


### Content Authenticity and Web Standards


Dominique Hazael-Massieux from W3C discussed technical standards being developed to combat misinformation, including content authenticity technologies that provide “digital ingredient lists” showing how content was created and modified. The W3C is examining technologies like C2PA (Coalition for Content Provenance and Authenticity) and TrustNet to create interoperable web standards.
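The “digital ingredient list” idea can be illustrated with a toy sketch: a manifest describing how a piece of content was created and edited is bound to a cryptographic hash of the content bytes, then signed so that any tampering with either the content or the manifest is detectable. This is a simplified conceptual illustration only, not the C2PA specification itself: real C2PA manifests use X.509 certificate chains and metadata embedded in the media file, and the function names below are hypothetical.

```python
import hashlib
import hmac
import json

# Toy stand-in for a signing key; real C2PA uses X.509 certificate chains.
SECRET_KEY = b"demo-signing-key"


def attach_provenance(content: bytes, tool: str, edits: list[str]) -> dict:
    """Bind a provenance manifest (the 'ingredient list') to the content bytes."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "created_with": tool,
        "edit_history": edits,
    }
    # Sign a canonical serialization of the manifest so edits are detectable.
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is untampered and actually matches the content."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed.get("content_sha256") == hashlib.sha256(content).hexdigest()
    )


photo = b"...image bytes..."
manifest = attach_provenance(photo, tool="CameraApp 2.1", edits=["crop", "contrast"])
assert verify_provenance(photo, manifest)            # authentic content passes
assert not verify_provenance(b"tampered", manifest)  # altered content fails
```

The design point the panel raised follows directly from a sketch like this: verification only helps if creation tools attach manifests and consuming platforms check them, which is why interoperable standards and adoption incentives matter more than any single implementation.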


Tateishi Toshiaki from the Japan Internet Providers Association provided a concrete example of misinformation impact, describing how false emergency calls during the Noto earthquake in Japan overwhelmed response systems. He emphasized the need for trusted organizations and audit associations for internet credibility assessment.


Both Gorman and Hazael-Massieux acknowledged that technical solutions only work with widespread adoption, requiring appropriate market incentives and cooperation between technical communities and policymakers.


## Education, Literacy, and Evidence Needs


### Balanced Approach to Media Literacy


Speakers agreed that education is crucial but disagreed on its effectiveness as a primary solution. Camille Grenier argued that “media and information literacy and AI literacy training is crucial, but it is not a standalone answer to mis- and disinformation problem… we clearly need a more systematic evidence of these initiatives globally and over time.”


Abdelouahab Yagoubi positioned digital education more centrally, stating that “digital education of citizens essential as best weapon against manipulation is knowledge.”


### Research Gaps and Transparency


Camille Grenier’s meta-analysis revealed Western bias in misinformation research, with studies concentrated in Europe and North America. Multiple speakers emphasized the need for enhanced transparency from AI companies and better access to platform data for researchers, civil society, and journalists.


Marjorie Buchser noted that current understanding of generative AI usage and impact is insufficient, with no transparency about who uses these technologies, for what purposes, and what the impacts are across different cultural contexts.


## Investment and Innovation Priorities


### Supporting Diverse Information Ecosystems


Speakers emphasized the need for positive investment in solutions rather than focusing primarily on restrictions. Marjorie Buchser argued that “information integrity should be viewed as investment in creating diverse online ecosystems, not just control,” emphasizing public investment in open solutions, low-resource languages, and cultural digitalization.


Lindsay Gorman advocated for creating research scholarships and career paths for democracy-affirming technology development. Multiple speakers emphasized supporting free, independent media and journalist safety as essential components of fighting disinformation.


## Implementation Challenges and Future Directions


### Existing Framework Implementation


Camille Grenier highlighted that implementation of existing regulations like the Digital Services Act is crucial, with parliamentarians having important roles at the national level. However, many countries still lack national laws implementing the DSA.


### Rapid Technological Change


A fundamental challenge acknowledged by speakers is the rapid pace of AI development, which creates uncertainty about management and mitigation strategies. The technology is evolving faster than research can assess its impacts, making evidence-based policymaking particularly challenging.


## Conclusions


The discussion revealed both the urgency and complexity of addressing information integrity challenges in the AI era. While there was consensus on the nature of the threats and the need for collaborative approaches, different perspectives emerged on implementation strategies.


Key themes included the need for approaches that serve diverse regional contexts, the importance of addressing underlying commercial incentives, and the recognition that effective solutions require both innovation and regulation working together. The session highlighted that addressing information integrity requires sustained collaboration between parliamentarians, technical communities, civil society, and international organizations.


The conversation demonstrated that information integrity is not merely a technical or regulatory challenge, but a fundamental question about how democratic societies can maintain diverse, trustworthy information ecosystems in an era of rapid technological change.


Session transcript

Charlotte Scaddan: I’m Charlotte Scaddan, Senior Advisor on Information Integrity at the United Nations Department of Global Communications, based in New York at UN headquarters. Welcome to this session of the IGF Parliamentary Track, organized by the UN Department of Economic and Social Affairs, the Inter-Parliamentary Union, and our host, Norway’s Storting. And a warm welcome also to our online audience, we’ve got a lot of people following online. The Parliamentary Track brings together parliamentarians, private sector, technical experts, and civil society to address the challenges and opportunities in our information environment. Today’s hybrid session focuses, as you can see, on the societal impact from risks to information integrity, such as mis- and disinformation. We’ll hear from our panelists on an overview of the current state of play and emerging trends, the mechanisms by which harmful content is propagated online, and policy responses around the world. Translation, as you just heard, is available in French and Spanish for this session. A quick word on today’s format. I’ll invite each panelist to respond to a question from me. We’ll have six to seven minutes for each answer, so panelists, bear that in mind. After the panel, we’ll have an open discussion. My colleague, Celine, will be helping to coordinate the online interventions. And then in closing, I’ll invite each panelist to share one concrete takeaway in their final minute. So we have an excellent lineup of speakers today. I’m very excited to have them with me. Joining us online, I think, I cannot see them, but is Lindsay Gorman, Managing Director and Senior Fellow of the Technology Program at the Transatlantic German Marshall Fund of the U.S. Abdelouahab Yagoubi, Member of the Parliamentary Assembly of the Mediterranean and Rapporteur on Artificial Intelligence of Algeria. Toshiaki Tateishi from the Japan Internet Providers Association.
Dominique Hazael-Massieux, Vice President of Global Impact at W3C. Camille Grenier, Executive Director of the Forum on Information and Democracy. And last but not least, Marjorie Buchser, Senior Consultant on Freedom of Expression and Safety of Journalism, and my colleague at UNESCO. So to set the discussion going, I’m going to give a few short remarks to provide some context. A year ago tomorrow, in fact, the UN Secretary-General, António Guterres, launched the UN Global Principles for Information Integrity. This multi-stakeholder framework for action took shape amid growing risks to the integrity of our information ecosystem, our global information ecosystem. These risks include misinformation, disinformation, hate speech, media suppression, and lack of access to reliable information, all of which undermine human rights. Add emerging technologies, and we can find the pace of these risks now accelerating, their scope expanding, thank you, and their impact deepening, particularly on vulnerable and marginalized groups and during times of crisis and important societal moments such as elections. In this rapidly evolving landscape, the UN Global Principles provide a foundational reference point, and the five principles are societal trust and resilience, which involves building resilient communities that can withstand risk to the integrity of the information ecosystem. The principle of healthy incentives focuses on innovating business models and engaging advertisers to demand transparency on where their ads are placed online and the content online that their ad budgets support. Public empowerment ensures everyone has the tools and literacy to engage safely and confidently online and can gain better control of their personal data. Independent free and pluralistic media supports a diverse range of trustworthy media voices free from undue influence or censorship.
And finally, the principle of transparency and research promotes openness about how digital systems work and supports evidence-based policies. All of the principles are anchored in a strong commitment to upholding human rights and, of course, freedom of expression. The principles and their accompanying recommendations call for multi-stakeholder action. In the last year since launching, we’ve seen a hugely encouraging response to these calls to action with momentum and energy on many fronts. We’re bringing a range of actors together, such as governments, civil society, media, academia, the private sector, and local communities to implement the principles with relevant solutions that meet different information integrity needs. A key example of this is the Global Initiative for Information Integrity on Climate Change, led by Brazil, the UN, and UNESCO, along with civil society actors, including some with us today. Tomorrow, in the conference hall at 3.30 p.m., you can engage with a diverse panel on how to strengthen climate information integrity in the lead-up to COP30, and I welcome you all to join us. Camille here will be moderating that panel. The UN’s work on information integrity is also advancing through the Global Digital Compact agreed by all UN member states in September. A particular reference is Action 35E, which focuses on strengthening information integrity to assess and thereby support efforts to ensure that the sustainable development goals are not impeded by mis- and disinformation. So, to today’s session, it will explore some of the themes that I’ve just touched on around the risk to information integrity online, including, of course, the impact of emerging AI technologies. So, to get to our questions, my first is to Lindsay Gorman, who I still cannot see, but I believe is with us online, okay. How do emerging technologies alter the information environment, and what impact do digital technologies have on information integrity? 
What are some concrete recommendations, especially for legislators and policymakers, on how to implement and innovate democracy-affirming tech, the technologies that protect and promote democratic values and human rights? You have the floor.


Lindsay Gorman: Thanks so much, and I hope you can see me and hear me now. Yes, we can, yes. Now I can see myself, so fantastic. Yes, thanks for having me, and good to be with you virtually, and what a great list of incisive questions there. Maybe I would start with very briefly talking about AI’s impact on the information environment. As you’ve asked, I think it’s probably not gonna be news to too many folks in this room that AI is dramatically transforming the information landscape. I think right before our very eyes, and sometimes without us realizing it, the ability to create, and really the democratization of this ability to create realistic video, audio, in addition to images and text content that is fully generated by AI has exploded, and it is absolutely impacting democratic processes. and communications and democratic environments. And let me just say probably also, which will not be a surprise to anyone in the room, that we’re starting from not a great place in terms of a very congested, polluted information environment, even before the addition of artificial intelligence. And so adding AI as another layer onto that confusion and pollution, where it’s very hard to discern fact from fiction, is just another additional complication. And we at the German Marshall Fund have done some recent research and analysis actually tracking the spread of deepfake campaigns around global elections. We took last year, 2024, as this historic election year, and we tracked where deepfakes were happening around the globe, such that they made it into significant reporting in English language. So obviously not every deepfake campaign that’s happened everywhere, but we looked at these major campaigns that were large enough to attract significant media coverage. And we found that over a third of elections last year had these major deepfake campaigns associated with them. And we found 133 and counting instances of these big deepfake campaigns, specifically around global elections. 
So these were things like politicians who’d been deepfaked in compromising positions. We had deepfake audio campaigns of politicians’ fake claims on a fake recorded audio of tweaking the election results and tampering with the election. In one case, one candidate was faked to have said that they wanted to raise the price of beer if elected. On the other end of the spectrum, we saw instances, such as in Argentina, of candidates using AI to create their own campaign posters and campaign messages in kind of an artistic way and paint themselves in a different and interesting light. So these things are no longer the province of science fiction, they’re absolutely in the real world. And I would just note, I think the speed with which we went from, we should be worried about deep fakes, oh, but maybe deep fakes are too big of an overhype, they’re not actually happening, it’s really about cheap fakes, and deep fakes aren’t in the real world yet, so how much should we worry about them, from that conversation, which was happening, I would say, just a couple of years ago, maybe two years ago, to where we are today, where they are, I think, our research shows a fact of life in modern day elections, has really been zero to 60 in a nanosecond there. So these things are everywhere. And it’s not just AI generated content to intend to deceive, we’re also seeing the rise of all other kinds of AI platforms that we’re starting to communicate with, whether it’s AI friends, or AI work agents, and these are all going to impact the information that we receive and the type of content that we can access, and ultimately, the trustworthiness, and of course, the sort of business imperative, I would say, for collecting more and more information to train these models, and the enabling of a surveillance state that we’re very much seeing in countries like China being built out around the world, around the digital silk road.
So these things are only going to accelerate, they’re only going to to be more essential to modern life in some ways, as these tools become more useful. And that will have, I think, some dangerous impacts on not just human rights, but also this already congested and polluted information environment. So with that sort of downer note to start with, what can we do about it? Part of the work that we’ve been doing at the German Marshall Fund has been to promote the idea and the innovation and the adoption of these democracy-affirming technologies, which we define as technologies that have democratic values built into their very core. So what are these values? They’re things like transparency, privacy, accountability. So that our thesis is that we need to be democratic by design, that the next generation of technologies has to be built with our democratic values at their very core, or they will not support thriving democracies around the world. And I think we’ve seen that. We had a naivete that with social media, these technologies would be inherently democratizing because they were inherently connecting people. And we saw very early on in the Arab Spring how protests were gaining steam online. And we thought these would be these inherently democratizing forces. And I think sitting here today, we can all say that that absolutely has not come to pass. And that probably unbalanced the effect of these technologies on democratic values and governance has been a net negative. I’m not sure if this video feed is still working. Hmm. Seems like I’ve dropped, dropped off. Okay, well maybe I’m Maybe I’m back now. Okay, well, I guess I’ll just keep going. So yes, our thesis is that ultimately, we need to be building democratic values into technologies from the get go. And that’s going to take massive societal and entrepreneurial change, because right now, that’s not what we’re doing. And it’s not just about regulating technologies. Okay. Thank you.


Camille Grenier: We had a very consultative process with more than 400 experts consulted around the world, reaching out to private companies, global call for papers, and we arrived at three thematic working groups, one on AI, one on media, one on data governance, which is a critical issue, and all through the lens of mis- and disinformation. The report is quite long, so I will not give a presentation of the 250 pages, but it covers indeed 3,000 academic sources from 84 countries, and it tries to answer to almost 40 research questions in nine chapters and 300 pages. We have different items coming from this meta report, including summary to policymakers, executive summaries. We have one specific report on future research priorities. We also have mapped, all this is on our website, we have mapped the research on all of these issues, and we have an interactive bibliography online. Getting to the conclusions now. First, and we should not have to restate this, states have a duty to protect human rights and fundamental freedoms, and really we saw that research consistently emphasizes the need to differentiate between normative goals and principles at a global level, and how these are translated into practice at the regional, country, and local level in ways that failed to uphold this duty of states. This means, very concretely, that criminalization of the spread of disinformation may not be an option, as it poses direct risks to the right to freedom of expression, and really, clearly, other solutions exist. The second conclusion that we got from the report is the consensus that big tech business models prioritize monetization for profit. And these business models create dependencies for private and public organizations, as well as individuals, and facilitate the weaponization of information, making social media attractive targets for mis- and disinformation campaigns that are incompatible with a diverse, plural, and public sphere.
Third, exclusion from, and inequitable inclusion in, information ecosystems at the local, national, and regional level is persistent and associated with the monopolistic power of big tech companies, which leads to harmful discrimination and exclusion. Fourth, transparency and accountability. I think it has been said already, and it will be mentioned again: these are some of the core principles that we have at the Forum on Information and Democracy, and these measures are essential to mitigate the harms of mis- and disinformation. Research demonstrates the need to reinforce big tech company governance, to promote AI systems transparency, especially through independent audits, and to ensure that accurate information reaches a wide range of stakeholders. And last, but maybe not least, media and information literacy and AI literacy training is crucial, but it is not a standalone answer to the mis- and disinformation problem. We clearly need more systematic evidence on these initiatives, globally and over time, and we have also noticed insufficient attention to children’s literacy. These are the five main outcomes of the meta-analysis, but during this process we also addressed the need to strengthen research on mis- and disinformation and on information integrity. The first thing that came up is the need to address the Western bias. We have seen how research is concentrated in, basically, Europe and Northern America, and we need more research from the global majority world; we are really trying to address this. We need multi-dimensional research that addresses the complex components of information ecosystems; something we have shown is how the different dimensions, from societal to technological to political, need to be addressed. And last but not least, we have been advocating, as have a lot of people in this room, for more access to data.
So we really need to build a framework to ensure that researchers, civil society, journalists have access to more data from platforms so that we have a better understanding of what’s actually happening in the information ecosystem. Thank you.


Charlotte Scaddan: Thank you so much, Camille, and I think you touched on so many excellent points there. You stressed many things, including how big tech business models facilitate the weaponization of disinformation, and the importance of transparency and accountability, on which I think we all agree. That media and information literacy is crucial but not a panacea is a point that is really key. And of course, your point on strengthening research outside of the English-speaking Western countries is something that we are all very concerned about. It was one of the key points in the global principles as well, and one of the key objectives of our global initiative on information integrity: really to get outside of that bubble and find out what is happening in information ecosystems globally. So if you haven’t read the report, please do. It’s an excellent contribution to the information integrity space, and thanks for all the work on that. So I believe that Lindsay is back with us. Lindsay, do you hear me? Yes, I do. Do you hear me? Oh, yes. Excellent. I’m sorry that we lost you. We were still following the script of what you were saying, which was appearing here on a screen, but we couldn’t see or hear you. I think you left off just when you had turned to some of the solutions to the AI risks that you had so eloquently outlined. So perhaps you just want to finish that up in the last couple of minutes.


Lindsay Gorman: Absolutely, yes. Thank you so much. And I will also drop in the chat the report, which we just put out last month, on democracy-affirming technologies, in which some of these comments and ideas have been encapsulated. Our thesis is that we need to be building these values in from the get-go, into the next generation of technologies. The unfortunate thing about some of the technologies and business models that we have today is that they are not very democracy-affirming. But the good thing is that every generation of technology is a new opportunity to create something different and to try something else out. It’s never static. What we have today is not what we’re going to be stuck with for the next 10, 20, 50 years. We always have a new opportunity to innovate, and technology is always racing ahead. So can we get in very early in the technology development process and build the next generation of technologies that go viral to be explicitly in support of democratic values? This is not to say that regulation doesn’t have a role; policy and governance all have roles in creating a better democratic technological ecosystem that supports our values and our security. But innovation should play a role too. And that’s what we outlined in this report, where we conducted two pilot projects on innovating and adopting democracy-affirming technologies. The first, on innovating, was a hackathon we held for teams of entrepreneurs and technologists, with coders on the teams as well as academics and civil society activists. We did this in Mexico City around the context of Mexico’s election, to try to create new, innovative technologies. And we got some very interesting prototypes.
And I think the next step really is to build a community of not just entrepreneurs, but also potential funders, investors, and policy organs that want to take some of this work forward, not just at the hackathon that we held, but in efforts that are springing up around the world to do similar things. For example, just this past month I was in Germany, in Leipzig, with Sprind, Germany’s new innovation agency, and I was on the jury for a similar competition for AI to fight mis- and disinformation, and for deepfake detection and prevention. There are some incredible technologists, ideas, and solutions popping up. None of them, I don’t think anyone believes, are going to be a panacea, but innovation needs to be in this game as well. So there are a bunch of efforts here, and what we really need on the policy side is a significant oomph from funders, from philanthropic organizations, from investors: investments in these technologies. One of the recommendations in our report is also to create research scholarships for young researchers and entrepreneurs who want to build these technologies. Is there a career path around democracy-affirming technologies other than just joining one of the large technology companies? Can there be an innovation-in-the-public-interest career path? That’s where I think a lot of governments, philanthropic organizations, and advocacy organizations can really come in and create a career path for younger folks starting out in the technology field who do want to build democratic values in. The second project we conducted was on the adoption of democracy-affirming technologies. In this one, also in the context of Mexico’s election, we partnered with two media organizations, a photojournalist agency in Mexico City, Obturador MX, and a larger international media organization, the Canadian Broadcasting Corporation, to build a content-authenticated repository of images around Mexico’s election.
And so we worked with Obturador, who took really all the amazing photos of Mexico’s election last year, including some fantastic images of now-President Claudia Sheinbaum at the polling places, at the ballot boxes, casting her vote. We then partnered with technology providers, Microsoft and Truepic, a smaller company working on content authenticity and provenance technology, to authenticate the images and create a tamper-resistant record of how these images were taken and whether they were modified along the way from the camera to your social media feed. We posted this online, and I’ll drop the link in the chat as well; I think it is the first tamper-resistant repository of election imagery. One of these photos was also featured in the Canadian Broadcasting Corporation’s coverage of Mexico’s election, with a content-authenticated image logo and all the metadata that goes with it. So we’re moving slowly, I think, on the adoption front, but these sorts of pilot projects can help us take the next step towards technologies that better promote democracy and democratic values. For example, content-authenticated images really promote transparency: essentially you’re getting a kind of digital ingredient list of how an image has been created, how it’s been modified, how it’s come to your feed, so that as the user, you can decide for yourself whether to trust it. I like to think of these things like text. If we read a text that has been very well cited, where all the assertions made in the article can be referenced and traced back to the original sources, and we can actually go into those sources and see whether the claims are supported by what’s cited, that builds trust. That sort of transparency builds trusted information.
Whereas if we read an article with no citations, with claims that come out of nowhere, we don’t really know whether to trust it. And so this is what we’re trying to do in the visual information environment, with video and with images, through content authenticity technologies. This was our pilot project; I’ll drop the link in the chat so you can see some of the authenticated images. That was in our realm of adoption. Then, just quickly, on some of the policy recommendations, covering both innovation and adoption: we recommend that online platforms and websites incorporate some of the existing democracy-affirming tools, like content authenticity technology, into products and new product design. Other technologies in our democracy-affirming tech suite include privacy-enhancing technologies and some of the censorship circumvention technologies that are used around the world to counter government censorship. Government agencies, private sector champions, philanthropic organizations, and universities should create democracy-affirming tech research scholarships to spur innovation for a new generation of technologies. Governments should also work with technology providers, and even the hacker community, to red-team these emerging democracy-affirming technologies, to shore up any vulnerabilities and move them from concept to true adoption. And they should provide guidance on how to use these technologies and how these implementations can advance cyber and national security in addition to democratic values. So there’s a lot to be done, but yes, every new generation is a new opportunity, and there’s definitely steam around the movement for content authenticity technologies.
And can we, as a global community, give some of these technologies an oomph, and create the space and the incentive for younger folks coming into the technology field to make it their career and life’s work to build the next generation of these technologies, supporting and enhancing the values we would like to see in our information environment? I’ll stop there, but thanks for the discussion, and I’m looking forward to the next speakers and questions.
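The “digital ingredient list” Lindsay describes can be illustrated with a small sketch. This is a toy model, not the actual C2PA specification (which uses certificate chains and embedded manifests rather than a shared secret key): it records a content hash and an edit history at capture time, signs them, and lets anyone with the key detect later tampering. The key, function names, and manifest fields here are all illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; in real provenance systems this would be an
# asymmetric key pair bound to the capture device or publishing tool.
SIGNING_KEY = b"demo-secret-key"

def make_manifest(image_bytes: bytes, actions: list[str]) -> dict:
    """Create a toy provenance manifest: a hash of the content plus an
    edit history, signed so that later tampering can be detected."""
    payload = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "actions": actions,  # e.g. ["captured", "cropped"]
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check both that the signature is valid and that the content
    still matches the hash recorded at capture time."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False
    return claimed["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()

photo = b"...raw image bytes..."
manifest = make_manifest(photo, ["captured"])
print(verify_manifest(photo, manifest))          # untouched image passes
print(verify_manifest(b"edited", manifest))      # modified image fails
```

In a real deployment the signature would come from a certificate held by the camera or newsroom, so that any reader can verify provenance without access to a signing secret; the structure of the check, hash the content, then verify the signed record, stays the same.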


Charlotte Scaddan: Thank you so much, Lindsay. I’m glad we managed to get you back, because it’s really important to focus on solutions and not just the risks, and I really appreciate your focus there on the innovation that’s going to be needed in this space and on how we’re going to have to attract the talent we need to take those effective solutions forward. So without further ado: I just want to let everyone know that, because we started a little bit late and had a few technical hiccups, we are able to extend the end of the session. We can go to 11:15, so rest assured, there will be plenty of time for questions. I don’t know if all of our panelists can stay, but they don’t have much choice because I’ve just said it, and otherwise, you know, they’re kind of trapped here. So I’m hoping they can. All right, moving on to Marjorie. UNESCO is the UN agency in charge of promoting the safety of journalists. So what are some of the trends you’re seeing in countering risks to information integrity? What impacts are you seeing on freedom of expression, on the press, and on access to information? And what is UNESCO’s role in the response?


Marjorie Buchser: It’s a pleasure to be back at the IGF, and in Norway, which is a first for me; I’m really looking forward to visiting a little more afterwards. First, I need to preface this: if you have the impression that I’m catching my breath and delivering my remarks in a slightly slow fashion, it’s true. I’m currently five months pregnant, so I will take my time, because I’m speaking for two. Charlotte, do feel free to let me know when we’re out of time and I’ll make sure I wrap up. As Charlotte mentioned, UNESCO is a specialized UN agency which holds a mandate to promote and protect the free flow of ideas, which includes freedom of expression, access to information, and the safety of journalists. And of course, digital technologies have kept us busy, because this is a fast-changing space. Now, there are many trends I could have focused on, but I will, like Lindsay, look specifically at advanced AI models and generative AI, the category of AI that generates seemingly original new information on the basis of a simple question prompt from a user. What is very important to realize in this space is that, while you may see a lot of headlines about generative AI, this is very much a technology at a nascent stage. If you read the scientific reports on generative AI, its impact, and how to manage it, two main points are important to mention. The first is that it is changing and performing drastically differently almost on a monthly basis, so you have rapid evolution coupled with deep uncertainty about how to manage and mitigate it. If you have the impression that you can’t follow, you’re not the only one; even technical experts in this field struggle with the pace and the lack of real evidence on this technology.
However, and I think that’s why we’re here, a strong consensus has emerged that this technology has a particular impact on information integrity. If you look at the last UN report by the High-level Advisory Body on AI, 80% of the experts agree, across backgrounds and across regions, that information integrity is likely to be highly damaged by, or faces strong risks from, advanced AI models. So what I want to do is talk to you about three specific challenges, a few governance solutions, and specific recommendations that UNESCO has put forward; this is also the topic of one of our latest reports on generative AI. The first challenge, and this is something Lindsay as well as Camille mentioned, is obviously that AI increases the risk of information manipulation. Lindsay mentioned the deepfake campaigns, but it is also the convergence of generative AI, which allows you to create fake content very easily, with digital platforms, which allow you to disseminate it to a really broad audience. That is the key problem. So what you have is increasing confusion online about the authenticity and authorship of the content you see. And I also want to invite you not to think about it as a dichotomy: “Is this AI-generated?” is not the same question as “Is it trustworthy?” If, for example, you want to post a LinkedIn comment to say you’re at the IGF and you ask one of the popular generative applications to write it for you, is it untrustworthy? It may be completely true, yet completely AI-generated from your prompt. So the real question is also about the context in which this content has been used and its purpose; a deeper, contextual discussion needs to take place. The second challenge: AI has fundamentally transformed the way we access information.
What we see in younger generations is that they increasingly bypass established news websites to use only recommender feeds or generative AI applications. This is a major difference, because a traditional search engine acts as an intermediary and leads you to traditional or journalistic sources. What generative AI does is aggregate different versions of a topic and bring them back to you, but it is notoriously bad at citing, quoting, and referencing. So basically, it removes traffic from established journalistic sources. This is also a key problem because we see users increasingly over-rely on AI: they trust AI outputs even more than some journalistic content. We know that AI confabulates, we know that it is biased; nonetheless, there is a tendency for users to use it entirely uncritically. And that’s not only average users: increasingly, people are quoting false books or false references because they were generated by AI and not questioned. The last challenge I want to highlight before I move on to the solutions is, I think, potentially the most fundamental one for us: AI-powered applications pose a significant threat to pluralism of voices and diversity of content. At a fundamental level, the AI models you see today are trained on English-speaking data from the internet that is mainly generated in the Global North. What that means is that they aggregate a specific vision of the world, with its inherent biases. That is, for us, one of the most significant threats to the plurality and diversity of the information ecosystem online.
Now, there are many challenges, and I think we’ll talk about them a lot, but as I said in the introduction, it is important to remember that we are still uncertain about the trajectory; this is still something that we can shape. And this is what UNESCO stands for: an evidence-based, inclusive governance model. So there are three points on governance and principles that I want to highlight, as well as some more specific recommendations. The first key principle, core to a lot of our guidelines, is the notion of an outcome-based approach. For a long time, we have seen a tendency to try to regulate every piece of content and every algorithm that exists, which is impossible but also has a very chilling effect on freedom of expression. So one recommendation is really to think about the system: what processes and mechanisms could be put in place to mitigate the most negative outcomes? That’s one. The second, and it’s not going to come as a surprise, is the notion of a multi-stakeholder approach, but one that is also contextual. Today, AI is developed by commercial labs, and while not necessarily ill-intended, it carries a very specific vision of the world and a very specific culture. It is very important to have multi-stakeholder perspectives, including from different local and regional levels, to input diversity at every stage of the AI life cycle, from the data to the outputs. And the last one, specifically at the governance level, and I think that’s also something that Lindsay mentioned, is a multi-layer approach. I can imagine that, as parliamentarians, you think of statutory regulation and statutory frameworks as the initial approach, which is of course a very important one, but what we see is that different layers, from hard regulation to voluntary commitments, can also have positive consequences.
Some of them can be very technical, so less suited to regulatory frameworks and more to voluntary commitments. So that’s the principle level. To finish, maybe three more specific recommendations for generative AI. The first is the importance of enhanced transparency. Today, we do not know who uses generative AI, for what purpose, and with what impact. If you look at the transparency indices that scientific organizations have produced, we just don’t know. And not only do we not know, there are also no indicators of how it affects different regions, different cultures, different people, where the effects can be really different. So that’s a very important element, along with perhaps preferential access for researchers and journalists to the data and processes of tech companies. The second, and it’s something that Camille mentioned, is the importance of literacy. As I said, the way users access information is changing, and what is really essential is to help them critically assess the use of those tools and understand their limitations. That could come with clear labeling of provenance and context, but it is really about helping new users understand both the beneficial uses and the limitations of the technology they are employing. And finally, the notion of public investment and open solutions to support freedom of expression in all its forms is necessary. We know that commercial dynamics are simply insufficient to provide the diversity we need. So it is an investment in low-resource languages and cultures, so that they can be digitized and there can be data and models based on them. It is also about representation and access, and in that context, open solutions that allow more parties to develop more context-specific models are very beneficial for these problems. So those are the different points I wanted to leave with you.
I hope I didn’t take too long. Thank you.


Charlotte Scaddan: Thank you, Marjorie, for that important intervention and for highlighting those recommendations. I’m sure there will be questions related to the AI space, because it is obviously a topic that many here are very concerned with, and I do want to make sure we have time for questions on that note. So I will move to our next speaker, Dominique. Can you tell us more about W3C’s work? What role can web standards play in mitigating the impact of misinformation and other information integrity risks on society? And what is W3C’s roadmap in this space?


Dominique Hazael-Massieux: Thank you, and thank you for having me on this panel. Let me first quickly introduce W3C. W3C stands for the World Wide Web Consortium. We are a nonprofit international organization dedicated to building standards for the web. Basically, we convene a big part of the tech community with a mission of making the web work for everyone. The web has become a platform that is key to so many parts of our digital societies, and making it work for everyone is not an easy job. It is also something that is evolving very fast. We have already had a number of discussions around AI, which feeds directly from the web and also creates new challenges for it, and which is one of the topics I oversee at W3C. When you think about making the web work for everyone: at W3C, for instance, we have been working for more than 20 years on making the web accessible to people with disabilities, and on making sure the web works across languages and cultures, which is key in ensuring that content, including content used within these AI systems, is exposed to different cultures, different languages, and different abilities; not just people who can walk, speak, and see, but also people who have limitations in their ability to interact with digital content. But beyond that, a big part of our belief is that the web is a platform for good, a platform that benefits society when we make it work for everyone. Mis- and disinformation is a direct harm to societies: instead of the web working for everyone, it starts working against people. That is something, of course, that we don’t want to see happen or remain. Our work is anchored in building these technical standards. If you open any website, you will be interacting with dozens of our standards, and those of you participating remotely are probably using one of our technologies, WebRTC, which enables real-time communication on the web.
The question when we develop standards for the web is: what is it that we want to bring to scale? What is it that the web as a platform benefits from having interoperable, so that a solution we develop is not something that will just work in a specific market, a specific region, or a specific type of environment, but something that will work at a global scale? That is a scary responsibility, but one we take with a lot of passion and attention. In particular, the way these technical standards are developed is, I think, very similar to your experience as parliamentarians: we take a very deliberative approach to the analysis of the problem space and of the solution space. Very recently, we restarted some of our work around this problem of mis- and disinformation, trying to understand recent changes not only in the ecosystem, obviously the rise of generative AI and the multiplication of content that can be produced through those mechanisms, but also the new social media ecosystem, which is a lot more fragmented than it was a few years ago, and the new technologies that have emerged; Lindsay earlier alluded to some emerging content authenticity technologies. So we are looking at this new ecosystem and at the solutions that have emerged in that space. C2PA is one of them, facilitating content authenticity certification. We are looking at a research effort from MIT called TrustNet, which is about using social trust relationships as a way of helping people assess whether they can or cannot trust a particular piece of information, building on the existing mechanisms that we as humans have used for thousands of years to form opinions and make sense of the world. I assume very few of you actually know how quantum mechanics works, yet we are all using devices that only work because of quantum mechanics.
It’s not because we know the theory behind quantum mechanics that we trust it; we trust it because people we trust have been able to make use of those technologies. TrustNet takes this approach of building up trust over relationships between people, again reflecting the way humans actually build their knowledge of the world. Another technology we are looking at, coming from Japan, is called Originator Profile, again trying to build on existing social systems to represent trust in the ecosystem. I won’t go through the full list of technologies we are looking at, but what we are trying to do is take a very systematic approach: reviewing what these technologies enable and understanding the risks they might bring. Clearly, we want to fight misinformation; we don’t want to create censorship, which is the other side of the coin. If you try to control the information ecosystem too much, you risk depriving it of an important diversity of perspectives and opinions. So again, we take this systematic approach in reviewing these various technologies, understanding which of them deserve to gain scale and end-user visibility. One of our roles is to bring technologies to end users, through web browsers for instance, so we try to understand which of these technologies can be meaningful to end users and have a meaningful impact on the way they perceive the information ecosystem. That is what we are currently doing. Where will we land? We don’t know yet; the deliberations lead the community to where it lands. One thing that is quite clear is that wherever we land, the standards we develop only make sense if and when they get adopted. We develop voluntary standards, so it’s not as if we can declare a new standard and everyone will adopt it. They only get adopted if the right incentives exist for the people in the ecosystem to adopt them.
Of course, we have a lot of the right people participating in these discussions. All the major technology providers are active participants in W3C; we have lots of content and media publishers, lots of civil society organizations, NGOs, and government agencies, so many people are already involved. But at the end of the day, if we come up with new technical standards, they will only be adopted if the right incentives exist, including in the market. That is where I see a critical need for stronger cooperation between regulators, policymakers, and legislators, including the parliaments around the world: making sure that as you develop new policies and new approaches to managing misinformation, they integrate and build on top of those technologies and the analyses we are trying to conduct, so that they work together rather than against one another. I guess if there is one message I am trying to communicate today, it is that there is a lot of thinking and work happening inside the technical community, and in particular in W3C, around these questions. Make sure, as you develop your own policy agenda in this space, that you talk to us, that you talk with us, that you maybe even contribute to our work, so that at the end of the day our work complements and builds on one another’s. I think this is the most likely path to success in what is a very complex and multifaceted problem space.
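The social-trust idea Dominique describes (TrustNet, Originator Profile) can be illustrated with a small sketch. This is not the MIT project’s actual algorithm; it is a minimal, assumed model in which trust decays multiplicatively along chains of endorsement, so a source vouched for by someone you trust strongly scores higher than one reached only through weak links. The graph, weights, and function name are all hypothetical.

```python
from collections import deque

# Hypothetical trust graph: who directly trusts whom, with a weight in [0, 1].
trust_edges = {
    "me":    {"alice": 0.9, "bob": 0.6},
    "alice": {"newsroom": 0.8},
    "bob":   {"newsroom": 0.5, "anon_blog": 0.4},
}

def trust_score(source: str, target: str, graph: dict) -> float:
    """Best trust path from source to target, multiplying edge weights so
    that trust decays with each hop (a simple TrustNet-like idea)."""
    best = {source: 1.0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor, weight in graph.get(node, {}).items():
            score = best[node] * weight
            # Keep only the strongest chain of endorsements found so far.
            if score > best.get(neighbor, 0.0):
                best[neighbor] = score
                queue.append(neighbor)
    return best.get(target, 0.0)

print(round(trust_score("me", "newsroom", trust_edges), 2))   # 0.72, via alice
print(round(trust_score("me", "anon_blog", trust_edges), 2))  # 0.24, via bob
```

The design choice here mirrors the point Dominique makes about quantum mechanics: a reader does not verify a claim directly, but inherits confidence from the people and institutions they already trust, with that confidence weakening as the chain grows longer.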


Charlotte Scaddan: Thank you so much for outlining some of the very concrete solutions you are working on, for stressing the need for global solutions to what are global challenges, for highlighting again, and I think this is a common thread, the importance of establishing, bolstering, and ensuring trust in the information ecosystem, and for stressing the importance of market incentives. That is part of the reason you mentioned partnership at the end, and I think this multi-stakeholder approach is really key in this space, so thank you. Okay, we have two more speakers left and we definitely want to get to questions. So moving on to Toshiaki Tateishi, thank you for being with us. With increased efforts to strengthen information integrity and counter information manipulation, could you share with us the web credibility assessment challenge that you are now attempting to begin in Japan?


Tateishi Toshiaki: Thank you very much, and thank you for inviting me here; it is my honor. I think that for countermeasures against disinformation, transparency is the most important thing. The question is how we provide a trusted internet to users, and the internet belongs to all its users. ASPs in Japan have been making efforts on this for about 20 years or more. Talking about restoring the internet’s credibility, I believe some auditing association or organization must be created, because we have many technical and management issues around establishing trust in photos and other digital content. One and a half years ago, we had a big earthquake in the Noto area of Japan, and at that time something terrible happened: someone made a false emergency call, and when firefighters tried to get there, there was no one. So I think it will take a long effort, but we have to build some trusted measure of credibility. As you know, vulnerable users cannot easily do this themselves; it is not an easy thing. In spite of that, we can probably do something, such as creating a trusted organization that can guide people in a good direction. I think that is a very hard road and will take much time, but it is a very important thing. At the same time, in Japan, maybe 15 years ago, we debated how to block child pornography. We tried to resist installing that blocking system on our networks, but we gave up because of the human rights of the children; as a nation, we do it for them, even though it is technically illegal. More recently, maybe seven years ago, we had a pirate site problem, and this year we have a problem with online casino sites. Many people find it easy to say that we should simply block these illegal and harmful sites.
But in the mechanism of the internet, blocking something always violates the secrecy of telecommunications, and in Japan I think the secrecy of telecommunications is the biggest pillar maintaining democracy. So if we dedicate much more time, we can do this: create an eligible and trusted organization. Thank you very much.


Charlotte Scaddan: Thank you so much for highlighting that very interesting and important initiative at the national level. And last but not least, my final speaker today is Abdelouahab Yagoubi, who will be speaking in French, so for those of you who need to put on headsets, do so now. My question to you is: from the perspective of a parliamentarian, what can parliaments do to strengthen the resilience of societies and democratic systems in the context of the misuse of AI and other emerging tech? The floor is yours.


Abdelouahab Yagoubi: Thank you. Thank you, Charlotte. Excellencies, distinguished colleagues, ladies and gentlemen, it is a real pleasure to be with you and to participate in this important panel. As there is no Arabic interpretation, and I am not perfectly fluent in English, I invite you to switch to the French channel, because it is more comfortable for me. Thank you. I will take my revenge and tell you about my life. I speak on behalf of the Parliamentary Assembly of the Mediterranean, which includes 35 parliaments around the northern and southern shores of the Mediterranean Sea, as well as the Gulf countries. The theme of this session is highly topical, in light of the upheavals that we are observing at both the regional and international level. Today, the proliferation of online disinformation and misinformation campaigns and the massive spread of false news and fallacious content are deeply damaging to our societies. They weaken social cohesion, erode trust among citizens, public institutions and economic actors, and feed a deleterious climate. Faced with this, the members of the Parliamentary Assembly of the Mediterranean, which represents, as I said, the Euro-Mediterranean and Gulf regions, express a shared concern. This is why our Assembly is acting. We closely monitor these phenomena, we share data and good practices between the delegations of the various parliaments, and together we build concrete and coordinated responses. Among the tools that we mobilize, I would like to cite a striking example. We are currently finalizing a report entitled “The resilience of democratic systems in the face of the use of artificial intelligence, information and communication technologies, and emerging technologies”. This report, rigorously evaluated by experts from universities and research centres around the world, warns of growing internal and external pressures that threaten the stability of our democracies.
Because, it must be said clearly, the manipulation of information feeds fear, anxiety and confusion, and it is the most vulnerable populations, women, migrants and minorities, who pay the price. Even more disturbing, on a global scale, are the systematic and coordinated disinformation campaigns orchestrated by state and non-state actors, notably the Russian Federation, China and North Korea. They directly target democratic processes: they compromise the free choice of voters, undermine transparency and endanger the legitimacy of electoral results. Such interference has already been seen, notably in France, Romania, Moldova and Georgia. In this context, parliamentary responsibility is more crucial than ever. We must, first, design robust legislative and ethical frameworks to regulate the digital space while respecting human rights; second, strengthen our alliances with governments, civil society, the private sector, and community and religious actors, in the spirit of the United Nations Global Principles for Information Integrity and the Pact for the Future; and third, above all, invest in the digital education of our citizens, because the best weapon against manipulation is knowledge. Before concluding, allow me to underline two other concrete initiatives of the PAM. First, we have launched a specific communication campaign against hate speech. Second, thanks to our global research centre, the CGS, we now publish a daily bulletin on artificial intelligence and emerging technologies, an initiative integrated into the World Parliamentary Observatory on Artificial Intelligence, based in San Marino. Ladies and gentlemen, to fight against the manipulation of information is to defend democracy, and this defence requires commitment, cooperation and lucidity. Thank you for your attention. May peace, mercy and blessings of Allah be upon you.


Charlotte Scaddan: Thank you so much. You were, in fact, incredibly concise and right on time. And as I said, last but absolutely not least, you had the final word. So we do have time now for questions. I believe colleagues will be coming around with a mic for those of you who want to ask, so if you could raise your hand so that I can see you. I have one already in the front here, then a gentleman over there, and then a couple of others at the back. So maybe we start at the front right here. Thank you. And if you could let us know if your question is directed to a specific panelist, that would be very helpful. The mic is not working. Oh, wait now. Yeah. No. Here we go, we’ve got you another one. Thank you.


Audience: Yes, thank you very much. I’m Manosha Rehman Khan, a Senator from Pakistan. I am a former Minister for Information Technology and Telecommunication, and I am now part of the Senate Standing Committee on Information Technology. Telecommunication has been my area of interest for the last 32 years, and I have worked very closely with Telenor. My question about today’s presentations has an angle which everybody wants to know. My question frankly is: why is it so easy for deepfakes to become accessible on platforms, to the point where they spread like a viral drug? And why do the platforms not wish to cooperate with countries when they complain that such and such content is a deepfake and should not be there? What we are now fighting over is not so much the question of what a deepfake is. My observation is that there is a deep commercial interest that leads deepfakes to become a content of choice on the social media platforms. To say that parliamentarians should not think of regulation and should instead educate the citizens is not an easy task when the brainwashing is happening on the social media platforms. So my question is: this debate was very insightful, but what solutions would we like to see that would make social media platforms cooperate more, bring in more stability, and return democratic governance to our countries?


Charlotte Scaddan: Thank you. Who among us would like to take that question? And actually, Lindsay, I don’t know if you are still online, but I unfortunately can’t see you. So, Lindsay, if you would like to jump in for any of these, please do speak up. Why don’t we answer that one first, because I’m just not sure how much time we have. Yeah, so, go ahead, Camille.


Camille Grenier: Yeah, I’ll try to be brief with a very important question. We try to work with platforms. Last year, during the big election year, they had this tech accord on AI and deepfakes. When we reached out to them earlier this year, they said: well, actually, we’ve seen that we don’t have a problem with AI and deepfakes; we didn’t have the AI apocalypse and the deepfake apocalypse that everyone was talking about. So there is this trend from these companies to say that it’s not really a problem because there aren’t so many deepfakes. That’s the first part of the answer. The second part concerns economic interests; I think it’s rather an economic disinterest. If a country is not a very important market for these companies, they will basically not answer any queries. I remember a former head of the regulator in Tunisia, for example, telling us that in Europe the platforms answer the phone and sometimes comply, but in Tunisia they don’t even pick up the phone. They don’t care, or they send you to someone in a totally different region. Calling for more accountability, for example for the presence of moderators in different regions speaking different languages, is one of our top priorities, and we clearly see that we have some complete blind spots in tech governance in that regard. That may not fully answer your question, which is a very important one, but I think there are some structural aspects there that need to be addressed as well.


Charlotte Scaddan: Thanks, Camille. I think what you just outlined reflects the experience and relationship that many of us now have with the big tech platforms, which is that things are perhaps not moving in the direction we had hoped, especially over the past six months or so. And addressing those commercial interests is key, which I think is why many of you are here. Right, okay, next question. I think there was a gentleman there in the second row who had his hand up. Thank you very much.


Audience: Good morning, all. First, I want to thank all the panelists for the great and insightful presentations. Sitting here as a parliamentarian, the question ringing in my mind is that we are already too far along in the development of AI, which is also leading to disinformation and misinformation, and I wonder what the place is for forums like the IGF to first start a conversation on even what language this technology is speaking, because it seems the language models depend on dominant languages for their development, and in that way really work against diversity. In such a situation we are asking ourselves, even as legislators from Africa: should we be starting by thinking about literacy of the masses, or should we be thinking about controls on the development and deployment of these technologies in our jurisdictions? In that sense, I am asking the panelists: would they recommend that countries take approaches that look at their laws for either repurposing, retiring or developing new laws that are locally grown, rather than depending on a centralized, top-down prescription, as we saw in data protection? There, the models were an American model developing on one side of the globe and the GDPR developing in Europe, with the rest of the world being told to form their laws in that way. So I think Abdelouahab Yagoubi can answer by telling us: are you seeing any examples of countries in the southern hemisphere that are making good steps in legislation and regulation that is homegrown and that speaks our language, so that we start feeding the information that powers these language models and are not disenfranchised by speaking a language that is not our own, playing catch-up, and borrowing laws that are not our own? Is there a place for me to say Kenya first, before I think about anybody else? Or should I jump onto this IGF bandwagon, see what is happening elsewhere, and copy-paste that? I thank you.


Charlotte Scaddan: Thank you. Go ahead. Yeah.


Abdelouahab Yagoubi: Thank you, dear colleague. I will try to answer in English. Your question is very important. I think in Africa we have to work together and cooperate to regulate artificial intelligence. For example, in Algeria, my country, we have, following the European Act on Artificial Intelligence, introduced a proposed law to regulate the use of artificial intelligence in Algeria. I think Saudi Arabia, with its ethics platform, is a very good model to follow, but I believe we have to establish a framework of cooperation between African countries and follow the evolution of the regulation of these technologies around the world. Thank you.


Charlotte Scaddan: Yeah, thank you. And Camille, you wanted to add something?


Camille Grenier: Very quickly, in terms of the Global South, I feel that a lot of very interesting things are happening in Latin America. If you take a look at Brazil, for example, they have ongoing discussions on AI regulation right now in the Parliament. Uruguay is starting some work, and the Dominican Republic as well. In Latin America there is a whole movement of regulating the processes rather than the content, and those are really interesting developments.


Charlotte Scaddan: Thank you. Okay, I think we might have time for one more question. I just am aware that I would like some gender equality, and maybe even some diversity in age. There is a young lady at the back there, if we could please get her the mic. Thank you.


Audience: Thank you very much. I promise I’ll keep it short. Ines Holzegger, a member of the Austrian Parliament. I’m also wondering about all of these issues. In the EU, we do have the DSA and other regulations. So how do you view that? Does it go too far, or not far enough? Is there something missing where we can help against this destruction of democracy?


Charlotte Scaddan: Panelists, anyone want to speak to the DSA? Go on, Camille.


Camille Grenier: Sorry, sorry. I feel like I’m monopolizing. For parliaments: act, act fast, and really work on the implementation. I was recently in Portugal; they still don’t have a law that implements the DSA in national regulation. I think parliamentarians really have an important role in using this tool, which is unique around the world, using the DSA to ensure that we protect our democracies and, again, take this structural approach.


Marjorie Buchser: Maybe just a few words on the DSA. It was created in a very specific context, where most European countries have independent regulatory authorities and a set of infrastructure that supports this regulatory framework. That being said, from the governance principles that I highlighted, I think the system-based or outcome-based notion in the DSA, looking at the mechanisms rather than the content, is very interesting, as is the notion of asymmetrical requirements to tackle the issues the lady mentioned, trying to identify those big actors that are really structuring the space. These are interesting principles that could be considered not only in the DSA but also in systems beyond the European states.


Charlotte Scaddan: Yeah, thanks. Important points there, actually. Implementation is key: having the legislation in place is obviously a huge first step, but it has to be implemented. But also, as we’ve heard with the DSA, we cannot take a cookie-cutter approach, right? We absolutely need to look at national context when rolling out. So maybe I could give the panelists 30 seconds each, I think we’re a minute over, just to wrap up with any final thoughts. Maybe we’ll start at that end and work our way down. Please.


Tateishi Toshiaki: I think that legislation differs very much from country to country, so we have to work together. That is the best way to solve these problems, I think. Thank you.


Charlotte Scaddan: Thanks.


Camille Grenier: I’ll be very brief. A good solution to fighting disinformation is also to have free, independent, reliable information, and to ensure that in this discussion we also address press freedom, media sustainability and the safety of journalists around the world, so that we don’t forget those who are fighting on the front lines and make sure that they can do their work freely. Thank you.


Charlotte Scaddan: Thank you. Another important point. Please.


Abdelouahab Yagoubi: Thank you, Charlotte. I think we should together draw on the work of the Council of Europe on democracy and artificial intelligence. And I invite you to visit our website, pam.int, to follow the daily digest we publish on artificial intelligence. Thank you.


Marjorie Buchser: In a context where there is a tendency to say we have to regulate now, I think it’s important to think of information integrity not only in terms of the risk of manipulation, but also as an investment: an investment in creating and digitalizing local resources, culture and language, and in integrating local solutions. So it’s not only control; it’s also investment in enabling cultural and linguistic diversity to be part of the online ecosystem.


Dominique Hazael-Massieux: Yeah, and I’ll just reiterate my call to work with the technical community as you develop new regulation in this space, making sure you understand what we are also trying to build in defending the web and the internet from this flood of mis- and disinformation. I’ll be more than happy to hear from you after this event to see how that could work. Thank you very much.


Charlotte Scaddan: And I don’t know if we have Lindsay online still. Lindsay? No? Okay. Well, with that, we’ll wrap up. Thank you so much for bearing with us through the technical challenges. We really appreciate it. It was an excellent discussion. Thank you all. And I believe now there is a coffee break. Yes. Okay. Thank you.



Lindsay Gorman

Speech speed

141 words per minute

Speech length

2308 words

Speech time

981 seconds

AI dramatically transforms information landscape through democratized creation of realistic deepfake content

Explanation

AI has made it possible for anyone to create realistic video, audio, images and text content that is fully generated by AI, fundamentally changing the information environment. This democratization of content creation capabilities is happening rapidly and is already impacting democratic processes and communications.


Evidence

The ability to create realistic deepfake content has ‘exploded’ and is ‘absolutely impacting democratic processes’


Major discussion point

AI’s Impact on Information Environment and Democratic Processes


Topics

Sociocultural | Legal and regulatory


Agreed with

– Camille Grenier
– Marjorie Buchser

Agreed on

AI dramatically transforms information landscape and poses significant risks to democratic processes


Over one-third of 2024 global elections had major deepfake campaigns with 133+ documented instances

Explanation

Research by the German Marshall Fund tracked deepfake campaigns around global elections in 2024, finding widespread use of this technology in electoral contexts. The campaigns included various forms of manipulation from compromising politicians to fake policy statements.


Evidence

Specific examples include politicians deepfaked in compromising positions, fake audio of election tampering, a candidate faked to say they wanted to raise beer prices, and candidates using AI for campaign materials in Argentina


Major discussion point

AI’s Impact on Information Environment and Democratic Processes


Topics

Sociocultural | Human rights


Democracy-affirming technologies must have democratic values like transparency, privacy, and accountability built into their core

Explanation

The next generation of technologies needs to be designed with democratic values from the beginning, rather than trying to regulate them after development. This represents a shift from the naive assumption that technologies like social media would be inherently democratizing.


Evidence

Social media was initially thought to be democratizing due to the Arab Spring protests gaining steam online, but ‘sitting here today, we can all say that that absolutely has not come to pass’ and the effect has been ‘a net negative’


Major discussion point

Technical Solutions and Standards Development


Topics

Infrastructure | Human rights


Agreed with

– Camille Grenier
– Marjorie Buchser
– Tateishi Toshiaki

Agreed on

Transparency and accountability are essential for addressing information integrity challenges


Disagreed with

– Marjorie Buchser
– Audience

Disagreed on

Primary approach to addressing AI and information integrity challenges


Content authenticity technologies provide digital ingredient lists showing how content was created and modified

Explanation

These technologies work like citations in academic texts, providing transparency about how images and videos have been created and modified from camera to social media feed. This allows users to make informed decisions about whether to trust the content.


Evidence

Partnership with Mexican photojournalist agency Obturador MX and Canadian Broadcasting Corporation created first tamper-resistant repository of election imagery using Microsoft and TruePic technology


Major discussion point

Technical Solutions and Standards Development


Topics

Infrastructure | Sociocultural


Need for research scholarships and career paths for democracy-affirming technology development

Explanation

There should be alternative career paths for young technologists who want to build democratic values into technology, rather than just joining large technology companies. This requires investment from governments, philanthropic organizations, and advocacy groups.


Evidence

Conducted hackathon in Mexico City around election context and participated as jury member in Germany’s Sprint innovation agency competition for AI to fight misinformation


Major discussion point

Investment and Innovation Needs


Topics

Development | Economic



Camille Grenier

Speech speed

144 words per minute

Speech length

1154 words

Speech time

478 seconds

Big tech business models prioritize monetization over public interest, creating dependencies and facilitating weaponization of information

Explanation

Research consistently shows that big tech companies’ business models focus on profit maximization, which creates dependencies for organizations and individuals. These models make social media platforms attractive targets for misinformation campaigns that are incompatible with a diverse public sphere.


Evidence

Meta-analysis covering 3,000 academic sources from 84 countries through consultative process with 400+ experts


Major discussion point

Business Models and Platform Accountability


Topics

Economic | Legal and regulatory


Agreed with

– Lindsay Gorman
– Marjorie Buchser

Agreed on

AI dramatically transforms information landscape and poses significant risks to democratic processes


Monopolistic power of big tech leads to harmful discrimination and exclusion from information ecosystems

Explanation

The concentration of power among big tech companies results in exclusion and inequitable inclusion in information ecosystems at local, national, and regional levels. This monopolistic control creates barriers to diverse participation in the information environment.


Evidence

Findings from Forum on Information and Democracy meta-analysis report covering 250 pages with research from 84 countries


Major discussion point

Business Models and Platform Accountability


Topics

Economic | Human rights


Media and information literacy crucial but not standalone solution to misinformation problems

Explanation

While media and AI literacy training is essential for addressing misinformation, it cannot be the only approach to solving these problems. There is insufficient systematic evidence of literacy initiatives globally and inadequate attention to children’s literacy needs.


Evidence

Meta-analysis findings showing need for more systematic evidence of literacy initiatives and insufficient attention to children’s literacy


Major discussion point

Education and Literacy Solutions


Topics

Sociocultural | Development


Agreed with

– Marjorie Buchser
– Abdelouahab Yagoubi

Agreed on

Education and literacy are crucial but insufficient as standalone solutions


Disagreed with

– Abdelouahab Yagoubi

Disagreed on

Role and effectiveness of media literacy as a solution


Meta-analysis of 3,000 academic sources reveals Western bias in research, need for Global South perspectives

Explanation

Research on misinformation and information integrity is heavily concentrated in Europe and North America, creating a significant gap in understanding from the global majority world. This bias limits the comprehensiveness of solutions and understanding of information ecosystem challenges globally.


Evidence

Research concentrated in Europe and Northern America, with Forum actively trying to address this through their global initiative


Major discussion point

Research and Evidence Gaps


Topics

Development | Sociocultural


Agreed with

– Marjorie Buchser
– Audience

Agreed on

Global South and diverse perspectives are underrepresented in research and solutions


Framework needed to ensure researchers, civil society, and journalists have access to platform data

Explanation

Better understanding of information ecosystems requires improved access to data from platforms for research purposes. Current lack of access limits the ability to study and address misinformation effectively.


Evidence

Advocacy work by Forum on Information and Democracy and others in the room for more data access


Major discussion point

Research and Evidence Gaps


Topics

Legal and regulatory | Human rights


Agreed with

– Lindsay Gorman
– Marjorie Buchser
– Tateishi Toshiaki

Agreed on

Transparency and accountability are essential for addressing information integrity challenges


Platforms claim AI and deepfakes aren’t major problems, showing economic disinterest in smaller markets

Explanation

Despite having tech agreements on AI and deepfakes during election year, platforms later claimed they don’t see significant problems with AI and deepfakes. This reflects economic disinterest in markets that aren’t profitable, with different levels of responsiveness between regions.


Evidence

Example of former Tunisian regulator head saying that in Europe platforms answer phones and sometimes comply, but in Tunisia they don’t pick up the phone or send queries to wrong regions


Major discussion point

Business Models and Platform Accountability


Topics

Economic | Legal and regulatory


Disagreed with

– Audience

Disagreed on

Platform cooperation and commercial interests


Latin American countries showing interesting regulatory developments focusing on processes rather than content

Explanation

Several Latin American countries are developing innovative approaches to AI regulation that focus on regulating processes and systems rather than content. This represents a different model from content-focused regulation approaches.


Evidence

Brazil has ongoing AI regulation discussions in Parliament, Uruguay and Dominican Republic are starting work in this area


Major discussion point

Global Cooperation and Implementation


Topics

Legal and regulatory | Development


Implementation of existing regulations like DSA crucial, with parliamentarians having important role

Explanation

Having legislation like the Digital Services Act is only the first step; proper implementation at national levels is essential. Parliamentarians need to act quickly to ensure these tools are used effectively to protect democracies.


Evidence

Portugal still doesn’t have national law implementing the DSA despite its existence


Major discussion point

Global Cooperation and Implementation


Topics

Legal and regulatory | Human rights


Supporting free, independent media and journalist safety essential alongside fighting disinformation

Explanation

A comprehensive approach to fighting disinformation must include ensuring that journalists and media can operate freely and safely. Press freedom, media sustainability, and journalist safety are fundamental components of information integrity.


Major discussion point

Global Cooperation and Implementation


Topics

Human rights | Sociocultural



Marjorie Buchser

Speech speed

147 words per minute

Speech length

1868 words

Speech time

758 seconds

AI increases risk of information manipulation through convergence with digital platforms for broad dissemination

Explanation

The key problem is not just AI’s ability to create fake content easily, but its convergence with digital platforms that can disseminate this content to broad audiences. This combination creates unprecedented opportunities for information manipulation and confusion about content authenticity.


Evidence

Generative AI allows easy creation of fake content while digital platforms enable broad dissemination


Major discussion point

AI’s Impact on Information Environment and Democratic Processes


Topics

Sociocultural | Legal and regulatory


Agreed with

– Lindsay Gorman
– Camille Grenier

Agreed on

AI dramatically transforms information landscape and poses significant risks to democratic processes


AI fundamentally changes how people access information, with younger generations bypassing established news sources

Explanation

Younger generations increasingly use AI applications and recommender feeds instead of traditional news websites, which removes traffic from established journalistic sources. Unlike search engines that lead to original sources, generative AI aggregates information but is poor at citing references.


Evidence

Traditional search engines act as intermediaries leading to journalistic sources, while generative AI aggregates different versions but is ‘notoriously bad in citing, in quoting or references’


Major discussion point

AI’s Impact on Information Environment and Democratic Processes


Topics

Sociocultural | Human rights


AI poses significant threats to pluralism and diversity due to training on English-speaking, Global North data

Explanation

AI models are fundamentally trained on English-speaking data from the internet generated mainly in the Global North, which means they aggregate a specific vision of the world with inherent biases. This represents one of the most significant threats to plurality and diversity in information ecosystems.


Evidence

AI models trained on English speaking data from the internet mainly generated in the global north


Major discussion point

AI’s Impact on Information Environment and Democratic Processes


Topics

Sociocultural | Human rights


Agreed with

– Camille Grenier
– Audience

Agreed on

Global South and diverse perspectives are underrepresented in research and solutions


Need for outcome-based approach focusing on systems and processes rather than regulating every piece of content

Explanation

Rather than trying to regulate every piece of content or algorithm, which is impossible and has chilling effects on freedom of expression, the focus should be on systems and processes that mitigate negative outcomes. This represents a more effective and rights-respecting approach to governance.


Evidence

Tendency to regulate every piece of content and algorithm is ‘impossible but also has a very chilling effect on freedom of expression’


Major discussion point

Governance and Regulatory Approaches


Topics

Legal and regulatory | Human rights


Disagreed with

– Lindsay Gorman
– Audience

Disagreed on

Primary approach to addressing AI and information integrity challenges


Importance of multi-stakeholder, contextual approaches involving diverse local and regional perspectives

Explanation

AI development by commercial labs represents a specific vision and culture, making it essential to have multi-stakeholder perspectives from different local and regional levels. This diversity needs to be integrated at every stage of the AI lifecycle from data to outputs.


Evidence

AI developed by commercial labs with ‘very specific vision of the world with a very specific culture’


Major discussion point

Governance and Regulatory Approaches


Topics

Legal and regulatory | Development


Agreed with

– Charlotte Scaddan
– Dominique Hazael-Massieux
– Abdelouahab Yagoubi

Agreed on

Multi-stakeholder approaches are necessary for effective governance


Multi-layer governance approach combining statutory regulation with voluntary commitments

Explanation

Effective governance requires different layers from core regulatory frameworks to voluntary commitments, as some aspects are technical and less suitable for regulatory frameworks. This multi-layered approach can address different aspects of the challenge more effectively.


Evidence

Some aspects ‘can be very technical, so less prone to regulatory framework and more for voluntary commitment’


Major discussion point

Governance and Regulatory Approaches


Topics

Legal and regulatory | Infrastructure


Enhanced transparency needed as current understanding of generative AI usage and impact is insufficient

Explanation

There is currently insufficient knowledge about who uses generative AI, for what purposes, and what the impacts are. Transparency indices show significant gaps in understanding, with no indicators of how AI impacts different regions, cultures, and populations.


Evidence

Scientific organizations’ transparency indices show ‘we just don’t know’ about AI usage and impacts


Major discussion point

Research and Evidence Gaps


Topics

Legal and regulatory | Infrastructure


Agreed with

– Lindsay Gorman
– Camille Grenier
– Tateishi Toshiaki

Agreed on

Transparency and accountability are essential for addressing information integrity challenges


AI literacy training needed to help users critically assess tools and understand their limitations

Explanation

As users increasingly access information through AI, it’s essential to help them critically assess these tools and understand their limitations. This includes clear labeling of content provenance and context to help users understand both beneficial uses and limitations.


Evidence

Users ‘over rely on AI’ and ‘trust AI outputs even more than some journalistic content’ despite knowing AI can confabulate and is biased


Major discussion point

Education and Literacy Solutions


Topics

Sociocultural | Development


Agreed with

– Camille Grenier
– Abdelouahab Yagoubi

Agreed on

Education and literacy are crucial but insufficient as standalone solutions


Public investment in open solutions necessary to support freedom of expression and cultural diversity

Explanation

Commercial dynamics alone are insufficient to provide the diversity needed in information ecosystems. Public investment is required to digitalize low-resource languages and cultures, and in open solutions that allow more parties to develop context-specific models.


Evidence

Commercial dynamics ‘are just insufficient to provide the diversity that we need’


Major discussion point

Investment and Innovation Needs


Topics

Development | Human rights


Rapid evolution of AI technology creates deep uncertainty about management and mitigation strategies

Explanation

Generative AI changes and performs drastically differently almost monthly, a pace of evolution coupled with deep uncertainty about how to manage and mitigate it. Even technical experts struggle with this pace and with the lack of evidence about such a nascent technology.


Evidence

Technology is ‘changing and performing drastically differently almost on a monthly basis’ with ‘deep uncertainty on how actually to manage and mitigate it’


Major discussion point

Research and Evidence Gaps


Topics

Infrastructure | Legal and regulatory


Information integrity should be viewed as investment in creating diverse online ecosystems, not just control

Explanation

Rather than focusing solely on control and regulation, information integrity should be seen as an investment opportunity in creating and digitalizing diverse resources, cultures, and languages. This positive approach enables cultural and linguistic diversity to be part of the online ecosystem.


Major discussion point

Investment and Innovation Needs


Topics

Development | Sociocultural



Dominique Hazael-Massieux

Speech speed

143 words per minute

Speech length

1336 words

Speech time

559 seconds

W3C develops interoperable web standards to make solutions work globally, examining technologies like C2PA and TrustNet

Explanation

W3C takes a systematic approach to reviewing emerging technologies for content authenticity and trust, including C2PA for content-authenticity certification and TrustNet, which uses social trust relationships. The goal is to bring to scale technologies that can work globally across different markets and regions.


Evidence

Specific technologies mentioned include C2PA for content authenticity, TrustNet from MIT using social trust relationships, and Originator Profile from Japan


Major discussion point

Technical Solutions and Standards Development


Topics

Infrastructure | Legal and regulatory
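For readers unfamiliar with the "digital ingredient list" idea behind C2PA, the following Python sketch models it in deliberately simplified form. This is purely illustrative: real C2PA manifests are cryptographically signed structures embedded in media files, and every class and field name below is invented for this example.

```python
from dataclasses import dataclass, field

# Illustrative only: a toy model of a C2PA-style provenance manifest,
# i.e. a "digital ingredient list" of how a piece of content was made.
# Real C2PA manifests are signed binary structures embedded in media;
# the names here are invented for this sketch.

@dataclass
class ProvenanceAction:
    action: str   # e.g. "captured", "edited", "ai-generated"
    tool: str     # device or software that performed the step

@dataclass
class ProvenanceManifest:
    actions: list = field(default_factory=list)

    def record(self, action: str, tool: str) -> None:
        # Each creation or editing step is appended as an "ingredient".
        self.actions.append(ProvenanceAction(action, tool))

    def involves_ai(self) -> bool:
        # The consumer-facing question: did any step involve AI generation?
        return any(a.action == "ai-generated" for a in self.actions)

manifest = ProvenanceManifest()
manifest.record("captured", "camera-model-x")
manifest.record("ai-generated", "image-synthesis-tool")
print(manifest.involves_ai())  # True
```

In a real deployment the check would rest on verifying cryptographic signatures rather than trusting self-reported metadata, which is why standards bodies such as W3C focus on interoperability and shared trust anchors rather than the data model alone.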


Need for cooperation between regulators and technical community to ensure complementary approaches

Explanation

Standards only get adopted when the right incentives exist, including market incentives created by policymakers and regulators. There’s a critical need for stronger cooperation to ensure technical standards and policy approaches work together rather than against each other.


Evidence

W3C involves major technology providers, content publishers, civil society, NGOs, and government agencies in discussions


Major discussion point

Technical Solutions and Standards Development


Topics

Legal and regulatory | Infrastructure


Agreed with

– Charlotte Scaddan
– Marjorie Buchser
– Abdelouahab Yagoubi

Agreed on

Multi-stakeholder approaches are necessary for effective governance



Tateishi Toshiaki

Speech speed

101 words per minute

Speech length

466 words

Speech time

276 seconds

Trusted organizations and audit associations needed for internet credibility assessment

Explanation

The Japan Internet Providers Association believes transparency is the most important factor in providing a trusted internet to users, which requires creating audit associations and organizations. This involves the technical management of trusted digital content and addressing emergency-response issues.


Evidence

Example of false emergency call during Noto earthquake where firefighters responded but found no one, highlighting need for trusted verification systems


Major discussion point

Technical Solutions and Standards Development


Topics

Infrastructure | Cybersecurity


Agreed with

– Lindsay Gorman
– Camille Grenier
– Marjorie Buchser

Agreed on

Transparency and accountability are essential for addressing information integrity challenges



Abdelouahab Yagoubi

Speech speed

88 words per minute

Speech length

719 words

Speech time

488 seconds

Parliamentary responsibility includes designing robust legislative frameworks while respecting human rights

Explanation

Parliamentarians must create strong legislative and ethical frameworks to regulate digital spaces while maintaining respect for human rights. This includes strengthening alliances with various stakeholders and investing in digital education as the best defense against manipulation.


Evidence

Parliamentary Assembly of the Mediterranean is finalizing report on ‘The resilience of democratic systems in the face of the use of artificial intelligence, information and communication technologies, and emerging technologies’


Major discussion point

Governance and Regulatory Approaches


Topics

Legal and regulatory | Human rights


Agreed with

– Charlotte Scaddan
– Marjorie Buchser
– Dominique Hazael-Massieux

Agreed on

Multi-stakeholder approaches are necessary for effective governance


Digital education of citizens essential as best weapon against manipulation is knowledge

Explanation

The most effective defense against information manipulation is educating citizens with digital literacy skills. This educational approach is fundamental to building resilience against misinformation and disinformation campaigns.


Evidence

Parliamentary Assembly of the Mediterranean launched communication campaign against hate speech and publishes daily bulletin on AI and emerging technologies


Major discussion point

Education and Literacy Solutions


Topics

Sociocultural | Development


Agreed with

– Camille Grenier
– Marjorie Buchser

Agreed on

Education and literacy are crucial but insufficient as standalone solutions


Disagreed with

– Camille Grenier

Disagreed on

Role and effectiveness of media literacy as a solution


African countries need to work together and cooperate on AI regulation, following models like Saudi Arabia’s ethics platform

Explanation

African nations should collaborate on AI regulation rather than working in isolation, learning from successful models like Saudi Arabia's ethics platform. Algeria's proposal for a law regulating AI, introduced following the European AI Act, is one example of this approach.


Evidence

Algeria introduced proposal for law to regulate AI usage after European Act on Artificial Intelligence, Saudi Arabia’s ethics platform cited as good model


Major discussion point

Global Cooperation and Implementation


Topics

Legal and regulatory | Development



Audience

Speech speed

132 words per minute

Speech length

708 words

Speech time

321 seconds

Commercial interests make deepfakes attractive content for social media platforms, with platforms showing reluctance to cooperate on removal

Explanation

Deepfake content appears to serve deep commercial interests of social media platforms, making it easily accessible and prone to viral spread. Platforms demonstrate unwillingness to cooperate with countries on removing deepfake content when complaints are made.


Evidence

Question from Pakistani Senator with 32 years experience in telecommunications, noting platforms’ reluctance to cooperate on deepfake removal


Major discussion point

Business Models and Platform Accountability


Topics

Economic | Legal and regulatory


Disagreed with

– Camille Grenier

Disagreed on

Platform cooperation and commercial interests


Countries should develop homegrown legislation rather than copying models from dominant regions

Explanation

Instead of following top-down models like American or European approaches to data protection and AI regulation, countries should develop locally appropriate laws that speak their language and reflect their contexts. This is particularly important for ensuring diverse voices feed into AI language models.


Evidence

Reference to data protection following American model and GDPR from Europe, with concern about African countries being told to follow rather than develop their own approaches


Major discussion point

Governance and Regulatory Approaches


Topics

Legal and regulatory | Development


Agreed with

– Camille Grenier
– Marjorie Buchser

Agreed on

Global South and diverse perspectives are underrepresented in research and solutions


Disagreed with

– Lindsay Gorman
– Marjorie Buchser

Disagreed on

Primary approach to addressing AI and information integrity challenges



Charlotte Scaddan

Speech speed

158 words per minute

Speech length

2318 words

Speech time

876 seconds

UN Global Principles for Information Integrity provide foundational framework with five key principles for multi-stakeholder action

Explanation

The UN Secretary-General launched the Global Principles for Information Integrity as a multi-stakeholder framework addressing growing risks to information ecosystem integrity. The five principles are: societal trust and resilience, healthy incentives, public empowerment, independent free and pluralistic media, and transparency and research, all anchored in human rights and freedom of expression.


Evidence

Principles launched by UN Secretary-General Antonio Guterres, with specific examples like the Global Initiative for Information Integrity on Climate Change led by Brazil, UN, and UNESCO


Major discussion point

UN Global Framework and Principles


Topics

Legal and regulatory | Human rights


Information integrity risks are accelerating due to emerging technologies, particularly affecting vulnerable groups during crises and elections

Explanation

The pace of information integrity risks is accelerating with emerging technologies, their scope is expanding, and their impact is deepening. These risks particularly harm vulnerable and marginalized groups during times of crisis and important societal moments such as elections.


Evidence

Risks include misinformation, disinformation, hate speech, media suppression, and lack of access to reliable information, all of which undermine human rights


Major discussion point

UN Global Framework and Principles


Topics

Human rights | Sociocultural


Global Digital Compact Action 35E focuses on strengthening information integrity to support sustainable development goals

Explanation

The Global Digital Compact, agreed by all UN member states, includes a specific action on strengthening information integrity: Action 35E focuses on assessing and supporting efforts to ensure that the Sustainable Development Goals are not impeded by misinformation and disinformation.


Evidence

Global Digital Compact agreed by all UN member states in September, with specific reference to Action 35E


Major discussion point

UN Global Framework and Principles


Topics

Development | Legal and regulatory


Multi-stakeholder approach essential for addressing global information integrity challenges

Explanation

The principles call for bringing together diverse actors including governments, civil society, media, academia, private sector, and local communities to implement solutions. This collaborative approach is necessary to meet different information integrity needs across various contexts.


Evidence

Examples include the Global Initiative for Information Integrity on Climate Change and the diverse panel on climate information integrity in lead-up to COP30


Major discussion point

UN Global Framework and Principles


Topics

Legal and regulatory | Development


Agreed with

– Marjorie Buchser
– Dominique Hazael-Massieux
– Abdelouahab Yagoubi

Agreed on

Multi-stakeholder approaches are necessary for effective governance


Implementation of existing legislation like DSA requires urgent action from parliamentarians at national level

Explanation

Having legislation in place is only the first step; proper implementation at national levels is crucial for effectiveness. Parliamentarians have a key role in ensuring these regulatory tools are properly implemented to protect democratic systems.


Evidence

Example given of Portugal still not having national law implementing the DSA despite its existence


Major discussion point

Global Cooperation and Implementation


Topics

Legal and regulatory | Human rights


Technical solutions and innovation must be prioritized alongside regulation to address information integrity challenges

Explanation

The discussion emphasized the importance of focusing on solutions and innovation rather than just identifying risks. Attracting talent and developing effective technical solutions is essential for moving forward in addressing information integrity challenges.


Evidence

Appreciation expressed for focus on innovation and solutions, noting the need to attract talent for effective solutions


Major discussion point

Technical Solutions and Standards Development


Topics

Infrastructure | Development


Agreements

Agreement points

AI dramatically transforms information landscape and poses significant risks to democratic processes

Speakers

– Lindsay Gorman
– Camille Grenier
– Marjorie Buchser

Arguments

AI dramatically transforms information landscape through democratized creation of realistic deepfake content


Big tech business models prioritize monetization over public interest, creating dependencies and facilitating weaponization of information


AI increases risk of information manipulation through convergence with digital platforms for broad dissemination


Summary

All speakers agree that AI fundamentally changes the information environment in ways that threaten democratic processes, whether through deepfakes, business model incentives, or manipulation capabilities


Topics

Sociocultural | Legal and regulatory


Transparency and accountability are essential for addressing information integrity challenges

Speakers

– Lindsay Gorman
– Camille Grenier
– Marjorie Buchser
– Tateishi Toshiaki

Arguments

Democracy-affirming technologies must have democratic values like transparency, privacy, and accountability built into their core


Framework needed to ensure researchers, civil society, and journalists have access to platform data


Enhanced transparency needed as current understanding of generative AI usage and impact is insufficient


Trusted organizations and audit associations needed for internet credibility assessment


Summary

All speakers emphasize that transparency and accountability mechanisms are fundamental to addressing information integrity challenges, whether through technology design, data access, or institutional frameworks


Topics

Legal and regulatory | Infrastructure


Multi-stakeholder approaches are necessary for effective governance

Speakers

– Charlotte Scaddan
– Marjorie Buchser
– Dominique Hazael-Massieux
– Abdelouahab Yagoubi

Arguments

Multi-stakeholder approach essential for addressing global information integrity challenges


Importance of multi-stakeholder, contextual approaches involving diverse local and regional perspectives


Need for cooperation between regulators and technical community to ensure complementary approaches


Parliamentary responsibility includes designing robust legislative frameworks while respecting human rights


Summary

Speakers consistently advocate for inclusive, multi-stakeholder governance approaches that bring together diverse perspectives and expertise to address information integrity challenges


Topics

Legal and regulatory | Development


Education and literacy are crucial but insufficient as standalone solutions

Speakers

– Camille Grenier
– Marjorie Buchser
– Abdelouahab Yagoubi

Arguments

Media and information literacy crucial but not standalone solution to misinformation problems


AI literacy training needed to help users critically assess tools and understand their limitations


Digital education of citizens essential as best weapon against manipulation is knowledge


Summary

Speakers agree that while education and literacy are essential components of addressing misinformation, they cannot be the only solution and must be part of a broader strategy


Topics

Sociocultural | Development


Global South and diverse perspectives are underrepresented in research and solutions

Speakers

– Camille Grenier
– Marjorie Buchser
– Audience

Arguments

Meta-analysis of 3,000 academic sources reveals Western bias in research, need for Global South perspectives


AI poses significant threats to pluralism and diversity due to training on English-speaking, Global North data


Countries should develop homegrown legislation rather than copying models from dominant regions


Summary

There is strong consensus that current approaches to information integrity are dominated by Western/Global North perspectives, and there is urgent need for more diverse, locally-appropriate solutions


Topics

Development | Sociocultural


Similar viewpoints

Both speakers advocate for technical solutions that provide transparency about content provenance and authenticity, with Lindsay focusing on content authentication and Dominique on web standards for similar technologies

Speakers

– Lindsay Gorman
– Dominique Hazael-Massieux

Arguments

Content authenticity technologies provide digital ingredient lists showing how content was created and modified


W3C develops interoperable web standards to make solutions work globally, examining technologies like C2PA and TrustNet


Topics

Infrastructure | Legal and regulatory


Both express frustration with platform companies’ lack of cooperation and accountability, particularly their different treatment of different markets based on economic interests

Speakers

– Camille Grenier
– Audience

Arguments

Platforms claim AI and deepfakes aren’t major problems, showing economic disinterest in smaller markets


Commercial interests make deepfakes attractive content for social media platforms, with platforms showing reluctance to cooperate on removal


Topics

Economic | Legal and regulatory


Both advocate for systemic approaches to regulation that focus on processes and outcomes rather than content-specific regulation, emphasizing the importance of proper implementation

Speakers

– Marjorie Buchser
– Camille Grenier

Arguments

Need for outcome-based approach focusing on systems and processes rather than regulating every piece of content


Implementation of existing regulations like DSA crucial, with parliamentarians having important role


Topics

Legal and regulatory | Human rights


Unexpected consensus

Investment and innovation focus rather than purely regulatory approaches

Speakers

– Lindsay Gorman
– Marjorie Buchser
– Charlotte Scaddan

Arguments

Need for research scholarships and career paths for democracy-affirming technology development


Information integrity should be viewed as investment in creating diverse online ecosystems, not just control


Technical solutions and innovation must be prioritized alongside regulation to address information integrity challenges


Explanation

Unexpectedly, speakers from different backgrounds (policy research, UN agencies, and session moderation) converged on the need for positive investment in solutions rather than focusing primarily on restrictions and regulation


Topics

Development | Infrastructure


Rapid pace of AI development creates uncertainty even among experts

Speakers

– Marjorie Buchser
– Lindsay Gorman

Arguments

Rapid evolution of AI technology creates deep uncertainty about management and mitigation strategies


Over one-third of 2024 global elections had major deepfake campaigns with 133+ documented instances


Explanation

Both technical and policy experts acknowledge the unprecedented speed of AI development and its real-world impacts, showing unusual consensus on the challenge of keeping pace with technological change


Topics

Infrastructure | Legal and regulatory


Need for contextual, locally-appropriate solutions rather than universal approaches

Speakers

– Abdelouahab Yagoubi
– Camille Grenier
– Audience
– Charlotte Scaddan

Arguments

African countries need to work together and cooperate on AI regulation, following models like Saudi Arabia’s ethics platform


Latin American countries showing interesting regulatory developments focusing on processes rather than content


Countries should develop homegrown legislation rather than copying models from dominant regions


Implementation of existing legislation like DSA requires urgent action from parliamentarians at national level


Explanation

Speakers from different regions and roles unexpectedly agreed that one-size-fits-all approaches don’t work, and that local context and regional cooperation are essential for effective solutions


Topics

Legal and regulatory | Development


Overall assessment

Summary

Strong consensus emerged around core challenges (AI risks, platform accountability, need for transparency) and solution approaches (multi-stakeholder governance, investment in innovation, contextual implementation). Speakers consistently emphasized that technical solutions, education, and regulation must work together rather than as isolated approaches.


Consensus level

High level of consensus with significant implications for policy development. The agreement across diverse stakeholders (UN agencies, technical organizations, parliamentarians, civil society) suggests these priorities have broad legitimacy and could form the basis for coordinated international action on information integrity.


Differences

Different viewpoints

Primary approach to addressing AI and information integrity challenges

Speakers

– Lindsay Gorman
– Marjorie Buchser
– Audience

Arguments

Democracy-affirming technologies must have democratic values like transparency, privacy, and accountability built into their core


Need for outcome-based approach focusing on systems and processes rather than regulating every piece of content


Countries should develop homegrown legislation rather than copying models from dominant regions


Summary

Lindsay emphasizes building democratic values into technology from the start through innovation; Marjorie advocates for system-based governance approaches rather than content regulation; audience members push for locally developed regulatory solutions rather than copying Western models


Topics

Legal and regulatory | Infrastructure | Development


Role and effectiveness of media literacy as a solution

Speakers

– Camille Grenier
– Abdelouahab Yagoubi

Arguments

Media and information literacy crucial but not standalone solution to misinformation problems


Digital education of citizens essential as best weapon against manipulation is knowledge


Summary

Camille argues that media literacy is important but insufficient on its own and cannot be a panacea, while Abdelouahab positions digital education as the primary and most effective defense against manipulation


Topics

Sociocultural | Development


Platform cooperation and commercial interests

Speakers

– Camille Grenier
– Audience

Arguments

Platforms claim AI and deepfakes aren’t major problems, showing economic disinterest in smaller markets


Commercial interests make deepfakes attractive content for social media platforms, with platforms showing reluctance to cooperate on removal


Summary

Camille suggests platforms downplay the deepfake problem due to economic disinterest in smaller markets, while audience members argue that commercial interests actually make deepfakes attractive content that platforms are reluctant to remove


Topics

Economic | Legal and regulatory


Unexpected differences

Effectiveness of current regulatory frameworks like the DSA

Speakers

– Camille Grenier
– Audience

Arguments

Implementation of existing regulations like DSA crucial, with parliamentarians having important role


Countries should develop homegrown legislation rather than copying models from dominant regions


Explanation

While Camille advocates for better implementation of existing frameworks like the DSA, audience members question whether copying Western regulatory models is appropriate. This suggests a fundamental disagreement about the universality versus localization of regulatory approaches, one that was not anticipated given the general consensus on multi-stakeholder cooperation


Topics

Legal and regulatory | Development


Platform motivations regarding deepfakes and AI content

Speakers

– Camille Grenier
– Audience

Arguments

Platforms claim AI and deepfakes aren’t major problems, showing economic disinterest in smaller markets


Commercial interests make deepfakes attractive content for social media platforms, with platforms showing reluctance to cooperate on removal


Explanation

This represents an unexpected disagreement about platform behavior: whether platforms lack the economic incentive to address AI and deepfake issues, or whether they actively benefit from such content and therefore resist cooperation. The distinction has significant implications for regulatory strategy


Topics

Economic | Legal and regulatory


Overall assessment

Summary

The discussion revealed moderate levels of disagreement primarily around implementation approaches rather than fundamental goals. Key areas of disagreement included the balance between innovation versus regulation, the role of media literacy, platform motivations, and whether to adopt global standards or develop local solutions.


Disagreement level

The disagreement level is moderate but significant for policy implications. While speakers generally agreed on the importance of addressing information integrity challenges, they differed substantially on methods – some favoring technological innovation, others regulatory frameworks, and still others emphasizing education. These disagreements reflect deeper tensions between global standardization and local contextualization, between market-based and regulatory solutions, and between proactive innovation and reactive governance approaches. The implications are substantial as these different approaches could lead to fragmented or conflicting policy responses globally.




Takeaways

Key takeaways

AI is dramatically transforming the information landscape with over one-third of 2024 global elections experiencing major deepfake campaigns, demonstrating the technology’s real-world impact on democratic processes


Big tech business models prioritize profit over public interest, creating systemic vulnerabilities that facilitate information manipulation and show reluctance to cooperate with governments on content removal


Effective governance requires outcome-based approaches focusing on systems and processes rather than content regulation, combined with multi-stakeholder collaboration that includes diverse regional perspectives


Technical solutions like democracy-affirming technologies and content authenticity standards are being developed, but require cooperation between regulators and technical communities for successful implementation


Research shows significant Western bias in information integrity studies, highlighting the need for Global South perspectives and better access to platform data for researchers


Media literacy is crucial but insufficient as a standalone solution – it must be combined with structural reforms, public investment, and support for independent journalism


Countries should develop context-appropriate, homegrown legislation rather than copying regulatory models from dominant regions like the US or EU


Information integrity should be viewed as an investment opportunity in cultural and linguistic diversity, not just a control mechanism


Resolutions and action items

Parliamentarians should work with the technical community (particularly W3C) when developing new regulations to ensure complementary approaches


Countries need to accelerate implementation of existing regulations like the DSA, with parliamentarians playing a key role in national-level implementation


African countries should cooperate on AI regulation development, potentially following models like Saudi Arabia’s ethics platform


Investment needed in research scholarships and career paths for democracy-affirming technology development


Framework should be established to ensure researchers, civil society, and journalists have better access to platform data


Public investment required in open solutions, low-resource languages, and cultural digitalization to support diversity in information ecosystems


Enhanced transparency measures needed from AI companies regarding usage, impact, and decision-making processes


Unresolved issues

How to effectively compel platform cooperation on deepfake and misinformation removal, particularly given commercial interests that may benefit from viral false content


Whether countries should prioritize mass digital literacy education or focus on controlling technology development and deployment in their jurisdictions


How to balance the rapid pace of AI development with the need for evidence-based policymaking when the technology is evolving faster than research can assess its impacts


How to address the fundamental challenge that AI models are trained primarily on English-language data from the Global North, potentially marginalizing other cultures and languages


Whether current regulatory approaches like the DSA go too far, not far enough, or are missing key components for protecting democracy


How to create sustainable funding models for independent journalism and media diversity in the face of AI-driven changes to information consumption patterns


How to develop effective international cooperation mechanisms when different regions have varying regulatory approaches and technical capabilities


Suggested compromises

Multi-layer governance approach combining statutory regulation with voluntary commitments to address both technical and policy aspects


Focus on regulating processes and systems rather than specific content to balance free expression concerns with harm prevention


Develop context-appropriate national implementations of international frameworks rather than one-size-fits-all global solutions


Combine innovation incentives with regulatory oversight to encourage democracy-affirming technology development while maintaining accountability


Balance transparency requirements with technical feasibility by working collaboratively between regulators and technology developers


Treat information integrity as both a control mechanism and an investment opportunity to satisfy both security and development concerns


Thought provoking comments

We found that over a third of elections last year had these major deepfake campaigns associated with them. And we found 133 and counting instances of these big deepfake campaigns, specifically around global elections… the speed with which we went from, we should be worried about deep fakes, oh, but maybe deep fakes are too big of an overhype… to where we are today, where they are, I think, our research shows a fact of life in modern day elections, has really been zero to 60 in a nanosecond there.

Speaker

Lindsay Gorman


Reason

This comment provided concrete, quantified evidence of the rapid acceleration of AI-generated disinformation in democratic processes, moving the discussion from theoretical concerns to documented reality. It challenged any remaining skepticism about the immediacy of the threat.


Impact

This set the urgent tone for the entire session and established the factual foundation that other panelists built upon. It shifted the conversation from ‘whether’ AI poses risks to ‘how’ to address these documented threats, influencing subsequent speakers to focus on solutions rather than debating the existence of the problem.


Media and information literacy and AI literacy training is crucial, but it is not a standalone answer to the mis- and disinformation problem… we clearly need more systematic evidence of these initiatives globally and over time.

Speaker

Camille Grenier


Reason

This comment challenged the common assumption that education alone can solve disinformation problems, introducing nuance to what is often presented as a simple solution. It highlighted the gap between popular policy responses and their actual effectiveness.


Impact

This reframed the discussion away from over-relying on literacy as a panacea and pushed other speakers to consider more systemic approaches. It influenced later comments about the need for structural changes in platform governance and multi-layered solutions.


What generative AI does is that it aggregates different versions of this topic and brings it back to you. But it’s notoriously bad in citing, in quoting or referencing. So basically what it does, it removes traffic from established journalistic sources… there’s a tendency of users to use it not critically at all.

Speaker

Marjorie Buchser


Reason

This insight revealed a fundamental shift in how people access information that goes beyond traditional concerns about fake content. It identified how AI is restructuring the information ecosystem itself, potentially undermining legitimate journalism through changed user behavior.


Impact

This comment deepened the discussion by showing how AI threats extend beyond content manipulation to economic and structural disruption of reliable information sources. It influenced the conversation to consider broader systemic impacts on media sustainability and democratic information infrastructure.


When deepfakes can so easily be spread on the platforms, my question frankly is why is it so easy for the platforms to make deepfakes accessible… there is a deep commercial interest that leads deepfakes to become a content of choice on the social media platforms.

Speaker

Senator Manosha Rehman Khan (audience)


Reason

This question cut through technical discussions to expose the underlying economic incentives that may be driving the problem. It challenged the framing of the issue as primarily technical rather than commercial/political.


Impact

This shifted the discussion toward examining platform business models and commercial interests as root causes. It prompted Camille Grenier to acknowledge the economic disinterest of platforms in addressing problems in smaller markets, adding a crucial power dynamics perspective to the conversation.


Should we be starting by thinking about literacy of the masses, or should we be thinking about the controls on the development and the deployment in our jurisdictions?… are you seeing any examples of countries in the southern hemisphere that are making good steps in legislation and regulation that is homegrown and that is speaking our language?

Speaker

African parliamentarian (audience)


Reason

This comment challenged the implicit Western-centric approach to solutions and raised fundamental questions about technological sovereignty and cultural representation in AI development. It highlighted how current approaches may perpetuate global inequalities.


Impact

This question fundamentally reoriented the discussion toward issues of technological colonialism and the need for locally-appropriate solutions. It prompted speakers to acknowledge regional variations and the importance of Global South perspectives, moving beyond one-size-fits-all approaches.


We need to be building these technologies in from the get go, these values in from the get go to the next generation of technologies… every generation of technology is a new opportunity to create something different and to try something else out.

Speaker

Lindsay Gorman


Reason

This comment reframed the discussion from reactive regulation to proactive design, introducing the concept of ‘democracy-affirming technologies’ and shifting focus from controlling harmful tech to building beneficial alternatives.


Impact

This introduced an optimistic, forward-looking perspective that influenced other speakers to consider innovation and technical solutions alongside regulatory approaches. It helped balance the discussion between identifying problems and creating solutions.


Overall assessment

These key comments fundamentally shaped the discussion by moving it through several important transitions: from theoretical concerns to documented evidence of AI threats; from simple solutions to recognition of systemic complexity; from Western-centric approaches to acknowledgment of global power dynamics; and from purely regulatory responses to innovation-based solutions. The most impactful comments challenged assumptions, introduced data-driven perspectives, and highlighted structural issues that required the group to think more deeply about root causes rather than surface symptoms. The discussion evolved from a technical problem-solving session into a more nuanced examination of power, economics, and global equity in addressing information integrity challenges.


Follow-up questions

How can we address the Western bias in research on misinformation and disinformation?

Speaker

Camille Grenier


Explanation

Research is concentrated in Europe and Northern America, and more research is needed from the global majority world to understand information integrity challenges globally


How can we ensure researchers, civil society, and journalists have better access to platform data?

Speaker

Camille Grenier


Explanation

A framework is needed to provide access to data from platforms so there can be better understanding of what’s happening in the information ecosystem


What career paths can be created for young researchers and entrepreneurs who want to build democracy-affirming technologies?

Speaker

Lindsay Gorman


Explanation

There’s a need to create research scholarships and career opportunities beyond just joining large technology companies for those wanting to build technologies with democratic values


How can we measure the effectiveness of media and information literacy initiatives globally and over time?

Speaker

Camille Grenier


Explanation

There’s insufficient systematic evidence of these initiatives globally and over time, and insufficient attention to children’s literacy


How can we ensure AI systems represent diverse cultures, languages, and perspectives beyond the Global North?

Speaker

Marjorie Buchser


Explanation

AI models are trained on English-language data mainly from the Global North, creating biases and threatening the plurality and diversity of the information ecosystem


What indicators are needed to understand how generative AI impacts different regions, cultures, and people?

Speaker

Marjorie Buchser


Explanation

There’s currently no transparency about who uses generative AI, for what purpose, and what the impact is, particularly across different cultural contexts


How can countries develop homegrown legislation for AI regulation rather than copying models from other regions?

Speaker

Audience member from Kenya


Explanation

There’s concern about whether countries should develop locally appropriate laws rather than following top-down models like GDPR or American data protection frameworks


Why do social media platforms make it easy for deepfakes to spread and resist cooperation with countries to remove such content?

Speaker

Senator Manosha Rehman Khan from Pakistan


Explanation

There appears to be commercial interests that make deepfake content attractive to platforms, and they often don’t cooperate with government requests to remove such content


How can the implementation of the Digital Services Act (DSA) be accelerated at the national level?

Speaker

Camille Grenier


Explanation

Many countries still don’t have national laws implementing the DSA, and parliamentarians have an important role in using this tool to protect democracies


How can we ensure that content authenticity technologies and other democracy-affirming technologies gain widespread adoption?

Speaker

Lindsay Gorman and Dominique Hazael-Massieux


Explanation

These technologies only work if they are adopted, which requires the right market incentives and cooperation between technical communities and policymakers


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.