Parliamentary Roundtable: Safeguarding Democracy in the Digital Age – Legislative Priorities and Policy Pathways
23 Jun 2025 14:00h - 15:30h
Session at a glance
Summary
This discussion focused on safeguarding democracy in the digital age through legislative priorities and policy pathways, bringing together members of parliament from around the world to address how digital technologies impact democratic institutions. The session was part of the parliamentary track at the Internet Governance Forum (IGF) 2025, with participants from Norway, Kenya, California, Barbados, and Tajikistan sharing their national experiences and challenges.
Key speakers emphasized the urgent need to balance freedom of expression with combating misinformation and disinformation, particularly as artificial intelligence technologies blur the lines between fact and fiction. Norwegian MP Grunde Almeland highlighted concerns about truth becoming less relevant in political discourse, while advocating for strengthening independent media organizations as a crucial countermeasure. Kenyan Senator Catherine Mumma outlined her country’s comprehensive legal framework including cybercrime and data protection acts, while noting ongoing challenges with hate speech and electoral misinformation that sometimes leads to violence.
California Assembly Member Rebecca Bauer-Kahn discussed the state’s pioneering role in privacy legislation and AI regulation, including requirements for watermarking AI-generated content and disclosure in political advertisements. She emphasized the need for “technology for good” and increased investment in academic institutions to compete with profit-driven tech companies. Barbados MP Marsha Caddle shared experiences with deepfakes targeting political leaders and stressed the importance of democratic literacy and creating a culture of evidence-based information.
Several participants raised concerns about technological dumping by advanced economies onto developing nations, comparing it to historical patterns of exploitation. The discussion concluded with calls for stronger international cooperation, similar to nuclear weapons treaties, to hold big tech companies and advanced nations accountable for their global impact on democratic processes and human rights.
Key points
## Major Discussion Points:
– **Balancing Freedom of Expression with Combating Misinformation**: Parliamentarians discussed the challenge of protecting free speech while addressing the spread of false information, particularly how AI and deepfakes are blurring the lines between fact and fiction in democratic discourse.
– **Legislative Frameworks and International Cooperation**: Panel members shared their countries’ approaches to digital governance, from Kenya’s comprehensive legal framework to California’s privacy legislation, emphasizing the need for harmonized international standards rather than fragmented national approaches.
– **Electoral Integrity and Democratic Trust**: Significant focus on protecting elections from AI-generated disinformation, deepfakes, and manipulation, with examples ranging from deepfakes of political leaders to concerns about electronic voting systems across different jurisdictions.
– **Technology for Good vs. Profit-Driven Solutions**: Discussion of the need to invest in academic institutions and civil society to develop beneficial AI tools, rather than leaving technology development solely to well-funded private companies focused on profit.
– **Global Digital Divide and Technological Responsibility**: Strong emphasis on addressing “technological dumping” where advanced economies and big tech companies export harmful practices to developing nations, with calls for accountability similar to nuclear weapons treaties or climate agreements.
## Overall Purpose:
The discussion aimed to bring together parliamentarians from different countries to share legislative approaches and policy solutions for safeguarding democratic institutions in the digital age, while fostering international cooperation on digital governance frameworks.
## Overall Tone:
The discussion maintained a serious but collaborative tone throughout. It began with formal opening remarks emphasizing urgency and responsibility, evolved into practical sharing of national experiences and challenges, and concluded with passionate calls for global accountability and cooperation. While acknowledging significant challenges, the tone remained constructive and solution-oriented, with participants demonstrating mutual respect and shared commitment to democratic values despite representing diverse jurisdictions and political systems.
Speakers
**Speakers from the provided list:**
– **Nikolis Smith** – Founder and President of StratAlliance Global, a strategic advisory firm supporting public-private partnerships and technology policy engagement; Session moderator
– **Junhua LI** – Undersecretary General of the UN Department of Economic and Social Affairs
– **Martin Chungong** – Secretary General of the Inter-Parliamentary Union (appeared via video message)
– **Catherine Mumma** – Senator from Kenya
– **Rebecca Bauer-Kahn** – California Assembly Member, Chair of the Privacy and Consumer Protection Committee
– **Grunde Almeland** – Member of Parliament from Norway
– **Marsha Caddle** – Member of Parliament from Barbados, former Minister of Innovation and Technology
– **Zafar Alizoda** – Member of Parliament from Tajikistan
– **Audience** – Various audience members who asked questions during Q&A sessions
**Additional speakers:**
– **Kenneth Pugh** – Senator from Chile, South America
– **Mounir Souri** – Member of Parliament from the Kingdom of Bahrain
– **Hugo Carneiro** – Member of Parliament from Portugal
– **John K.J. Kiarie** – Member of Parliament from Kenya
– **Anna Luhmann** – Member of Parliament from Germany
Full session report
# Safeguarding Democracy in the Digital Age: A Parliamentary Perspective on Legislative Priorities and Policy Pathways
## Executive Summary
This comprehensive discussion brought together parliamentarians from across the globe to address the challenge of protecting democratic institutions in an era of rapid digital transformation. The session, moderated by Nikolis Smith of StratAlliance Global as part of the Internet Governance Forum (IGF) 2025 parliamentary track, featured representatives from Norway, Kenya, California, Barbados, Tajikistan, and other jurisdictions sharing their national experiences and legislative approaches to digital governance.
The discussion revealed both the universal nature of digital threats to democracy and the diverse approaches being taken to address them. From deepfakes targeting world leaders to sophisticated misinformation campaigns undermining electoral processes, participants shared practical experiences and legislative solutions while emphasizing the need for international cooperation and balanced approaches that protect both democratic processes and fundamental rights.
## Opening Context and Urgency
The session began with a video message from Martin Chungong, Secretary General of the Inter-Parliamentary Union, who emphasized that digital technologies have fundamentally altered the information landscape, creating an environment where governments struggle to distinguish fact from fiction and electoral processes face constant manipulation. He highlighted how artificial intelligence has transformed the misinformation landscape through deepfakes, AI-generated content, and algorithmic amplification.
Junhua Li, Undersecretary General of the UN Department of Economic and Social Affairs, reinforced the need for global cooperation on combating misinformation, noting that fragmented approaches risk undermining democratic discourse worldwide.
## National Experiences and Legislative Approaches
### Kenya’s Comprehensive Framework and Challenges
Senator Catherine Mumma from Kenya provided a detailed overview of her country’s approach to digital governance. Kenya has established a comprehensive legal framework, including the Computer Misuse and Cybercrimes Act, the Data Protection Act, and the Media Council Act. However, she acknowledged significant gaps, particularly in addressing misinformation and disinformation specifically.
“We don’t have a law that specifically addresses misinformation and disinformation,” Mumma explained, noting that “hitting the balance between protection of human rights and regulating, and also allowing innovation to progress unhinged, is something that is beyond legislation, something that sometimes is beyond the politics of the day.”
She described how Kenya faces particular challenges with misinformation and disinformation on social media during electoral periods, which sometimes escalates to violence and ethnic tensions. Mumma emphasized that electoral integrity depends largely on neutral electoral management bodies rather than just technology, highlighting the importance of institutional frameworks alongside technological solutions.
### California’s Pioneering Regulatory Approach
California Assembly Member Rebecca Bauer-Kahn shared her state’s experience as a pioneer in both privacy legislation and AI regulation. She detailed the California Consumer Privacy Act (CCPA) and ongoing efforts to implement watermarking requirements and disclosure laws for AI-generated political content.
Bauer-Kahn focused heavily on technological solutions, emphasizing watermarking technology and device-level authentication for distinguishing reality from AI-generated content. She described California’s push for embedded authentication technology in cameras and requirements for platforms to implement watermarking systems.
She acknowledged constitutional constraints, noting that “constitutional protections of free speech create challenges in regulating misinformation while requiring creative solutions like disclosure requirements.” Her approach emphasized “technology for good,” advocating for increased funding for academic institutions to compete with large AI companies and ensure democratic alternatives to profit-driven technological development.
### Norway’s Media-Centric Strategy
Grunde Almeland from Norway presented his country’s approach, which centers on strengthening independent media organizations as a crucial countermeasure to misinformation. He detailed Norway’s legislative measures that prevent media owners from interfering with editorial decisions and provide extensive public funding for media organizations.
“Truth is becoming less relevant,” Almeland observed, explaining that AI-powered content creation enables people to remain in confirmation bias bubbles where they engage only with information that confirms their existing beliefs. He argued that this makes it “hard and harder to pierce with factual debate and true, well, facts.”
Almeland’s perspective was notably pragmatic, arguing that “most things are already quite heavily legislated” and that international cooperation is often more important than creating new legislation. He referenced an expert group on AI and elections and emphasized providing people with fundamental information to make their own decisions rather than making judgments for them.
### Barbados’ Transparent Parliamentary Process
Marsha Caddle from Barbados shared her country’s experience with implementing cybercrime legislation through transparent parliamentary processes that included extensive citizen input. She described Barbados’s transparency measures, including broadcasting parliamentary committee meetings and Prime Minister speeches.
Caddle provided a compelling example of the real-world impact of deepfakes: “The deepfake was about the prime minister saying something in relation to another major world power. Now that has the potential, especially in this current global political environment, to completely put at risk a lot of what a country is doing with respect to policy and global engagement.”
She emphasized the responsibility of platforms to implement better verification methods while balancing accessibility concerns, and called for building local tech ecosystems that can create tools to fight misinformation while promoting innovation.
### Central Asian Perspectives
Zafar Alizoda from Tajikistan highlighted challenges facing Central Asian countries, particularly how global platforms apply different policies to different regions. He noted that while EU citizens benefit from GDPR protections, developing countries often lack the same priority in platform policies, creating unequal protection standards globally.
His intervention highlighted the unequal treatment of different regions by global technology platforms and the need for more equitable international standards.
## Critical Interventions and Broader Perspectives
### Addressing Technological Inequality
A significant intervention came from John K.J. Kiarie, a Member of Parliament from Kenya, who challenged assumptions about technological equality through a post-colonial lens. “To imagine that countries in places like Africa will at one point be at par with Silicon Valley is a fallacy,” Kiarie stated. “To imagine that such advanced economies do not have responsibilities is also wrong.”
He drew explicit parallels to historical exploitation: “what will happen with this AI is that my people will be condemned to digital plantations, just like they were condemned with sugar cane and with coffee and with all these other things that happened in slave trade.”
This intervention prompted discussion about the responsibilities of advanced economies and technology companies, with speakers acknowledging the need for more equitable approaches to global digital governance.
### Audience Engagement and Practical Concerns
The session included substantial audience participation, with questions covering electronic voting security, financial scams targeting vulnerable populations, and age verification challenges. These interventions highlighted practical implementation challenges beyond the policy frameworks discussed by panelists.
Questions about children’s rights and access to information in the context of age restrictions revealed tensions between protection and access that remain unresolved in many jurisdictions.
## Areas of Common Ground
Despite representing diverse jurisdictions, participants found common ground on several key principles:
### International Cooperation
Speakers consistently emphasized that digital governance challenges require coordinated international responses. Almeland suggested that IGF could serve as a platform for developing shared rules, while Mumma described how African parliamentarians have formed regional caucuses to share experiences and develop common approaches.
### Supporting Independent Media and Verification
There was broad agreement on the importance of independent media and verification technologies, though speakers proposed different approaches. The discussion covered various verification methods, from technological solutions like watermarking to institutional approaches focused on editorial independence.
### Balanced Approaches
All speakers emphasized the need to balance protection of democratic processes with preservation of fundamental rights like freedom of expression, though they acknowledged this remains challenging in practice.
## Ongoing Challenges and Future Directions
The discussion identified several unresolved challenges:
### Implementation Gaps
Multiple speakers acknowledged that policy development moves slower than technological advancement, creating persistent gaps between emerging challenges and regulatory responses.
### Cross-Border Enforcement
The global nature of digital platforms creates significant enforcement challenges, with existing international cooperation mechanisms often inadequate for addressing sophisticated cross-border digital manipulation.
### Technological Inequality
The discussion highlighted fundamental questions about ensuring equitable access to digital technologies and preventing the reproduction of historical patterns of exploitation in digital forms.
## Practical Outcomes and Commitments
The session produced several concrete commitments:
– Participants agreed to carry IGF 2025 outcomes back to their respective countries to drive policy coherence
– California committed to continuing legislative efforts on watermarking requirements and embedded authentication technology
– African parliamentarians indicated they would continue using regional caucuses to develop common approaches
– Speakers agreed to explore developing codes of conduct for social media platforms
## Conclusion
The discussion demonstrated both the complexity of challenges facing democratic institutions in the digital age and the potential for meaningful international cooperation. While participants represented different political systems and levels of technological development, they found substantial common ground on fundamental principles while acknowledging that implementation must be adapted to local contexts.
The conversation revealed that effective digital governance requires moving beyond purely regulatory approaches to encompass investment in beneficial technologies, strengthening of democratic institutions like independent media, and genuine international cooperation that addresses power imbalances. The parliamentarians’ commitment to continuing engagement through regional and international forums suggests potential for meaningful progress, though significant challenges around enforcement, technological inequality, and the pace of change remain to be addressed.
Session transcript
Nikolis Smith: Good afternoon, everyone, and welcome back. I trust that everyone was able to get a bite to eat and their stomachs are replenished and ready for more IGF 2025. So today’s session, the title of today’s session is Safeguarding Democracy in the Digital Age, Legislative Priorities and Policy Pathways. This session gathers members of parliament from across the globe to discuss how digital technologies are impacting democracy and what legislative and policy actions are being taken to preserve democratic institutions and trust. Again, my name is Nikolis Smith. I’m the founder and president of StratAlliance Global. StratAlliance is a strategic advisory firm supporting public-private partnerships and technology policy engagement. Now, before we call our distinguished panel to the floor, we have to start by recognizing a familiar face that I’m going to call in just a second. He’s been an advocate for the IGF since its existence. Please welcome Mr. Junhua LI, Undersecretary General of the UN Department of Economic and Social Affairs.
Junhua LI: Your Excellencies, distinguished members of parliament, dear colleagues, good afternoon. It is my great pleasure to welcome you all to the parliamentary track of the IGF 2025 in Lillestrøm. As we convene this important meeting, our purpose is very clear: to bring legislators together with all the other stakeholders in shaping digital policies and legislative frameworks to ensure an open, inclusive, and secure Internet for all. Under the overarching theme of the IGF 2025, Building Digital Governance Together, we will focus on the critical need for international digital cooperation to address today's digital challenges. Among the most urgent of these is the dual imperative to protect freedom of expression while combating the rampant spread of misinformation and disinformation. The ability to speak freely, access accurate information, and engage in open online discourse is the bedrock of democratic societies. Yet these fundamental rights are being tested, not only by disinformation and censorship, but also by the rise of powerful technologies like generative AI that further blur the lines between fact and fiction, challenging our very understanding of truth. We face profound challenges, from the false narratives that erode trust in public institutions to the targeted disinformation campaigns that threaten peace and stability. The digital environment demands new approaches that uphold human rights while preserving civic space. At the same time, we must ensure that the responses to these threats do not infringe upon the very freedoms we seek to protect. As members of parliament, your role in navigating this complex terrain is pivotal. You have the authority to craft legislation that safeguards freedom of expression and access to information, promotes media and information literacy, and strengthens the resilience of democratic discourse. You can foster a digital environment where the right to express diverse views is protected and respected, and where reliable, fact-based information is prioritized over manipulation and distortion. This is how we can ensure that innovation and inclusion advance in lockstep with human dignity and safety. By actively engaging in this forum, you are not only contributing to a vital global dialogue on digital policies, but also shaping national frameworks that reflect these shared values. I appeal to and urge all of you to carry the outcomes of our discussions here at IGF 2025 back to your respective parliaments, driving continued momentum and policy coherence at both the national and regional levels. Over the past years, we have seen encouraging progress in expanding parliamentary engagement in national and regional IGFs, from West Africa to the Asia-Pacific. This localization of our global conversation is essential. We are eager to learn from your insights and national experiences and to identify new avenues for collaboration. Let us strengthen this engagement and champion digital governance that respects freedom of expression, addresses information integrity, and supports an open, inclusive, and rights-based digital space. I extend my sincere thanks to the Inter-Parliamentary Union, the Norwegian Parliament, and our partners for their invaluable collaboration on the parliamentary track, and for their commitment to integrating parliamentary voices into UN processes. I wish you a very fruitful exchange and impactful outcomes. Thank you.
Nikolis Smith: Thank you very much, Mr. Li, for those words of encouragement as we go through the challenges of Internet governance. Now, this would not be a proper parliamentary track session if we did not hear from a very respected person that we all know, Mr. Martin Chungong, Secretary General of the Inter-Parliamentary Union and a prominent advocate for resilient democratic institutions. We have a video message from him that we would like to show you now.
Martin Chungong: Mr. Under-Secretary General, Distinguished Parliamentarians and IGF Participants, I have great pleasure in welcoming you to this Parliamentary Roundtable at the 20th Internet Governance Forum. This session provides a unique platform for parliamentarians, policymakers and digital governance experts to build consensus on one of the most pressing challenges, safeguarding democratic institutions in the digital age. At a time when democratic norms face unprecedented pressure and public trust continues to erode, global cooperation on combating misinformation is more crucial than ever. A fragmented approach to information integrity risks undermining the very foundations of democratic discourse and exacerbating the crisis of trust that threatens our societies. The rapid spread of misinformation through digital technologies has fundamentally altered the information landscape in which our democracies operate. Governments struggle to distinguish fact from fiction, electoral processes face manipulation through coordinated disinformation campaigns, and democratic institutions find their legitimacy questioned based on false narratives. The rise of artificial intelligence has fundamentally transformed the misinformation landscape, with deepfakes, AI-generated content and algorithmic amplification creating unprecedented challenges for democratic discourse. Yet, within this challenge lies profound opportunity. By working together across borders and political systems, we can develop common principles that preserve both free expression and democratic integrity. Parliaments as the voice of the people have a pivotal role in ensuring that digital transformation strengthens rather than weakens democratic governance. In our response, we are guided by the Global Digital Compact, an emerging international consensus on information integrity. And while the Global Digital Compact represents an important foundation, there is still much work to transform its vision into effective safeguards for democracy. I encourage all participants to actively engage in these discussions, recognizing that the frameworks we develop today will determine whether democratic institutions emerge stronger from the digital transformation. Together we can ensure that democracy not only survives the digital age, but emerges more resilient, transparent, and responsive to the citizens we serve. Thank you.
Nikolis Smith: Okay, thank you Mr. Chungong for those remarks. Now it is my deep honor to introduce our panel that's going to be with us this afternoon. First we have Senator Catherine Mumma from Ghana, I mean, from Kenya. Then we have Rebecca Bauer-Kahn, California Assembly Member, Chair of the Privacy and Consumer Protection Committee. Grunde Almeland from Norway, Member of Parliament. Marsha Caddle from Barbados, also a Member of Parliament and former Minister of Innovation and Technology. Zafar Alizoda from Tajikistan, Member of Parliament as well. I'd like to welcome them to the stage. Full disclosure, everyone, I made the first mistake, said that one of our first speakers is actually from Ghana, but she's from Kenya. My apologies. Wanted to get that out there first and foremost. Okay, so here we are. We're back. This is the parliamentary track session. We have a lot to talk about over this next hour. So what we're going to do is we're going to have our distinguished panelists here. We'll go through a series of questions. We will also leave time for the audience to ask questions, because that's very important. And then we'll have some closing remarks as well. So let me start first with the host of this year's IGF, Grunde Almeland. Norway recently concluded the work of an expert group on AI and elections. What are the biggest challenges that you see, Grunde, facing Norwegian democracy? And what exactly are you doing in parliament? Because when we think of AI, it's at every intersection that we see, not only here at the IGF but in other bodies. But I know that it's very important for Norway and the parliament. So if you can just kind of enlighten us on where you are at this point.
Grunde Almeland: Of course, first of all, it's an honor to host this event here, and it's an honor for all of us in Parliament as well that this event is taking place here in our country. But to your question, I think what worries me the most is one of the key findings in the report you're referencing, and that is that truth is becoming less relevant. The report, which went through all these different elections in 2024 and how AI is superpowering content, creating so much more content for people to engage with, found that truth is becoming less and less important, because what you engage with, what you look at, is content that is already confirming your held beliefs and is kind of helping you stay in this comfortable bubble that is harder and harder to pierce with factual debate and true, well, facts, so to say. And I don't want to be all doom and gloom, because there is a lot that we can do. They look at a lot of different measures in that report: how you can build competency, how to implement things in schools, how you should advance research. But one of the key measures is supporting and strengthening independent media organizations. And I think this is the measure that I want to focus on in the beginning now, because it is such an important measure in order to have something that can combat this reality that is being created in a lot of different bubbles. And there is such a connection between trust in us as politicians and people having access to true information, and having access to media or content that is edited by a professional, independent, edited media, so to say, so that people know that what we are doing as politicians is being checked, that we are being transparent about what we are doing. And this is where the media comes in. And I think for Norway, independent media has been an important political issue across the aisle for a lot of years. I'm very happy to say that we are number one on the press freedom index, and it's partially because of what we are doing in parliament, but of course hugely because of what the reporters are doing every day in their work. But looking at what parliaments can do, it starts with a strong legislative foundation. We have an act that ensures editorial independence, that ensures that owners of a newspaper cannot go in and challenge what the editor or the journalists are reporting on, making sure that we as politicians, even though we do allocate a lot of funds to the media, are not able to influence independent decisions on what is being reported. No owner of a newspaper can require to see a journalist's work before it's published. These kinds of legislative measures are really important to have a strong foundation. And then comes funding, and we do quite extensively fund the media in Norway. I think this is very important in order to have not only the national newspapers, which would be there and could thrive in almost any kind of society, but also those small local news outlets that can check what the politicians and staff are doing at a local level. And I think having this kind of built-up media system ensures that people know that they can access information on what we're actually doing.
And there is a lot to be said for this. We have a lot to work on in order to become more transparent, especially when you see a shift in how we communicate as politicians, going from the simpler days of writing letters to each other to all these different channels of technology; there is a lot to be done. But I think this is a good starting point, and I'll end on this note: the report also highlights one last thing, and that is that we have to be level-headed and not exaggerate the impact of AI, because exaggerating it and trying to fearmonger as politicians is also a way of giving that kind of misinformation a stronger meaning in itself.
Nikolis Smith: Thank you for those remarks, and you're absolutely right about staying level-headed, because we don't want to approach everything with fear, right? We have to remember that AI is a tool that was invented by humans. But there are benefits to AI, and what we're going to talk about today will speak to that. And so I appreciate those introductory remarks. I want to turn now to you, Senator, my good friend from Kenya. You're very well known in circles on the African continent; you're very active in the regional things that are happening as they relate to the IGF. Can you take us through what you're seeing as the emerging threats in Kenya, and what countermeasures you're taking at this point?
Catherine Mumma: Thank you very much. First, to say that Kenya has embraced matters relating to digital technology in a profound way. We recognize that this is where the world is headed on actually all democratic matters, including both politics and development. So Kenya has kind of anticipated this, and I would want to say that we have a good legal framework that currently supports the growth of internet and digital technological advances. We have a very facilitative constitution that protects freedom of expression and the right to access information, but also provides for protection of human rights. It is very strong on human rights. It also provides for protection of consumer rights. As a result, we have a number of laws that actually guide or regulate issues relating to matters of internet and digital technology. We have the Computer Misuse and Cybercrimes Act. We have the Data Protection Act. We have the Media Council Act. We have the Copyright Act that protects intellectual property. And we have the National Cohesion and Integration Act that set up a commission to deal with matters relating to hate speech. But we still have challenges when it comes to misinformation and disinformation using social media. And we don't have a law that specifically addresses misinformation and disinformation, not because the law is somewhere and needs to quickly come; as you will appreciate from the conversations we've had since this morning, hitting the balance between protection of human rights and regulating, and also allowing innovation to progress unhinged, is something that is beyond legislation, something that sometimes is beyond the politics of the day. Because as politicians, a lot of the misuse, the disinformation and misinformation, happens particularly during electoral times. And for us in Kenya, every time is election time. We finish elections today, and the next day we are already campaigning for the next. So there is a lot of disinformation and use of hate speech in our part of the world. We have suffered post-election violence following hate speech that used negative ethnicity, and that's how we came up with the Cohesion Commission. But now, with matters of digital technology, we've had a lot of misinformation used by politically competing groups to deploy demonizing language: misinformation around national policies to try and demonize a government, or information to try and demonize an opposition leader. And it's happening to a stage where it's ending up in violence. And I would want to say that our challenge, really, is how we can regulate that without looking as if we are over-correcting or over-enforcing. There is also the challenge of the possibility of abuse of office, misuse by government of some of its privileges. How would government use, for instance, surveillance of digital content to its own advantage and to abuse the rights of political opponents? So I would say we have a good legislative framework, though not complete enough, in the sense that we still have to find ways of protecting rights, including the rights of children. We have a lot of access for children on the internet that is actually harming their health, including matters relating to pornography and so on.
Now, we would need to think through how, beyond the Kenyan parliament and Kenyan legislation, we would deal with a violator that is situated in another jurisdiction. What kind of conversation can we have in forums like the IGF to ensure that, beyond national legislation, we are able to come up with, say, codes of conduct that would both hold accountable those in charge of these platforms and ensure that the freedom to advance in digital technology remains? There is also, when you're talking about human rights, the need to think beyond the issue of misinformation and disinformation: how do we include more people? In our area, I think one of the things we need to do is have greater investment. Beyond regulation, we need to make financial investment in the necessary public digital infrastructure that would see those in rural areas equally participating in the benefits of the digital space and technology, and see more women participating in this space, and more other vulnerable and minority groups participating in this space. So as a country, I believe that beyond protecting against disinformation, there is also the issue of inclusion, which is a human rights issue that we need to look at. So as we discuss this issue, beyond just discussing regulation, we need to discuss how best to invest more in order for more people to participate in this space.
Nikolis Smith: Well, let me just say, hats off to all the work that you're doing in Kenya, because you listed a long list of laws that you've been able to implement, so that's progress, right? Obviously, you made the point clear that there's still more progress to be made, but I think that Kenya is going in the right direction, and I commend you for that. Thank you. On this same topic, I want to move now to Assemblymember Rebecca Bauer-Kahn of California. I lived in California as a kid as well, so there's a little bit of partiality here. But California has been very active in this space, probably more active than other states in the country. So tell us, what are the approaches now? Knowing what you did back in 2018 when the CCPA was passed, and looking now into the future, where are we going? And when we think about regulatory approaches, what's being explored to ensure information integrity? Because now, as the Senator mentioned, it's elections, right? We just got through one election, and there are going to be more on the horizon. So in terms of integrity, where do we go now?
Rebecca Bauer-Kahn: Well, thank you so much for this conversation. It's my first time at the IGF, and I have to say that one of my takeaways so far this morning is that, despite the fact that all of our jurisdictions are so different, we're really all struggling with this same issue of information integrity. And for those that don't know, the law that was cited is our privacy law. We were the first state in the United States to pass a privacy-protective piece of legislation, shortly after the European Union passed their privacy laws. Some states have followed, but we're still not nearly as protective as the European Union. And I should ground this in being from California: we are home to 32 of the top 50 AI companies. We are home to all of the major social media companies. These are the people I represent. They make this technology. They proliferate it to the world. And with that, I think we feel a great responsibility, and sitting there this morning, listening to what is happening across the world as a result of some of this, it's intense what these companies are doing to change the global ecosystem. And then there's the federal government. I have long believed the federal government is in the best position to regulate these technologies for us as a country, but they won't, and they don't. And so the states are taking it upon themselves to protect our constituents and to try to push these companies in the direction of responsibility. But as I'm sure many people in the room are aware, we too have constitutional protections of freedom of speech. Our constitutional protections say that we may not stop people from speaking. They also say we cannot force people to speak, which is an interesting dynamic, because one of the ways that we have tried to combat mis- and disinformation, as a result of our First Amendment, our protection of freedom of speech, is to require more speech: to say you have to disclose when you're using AI in a political advertisement, so people know that they're seeing something that's AI-generated. That's tied up in the courts right now, because the courts are seeing it as forced speech. And so we have a very complicated dynamic of how we get at this issue of mis- and disinformation when we have such strong protections around speech. But that's one way, and so we continue to try to do that. We've passed legislation that requires those disclosures, and that requires the platforms to take down serious misinformation in the political context, although political speech is even more protected than your average speech in America, so that is a really challenging thing to do. And so the next step we're taking is trying to push forward on watermarking, which I know is something that the European Union has pushed for. This ability to distinguish reality from fiction, I think, is fundamental to protecting our democracies. And so watermarking, and the technology that will go along with it, is critical. And with the EU pushing on watermarking and California pushing, you know, we are the fourth largest economy in the world, we have a lot of tech companies in our backyard, can we really make sure that technology comes to fruition so that around the world we can all require it? We can all say we need watermarking. Right now the technology is not yet where we want it to be, but if it gets there, maybe it will give constituents the ability to know what is real and what is not, and I think that would be game-changing.
Just a few hours ago, someone asked me about what's happening in California. Many have seen in the worldwide news what is happening in my home state as it relates to our friction with the federal government right now, and one of the things we've faced is massive disinformation. So many deepfakes about what is happening on the streets of Los Angeles. I was there just a week ago. It is incredibly peaceful. That is not what you're seeing on the social media sites, because of all the deepfakes that are being generated, and when people cannot tell that from reality, it leads to serious outcomes in our elections and in our society, and we have to do more. And so California is going to continue to push, although I will say that right now the federal government is moving what would be a 10-year ban on state enforcement of artificial intelligence laws, which would stop most of California's efforts. So when I talk about friction between the federal government and California right now, I can't overstate it.
Nikolis Smith: Thank you for that. As a former federal employee, I'm not going to start any more friction right now, especially since we're both from the same state. I do want to turn, though, to another region of the world, and for everybody here who does not speak Russian, please use your headsets. I want to turn now to our friend from Tajikistan. Mr. Alizoda, on the Central Asian response to information manipulation and the steps that are being taken: can you talk to us about the measures you are taking to build that type of institutional resilience, in light of everything that you've heard so far?
Zafar Alizoda: Thank you, Nikolis. I would like to provide some information on information stability and the integrity of information in Central Asian countries, including the protection of personal data, which is one of the most important tasks in the application of digital technology in Central Asian countries. Each country has its own laws that regulate the collection, storage, and use of personal data. In Central Asian countries, personal data falls under the broader concept of the right to protection of private life and privacy. According to the legislation of these countries, privacy is a personal right. Responsibility for violations related to personal data depends on the specific circumstances, as well as on the legislative norms and rules in the country. Sensitive personal data is a category of personal data that relates to the most personal and confidential information about a person. It can include data such as race or ethnic origin, political beliefs, religious and philosophical beliefs, professional affiliation, medical information, biometric data, information on finances and credit history, and so on. Personal data of this category requires special protection and processing, as its disclosure or misuse can lead to discrimination, stigmatization, and other negative consequences for a person. The formation of legislation that comprehensively regulates relations connected with the protection of personal data is one of the most complex tasks of the state. Currently, the Central Asian countries are actively developing a legal institute for the protection of personal data. However, it should be noted that the legislation still leaves many issues in the field of personal data protection unregulated. Among them are the absence, in national legislation on the protection of personal data, of measures to react quickly when personal data is leaked and to minimize the consequences of such a leak, as well as the absence of an obligation on the owner or operator to notify the authorities. The provision of digital privacy is a complex problem affecting the rights and legal interests of the public and private sectors. According to the assessment of national experts, it is necessary to revise the law on the protection of personal data and to make adjustments to it that reflect the modern application of advanced technologies. The development of the digital economy, even with all its good intentions, should not have as a consequence the abandonment of the protection of human rights and freedoms. Any current or proposed business practice should envisage an assessment of its consequences for the inviolability of private life, so that it is possible to consider information on how policy and technology mitigate the risks to private life. In parallel with European law on the protection of personal data, as our colleague has already said, legislators in Central Asian countries should consider the possibility of introducing a legal mechanism for risk assessment along the lines of the General Data Protection Regulation, the GDPR. It should be noted that the data protection impact assessment (DPIA) procedure is not always applied, even when the processing of data is associated with a high risk of violating the rights and legal interests of citizens.
For the conscientious and effective development of the technologies in use, society should have modern, effective legal instruments and independent oversight of compliance with the human right to inviolability of private life and confidentiality of personal data. It should also be noted that, at the legislative level, there is an important role for the promotion of information policy and of a strategy of legal measures that make it possible to increase the integrity and balance of the information space. The Central Asian countries are member states of the Inter-Parliamentary Assembly of the CIS and jointly develop proposals of the national parliaments of the cooperating countries on issues of mutual interest in this direction. Thank you.
Nikolis Smith: As you mentioned, the first kind of task, and I think our panelists would agree, is that domestically, as you're going through a process, you have to have something with which to formulate a risk assessment. It's key. I think that using the IGF to discuss these issues is a great opportunity, so hopefully you'll be able to take what you hear here back to Tajikistan and continue those efforts. Last but not least, from the island of Barbados, Miss Marsha Caddle: what steps are underway in Barbados to rebuild public trust in elections and the democratic system?
Marsha Caddle: Thanks. That’s a big question.
Nikolis Smith: It’s loaded.
Marsha Caddle: Let me just set some context by saying that Barbados is a small island in the Caribbean, with a population of 270,000 and declining, which is another of the existential threats that we're facing: a falling and aging population. Barbados has always had, since independence, a history of stable, free and fair elections and a high degree of political stability. I think it's important to set that context, because the circumstances in which we are talking about these issues of maintaining democracy and democratic participation are against the backdrop of expectations of stability and truth. The other thing that we have in Barbados is extremely high internet and digital penetration. You'll see our numbers say something like 114% mobile penetration, so we are kind of over the maximum, right? People have very immediate access to information and high expectations about that information. So then the question becomes about not just access, but meaningful access and use, as we talk about these issues. Just before I got on a plane to come here, the office of the prime minister in Barbados had to urgently push out warnings about a deepfake that had just been circulating. And I want to share the example because it highlights not just the domestic issues when it comes to democratic participation and trust, but also the potential to destabilize global relations, international relations, and foreign policy. The deepfake was about the prime minister saying something in relation to another major world power, saying untruthfully that Barbados was taking a certain diplomatic stance with respect to this country. Now that has the potential, especially in this current global political environment, to completely put at risk a lot of what a country is doing with respect to policy and global engagement. So we're not just talking about domestic trust; we're talking about international trust and a country's global position in the world. And so I wanted to say a little bit, in answer to your question, about what we are doing. One of the things I think is very important is this issue of democratic literacy: how do people interact with policy conversations, with electoral processes? One of the things that we've tried to do, simply, is to push out as much truth and transparency as we can, to start to get people accustomed to an environment of truth and evidence, because that, even before we started talking about technology, is perhaps something that hasn't been as strong as we would like. So for example, we have these joint select committees of parliament that consider issues and legislation having to do with governance, with social issues, and so on before they're passed, and they're broadcast. There are very few things that the Prime Minister says, speeches, engagements, that are not broadcast, either in real time or recorded and shared. And why? Because we want to be able to get people used to the idea of truth: this is the original source, and this is where you can find it. You can find it on these channels; you can find it in these ways. The other thing that we are working on is investing in a tech ecosystem that can balance that, building tools that fight against misinformation. There are others who are investing very heavily in misinformation; what can we do to invest in tech creators who are going to combat that with things that promote truth?
One of the things, though, that we think is very important is encouraging platforms to return to more robust methods of verification. We think that that is critical. And I'll say very quickly, I think it was Rebecca who said earlier that political speech is very protected in general in your jurisdiction. The interesting thing is that while political speech is protected, and while I can sit in one jurisdiction, in a country like Barbados, and see things proliferating about political actors in my space, on the other hand, as a political actor on a social media platform sitting in Barbados, I am not trusted to generate content. As soon as I try to generate content as a politician, I'm told, well, actually, you're a politician and we're not sure that you should be able to say these things on our platform. So in the question of being able to combat misinformation, I'm also constrained by some of the rules that platforms generate in other parts of the world but that impact the way I can talk to my constituents, the way that they operate. So I think that these are some of the ways that we've started to really try and encourage an environment of real evidence and truth. There is legislation: I was the minister who brought cybercrime legislation. We took it to the Joint Select Committee. We heard evidence. We heard pushback. We heard concerns on human rights from citizens, and we amended the legislation. And so I think that this is a healthy way to get people into the conversation and make sure that we realize that democratic participation and adherence to truth and evidence is everybody's concern.
Nikolis Smith: Thank you very much. So we've talked through an array of different areas in terms of what we're doing within our governments and the challenges that we're facing. What else? We'll start back with you, Senator: what are the other gaps that we're missing? I think there's room to recognize the existing challenges, but are there areas that we're not focusing on, maybe, that could help us bring this together? And then the second piece of that is on the non-legislative side: are there areas where we can collaborate? Obviously the IGF is a great platform, but is there more we could be doing on the international front, since we tend to look at this through a domestic lens, to make sure that we come together? So I'll start with you.
Catherine Mumma: Thank you very much. Now, because we are parliamentarians, I have noticed that we tend to focus on the impact of technology in the political space. But I would want us to think broader and imagine the innovation in the health sector, for instance, with digital technology. What will telemedicine look like? And how should parliament anticipate the possibilities of human rights violations with the advancement of digital technology in the health sector? And therefore, what would that law look like? So we should not be fixated on one particular law that would deal with matters of digital technology. We need to think broadly: would we need to look at the laws in the health sector? Do we need to tweak something in the health sector, in the water sector, in the other sectors? The dangers that we are seeing now around democratic spaces could actually extend and have even more profound implications for the common person. So we need to think broadly around that and agree on how best to deal with this. So I think laws on digital technology are not about a particular piece of legislation. It's cross-border, and we need to think beyond this and allow our professionals in all sectors to help us think around this. Now, when it comes to thinking about what to do internationally and nationally, first, I thank the IGF for the proactive way in which it is moving this agenda and getting us to learn and discuss more within its forums. In the Africa space, African parliamentarians have actually taken the liberty to form African Parliamentary Caucuses, Africa-wide, in West Africa, in East Africa, so that we can compare notes and know that what happens in Tanzania will affect us in Kenya, will affect those in Malawi, will affect those in Nigeria. So we need to start borrowing from each other and listening to each other and learning to grow on this. And beyond legislation, we need to find out how the mechanisms we have in place could be built upon. Somebody in the morning talked about a mechanism for auditing information. What would that look like? Kenya has the Data Protection Commissioner. It also has the Media Council. Should we add to their mandates some more clauses that will help us monitor the area better? Do we need maybe an African Union or East African Community mechanism that will help us check the situation further? So there are all these opportunities. Thank you.
Nikolis Smith: Thank you. We have about 10 minutes left of this section, then we’re gonna go into some Q&A. So I’m just gonna go down the line. Rebecca, I’ll turn to you next.
Rebecca Bauer-Kahn: So I think that, where I'm sitting, one of the things that is missing is technology for good. We see technology in the hands of very few players right now that are, for better or worse, profit-driven. And how do we push technology to be the solution in the technology age? I think that's something that we really need to be working on, both locally in California and globally. And so part of what we're trying to figure out is how we can fund that. How can we put more money into our academic institutions to have the compute power to compete with the largest AI companies? Right now, the only companies able to build large language models are these very well-funded companies, and our academics need to be in that space. Our civil society needs to be playing in this space to create technology for good. There's one example for us in the United States, on the intellectual property front: the University of Chicago created an AI tool that allows you to put something into your copyrighted material so that if a model is trained on your material, the model will actually refuse it, if you will. And that's not a legal protection; it's a technology protection. And those are the kinds of tools that I think we need to really allow us to battle against, as you said, the misinformation and disinformation ecosystem that is growing and that can, I believe, be solved in part by better technology for good, and we have a real role in doing so. And part of that: the reason we believe we're home to so many of these companies is our academic institutions, is the training that they provide. And if we're investing in technology for good at our academic institutions, are we then putting people into the world to create companies for good? How do we create that ecosystem, I think, is really important. And then I'll say, on the global landscape, I think it is this kind of collaboration. It is understanding that we're all trying different things. We're all out there in a world that was created over the last decade, trying to find solutions to very new problems. And as has been said many times today, this technology is moving faster than the policy. And so to the extent that we can listen to each other and hear what is working in your jurisdiction, and how I can bring it home to help the people where I live, I think we're better off, because we have to move fast in order to protect our societies, and the only way to do that is in collaboration and partnership.
Nikolis Smith: Thank you, Rebecca. Grunde?
Grunde Almeland: Well, I think I'll pick up on technology for good. As you were talking about democratic literacy, we know very well that being able to adapt digitally does not necessarily translate into democratic literacy or media literacy. But one of the reasons I wanted to focus on independent media in the beginning is that the example of Norway is also an example of how digital adaptability enabled us to still have quite a high level of trust in media, and how that technology was actually able to create a foundation for people to access independent media and to have the high level of media literacy that we are very fortunate to have. I think it's such a good example because, in the months and years ahead, using that same kind of inspiration, taking technology as a tool to enable more transparency and making sure that we adopt technology that strengthens the kinds of institutions we want to uphold, is such an opportunity for strengthening democracy. It is easy to point at all the challenges, because they are so evident and apparent to us, but there are also big possibilities in having tools that can create more transparency. Just as a small example to end on: we get a lot of complaints from journalists in Norway about how much time we use to review their applications for access to information in government institutions. It's a small example of how you can simplify a lot of these processes as well, making the whole process simpler, more easily accessible for journalists, and more transparent for the public.
Nikolis Smith: Thank you.
Marsha Caddle: Yes. So I think that creating this culture of evidence, so that people ask themselves, should I propagate something if I cannot show that it is true, is something that has certainly helped. But it is also about investing, certainly in global majority countries, in the kind of learning that will allow our people to create tools that they find useful, that they generate themselves, and that they are able to trust. One of the ways we started in Barbados, when I was minister of technology, is to train people in things like data analytics and data science. And this is not just in formal academic institutions, but through partnering with companies. There's one, for example, that does a lot of work on the continent of Africa called Zindi that we're working with, so that people can learn some of these skills and be able to create tools and play in that space. We think that there is an AI value chain, which means that for some countries in the global majority it may not be practical to say we are going to build these large language models, but we can create at some part of that value chain and start to build some of these technology tools. So I do think that the culture of evidence, to support strong legislation and to establish sources of truth that people see they can trust, is a part of the puzzle as well.
Nikolis Smith: Thank you. Headsets on for interpretation, please.
Zafar Alizoda: I would like to add that the policy of the global platforms differs between countries and regions. If, for example, the data of EU citizens is protected by the GDPR, then small developing countries in the regions of Asia are deprived of such protection. For example, the legislation of Tajikistan, although it is close to the GDPR, still leaves many issues unresolved, such as cross-border data transfer and the transfer of data to third parties in order to improve a company's products. Tajikistan's legislation can be improved, and the legislators of Tajikistan are always working on these issues. However, there is the question of practice: it is difficult to enforce the law and apply regulatory levers, given how limited a market we represent for the global platforms. In this regard, I think it is necessary for the global platforms to improve their policies for all users, regardless of the user's country.
Nikolis Smith: You can go to the microphone. Make sure to state your name and affiliation, please.
Audience: Thank you. Hi, I will make my question in Spanish, so put the headphones on. I am with the Ethics Tribunal of the Peruvian Press Council. In electoral processes there is a basic problem, which is the massive push towards digital voting. We have already had a couple of problems in Latin America: one with data transmission, not in the voting itself but in the transmission of results, and the other in the electoral processes themselves, as was the case in Bolivia and Venezuela a few years ago. There is a movement of retrogression, with electronic voting being declared unconstitutional in various countries, not only in the Latin American region but around the world. From the legislative point of view of these countries, how can we avoid the misuse of these electronic systems, especially electronic voting, so that democratic processes are not affected? And this is in addition to what happens before elections, as in Romania, where disinformation, or misinformation if you want to put it that way, was used to affect an electoral process. So we have, on the one hand, the problems prior to electoral processes and, on the other, the problems of the electoral process itself. Thank you very much. I'm Senator Kenneth Pugh from Chile, South America, and I would like to ask the panel about one issue. We are human beings. Humans have human rights, and that is Article 19 in the chapter on human rights. The problem is that, in order to build confidence, we need to know each other, we need to talk, we need to have a will, and then we will trust. So human beings need to be in contact. How are we going to achieve that in the digital environment, with digital trust, when we are granting one of the most important rights, freedom of expression, to artificial intelligences, which are not human? How are we going to define who has a human identity? It does not simply mean getting an ID provided by the government. How are we going to differentiate humans from non-humans in cyberspace? How will we know if they are minors or not? In real life we can see that it's a young boy or girl, but how will we do it in cyberspace? If you have anything to share, I will be very grateful. Thank you very much.
Nikolis Smith: Thank you. So why don't we pause there and start addressing these before we go through the whole line and I lose track of all the questions. Anybody on stage want to go first?
Marsha Caddle: Yeah, I think Rebecca will end up talking a lot longer, so let me take the last one, about how you differentiate the origin of a certain idea or set of information. Really, this is about intent. I think it is less about origin and more about output; that is what we will end up having to regulate and legislate, because it is going to be very difficult, just as we often cannot see the author behind something now. We may eventually be able to verify, but that takes time, and more and more people are impatient when it comes to information. One of the things we are going to have to concern ourselves with is verification of what is generated, as well as being able to tell where it was generated from: to require that we can see that a particular output used AI, but also to be able to use different pieces of legislation to regulate that output. For example, we've seen cases recently where coercion was used to get young people to do certain things, and this came from an AI actor. What that jurisdiction, and I don't want to mention which country it was, ended up having to look at is: what kind of content, no matter the source, can be allowed to make its way into a space where vulnerable people are present? They began using keyword technology and authentication technology to say, because of the nature of the people in this space, whether young people, children and so on, this content cannot be allowed here. So, as you say, it's going to be more and more difficult to police the difference between the two, but identifying the content and then regulating it, being able to direct the content and the outcome, is going to be more and more the kind of work we have to do.
Nikolis Smith: Thank you for that.
Catherine Mumma: On what I think was the first question, about what Parliament can do about electoral laws and electoral systems where technology is misused, I guess, for rigging elections: my view is that a good electoral system largely depends on the electoral management body, because for fraud to happen, whether with AI or anything else, it usually takes some collusion by people within the electoral management body, whether through the person they procure to carry out the elections or through their own IT systems. Kenya has repeatedly gone to the Supreme Court over the manipulation of the electronic transmission of presidential results, and in the last election, for instance, the electoral management body completely refused to open the servers for an audit of the system that transmitted the results. So I believe it's not so much the technology as it is a corrupt set of minds behind the whole thing, and if electoral management bodies remain neutral, then elections, whether digitally driven or not, will remain credible.
Nikolis Smith: I was looking at Grunde and Rebecca; you are both vying for the microphone, so Rebecca, I'll let you go first.
Rebecca Bauer-Kahn: I just want to start there, because I love that perspective. In the United States, every state runs its own elections, so it's done differently across our whole country. And this is a question not just of whether election integrity is real, but of whether people believe it. Part of what holds our democracy up is an agreed principle that our elections are free and fair, and that has been a challenge. So I think one thing that's critical, personally, is a paper trail: even if you're using a machine, you get a receipt, so there is a way to audit it, which is so critically important. Somebody asked about our watermarking legislation in California. Last year we passed a law, signed by the governor, that will require the platforms to show a watermark within a few years. We did that because we wanted to signal to the innovation economy that California was going to require this; we knew that although the technology isn't there today, if we required it, the brilliant minds out there would create it because there would be a market for it. So I believe that's coming, which is very exciting, I think, for the whole world. This year, we're moving a piece of legislation that would require the devices, so your camera, to have the authentication technology embedded in it. As a legislature, we were actually one of the first entities to use Adobe's technology: every image we took in-house was run through their watermarking technology, so that when we put it out into the world, we could trace it back and prove whether it was a real image or a doctored one. So that technology is coming, and we have seen it in practice. I think it's really exciting, because it is one of the things that will enable us to tell fiction from reality.
Rebecca Bauer-Kahn: There was a question about financial scams; that's something that has come up. We have not moved legislation yet on AI and financial scams, but I think it's so important, and I think the foundation of it is privacy legislation. Part of why financial scams are getting so sophisticated is that there is so much access to information about every single one of us. When they call and say they're your aunt or your uncle and they know your children's names, you fall for that scam in a way you wouldn't if they didn't have that much information about you. So part of it is protecting privacy, which I think is critical. The second piece is making sure that, as you said, this isn't just about AI legislation; it's about legislation. We already have laws that outlaw these types of scams. So how do we say that it's as much a violation of the law whether it's done by a real person or by an AI tool, and make sure that the laws that protect our communities extend to all AI actors? That goes perfectly to the last question, about whether AIs can be treated as humans, and that's an interesting question in the United States, because a court just had to struggle with it for the first time. A young boy in Florida died by suicide after a chatbot told him to take his own life, and his mother has sued the company. The company claimed it had a First Amendment-protected right to speak: the chatbot could say whatever it wanted. And the court said no, chatbots do not have constitutional rights like humans do. That was a huge win. It's one court, but I think it's a step in the right direction toward saying these AI companies are not humans; they are not the same as you and I, and they do not have the rights that we do. Really pushing that forward, I think, will be critical to making sure we have the protections necessary to protect our communities from these AI tools.
Grunde Almeland: I think it's important for us as politicians and legislators to remember that we should not meet this whole new world of technology with panic, believing that we have nothing legislated already, because most things are already quite heavily regulated. Sometimes laws have to be amended, and sometimes we need to come up with new legislation, but most things are already legislated; we just have to see how the technology fits into them. I think this relates to a lot of the different questions put forward here. Talking about scams: AI used in scams, if you look at almost any country's legislative space, falls under what is already in the criminal codes. The real issue is the lack of cooperation between countries in tackling these new challenges. We have a really good example that came out in Norwegian media just a few weeks ago, about a system called Magicat. It's a great piece of journalism, available in English as well; you can Google it. It shows how sophisticated these kinds of scams are, and how ill-prepared Norwegian society was, in this case, to tackle these international scams. So I think international cooperation is often more the answer than coming up with exactly the right new legislation. And just a quick remark on watermarking: we have a good case study in Norway as well. The media landscape in Norway came together to create this technology and is cooperating with the BBC, the New York Times, and a lot of the big media outlets to have good watermarking technology in place and implemented in journalism. But the key component is not the verification check itself; it is making the essential information accessible to people: who took the photo, where it is from, the kind of information that gives people the opportunity to make their own decision about the content, rather than forcing a judgment on them by declaring this is real or this is not real. So this is something for us as politicians to remember too: we need to give people the fundamental information, not always try to decide for them.
Nikolis Smith: So we are running out of time, but I want to see if we can be really efficient with the queue. I know there are some members of Parliament also looking to ask questions. So can we keep the questions short and concrete, making sure they're not too long, so we can get some quick responses before we go into the closing part. Please.
Audience: Assalamu alaikum. I am Mounir Souri from the Kingdom of Bahrain, a member of the Parliament. Can you hear me, or do you want me to speak in English? Is Arabic okay? English would be great. I think it's very important that we have the power to shape legislation in order to protect society. But we say today that it is difficult: technology evolves day by day, while legislation is slow. If we make a law today, the technology will evolve tomorrow, and it will be difficult to keep the law current. My question today is: are there powers other than legislation to protect privacy while we preserve freedom? We are caught between transparency and freedom. If we want to protect society, can we make AI itself control the content, so that we don't depend on humans? AI is producing information and people are misusing it. Can we, as legislators and officials, make the content balanced? Can AI itself protect the content? Thank you. My name is Hugo Carneiro and I'm an MP from Portugal. My questions are these. Social networks should verify news and counter fake news and misinformation, but following the US elections we became aware that Meta, for example, will stop doing that kind of verification. What do you think we should do? Should regulation be the solution in these kinds of cases, or should we trust these big companies to verify information? Second question: a colleague asked before about ID. When we want to open a bank account, for example, even on a cell phone, we have face recognition and we can take a photo of our ID. Should we implement these solutions when someone wants to open an account on a social network? There are a lot of fake profiles, and I don't see another solution unless we take a stronger step on this. And a last question: the French president, Emmanuel Macron, announced that he will probably enact laws to prohibit youngsters under 15 years old from having a social network account. There is a lot of misinformation and fake news influencing the political views our youngsters are forming. Do you believe that a solution, or a path forward, would be to prohibit youngsters from having social network accounts? Thank you.
Nikolis Smith: So we're going to have time for one question from each side, and we'll do our best to respond in a kind of lightning round. I wish we had more time, but one more question from each side. And I would encourage you: there will be a reception this evening at the conclusion of this, where you'll be able to engage with some of the MPs if you don't get a chance to ask your question here now. So one more on each side, thank you.
Audience: Thank you very much. John K.J. Kiarie, Member of Parliament in Kenya, and this is to all the people on the panel, including the moderator. To your mind, what are practical, pragmatic steps that the IGF can take to place responsibilities not only on big tech developers but also on advanced economies, jurisdictions so advanced that they are feeding all the other countries with their technology, such that we have so much technological dumping? To imagine that countries in places like Africa will at some point be at par with Silicon Valley is a fallacy. To imagine that such advanced economies do not have responsibilities is also wrong. We are here at the IGF; it takes a lot for some of these countries even to be represented at the IGF, and we have real cases of technological dumping that do not respect even basic human rights. In Kenya, for example, we had a company walk into the country and start collecting biometric data, scanning people's irises and inducing vulnerable populations with tokens, in the name of Worldcoin, and the behaviors they brought to Kenya are things they would never do in their own countries, even under the existing laws. But whenever countries in the southern hemisphere raise this, we are told to go and develop our own laws. So I'm asking: is the IGF practically able to rein this in, so that we can place responsibilities not only on big tech but also on the countries that develop this technology, to the extent that they carry the responsibility to bring everyone along? Because as we speak today, even as we talk about the Internet, nothing about the Internet is manufactured in Africa. We do not manufacture the fiber optic cables. We do not manufacture the devices. We do not have a single satellite in orbit. To imagine that we are all at the same table, working on the same laws at the IGF and on the same conventions, would be a fallacy. So I am asking practically: is the IGF able to do what the world did at the onset of nuclear weapons? Because that was fast. We are here at the 20th IGF, but with nuclear, the bomb was invented in 1945, and by 1957 there were already treaties putting responsibilities on the developers and the inventors of that technology. When will that happen for the Internet? When will that happen for social media? When will that happen for big tech? When will that happen for the countries that are so advanced? Because if we do not do anything right now, we will end up exacerbating the divisions and disparities that exist, and what will happen with this AI is that my people will be condemned to digital plantations, just as they were condemned with sugar cane and coffee and all the other things that happened during the slave trade. To imagine that we will all simply work together as a world is a big fallacy. What practical examples can we take out of the IGF? What practical actions can we take to put responsibility where it belongs? Because to imagine that we are all okay on that front would be a big fallacy. Thank you very much.
Nikolis Smith: Thank you very much. For the last questions, we'll do something a little different: instead of having everybody respond to each one, why don't we package the responses into our closing remarks? We'll start with Mr. Alizoda because he didn't get a chance to respond earlier, and that way we can still finish on time. One last question? Yes. One second.
Audience: I'll try to be very brief.
Nikolis Smith: Thank you.
Audience: And a concrete question, because I would be very interested in a Kenyan perspective on this issue. You said earlier that you hope the IGF will help facilitate a code of conduct for social media organisations and platforms. I'm wondering if you really think that a voluntary code of conduct for social media organisations will be enough, or whether we rather need standards and regulation, plus alternative platforms that actually work for democracy instead of undermining it, and for freedom of speech instead of restricting it. What is the Kenyan perspective on these kinds of new social media platforms: who could build them, how, and what would you want from them? My name is Anna Luhmann. I'm a member of parliament from Germany. Thank you.
Nikolis Smith: Thank you. Thank you, everybody. As I said, we'll start with Mr. Alizoda. I know you didn't get a chance to respond in the first round, so as you think about your closing remarks, you can try to contextualise them in a way that responds to some of these questions from the audience. Thank you.
Zafar Alizoda: Thank you, Nikolis. I agree with the proposal of the Kenyan representative. Indeed, the efforts of parliamentarians and experts from all countries in discussing the protection of information confirm once again that no country can stand outside this matter. As equal participants in these interrelationships on the Internet, we must respect common conditions and measures to regulate and preserve the integrity of information as a whole, and also harmonize the regulations of each country with global principles and standards. Thank you.
Marsha Caddle: And this is really a kind of repeat of that conversation, right? We saw the dumping, in that case of greenhouse gas emissions; we experienced it, we suffered from it, and then we started slowly to try to build a global system that would see the polluting countries start to invest in adaptation and mitigation, and it has been a long, arduous process that has still not settled. So for me, we have to learn the lessons of nuclear regulation and the lessons of climate that we are still living through now, and be able to say: these are the things that we require of major tech countries and major tech companies. For example, major social network companies and creators can benefit hugely without ever physically coming to a country to collect data; they can benefit just from pushing information into a jurisdiction where there is little control. So I agree with you. I cannot speak to whether the IGF is that place, but I do think we have to learn the lessons of the last three decades in climate and in other areas, and rather than letting it take another three decades to come to a global compact about accountability, that needs to happen now. We already have models for it. We already know what it looks like. It is just time to act in a global way.
Grunde Almeland: Well, the IGF certainly can be a space where we are able to find this kind of common ground, and I really hope it will be, because this is such an international question. Trying to regulate this in our national jurisdictions just creates a lot of Swiss cheese for these companies, and while Swiss cheese is delicious, it's not always good for you. I really do believe that we need to find spaces where we work together internationally to find common ground, a common set of rules. And there are a lot of challenges; I think a lot of the questions point to those challenges, particularly on verification. Having a set rule on verification can also exclude vulnerable people in vulnerable situations, and making sure that people in such situations are actually able to speak up is also important. Requiring age verification for children to access networks, while it is an active discussion in Norway as well, still carries dilemmas: children also have fundamental rights to access information and to participate. They are not small people to be put in a room until they become adults; they should be an active part of society. These are all dilemmas we have to navigate, while we still try to protect, but not overprotect.
Nikolis Smith: Thank you so much.
Rebecca Bauer-Kahn: Yeah, I think what is being said is really at the crux of all of it, which is global cooperation. We talk about so much of what the world has done, and we've gone in different directions. I don't know if we have MPs from Australia here, but they have banned social media for young people. How is that going for them? Is it having the problems you describe? I think we can learn so much from one another and really move the ball forward, because, as the gentleman from Bahrain said, policy moves slower than technology, and only through that collaboration can we really move forward in a way that protects our communities. There was a question about privacy versus some of these society-protecting tools, and I think we can figure this out together. We're moving a piece of legislation this year that would require devices to be able to verify your identity so that you don't have to share it with the platforms; there is a technological way to do that in a privacy-protective manner. If we do that together, I think we can move the world forward, avoid Swiss cheese, and have societies that are protected from some of the ills we've talked about today. I want to acknowledge that I live in one of the jurisdictions responsible for these tech companies, and the weight of that is real. You can also imagine how it affects our electoral politics, especially in a country where you can buy elections, and that's perfectly legal now in America; we live in a very complicated political dynamic. But I will say that this topic of technology and its impact on society is becoming one of the most agreed-upon political topics, because we are living in a reality where we see the downsides, whether for our children and how their mental health is being affected, or for our democracies and truth. So I'm hopeful that even in the complicated country and democracy I live in, we'll be able to move forward with solutions that protect not just our own people, but the world. Thank you.
Nikolis Smith: Thank you.
Catherine Mumma: Now, codes of ethics are the first tool of self-regulation for a lot of professionals, professional associations, and organizations. So I think the first self-regulation opportunity lies in codes of ethics. And since big tech may not necessarily belong to an association that we can ask, as an association, to come up with a code of ethics, I'm thinking the IGF could be a good space to initiate this. I would suggest you look at the IPU resolution on AI, and also at the IPU draft code of ethics on science and technology, which might offer some suggestions on what could happen. That would extend an opportunity to the big tech companies to realize that they may have crossed the lines in terms of freedoms, and that what they are doing may be harming very vulnerable populations, especially in countries that may not be as enabled. And that brings me to the point my colleague KJ just raised, about the more developed countries and the big tech companies taking responsibility for the negative sides of tech in developing countries. I would first say that we need to recognize that the international protective mechanism around human rights is breaking, as far as I'm concerned. We've seen what has happened in Gaza, and we are all helpless, or the world seems to be helpless, as many human rights violations happen, not just in Gaza but in other places, in Sudan, in Ukraine, and elsewhere. I would first query whether we need to reimagine what international cooperation was supposed to be, and whether it can be rethought to truly provide the protections it is supposed to provide. In the meantime, as small countries, we may need to do what we have to do. One of the things I took from the morning session is that, in protection of our vulnerable populations, we must start attaching conditionalities to the licenses we grant, until the tech companies realize that ruining our young people by facilitating access to what they would not allow in their own countries is a violation of human rights. That facilitating or enabling the distortion of elections in our countries, so that we end up with the wrong governments, is a violation of our rights. So even as we place responsibility with the United Nations and the international community, we must start looking inward and determine the incremental arrangements we will have with these companies, ensuring that, for the very vulnerable, we attach conditionalities to the licenses before we issue them to the big tech companies. Thank you very much.
Nikolis Smith: Wow, I'm seeing the flashing red; we should be done already. I just want to say, can we give a round of applause to this great panel and this discussion? Thank you all. Two things I want to underscore. Number one: this is still day zero; day one is actually tomorrow, so this track will continue tomorrow morning. Make sure you're looking at the schedule. You'll have more opportunities throughout the week to talk to some of the people on stage through other sessions, so take advantage of that, especially those who were queuing and didn't get a chance to ask their questions. One more thing before we close, for all members of Parliament going to the event and reception at the Parliament this evening: when we leave here, exit out, go to the left and down, and there will be some folks waiting to take you. The bus that will escort you leaves, I believe, at 1600, so 4 p.m. If you have any questions, you can come talk to some of us as we come off the stage. But again, thank you so much, and enjoy the rest of your week. Thank you.
Grunde Almeland
Speech speed
155 words per minute
Speech length
1737 words
Speech time
671 seconds
Truth is becoming less relevant as AI superpowers content creation, leading people to engage only with information confirming their existing beliefs
Explanation
Almeland argues that AI is creating so much more content that people primarily engage with information that confirms their held beliefs, making them stay in comfortable bubbles that are increasingly difficult to pierce with factual debate and true facts. This represents one of the key findings from a report on AI and elections that examined different elections in 2024.
Evidence
Referenced a report that went through different elections in 2024 and analyzed how AI is superpowering content creation
Major discussion point
Misinformation and Disinformation Challenges
Topics
Human rights | Sociocultural
Independent media organizations are crucial for combating misinformation, requiring strong legislative foundations ensuring editorial independence and public funding
Explanation
Almeland emphasizes that supporting and strengthening independent media is a key measure to combat the reality being created in different information bubbles. He argues there is a strong connection between trust in politicians and people having access to true information from professional, independent media that can check what politicians are doing.
Evidence
Norway is number one on the press freedom index, has legislative measures ensuring owners cannot challenge editorial decisions, extensive public funding for media including local outlets
Major discussion point
Supporting Independent Media and Transparency
Topics
Human rights | Legal and regulatory
Agreed with
– Marsha Caddle
Agreed on
Supporting independent media and transparency mechanisms
Norway’s success stems from legislative measures preventing owners from interfering with editorial decisions and extensive media funding
Explanation
Almeland details Norway's approach which includes acts ensuring editorial independence, preventing owners from interfering with independent editorial decisions, and prohibiting owners from demanding to see journalists' work before publication. This is combined with extensive public funding to support both national and local media outlets.
Evidence
Norway ranks number one on press freedom index, has specific legislative protections for editorial independence, funds media extensively including small local news outlets
Major discussion point
Supporting Independent Media and Transparency
Topics
Human rights | Legal and regulatory
Technology should be used as a tool to strengthen democratic institutions and create more transparency
Explanation
Almeland argues that while there are evident challenges with technology, there are also big possibilities in using these tools to create more transparency and strengthen institutions that uphold democracy. He emphasizes the opportunity to adapt technology that strengthens democratic institutions rather than focusing only on the problems.
Evidence
Example of simplifying processes for journalists’ access to government information, Norway’s experience using digital adaptability to maintain high trust in media
Major discussion point
Technology for Good and Innovation
Topics
Human rights | Infrastructure
Agreed with
– Rebecca Bauer-Kahn
– Marsha Caddle
Agreed on
Technology should be leveraged for democratic good and transparency
Most issues are already legislated and need adaptation rather than entirely new laws, with international cooperation being more crucial than new legislation
Explanation
Almeland argues that politicians and legislators should not approach new technology with panic, believing they have nothing legislated already. Most things are already heavily legislated and sometimes need amendment or new legislation, but often it’s about seeing how technology fits into existing frameworks.
Evidence
Example of AI-used scams falling under existing criminal codes, Norwegian case study of Magicat scam system showing need for international cooperation rather than new legislation
Major discussion point
Legislative and Regulatory Approaches
Topics
Legal and regulatory | Cybersecurity
Disagreed with
– Catherine Mumma
– Zafar Alizoda
Disagreed on
Legislative approach – new laws versus adapting existing frameworks
Verification should provide fundamental information allowing people to make their own decisions rather than forcing judgments about what is real
Explanation
Almeland describes Norway’s approach to watermarking technology in media, which cooperates with major outlets like BBC and New York Times. The key component is not just verification checks, but making essential information accessible about who took photos and where they’re from, giving people the opportunity to make their own decisions.
Evidence
Norwegian media landscape cooperation with BBC and New York Times on watermarking technology, focus on providing source information rather than declaring content real or fake
Major discussion point
Verification and Authentication Solutions
Topics
Human rights | Sociocultural
Agreed with
– Rebecca Bauer-Kahn
– Marsha Caddle
Agreed on
Importance of verification and authentication solutions
Disagreed with
– Rebecca Bauer-Kahn
Disagreed on
Approach to content regulation and verification
IGF can serve as a platform for finding common ground and developing shared rules rather than creating fragmented national regulations
Explanation
Almeland believes IGF can be a space for finding common ground on international digital governance issues. He argues that trying to regulate these issues in national jurisdictions creates ‘Swiss cheese’ for companies, and international cooperation is needed to find common rules.
Evidence
Metaphor of Swiss cheese regulation being delicious but not always good, emphasis on need for common set of rules internationally
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Human rights
Agreed with
– Martin Chungong
– Catherine Mumma
– Rebecca Bauer-Kahn
– Zafar Alizoda
– Audience
Agreed on
Need for international cooperation and global standards rather than fragmented national approaches
Martin Chungong
Speech speed
86 words per minute
Speech length
319 words
Speech time
220 seconds
Digital technologies have fundamentally altered the information landscape, with governments struggling to distinguish fact from fiction and electoral processes facing manipulation
Explanation
Chungong argues that the rapid spread of misinformation through digital technologies has fundamentally changed how democracies operate. Governments face challenges distinguishing fact from fiction, electoral processes are manipulated through coordinated disinformation campaigns, and democratic institutions find their legitimacy questioned based on false narratives.
Evidence
References to coordinated disinformation campaigns affecting electoral processes and false narratives undermining institutional legitimacy
Major discussion point
Misinformation and Disinformation Challenges
Topics
Human rights | Sociocultural
The rise of AI has transformed the misinformation landscape with deepfakes, AI-generated content, and algorithmic amplification creating unprecedented challenges
Explanation
Chungong emphasizes that artificial intelligence has fundamentally transformed the misinformation landscape through deepfakes, AI-generated content, and algorithmic amplification. These technologies create unprecedented challenges for democratic discourse by blurring the lines between fact and fiction.
Evidence
Specific mention of deepfakes, AI-generated content, and algorithmic amplification as new technological challenges
Major discussion point
Misinformation and Disinformation Challenges
Topics
Human rights | Sociocultural
Global cooperation on combating misinformation is crucial as fragmented approaches risk undermining democratic discourse
Explanation
Chungong argues that at a time when democratic norms face unprecedented pressure and public trust continues to erode, global cooperation on combating misinformation is more crucial than ever. A fragmented approach to information integrity risks undermining the foundations of democratic discourse and exacerbating the crisis of trust.
Evidence
Reference to the Global Digital Compact as emerging international consensus on information integrity
Major discussion point
International Cooperation and Global Standards
Topics
Human rights | Legal and regulatory
Agreed with
– Grunde Almeland
– Catherine Mumma
– Rebecca Bauer-Kahn
– Zafar Alizoda
– Audience
Agreed on
Need for international cooperation and global standards rather than fragmented national approaches
Catherine Mumma
Speech speed
118 words per minute
Speech length
1910 words
Speech time
969 seconds
Kenya has established a comprehensive legal framework including Computer Misuse and Cyber Protection Act, Data Protection Act, and Media Council Act, though gaps remain in addressing misinformation specifically
Explanation
Mumma explains that Kenya has embraced digital technology and established a facilitative constitutional framework protecting freedom of expression and human rights. The country has implemented several relevant laws but still lacks specific legislation addressing misinformation and disinformation, particularly during electoral periods.
Evidence
Listed specific laws: Computer Misuse and Cyber Protection Act, Data Protection Act, Media Council Act, Copyrights Act, National Cohesion and Integration Act; mentioned constitutional protections for freedom of expression and access to information
Major discussion point
Legislative and Regulatory Approaches
Topics
Legal and regulatory | Human rights
Disagreed with
– Grunde Almeland
– Zafar Alizoda
Disagreed on
Legislative approach – new laws versus adapting existing frameworks
Kenya faces challenges with misinformation and disinformation on social media, particularly during electoral periods, leading to violence and ethnic tensions
Explanation
Mumma describes how Kenya experiences continuous electoral competition with significant misuse of social media for disinformation and hate speech. The country has suffered post-election violence following hate speech using negative ethnicity, and digital technology has amplified politically motivated misinformation that sometimes leads to violence.
Evidence
Kenya’s experience with post-election violence following hate speech, establishment of Cohesion Commission, continuous electoral campaigning environment
Major discussion point
Misinformation and Disinformation Challenges
Topics
Human rights | Sociocultural
Beyond regulation, need greater investment in public digital infrastructure to ensure rural areas, women, and vulnerable groups can participate in digital spaces
Explanation
Mumma argues that beyond protecting against disinformation, there’s a human rights issue of inclusion that requires financial investment in necessary public digital infrastructure. This would enable greater participation by those in rural areas, women, and other vulnerable and minority groups in the benefits of digital space and technology.
Evidence
Emphasis on need for investment in public digital infrastructure for rural areas, women, and vulnerable groups
Major discussion point
Digital Inclusion and Infrastructure
Topics
Development | Human rights
Electoral integrity depends largely on neutral electoral management bodies rather than just technology
Explanation
Mumma argues that electoral fraud, whether with AI or other means, usually requires collusion by people within electoral management bodies. She believes that if electoral management bodies remain neutral, then elections will remain credible regardless of whether they are digitally driven or not.
Evidence
Kenya’s experience with Supreme Court cases on electronic transmission of presidential results, electoral management body’s refusal to allow audit of electoral transmission systems
Major discussion point
Supporting Independent Media and Transparency
Topics
Human rights | Legal and regulatory
Need to think broadly about digital technology impacts across all sectors including health, water, and other areas beyond just political spaces
Explanation
Mumma emphasizes that parliamentarians tend to focus on technology’s impact in political spaces, but need to think broader about innovations in health sector, telemedicine, and other sectors. Laws on digital technology should be cross-sectoral rather than focused on particular legislation.
Evidence
Examples of telemedicine and digital technology applications in health, water, and other sectors
Major discussion point
Digital Inclusion and Infrastructure
Topics
Development | Legal and regulatory
African parliamentarians have formed regional caucuses to share experiences and develop common approaches
Explanation
Mumma explains that African parliamentarians have proactively formed African Parliamentary Caucuses across West Africa and East Africa to compare notes and learn from each other. They recognize that what happens in one country affects others, so they need to borrow from each other and learn together.
Evidence
Formation of Africa-wide, West African, and East African parliamentary caucuses for sharing experiences on digital governance
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Development
Agreed with
– Grunde Almeland
– Martin Chungong
– Rebecca Bauer-Kahn
– Zafar Alizoda
– Audience
Agreed on
Need for international cooperation and global standards rather than fragmented national approaches
Need for international mechanisms similar to nuclear weapons treaties to place responsibilities on big tech developers and advanced economies
Explanation
Mumma supports the need for international protective mechanisms, noting that current human rights protections are breaking down. She argues for reimagining international cooperation to truly provide protections, while also suggesting that smaller countries should place conditionalities on licenses given to tech companies to protect vulnerable populations.
Evidence
Comparison to nuclear weapons treaties developed quickly after 1945 bomb invention, examples of human rights violations in Gaza, Sudan, Ukraine
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Human rights
Agreed with
– Grunde Almeland
– Martin Chungong
– Rebecca Bauer-Kahn
– Zafar Alizoda
– Audience
Agreed on
Need for international cooperation and global standards rather than fragmented national approaches
Rebecca Bauer-Kahn
Speech speed
189 words per minute
Speech length
2003 words
Speech time
634 seconds
California has passed privacy legislation (CCPA) and is working on watermarking requirements and disclosure laws for AI-generated political content
Explanation
Bauer-Kahn explains that California was the first US state to pass privacy protective legislation after the EU, and is home to major AI and social media companies. The state is taking responsibility by requiring disclosure of AI use in political advertisements and mandating platforms to take down serious political misinformation, though constitutional free speech protections create challenges.
Evidence
California is home to 32 of the top 50 AI companies and all major social media companies; passed CCPA after EU privacy laws; legislation requiring AI disclosure in political ads currently in courts
Major discussion point
Legislative and Regulatory Approaches
Topics
Legal and regulatory | Human rights
Need to invest in technology for good through academic institutions and civil society to compete with profit-driven companies
Explanation
Bauer-Kahn argues that technology is currently in the hands of very few profit-driven players, and there’s a need to push technology to be the solution in the technology age. This requires funding academic institutions to have compute power to compete with largest AI companies and putting civil society into the space to create technology for good.
Evidence
University of Chicago example of AI model that allows copyrighted material to refuse training by AI models; emphasis on academic institutions as source of tech company talent
Major discussion point
Technology for Good and Innovation
Topics
Development | Economic
Agreed with
– Grunde Almeland
– Marsha Caddle
Agreed on
Technology should be leveraged for democratic good and transparency
Watermarking technology and device-level authentication are critical for distinguishing reality from AI-generated content
Explanation
Bauer-Kahn describes California’s approach to requiring watermarking technology, first from platforms and then from devices themselves. The state legislature used Adobe’s technology to watermark their own images, demonstrating the technology’s capability to trace and authenticate content.
Evidence
California passed watermarking law for platforms, moving legislation for device-level authentication, California legislature’s use of Adobe watermarking technology for their own images
Major discussion point
Verification and Authentication Solutions
Topics
Legal and regulatory | Infrastructure
Agreed with
– Grunde Almeland
– Marsha Caddle
Agreed on
Importance of verification and authentication solutions
Disagreed with
– Grunde Almeland
Disagreed on
Approach to content regulation and verification
Privacy legislation is fundamental to preventing sophisticated financial scams that exploit personal information
Explanation
Bauer-Kahn argues that financial scams are becoming sophisticated because there is so much access to personal information about individuals. When scammers call knowing children’s names and family details, people fall for scams they wouldn’t otherwise. Privacy protection is therefore foundational to preventing these scams.
Evidence
Example of scammers calling with detailed family information making scams more believable
Major discussion point
Privacy and Human Rights Protection
Topics
Human rights | Cybersecurity
Constitutional protections of free speech create challenges in regulating misinformation while requiring creative solutions like disclosure requirements
Explanation
Bauer-Kahn explains that US constitutional protections prevent stopping people from speaking and also prevent forcing people to speak. This creates complications when trying to combat misinformation through required disclosures, as courts may see disclosure requirements as forced speech, particularly for political speech which is even more protected.
Evidence
Court challenges to California’s AI disclosure requirements in political advertisements being seen as forced speech
Major discussion point
Privacy and Human Rights Protection
Topics
Human rights | Legal and regulatory
Disagreed with
– Marsha Caddle
Disagreed on
Role of government versus platforms in content moderation
Paper trails and audit capabilities are essential for maintaining election integrity and public trust
Explanation
Bauer-Kahn emphasizes that election integrity requires not just real integrity but public belief in that integrity. She advocates for paper trails even when using electronic voting machines, so there are receipts and ways to audit elections, which is critically important for maintaining democratic legitimacy.
Evidence
US system where every state runs elections differently, emphasis on agreed principle that elections are free and fair
Major discussion point
Verification and Authentication Solutions
Topics
Human rights | Legal and regulatory
Marsha Caddle
Speech speed
151 words per minute
Speech length
1749 words
Speech time
693 seconds
Deepfakes are creating serious problems, including false diplomatic statements that could destabilize international relations
Explanation
Caddle describes how Barbados had to urgently warn about a deepfake of the Prime Minister making false statements about the country’s diplomatic stance toward a major world power. This highlights how deepfakes risk not just domestic trust but can completely destabilize a country’s global position and international relations.
Evidence
Specific example of deepfake about Barbados Prime Minister’s diplomatic statements requiring urgent government response
Major discussion point
Misinformation and Disinformation Challenges
Topics
Human rights | Sociocultural
Barbados has implemented cybercrime legislation developed through transparent parliamentary processes with citizen input
Explanation
Caddle explains that as the minister who brought cybercrime legislation, she took it to Joint Select Committee, heard evidence and pushback including human rights concerns from citizens, and amended the legislation accordingly. This represents a healthy way to get people in conversation and ensure democratic participation in addressing truth and evidence.
Evidence
Personal experience as minister bringing cybercrime legislation through Joint Select Committee with public hearings and amendments based on citizen feedback
Major discussion point
Legislative and Regulatory Approaches
Topics
Legal and regulatory | Human rights
Promoting transparency through broadcasting parliamentary proceedings and providing original sources helps establish a culture of truth and evidence
Explanation
Caddle describes Barbados’s approach of broadcasting joint select committee proceedings and most Prime Minister speeches either in real time or recorded. The goal is to get people accustomed to an environment of truth and evidence by providing access to original sources and establishing where people can find accurate information.
Evidence
Broadcasting of parliamentary joint select committees and Prime Minister’s speeches, emphasis on providing original sources
Major discussion point
Supporting Independent Media and Transparency
Topics
Human rights | Sociocultural
Agreed with
– Grunde Almeland
Agreed on
Supporting independent media and transparency mechanisms
Investment in tech ecosystems that can build tools to fight misinformation while promoting innovation is essential
Explanation
Caddle argues for investing in tech ecosystems that can balance or build tools to fight against misinformation, since others are investing heavily in creating misinformation. The focus should be on investing in tech creators who will combat misinformation with tools that promote truth.
Evidence
Emphasis on need to invest in tech creators to combat misinformation with truth-promoting tools
Major discussion point
Technology for Good and Innovation
Topics
Development | Economic
Agreed with
– Grunde Almeland
– Rebecca Bauer-Kahn
Agreed on
Technology should be leveraged for democratic good and transparency
Countries should invest in training people in data analytics and science to create their own tools rather than just consuming technology
Explanation
Caddle describes Barbados’s approach of training people in data analytics and data science, not just in formal academic institutions but partnering with companies. The goal is to enable people to create tools they find useful and can trust, participating in the AI value chain even if not building large language models.
Evidence
Partnership with Zindi company for data analytics training, concept of AI value chain allowing participation at different levels
Major discussion point
Technology for Good and Innovation
Topics
Development | Economic
High internet penetration creates expectations for meaningful access and use of information
Explanation
Caddle notes that Barbados has extremely high internet and digital penetration (114% mobile penetration), which creates high expectations of immediate access to information. The challenge then becomes not just access, but meaningful access and use.
Evidence
Barbados has a 114% mobile penetration rate, meaning there are more active mobile subscriptions than people
Major discussion point
Digital Inclusion and Infrastructure
Topics
Development | Infrastructure
Platforms should return to more robust verification methods while balancing accessibility concerns
Explanation
Caddle argues that encouraging platforms to return to more robust verification methods is critical. She notes a contradiction, however: political actors are constrained by platform restrictions from generating content, while misinformation about them proliferates freely.
Evidence
Personal experience as a politician restricted from generating content on platforms while misinformation about political actors spreads
Major discussion point
Verification and Authentication Solutions
Topics
Human rights | Legal and regulatory
Agreed with
– Grunde Almeland
– Rebecca Bauer-Kahn
Agreed on
Importance of verification and authentication solutions
Disagreed with
– Rebecca Bauer-Kahn
Disagreed on
Role of government versus platforms in content moderation
Zafar Alizoda
Speech speed
142 words per minute
Speech length
884 words
Speech time
371 seconds
Central Asian countries need to revise personal data protection laws and consider GDPR-like risk assessment mechanisms
Explanation
Alizoda explains that Central Asian countries are actively developing legal frameworks for personal data protection, but their legislation leaves many issues in this field unregulated. He argues for revising personal data protection laws to reflect modern applications of advanced technologies, including the adoption of GDPR-like risk assessment procedures.
Evidence
Current gaps in Central Asian legislation including absence of national personal data protection laws, lack of breach notification requirements, need for Data Protection Impact Assessment procedures like GDPR
Major discussion point
Legislative and Regulatory Approaches
Topics
Legal and regulatory | Human rights
Disagreed with
– Grunde Almeland
– Catherine Mumma
Disagreed on
Legislative approach – new laws versus adapting existing frameworks
Personal data protection requires special attention for sensitive categories and comprehensive legislative frameworks
Explanation
Alizoda details that sensitive personal data includes race, ethnic origin, political beliefs, religious beliefs, professional affiliation, medical information, biometric data, and financial information. This category requires special protection and careful processing, as disclosure can lead to discrimination, stigmatization, and other negative consequences.
Evidence
Detailed categorization of sensitive personal data types and their potential negative consequences if disclosed
Major discussion point
Privacy and Human Rights Protection
Topics
Human rights | Legal and regulatory
Global platforms have different policies for different regions, with developing countries lacking the same protections as EU citizens under GDPR
Explanation
Alizoda argues that global platforms apply different policies to different countries and regions: EU citizens are protected by GDPR, while small developing countries in Asia are denied the same priority. Even where national legislation is close to GDPR, many issues remain unresolved and enforcement is difficult because such markets have limited leverage over the platforms.
Evidence
Comparison between EU GDPR protections and lack of similar protections for developing countries, specific mention of Tajikistan’s legislation being close to GDPR but with enforcement challenges
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Human rights
Agreed with
– Grunde Almeland
– Martin Chungong
– Catherine Mumma
– Rebecca Bauer-Kahn
– Audience
Agreed on
Need for international cooperation and global standards rather than fragmented national approaches
Audience
Speech speed
142 words per minute
Speech length
1539 words
Speech time
647 seconds
Need for international mechanisms similar to nuclear weapons treaties to place responsibilities on big tech developers and advanced economies
Explanation
John K.J. Kiarie from Kenya argues that the IGF should take practical steps to place responsibilities on big tech developers and advanced economies, similar to how nuclear weapons were quickly regulated after 1945. He emphasizes that technological dumping occurs when companies engage in practices in developing countries that they would never attempt in their home countries, and that imagining all countries can be on par with Silicon Valley is a fallacy.
Evidence
Nuclear weapons example, where treaties were established by 1957, just 12 years after the weapons’ invention in 1945; the Worldcoin example in Kenya, where biometric data was collected in exchange for tokens in ways not practiced in the company’s home country; Africa’s complete dependence on imported internet infrastructure
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Development
Agreed with
– Grunde Almeland
– Martin Chungong
– Catherine Mumma
– Rebecca Bauer-Kahn
– Zafar Alizoda
Agreed on
Need for international cooperation and global standards rather than fragmented national approaches
Nikolis Smith
Speech speed
161 words per minute
Speech length
2307 words
Speech time
856 seconds
AI is a tool invented by humans that has benefits alongside challenges
Explanation
Smith emphasizes the importance of maintaining a balanced perspective on AI, acknowledging that while there are legitimate concerns about its impact on democracy and society, AI fundamentally remains a human-created tool that offers significant benefits. He advocates against approaching AI with fear and instead focusing on both its positive potential and necessary safeguards.
Evidence
Reminder that AI was invented by humans and has benefits that should be recognized alongside challenges
Major discussion point
Technology for Good and Innovation
Topics
Human rights | Sociocultural
The IGF provides a valuable platform for international collaboration on digital governance issues
Explanation
Smith positions the IGF as an important forum for bringing together parliamentarians, policymakers, and digital governance experts to build consensus on pressing challenges like safeguarding democratic institutions. He emphasizes that the IGF offers opportunities for continued dialogue and learning throughout the week beyond individual sessions.
Evidence
Organization of parliamentary track sessions, facilitation of panel discussions with MPs from multiple countries, provision of networking opportunities
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Development
Junhua LI
Speech speed
104 words per minute
Speech length
541 words
Speech time
310 seconds
The IGF 2025 aims to bring legislators together with stakeholders to shape digital policies ensuring an open, inclusive, and secure Internet for all
Explanation
Li emphasizes that the parliamentary track of IGF 2025 has a clear purpose of bringing legislators together with other stakeholders to shape digital policies and legislative frameworks. Under the theme ‘Building Digital Governance Together,’ the focus is on international digital cooperation to address today’s digital challenges while ensuring an open, inclusive, and secure Internet for all.
Evidence
IGF 2025 theme ‘Building Digital Governance Together’ and focus on international digital cooperation
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Human rights
The dual imperative of protecting freedom of expression while combating misinformation and disinformation is among the most urgent digital challenges
Explanation
Li identifies the need to balance protecting freedom of expression with combating the spread of misinformation and disinformation as one of the most critical challenges facing digital governance today. He argues that the ability to speak freely, access accurate information, and engage in open online discourse forms the bedrock of democratic societies, but these rights are being tested by disinformation, censorship, and AI technologies that blur the lines between fact and fiction.
Evidence
Reference to generative AI blurring lines between fact and fiction, false narratives eroding trust in public institutions, targeted disinformation campaigns threatening peace and stability
Major discussion point
Misinformation and Disinformation Challenges
Topics
Human rights | Sociocultural
Parliamentarians have pivotal authority to craft legislation that safeguards freedoms while strengthening democratic resilience
Explanation
Li emphasizes that members of parliament have unique authority and responsibility to navigate the complex terrain of digital governance. They can craft legislation that safeguards freedom of expression and access to information, promotes media and information literacy, and strengthens the resilience of democratic discourse while ensuring that responses to digital threats do not infringe upon the very freedoms they seek to protect.
Evidence
Parliamentary authority to craft legislation, promote media literacy, and strengthen democratic discourse
Major discussion point
Legislative and Regulatory Approaches
Topics
Legal and regulatory | Human rights
Expanding parliamentary engagement in national and regional IGFs is essential for localizing digital governance conversations
Explanation
Li highlights the encouraging progress in expanding parliamentary engagement in national and regional IGFs across different regions from West Africa to Asia-Pacific. He argues that this localization of digital governance conversations is essential and that learning from national experiences and identifying new avenues for collaboration strengthens the overall framework for digital governance.
Evidence
Examples of parliamentary engagement from West Africa to Asia-Pacific, emphasis on localization of digital governance conversations
Major discussion point
International Cooperation and Global Standards
Topics
Legal and regulatory | Development
Agreements
Agreement points
Need for international cooperation and global standards rather than fragmented national approaches
Speakers
– Grunde Almeland
– Martin Chungong
– Catherine Mumma
– Rebecca Bauer-Kahn
– Zafar Alizoda
– Audience
Arguments
IGF can serve as a platform for finding common ground and developing shared rules rather than creating fragmented national regulations
Global cooperation on combating misinformation is crucial as fragmented approaches risk undermining democratic discourse
African parliamentarians have formed regional caucuses to share experiences and develop common approaches
Need for international mechanisms similar to nuclear weapons treaties to place responsibilities on big tech developers and advanced economies
Global platforms have different policies for different regions, with developing countries lacking the same protections as EU citizens under GDPR
Summary
All speakers agreed that digital governance challenges require coordinated international responses rather than isolated national efforts, with IGF serving as a key platform for developing common standards and approaches
Topics
Legal and regulatory | Human rights | Development
Technology should be leveraged for democratic good and transparency
Speakers
– Grunde Almeland
– Rebecca Bauer-Kahn
– Marsha Caddle
Arguments
Technology should be used as a tool to strengthen democratic institutions and create more transparency
Need to invest in technology for good through academic institutions and civil society to compete with profit-driven companies
Investment in tech ecosystems that can build tools to fight misinformation while promoting innovation is essential
Summary
Speakers agreed that technology should be actively developed and deployed to strengthen democratic institutions and combat misinformation, rather than being left solely to profit-driven entities
Topics
Development | Economic | Human rights
Importance of verification and authentication solutions
Speakers
– Grunde Almeland
– Rebecca Bauer-Kahn
– Marsha Caddle
Arguments
Verification should provide fundamental information allowing people to make their own decisions rather than forcing judgments about what is real
Watermarking technology and device-level authentication are critical for distinguishing reality from AI-generated content
Platforms should return to more robust verification methods while balancing accessibility concerns
Summary
All three speakers emphasized the critical need for verification and authentication technologies, though with different approaches – from watermarking to providing source information to enable informed decision-making
Topics
Legal and regulatory | Infrastructure | Human rights
Supporting independent media and transparency mechanisms
Speakers
– Grunde Almeland
– Marsha Caddle
Arguments
Independent media organizations are crucial for combating misinformation, requiring strong legislative foundations ensuring editorial independence and public funding
Promoting transparency through broadcasting parliamentary proceedings and providing original sources helps establish a culture of truth and evidence
Summary
Both speakers agreed that independent media and transparent government processes are fundamental to combating misinformation and maintaining democratic trust
Topics
Human rights | Legal and regulatory | Sociocultural
Similar viewpoints
Both emphasized the need for rapid international action similar to nuclear weapons regulation, with specific focus on addressing technological dumping and ensuring big tech companies are held accountable for practices in developing countries that they wouldn’t engage in at home
Speakers
– Catherine Mumma
– Audience
Arguments
Need for international mechanisms similar to nuclear weapons treaties to place responsibilities on big tech developers and advanced economies
Topics
Legal and regulatory | Human rights | Development
Both emphasized the importance of building local capacity and alternative technology ecosystems that serve democratic purposes rather than just profit motives, though Bauer-Kahn focused on academic institutions while Caddle emphasized practical skills training
Speakers
– Rebecca Bauer-Kahn
– Marsha Caddle
Arguments
Need to invest in technology for good through academic institutions and civil society to compete with profit-driven companies
Countries should invest in training people in data analytics and science to create their own tools rather than just consuming technology
Topics
Development | Economic
Both recognized that existing legal frameworks can address many digital challenges but require adaptation and creative approaches, particularly when balancing free speech protections with the need to combat misinformation
Speakers
– Grunde Almeland
– Rebecca Bauer-Kahn
Arguments
Most issues are already legislated and need adaptation rather than entirely new laws, with international cooperation being more crucial than new legislation
Constitutional protections of free speech create challenges in regulating misinformation while requiring creative solutions like disclosure requirements
Topics
Legal and regulatory | Human rights
Unexpected consensus
Balanced approach to AI regulation without fear-mongering
Speakers
– Grunde Almeland
– Nikolis Smith
Arguments
Most issues are already legislated and need adaptation rather than entirely new laws, with international cooperation being more crucial than new legislation
AI is a tool invented by humans that has benefits alongside challenges
Explanation
Despite the serious concerns about AI’s impact on democracy, both speakers emphasized avoiding panic and fear-mongering, instead advocating for measured responses that recognize both challenges and opportunities. This balanced perspective was unexpected given the gravity of the democratic threats discussed.
Topics
Human rights | Sociocultural | Legal and regulatory
Cross-sectoral approach to digital governance beyond just political applications
Speakers
– Catherine Mumma
– Zafar Alizoda
Arguments
Need to think broadly about digital technology impacts across all sectors including health, water, and other areas beyond just political spaces
Personal data protection requires special attention for sensitive categories and comprehensive legislative frameworks
Explanation
Both speakers from very different regions (Kenya and Tajikistan) independently emphasized that digital governance must extend beyond political concerns to encompass healthcare, personal data protection, and other sectors. This holistic view was unexpected in a session focused on democratic safeguards.
Topics
Legal and regulatory | Human rights | Development
Overall assessment
Summary
Strong consensus emerged around the need for international cooperation, technology for democratic good, verification solutions, and supporting independent media. Speakers from diverse regions and political systems found common ground on fundamental principles while acknowledging different implementation approaches.
Consensus level
High level of consensus on core principles with recognition that implementation must be adapted to local contexts. The agreement suggests potential for meaningful international collaboration on digital governance frameworks, though speakers acknowledged significant challenges in enforcement and ensuring equitable treatment across different jurisdictions.
Differences
Different viewpoints
Approach to content regulation and verification
Speakers
– Grunde Almeland
– Rebecca Bauer-Kahn
Arguments
Verification should provide fundamental information allowing people to make their own decisions rather than forcing judgments about what is real
Watermarking technology and device-level authentication are critical for distinguishing reality from AI-generated content
Summary
Almeland advocates for providing information and letting people decide for themselves what is real, while Bauer-Kahn emphasizes the need for technological solutions like watermarking to definitively distinguish reality from AI-generated content
Topics
Human rights | Legal and regulatory
Legislative approach – new laws versus adapting existing frameworks
Speakers
– Grunde Almeland
– Catherine Mumma
– Zafar Alizoda
Arguments
Most issues are already legislated and need adaptation rather than entirely new laws, with international cooperation being more crucial than new legislation
Kenya has established a comprehensive legal framework including Computer Misuse and Cyber Protection Act, Data Protection Act, and Media Council Act, though gaps remain in addressing misinformation specifically
Central Asian countries need to revise personal data protection laws and consider GDPR-like risk assessment mechanisms
Summary
Almeland believes most issues are already covered by existing laws that need adaptation, while Mumma and Alizoda emphasize the need for new specific legislation to address gaps in digital governance
Topics
Legal and regulatory | Human rights
Role of government versus platforms in content moderation
Speakers
– Rebecca Bauer-Kahn
– Marsha Caddle
Arguments
Constitutional protections of free speech create challenges in regulating misinformation while requiring creative solutions like disclosure requirements
Platforms should return to more robust verification methods while balancing accessibility concerns
Summary
Bauer-Kahn focuses on government regulatory approaches constrained by constitutional protections, while Caddle emphasizes the responsibility of platforms themselves to implement better verification
Topics
Human rights | Legal and regulatory
Unexpected differences
Trust in electoral management versus technology solutions
Speakers
– Catherine Mumma
– Rebecca Bauer-Kahn
Arguments
Electoral integrity depends largely on neutral electoral management bodies rather than just technology
Paper trails and audit capabilities are essential for maintaining election integrity and public trust
Explanation
This disagreement is unexpected because both speakers are concerned with election integrity, but Mumma emphasizes human institutional factors while Bauer-Kahn focuses on technological safeguards. This reveals different cultural and systemic approaches to the same problem.
Topics
Human rights | Legal and regulatory
Overall assessment
Summary
The main areas of disagreement center on regulatory approaches (new laws vs. adapting existing ones), the balance between government regulation and platform responsibility, verification methods (information provision vs. technological authentication), and institutional vs. technological solutions for election integrity
Disagreement level
The level of disagreement is moderate and constructive. Speakers share common goals of protecting democracy and human rights in the digital age, but differ on implementation strategies. These disagreements reflect different national contexts, constitutional frameworks, and development levels rather than fundamental philosophical differences. The implications are positive as they provide multiple pathways for addressing digital governance challenges, allowing different jurisdictions to adopt approaches suited to their specific circumstances while maintaining overall coherence in global digital governance efforts.
Takeaways
Key takeaways
Truth is becoming less relevant in the digital age as AI-powered content creation enables people to remain in confirmation bias bubbles, making factual debate harder to achieve
Supporting and strengthening independent media organizations is crucial for combating misinformation, requiring strong legislative foundations that ensure editorial independence and adequate funding
Most digital governance challenges can be addressed through existing legislation that needs adaptation rather than entirely new laws, with international cooperation being more important than creating new regulations
A comprehensive approach is needed that balances protecting human rights and freedom of expression while regulating harmful content and allowing innovation to progress
Technology for good initiatives must be prioritized, including investment in academic institutions and civil society to create tools that compete with profit-driven platforms
Global platforms apply different policies to different regions, with developing countries lacking the same protections as more developed jurisdictions
Watermarking and authentication technologies are critical for distinguishing between real and AI-generated content, requiring both legislative mandates and technological development
Electoral integrity depends more on neutral electoral management bodies than on technology itself, though paper trails and audit capabilities remain essential
Digital inclusion requires investment in public infrastructure to ensure rural areas, women, and vulnerable groups can meaningfully participate in digital spaces
Resolutions and action items
Parliamentarians should carry IGF 2025 outcomes back to their respective countries to drive policy coherence at national and regional levels
Continue expanding parliamentary engagement in national and regional IGFs, particularly in West Africa and Asia-Pacific regions
California will continue pushing watermarking requirements for platforms and devices, with legislation requiring embedded authentication technology in cameras
African parliamentarians will continue using regional caucuses (Africa-wide, West Africa, East Africa) to share experiences and develop common approaches
Countries should consider attaching conditions to the licenses granted to big tech companies in order to protect vulnerable populations
IGF should explore developing codes of conduct for social media platforms, potentially building on IPU resolutions on AI and draft codes of ethics on science and technology
Parliamentarians should look beyond political spaces to consider digital technology impacts across all sectors including health, water, and other areas
Unresolved issues
How to effectively regulate misinformation and disinformation without over-correcting or enabling government abuse of surveillance powers
How to place meaningful responsibilities on big tech developers and advanced economies that export technology to developing countries without adequate protections
Whether voluntary codes of conduct for social media platforms will be sufficient or if mandatory standards and regulations are needed
How to verify human identity versus AI-generated content in digital spaces while protecting privacy and avoiding exclusion of vulnerable populations
How to handle cross-border enforcement when violators are situated in different jurisdictions from those being harmed
Whether and how to restrict social media access for minors while respecting children’s fundamental rights to information and participation
How to balance age verification requirements with privacy protection and inclusion of vulnerable populations
How to address the challenge that policy moves slower than technology development
How to ensure meaningful international cooperation when existing international protective mechanisms for human rights appear to be failing
Suggested compromises
Focus on regulating output and content rather than trying to identify the origin of AI-generated material, as verification of source becomes increasingly difficult
Provide fundamental information through watermarking and verification systems that allow people to make their own decisions rather than forcing judgments about what is real or fake
Use existing criminal codes and legislation to address AI-enabled crimes like scams, rather than creating entirely new legal frameworks
Implement incremental arrangements with tech companies through licensing conditionalities while working toward broader international cooperation
Require disclosure and transparency (such as watermarking AI-generated political content) as an alternative to restricting speech, though this faces constitutional challenges
Invest in both regulation and digital infrastructure to ensure broader participation while protecting against harmful uses
Combine legislative approaches with investment in technology for good and media literacy education
Use keyword and authentication technologies to tailor content regulation to the vulnerable populations present in specific spaces, rather than imposing blanket restrictions
Thought provoking comments
Truth is becoming less relevant… what you engage with, what you look at, is things, content that is already confirming your held beliefs and are kind of helping you stay in this comfortable bubble that it’s hard and harder to pierce with factual debate and true, well, facts, so to say.
Speaker
Grunde Almeland
Reason
This comment cuts to the philosophical heart of the democratic crisis in the digital age – not just that misinformation exists, but that truth itself is losing its currency as people retreat into confirmation bias bubbles. It reframes the problem from technical to epistemological.
Impact
This observation set the tone for the entire discussion by establishing that the challenge isn’t just about regulating technology, but about fundamental changes in how societies relate to truth. It influenced subsequent speakers to focus on building trust and verification mechanisms rather than just content moderation.
We don’t have a law that specifically addresses misinformation and disinformation, not because the law is somewhere and needs to quickly come… hitting the balance between protection of human rights and regulating and also allowing innovation to unhinged, to progress unhinged, is something that is beyond legislation, is something that sometimes is beyond the politics of the day.
Speaker
Catherine Mumma
Reason
This comment reveals the profound complexity of democratic governance in the digital age – acknowledging that some challenges transcend traditional legislative solutions and require deeper societal consensus-building.
Impact
This shifted the conversation from a focus on what laws to pass toward a more nuanced discussion about the limits of legislation and the need for multi-stakeholder approaches, influencing other panelists to discuss non-legislative solutions like media literacy and international cooperation.
The deepfake was about the prime minister saying something in relation to another major world power… Now that has the potential to completely, especially in this current global political environment, to completely put at risk a lot of what a country is doing with respect to policy and global engagement. So we’re not just talking about domestic trust, but we’re talking about international and a country’s global position in the world.
Speaker
Marsha Caddle
Reason
This comment expanded the scope of the discussion beyond domestic democratic concerns to international relations and diplomacy, showing how digital manipulation can destabilize global governance systems.
Impact
This observation elevated the urgency of the discussion by demonstrating that digital threats to democracy have immediate geopolitical consequences, leading other speakers to emphasize the need for international cooperation and shared standards.
To imagine that countries in places like Africa will at one point be at par with Silicon Valley is a fallacy. To imagine that such advanced economies do not have responsibilities is also wrong… what will happen with this AI is that my people will be condemned to digital plantations, just like they were condemned with sugar cane and with coffee and with all these other things that happened in slave trade.
Speaker
John K.J. Kiarie (audience member from Kenya)
Reason
This powerful intervention reframed the entire discussion through a post-colonial lens, challenging the assumption that all countries are equal participants in digital governance and drawing explicit parallels to historical exploitation.
Impact
This comment fundamentally shifted the conversation’s power dynamics, forcing panelists to confront issues of technological colonialism and global inequality. It led to more substantive discussion about the responsibilities of developed nations and tech companies, with multiple panelists acknowledging the validity of this critique in their closing remarks.
We see sort of technology in the hands of very few players right now that are, for better or worse, profit-driven. And how do we push technology to be the solution in the technology age?… how can we fund that? How can we put more money into our academic institutions to have the compute power to compete with the largest AI companies?
Speaker
Rebecca Bauer-Kahn
Reason
This comment identified a structural problem – the concentration of technological power – and proposed a concrete alternative pathway through public investment in ‘technology for good,’ moving beyond regulatory responses to proactive solutions.
Impact
This shifted the discussion from defensive measures (regulating harmful technology) to offensive strategies (developing beneficial alternatives), inspiring other panelists to discuss investment in local tech ecosystems and capacity building.
Most things are already quite heavily legislated… sometimes it has to be amended, and sometimes we need to come up with new legislation, but most of things are already legislated. We just have to see how technology fits into it… I think international cooperation is often more the answer than, you know, coming up with the exact new legislation.
Speaker
Grunde Almeland
Reason
This comment challenged the prevailing assumption that new technologies require entirely new legal frameworks, suggesting instead that existing laws need better enforcement and international coordination.
Impact
This pragmatic perspective helped ground the discussion in practical governance realities, leading other speakers to focus more on implementation challenges and international cooperation mechanisms rather than drafting new legislation.
Overall assessment
These key comments fundamentally shaped the discussion by progressively expanding its scope and depth. The conversation began with technical concerns about AI and elections but evolved into a sophisticated analysis of global power structures, epistemological challenges to democracy, and the limits of traditional governance approaches. The intervention by the Kenyan MP was particularly transformative, forcing the panel to confront uncomfortable truths about technological inequality and historical patterns of exploitation. This led to a more honest and substantive discussion about the responsibilities of developed nations and the need for truly equitable international cooperation. The comments collectively moved the discussion from a narrow focus on content moderation and election security to broader questions about truth, power, and justice in the digital age, ultimately producing a more nuanced understanding of the challenges facing democratic governance in the 21st century.
Follow-up questions
How can we regulate digital technology violations across different sectors beyond just political spaces (health, water, etc.)?
Speaker
Catherine Mumma
Explanation
She emphasized the need to think broadly about digital technology laws across all sectors, not just focusing on political/democratic spaces, as violations could have profound implications in healthcare, water management, and other critical areas.
What mechanisms can be developed for auditing information at regional and international levels?
Speaker
Catherine Mumma
Explanation
She suggested exploring mechanisms such as expanding the mandates of existing bodies (the Data Protection Commissioner, the Media Council) and creating African Union or East African Community mechanisms for better monitoring.
How can we invest in technology for good and fund academic institutions to compete with large AI companies?
Speaker
Rebecca Bauer-Kahn
Explanation
She highlighted the need for more funding for academic institutions so they have the compute power to build large language models and create technology solutions that serve the public good rather than just profit.
How can watermarking technology be improved and implemented globally?
Speaker
Rebecca Bauer-Kahn
Explanation
She noted that while California and the EU are pushing for watermarking requirements, the technology needs further development to be effective globally in distinguishing real from AI-generated content. A simplified sketch of the signing-and-verification idea behind such authentication follows below.
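To make the mechanism concrete, here is a minimal illustrative sketch, not the C2PA standard or any scheme named in the session, of how device-level content authentication can work: the capture device signs a hash of the image together with its provenance metadata, and a platform later verifies that signature to confirm the content is unaltered. It assumes the third-party Python ‘cryptography’ package, and all names (sign_capture, hypothetical-camera-01) are invented for illustration.

    # Illustrative sketch of device-level content authentication (hypothetical,
    # simplified; real provenance systems such as C2PA use signed manifests).
    import json
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def _payload(image_bytes: bytes, metadata: dict) -> bytes:
        # Bind the image digest to its provenance metadata in a canonical form.
        digest = hashlib.sha256(image_bytes).hexdigest()
        return (digest + json.dumps(metadata, sort_keys=True)).encode()

    def sign_capture(device_key: Ed25519PrivateKey, image_bytes: bytes, metadata: dict) -> bytes:
        # The camera signs at capture time with a key embedded in the device.
        return device_key.sign(_payload(image_bytes, metadata))

    def verify_capture(device_public_key, image_bytes, metadata, signature) -> bool:
        # A platform checks the signature; any alteration of pixels or metadata fails.
        try:
            device_public_key.verify(signature, _payload(image_bytes, metadata))
            return True
        except InvalidSignature:
            return False

    device_key = Ed25519PrivateKey.generate()
    image = b"...raw sensor data..."
    meta = {"device": "hypothetical-camera-01", "captured": "2025-06-23T14:00Z"}
    sig = sign_capture(device_key, image, meta)
    print(verify_capture(device_key.public_key(), image, meta, sig))         # True
    print(verify_capture(device_key.public_key(), image + b"x", meta, sig))  # False: altered

The design point is that such verification provides the fundamental information the panel discussed: it does not judge whether content is true, only whether it is what the device originally captured.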
How can international cooperation be improved to tackle cross-border digital crimes and scams?
Speaker
Grunde Almeland
Explanation
He emphasized that most digital crimes fall under existing criminal codes but lack of international cooperation makes enforcement difficult, citing sophisticated international scam operations.
How can global platforms improve their policies for all users regardless of country?
Speaker
Zafar Alizoda
Explanation
He pointed out that platform policies differ by region, with EU citizens protected by GDPR while developing countries receive no comparable protection, creating unequal standards.
How can we prevent technological dumping and ensure advanced economies take responsibility for technology impacts in developing countries?
Speaker
John K.J. Kiarie
Explanation
He raised concerns about advanced countries and big tech companies engaging in practices in developing countries that they wouldn’t do in their own jurisdictions, calling for practical IGF actions to address this disparity.
What are practical steps to prevent electronic voting system misuse and maintain electoral integrity?
Speaker
Audience member from Peru
Explanation
The question addressed concerns about data transmission problems and electoral process manipulation in electronic voting systems, seeking legislative solutions to protect democratic processes.
How can we differentiate between human and AI-generated content/actors in digital spaces?
Speaker
Senator Kenneth Pugh from Chile
Explanation
He raised fundamental questions about human identity verification in cyberspace and how to maintain human rights protections when AI systems are given freedom of expression capabilities.
Should social media platforms be required to verify user identity similar to banking systems?
Speaker
Hugo Carneiro from Portugal
Explanation
He questioned whether stronger identity verification requirements for social media accounts could help combat fake profiles and misinformation.
What is the effectiveness of age restrictions for social media access for minors?
Speaker
Hugo Carneiro from Portugal
Explanation
He referenced France’s proposed ban on social media for under-15s and asked whether such restrictions are effective solutions for protecting young people from misinformation.
Are voluntary codes of conduct sufficient for social media platforms or do we need mandatory regulation and alternative platforms?
Speaker
Anna Luhmann from Germany
Explanation
She questioned whether voluntary self-regulation would be adequate or if stronger regulatory measures and democracy-supporting alternative platforms are needed.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.